Saturday, October 21, 2017

Infineon, Autoliv on Automotive Imaging Market

Infineon publishes an "Automotive Conference Call" presentation dated Oct. 10, 2017. A few interesting slides show the camera and LiDAR content in cars of the future:

An Autoliv CEO presentation dated Sept. 28, 2017 gives a bright outlook on automotive imaging:

Friday, October 20, 2017

Trinamix Distance Sensing Technology Explained

BASF spin-off Trinamix publishes a nice technology page with YouTube videos explaining its depth sensing principle, which it calls "Focus-Induced Photoresponse (FIP)":

"FIP takes advantage of a particular phenomenon in photodetector devices: an irradiance-dependent photoresponse. The photoresponse of these devices depends not only on the amount of incident light, but also on the size of the light spot on the detector. This phenomenon allows one to distinguish whether the same amount of light is focused or defocused on the sensor. We call this the “FIP effect” and use it to measure distance.

The picture illustrates how the FIP effect can be utilized for distance measurements. The photocurrent of the photodetector reaches its maximum when the light is in focus and decreases symmetrically outside the focus. A change of the distance between light source and lens results in such a change of the spot size on the sensor. By analyzing the photoresponse, the distance between light source and lens can be deduced."
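The principle lends itself to a small numerical sketch. The thin-lens geometry below is standard optics, but the 1/(1 + (r/r0)^2) response law and all constants are illustrative assumptions, not Trinamix's proprietary model; the point is only that the photoresponse peaks at the in-focus distance, so distance can be read off the photocurrent:

```python
# Toy model of the FIP idea: for a fixed amount of light, the detector
# response depends on the spot size, so a known lens lets us recover
# the object distance from the photocurrent.
# (Response law and constants are assumptions for illustration only.)

def spot_radius(u, f=0.05, s=0.06, aperture=0.01):
    """Blur-circle radius on a sensor plane at distance s behind a thin
    lens of focal length f, for a point source at distance u (meters)."""
    v = 1.0 / (1.0 / f - 1.0 / u)   # thin-lens image distance
    return aperture * abs(s - v) / v

def fip_response(u, r0=1e-4):
    """Assumed FIP-like photoresponse: maximal when the light is in
    focus, decreasing as the same light spreads over a larger spot."""
    r = spot_radius(u)
    return 1.0 / (1.0 + (r / r0) ** 2)

# The response peaks where the source is imaged exactly onto the sensor:
# 1/u = 1/f - 1/s, i.e. u = 0.3 m for f = 0.05 m, s = 0.06 m.
distances = [0.15, 0.2, 0.3, 0.5, 1.0]
responses = [fip_response(u) for u in distances]
print(distances[responses.index(max(responses))])  # 0.3
```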

Trinamix has also started production of Hertzstück PbS SWIR photodetectors, with PbSe ones to follow:

Thursday, October 19, 2017

Invisage Acquired by Apple?

Reportedly, some sort of acquisition deal has been reached between Invisage and Apple. Some Invisage employees have joined Apple, while others are apparently looking for jobs. While the deal has never been officially announced, I got unofficial confirmation of this story from 3 independent sources.

Update: According to 2 sources, the deal was closed in July this year.

Somewhat old Invisage YouTube videos are still available and show the company's visible-light technology, although Invisage has worked on IR sensing in more recent years:

Update #2: There are a few more indications that Invisage has been acquired. Nokia Growth Partners (NGP), which participated in the 2014 investment round, shows Invisage in its exits list:

InterWest Partners, which also invested in 2014, now lists Invisage among its non-current investments:

Samsung VR Camera Features 17 Imagers

Samsung introduces the 360 Round, a camera for developing and streaming high-quality 3D content for VR experience. The 360 Round uses 17 lenses—eight stereo pairs positioned horizontally and one single lens positioned vertically—to livestream 4K 3D video and spatial audio, and create engaging 3D images with depth.

If such cameras get widely adopted, VR could easily become a major market for image sensors:

Google Pixel 2 Smartphone Features Stand-Alone HDR+ Processor

Ars Technica reports that the Google Pixel 2 smartphone features a separate Google-designed image processor chip, "Pixel Visual Core." It's said "to handle the most challenging imaging and machine learning applications," and the company is "already preparing the next set of applications" designed for the hardware. The Pixel Visual Core has its own CPU (a low-power ARM A53 core), DDR4 RAM, eight IPU cores, and PCIe and MIPI interfaces. Google says the company's HDR+ image processing can run "5x faster and at less than 1/10th the energy" compared with running on the main CPU. The new core will be enabled in the forthcoming Android Oreo 8.1 (MR1) update.

The new IPU cores are intended to use the Halide language for image processing and TensorFlow for machine learning. A custom Google-made compiler optimizes the code for the underlying hardware.
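The core idea behind HDR+, merging a burst of aligned noisy frames to improve SNR before tone mapping, is easy to sketch. The toy model below is not Google's pipeline (which aligns tiles and merges robustly in the frequency domain); it only shows the statistical payoff of merging, with hypothetical helper names:

```python
import random

# Averaging N aligned, equally exposed noisy frames keeps the signal
# while shrinking the noise standard deviation by roughly sqrt(N).
# This is a statistical sketch, not the actual HDR+ algorithm.

random.seed(0)

def noisy_frame(signal, sigma, n_pixels):
    """One flat frame: constant signal plus Gaussian noise per pixel."""
    return [signal + random.gauss(0.0, sigma) for _ in range(n_pixels)]

def merge(frames):
    """Per-pixel average of a burst of frames."""
    n = len(frames)
    return [sum(px) / n for px in zip(*frames)]

def std(values):
    m = sum(values) / len(values)
    return (sum((v - m) ** 2 for v in values) / len(values)) ** 0.5

signal, sigma, n_pixels = 100.0, 10.0, 4000
single = noisy_frame(signal, sigma, n_pixels)
burst = merge([noisy_frame(signal, sigma, n_pixels) for _ in range(9)])

# Merging 9 frames should cut the noise to roughly sigma / 3.
print(round(std(single), 1), round(std(burst), 1))
```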

Google also publishes an article explaining the HDR+ and portrait modes that the new core is supposed to accelerate, as well as a video explaining the Pixel 2 camera features:

Wednesday, October 18, 2017

Basler Compares Image Sensors for Machine Vision and Industrial Applications

Basler presents EMVA 1288 measurements of the image sensors in its cameras. It's quite interesting to compare CCD with CMOS sensors, and Sony with other companies, in terms of QE, Qsat, dark noise, etc.:
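For context, the conversion gain behind EMVA 1288 numbers is typically obtained with the photon-transfer method: for a linear sensor dominated by shot noise, the temporal variance of the output grows linearly with its mean, and the slope is the gain K in DN per electron. A synthetic sketch (the gain, noise values, and helper names are made up for illustration):

```python
import random

# Photon-transfer sketch: estimate conversion gain K from the slope of
# output variance vs. output mean at two exposure levels.
# All values are synthetic; this is not Basler's measurement code.

random.seed(42)
K_TRUE = 0.25          # assumed conversion gain, DN/e-
READ_NOISE_DN = 2.0    # assumed dark (read) noise, DN

def capture(mean_electrons, n=20000):
    """Simulate n pixel samples at a given mean photo-electron count,
    with Poisson-like shot noise approximated as Gaussian."""
    out = []
    for _ in range(n):
        e = random.gauss(mean_electrons, mean_electrons ** 0.5)
        out.append(K_TRUE * e + random.gauss(0.0, READ_NOISE_DN))
    return out

def mean_var(samples):
    m = sum(samples) / len(samples)
    v = sum((s - m) ** 2 for s in samples) / len(samples)
    return m, v

# Two exposure levels are enough to estimate the slope K:
(m1, v1), (m2, v2) = mean_var(capture(2000)), mean_var(capture(20000))
k_est = (v2 - v1) / (m2 - m1)
print(round(k_est, 3))  # close to K_TRUE = 0.25
```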

5 Things to Learn from AutoSens 2017

EMVA publishes "AutoSens Show Report: 5 Things We Learned This Year" by Marco Jacobs, VP of Marketing, Videantis. The five important things are:
  1. The devil is in the detail
    Sort of obvious. See some examples in the article.
  2. No one sensor to rule them all
    Different image sensors, Lidars, each optimized for a different sub-task
  3. No bold predictions
    That is, nobody knows when autonomous driving will arrive on the market
  4. Besides the drive itself, what will an autonomous car really be like?
  5. Deep learning a must-have tool for everyone
    Sort of a common statement although the approaches vary. Some put the intelligence into the sensors, others keep sensors dumb while concentrating the processing in a central unit.

DENSO and Fotonation Collaborate

BusinessWire: DENSO and Xperi's FotoNation start joint technology development of cabin sensing based on image recognition. DENSO expects to significantly improve the performance of its Driver Status Monitor, an active safety product used in trucks since 2014. Improved versions of such products will also be used in next-generation passenger vehicles, including a system to help drivers return to driving mode during Level 3 autonomous driving.

Using FotoNation’s facial image recognition and neural networks technologies, detection accuracy will be increased remarkably by detecting much more features instead of using the conventional detection method based on the relative positions of the eyes, nose, mouth, and other facial regions. Moreover, DENSO will develop new functions, such as those to detect the driver’s gaze direction and facial expressions more accurately, to understand the state of mind of the driver in order to help create more comfortable vehicles.

“Understanding the status of the driver and engaging them at the right time is an important component for enabling the future of autonomous driving,” said Yukihiro Kato, senior executive director, Information & Safety Systems Business Group of DENSO. “I believe this collaboration with Xperi will help accelerate our innovative ADAS product development by bringing together the unique expertise of both our companies.”

“We are excited to partner with DENSO to innovate in such a dynamic field,” said Jon Kirchner, CEO of Xperi Corporation. “This partnership will play a significant role in paving the way to the ultimate goal of safer roadways through use of our imaging and facial analytics technologies and DENSO’s vast experience in the space.”

Tuesday, October 17, 2017

AutoSens 2017 Awards

The AutoSens conference, held on Sept. 20-21 in Brussels, Belgium, publishes its awards. Some of the image sensor relevant ones:

Most Engaging Content
  • First place: Vladimir Koifman, Image Sensors World (yes, this is me!)
  • Highly commended: Junko Yoshida, EE Times

Hardware Innovation
  • First place: Renesas
  • Highly commended: STMicroelectronics

Most Exciting Start-Up
  • Winner: Algolux
  • Highly commended: Innoviz Technologies

LG, Rockchip and CEVA Partner on 3D Imaging

PRNewswire: CEVA partners with LG to deliver a high-performance, low-cost smart 3D camera for consumer electronics and robotic applications.

The 3D camera module incorporates a Rockchip RK1608 coprocessor with multiple CEVA-XM4 imaging and vision DSPs to perform biometric face authentication, 3D reconstruction, gesture/posture tracking, obstacle detection, AR and VR.

"There is a clear demand for cost-efficient 3D camera sensor modules to enable an enriched user experience for smartphones, AR and VR devices and to provide a robust localization and mapping (SLAM) solution for robots and autonomous cars," said Shin Yun-sup, principal engineer at LG Electronics. "Through our collaboration with CEVA, we are addressing this demand with a fully-featured compact 3D module, offering exceptional performance thanks to our in-house algorithms and the CEVA-XM4 imaging and vision DSP."

Monday, October 16, 2017

Ambarella Loses Key Customers

The Motley Fool publishes an analysis of Ambarella's performance over the last year. The company has lost some of its key customers (GoPro, Hikvision, and DJI), while the new Google Clips camera opted for a non-Ambarella processor as well:

"Faced with shrinking margins, GoPro needed to buy cheaper chipsets to cut costs. It also wanted a custom design which wasn't readily available to competitors like Ambarella's SoCs. That's why it completely cut Ambarella out of the loop and hired Japanese chipmaker Socionext to create a custom GP1 SoC for its new Hero 6 cameras.

DJI also recently revealed that its portable Spark drone didn't use an Ambarella chipset. Instead, the drone uses the Myriad 2 VPU (visual processing unit) from Intel's Movidius. DJI previously used the Myriad 2 alongside an Ambarella chipset in its flagship Phantom 4, but the Spark uses the Myriad 2 for both computer vision and image processing tasks.

Google also installed the Myriad 2 in its Clips camera, which automatically takes burst shots by learning and recognizing the faces in a user's life.

Ambarella needs the CV1 to catch up to the Myriad 2, but that could be tough with the Myriad's first-mover's advantage and Intel's superior scale.

To top it all off, Chinese chipmakers are putting pressure on Ambarella's security camera business in China."

Pikselim Demos Low-Light Driver Vision Enhancement

Pikselim publishes a night-time Driver Vision Enhancement (DVE) video using its low-light CMOS sensor behind the windshield of a vehicle with the headlights off (the sensor is operated in 640x512 format at 15 fps in global shutter mode, using f/0.95 optics and off-chip de-noising):

Sunday, October 15, 2017

Yole on Automotive LiDAR Market

Yole Developpement publishes its AutoSens Brussels 2017 presentation "Application, market & technology status of the automotive LIDAR." A few slides from the presentation:

Sony Announces Three New Sensors

Sony has added three new sensors to its flyers table: the 8.3MP IMX334LQR based on a 2um pixel, and the 2.9MP IMX429LLJ and 2MP IMX430LLJ based on a 4.5um global shutter pixel. The new sensors are said to have high sensitivity and are aimed at security and surveillance applications.

Yole Image Sensors M&A Review

IMVE publishes the article "Keeping Up With Consolidation" by Pierre Cambou, Yole Developpement image sensor analyst. There is a nice chart showing the large historical mergers and acquisitions:

"For the source of future M&A, one should rather look toward the decent number of machine vision sensor technology start-ups, companies like Softkinetic, which was purchased by Sony in 2015, and Mesa, which was acquired by Ams, in 2014. There are a certain number of interesting start-ups right now, such as PMD, Chronocam, Fastree3D, SensL, Sionyx, and Invisage. Beyond the start-ups, and from a global perspective, there is little room for a greater number of deals at sensor level, because almost all players have recently been subject to M&A."

Saturday, October 14, 2017

Waymo Self-Driving Car Relies on 5 LiDARs and 1 Surround-View Camera

Alphabet's Waymo publishes a Safety Report with some details on its self-driving car sensors - 5 LiDARs and one 360-deg color camera:

LiDAR (Laser) System
LiDAR (Light Detection and Ranging) works day and night by beaming out millions of laser pulses per second—in 360 degrees—and measuring how long it takes to reflect off a surface and return to the vehicle. Waymo’s system includes three types of LiDAR developed in-house: a short-range LiDAR that gives our vehicle an uninterrupted view directly around it, a high-resolution mid-range LiDAR, and a powerful new generation long-range LiDAR that can see almost three football fields away.
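The ranging arithmetic behind those pulses is simple round-trip time-of-flight: distance = (speed of light × round-trip time) / 2. A quick sanity check, taking 300 m as my reading of "almost three football fields":

```python
# Back-of-the-envelope time-of-flight ranging arithmetic.

C = 299_792_458.0  # speed of light, m/s

def distance_m(round_trip_s):
    """Target distance from the measured round-trip pulse time."""
    return C * round_trip_s / 2.0

def round_trip_s(distance):
    """Round-trip time for a pulse reflecting off a target at distance."""
    return 2.0 * distance / C

# A long-range return from ~300 m comes back in about 2 microseconds,
# which sets the time budget per pulse for millions of pulses a second.
print(round(round_trip_s(300.0) * 1e6, 3))  # ~2.001 (microseconds)
```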

Vision (Camera) System

Our vision system includes cameras designed to see the world in context, as a human would, but with a simultaneous 360-degree field of view, rather than the 120-degree view of human drivers. Because our high-resolution vision system detects color, it can help our system spot traffic lights, construction zones, school buses, and the flashing lights of emergency vehicles. Waymo’s vision system comprises several sets of high-resolution cameras, designed to work well at long range, in daylight and low-light conditions.

Half a year ago, Bloomberg published an animated GIF showing the cleaning of the Waymo 360-deg camera:

Chronocam Partners with Huawei

French sites L'Usine Nouvelle, InfoDSI, and Chine report that Chronocam partners with Huawei. Huawei is said to cooperate with Chronocam on face recognition technology for its smartphones, similar to Face ID in the iPhone X.

Friday, October 13, 2017

Hynix Proposes TrenchFET TG

SK Hynix patent application US20170287959 "Image Sensor" by Pyong-su Kwag, Yun-hui Yang, and Young-jun Kwon leverages the company's DRAM trench technology:

Omron Improves Its Driver Monitoring System

The OMRON driver monitoring system uses three indicators to judge whether the driver is capable of focusing on driving responsibilities: (1) whether the driver is observing the vehicle's operation (Eyes ON/OFF); (2) how quickly the driver will be able to resume driving (Readiness High/Mid/Low); and (3) whether the driver is behind the wheel (Seating ON/OFF). Additionally, the company's facial image sensing technology, OKAO Vision, now makes it possible to sense the state of the driver even if they are wearing a mask or sunglasses - something that had previously not been possible.

Magic Leap Seeks $1b Funding on $6b Valuation

Reuters reports that AR glasses startup Magic Leap has filed with the SEC, seeking to raise $1b at a $6b valuation. The filing does not indicate the amount that Magic Leap has so far secured from investors. It may end up raising less than $1b.

Thursday, October 12, 2017

Compressed Sensing Said to Save Image Sensor Power

Pravir Singh Gupta and Gwan Seong Choi from Texas A&M University publish an open access paper "Image Acquisition System Using On Sensor Compressed Sampling Technique." They say that "Compressed Sensing has the potential to increase the resolution of image sensors for a given technology and die size while significantly decreasing the power consumption and design complexity. We show that it has potential to reduce power consumption by about 23%-65%."

The proposed sensor architecture implementing this claim is given below:

"Now we demonstrate the reconstruction results of our proposed novel system flow. We use both binary and non-binary block diagonal matrices to compressively sample the image. The binary block diagonal (ΦB) and non-binary block diagonal (ΦNB) sampling matrices are mentioned below."
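The block-diagonal sampling structure is the part that maps naturally onto a sensor array: each pixel block is read out as a few weighted sums rather than individual values. The sketch below is illustrative only; the block size, measurement count, and the random 0/1 pattern are my assumptions, not the paper's exact ΦB/ΦNB matrices, and reconstruction (which needs a sparse solver) is omitted:

```python
import random

# Block-diagonal compressive sampling sketch: each n-pixel block is
# multiplied by a small binary m x n sensing matrix (m < n), so only
# m sums per block leave the array.

random.seed(1)

def binary_block(m, n):
    """Random binary (0/1) m x n sensing matrix for one pixel block."""
    return [[random.randint(0, 1) for _ in range(n)] for _ in range(m)]

def sample_block(phi, block):
    """Compressive measurements y = Phi @ x for one block."""
    return [sum(p * x for p, x in zip(row, block)) for row in phi]

def compressive_sample(image, block_size=4, m=2):
    """Apply an independent sensing matrix per block (block diagonal)."""
    measurements = []
    for i in range(0, len(image), block_size):
        block = image[i:i + block_size]
        phi = binary_block(m, len(block))
        measurements.extend(sample_block(phi, block))
    return measurements

pixels = list(range(16))          # a flattened 16-pixel "image"
y = compressive_sample(pixels)    # 8 measurements instead of 16
print(len(pixels), len(y))        # 16 8
```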

EI 2018, "Image Sensors and Imaging Systems" Preliminary Program

Electronic Imaging 2018, "Image Sensors and Imaging Systems" Symposium is about to publish its preliminary program. I was given an early preview:

There will be five invited keynotes:
  • "Dark Current Limiting Mechanisms in CMOS Image Sensors"
    Dan McGrath, BAE Systems (California)
  • "Security imaging in an unsecure world"
    Anders Johanesson, Axis Communications AB (Sweden)
  • "Quantum Efficiency and Color"
    Jörg Kunze, Basler AG (Germany)
  • "Sub-Electron Low Noise CMOS image sensors"
    Angel Rodriguez Vasquez, University of Sevilla (Spain)
  • "Advances in automotive image sensors"
    Boyd Fowler, OmniVision Technologies (California)
The regular papers are grouped into several sessions with the following themes (the exact names are still under discussion):
  • QE curves, color and spectral imaging
  • Depth sensing
  • High speed and ultra high speed imaging
  • Noise, performance and characterization
  • Technology and design for high performance image sensors
  • Image sensors and technologies for automotive and autonomous vehicles
  • Applications
  • Interactive posters
The program spans two days within the 5-day Electronic Imaging symposium, which is held at the same time as Photonics West and one week after the P2020 meeting.

Intel Unveils D400 Realsense Camera Family

Intel publishes an official page for the D400 camera family, currently consisting of the D415 and D435 active stereo cameras. Reportedly, the earlier RealSense cameras SR300, R200 and F200 are being discontinued, while the D400 series will be expanded to include passive and active stereo models:

Wednesday, October 11, 2017

Velodyne More Than Quadruples LiDAR Manufacturing

BusinessWire: Velodyne has more than quadrupled production for its LiDAR sensors to meet strong global demand. As a result, Velodyne LiDAR’s sensors are immediately available via distribution partners in Europe, Asia Pacific, and North America, with industry standard lead-times for direct contracts.

To support that expansion, Velodyne has doubled the number of its full-time employees over the past six months. These employees operate across three facilities in California, including the company’s new Megafactory in San Jose, its long-standing manufacturing facility in Morgan Hill, and the Velodyne Labs research center in Alameda.

“Velodyne leads the market in real-time 3D LiDAR systems for fully autonomous vehicles,” said David Hall, Velodyne LiDAR Founder and CEO. “With the tremendous surge in autonomous vehicle orders and new installations across the last 12 months, we scaled capacity to meet this demand, including a significant increase in production from our 200,000 square-foot Megafactory.”

Velodyne Megafactory in San Jose, CA

Looking at the GM autonomous driving fleet, one can understand why Velodyne needs so much production capacity:

Samsung Announces 0.9um Pixel Sensor for Smartphones, More

BusinessWire: Samsung introduces two new ISOCELL sensors: the 1.28μm 12MP Fast 2L9 with Dual Pixel technology, and the ultra-small 0.9μm 24MP Slim 2X7 with Tetracell technology.

The Fast 2L9 features reduced pixel size from the previous Dual Pixel sensor’s 1.4μm to 1.28μm.

At 0.9μm, the Slim 2X7 is said to be the first sensor in the industry with pixel size below 1.0μm. The pixel uses improved ISOCELL technology with deeper DTI that reduces color crosstalk and expands the full-well capacity to hold more light information. In addition, the small 0.9μm pixel size enables a 24Mp image sensor to be fitted in a thinner camera module.

The Slim 2X7 also features Tetracell technology. Tetracell improves performance in low-light situations by merging four neighboring pixels to work as one, increasing light sensitivity. In bright environments, Tetracell uses a re-mosaic algorithm to produce full-resolution images. This enables consumers to use the same front camera to take photos in various lighting conditions.
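The low-light half of Tetracell is essentially 2x2 pixel binning, which can be sketched in a few lines. This toy version just sums each 2x2 group of a grayscale frame; the re-mosaic step used in bright light, and the color filter layout, are omitted:

```python
# Toy 2x2 binning sketch of the Tetracell low-light mode: four
# neighboring pixels are merged into one, trading resolution for
# sensitivity (a 24MP array behaves like a more sensitive 6MP one).

def tetracell_bin(image):
    """Sum each 2x2 pixel group of a 2D image (list of rows).
    Assumes even width and height."""
    h, w = len(image), len(image[0])
    return [
        [
            image[r][c] + image[r][c + 1]
            + image[r + 1][c] + image[r + 1][c + 1]
            for c in range(0, w, 2)
        ]
        for r in range(0, h, 2)
    ]

frame = [
    [1, 1, 2, 2],
    [1, 1, 2, 2],
    [3, 3, 4, 4],
    [3, 3, 4, 4],
]
print(tetracell_bin(frame))  # [[4, 8], [12, 16]]
```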

“Samsung ISOCELL Fast 2L9 and ISOCELL Slim 2X7 are new image sensors that fully utilize Samsung’s advanced pixel technology, and are highly versatile as they can be placed in both the front and rear of a smartphone,” said Ben K. Hur, VP of System LSI Marketing at Samsung.

In an earlier news, Samsung Tetracell technology received Korea Multimedia Technology Award:

ON Semi Announces Two 1MP Sensors

BusinessWire: ON Semi announces the 3um pixel-based AS0140 and AS0142 1/4-inch 1MP sensors with an integrated ISP for automotive applications. The new sensors support 45 fps at full resolution or 60 fps at 720p. Key features include distortion correction, multi-color overlays, and both analog (NTSC) and digital (Ethernet) interfaces. Both SoC devices achieve enhanced image quality by making use of adaptive local tone mapping (ALTM) to eliminate artifacts that impinge on the acquisition process while achieving a DR of 93 dB.

Both new devices are said to have class-leading power efficiency: when running at 30 fps in HDR mode, they consume just 530 mW. The operating temperature range is -40°C to +105°C. Engineering samples are available now. The AS0140 will be in production in 4Q17, and the AS0142 in 1Q18.
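As a sanity check on that 93 dB figure: dynamic range in dB is 20·log10 of the ratio between the brightest and darkest resolvable signals, so 93 dB corresponds to roughly a 45,000:1 scene contrast, well beyond what a linear 12-bit output (about 72 dB) can carry, hence the on-chip tone mapping:

```python
import math

# Dynamic range conversions: DR_dB = 20 * log10(brightest / darkest).

def dr_db(ratio):
    """Dynamic range in dB for a given contrast ratio."""
    return 20.0 * math.log10(ratio)

def dr_ratio(db):
    """Contrast ratio for a given dynamic range in dB."""
    return 10.0 ** (db / 20.0)

print(round(dr_ratio(93.0)))  # 44668, i.e. ~45,000:1
print(round(dr_db(4096)))     # 72, the ceiling of a linear 12-bit code
```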

AS0140 ISP pipeline

Tuesday, October 10, 2017

Image Fusion in Dual Cameras

Corephotonics publishes a presentation on image fusion in dual cameras:

Eldim Supplies iPhone X Face ID Components

VentureBeat reports that Apple CEO Tim Cook visited French optical component maker Eldim. A local reporter said the two companies have been working together for almost a decade, mostly in an R&D capacity. It is only with the release of the iPhone X that the facial recognition system has been baked into a product.

Eldim CEO Thierry Leroux told reporters that working with Apple was “an incredible adventure,” but added that there have also been huge technical challenges over the years. “For us, it was a little like sending someone to the moon,” Leroux told reporters. Cook responded, “It’s great what you have done for us.”

Thanks to JB for the link!