Saturday, October 21, 2017

Infineon, Autoliv on Automotive Imaging Market

Infineon publishes an "Automotive Conference Call" presentation dated Oct. 10, 2017. A few interesting slides show camera and LiDAR content in cars of the future:


An Autoliv CEO presentation dated Sept. 28, 2017 gives a bright outlook on automotive imaging:

Friday, October 20, 2017

Trinamix Distance Sensing Technology Explained

BASF spin-off Trinamix publishes a nice technology page with YouTube videos explaining its depth sensing principle, which the company calls "Focus-Induced Photoresponse (FIP):"

"FIP takes advantage of a particular phenomenon in photodetector devices: an irradiance-dependent photoresponse. The photoresponse of these devices depends not only on the amount of light incident, but also on the size of the light spot on the detector. This phenomenon allows to distinguish whether the same amount of light is focused or defocused on the sensor. We call this the “FIP effect” and use it to measure distance.

The picture illustrates how the FIP effect can be utilized for distance measurements. The photocurrent of the photodetector reaches its maximum when the light is in focus and decreases symmetrically outside the focus. A change of the distance between light source and lens results in such a change of the spot size on the sensor. By analyzing the photoresponse, the distance between light source and lens can be deduced."

Trinamix also started production of Hertzstück PbS SWIR photodetectors, with PbSe ones to follow:

Thursday, October 19, 2017

Invisage Acquired by Apple?

Reportedly, some sort of acquisition deal has been reached between Invisage and Apple. Some Invisage employees have joined Apple, while others are apparently looking for jobs. Although the deal has never been officially announced, I got unofficial confirmation of this story from 3 independent sources.

Update: According to 2 sources, the deal was closed in July this year.

Somewhat old Invisage YouTube videos are still available and show the company's visible-light technology, although Invisage has worked on IR sensing in more recent years:

Update #2: There are a few more indications that Invisage has been acquired. Nokia Growth Partners (NGP), which participated in the 2014 investment round, shows Invisage in its exits list:


InterWest Partners, which also invested in 2014, now lists Invisage among its non-current investments:

Samsung VR Camera Features 17 Imagers

Samsung introduces the 360 Round, a camera for developing and streaming high-quality 3D content for VR experiences. The 360 Round uses 17 lenses (eight stereo pairs positioned horizontally and one single lens positioned vertically) to livestream 4K 3D video and spatial audio, and to create engaging 3D images with depth.

If such cameras get widely adopted on the market, they could easily become a major market for image sensors:

Google Pixel 2 Smartphone Features Stand-Alone HDR+ Processor

Ars Technica reports that the Google Pixel 2 smartphone features a separate Google-designed image processor chip, the "Pixel Visual Core." It's said "to handle the most challenging imaging and machine learning applications," and the company is "already preparing the next set of applications" designed for the hardware. The Pixel Visual Core has its own CPU (a low-power ARM Cortex-A53 core), DDR4 RAM, eight IPU cores, and PCIe and MIPI interfaces. Google says the company's HDR+ image processing can run "5x faster and at less than 1/10th the energy" compared with running on the main CPU. The new core will be enabled in the forthcoming Android Oreo 8.1 (MR1) update.

The new IPU cores are intended to be programmed using the Halide language for image processing and TensorFlow for machine learning. A custom Google-made compiler optimizes the code for the underlying hardware.
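As a rough illustration of the kind of computational-photography workload such a core offloads, here is a heavily simplified, NumPy-only sketch of an HDR+-style burst pipeline: several short-exposure frames are merged with robustness weights to cut noise, then a simple global tone map is applied. Google's actual HDR+ (tile-based alignment, raw-domain merging, local tone mapping) is far more sophisticated; the parameters and synthetic data below are purely illustrative.

```python
import numpy as np

def merge_burst(frames, ref_idx=0, sigma=0.02):
    """Average a burst of short-exposure frames, down-weighting pixels that
    disagree with the reference frame (a crude stand-in for robust align-and-merge)."""
    ref = frames[ref_idx]
    acc = np.zeros_like(ref, dtype=np.float64)
    wsum = np.zeros_like(ref, dtype=np.float64)
    for frame in frames:
        w = np.exp(-((frame - ref) ** 2) / (2.0 * sigma ** 2))  # similarity weight
        acc += w * frame
        wsum += w
    return acc / np.maximum(wsum, 1e-12)

def tone_map(linear, gain=4.0):
    """Simple global tone map: digital gain for the underexposed merge, then gamma."""
    x = np.clip(linear * gain, 0.0, 1.0)
    return x ** (1.0 / 2.2)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    scene = np.clip(rng.random((120, 160)) * 0.2, 0, 1)          # dim synthetic scene
    burst = [np.clip(scene + rng.normal(0, 0.02, scene.shape), 0, 1) for _ in range(8)]
    result = tone_map(merge_burst(burst))
    print("merged frame:", result.shape, f"mean = {result.mean():.3f}")
```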


Google also publishes an article explaining the HDR+ and portrait-mode processing that the new core is supposed to accelerate, as well as a video explaining the Pixel 2 camera features:

Wednesday, October 18, 2017

Basler Compares Image Sensors for Machine Vision and Industrial Applications

Basler presents EMVA 1288 measurements of the image sensors used in its cameras. It's quite interesting to compare CCD with CMOS sensors, and Sony with other companies, in terms of QE, saturation capacity (Qsat), dark noise, etc.:
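For readers less familiar with EMVA 1288, the standard's linear camera model is what makes such comparisons meaningful: from QE, temporal dark noise and saturation capacity one can estimate SNR and dynamic range directly. A small sketch of that (simplified) model follows; the sensor numbers in it are made up for illustration and are not Basler's measurements.

```python
import numpy as np

# Simplified EMVA 1288 linear camera model:
#   mu_e = QE * mu_p,  sigma_y^2 = sigma_d^2 + sigma_q^2 + mu_e
# (shot noise is Poissonian, so its variance equals the mean electron count).

def snr(mu_p, qe, sigma_d, sigma_q2=1.0 / 12.0):
    """SNR for mu_p photons/pixel, quantum efficiency qe, dark noise sigma_d [e-],
    and quantization noise variance sigma_q2 (expressed in e-^2, unity gain assumed)."""
    mu_e = qe * mu_p
    return mu_e / np.sqrt(sigma_d ** 2 + sigma_q2 + mu_e)

def dynamic_range_db(qsat, qe, sigma_d):
    """DR = saturation capacity / absolute sensitivity threshold (approx. sigma_d/QE photons)."""
    mu_p_min = sigma_d / qe          # simplified absolute sensitivity threshold, in photons
    mu_p_sat = qsat / qe             # photons needed to reach full well
    return 20.0 * np.log10(mu_p_sat / mu_p_min)

# Hypothetical sensors (illustrative values, not from Basler's data)
sensors = {
    "CCD  (QE=0.55, 9 e- read, 20 ke- FWC)": (0.55, 9.0, 20000.0),
    "CMOS (QE=0.70, 2 e- read, 10 ke- FWC)": (0.70, 2.0, 10000.0),
}

for name, (qe, sigma_d, qsat) in sensors.items():
    print(f"{name}: SNR @ 1000 photons = {snr(1000, qe, sigma_d):5.1f}, "
          f"DR = {dynamic_range_db(qsat, qe, sigma_d):5.1f} dB")
```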

5 Things to Learn from AutoSens 2017

EMVA publishes "AutoSens Show Report: 5 Things We Learned This Year" by Marco Jacobs, VP of Marketing, Videantis. The five important things are:
  1. The devil is in the detail
    Sort of obvious. See some examples in the article.
  2. No one sensor to rule them all
    Different image sensors and LiDARs, each optimized for a different sub-task.
  3. No bold predictions
    That is, nobody knows when autonomous driving will arrive on the market.
  4. Besides the drive itself, what will an autonomous car really be like?
  5. Deep learning a must-have tool for everyone
    Sort of a common statement, although the approaches vary. Some put the intelligence into the sensors; others keep the sensors dumb and concentrate the processing in a central unit.

DENSO and Fotonation Collaborate

BusinessWire: DENSO and Xperi's FotoNation start joint development of cabin sensing technology based on image recognition. DENSO expects to significantly improve the performance of its Driver Status Monitor, an active safety product used in trucks since 2014. Improved versions of such products will also be used in next-generation passenger vehicles, including a system to help drivers return to driving mode during Level 3 autonomous driving.

Using FotoNation’s facial image recognition and neural network technologies, detection accuracy will be improved significantly by detecting many more facial features, rather than relying on the conventional detection method based on the relative positions of the eyes, nose, mouth, and other facial regions. Moreover, DENSO will develop new functions, such as more accurate detection of the driver’s gaze direction and facial expressions, to understand the driver’s state of mind and help create more comfortable vehicles.
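For reference, the "conventional" approach the release contrasts against can be sketched in a few lines of OpenCV: detect the face and eye regions with cascade classifiers and derive a crude attention cue from their relative positions. The learned, many-landmark methods described above replace this kind of hand-crafted geometry; the file name and thresholds below are illustrative only.

```python
import cv2

# Classic approach: cascade detectors + hand-crafted geometry based on the relative
# positions of facial regions. Modern driver-monitoring stacks use dense learned landmarks.
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade  = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def driver_attention_cue(gray):
    """Return a rough 'looking away' score from eye positions within the face box."""
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None                                       # no face found
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])    # take the largest face
    eyes = eye_cascade.detectMultiScale(gray[y:y + h, x:x + w], 1.1, 5)
    if len(eyes) < 2:
        return 1.0                                        # eyes not visible: likely looking away
    # Horizontal offset of the eye midpoint from the face-box centre, normalized
    eye_cx = sum(ex + ew / 2.0 for ex, ey, ew, eh in eyes[:2]) / 2.0
    return abs(eye_cx - w / 2.0) / (w / 2.0)

if __name__ == "__main__":
    frame = cv2.imread("driver.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical cabin-camera frame
    if frame is not None:
        print("looking-away score:", driver_attention_cue(frame))
```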

"Understanding the status of the driver and engaging them at the right time is an important component for enabling the future of autonomous driving," said Yukihiro Kato, senior executive director, Information & Safety Systems Business Group of DENSO. "I believe this collaboration with Xperi will help accelerate our innovative ADAS product development by bringing together the unique expertise of both our companies."

"We are excited to partner with DENSO to innovate in such a dynamic field," said Jon Kirchner, CEO of Xperi Corporation. "This partnership will play a significant role in paving the way to the ultimate goal of safer roadways through use of our imaging and facial analytics technologies and DENSO’s vast experience in the space."

Tuesday, October 17, 2017

AutoSens 2017 Awards

The AutoSens conference, held on Sept. 20-21 in Brussels, Belgium, publishes its awards. Some of the image sensor-relevant ones:

Most Engaging Content
  • First place: Vladimir Koifman, Image Sensors World (yes, this is me!)
  • Highly commended: Junko Yoshida, EE Times

Hardware Innovation
  • First place: Renesas
  • Highly commended: STMicroelectronics

Most Exciting Start-Up
  • Winner: Algolux
  • Highly commended: Innoviz Technologies

LG, Rockchip and CEVA Partner on 3D Imaging

PRNewswire: CEVA partners with LG to deliver a high-performance, low-cost smart 3D camera for consumer electronics and robotic applications.

The 3D camera module incorporates a Rockchip RK1608 coprocessor with multiple CEVA-XM4 imaging and vision DSPs to perform biometric face authentication, 3D reconstruction, gesture/posture tracking, obstacle detection, and AR/VR functions.

"There is a clear demand for cost-efficient 3D camera sensor modules to enable an enriched user experience for smartphones, AR and VR devices and to provide a robust localization and mapping (SLAM) solution for robots and autonomous cars," said Shin Yun-sup, principal engineer at LG Electronics. "Through our collaboration with CEVA, we are addressing this demand with a fully-featured compact 3D module, offering exceptional performance thanks to our in-house algorithms and the CEVA-XM4 imaging and vision DSP."