Day 2: Small Eyes & Smart Minds Incubator
Sandra A. Gutierrez Razo, University of Maryland
Yesterday, during the Small Eyes & Smart Minds “Sensors & Systems” session, experts discussed novel imaging techniques and hardware. Panelists agreed that data processing is a major bottleneck in imaging, and they noted that the drivers for new developments include astronomy, microscopy, entertainment, and other human-centric applications.
The second session, “Applications -- Health, Automotive, VR, Mobile & Scientific,” showcased some human-centric imaging applications. Gordon Wetzstein from Stanford University showed a wearable computer whose near-eye display goes beyond computational imaging and sensing to mimic, and even correct, human vision; the technology could be worn like eyeglasses and serve as corrective lenses as well. Sanjeev Agarwal from the Army Night Vision Laboratory suggested learning from how humans process imagery: he wants to couple sensing and perception to create smarter sensors that perform some on-the-fly data processing, hastening processing downstream. Rafael Piestun from the University of Colorado Boulder showed how he can reconstruct images sent through turbid, complex media. By transmitting images through optical fiber, he demonstrated the superhuman ability to peer through biological tissue at depths of 2 mm or more. Sanjeev Koppal from the University of Florida took inspiration from nature to reduce energy consumption with tiny sensors that can monitor a wide field of view. These sensors, coupled with face-tracking technology, could be watching us all one day, so it’s a good thing he has also incorporated privacy-preserving encodings. Florian Willomitzer from Northwestern University showed us a masterful rendition of his own head that could move, blink, and mouth words. The most impressive part is that he triangulated his features using the smart placement of only two cameras and structured illumination with vertical lines of visible light. In fact, this week his image was named OSA’s Image of the Week.
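For readers curious about the geometry behind Willomitzer’s two-camera approach, the core depth calculation in any rectified stereo setup is a one-line formula; projected structured lines like his mainly make the left-right correspondences easy to find. The sketch below is a generic textbook depth-from-disparity calculation with made-up numbers, not his actual pipeline.

```python
# Generic rectified-stereo triangulation sketch (not Willomitzer's pipeline).
# z = f * b / d: depth z from focal length f (pixels), camera baseline b
# (meters), and disparity d (pixels). All numbers below are illustrative.

def depth_from_disparity(f_px: float, baseline_m: float, disparity_px: float) -> float:
    """Return depth in meters for one matched feature point."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return f_px * baseline_m / disparity_px

# Example: 1400-pixel focal length, 10 cm baseline, 35-pixel disparity.
print(depth_from_disparity(1400.0, 0.10, 35.0))  # -> 4.0 meters
```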
The panel discussion for this session reiterated the importance of data processing in all imaging applications. Although the experts did not agree on whether standardization would alleviate the processing bottleneck or choke progress further, most thought that organizing data before processing could go a long way toward solving the problem.
Rajiv Laroia of Light showcased what Smart Minds can do with his company’s new L16 Camera, which combines optics design with imaging technology to capture the details of a scene at multiple focal lengths, then uses sophisticated algorithms to combine the multiple exposures into a single high-resolution photo.
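Light’s fusion algorithms are proprietary, but the general flavor of merging several exposures into one photo can be sketched with OpenCV’s off-the-shelf Mertens exposure fusion. The file names below are hypothetical placeholders, and this is a generic illustration rather than the L16’s actual processing.

```python
# Toy exposure-fusion sketch (Mertens et al.), not Light's proprietary pipeline.
# Merges several differently exposed shots of the same scene into one image.
import cv2
import numpy as np

# Hypothetical inputs: the same scene captured at three different exposures.
paths = ["shot_under.jpg", "shot_mid.jpg", "shot_over.jpg"]
images = [cv2.imread(p) for p in paths]

# Mertens fusion weights each pixel by contrast, saturation, and exposedness,
# so no camera-response calibration or exposure times are required.
fused = cv2.createMergeMertens().process(images)

# The result is floating point in roughly [0, 1]; convert to 8-bit to save.
cv2.imwrite("fused.jpg", np.clip(fused * 255, 0, 255).astype("uint8"))
```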
The final session, “Towards Smart Minds,” discussed some processing solutions. Nathan Kutz, University of Washington, has developed a video processing technique that discards information from areas of the image that do not change over time; by ignoring static backgrounds, a great deal of storage can be saved. He came up with the idea after finding his hard drive filled with the huge number of videos his young daughters had made of their new puppy.
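As a rough illustration of where the savings come from (and not Kutz’s actual method, which is far more sophisticated), one can store a reference frame and then, for each later frame, keep only the pixels that changed beyond a threshold. A minimal numpy sketch, assuming a static camera and grayscale frames:

```python
# Toy background-discarding storage sketch, assuming a static camera.
# Not Kutz's algorithm: we keep one reference frame plus, for each later
# frame, only the pixel locations and values that changed noticeably.
import numpy as np

def compress(frames, threshold=10):
    """frames: list of uint8 grayscale arrays of identical shape."""
    reference = frames[0]
    deltas = []
    for frame in frames[1:]:
        diff = np.abs(frame.astype(np.int16) - reference.astype(np.int16))
        idx = np.flatnonzero(diff > threshold)        # which pixels changed
        deltas.append((idx, frame.reshape(-1)[idx]))  # store only those values
    return reference, deltas

def decompress(reference, deltas):
    """Rebuild each frame as the reference with its changed pixels restored."""
    frames = [reference]
    for idx, values in deltas:
        frame = reference.copy().reshape(-1)
        frame[idx] = values
        frames.append(frame.reshape(reference.shape))
    return frames
```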
Aswin Sankaranarayanan from Carnegie Mellon University showed a white-balancing technique that enables re-lighting in post-processing; if only we had had this when we were all trying to decide whether the famous Internet dress was gold and white or black and blue! (A toy white-balance sketch appears after this paragraph.) Alex Drimbarean from FotoNation showed an architectural approach to data processing: his company splits the processing load among dedicated hardware cores so that many different features can be processed concurrently, an innovation that could give consumer products higher functionality and lower power consumption. Tom Goldstein from the University of Maryland suggested an improved training algorithm that could enable deep learning on handheld and other low-power devices. In the last panel session, the experts concurred that, in general, it is better for the computing to happen closer to the camera. Application-specific sensors were also favored over a single unit that tries to do it all. Sankaranarayanan summed up the latter sentiment by saying that “smart eyes are good, but if you need to hear something, you just have ears.”
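Sankaranarayanan’s relighting work goes well beyond basic white balancing, but the underlying color-correction step can be illustrated with the classic gray-world assumption: scale each color channel so its mean matches the overall mean, neutralizing the illuminant’s cast. The snippet below is that generic textbook technique with made-up image data, not his method.

```python
# Gray-world white-balance sketch: a generic textbook technique, not
# Sankaranarayanan's relighting method. Scales each channel so its mean
# matches the global mean, removing the illuminant's color cast.
import numpy as np

def gray_world_balance(img):
    """img: HxWx3 float array in [0, 1]; returns a white-balanced copy."""
    channel_means = img.reshape(-1, 3).mean(axis=0)  # per-channel averages
    gains = channel_means.mean() / channel_means     # gray-world channel gains
    return np.clip(img * gains, 0.0, 1.0)

# Example on random data standing in for a photo with a warm color cast.
photo = np.random.rand(480, 640, 3) * np.array([1.0, 0.8, 0.6])
balanced = gray_world_balance(photo)
```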
The program ended with another hour of discussion that included talk of potential follow-up Incubators. Rama Chellappa, one of the hosts of this Incubator, wanted to further explore the role of optics in the reasoning aspect of AI, with Eric Fossum suggesting the title “AI wave for the EM wave.” Other topics to explore included the future of image processing, connecting with the silicon photonics community, and embedding inference into an optical system. So stay tuned for what might come next!
Peter Catrysse, Stanford University, examines Rajiv Laroia’s L16 camera.