From cameras to displays, visual computing systems are becoming ubiquitous in our daily lives. However, their underlying design principles have stagnated after decades of evolution. Existing imaging devices require dedicated hardware that is not only complex and bulky but also delivers suboptimal results in certain visual computing scenarios. This shortcoming stems from a lack of joint design between hardware and software and, importantly, impedes displays from delivering vivid 3D visual experiences. By bridging advances in computer science and optics with extensive machine intelligence strategies, my work engineers physically compact yet functionally powerful imaging solutions for cameras and displays, with applications in photography, wearable computing, IoT products, autonomous driving, medical imaging, and VR/AR/MR. In this talk, I will describe two classes of computational imaging modalities. First, in Deep Optics, we jointly optimize lightweight diffractive optics and differentiable image-processing algorithms to enable high-fidelity imaging in domain-specific cameras. Second, in Neural Holography, we apply the same combination of machine intelligence and physics to solve long-standing problems in computer-generated holography. Specifically, I will describe several holographic display architectures that leverage camera-in-the-loop optimization and neural-network model representations to deliver full-color, high-quality holographic images. Driven by advances in machine intelligence, these jointly optimized hardware-software imaging solutions can unlock the full potential of traditional cameras and displays and enable next-generation visual computing systems.
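The core idea behind Deep Optics — optimizing the optical element and the reconstruction algorithm against a single end-to-end loss — can be illustrated with a minimal toy sketch. This is an assumed, drastically simplified setup for illustration only (a 1D scene, a one-parameter blur standing in for the diffractive optic, a 3-tap linear filter standing in for the reconstruction network, and finite-difference gradients in place of autodiff); it is not the pipeline from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(256)  # ground-truth 1D "scene" (assumed toy data)

def end_to_end_loss(w, h, x):
    # "Optics": a symmetric 3-tap PSF controlled by one design parameter w.
    psf = np.array([w, 1.0 - 2.0 * w, w])
    y = np.convolve(x, psf, mode="same")       # simulated sensor measurement
    # "Algorithm": a learned 3-tap linear reconstruction filter h.
    x_hat = np.convolve(y, h, mode="same")
    return np.mean((x_hat - x) ** 2)           # shared end-to-end loss

def grad(params, x, eps=1e-5):
    # Finite-difference gradient over the *joint* parameter vector [w, h].
    g = np.zeros_like(params)
    for i in range(len(params)):
        p_plus, p_minus = params.copy(), params.copy()
        p_plus[i] += eps
        p_minus[i] -= eps
        g[i] = (end_to_end_loss(p_plus[0], p_plus[1:], x)
                - end_to_end_loss(p_minus[0], p_minus[1:], x)) / (2 * eps)
    return g

params = np.array([0.25, 0.0, 1.0, 0.0])  # initial [w, h0, h1, h2]
initial = end_to_end_loss(params[0], params[1:], x)
for _ in range(500):
    params -= 0.1 * grad(params, x)        # joint gradient descent
final = end_to_end_loss(params[0], params[1:], x)
print(initial, "->", final)
```

Because both the optical parameter and the reconstruction filter receive gradients from the same loss, the optimizer is free to trade off encoding in the "optics" against decoding in the "algorithm" — the essence of hardware-software co-design, here reduced to four scalars.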