Lensless Cameras for Single-shot 3D Imaging and Optical Encryption




Name of the Speaker: Salman Siddique Khan (EE18D406)
Guide: Dr. Kaushik Mitra
Venue/Online meeting link:
Date/Time: 21st March 2023, 2pm

Passive mask-based lensless cameras encode depth information in their measurements over a certain depth range. Early works have shown that this encoded depth can be used to perform 3D reconstruction of close-range scenes. However, these approaches to 3D reconstruction are typically optimization-based and require strong hand-crafted priors and hundreds of iterations to converge. In this work, we propose FlatNet3D, an end-to-end trainable, feed-forward deep network that estimates both depth and intensity from a single lensless measurement using an efficient physics-based 3D mapping stage followed by a fully convolutional network. Our algorithm is fast and produces high-quality results, which we validate on both simulated scenes and real scenes captured with PhlatCam.
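A rough intuition for the physics-based mapping stage: the lensless measurement can be modeled as a sum of depth-dependent convolutions, so a cheap physics-based front end can deconvolve the measurement against each calibrated depth PSF to form a candidate stack that a convolutional network then refines into depth and intensity. The NumPy sketch below is illustrative only; the random PSFs, sizes, and the Wiener inversion are placeholder assumptions, not the actual FlatNet3D pipeline.

```python
import numpy as np

def wiener_deconvolve(meas, psf, snr=1e-2):
    """Invert one depth plane's convolution in the Fourier domain."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    X = np.conj(H) * np.fft.fft2(meas) / (np.abs(H) ** 2 + snr)
    return np.real(np.fft.ifft2(X))

def physics_mapping(meas, psf_stack, snr=1e-2):
    """Map one 2D lensless measurement to a stack of candidate
    depth-plane reconstructions; a learned network would refine this
    stack into depth and intensity maps."""
    return np.stack([wiener_deconvolve(meas, p, snr) for p in psf_stack])

# Toy example: a planar scene lying at depth plane 1 of 3.
rng = np.random.default_rng(0)
scene = np.zeros((64, 64))
scene[30:40, 30:40] = 1.0
psfs = (rng.random((3, 64, 64)) > 0.5).astype(float)  # stand-in PSFs
meas = np.real(np.fft.ifft2(np.fft.fft2(scene) *
                            np.fft.fft2(np.fft.ifftshift(psfs[1]))))
stack = physics_mapping(meas, psfs)
```

Deconvolving with the PSF of the true depth plane yields a sharp estimate, while the other planes produce noise-like slices; this depth-selective contrast is what the downstream network exploits.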

Another interesting aspect of mask-based lensless cameras is their ability to perform computation in the optical domain, with operations defined by the mask patterns and configurations. In this work, we exploit this ability to design privacy-preserving cameras based on optical encryption. Existing lensless camera designs fail to preserve the privacy of the scenes in which they are deployed due to inherent flaws in their design. To overcome this, we propose OpEnCam, a novel lensless camera design for optical encryption. OpEnCam encrypts the incoming light before capture using the modulating ability of optical masks. Recovery of the original scene from an OpEnCam measurement is possible only with access to the camera's encryption key, defined by the unique optical elements of each camera. Our OpEnCam design introduces two major improvements over existing lensless camera designs: (a) two co-axially located optical masks, one affixed to the sensor and the other a few millimeters above it, and (b) mask patterns derived heuristically from signal processing ideas. We show through experiments that OpEnCam is robust against point-source and blind ciphertext-only attacks while retaining the imaging capabilities of existing lensless cameras when the key is known. We validate the efficacy of OpEnCam using both simulated and real data.
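The two-mask idea can be caricatured as a cascade of a convolution (from the mask a few millimeters above the sensor) and a pointwise modulation (from the mask on the sensor); recovering the scene then requires knowing both patterns. The toy sketch below only illustrates this cascade; the random mask models, the Wiener inversion, and all sizes are placeholder assumptions, not the actual OpEnCam design.

```python
import numpy as np

rng = np.random.default_rng(7)

def encrypt(scene, psf, sensor_mask):
    """Two-stage optical 'encryption': convolution by the outer
    mask's PSF, then pointwise modulation by the on-sensor mask."""
    conv = np.real(np.fft.ifft2(np.fft.fft2(scene) *
                                np.fft.fft2(np.fft.ifftshift(psf))))
    return sensor_mask * conv

def decrypt(meas, psf, sensor_mask, snr=1e-3):
    """Invert both stages; requires the camera's unique masks (key)."""
    conv = meas / sensor_mask
    H = np.fft.fft2(np.fft.ifftshift(psf))
    X = np.conj(H) * np.fft.fft2(conv) / (np.abs(H) ** 2 + snr)
    return np.real(np.fft.ifft2(X))

n = 64
scene = np.zeros((n, n))
scene[20:40, 20:40] = 1.0
psf = (rng.random((n, n)) > 0.5).astype(float)   # outer-mask PSF
sensor_mask = rng.uniform(0.5, 1.5, (n, n))      # on-sensor mask
meas = encrypt(scene, psf, sensor_mask)

rec_good = decrypt(meas, psf, sensor_mask)       # correct key
wrong_psf = (rng.random((n, n)) > 0.5).astype(float)
rec_bad = decrypt(meas, wrong_psf, sensor_mask)  # wrong key
```

With the correct masks the scene is recovered almost exactly, while a wrong outer-mask guess yields a noise-like image, mirroring the key-dependent recovery described above.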

Finally, we explore designing the masks of lensless cameras jointly with the inference algorithms. In this work, we propose a learning-based framework for designing the mask of thin lensless cameras. To highlight the effectiveness of the learned lensless systems, we learn a phase mask for multiple computer vision tasks using physics-based neural networks. Specifically, we learn the optimal mask for the following tasks: 2D scene reconstruction, optical flow estimation, and face detection. We show that masks learned through this framework outperform heuristically designed masks, especially for small sensor sizes that enable lower bandwidth and faster readout. We verify the performance of our learned phase mask on real data.
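The end-to-end principle, differentiating a reconstruction (or task) loss through the camera's forward model so that gradients update the optical element itself, can be illustrated with a drastically simplified linear analogue in which the "mask" is just a learnable sensing matrix trained jointly with a linear decoder. Nothing below reflects the actual physics-based network or phase-mask parameterization; it is a toy NumPy sketch of joint sensing/decoder optimization under those stated assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy end-to-end design: a linear "camera" y = M @ x with m < n
# measurements and a linear decoder D. We learn M and D jointly by
# gradient descent on reconstruction error over a training set; the
# same principle (with a far richer optical model) underlies learning
# a phase mask with a physics-based network.
n, m, k = 32, 8, 256                 # scene dim, sensor dim, #samples
X = rng.standard_normal((n, k)) * np.linspace(1.0, 0.05, n)[:, None]
M = rng.standard_normal((m, n)) * 0.1   # learnable "mask" (sensing op)
D = rng.standard_normal((n, m)) * 0.1   # learnable decoder
loss0 = np.mean((D @ M @ X - X) ** 2)   # error before training

lr = 2e-2
for _ in range(3000):
    Y = M @ X
    R = D @ Y - X                    # reconstruction residual
    gD = R @ Y.T / k                 # gradient w.r.t. decoder
    gM = D.T @ R @ X.T / k           # gradient w.r.t. sensing matrix
    D -= lr * gD
    M -= lr * gM

loss = np.mean((D @ M @ X - X) ** 2)
```

The learned sensing rows drift toward the directions that matter for the downstream loss, which is the intuition for why a task-optimized mask can beat a heuristic one at a fixed sensor budget.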