Towards Photorealistic Scene Reconstructions from Lensless Measurements

Name of the Speaker: Salman Siddique Khan (EE18D406)
Guide: Dr. Kaushik Mitra
Date/Time: 20th January 2023 (Friday), 3:00 PM

Recent advancements in fields like the Internet of Things (IoT), augmented reality, and robotics have led to an unprecedented demand for low-cost miniature cameras that can be integrated anywhere and used for distributed monitoring. Lensless imaging has emerged as a potential solution for realizing ultra-miniature cameras by eschewing the bulky lens of a traditional camera. However, the reduction in the size and cost of these imagers comes at the expense of image quality due to the high degree of multiplexing inherent in their design. Without a focusing lens, lensless cameras rely on computational algorithms to recover the scene from multiplexed measurements.

Current iterative optimization-based lensless reconstruction algorithms produce noisy, perceptually poor images. In this work, we propose a non-iterative deep-learning-based reconstruction approach that significantly improves the image quality of lensless scene reconstructions. Our approach, called FlatNet-separable, integrates the lensless imaging model with a convolutional neural network to produce high-quality reconstructions. It consists of two stages: (a) an inversion stage that uses the physics of the separable forward model to learn a mapping from the measurement to an intermediate image space, and (b) an enhancement stage that uses a fully convolutional network to enhance the intermediate image. We perform our experiments on a separable FlatCam prototype and demonstrate the efficacy of FlatNet-separable on a large-scale real paired dataset collected with a monitor-display capture setup, as well as on unpaired measurements of challenging real scenes placed in front of the camera.
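The separable inversion stage described above can be sketched as follows. This is a toy NumPy sketch under stated assumptions: the sizes and random system matrices are illustrative stand-ins for a calibrated FlatCam, and the pseudoinverse initialization is only a natural starting point, not the trained FlatNet-separable weights.

```python
import numpy as np

# Toy sizes; real FlatCam scenes and measurements are much larger.
rng = np.random.default_rng(0)
h, w = 32, 32   # scene resolution (hypothetical)
m, n = 64, 64   # sensor resolution (hypothetical)

# Separable system matrices (random stand-ins for calibrated ones).
phi_L = rng.standard_normal((m, h)) / np.sqrt(m)
phi_R = rng.standard_normal((n, w)) / np.sqrt(n)

def forward(x):
    """Separable forward model: Y = Phi_L X Phi_R^T."""
    return phi_L @ x @ phi_R.T

# Inversion stage: two matrices mapping the measurement back to image
# space. FlatNet-separable learns such matrices jointly with the
# enhancement network; pseudoinverses of the calibrated matrices are
# used here as an illustrative stand-in.
W_L = np.linalg.pinv(phi_L)  # shape (h, m)
W_R = np.linalg.pinv(phi_R)  # shape (w, n)

def invert(y):
    """Intermediate image: X_hat = W_L Y W_R^T."""
    return W_L @ y @ W_R.T

x = rng.random((h, w))
x_hat = invert(forward(x))  # near-exact in this noiseless toy setting
```

In the real pipeline, the intermediate image is noisy and artifact-ridden; the enhancement stage (a fully convolutional network) maps it to the final photorealistic reconstruction.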

Although FlatNet-separable provides photorealistic reconstructions for a separable lensless prototype, it cannot provide meaningful reconstructions when the separability of the model is broken. Moreover, recent trends in lensless imaging show that non-separable models have superior properties compared to separable ones. To overcome this shortcoming of FlatNet-separable, we propose a novel architecture called FlatNet-gen. Like FlatNet-separable, FlatNet-gen uses two stages: (a) an inversion stage and (b) an enhancement stage. However, to account for the non-separability of the model, we learn an inverse of the forward process in the Fourier domain. We perform our experiments on a non-separable PhlatCam prototype and demonstrate the efficacy of FlatNet-gen on both display-captured and directly captured data that we collected. To demonstrate the ability of FlatNet-gen to reconstruct scenes under uncontrolled illumination, we collect a paired dataset using a PhlatCam-Webcam. Finally, with the help of this dataset, we show that FlatNet-gen provides high-quality scene reconstructions under uncontrolled illumination.
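A classical analogue of the Fourier-domain inversion stage is Wiener-style deconvolution, sketched below in NumPy. This is a toy illustration under stated assumptions: the PSF (a sharp peak plus a diffuse component, chosen so the toy problem is well conditioned, unlike a real caustic PSF), the sizes, and the regularization weight `gamma` are all hypothetical, and FlatNet-gen learns its Fourier-domain inverse end to end rather than using this fixed filter.

```python
import numpy as np

rng = np.random.default_rng(1)
H, W = 64, 64  # toy measurement size (hypothetical)

# Toy PSF: a sharp peak plus a diffuse part, so its spectrum is well
# conditioned. A real PhlatCam PSF is a calibrated caustic pattern.
psf = rng.random((H, W))
psf /= 2.0 * psf.sum()
psf[0, 0] += 0.5  # total energy sums to 1

scene = rng.random((H, W))

# Non-separable forward model as a circular convolution with the PSF,
# applied in the Fourier domain.
Hf = np.fft.fft2(psf)
meas = np.real(np.fft.ifft2(np.fft.fft2(scene) * Hf))

# Inversion stage, classical analogue: a Wiener-style filter
# Wf = conj(Hf) / (|Hf|^2 + gamma) applied in the Fourier domain.
gamma = 1e-6  # regularization weight (hypothetical value)
Wf = np.conj(Hf) / (np.abs(Hf) ** 2 + gamma)
x_hat = np.real(np.fft.ifft2(np.fft.fft2(meas) * Wf))
```

As in the separable case, this inversion only produces an intermediate image; the enhancement network then removes residual noise and deconvolution artifacts.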