MS Seminar


Name of the Speaker: Mr. RONGALI SIMHACHALA VENKATA GIRISH (EE21S115)
Guide: Dr. Kaushik Mitra
Online meeting link: meet.google.com/msh-jsij-zbc
Date/Time: 13th September 2024 (Friday), 2:00 PM
Title: Stereo Restoration, Depth Perception, and Novel View Synthesis in Low-Light Environments

Abstract:

Given the rising demand for advanced computer vision solutions across diverse domains, including autonomous driving, virtual reality, and augmented reality, the ability to accurately perceive and process visual information in challenging environments has become increasingly critical. One such challenge is building robust vision systems that operate effectively in low-light conditions, which are common in real-world scenarios. Our work addresses these challenges by focusing on stereo low-light depth estimation, image enhancement, and novel view synthesis in low-light conditions.

To address the first challenge of extreme low-light stereo enhancement and depth estimation, this thesis proposes SSID (Stereo See in the Dark), a comprehensive low-light stereo dataset designed to facilitate the development and evaluation of robust stereo vision algorithms under extremely low lighting conditions. Using the proposed dataset, we develop an approach that leverages enhancement features as cues for estimating disparity in extreme low-light conditions. We conduct a comprehensive evaluation of several state-of-the-art enhancement models on our dataset, demonstrating the critical importance of real images over synthetic ones for effective low-light enhancement. Our experiments further show that stereo image pairs provide more information than single images, leading to superior enhancement results.

Our dataset unlocks new opportunities in stereo low-light enhancement and disparity estimation, which could significantly benefit applications such as autonomous driving and VR/AR.

Second, we present “GAURA: Generalizable Approach for Unified Restoration and Rendering of Arbitrary Views” to address the challenge of novel view synthesis in low-light conditions. GAURA achieves near-photorealistic synthesis of scenes from posed input images even when those images are imperfect, e.g., captured in very low-light conditions where state-of-the-art methods fail to reconstruct high-quality 3D scenes. GAURA is a generalizable neural rendering method that performs high-fidelity novel view synthesis in low-light conditions and generalizes to several degradations. Our method is learning-based, requires no test-time scene-specific optimization, and is trained on a synthetic dataset covering several degradation types. GAURA outperforms state-of-the-art methods on several benchmarks for low-light enhancement, dehazing, and deraining, and performs on par with them for motion deblurring. Further, our model can be efficiently fine-tuned to any new incoming degradation using minimal data; we demonstrate adaptation results on two unseen degradations, desnowing and defocus blur removal.