Deep Learning Approaches for Handling Geometric and Photometric Distortions

Name of the Speaker: Mr. Praveen Kandula (EE17D026)
Guide: Dr. Rajagopalan AN
Venue/Online meeting link: https://meet.google.com/mdz-wtdw-hfv
Date/Time: 19th September 2023 (Tuesday), 11:30 AM

Abstract
Restoration of images with photometric and geometric distortions is a fundamental challenge in computer vision. These distortions arise from a variety of factors, including sensor size limitations, motion artifacts, and constraints of the capturing medium. Representative examples include rolling shutter (RS) distortions, hazy underwater images, low-light images, and satellite image distortions. In this research, we explore restoration techniques for each of these settings to improve image quality.

We begin with a method that generates rolling-shutter-rectified images in an end-to-end manner. In handheld cameras, the CMOS sensor array exposes the rows of a scene sequentially, resulting in inter-row delays. This sequential exposure introduces the undesired rolling shutter (RS) effect when there is relative motion between the camera and the scene during imaging. We analyze how various camera motions during these inter-row delays affect the final captured image. We then present a deep learning-based approach for rectifying RS distortion: we estimate the underlying row-wise camera motions as well as the motion associated with each pixel in the target image, and use this information to generate the rectified image end-to-end.
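
To make the row-wise picture concrete, the sketch below shows how a per-pixel displacement field, once estimated, can warp an RS frame to its global-shutter counterpart. This is a minimal PyTorch illustration under assumed tensor shapes, not the speaker's actual architecture; the network that predicts the displacements is omitted.

    # Minimal sketch: rectify a rolling-shutter frame by pixel-wise warping.
    # The displacement field would come from a motion-estimation network;
    # here it is simply an input. Shapes and names are illustrative.
    import torch
    import torch.nn.functional as F

    def rectify_rs(rs_image: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
        # rs_image: (B, C, H, W) rolling-shutter frame.
        # flow:     (B, 2, H, W) per-pixel displacement (in pixels) induced
        #           by row-wise camera motion during the inter-row delays.
        b, _, h, w = rs_image.shape
        ys, xs = torch.meshgrid(
            torch.arange(h, dtype=rs_image.dtype, device=rs_image.device),
            torch.arange(w, dtype=rs_image.dtype, device=rs_image.device),
            indexing="ij",
        )
        grid = torch.stack((xs, ys), dim=0).unsqueeze(0).expand(b, -1, -1, -1)
        # Shift every pixel by its estimated motion, then normalise the
        # sampling grid to [-1, 1] as grid_sample expects.
        sampled = grid + flow
        sx = 2.0 * sampled[:, 0] / (w - 1) - 1.0
        sy = 2.0 * sampled[:, 1] / (h - 1) - 1.0
        norm_grid = torch.stack((sx, sy), dim=-1)          # (B, H, W, 2)
        return F.grid_sample(rs_image, norm_grid, align_corners=True)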

We then present an unsupervised technique for restoring low-light images, specifically addressing spatially varying illumination levels. Existing works on low-light enhancement suffer from two main drawbacks. First, supervised methods require paired training data, i.e., for every low-light image there must be a pixel-wise aligned clean image. Second, prior methods assume uniform illumination, which does not hold for real low-light images. Our proposed approach operates without paired mappings, utilizing only an unpaired dataset of low-light and clean images. To tackle the spatially varying illumination levels, we introduce a novel norm function that identifies and handles them effectively, allowing accurate restoration of low-light images. Additionally, our methodology can generate multiple enhanced images from a single low-light input, offering an alternative framework when multiple enhanced outputs are required.
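
The abstract does not specify the norm itself, so the following is only a hypothetical illustration of how a penalty can adapt to spatially varying illumination: a smoothed local-brightness map of the input reweights a per-pixel term so that dark regions, which must change substantially, are penalised less than well-lit ones. The weighting scheme and all names are assumptions.

    # Hypothetical illumination-adaptive penalty (not the talk's actual norm).
    import torch
    import torch.nn.functional as F

    def illumination_adaptive_loss(enhanced: torch.Tensor,
                                   low_light: torch.Tensor,
                                   eps: float = 1e-3) -> torch.Tensor:
        # enhanced, low_light: (B, 3, H, W) images in [0, 1].
        # Local illumination estimate: per-pixel channel max, blurred
        # over a 15x15 window so the weight varies smoothly in space.
        illum = low_light.max(dim=1, keepdim=True).values        # (B,1,H,W)
        illum = F.avg_pool2d(illum, kernel_size=15, stride=1, padding=7)
        # Normalise so the weight averages to ~1 per image; bright regions
        # get weight > 1 (preserve them), dark regions weight < 1.
        weight = illum / (illum.mean(dim=(2, 3), keepdim=True) + eps)
        return (weight * (enhanced - low_light).abs()).mean()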

We subsequently move to an unsupervised approach for the restoration of underwater images. The subpar quality of underwater images stems from multiple factors, such as turbidity, water depth, and the presence of suspended particles. Together, these factors cause substantial attenuation, which ultimately diminishes image contrast and colour saturation. We first demonstrate the limitations of current unsupervised frameworks for underwater restoration, specifically their reliance on an inaccurate cycle consistency. To overcome this, we introduce an unsupervised framework that separates the haze and content components of underwater images. The isolated haze component is then fed back into our framework to enforce an accurate cycle consistency, while the content images pass through a residual network to produce the final restored result.
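
As a rough sketch of the disentanglement idea, the toy module below splits an underwater image into haze and content feature maps and requires that recombining them reconstructs the input, which is the role the isolated haze component plays in the corrected cycle-consistency term. The architecture is a placeholder, not the networks used in the work.

    # Toy haze/content disentanglement with a haze-aware cycle term.
    import torch
    import torch.nn as nn

    class HazeContentSplit(nn.Module):
        def __init__(self, ch: int = 32):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
                nn.Conv2d(ch, 6, 3, padding=1),      # 3 haze + 3 content maps
            )
            self.decoder = nn.Sequential(            # recombines both parts
                nn.Conv2d(6, ch, 3, padding=1), nn.ReLU(),
                nn.Conv2d(ch, 3, 3, padding=1),
            )

        def forward(self, x):
            haze, content = self.encoder(x).chunk(2, dim=1)
            recon = self.decoder(torch.cat((haze, content), dim=1))
            return haze, content, recon

    model = HazeContentSplit()
    x = torch.rand(1, 3, 128, 128)                   # toy underwater image
    haze, content, recon = model(x)
    cycle_loss = (recon - x).abs().mean()            # must rebuild the input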

Lastly, we tackle image restoration when multiple distortions are present and training data is limited. Unlike previous approaches that either demand extensive training data or handle only individual distortions, our proposed methodology restores satellite images with multiple distortions using just a pair of input images. Multiple distortions are especially prevalent in satellite imagery due to factors such as haze, fog, smog, clouds, atmospheric conditions, and sun illumination. Our model incorporates a distortion disentanglement network that separates the distortion and content components of the satellite image. A distortion transfer mechanism then generates supervised pairs of distorted and clean images, and a restoration network is trained on these pairs to learn the mapping between distorted inputs and their restored versions. A distorted satellite image is finally passed through the trained restoration network to obtain the restored output.
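
The skeleton below traces the described pipeline end to end under stated assumptions: a disentanglement network separates distortion from content, the extracted distortion is imposed on the clean reference to synthesise a supervised pair (additive transfer is a simplification here), and a restoration network is trained on that pair. Every module and the training loop are illustrative stand-ins, not the actual networks.

    # Illustrative skeleton of the disentangle -> transfer -> restore pipeline.
    import torch
    import torch.nn as nn

    def conv_net(in_ch: int, out_ch: int) -> nn.Module:
        # Tiny stand-in for any real sub-network in the pipeline.
        return nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, out_ch, 3, padding=1),
        )

    disentangle = conv_net(3, 6)  # distorted image -> (distortion, content)
    restore = conv_net(3, 3)      # distorted input -> restored output
    opt = torch.optim.Adam(restore.parameters(), lr=1e-4)

    distorted = torch.rand(1, 3, 256, 256)  # the single distorted input
    clean_ref = torch.rand(1, 3, 256, 256)  # the single clean reference

    # Step 1: separate the distortion and content components.
    distortion, _content = disentangle(distorted).chunk(2, dim=1)

    # Step 2: distortion transfer -- impose the extracted distortion on the
    # clean reference to synthesise a supervised (distorted, clean) pair.
    synth_distorted = (clean_ref + distortion).clamp(0, 1).detach()

    # Step 3: train the restoration network on the generated pair.
    for _ in range(100):
        opt.zero_grad()
        loss = (restore(synth_distorted) - clean_ref).abs().mean()
        loss.backward()
        opt.step()

    # Inference: pass the original distorted image through the trained net.
    restored = restore(distorted)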