Name of the Speaker : Praveen Kandula (EE17D026)
Name of the Guide: Dr. Rajagopalan A. N.
Date/Time: 29th September 2022, 3:00 pm
Satellite images are typically subject to multiple photometric distortions arising from changes in atmosphere, surface reflectance, sun illumination, viewing geometry, etc. Supervised restoration networks rest on the strong assumption that paired training data are available. Consequently, many unsupervised algorithms have been proposed to address this problem: they synthetically generate a large dataset of degraded images using image formation models, and a neural network is then trained with an adversarial loss to discriminate between images from the distorted and clean domains. However, such methods yield suboptimal performance when tested on real images that do not necessarily conform to the generation mechanism. They also require a large amount of training data and are rendered unsuitable when only a few images are available. To address these issues, we propose a distortion disentanglement and knowledge distillation framework for satellite image restoration. Our algorithm requires only two images: the distorted satellite image to be restored and a reference image with similar semantics. Ablation studies show that the proposed mechanism successfully disentangles distortion. Exhaustive experiments on different timestamps of Google-Earth images and on the publicly available LEVIR-CD and SZTAKI datasets show that our method can tackle a variety of distortions and outperforms existing state-of-the-art restoration methods both visually and on quantitative metrics.
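As a point of reference, the adversarial objective used by the unsupervised methods discussed above can be sketched as follows. This is an illustrative toy example, not the authors' code: a discriminator is trained with a binary cross-entropy loss to separate clean reference images from restored ones; the function name, labelling convention, and inputs (discriminator scores in [0, 1]) are all assumptions.

```python
import numpy as np

def bce_adversarial_loss(d_clean, d_restored, eps=1e-8):
    """Discriminator loss: clean images are labelled 1, restored images 0.

    d_clean, d_restored: arrays of discriminator scores in (0, 1).
    """
    loss_clean = -np.mean(np.log(d_clean + eps))           # push D(clean) toward 1
    loss_fake = -np.mean(np.log(1.0 - d_restored + eps))   # push D(restored) toward 0
    return loss_clean + loss_fake
```

A discriminator that scores clean images high and restored images low incurs a small loss, which is exactly the signal the restoration generator is trained to defeat.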
Further, we propose an end-to-end deep network for handling geometric distortions in images. CMOS sensors in hand-held cameras employ a row-wise acquisition mechanism while imaging a scene, which can result in undesired geometric distortions, known as rolling shutter (RS) distortions, in the captured image. Existing single-image RS rectification methods account for these distortions using either algorithms tailored to a specific class of scenes, which require knowledge of intrinsic camera parameters, or learning-based frameworks that need ground-truth motion parameters. In contrast, our end-to-end network tackles the challenging task of single-image RS rectification without these requirements. It consists of a motion block, a trajectory module, a row block, an RS rectification module, and an RS regeneration module (used only during training). The motion block predicts the camera pose for every row of the input RS-distorted image, while the trajectory module fits the estimated motion parameters to a third-order polynomial. Finally, the RS rectification module uses the motion trajectory and the output of the row block to warp the input RS image into a distortion-free image. Experiments on synthetic and real datasets reveal that our network outperforms prior art both qualitatively and quantitatively.
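The core idea of the trajectory module — smoothing noisy per-row motion estimates with a third-order polynomial before warping — can be sketched in a simplified form. This is a minimal illustration under strong assumptions (a 1-D horizontal shift per row standing in for the full camera pose, and integer row shifts for the warp); the function names and the toy RS model are hypothetical, not the actual network.

```python
import numpy as np

def fit_row_trajectory(row_motion, degree=3):
    """Fit a third-order polynomial to noisy per-row motion estimates."""
    rows = np.arange(len(row_motion))
    coeffs = np.polyfit(rows, row_motion, degree)
    return np.polyval(coeffs, rows)  # smoothed motion value per row

def rectify_rows(image, row_shift):
    """Undo a per-row horizontal shift (a toy rolling-shutter model)."""
    out = np.empty_like(image)
    for r in range(image.shape[0]):
        out[r] = np.roll(image[r], -int(round(row_shift[r])))
    return out
```

Fitting a low-order polynomial regularizes the per-row estimates, reflecting the assumption that camera motion during a single exposure is smooth.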