DATE: March 22, 2017 (Wednesday)
TIME: 2 PM to 3 PM
VENUE: ESB 244
SPEAKER: Abhijith Punnappurath (EE10D038)
GUIDE: A.N. Rajagopalan
DC MEMBERS: Dr. B. Srikrishna (Chairman)
Dr. R. Aravind
Dr. S. Umesh
Dr. Sukhendu Das (CSE)
Dr. Manivannan (AM)
ABSTRACT
Traditional algorithms designed for the tasks of dynamic object segmentation (DOS) and super-resolution (SR) typically assume that the camera is stationary during the exposure duration of an image. However, this assumption is routinely violated by hand-held cameras, which have now become ubiquitous. Motion of the camera during exposure produces distortions (motion blur in CCD cameras and the rolling shutter (RS) effect in CMOS cameras) that conventional algorithms are unequipped to handle. In this talk, we look at two problems – DOS from CCD cameras and SR from CMOS cameras – both under the challenging scenario where the camera is free to move during exposure time.
First, we tackle the issue of detecting moving objects from a single space-variantly blurred image of a 3D scene captured using a hand-held CCD camera. We train a convolutional neural network to predict the composite kernel which is the convolution of motion and defocus kernels at each pixel in the image. Based on the defocus component, we segment the image into different depth layers. We then judiciously exploit the motion component present in the composite kernels to automatically and unambiguously segment dynamic objects at each depth layer. Next, we undertake a detailed analysis of the hitherto unexplored topic of multi-image SR in CMOS cameras. We initially develop an SR observation model that accounts for the row-wise RS distortions in images captured using non-stationary CMOS cameras. We then propose a unified RS-SR framework to obtain an RS-free high resolution image (and the row-wise motion) from distorted low resolution images.
ALL ARE CORDIALLY INVITED.