
Motion Deblurring For Rolling Shutter and Light Field Cameras

February 20, 2020 @ 2:00 pm - 3:00 pm

Date: 20.02.2020 (Thursday)


Time: 2 p.m.


Venue: ESB 244


Speaker: Mahesh Mohan M R (EE14D023)


Guide: Dr A. N. Rajagopalan


DC Members:

Dr Devendra Jalihal (Chairperson)

Dr Aravind R (EE)

Dr Arun Pachaikannu (EE)

Dr Chandra Sekhar (CSE)



Most present-day imaging devices are equipped with CMOS sensors, and motion blur is a common artifact in hand-held cameras. Because CMOS sensors mostly employ a rolling shutter (RS), the motion deblurring problem takes on a new dimension. Although a few works have recently addressed this problem, they suffer from many constraints, including heavy computational cost, the need for precise sensor information, and an inability to deal with wide-angle systems (which most cell-phone and drone cameras are) or irregular camera trajectories. In the first part of the talk, we propose a model for RS blind motion deblurring that mitigates these issues significantly. Comprehensive comparisons with state-of-the-art methods reveal that our approach not only exhibits significant computational gains and unconstrained functionality but also leads to improved deblurring performance.
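To see why a rolling shutter complicates motion deblurring, consider a toy simulation (a sketch of my own, not the speaker's model): each sensor row reads out at a different time, so each row integrates a different segment of the camera trajectory and the effective blur varies from row to row. The exposure fraction and the purely horizontal trajectory below are assumptions for illustration only.

```python
import numpy as np

def rs_motion_blur(img, shifts_px, samples=16):
    """Toy rolling-shutter blur: row r is exposed around its own readout
    time t_r = r / H, so different rows see different motion segments.

    img:       2D grayscale image (H x W)
    shifts_px: function t -> horizontal shift in pixels at normalized time t
    """
    H, W = img.shape
    out = np.zeros_like(img, dtype=float)
    exposure = 0.2  # exposure length as a fraction of frame time (assumed)
    for r in range(H):
        t_r = r / H  # row-dependent readout time: the RS-specific ingredient
        acc = np.zeros(W)
        for s in np.linspace(t_r, t_r + exposure, samples):
            # circular horizontal shift stands in for camera motion
            acc += np.roll(img[r], int(round(shifts_px(s))))
        out[r] = acc / samples
    return out

# Example: linear camera motion of up to ~8 px over the frame time
blurred = rs_motion_blur(np.eye(64), lambda t: 8.0 * t)
```

A global-shutter model would use a single exposure window for all rows; here the per-row window `[t_r, t_r + exposure]` is exactly what makes the blur spatially varying.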


In the second part of the talk, we deal with motion deblurring for light field (LF) cameras. The state-of-the-art method for blind deblurring of LFs of general 3D scenes is limited to handling only downsampled LFs, in both spatial and angular resolution. This is due to the computational overhead of processing the data-hungry, full-resolution 4D LF in its entirety. Moreover, the method requires high-end GPUs for optimization and is ineffective for wide-angle settings and irregular camera motion. To address this, we introduce a new blind motion deblurring strategy for LFs that alleviates these limitations significantly. Our model achieves this by isolating the 4D LF motion blur across the 2D subaperture images, thus paving the way for independent deblurring of these subaperture images. Furthermore, our model accommodates a common camera-motion parameterization across the subaperture images. Consequently, blind deblurring of any single subaperture image elegantly enables cost-effective non-blind deblurring of the remaining subaperture images. Our approach is computationally efficient on CPUs and can effectively deblur full-resolution LFs.
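The shared-motion idea can be illustrated with a toy sketch (my own assumptions throughout: a spatially uniform circular blur, a known horizontal kernel standing in for the blind-estimation step, and Wiener deconvolution standing in for the actual non-blind solver): since the camera motion is common to all subaperture views, a kernel recovered from one view can be reused to deblur every other view cheaply.

```python
import numpy as np

def wiener_deblur(blurred, kernel, nsr=1e-2):
    """Non-blind Wiener deconvolution in the frequency domain."""
    K = np.fft.fft2(kernel, s=blurred.shape)
    B = np.fft.fft2(blurred)
    return np.real(np.fft.ifft2(np.conj(K) * B / (np.abs(K) ** 2 + nsr)))

# Toy LF: a few subaperture views blurred by the SAME motion kernel,
# mirroring the observation that camera motion is shared across views.
rng = np.random.default_rng(0)
views = [rng.random((32, 32)) for _ in range(4)]
kernel = np.zeros((32, 32))
kernel[0, :5] = 1 / 5  # 5-tap horizontal motion blur (assumed)
blurred_views = [np.real(np.fft.ifft2(np.fft.fft2(v) * np.fft.fft2(kernel)))
                 for v in views]

# Pretend the kernel was recovered blindly from ONE view; the remaining
# views are then deblurred non-blindly with the same shared kernel.
restored = [wiener_deblur(b, kernel) for b in blurred_views]
```

The cost saving is that the expensive blind step runs once, while the per-view work reduces to a cheap non-blind deconvolution with the shared kernel.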

