Abstract: Motion blur is an inevitable byproduct of imaging dynamic scenes, yet cost-efficient approaches to modeling and correcting it remain relatively underexplored. As a step in that direction, we present data-efficient, physically grounded methods for modeling, synthesizing, and mitigating motion blur in real-world scenarios.

First, we address realistic space-variant blur synthesis in the absence of ground-truth semantic annotations. We propose a pseudo-mask-guided framework that leverages zero-shot segmentation foundation models to generate object-centric masks from arbitrary images, enabling spatially varying, object-aware blur synthesis. We demonstrate that pseudo masks are a viable alternative to ground-truth annotations and improve robustness in downstream vision tasks.

Second, we introduce DaMBA, a depth-aware motion blur synthesis framework that incorporates scene geometry via depth foundation models. By simulating camera trajectories and incorporating per-pixel depth, DaMBA generates physically consistent, space-variant blur, overcoming the limitations of prior space-invariant approximations. We validate its realism through distribution-level comparisons with real-world datasets and show improved generalization across scene understanding tasks.

Finally, we investigate deblurring under a weakly supervised paradigm. Rather than relying on paired sharp images, we use camera trajectories and depth maps as supervision by modeling the blur formation process. This reblurring-based training bridges the gap between supervised and unsupervised methods, achieving performance comparable to partially supervised approaches while substantially outperforming unsupervised baselines.

Together, these contributions underscore the value of geometry-aware, annotation-efficient approaches for robust vision under motion blur.
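To give a flavor of the depth-aware blur model described above, the following is a minimal sketch of geometry-aware blur synthesis: a sharp frame is warped along a sampled camera trajectory, with each pixel's displacement scaled by its inverse depth (nearer scene points sweep farther across the sensor), and the warped frames are averaged. All function and parameter names here are illustrative assumptions, not DaMBA's actual API.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def depth_aware_blur(image, depth, trajectory):
    """Average warped frames along a camera trajectory.

    image:      2-D grayscale array (sharp frame).
    depth:      per-pixel depth map, same shape as `image`.
    trajectory: list of (dx, dy) camera translations, in pixel
                units at unit depth.

    Per-pixel displacement is dx/depth, dy/depth, so nearby
    objects receive larger, spatially varying blur.
    """
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    inv_depth = 1.0 / np.maximum(depth, 1e-6)  # guard against zero depth
    acc = np.zeros((h, w), dtype=np.float64)
    for dx, dy in trajectory:
        # Bilinearly sample the sharp image at depth-scaled offsets.
        acc += map_coordinates(
            image,
            [ys + dy * inv_depth, xs + dx * inv_depth],
            order=1, mode="nearest",
        )
    return acc / len(trajectory)
```

A degenerate trajectory of a single zero translation reproduces the sharp image, while larger translations over a near/far depth boundary yield the space-variant blur that space-invariant kernels cannot capture.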

Event Details
Title: Improving Performance in Data-Constrained Settings for Computer Vision
Date: April 10, 2026 at 03:00 PM
Venue: ESB 234 (Malaviya Hall) / Google Meet (https://meet.google.com/fpb-zcfg-yqt)
Speaker: Ms. Aakanksha (EE18D405)
Guide: Dr. Rajagopalan A N
Type: PhD seminar
