PhD Seminar


Name of the Speaker: Ms. Lakshmi Jayalal (EE19D751)
Guide: Dr. Sheetal Kalyani
Online meeting link: http://meet.google.com/brp-mbsx-zdt
Date/Time: 5th December 2025 (Friday), 11:15 AM
Title: Tuning-Free Online Robust Principal Component Analysis through Implicit Regularization

Abstract:

Online Robust Principal Component Analysis (OR-PCA) is a powerful technique for identifying low-dimensional subspaces within high-dimensional data streams that are corrupted by sparse outliers. It finds critical applications in areas ranging from video surveillance to dynamic network monitoring. However, the performance of standard OR-PCA algorithms depends heavily on careful tuning of explicit regularization parameters. In real-world settings where the data are non-stationary and arrive sequentially, the grid searches or cross-validation needed to tune these parameters are computationally prohibitive and often impractical.
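
To make the role of these parameters concrete, one widely used OR-PCA formulation (shown here only as an illustration; the exact objective considered in the talk may differ) factorizes the low-rank component and penalizes the outliers with an l1 term, so that lambda_1 and lambda_2 are the explicit regularization parameters that must be tuned:

    \min_{L, R, E} \; \tfrac{1}{2} \| Z - L R^{\top} - E \|_F^2
      + \tfrac{\lambda_1}{2} \left( \| L \|_F^2 + \| R \|_F^2 \right)
      + \lambda_2 \| E \|_1

where Z stacks the observed data vectors, L R^T is the low-rank part, and E collects the sparse outliers.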

This seminar addresses this bottleneck by proposing a novel "Tuning-Free OR-PCA" (TF-ORPCA) framework. In this work, we demonstrate that the dependence on explicitly tuned regularization parameters can be removed by exploiting the implicit bias of early-stopped, modified gradient descent. The proposed approach decomposes the OR-PCA problem into three sub-problems and applies a tailored Implicit Regularization (IR) strategy to each, estimating the sparse outliers and the low-dimensional representations in a streaming setting; this constitutes a non-trivial extension of existing IR techniques. A key novelty lies in the design of a new parameterization for the matrix estimation step of OR-PCA. Experimental results on synthetic and real-world video datasets demonstrate that the proposed TF-ORPCA outperforms existing OR-PCA methods, and its tuning-free design makes it more scalable to large datasets.
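
As a rough illustration of the implicit-regularization idea (a generic sketch of early-stopped gradient descent with a Hadamard-product parameterization for sparse estimation, not the speaker's TF-ORPCA algorithm; all names and constants below are illustrative):

    import numpy as np

    # Generic illustration of implicit regularization for sparse estimation:
    # parameterize the outlier vector as e = u*u - v*v and run plain gradient
    # descent on the unregularized least-squares loss 0.5*||e - z||^2 from a
    # small initialization. Early stopping, rather than an explicit l1 penalty,
    # keeps the estimate sparse.
    rng = np.random.default_rng(0)
    n = 200
    e_true = np.zeros(n)
    support = np.sort(rng.choice(n, size=10, replace=False))
    e_true[support] = rng.uniform(3.0, 6.0, size=10) * rng.choice([-1.0, 1.0], size=10)
    z = e_true + 0.01 * rng.standard_normal(n)   # observation: sparse signal + small noise

    alpha, lr, n_steps = 1e-3, 0.05, 500         # small init + early stopping = implicit bias
    u = alpha * np.ones(n)
    v = alpha * np.ones(n)
    for _ in range(n_steps):
        r = (u * u - v * v) - z                  # residual; gradient of the loss w.r.t. e
        u -= lr * 2.0 * u * r                    # chain rule through e = u*u - v*v
        v += lr * 2.0 * v * r
    e_hat = u * u - v * v

    print("true support:     ", support)
    print("recovered support:", np.flatnonzero(np.abs(e_hat) > 1.0))
    print("relative error:   ", np.linalg.norm(e_hat - e_true) / np.linalg.norm(e_true))

In the sketch, coordinates aligned with large entries of z grow quickly under gradient descent while the rest stay near the tiny initialization, so stopping early yields a sparse estimate without any tuned penalty; the abstract describes an analogous strategy, applied per sub-problem and combined with a new matrix parameterization, in the streaming OR-PCA setting.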