Speaker: Sreejith K (EE12D032)
Recovering or estimating sparse vectors in high-dimensional linear regression models is a fundamental problem in machine learning and signal processing. Most sparse recovery algorithms proposed in the literature assume a priori knowledge of the noise variance; however, such information is rarely available in practice, and estimating the noise variance in high-dimensional regression models is extremely difficult. In this seminar, I present a novel framework called residual ratio thresholding (RRT) that allows existing sparse recovery algorithms such as OMP and LASSO to operate without knowledge of the noise variance. I discuss both finite-sample and large-sample guarantees for the proposed framework. These analytical results indicate that the performance of OMP, LASSO, and related algorithms under the proposed framework is comparable to their performance with a priori knowledge of the noise variance. Numerical simulations on real and synthetic data sets further demonstrate the superiority of RRT over techniques such as cross validation and information-theoretic criteria, which are currently used to operate OMP, LASSO, and similar algorithms in the absence of noise statistics.
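The residual ratio idea can be illustrated with a minimal sketch. The toy problem, dimensions, and amplitudes below are illustrative assumptions, and the selection rule is a simplification: the actual RRT framework uses a data-dependent threshold derived from Beta-distribution quantiles, which is replaced here by simply picking the OMP iteration with the smallest residual ratio.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy problem: n samples, p features, k0-sparse vector.
n, p, k0 = 100, 50, 3
X = rng.standard_normal((n, p))
X /= np.linalg.norm(X, axis=0)            # unit-norm columns
beta = np.zeros(p)
support_true = rng.choice(p, size=k0, replace=False)
beta[support_true] = 5.0
y = X @ beta + 0.5 * rng.standard_normal(n)

def omp_residual_norms(X, y, kmax):
    """Run OMP for kmax steps; return per-step supports and residual norms."""
    support, supports = [], []
    norms = [np.linalg.norm(y)]
    r = y.copy()
    for _ in range(kmax):
        j = int(np.argmax(np.abs(X.T @ r)))   # most correlated column
        if j not in support:
            support.append(j)
        Xs = X[:, support]
        coef, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        r = y - Xs @ coef                     # residual after refit
        supports.append(list(support))
        norms.append(np.linalg.norm(r))
    return supports, np.array(norms)

kmax = 10
supports, norms = omp_residual_norms(X, y, kmax)
ratios = norms[1:] / norms[:-1]           # RR(k) = ||r_k|| / ||r_{k-1}||
k_hat = int(np.argmin(ratios)) + 1        # simplified rule: smallest ratio
support_hat = set(supports[k_hat - 1])
```

The residual norm drops sharply at the iteration where the last true nonzero enters the support and then flattens, so the residual ratio dips at the true sparsity level; RRT formalizes this intuition with a threshold that needs no noise-variance estimate.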
All are cordially invited.