| PhD Viva


Name of the Speaker: Ms. Sriprabha R (EE19D013)
Guide: Prof. Mohanasankar S
Online meeting link: https://meet.google.com/kkc-yfms-zua
Date/Time: 19th June 2024 (Wednesday), 10:00 AM
Title: Dynamic Weight Prediction and Meta-learning Methods for Multiple Acquisition Context-based Magnetic Resonance Image Reconstruction

Abstract:

Magnetic Resonance Imaging (MRI) is a versatile medical imaging modality, with applications ranging from the diagnosis of simple injuries to chronic illnesses such as cancer. Despite its versatility, MRI often remains a secondary choice owing to its lengthy scan durations. Scans can be shortened by undersampling k-space, but sub-Nyquist sampling introduces aliasing artifacts in the acquired image, so a diagnostic-quality image must be reconstructed. Conventional approaches such as compressive sensing (CS-MRI) rely on non-linear optimization solvers with iterative computations and repeated hyperparameter tuning, resulting in long reconstruction times. Moreover, MRI offers diverse and complementary views of the organ of interest to aid radiologists in diagnosis.
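A toy 1-D sketch of the aliasing problem described above (illustrative only, not the thesis code): keeping every other k-space line violates the Nyquist criterion, and the zero-filled reconstruction folds a ghost copy of the object onto itself.

```python
import numpy as np

n = 128
x = np.zeros(n)
x[30:40] = 1.0                      # a simple "object" in image space

k = np.fft.fft(x)                   # fully sampled k-space
mask = np.zeros(n)
mask[::2] = 1.0                     # keep every other k-space line (R = 2)
x_alias = np.fft.ifft(k * mask).real

# The zero-filled reconstruction equals the object averaged with a
# copy of itself shifted by n/2: a classic aliasing ghost.
ghost = x / 2 + np.roll(x, n // 2) / 2
assert np.allclose(x_alias, ghost, atol=1e-8)
```

Removing the ghost from such undersampled data — ideally faster and more robustly than iterative CS-MRI solvers — is precisely the reconstruction problem the thesis addresses.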

Recent research has seen a surge in deep learning models for imaging tasks, aiming at efficient MR imaging workflows at every stage, from acquisition to patient-specific longitudinal analysis. Our research aims to develop robust deep learning techniques that both accelerate reconstruction and enhance its quality. However, contemporary deep learning methods require retraining for each acquisition setting, a substantial barrier for clinicians and radiologists who wish to configure imaging workflows but lack the resources or machine learning expertise to train models at their workstations. Consequently, our objectives extend to identifying clinical scenarios involving data diversity, designing adaptive models that tune themselves to different acquisition settings, and making these models applicable to diverse MR imaging tasks.

We have developed improved adaptive reconstruction methods based on dynamic weight prediction at the architecture level and enhanced meta-learning methods at the optimization level, achieving the representation capacity needed to adapt to varying acquisition contexts without retraining or loss of quality. The scenarios considered include multiple acquisition contexts combining multiple anatomies under study, undersampling patterns, acceleration factors, coil configurations, and contrasts in MRI reconstruction. Our analyses encompass various meta-learning variants, configurations of acquisition contexts, task-level and instance-level model abstractions, explainable model methodologies, and frequency-domain perspectives, incorporating both supervised and physics-driven self-supervised learning approaches.
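To make the meta-learning idea concrete, here is a minimal first-order MAML-style sketch on toy linear-regression "tasks" standing in for different acquisition contexts (an illustration under simplifying assumptions, not the thesis model): the meta-parameters are trained so that a single inner gradient step adapts them to a new task.

```python
import numpy as np

rng = np.random.default_rng(0)

def task_data(slope, n=20):
    """Sample a toy regression task y = slope * x (one 'context')."""
    x = rng.normal(size=n)
    return x, slope * x

def grad(w, x, y):
    """Gradient of the mean squared error for the model y ≈ w * x."""
    return 2.0 * np.mean((w * x - y) * x)

w, alpha, beta = 0.0, 0.1, 0.05     # meta-init, inner and outer step sizes
for _ in range(500):
    slope = rng.choice([1.0, 2.0, 3.0])      # sample a task/context
    xs, ys = task_data(slope)                # support set
    w_task = w - alpha * grad(w, xs, ys)     # fast inner-loop adaptation
    xq, yq = task_data(slope)                # query set
    w = w - beta * grad(w_task, xq, yq)      # first-order meta-update
```

After meta-training, `w` sits near the center of the task distribution, so one inner step on a handful of samples moves it toward any held-out context — the same principle, at toy scale, as adapting a reconstruction network to a new acquisition setting without full retraining.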

Extensive experimentation validates our adaptive methods, demonstrating significant reductions in retraining requirements alongside improvements in reconstruction quality compared to previous deep learning models, adaptive methods, and baseline meta-learning models. Our code is publicly available in a shared repository. We are also extending our techniques to clinical scenarios, particularly dynamic contrast-enhanced MRI synthesis.