PhD Seminar


Name of the Speaker: Ms. Sriprabha R (EE19D013)
Guide: Prof. Mohanasankar Sivaprakasam
Venue: Online
Online meeting link: https://meet.google.com/iup-bwhg-rfz
Date/Time: 8th November 2023 (Wednesday), 3:00 PM
Title: Meta-learning for Multimodal Magnetic Resonance Image Reconstruction

Abstract

Magnetic Resonance Imaging (MRI) is a versatile medical imaging modality enabling a wide range of applications in neurological, spine, cardiac, musculoskeletal, and soft tissue diagnosis, extending to chronic and degenerative conditions such as cancer. However, MRI involves long scan times, necessitating faster and more effective clinical protocols for better patient care. Recently, deep learning has bolstered the medical imaging domain with promising performance, especially in tasks like image reconstruction from under-sampled MRI scans. However, under the heterogeneous (multimodal) data scenario of multi-contrast MRI, conventional deep learning models have to be trained separately for each acquisition setting (or acquisition context), posing a barrier to the thousands of clinicians who often lack the resources or machine learning expertise to train deep learning models at clinical workstations.

The focus of this talk is acquisition-context-adaptive multimodal MRI reconstruction across varying contrast levels, using meta-learning at both the architecture and optimization levels. At the architecture level, we use adaptive weight-prediction hypernetworks to modulate the features of the image reconstruction network, effectively exploiting contextual knowledge of the image features that are common and those that vary across acquisition contexts, and generalizing to unseen configurations at test time. At the optimization level, we use gradient-based meta-learning to update the weights of the hypernetwork so that it learns discriminative, mode-specific features of multimodal MR images.
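To make the two levels concrete, below is a minimal, illustrative sketch in NumPy. It assumes (since the abstract does not specify implementation details) that the architecture-level component is a FiLM-like hypernetwork predicting channel-wise scale and shift from an acquisition-context descriptor, and that the optimization-level component is a first-order, MAML-style meta-update over acquisition contexts. All names, shapes, and the quadratic stand-in loss are hypothetical placeholders, not the speaker's actual reconstruction network or losses.

```python
import numpy as np

rng = np.random.default_rng(0)
C, H, F = 4, 16, 8      # context-descriptor dim, hidden dim, feature channels

def init_hypernet():
    """Small weight-prediction hypernetwork: context descriptor -> scale/shift."""
    return {"in": 0.1 * rng.normal(size=(C, H)),
            "scale": 0.1 * rng.normal(size=(H, F)),
            "shift": 0.1 * rng.normal(size=(H, F))}

def modulate(features, context, W):
    """FiLM-style modulation of reconstruction-network features by the
    hypernetwork's predicted channel-wise scale and shift."""
    h = np.tanh(context @ W["in"])
    scale, shift = 1.0 + h @ W["scale"], h @ W["shift"]
    return features * scale[None, :] + shift[None, :]

def loss(W, batch):
    """Quadratic stand-in for a reconstruction loss on one context's batch."""
    features, context, target = batch
    return float(np.mean((modulate(features, context, W) - target) ** 2))

def grad(W, batch, eps=1e-4):
    """Finite-difference gradient w.r.t. hypernetwork weights
    (a real implementation would use automatic differentiation)."""
    g = {}
    for k, v in W.items():
        gk = np.zeros_like(v)
        for i in range(v.size):
            d = np.zeros_like(v); d.flat[i] = eps
            gk.flat[i] = (loss({**W, k: v + d}, batch)
                          - loss({**W, k: v - d}, batch)) / (2 * eps)
        g[k] = gk
    return g

def meta_step(W, contexts, inner_lr=0.05, outer_lr=0.01):
    """One first-order MAML-style outer step: adapt to each acquisition context
    on its support split, then update the shared weights using query-split
    gradients evaluated at the adapted parameters."""
    meta_g = {k: np.zeros_like(v) for k, v in W.items()}
    for support, query in contexts:
        g_in = grad(W, support)
        W_adapt = {k: W[k] - inner_lr * g_in[k] for k in W}   # inner adaptation
        g_out = grad(W_adapt, query)                          # outer signal
        for k in W:
            meta_g[k] += g_out[k]
    return {k: W[k] - outer_lr * meta_g[k] / len(contexts) for k in W}

# Toy usage: two acquisition contexts, each with a support and a query batch.
def toy_batch(ctx_vec):
    return rng.normal(size=(2, F)), ctx_vec, rng.normal(size=(2, F))

ctx_a, ctx_b = np.array([1., 0., 0., 8.]), np.array([0., 1., 0., 12.])
contexts = [(toy_batch(ctx_a), toy_batch(ctx_a)),
            (toy_batch(ctx_b), toy_batch(ctx_b))]
W = meta_step(init_hypernet(), contexts)
```

In practice, the inner- and outer-loop gradients would come from automatic differentiation over the full reconstruction network and hypernetwork, rather than the finite-difference stand-in used in this sketch.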

Extensive experimentation on scalability shows that the proposed meta-learning approach exhibits better adaptive capabilities for multimodal MRI reconstruction than baseline meta-learning methods and conventional learning, without compromising reconstruction quality. (i) In multi-coil reconstruction, the architecture-based meta-learning model adapts on the fly to various unseen coil configurations of up to 32 coils when trained on fewer, randomly varying coils (7 to 11), and to 120 unseen, deviated configurations when trained on 18 configurations in a single model. (ii) In multi-contrast MRI, the gradient-based meta-learning model adapts to 80% and 92% of unseen multi-contrast data contexts, with improvement margins of 0.1 to 0.5 dB in PSNR and around 0.01 in SSIM (structural similarity index measure).