Abstract: Generative modeling frameworks have emerged as an effective approach for capturing high-dimensional image distributions from large datasets without requiring domain-specific knowledge, a capability essential for disease progression modeling. Recent generative approaches have attempted to capture progression by mapping images to a latent space and guiding the representations to generate follow-up images from previous time points. However, these methods impose constraints on distribution learning, resulting in latent spaces with limited controllability for generating follow-up images without paired subject-specific longitudinal guidance.

To enable controlled movements in the latent representational space and to generate progression images from a previous time-point image without subject-specific guidance, we introduce a conditionable Diffusion Auto-encoder framework that forms a compact latent space capturing high-level semantics and provides a means to control generation. Our approach leverages this latent space to condition and apply controlled shifts to the representations of previous time-point images, isolating progression information from subject identity to generate follow-up images. The shifts are implicitly guided by correlating them with progression attributes and constraining them to Alzheimer's disease-specific regions, without paired longitudinal guidance. We validate the generated images through image quality metrics, volumetric progression analysis, and downstream tasks on Alzheimer's disease datasets from different sources, demonstrating the effectiveness of our approach for Alzheimer's progression modeling and longitudinal image generation.
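The conditioning mechanism can be illustrated with a minimal sketch (assuming PyTorch; the module names SemanticEncoder and ProgressionShift, and the scalar time-gap conditioning, are hypothetical illustrations, not the actual AD-DAE implementation): a semantic encoder maps a baseline scan to a compact latent, and a small conditioning network applies a controlled shift toward the follow-up representation.

    import torch
    import torch.nn as nn

    class SemanticEncoder(nn.Module):
        """Toy stand-in for the semantic encoder of a diffusion auto-encoder:
        maps an image to a compact latent capturing high-level semantics."""
        def __init__(self, latent_dim=128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(32, latent_dim),
            )

        def forward(self, x):
            return self.net(x)

    class ProgressionShift(nn.Module):
        """Predicts a controlled shift of the subject latent, conditioned on a
        progression attribute (e.g., time gap or disease stage), so that
        identity stays in z while progression is injected via the shift."""
        def __init__(self, latent_dim=128, cond_dim=1):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(latent_dim + cond_dim, 256), nn.ReLU(),
                nn.Linear(256, latent_dim),
            )

        def forward(self, z, cond):
            return z + self.net(torch.cat([z, cond], dim=-1))

    # Usage: shift the latent of a baseline scan toward a follow-up latent.
    encoder = SemanticEncoder()
    shifter = ProgressionShift()
    baseline = torch.randn(4, 1, 64, 64)   # batch of baseline MRI slices (toy data)
    time_gap = torch.full((4, 1), 2.0)     # e.g., two years of progression
    z_follow_up = shifter(encoder(baseline), time_gap)
    print(z_follow_up.shape)               # torch.Size([4, 128])

In the full framework, the shifted latent would condition the diffusion decoder to synthesize the follow-up MRI; the decoder and the losses that correlate shifts with progression attributes and constrain them to disease-specific regions are omitted here for brevity.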

Event Details
Title: AD-DAE: Alzheimer’s Disease Progression Modeling with Unpaired Longitudinal MRI using Diffusion Auto-Encoders
Date: January 22, 2026 at 12:15 PM
Venue: Online (https://meet.google.com/wvz-zkyi-gjr)
Speaker: Ms. Ayantika Das (EE19D422)
Guide: Dr. Mohanasankar Sivaprakasam
Type: PhD seminar
