Keywords: Generative Models; Diffusion Models; Massive MIMO-OFDM; Channel Estimation; DPM-Solver-2M; CSI Prediction; Channel Aging; Spatio-Temporal Graph Neural Networks; CSI Feedback; Dictionary-Guided Cross-Attention; Entropy Modeling; Hyperprior

Abstract: The forthcoming 6G era will couple large-scale antenna arrays with stringent latency and bandwidth constraints, making timely and compact Channel State Information (CSI) indispensable for beamforming, scheduling, and link adaptation. This thesis develops a deep-learning toolkit that targets the full CSI lifecycle (estimation, prediction, and feedback) using deep generative models, dynamic graph neural networks, and learnable-dictionary-aware neural compression, respectively.

First, we explore pilot-aided channel estimation through the lens of diffusion generative priors. While discrete reverse diffusion delivers high fidelity, its sampling loop is too slow for practical latency-constrained receivers. We recast the reverse process as an ordinary differential equation (ODE) and employ a higher-order multistep integrator (herein called "DPM-Solver-2M") on a lightweight predictor, bridging discrete and continuous schedules without retraining. The resulting estimator preserves the relevant performance metrics while substantially reducing the number of function evaluations, yielding 4× lower inference latency with deterministic runtimes suited to PHY pipelines.

Next, we address the channel-aging problem in high-mobility Frequency Division Duplex (FDD) systems. Channel aging is a well-known issue in high-mobility conditions: the time-varying nature of the channel leaves the CSI outdated between estimation and utilization. Treating CSI as a graph-structured multivariate time series, we propose an evolving-relational attention network (ERAN), a graph neural network that learns a time-varying adjacency via gated updates. This dynamic graph forecasting captures mobility-induced dependency shifts across antennas and subcarriers, improving future-CSI quality and sustaining high spectral efficiency at higher mobility while reducing error accumulation across prediction horizons compared with static-graph and transformer-based baseline models.

Finally, to compress downlink CSI for feedback, we move beyond "internal-prior-only" hyperpriors by introducing a dictionary-guided cross-attention (DCA) entropy model. A compact, shared dictionary supplies external, environment-level prototypes that partially decoded latents query to refine their distribution parameters. Coupled with hyperprior and autoregressive context, the DCA module tightens entropy estimates and achieves lower bitrates at comparable fidelity across standard indoor/outdoor datasets. Here, bitrate denotes the number of bits that must be transmitted for one CSI feedback report after entropy coding; since the encoder produces variable-length codes, we report the average bits per CSI report over the test set. A lower bitrate at the same reconstruction quality, e.g., the same Normalized Mean Squared Error (NMSE), translates directly into lower uplink overhead.

Future work will extend the continuous-time ODE-based estimator with SNR-aware adaptive step-size and time-grid selection to improve the accuracy-latency tradeoff without manual tuning. We will also study data-prediction (rather than noise-prediction) parameterizations to reduce error accumulation, and jointly optimize pilot design and the ODE sampler in an end-to-end differentiable loop.
Incorporating wideband angular-delay priors, multi-cell cooperation, and uncertainty quantification (e.g., ensembles or score-variance proxies) is another direction, enabling confidence-aware adaptation and robust fallback strategies. On the prediction and compression side, we will develop an end-to-end graph pipeline that jointly learns CSI compression and forecasting, so that the learned dynamics inform both coding and mobility-aware prediction, and we will evaluate scalability to multi-user, cell-edge, and Coordinated Multi-Point (CoMP) scenarios with evolving interference graphs. Finally, we will explore scenario-conditioned dictionary learning to improve the cross-scenario robustness of the entropy model, and hardware-friendly implementations of entropy coding/decoding using bit-exact integer arithmetic for practical deployment.

In conclusion, this thesis unifies three advances: fast diffusion-ODE channel estimation, dynamic Graph Neural Network (GNN)-based CSI prediction to mitigate channel aging, and dictionary-guided neural entropy coding for feedback, improving CSI handling end-to-end. Simulations and analytical modeling demonstrate that the proposed techniques consistently achieve lower latency, higher spectral efficiency, and more bandwidth-efficient feedback in realistic massive MIMO settings. Brief illustrative sketches of the three core mechanisms are given below the abstract.
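
Sketch 1 (diffusion-ODE estimation). To make the sampling loop concrete, the following is a minimal, hedged sketch of a second-order multistep (DPM-Solver-2M-style) reverse-ODE update under a variance-preserving noise schedule. The noise network eps_model, its pilot-conditioning argument y_pilots, and the schedule constants are illustrative assumptions, not the thesis implementation.

```python
# Minimal sketch of a DPM-Solver-2M-style sampler for diffusion-based channel estimation.
# eps_model, y_pilots, and the VP-schedule helpers below are hypothetical placeholders.
import torch

def log_snr_half(t, beta_min=0.1, beta_max=20.0):
    """Return (lambda_t, alpha_t, sigma_t) for an assumed VP schedule, lambda = log(alpha/sigma)."""
    log_alpha = -0.25 * t ** 2 * (beta_max - beta_min) - 0.5 * t * beta_min
    alpha = torch.exp(log_alpha)
    sigma = torch.sqrt(torch.clamp(1.0 - alpha ** 2, min=1e-12))
    return torch.log(alpha / sigma), alpha, sigma

@torch.no_grad()
def dpm_solver_2m(eps_model, x_T, y_pilots, t_grid):
    """
    eps_model(x, t, y_pilots) -> predicted noise (placeholder signature).
    t_grid: decreasing times, e.g. torch.linspace(1.0, 1e-3, 11) for 10 steps.
    """
    x = x_T
    lam_prev, _, _ = log_snr_half(t_grid[0])
    eps_prev, h_prev = None, None
    for i in range(1, len(t_grid)):
        t_prev, t_cur = t_grid[i - 1], t_grid[i]
        lam_cur, alpha_cur, sigma_cur = log_snr_half(t_cur)
        _, alpha_prev, _ = log_snr_half(t_prev)
        eps_cur = eps_model(x, t_prev, y_pilots)   # one network call per step
        h = lam_cur - lam_prev                     # step size in log-SNR
        if eps_prev is None:
            D = eps_cur                            # first step: first-order (DDIM-like)
        else:
            r = h_prev / h                         # multistep second-order correction
            D = (1.0 + 1.0 / (2.0 * r)) * eps_cur - (1.0 / (2.0 * r)) * eps_prev
        x = (alpha_cur / alpha_prev) * x - sigma_cur * torch.expm1(h) * D
        eps_prev, h_prev, lam_prev = eps_cur, h, lam_cur
    return x  # denoised sample, i.e., the channel estimate
```

Each step spends a single network evaluation while retaining second-order accuracy, which is how reducing the number of function evaluations translates into lower inference latency.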
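
Sketch 2 (dynamic-graph CSI prediction). The abstract describes ERAN only at a high level, so the sketch below shows one plausible reading of "learns a time-varying adjacency via gated updates": an attention-derived adjacency blended with the previous adjacency through a learned per-node gate, followed by a graph propagation step. All module names, shapes, and the initialization are assumptions.

```python
# Illustrative gated, time-varying adjacency update in the spirit of the ERAN predictor.
import torch
import torch.nn as nn

class GatedDynamicGraphLayer(nn.Module):
    """Node features are per antenna-subcarrier CSI embeddings, shape (B, N, d)."""
    def __init__(self, d_model: int):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.gate = nn.Sequential(nn.Linear(2 * d_model, 1), nn.Sigmoid())
        self.msg = nn.Linear(d_model, d_model)

    def forward(self, x_t, a_prev):
        # x_t: (B, N, d) current CSI embeddings; a_prev: (B, N, N) previous adjacency.
        scores = torch.softmax(
            self.q(x_t) @ self.k(x_t).transpose(1, 2) / x_t.size(-1) ** 0.5, dim=-1
        )                                             # attention-derived adjacency
        ctx = torch.cat([x_t, scores @ x_t], dim=-1)  # (B, N, 2d) node + neighborhood context
        g = self.gate(ctx)                            # (B, N, 1) gate in (0, 1)
        a_t = g * scores + (1.0 - g) * a_prev         # gated adjacency update
        h = torch.relu(x_t + a_t @ self.msg(x_t))     # graph propagation step
        return h, a_t

# Usage sketch: roll the layer over past CSI frames, then feed h into a forecasting head.
layer = GatedDynamicGraphLayer(d_model=64)
B, N, T = 2, 128, 8                                   # batch, nodes, history length (assumed)
a = torch.full((B, N, N), 1.0 / N)                    # uniform initial adjacency
for t in range(T):
    x_t = torch.randn(B, N, 64)                       # stand-in for embedded CSI at time t
    h, a = layer(x_t, a)
```

The gate lets the relational structure evolve quickly under high mobility and stay nearly static otherwise, which is the behavior the dynamic-graph forecaster is meant to capture.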
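
Sketch 3 (dictionary-guided entropy model). Finally, a hedged sketch of how a dictionary-guided cross-attention entropy head could let partially decoded latent context query a compact shared dictionary and combine the retrieved prototypes with hyperprior features to refine per-element Gaussian parameters. The layer sizes, fusion rule, and names are illustrative assumptions.

```python
# Illustrative DCA entropy-model head: decoded-latent context queries a small shared
# dictionary of prototypes, and the result refines the Gaussian (mu, sigma) parameters.
import torch
import torch.nn as nn

class DCAEntropyHead(nn.Module):
    def __init__(self, d_latent: int = 192, n_proto: int = 64, d_proto: int = 128):
        super().__init__()
        self.dictionary = nn.Parameter(torch.randn(n_proto, d_proto))  # shared external prototypes
        self.q = nn.Linear(d_latent, d_proto)                          # queries from latent-side context
        self.to_params = nn.Linear(d_latent + d_proto, 2 * d_latent)   # -> (mu, log_sigma)

    def forward(self, ctx, hyper):
        # ctx:   (B, L, d_latent) causal context from already-decoded latents
        # hyper: (B, L, d_latent) features decoded from the hyperprior
        attn = torch.softmax(self.q(ctx) @ self.dictionary.T
                             / self.dictionary.size(-1) ** 0.5, dim=-1)  # (B, L, n_proto)
        proto = attn @ self.dictionary                    # retrieved environment-level prototypes
        mu, log_sigma = self.to_params(
            torch.cat([hyper, proto], dim=-1)).chunk(2, dim=-1)
        return mu, torch.exp(log_sigma)                   # per-element Gaussian parameters

# The (mu, sigma) pairs parameterize the conditional distribution used by the entropy coder;
# tighter estimates at the same reconstruction quality mean fewer bits per CSI report.
```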

Event Details
Title: Learning-Driven CSI Processing for Massive MIMO: Low-Latency Estimation, Adaptive Prediction, and Rate-Efficient Compression (PhD Viva Voce)
Date: February 19, 2026 at 4:00 PM
Venue: Google Meet (https://meet.google.com/jsd-irnf-gkg)
Speaker: Mr. Ravi Kumar (EE20D004)
Guide: Dr. Manivasakan R
Type: PhD Seminar
