PhD Viva


Name of the Speaker: Mr. Aggraj Gupta (EE18D033)
Guide: Dr. Uday K Khankhoje
Online meeting link: https://meet.google.com/jnp-szsy-hwj
Date/Time: 5th December 2024 (Thursday), 11:00 AM
Title: Neural Network Approaches for Inverse Design of Microstrip Antennas and Photonic Beamsplitters

Abstract:

In this talk, I'll share my journey of over six years working on the inverse design of RF and photonic structures. Designing devices with advanced functionalities typically requires many time-consuming and computationally expensive iterations of electromagnetic (EM) simulations. Rather than relying on traditional workflows built around commercial full-wave solvers, this research develops fast-to-evaluate surrogate solvers using data-driven techniques. The talk concentrates on inverse design methods for antennas in the RF/mm-Wave domain and beam splitters in the optical regime, and examines general practices for designing efficient, compact antennas and beam splitters for next-generation devices. Although the talk primarily revolves around antenna and beam-splitter design, the approach extends to other devices such as power dividers/combiners, impedance transformers, and filters.

I will introduce a novel, fully automated "tandem network" approach for designing multi-band microstrip patch antennas and discuss the challenges and limitations of data-driven inverse design. This work demonstrates one way to optimize over a binary design space with a neural network, which is non-trivial because discrete variables do not directly admit the gradient-based backpropagation used to train such networks. We'll also explore how leveraging pre-trained networks can significantly improve computational efficiency and reduce the training data required when solving new but related problems. A minimal sketch of the tandem idea follows.
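To make the tandem idea concrete, here is a minimal PyTorch sketch, not the thesis's actual architecture: a pre-trained forward surrogate (design parameters to spectrum) is frozen, and an inverse network is trained so that its proposed designs reproduce a target spectrum. All names (ForwardModel, InverseModel), layer sizes, and dimensions are illustrative assumptions; the binary design space is handled here by a simple sigmoid relaxation.

```python
# Tandem-network sketch for inverse design (illustrative, assumed setup).
import torch
import torch.nn as nn

class ForwardModel(nn.Module):
    """Surrogate EM solver: design parameters -> predicted spectrum."""
    def __init__(self, n_design, n_spectrum):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_design, 128), nn.ReLU(),
            nn.Linear(128, n_spectrum),
        )
    def forward(self, x):
        return self.net(x)

class InverseModel(nn.Module):
    """Inverse network: target spectrum -> design parameters in [0, 1]."""
    def __init__(self, n_spectrum, n_design):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_spectrum, 128), nn.ReLU(),
            nn.Linear(128, n_design), nn.Sigmoid(),  # relaxed binary design
        )
    def forward(self, s):
        return self.net(s)

n_design, n_spectrum = 64, 32
fwd = ForwardModel(n_design, n_spectrum)   # assume pre-trained on EM data
inv = InverseModel(n_spectrum, n_design)

for p in fwd.parameters():                 # freeze the forward surrogate
    p.requires_grad = False

opt = torch.optim.Adam(inv.parameters(), lr=1e-3)
target = torch.rand(8, n_spectrum)         # placeholder target spectra

for step in range(100):
    design = inv(target)                   # propose candidate designs
    pred = fwd(design)                     # predict their spectra
    loss = nn.functional.mse_loss(pred, target)
    opt.zero_grad()
    loss.backward()                        # gradients flow through frozen fwd
    opt.step()
```

Freezing the forward surrogate sidesteps the one-to-many ambiguity of inverse problems: the inverse network only has to find some design whose predicted response matches the target.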

Lastly, I’ll present the relatively new concept of "Neural Operator" learning, which, unlike traditional "Neural Network" learning, maps entire functions rather than individual data points. Traditional neural networks learn a relationship between inputs and outputs sampled at a fixed discretization, while neural operators generalize across varying input sizes and domains. This means a neural operator can solve problems across different spatial resolutions or mesh sizes, making it more flexible and efficient for tasks like solving partial differential equations, with no retraining needed for each new mesh or configuration.
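As a rough illustration of where this resolution independence comes from (a generic Fourier-layer construction in the spirit of neural operators, not the speaker's specific model), the sketch below learns weights on a fixed number of Fourier modes, so the same parameters apply to inputs sampled at any resolution. The class name, channel count, and mode count are all illustrative assumptions.

```python
# Spectral-layer sketch illustrating discretization invariance (assumed setup).
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    """Learns weights on a fixed set of low Fourier modes; because the
    parameters live in frequency space, the layer accepts inputs sampled
    on grids of any size."""
    def __init__(self, channels, n_modes):
        super().__init__()
        self.n_modes = n_modes
        scale = 1.0 / channels
        self.weight = nn.Parameter(
            scale * torch.randn(channels, channels, n_modes, dtype=torch.cfloat)
        )

    def forward(self, x):                  # x: (batch, channels, n_points)
        x_ft = torch.fft.rfft(x)           # transform to Fourier space
        out_ft = torch.zeros_like(x_ft)
        k = min(self.n_modes, x_ft.size(-1))
        out_ft[..., :k] = torch.einsum(    # mix channels on the kept modes
            "bci,coi->boi", x_ft[..., :k], self.weight[..., :k]
        )
        return torch.fft.irfft(out_ft, n=x.size(-1))  # back to physical space

layer = SpectralConv1d(channels=4, n_modes=8)
coarse = torch.randn(2, 4, 64)             # 64-point discretization
fine = torch.randn(2, 4, 256)              # 256-point discretization
print(layer(coarse).shape, layer(fine).shape)  # same weights, both grids work
```

Because only the first few Fourier coefficients carry learned weights, the same trained layer evaluates on the 64-point and 256-point grids alike, which is the property that lets neural operators skip retraining for each new mesh.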