PhD Viva


Name of the Speaker: Ms. Nancy Nayak (EE17D408)
Guide: Dr. Sheetal Kalyani
Venue: Online
Online meeting link: https://meet.google.com/qfv-cfci-yie
Date/Time: 1st July 2024, 6:30 PM
Title: Energy-efficient Machine Learning Methods for Green Communication Systems

Abstract:

The forthcoming sixth generation (6G) of communication systems is expected to integrate Artificial Intelligence (AI) and Machine Learning (ML) seamlessly into its framework, enabling intelligent network management, resource allocation, and optimization. These technologies allow the network to adapt dynamically to evolving conditions and user requirements, improving overall performance and efficiency while reducing latency. However, the exponential growth of network infrastructure and connected devices in 6G will drive a surge in energy costs, making the development of green communications increasingly important and urgent. Energy-efficient design principles and technologies will therefore be essential for sustainable and cost-effective 6G networks. In this talk, we explore energy-efficient ML techniques for green wireless communication systems.

Online learning methods receive feedback from the environment in the form of samples and use this feedback to update the model iteratively, improving its performance over time. We first propose an online learning framework for centralized collaborative spectrum sensing in IoT networks, leveraging cognitive radio networks. The framework combines individual sensing results according to the past performance of the detectors. We further introduce a strategy that selectively enables sensing at the detectors, extending device lifetime in the field without compromising accuracy.

Next, we investigate full-duplex transmission scenarios using a proposed two-stage Deep Reinforcement Learning (DRL) approach. DRL can also be viewed as a form of online learning, since the agent learns directly from sequential interactions with the environment and updates its policy after each experience. Owing to the neural network architectures involved, DRL methods offer exceptional function approximation capability and have significantly advanced 6G network optimization. Instead of solving several smaller sub-problems, a DRL-based method can address the larger problem based only on feedback from the network, reducing communication overhead.

Deep models are well known for their function approximation capability, but a model is over-parameterized for a task when it has more parameters than necessary to capture the complexity of the data, which increases computational requirements, lengthens training, and raises the risk of overfitting. Sparsifying deep networks is therefore necessary to improve efficiency, speed, generalization, interpretability, and energy efficiency. In the final part, we propose a novel activation function that makes models intrinsically sparse without explicit regularizers. We introduce the concept of rotating the ReLU activation function, which improves parameter and filter representation capability while significantly reducing memory and computation requirements. The approach achieves better accuracy than state-of-the-art regularization-based methods across various datasets and network architectures.

In summary, this work leverages advances in ML to enhance the sparsity and energy efficiency of modern communication systems.
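
To illustrate the collaborative sensing idea, the following is a minimal sketch, not the thesis' exact algorithm, of combining detector decisions with an exponential-weights rule in which detectors that have performed well in the past receive larger weights; the function and parameter names (fuse_and_update, eta) are illustrative assumptions.

    import numpy as np

    def fuse_and_update(decisions, weights, outcome=None, eta=0.5):
        """Weighted vote over binary sensing decisions, with a Hedge-style weight update."""
        # Fuse: weighted majority of the detectors' 0/1 decisions.
        fused = int(np.dot(weights, decisions) / weights.sum() >= 0.5)
        if outcome is not None:
            # Down-weight detectors that disagreed with the eventual outcome.
            losses = (decisions != outcome).astype(float)
            weights = weights * np.exp(-eta * losses)
            weights = weights / weights.sum()
        return fused, weights

Selective sensing for energy saving could then, for example, activate only the detectors carrying the largest current weights in each round.

The rotated-ReLU contribution can be pictured with a similarly hedged sketch: assuming the rotation amounts to a learnable per-channel slope on the active region of the ReLU (an assumed parameterization, not necessarily the exact formulation in the thesis), channels whose learned slope shrinks toward zero contribute nothing and can be pruned, giving sparsity without an explicit regularizer.

    import torch
    import torch.nn as nn

    class RotatedReLU(nn.Module):
        """Illustrative sketch: ReLU with a learnable per-channel slope."""
        def __init__(self, num_channels):
            super().__init__()
            # One learnable slope per channel (assumption made for illustration).
            self.slope = nn.Parameter(torch.ones(num_channels))

        def forward(self, x):
            # x has shape (N, C, H, W); slopes near zero make the channel prunable.
            return self.slope.view(1, -1, 1, 1) * torch.relu(x)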