Abstract: This short course will give a broad survey of information theory topics, starting from the simplest point-to-point communication networks, working toward small multi-terminal networks such as multiple access and broadcast systems, and then discussing methods for generalizing information-theoretic tools to derive results for very large networks. Topics covered will include capacities, source coding bounds, and unifying themes in the tools used to derive them.
Background: 1. Comfort with probability. 2. Elements of Information Theory, Cover & Thomas, 2nd Edition: Chapters 2 (Entropy, Relative Entropy, and Mutual Information), 3 (Asymptotic Equipartition Property), 7 (Channel Capacity), 8 (Differential Entropy), and 9 (Gaussian Channel).
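As a small taste of the Chapter 2 background material, here is a minimal sketch (not part of the course materials) of computing Shannon entropy for a discrete distribution:

```python
import math

def entropy(p):
    """Shannon entropy H(X) = -sum_i p_i * log2(p_i) of a distribution, in bits."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

# A fair coin carries exactly one bit of entropy;
# a biased coin is more predictable and carries less.
print(entropy([0.5, 0.5]))  # 1.0
print(entropy([0.9, 0.1]))  # ~0.469
```

The guard `pi > 0` uses the standard convention 0 log 0 = 0.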
Abstract: This course will cover some of the fundamental gradient-based (or first-order) algorithms and results in optimization theory, with a focus on provably efficient methods. The choice of gradient-based methods is due to their wide use in machine learning as well as other applications. We will use some classical machine learning problems as running examples throughout the course to illustrate the performance of the various algorithms presented.
Background: Basic knowledge of linear algebra (matrix norms, singular values, etc.) and analysis (derivatives, norms, and dual norms). Everything else that is required will be covered in the course. Appendix A of Boyd and Vandenberghe's book covers these basics.
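To illustrate the kind of first-order method the course covers, here is a minimal sketch of plain gradient descent on a one-dimensional quadratic (the function, step size, and iteration count are illustrative choices, not from the course):

```python
def gradient_descent(grad, x0, step=0.1, iters=100):
    """Plain gradient descent: x_{k+1} = x_k - step * grad(x_k)."""
    x = x0
    for _ in range(iters):
        x = x - step * grad(x)
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2(x - 3); the minimum is at x = 3.
x_star = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(x_star)  # ~3.0
```

On this smooth, strongly convex example the iterates contract toward the minimizer at a geometric rate; convergence guarantees of this kind are among the results the course develops.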