We study “self-learning” networks, i.e., models that learn in an unsupervised, “self-supervised” way without the help of an explicit teacher. These models are neurobiologically inspired and are usually self-organizing, dynamic, recurrent, or auto-encoding networks. We examine the principles of neural learning algorithms through historical models such as Willshaw-von der Malsburg feature maps, Linsker's models, Kohonen's self-organizing maps, Grossberg's models, recurrent networks, Anderson's brain-state-in-a-box, actor-critic networks, Hopfield's associative memory, Boltzmann machines, and deep belief networks. We study mathematical tools for the approximation and optimization of these learning models, including information-theoretic methods, such as maximum entropy, mutual information, and KL divergence, as well as statistical-mechanical methods, such as Markov chains, the Metropolis algorithm, Gibbs sampling, and simulated annealing. We also examine neurodynamic models of self-supervised, end-to-end learning for challenging problems such as time-series prediction and reconstruction. These include Markov decision processes, approximate dynamic programming, reinforcement learning, sequential Bayesian estimation, Kalman filtering, particle filtering, real-time recurrent learning, and dynamic reconstruction of chaotic processes.
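
As a concrete taste of one of these learning rules, the sketch below implements Hebbian-based PCA via Oja's rule (the Ch. 8 topic in the schedule); the synthetic data, learning rate, and epoch count are illustrative assumptions rather than course code.

```python
import numpy as np

# Minimal sketch of Hebbian-based PCA (Oja's rule): a single linear neuron
# y = w.x whose weight vector converges to the first principal component.
rng = np.random.default_rng(0)

# Synthetic 2-D data whose dominant variance lies along the (1, 1) direction.
theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
X = rng.normal(size=(1000, 2)) * np.array([2.0, 0.5])  # anisotropic variance
X = X @ R.T                                            # rotate by 45 degrees

w = rng.normal(size=2)   # random initial synaptic weights
eta = 0.01               # learning rate (illustrative choice)

for _ in range(20):                  # a few passes over the data
    for x in X:
        y = w @ x                    # neuron output
        w += eta * y * (x - y * w)   # Hebbian growth with Oja's decay term

print("learned direction:", w / np.linalg.norm(w))  # ~ (0.707, 0.707) up to sign
```

A quick sanity check is to compare the normalized weight vector with the leading eigenvector of the sample covariance, e.g., via `np.linalg.eigh(np.cov(X.T))`.
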
Week | Topics | Slides |
---|---|---|
1 | Learning in Neurodynamic Self-organizing Systems<br>Principal-Components Analysis (Ch. 8)<br>Hebbian-Based PCA (Ch. 8) | |
2 (9/12, 9/14) | Self-organizing Maps (Ch. 9) | |
3 (9/19, 9/21) | Information-Theoretic Learning Models (Ch. 10) | |
4 (9/26, 9/28) | Statistical-Mechanical Learning Methods (Ch. 11) | |
5 (10/3, 10/5) | Korean Thanksgiving Holiday | |
6 (10/10, 10/12) Make-up Class | Deep Neural Networks (Ch. 11) | (Same as Week 4) |
7 (10/17, 10/19) | Dynamic Programming (Ch. 12)<br>Problem-Solving Session by TA | |
8 (10/24, 10/26) | Summary (10/24)<br>Mid-term Exam (10/26) | |
9 (10/31, 11/2) | Dynamic Programming (Ch. 12) | (Same as Week 7) |
10 (11/7, 11/9) | Neurodynamic Models (Ch. 13) | |
11 (11/14, 11/16) Classroom Change | Bayesian Filtering (Ch. 14) | |
12 (11/21, 11/23) Make-up Class | Particle Filters (Ch. 14)<br>Dynamic Recurrent Networks (Ch. 15) | |
13 (11/28, 11/30) | Real-Time Recurrent Learning (Ch. 15)<br>Final Exam (11/30) | |
14 (12/5, 12/7) | Review and discussion | |