We study “self-learning” networks, i.e., models that learn in an unsupervised and self-supervised way, without the help of an explicit teacher. These models are neurobiologically inspired and are usually self-organizing, dynamic, recurrent, and autoencoding networks. We examine the principles of neural learning algorithms through historical models such as the Willshaw-von der Malsburg feature maps, Linsker's models, Kohonen's self-organizing maps, Grossberg's models, recurrent networks, Anderson's brain-state-in-a-box, actor-critic networks, Hopfield's associative memory, Boltzmann machines, and deep belief networks. We study mathematical tools for the approximation and optimization of neural learning models. These include information-theoretic algorithms, such as maximum entropy, mutual information, and Kullback-Leibler (KL) divergence, as well as statistical-mechanical methods, such as Markov chains, the Metropolis algorithm, Gibbs sampling, and simulated annealing. We also examine neurodynamic models of self-supervised, end-to-end learning for challenging problems such as time-series prediction and reconstruction. These include Markov decision processes, approximate dynamic programming, reinforcement learning, sequential Bayesian estimation, Kalman filtering, particle filtering, real-time recurrent learning, and the dynamic reconstruction of a chaotic process.
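As one example of the statistical-mechanical methods listed above, simulated annealing combines the Metropolis acceptance rule with a gradually decreasing temperature. The following is a minimal sketch, not course material; the function name, cost function, and all parameter values are illustrative:

```python
import math
import random

def simulated_annealing(cost, x0, step=0.5, t0=1.0, t_min=1e-3, alpha=0.95, seed=0):
    """Minimize `cost` over a scalar state using the Metropolis rule
    with a geometric cooling schedule (illustrative parameter values)."""
    rng = random.Random(seed)
    x, e = x0, cost(x0)
    best_x, best_e = x, e
    t = t0
    while t > t_min:
        # Propose a random neighbor of the current state.
        x_new = x + rng.uniform(-step, step)
        e_new = cost(x_new)
        # Metropolis rule: always accept improvements; accept a worse
        # state with probability exp(-dE / T), so the search can escape
        # local minima while the temperature is still high.
        if e_new < e or rng.random() < math.exp(-(e_new - e) / t):
            x, e = x_new, e_new
            if e < best_e:
                best_x, best_e = x, e
        t *= alpha  # cool down
    return best_x, best_e

# Toy usage: minimize a quadratic cost with its minimum at x = 2.
x, e = simulated_annealing(lambda x: (x - 2) ** 2, x0=5.0)
```

At high temperature the sampler behaves like a random walk over states; as the temperature is annealed toward zero, it degenerates into greedy descent, which is the same Metropolis machinery used in Gibbs-style sampling for Boltzmann machines.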
| Week | Topics | Slides |
|------|--------|--------|
| 1 | Learning in Neurodynamic Self-organizing Systems; Principal-Components Analysis (Ch. 8); Hebbian-Based PCA (Ch. 8) | |
| 2 (9/10, 9/12) | Self-organizing Maps (Ch. 9); Korean Thanksgiving Holiday | |
| 3 (9/17, 9/19) | Information-Theoretic Learning Models (Ch. 10) | |
| 4 (9/24, 9/26) | Statistical-Mechanical Learning Methods (Ch. 11) | |
| 5 (10/1, 10/3) | Deep Neural Networks (Ch. 11); National Foundation Day of Korea | |
| 6 (10/8, 10/10) | Deep Neural Networks (Ch. 11); Dynamic Programming (Ch. 12) | (Same as Week 5) |
| 7 (10/15, 10/17) | Summary (10/15); Midterm Exam (10/17) | |
| 8 (10/22, 10/24) | Dynamic Programming (Ch. 12) | |
| 9 (10/29, 10/31) | Dynamic Programming (Ch. 12) | (Same as Week 8) |
| 10 (11/5, 11/7) | Neurodynamic Models (Ch. 13) | |
| 11 (11/12, 11/14) | Bayesian Filtering (Ch. 14) | |
| 12 (11/19, 11/21) | Particle Filters (Ch. 14); Dynamic Recurrent Networks (Ch. 15) | |
| 13 (11/26, 11/28) | Real-Time Recurrent Learning (Ch. 15); Final Exam (11/28) | |
| 14 (12/3, 12/5) | Review and discussion | |