Week of | Topic |
Tue 8/30 | Introduction: from the neuron to the brain. Overview of the course. |
Tue 9/6 | Recap of statistical mechanics: ensembles, statistical entropy, the Ising model with nearest-neighbor interactions. |
Tue 9/13 | Ising model with long-range interactions and phase transitions. Mean field theory for the Ising model: formal derivation and applications. |
Tue 9/20 | Auto-associative memory: what it is and how to represent it with attractor neural networks. The Hebbian learning rule and its relation to synaptic plasticity. Memory as an Ising model: the Hopfield network (a minimal code sketch appears after this schedule). |
Tue 9/27 | Statistical mechanics of the Hopfield network. |
Wed 10/5 | Computation of the capacity of the Hopfield network using mean field theory. Absence of spurious retrieval states; phase diagram in the temperature-storage plane. |
Tue 10/11 | Recap of probability theory: conditional probabilities, Bayes' theorem with several examples. |
Tue 10/18 | Statistical inference. Supervised learning: the perceptron. |
Tue 10/25 | The AND, OR, and XOR functions. The Hebb rule. Neurobiological evidence for synaptic plasticity. |
Tue 11/1 | The perceptron learning rule; gradient-descent learning. Learning as inference: reinterpreting the learning rules using probability theory; making predictions. |
Tue 11/8 | Multi-layer networks: XOR function; error back-propagation algorithm. |
Thu 11/17 | Unsupervised learning: Principal Component Analysis, Oja's rule. |
Tue 11/22 | The brain as an anticipating machine: Boltzmann machines and Helmholtz machines learn probability distributions. The wake-sleep learning rule. |
Tue 11/29 | Reinforcement learning. The temporal credit assignment problem and its solution using temporal-difference learning. The actor-critic model. |
Tue 12/6 | Spiking neuron models of learning and memory. The stochastic neuron. Firing-rate models, population models. |
Tue 12/13 | Biological network models of working memory (Amit & Brunel 1997). Synaptic plasticity and stochastic learning (Amit & Fusi 1992, 1994; Fusi et al. 2000). Overview of current open problems in neuroscience: reinforcement learning, contextual decision-making, and more. |