The exam consists of a student project; the list of topics can be found below. Projects include a theoretical part and a numerical part. Each item in the list below contains a few article and book references on a specific problem, plus a coding problem. The topics are a mixture of classic results from the neuroscience literature that we could not discuss in class due to time constraints, and advanced research articles on recent or currently open problems whose theoretical background was discussed in class. The student will prepare a one-hour blackboard seminar on one topic, including theory and simulation parts, to be held after the end of the semester, on Wednesday mornings at 11:30 at the Simons Center for Geometry and Physics.

*Hopfield network. Theory part: Solution of the Hopfield network using the replica method; phase diagram and stability analysis. Numerical part: Write a code that implements the Hopfield network. Estimate how long it takes for the network to retrieve a stored pattern. Estimate numerically the value of the critical capacity and compare the numerical result with the analytic value computed in class. This topic may be split into 2 seminars by two different students.*

Refs:
- D. J. Amit, H. Gutfreund, H. Sompolinsky, *"Storing an Infinite Number of Patterns in a Spin-Glass Model of Neural Networks,"* Physical Review Letters 55 (1985) 1530-1533.
- D. J. Amit, H. Gutfreund, H. Sompolinsky, *"Spin Glass Models of Neural Networks."*
- Chapters III and XIII of: M. Mezard, G. Parisi, M. Virasoro, *"Spin Glass Theory and Beyond,"* World Scientific (1987).
- Chapter X of: J. Hertz, A. S. Krogh, R. G. Palmer, *"Introduction to the Theory of Neural Computation,"* Addison-Wesley (1991).
- D. J. Amit, *"Modeling Brain Functions: The world of attractor neural networks,"* Cambridge University Press (1989).
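
As a starting point for the numerical part, a minimal NumPy sketch of Hebbian storage and zero-temperature asynchronous retrieval might look as follows (the network size, pattern count, and noise level are illustrative choices, not prescribed by the project):

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 10                         # neurons and stored patterns (load alpha = 0.05)
xi = rng.choice([-1, 1], size=(P, N))  # random binary patterns
W = (xi.T @ xi) / N                    # Hebbian coupling matrix
np.fill_diagonal(W, 0)                 # no self-couplings

def retrieve(s, sweeps=20):
    """Zero-temperature asynchronous dynamics: align each spin with its local field."""
    s = s.copy()
    for _ in range(sweeps):
        for i in rng.permutation(N):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# corrupt 10% of the first pattern and let the network relax
probe = xi[0] * np.where(rng.random(N) < 0.1, -1, 1)
overlap = retrieve(probe) @ xi[0] / N  # overlap m close to 1 signals successful retrieval
```

Sweeping P upward at fixed N and recording the fraction of successful retrievals gives a numerical estimate of the critical capacity to compare with the replica result discussed in class.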

*Capacity of a perceptron (using the replica method).*

Refs:
- E. Gardner and B. Derrida, *"Optimal storage properties of neural network models,"* Journal of Physics A 21, 271-284 (1988).
- Chapters V and X of: J. Hertz, A. S. Krogh, R. G. Palmer, *"Introduction to the Theory of Neural Computation,"* Addison-Wesley (1991).
- Chapter 40 of: D. MacKay, *"Information Theory, Inference, and Learning Algorithms."*
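
A numerical companion to the Gardner calculation is to estimate the storage capacity directly: draw P random patterns with random labels and check whether the perceptron learning rule finds a separating weight vector. A small sketch, with illustrative sizes and trial counts:

```python
import numpy as np

rng = np.random.default_rng(1)

def separable(N, P, max_epochs=500):
    """Can the perceptron rule store P random labelled patterns in N inputs?"""
    xi = rng.choice([-1.0, 1.0], size=(P, N))   # random patterns
    sigma = rng.choice([-1.0, 1.0], size=P)     # random desired outputs
    w = np.zeros(N)
    for _ in range(max_epochs):
        errors = 0
        for mu in range(P):
            if sigma[mu] * (w @ xi[mu]) <= 0:   # misclassified (or on the boundary)
                w += sigma[mu] * xi[mu] / N     # perceptron learning rule
                errors += 1
        if errors == 0:
            return True
    return False

N = 50
frac_low = np.mean([separable(N, N) for _ in range(10)])       # load alpha = 1
frac_high = np.mean([separable(N, 3 * N) for _ in range(10)])  # load alpha = 3
```

Scanning the load alpha = P/N for increasing N shows the fraction of storable pattern sets sharpening into a step at the critical capacity alpha_c = 2 predicted by the replica calculation.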

*Independent Component Analysis (ICA) and Vision. Theory part: Unsupervised learning rule that implements ICA; main features of ICA. Numerical part: Write a code that implements the ICA of natural images as discussed in [Bell and Sejnowski, 1997].*

Refs:
- A. J. Bell and T. J. Sejnowski, *"The Independent Components of Natural Scenes are Edge Filters,"* Vision Research 37 (23): 3327-3338 (1997).
- Chapter 34 of: D. MacKay, *"Information Theory, Inference, and Learning Algorithms."*
- Chapter 10 of: S. Haykin, *"Neural Networks: A Comprehensive Foundation,"* Pearson Education, Singapore (1999).
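
Before tackling natural images, it can help to validate the Infomax/natural-gradient learning rule on a synthetic blind-source-separation problem. A sketch under illustrative assumptions (the mixing matrix, Laplacian source statistics, and learning schedule are all choices made here, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
n, T = 2, 5000
S = rng.laplace(size=(n, T))             # independent super-Gaussian sources
A = np.array([[1.0, 0.6], [0.4, 1.0]])   # mixing matrix (unknown to the algorithm)
X = A @ S                                # observed mixtures

# whiten the mixtures (standard ICA preprocessing)
X = X - X.mean(axis=1, keepdims=True)
e, U = np.linalg.eigh(np.cov(X))
Xw = U @ np.diag(e ** -0.5) @ U.T @ X

# natural-gradient Infomax: W <- W + eta (I - <g(y) y^T>) W, with g = tanh
W = np.eye(n)
eta = 0.05
for _ in range(2000):
    Y = W @ Xw
    W += eta * (np.eye(n) - (np.tanh(Y) @ Y.T) / T) @ W

Y = W @ Xw                               # recovered sources
# |correlation| between each recovered component and each true source
C = np.abs(np.corrcoef(np.vstack([Y, S]))[:n, n:])
```

Each row of `C` should have one entry close to 1: each output recovers one source up to sign and scale. Replacing the synthetic mixtures with whitened patches of natural images reproduces the edge-filter result of [Bell and Sejnowski, 1997].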

*A neural network model of working memory. Theory part: An attractor neural network that displays both global spontaneous activity and local delay activity as in [Amit and Brunel, 1997]. Numerical part: Write an attractor neural network that realizes this model. Reproduce the results in [Amit and Brunel, 1997] using the simplified IF neuron model and the full LIF neuron model. This topic may be divided into 2 seminars by two different students.*

Refs:
- D. J. Amit and N. Brunel, *"Model of global spontaneous activity and local structured activity during delay periods in the cerebral cortex,"* Cerebral Cortex 7 (3), 237-252 (1997).
- D. J. Amit and N. Brunel, *"Dynamics of a recurrent network of spiking neurons before and following learning,"* Network: Computation in Neural Systems 8 (4), 373-404 (1997).
- N. Brunel and V. Hakim, *"Fast Global Oscillations in Networks of Integrate-and-Fire Neurons with Low Firing Rates,"* Neural Computation 11 (7), 1621-1671 (1999).
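
The full Amit-Brunel network is a sizeable simulation; a sensible first building block is a single leaky integrate-and-fire neuron driven by a noisy current. A sketch with illustrative parameter values (not the ones from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
# single LIF neuron, forward-Euler integration of tau dV/dt = -V + mu + noise
tau, V_th, V_reset = 20.0, 20.0, 10.0   # membrane time constant (ms), threshold/reset (mV)
mu, sigma = 25.0, 5.0                   # mean drive (mV) and noise amplitude (mV)
dt, T = 0.1, 1000.0                     # time step and total duration (ms)

V, spike_times = 0.0, []
for k in range(int(T / dt)):
    V += dt / tau * (mu - V) + sigma * np.sqrt(dt / tau) * rng.normal()
    if V >= V_th:
        spike_times.append(k * dt)      # fire...
        V = V_reset                     # ...and reset

rate = len(spike_times) / (T / 1000.0)  # mean firing rate (Hz)
```

With this suprathreshold drive the deterministic inter-spike interval is tau * ln((mu - V_reset)/(mu - V_th)), roughly 22 ms here, i.e. about 45 Hz; it is the noise term that permits firing in the subthreshold, fluctuation-driven regime on which the Amit-Brunel spontaneous-activity state relies.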

*Reinforcement learning in populations of spiking neurons. Numerical part: Write a code that implements the results in [Urbanczik and Senn, 2009].*

Refs:
- R. Urbanczik and W. Senn, *"Reinforcement learning in populations of spiking neurons,"* Nature Neuroscience 12 (3), 250-252 (2009).
- J.-P. Pfister, T. Toyoizumi, D. Barber, and W. Gerstner, *"Optimal Spike-Timing-Dependent Plasticity for Precise Action Potential Firing in Supervised Learning,"* Neural Computation 19 (3), 639-671 (2007).
- J. Friedrich, R. Urbanczik, and W. Senn, *"Spatio-Temporal Credit Assignment in Neuronal Population Learning,"* PLoS Computational Biology 7 (6) (2011).
- R. J. Williams, *"Simple statistical gradient-following algorithms for connectionist reinforcement learning,"* Machine Learning 8 (3-4), 229-256 (1992).
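
A minimal instance of the REINFORCE idea underlying the population-learning papers can be sketched with stochastic binary neurons instead of full spiking dynamics (the task, population sizes, and learning schedule below are illustrative assumptions, not the setup of [Urbanczik and Senn, 2009]):

```python
import numpy as np

rng = np.random.default_rng(4)
# population of stochastic binary neurons trained by a scalar reward (Williams 1992)
N_in, N_out = 10, 5
x = np.append(rng.integers(0, 2, size=N_in - 1), 1.0)  # fixed input plus a bias unit
target = rng.integers(0, 2, size=N_out)                # desired output pattern
w = np.zeros((N_out, N_in))
eta, baseline = 0.1, 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(w @ x)))         # firing probabilities
    s = (rng.random(N_out) < p).astype(float)  # stochastic "spikes"
    r = np.mean(s == target)                   # scalar reward: fraction correct
    # REINFORCE: correlate reward fluctuations with firing fluctuations
    w += eta * (r - baseline) * np.outer(s - p, x)
    baseline += 0.05 * (r - baseline)          # running-average reward baseline
p = 1.0 / (1.0 + np.exp(-(w @ x)))             # learned firing probabilities
```

Each neuron receives only the global reward, yet the population learns the target pattern: the update is an unbiased estimate of the reward gradient, which is the common starting point of the references above.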

*Self-organizing maps: the Kohonen map.*

Refs:
- O. F. Jurjut, D. Nikolić, G. Pipa, W. Singer, D. Metzler, R. C. Mureşan, *"A color-based visualization technique for multielectrode spike trains,"* Journal of Neurophysiology 102 (6), 3766-3778 (2009).
- J. Vesanto, J. Himberg, E. Alhoniemi and J. Parhankangas, *"Self-organizing map in Matlab: the SOM Toolbox,"* Proceedings of the Matlab DSP Conference (1999).
- Chapter 9 of: S. Haykin, *"Neural Networks: A Comprehensive Foundation,"* Pearson Education, Singapore (1999).
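
A one-dimensional toy version of the Kohonen algorithm already shows the two essential ingredients, winner selection and a shrinking neighborhood function (all schedule constants below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
n_units, n_steps = 20, 5000
w = rng.random(n_units)                      # 1-D codebook, random initialization
for t in range(n_steps):
    x = rng.random()                         # input drawn uniformly from [0, 1]
    win = np.argmin(np.abs(w - x))           # best-matching ("winning") unit
    eta = 0.5 * (1 - t / n_steps) + 0.01     # decaying learning rate
    sig = 3.0 * (1 - t / n_steps) + 0.5      # shrinking neighborhood width
    h = np.exp(-0.5 * ((np.arange(n_units) - win) / sig) ** 2)
    w += eta * h * (x - w)                   # move the winner and its neighbors
```

After training, the chain of units unfolds into a (nearly) monotonically ordered covering of the input interval; the same mechanism in two dimensions produces the topographic maps used in the SOM Toolbox paper.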

*Boltzmann machines. Write a network that learns a probability distribution of inputs. Show numerically the difference between a Boltzmann machine with and without hidden units: detecting higher-order correlations in the input probability distribution or failing to do so.*

Refs:
- G. E. Hinton and T. J. Sejnowski, *"Learning and relearning in Boltzmann machines,"* in Parallel Distributed Processing, ed. by D. E. Rumelhart and J. L. McClelland, pp. 282-317, MIT Press (1986).
- Chapter 43 of: D. MacKay, *"Information Theory, Inference, and Learning Algorithms."*
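
To see the role of hidden units concretely, one can train tiny Boltzmann machines with the exact clamped-minus-free gradient, enumerating all states instead of Gibbs sampling. In the sketch below the target is the uniform distribution over the even-parity states of three ±1 units, a purely third-order structure; the sizes, learning rate, and initialization are illustrative choices:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(6)
# Target: uniform over the 4 even-parity states of 3 visible +/-1 units.
# Parity is a third-order correlation, so pairwise couplings among the
# visibles alone cannot represent it; hidden units can mediate it.
V = np.array(list(product([-1, 1], repeat=3)))
p_data = (V.prod(axis=1) == 1) / 4.0

def boltzmann(S, W, b):
    """Exact Boltzmann distribution over the enumerated states S."""
    logits = 0.5 * np.einsum('si,ij,sj->s', S, W, S) + S @ b
    p = np.exp(logits - logits.max())
    return p / p.sum()

def train(n_hidden, steps=5000, eta=0.1):
    n = 3 + n_hidden
    S = np.array(list(product([-1, 1], repeat=n)), dtype=float)
    W = 0.5 * rng.normal(size=(n, n)); W = (W + W.T) / 2
    np.fill_diagonal(W, 0)
    b = 0.5 * rng.normal(size=n)
    masks = [np.all(S[:, :3] == v, axis=1) for v in V]
    for _ in range(steps):
        p = boltzmann(S, W, b)
        cW, cb = np.zeros((n, n)), np.zeros(n)
        # clamped phase: visibles fixed to the data, hiddens marginalized
        for m, pv in zip(masks, p_data):
            q = p[m] / p[m].sum()                     # p(hidden | visible)
            cW += pv * np.einsum('s,si,sj->ij', q, S[m], S[m])
            cb += pv * (q @ S[m])
        # free phase: subtract the model's own correlations
        W += eta * (cW - np.einsum('s,si,sj->ij', p, S, S))
        np.fill_diagonal(W, 0)
        b += eta * (cb - p @ S)
    p = boltzmann(S, W, b)
    p_vis = np.array([p[m].sum() for m in masks])     # visible marginal
    return np.abs(p_vis - p_data).sum()               # L1 error vs target

err_visible_only = train(n_hidden=0)
err_with_hidden = train(n_hidden=3)
```

Without hidden units the exact maximum-likelihood fit can only match first- and second-order moments, which for this target are all zero, so training ends at the uniform distribution (L1 error near 1); with hidden units the gradient should be able to push the error below that, which is exactly the contrast the project asks to demonstrate.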


- Hertz, Krogh & Palmer, *Introduction to the theory of neural computation*. The Hopfield network and the supervised learning part of the class are partly based on this book.
- David MacKay, *Information Theory, Inference, and Learning Algorithms*. Learn while having fun! Fantastic book, and it's freely available online. The inference and the Boltzmann machine parts of the course are based on this book. It's going to become your favorite book; resistance is futile.
- Mezard, Parisi & Virasoro, *Spin glass theory and beyond*. Classic statistical mechanics book for the theory of spin glasses and the Hopfield network. Includes reprints of all the original papers, a must-read for everybody. I can't stress this enough: you will really enjoy reading the original papers. Back then, people put a lot of effort into writing clear papers...
- Amit, *Modeling brain functions*. An exhaustive treatment of attractor neural networks as of the early nineties, starting from the Hopfield model. For those looking for a more didactic approach than the previous book.
- Abbott & Dayan, *Theoretical neuroscience*. The main textbook to get acquainted with the field of neuroscience. You can't live without it.
- Trappenberg, *Fundamentals of computational neuroscience*. Textbook in computational neuroscience. Alongside the explanation of each topic you can find *Matlab* code examples to start your own neural network simulations. Super fun.
- Dalvit et al., *Problems on statistical mechanics*. Collection of exercises on statistical mechanics, with solutions. Very useful as a general recap; explains many cool tricks for computing partition functions.

Week of | Topic
---|---
Tue 8/30 | Introduction: from the neuron to the brain. Overview of the course.
Tue 9/6 | Recap of statistical mechanics: ensembles, statistical entropy, Ising model with nearest-neighbor interactions.
Tue 9/13 | Ising model with long-range interactions and phase transitions. Mean field theory for the Ising model: formal derivation and applications.
Tue 9/20 | Auto-associative memory: what it is; how to represent it using attractor neural networks. Hebbian learning rule and its relation to synaptic plasticity. Memory as an Ising model: the Hopfield network.
Tue 9/27 | Statistical mechanics of the Hopfield network.
Wed 10/5 | Computation of the capacity of the Hopfield network using mean field theory. Absence of spurious retrieval states; phase diagram in the temperature-storage plane.
Tue 10/11 | Recap of probability theory: conditional probabilities, Bayes' theorem with several examples.
Tue 10/18 | Statistical inference. Supervised learning: the perceptron.
Tue 10/25 | AND, OR and XOR functions. The Hebb rule. Neurobiological evidence for synaptic plasticity.
Tue 11/1 | The perceptron learning rule; gradient descent learning. Learning as inference: how to reinterpret the learning rules using probability theory; making predictions.
Tue 11/8 | Multi-layer networks: XOR function; error back-propagation algorithm.
Thu 11/17 | Unsupervised learning: Principal Component Analysis, Oja's rule.
Tue 11/22 | The brain as an anticipating machine: Boltzmann machines and Helmholtz machines learn probability distributions. Wake-and-sleep learning rule.
Tue 11/29 | Reinforcement learning. The temporal credit assignment problem and its solution using temporal difference learning. The actor/critic model.
Tue 12/6 | Spiking neuron models of learning and memory. The stochastic neuron. Firing rate models, population models.
Tue 12/13 | Biological network models of working memory (Amit-Brunel 97). Synaptic plasticity and stochastic learning (Amit-Fusi 92, 94; Fusi et al 2000). Overview of current open problems in neuroscience: reinforcement learning, contextual decision-making and more.