Introduction to Computational Neuroscience


This is a Ph.D.-level course in Computational Neuroscience. It is interdisciplinary in spirit, and students with either quantitative or biological backgrounds are welcome to attend.

Instructor: Luca Mazzucato. Email: lmazzuca at uoregon dot edu. Office hours by appointment.

Prerequisites: General physics, Calculus, Linear Algebra. Basic knowledge of Statistical Mechanics helps, but a self-contained primer will be presented. No previous experience with neural data is required.

Evaluation: Final grades will be based on three equally weighted components: homework assignments (exercises), class participation, and a final lecture/seminar given by each student based on one or more research papers on a special topic.

Course goals: We will describe the fundamental concepts and techniques of theoretical neuroscience. The course is interdisciplinary in spirit (just like neuroscience itself) and draws on mathematics, physics, neurobiology, computer science, and information theory. We will start from associative memory: simple time-independent associative-memory systems described by spin glasses, in which the model neuron is an Ising spin. Step by step we will add more realistic features to the model neuron, building up to the leaky integrate-and-fire model that is widely used in the current literature. We will work out many examples in full detail, with more emphasis on problem-solving strategy than on formal constructions. The goal of the course is to enable students to read research papers in computational and theoretical neuroscience and understand the basic questions and methods of the field. The computational tools explained in the course are broadly used in very diverse areas of research, from systems biology to high-frequency trading in finance. The course will end with a broad overview of the open problems in neuroscience and the most promising current research directions.
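
To give a concrete flavor of the endpoint of this progression, the sketch below simulates a single leaky integrate-and-fire neuron driven by a constant current. It is a minimal illustration, not course material: all parameter values (membrane time constant, threshold, input current, and so on) are hypothetical round numbers chosen only to produce regular firing.

```python
# Minimal LIF sketch (illustrative assumptions, not course-provided code):
# Euler integration of tau_m * dV/dt = -(V - V_rest) + R_m * I_ext,
# with a spike emitted and the voltage reset whenever V crosses threshold.

tau_m   = 20e-3    # membrane time constant (s)       -- hypothetical value
V_rest  = -70e-3   # resting potential (V)
V_th    = -50e-3   # spike threshold (V)
V_reset = -65e-3   # reset potential after a spike (V)
R_m     = 1e7      # membrane resistance (Ohm)
I_ext   = 2.5e-9   # constant external current (A)
dt, T   = 1e-4, 0.5   # integration step and total simulated time (s)

V = V_rest
spike_times = []
for step in range(int(T / dt)):
    V += dt * (-(V - V_rest) + R_m * I_ext) / tau_m   # leaky integration
    if V >= V_th:                                     # threshold crossing
        spike_times.append(step * dt)
        V = V_reset

print(f"{len(spike_times)} spikes in {T} s (mean rate {len(spike_times)/T:.1f} Hz)")
```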

Exam


The exam consists of a student project; the list of topics can be found below. Each project includes a theoretical part and a numerical part. Each item in the list below contains a few article and book references on a specific problem, plus a coding problem. The topics are a mixture of classic results from the neuroscience literature that we could not discuss in class due to time constraints, and advanced research articles on recent or current open problems whose theoretical background was discussed in class. The student will prepare a one-hour blackboard seminar on one topic, including theory and simulation parts, to be held after the end of the semester, on Wednesday mornings at 11:30 at the Simons Center for Geometry and Physics.

Exam seminar topics

  1. Hopfield network. Theory part: solution of the Hopfield network using the replica method, phase diagram, and stability analysis. Numerical part: write a code that implements the Hopfield network. Estimate how long it takes for the network to retrieve a stored pattern. Estimate numerically the value of the critical capacity and compare the numerical result with the analytic value computed in class (a minimal starting-point sketch appears after this list). This topic may be split into 2 seminars by two different students. Refs:
  2. Capacity of a perceptron (using the replica method). Refs:
  3. Independent Component Analysis (ICA) and vision. Theory part: the unsupervised learning rule that implements ICA and the main features of ICA. Numerical part: write a code that implements ICA of natural images as discussed in [Bell and Sejnowski, 1997] (a minimal sketch on synthetic data appears after this list). Refs:
  4. A neural network model of working memory. Theory part: An attractor neural network that displays both global spontaneous activity and local delay activity as in [Amit and Brunel, 1997]. Numerical part: Write an attractor neural network that realizes this model. Reproduce the results in [Amit and Brunel, 1997] using the simplified IF neuron model and the full LIF neuron model. This topic may be divided into 2 seminars by two different students. Refs:
  5. Reinforcement learning in populations of spiking neurons. Numerical part: Write a code that implements the results in [Urbanczik and Senn, 2009] Refs:
  6. Self-organizing maps: the Kohonen map. Refs:
  7. Boltzmann machines. Write a network that learns a probability distribution over inputs. Show numerically the difference between a Boltzmann machine with and without hidden units: detecting higher-order correlations in the input probability distribution or failing to do so (a minimal sketch of the fully visible case appears after this list). Refs:
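
For topic 1, the following is a minimal starting-point sketch, not the required solution: a binary Hopfield network with Hebbian weights and asynchronous zero-temperature dynamics, tested on retrieval from a corrupted cue. The network size N, number of stored patterns P, and corruption level are hypothetical choices well below the critical capacity.

```python
# Hopfield network sketch (illustrative assumptions): Hebbian storage and
# asynchronous retrieval of one corrupted pattern.
import numpy as np

rng = np.random.default_rng(0)
N, P = 500, 30                          # neurons and stored patterns (load alpha = P/N = 0.06)
patterns = rng.choice([-1, 1], size=(P, N))

W = (patterns.T @ patterns) / N         # Hebbian weight matrix
np.fill_diagonal(W, 0.0)                # no self-couplings

def retrieve(cue, n_sweeps=20):
    """Asynchronous zero-temperature dynamics: update one spin at a time."""
    s = cue.copy()
    for _ in range(n_sweeps):
        for i in rng.permutation(N):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Corrupt 10% of the first stored pattern and measure the retrieval overlap m
cue = patterns[0].copy()
flip = rng.choice(N, size=N // 10, replace=False)
cue[flip] *= -1
m = retrieve(cue) @ patterns[0] / N
print(f"overlap with the stored pattern after retrieval: m = {m:.3f}")
```

Estimating the critical capacity amounts to repeating this experiment while increasing P and locating the load at which the retrieval overlap collapses.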
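
For topic 3, a minimal sketch of the Infomax ICA learning rule on synthetic data (an assumption, not the Bell and Sejnowski natural-image pipeline): two independent super-Gaussian sources are mixed by a random matrix, and the unmixing matrix is learned with the natural-gradient update dW ~ (I + (1 - 2 g(u)) u^T) W, where u = Wx and g is the logistic function. The mixing matrix, learning rate, and batch size are arbitrary illustrative choices.

```python
# Infomax ICA sketch (illustrative assumptions): unmixing two Laplace sources.
import numpy as np

rng = np.random.default_rng(1)
n_samples = 20000

S = rng.laplace(size=(2, n_samples))    # two independent super-Gaussian sources
A = rng.normal(size=(2, 2))             # random mixing matrix
X = A @ S                               # observed mixtures

W = np.eye(2)                           # unmixing matrix to be learned
lr, batch = 0.01, 100
for epoch in range(50):
    for start in range(0, n_samples, batch):
        x = X[:, start:start + batch]
        u = W @ x
        y = 1.0 / (1.0 + np.exp(-u))    # logistic nonlinearity g(u)
        # natural-gradient Infomax update, averaged over the mini-batch
        W += lr * ((np.eye(2) + (1 - 2 * y) @ u.T / batch) @ W)

# Up to permutation and scaling of the rows, W @ A should be close to diagonal
print(np.round(W @ A, 2))
```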
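
For topic 7, a minimal sketch of a fully visible Boltzmann machine with +/-1 units (an assumption, and deliberately the simplest case): the couplings are trained with the standard rule dW ~ <s_i s_j>_data - <s_i s_j>_model, with the model average estimated by Gibbs sampling. A version with hidden units would add a second set of units sampled conditionally on the data in the positive phase; the toy data set and all hyperparameters below are hypothetical.

```python
# Fully visible Boltzmann machine sketch (illustrative assumptions):
# learn the pairwise statistics of a toy two-pattern data distribution.
import numpy as np

rng = np.random.default_rng(2)
n = 4                                            # number of visible units

# Toy data: two patterns with equal probability (means are zero, so biases are omitted)
data = np.array([[ 1,  1, -1, -1],
                 [-1, -1,  1,  1]], dtype=float)
data_corr = data.T @ data / len(data)            # <s_i s_j>_data

W = np.zeros((n, n))
lr = 0.05

def model_correlations(W, n_sweeps=5, n_chains=100):
    """Estimate <s_i s_j>_model by Gibbs sampling at temperature 1."""
    s = rng.choice([-1.0, 1.0], size=(n_chains, n))
    for _ in range(n_sweeps):
        for i in range(n):
            h = s @ W[:, i]                      # local field on unit i
            p = 1.0 / (1.0 + np.exp(-2.0 * h))   # P(s_i = +1 | rest)
            s[:, i] = np.where(rng.random(n_chains) < p, 1.0, -1.0)
    return s.T @ s / n_chains

for epoch in range(200):
    W += lr * (data_corr - model_correlations(W))
    np.fill_diagonal(W, 0.0)                     # no self-couplings

# Couplings within each pattern become positive, across patterns negative
print(np.round(W, 2))
```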

Schedule

Week of    Topic
Tue 8/30 Introduction: from the neuron to the brain. Overview of the course.
Tue 9/6 Recap of statistical mechanics: ensembles, statistical entropy, Ising model with nearest neighbor interactions.
Tue 9/13 Ising model with long range interactions and phase transitions. Mean field theory for the Ising model, formal derivation and applications.
Tue 9/20 Auto-associative memory: what it is; how to represent it using attractor neural networks. Hebbian learning rule and its relation to synaptic plasticity. Memory as an Ising model: the Hopfield network.
Tue 9/27 Statistical mechanics of the Hopfield network.
Wed 10/5 Computation of the capacity of the Hopfield network using mean field theory. Absence of spurious retrieval states; phase diagram in the temperature-storage plane.
Tue 10/11 Recap of probability theory: conditional probabilities, Bayes' theorem with several examples.
Tue 10/18 Statistical inference. Supervised learning: the perceptron.
Tue 10/25 AND, OR and XOR functions. The Hebb rule. Neurobiological evidence for synaptic plasticity.
Tue 11/1 The perceptron learning rule, gradient descent learning. Learning as inference: how to reinterpret the learning rules using probability theory; making predictions.
Tue 11/8 Multi-layer networks: XOR function; error back-propagation algorithm.
Thur 11/17 Unsupervised learning: Principal Component Analysis, Oja's rule.
Tue 11/22 The brain as an anticipating machine: Boltzmann machines and Helmholtz machines learn probability distributions. The wake-sleep learning rule.
Tue 11/29 Reinforcement learning. The temporal credit assignment problem and its solution using temporal difference learning. The actor/critic model.
Tue 12/6 Spiking neuron models of learning and memory. The stochastic neuron. Firing rate models, population models.
Tue 12/13 Biological network models of working memory (Amit-Brunel 97). Synaptic plasticity and stochastic learning (Amit-Fusi 92, 94; Fusi et al 2000). Overview of current open problems in Neuroscience: reinforcement learning, contextual decision-making and more.