Academic Commons

Doctoral Theses

Building theories of neural circuits with machine learning

Bittner, Sean Robert

As theoretical neuroscience has grown as a field, machine learning techniques have played an increasingly important role in the development and evaluation of theories of neural computation. Today, machine learning is used in a variety of neuroscientific contexts, from statistical inference to neural network training to normative modeling. This dissertation introduces machine learning techniques for use across these domains of theoretical neuroscience and applies them to build theories of neural circuits.

First, we introduce a variety of optimization techniques for normative modeling of neural activity, which were used to evaluate theories of primary motor cortex (M1) and the supplementary motor area (SMA). Specifically, neural responses during a cycling task performed by monkeys displayed distinctive dynamical geometries, which motivated hypotheses about how these geometries confer computational properties necessary for the robust production of cyclic movements. By using normative optimization techniques to predict neural responses that encode muscle activity while conforming to an “untangled” geometry, we found that minimal tangling is an accurate model of M1. Analyses with trajectory-constrained RNNs showed that this organization of M1 neural activity confers noise robustness, and that minimally “divergent” trajectories in SMA enable the tracking of contextual factors.
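The trajectory-tangling idea mentioned above can be sketched numerically: tangling is high when two moments with similar neural states have very different derivatives. The sketch below follows the standard definition Q(t) = max over t' of ||ẋ(t) − ẋ(t')||² / (||x(t) − x(t')||² + ε); the function name and the choice of ε scaling are illustrative assumptions, not the thesis's exact implementation.

```python
import numpy as np

def tangling(X, dt=1.0, eps=None):
    """Trajectory tangling Q(t) for a neural state trajectory.

    X : (T, N) array, neural state (e.g., firing rates or PCs) over time.
    dt : sampling interval used to estimate the time derivative.
    eps : small constant preventing division by zero; by convention it is
          scaled to the overall variance of the trajectory (assumed here).
    """
    Xdot = np.gradient(X, dt, axis=0)  # finite-difference time derivative
    if eps is None:
        eps = 0.1 * np.var(X, axis=0).sum()
    # Pairwise squared distances between states and between derivatives.
    dX = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    dXdot = ((Xdot[:, None, :] - Xdot[None, :, :]) ** 2).sum(-1)
    # Q(t): worst-case ratio over all other times t'.
    return (dXdot / (dX + eps)).max(axis=1)
```

For a smooth, non-self-intersecting trajectory (like a circle in state space), Q stays small; trajectories that cross themselves with differing flow directions yield large Q, which is the sense in which M1 activity is "untangled."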

In the remainder of the dissertation, we focus on the introduction and application of deep generative modeling techniques for theoretical neuroscience. Both techniques employ a recent advance in deep generative modeling, normalizing flows, to capture complex parametric structure in neural models. The first technique, designed for statistical generative models, enables look-up inference in intractable exponential family models. Its efficiency is demonstrated by inferring neural firing rates in a log-Gaussian Poisson model of spiking responses to drifting gratings in primary visual cortex. The second technique is designed for statistical inference in mechanistic models, where the inferred parameter distribution is constrained to produce emergent properties of computation. Once fit, the deep generative model provides analytic tools for quantifying the parametric structure that gives rise to these emergent properties. This technique yielded novel scientific insight into the nature of neuron-type variability in primary visual cortex and into distinct connectivity regimes supporting rapid task switching in superior colliculus.
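The normalizing-flow machinery both techniques rely on can be illustrated with a single affine-coupling layer (the RealNVP-style building block): half the dimensions pass through unchanged and parameterize an invertible scale-and-shift of the other half, so the Jacobian log-determinant is simply the sum of the log-scales and the change-of-variables density is cheap to evaluate. This is a minimal NumPy sketch with random, untrained conditioner weights, not the thesis's actual architecture or training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

class AffineCoupling:
    """One affine-coupling layer of a normalizing flow (illustrative)."""

    def __init__(self, dim, hidden=16):
        self.d = dim // 2
        # Toy conditioner-network weights (random here; learned in practice).
        self.W1 = rng.normal(0.0, 0.1, (hidden, self.d))
        self.W2 = rng.normal(0.0, 0.1, (2 * (dim - self.d), hidden))

    def _scale_shift(self, z1):
        h = np.tanh(z1 @ self.W1.T)
        s, t = np.split(h @ self.W2.T, 2, axis=1)
        return s, t

    def forward(self, z):
        z1, z2 = z[:, : self.d], z[:, self.d :]
        s, t = self._scale_shift(z1)
        x2 = z2 * np.exp(s) + t           # invertible scale-and-shift
        log_det = s.sum(axis=1)           # log|det J| of the transform
        return np.concatenate([z1, x2], axis=1), log_det

    def inverse(self, x):
        x1, x2 = x[:, : self.d], x[:, self.d :]
        s, t = self._scale_shift(x1)      # conditioner sees only the fixed half
        z2 = (x2 - t) * np.exp(-s)
        return np.concatenate([x1, z2], axis=1)

# Sample and evaluate log density via change of variables:
# log p_x(x) = log p_z(z) - log|det J|, with p_z a standard normal base.
layer = AffineCoupling(dim=4)
z = rng.normal(size=(5, 4))
x, log_det = layer.forward(z)
log_pz = -0.5 * (z ** 2).sum(axis=1) - 2.0 * np.log(2.0 * np.pi)
log_px = log_pz - log_det
```

Stacking many such layers (permuting which half is transformed) yields a flexible, exactly invertible map whose density remains tractable, which is what lets a flow represent a complex posterior over model parameters or firing rates.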

Files

  • Bittner_columbia_0054D_16734.pdf (application/pdf, 6.51 MB)

More About This Work

Academic Units
Neurobiology and Behavior
Thesis Advisors
Cunningham, John Patrick
Degree
Ph.D., Columbia University
Published Here
July 28, 2021