Biocomputation
The prevailing modern scientific paradigm of the brain is a computational one. But if the brain is a computer—which is an 'if'—it must have operating principles, abilities and limitations that are radically different to those of artificial computers. In this session, talks will explore diverse topics within quantitative neuroscience that consider the brain as a device for computation, broadly conceived.
Session Chairs
Professor Dan V. Nicolau Jr (King’s College London)
Yasmine Ayman (Harvard University)
Keynote Talks
Professor Wolfgang Maass (Technische Universität Graz): Local prediction-learning in high-dimensional spaces enables neural networks to plan
Professor Sophie Deneve (École Normale Supérieure, Paris)
Invited Talks
Professor Christine Grienberger (Brandeis): Dendritic computations underlying experience-dependent hippocampal representation
Professor Dan V. Nicolau Jr (King’s College London): A Rose by Any Other Name: Towards a Mathematical Theory of the Neuroimmune System
Dr James Whittington (Oxford / Stanford / Zyphra): Unifying the mechanisms of the hippocampal and prefrontal cognitive maps
Spotlight Talks
Paul Haider (University of Bern): Backpropagation through space, time and the brain
Deng Pan (Oxford): Structure learning in the human hippocampus and orbitofrontal cortex
Francesca Mignacco (CUNY Graduate Center & Princeton University): Nonlinear manifold capacity theory with contextual information
Angus Chadwick (University of Edinburgh): Rotational dynamics enables noise-robust working memory
Carla Griffiths (Sainsbury Wellcome Centre): Neural mechanisms of auditory perceptual constancy emerge in trained animals
Harsha Gurnani (University of Washington): Feedback controllability constrains learning timescales of motor adaptation
Arash Golmohammadi (Department for Neuro- and Sensory Physiology, University Medical Center Göttingen): Heterogeneity as an algorithmic feature of neural networks
Sacha Sokoloski (University of Tübingen): Analytically-tractable hierarchical models for neural data analysis and normative modelling
Alejandro Chinea Manrique de Lara (UNED): Cetaceans' Brain Evolution: The Intriguing Loss of Cortical Layer IV and the Thermodynamics of Heat Dissipation in the Brain
Keynote Talks
Professor Wolfgang Maass
Technische Universität Graz
Local prediction-learning in high-dimensional spaces enables neural networks to plan
Planning and problem solving are cornerstones of higher brain function, but we do not know how the brain accomplishes them. We show that learning a suitable cognitive map of the problem space suffices, and that this can be reduced to learning to predict the next observation through local synaptic plasticity. Importantly, the resulting cognitive map encodes relations between actions and observations, and its emergent high-dimensional geometry provides a sense of direction for reaching distant goals. This quasi-Euclidean sense of direction yields a simple heuristic for online planning that works almost as well as the best offline planning algorithms from AI. If the problem space is a physical space, the method automatically extracts structural regularities from the sequence of observations it receives, so that it can generalize to unseen parts. This speeds up the learning of navigation in 2D mazes and of locomotion with complex actuator systems, such as legged bodies. This is joint work with Christoph Stöckl and Yukun Yang. Details: Nature Communications, 15(1), 2024.
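To make the planning heuristic concrete, here is a minimal Python sketch of greedy, embedding-based action selection. The callables embed and predict are hypothetical stand-ins for the learned cognitive map and next-observation model, not the authors' implementation.

import numpy as np

# Greedy online planning with a learned cognitive map (schematic).
# `embed(obs)` maps observations into the high-dimensional map space and
# `predict(obs, action)` is the learned next-observation model; both
# names are assumptions for illustration.

def plan_greedy(obs, goal, actions, embed, predict, max_steps=100):
    """At each step, pick the action whose predicted outcome lies
    closest to the goal in embedding space (the 'sense of direction')."""
    path = [obs]
    for _ in range(max_steps):
        if np.allclose(embed(obs), embed(goal)):
            break
        dists = [np.linalg.norm(embed(predict(obs, a)) - embed(goal))
                 for a in actions]
        obs = predict(obs, actions[int(np.argmin(dists))])  # roll the model forward
        path.append(obs)
    return path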
Invited Talks
Professor Christine Grienberger
Brandeis University
Dendritic computations underlying experience-dependent hippocampal representation
A crucial function of the brain is to produce useful representations of the external world. These representations are used to form a cellular memory of past experiences, which can be recruited to guide present behaviors. However, the nature of the neuronal code and the mechanisms used to create, maintain, and recall neuronal representations remain unsolved. As a result, we still know very little about how representations embedded within neuronal circuits are actually used by the brain to guide goal-directed, adaptive behaviors.
We previously demonstrated that a non-Hebbian form of synaptic plasticity, behavioral timescale synaptic plasticity (BTSP), plays a fundamental role in forming experience-dependent representations in hippocampal area CA1. BTSP has several distinct characteristics, including that it is driven by dendritic plateau potentials (plateaus) rather than by action potentials. Axons from layer 3 of the entorhinal cortex (EC3) impinge onto the site of dendritic plateau initiation, the apical tuft of CA1 pyramidal neurons, and their activity regulates plateau probability. In this talk, I will focus on our most recent data indicating that an incoming signal from EC3 directs learning-related activity changes in the hippocampal CA1 network. These results identify EC3 input as a key signal that instructs CA1 neurons, by driving BTSP, in how to represent an experience.
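For readers less familiar with BTSP, the sketch below illustrates its defining computational signature as it is commonly modelled in the literature (a generic illustration, not the speaker's model): a seconds-long presynaptic eligibility trace is converted into a weight change by a later plateau potential, so inputs active well before the plateau are still potentiated.

import numpy as np

# Schematic BTSP-like plasticity: plateau-gated potentiation acting on
# a slow eligibility trace (all constants are illustrative choices).
dt, T = 0.01, 10.0                                  # 10 s of behaviour
t = np.arange(0, T, dt)
tau_e = 1.5                                         # eligibility time constant (s)
pre = (np.abs(t - 3.0) < 0.1).astype(float)         # input active near t = 3 s
plateau = (np.abs(t - 5.0) < 0.3).astype(float)     # plateau near t = 5 s

e, w, eta = 0.0, 0.0, 0.5
for i in range(len(t)):
    e += dt * (-e / tau_e + pre[i])                 # slow eligibility trace
    w += dt * eta * plateau[i] * e                  # potentiation gated by plateau

print(f"weight change despite a ~2 s pre-to-plateau gap: {w:.3f}")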
Professor Dan V. Nicolau Jr
King’s College London
A Rose by Any Other Name: Towards a Mathematical Theory of the Neuroimmune System
The adaptive immune system is much younger than the brain, and, given its immense powers, its emergence would have required the development of exquisite neural control mechanisms. Conversely, the immune system influences the brain in myriad ways, from neural development to homeostatic modulation (such as placebo, nocebo and sickness behaviours) in the adult, and neuroinflammation and degenerative diseases in old age. The link between the brain and the immune system has been recognised by physicians for millennia, but only recently have modern experimental techniques, such as optogenetics and fMRI, allowed us to start unravelling the complexities of neuroimmunity. Despite this rapid progress, experimental work has remained highly fragmented, consisting of disparate descriptions of phenomena as fascinating as they are diverse, with no attempt at a unifying framework or theory. In this talk, I will describe some of the most exciting recent results from the field, outline a new mathematical model of neuroimmune 'engrams' in the insular cortex that we have been developing in my lab, and discuss perspectives for theoretical neuroimmunology over the next few years.
Dr James Whittington
Oxford / Stanford / Zyphra
Unifying the mechanisms of the hippocampal and prefrontal cognitive maps
Cognitive maps have emerged as leading candidates, both conceptually and neurally, for explaining how brains seamlessly generalize structured knowledge across apparently different scenarios. Two brain systems are implicated in cognitive mapping: the hippocampal formation and the prefrontal cortex. Neural activity in these brain regions, however, differs during the same task, indicating that the regions have different mechanisms for cognitive mapping. In this talk, we first provide a mechanistic understanding of how the hippocampal and prefrontal systems could build cognitive maps (with the hippocampal mechanism related to transformers and the prefrontal mechanism related to RNNs/SSMs); second, we demonstrate how these two mechanisms explain a wealth of neural data in both brain regions; and lastly, we prove that the two different mechanisms are, in fact, mathematically equivalent.
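One familiar example of such a transformer/RNN correspondence, offered here only as an analogy and not necessarily the construction used in the talk, is that self-attention with a linear (feature-map) kernel can be rewritten exactly as a recurrence with a matrix-valued state:

\[
  y_t \;=\; \frac{\sum_{s \le t} \phi(q_t)^{\top}\phi(k_s)\, v_s}
                 {\sum_{s \le t} \phi(q_t)^{\top}\phi(k_s)}
  \quad\Longleftrightarrow\quad
  S_t = S_{t-1} + \phi(k_t)\, v_t^{\top},\;\;
  z_t = z_{t-1} + \phi(k_t),\;\;
  y_t = \frac{S_t^{\top}\phi(q_t)}{z_t^{\top}\phi(q_t)} .
\]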
Spotlight Talks
Paul Haider
University of Bern
Backpropagation through space, time and the brain
In machine learning (ML), the answer to the spatiotemporal credit assignment problem is almost universally given by the error backpropagation algorithm, through both space (BP) and time (BPTT). However, BP(TT) is well known to rely on biologically implausible assumptions, in particular a dependency on information that is non-local in both space and time. Here, we introduce Generalized Latent Equilibrium (GLE), a computational framework for spatio-temporal credit assignment in dynamical physical systems. We start by defining an energy based on neuron-local mismatches, from which we derive both neuronal dynamics, via stationarity, and parameter dynamics, via gradient descent. The resulting dynamics can be interpreted as a real-time, biologically plausible approximation of BPTT in deep cortical networks with continuous-time, leaky neuronal dynamics and continuously active, local synaptic plasticity. GLE exploits the ability of biological neurons to phase-shift their output rate with respect to their membrane potential. This ability allows time-continuous inputs to be mapped into neuronal space and enables the temporal inversion of feedback signals, which is essential for approximating the adjoint states needed to estimate useful parameter updates.
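As a pointer to the underlying mathematics: in the authors' earlier Latent Equilibrium framework, which GLE generalises (the exact GLE energy may differ; this is a sketch), the energy is a sum of neuron-local mismatches evaluated at prospective, phase-advanced potentials:

\[
  E \;=\; \tfrac{1}{2}\sum_i \big\lVert u_i - W_i\, r_{i-1} \big\rVert^2,
  \qquad
  r_i = \varphi(\breve u_i), \qquad \breve u_i = u_i + \tau \dot u_i ,
\]
with neuronal dynamics obtained from stationarity of \(E\) and synaptic dynamics from gradient descent,
\[
  \dot W_i \;\propto\; -\frac{\partial E}{\partial W_i}
          \;=\; \big(u_i - W_i r_{i-1}\big)\, r_{i-1}^{\top} .
\]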
Deng Pan
University of Oxford
Structure learning in the human hippocampus and orbitofrontal cortex
Humans possess remarkable cognitive capabilities for extracting the abstract structures underlying diverse phenomena. Our brains can build ‘cognitive maps’ to organise structural knowledge, affording flexible decisions. The hippocampus (HC) and orbitofrontal cortex (OFC) are both implicated in the formation of cognitive maps, but whether their roles differ, and how the two regions interact, is currently a topic of debate. To investigate this, we designed a ‘structure reversal learning’ task and monitored participants with fMRI as they performed it. Computational modelling of the behavioural results revealed that people grasped and used abstract structural knowledge to make novel inferences. Multivariate analyses showed that the HC and medial OFC consistently encoded abstract task structures, while the lateral OFC represented the identity of each specific event. Additionally, a recurrent neural network (RNN) trained on the task displayed neural patterns mirroring this separate encoding of abstract structures and specific states. Together, these results suggest that HC and OFC act in synergy to form a comprehensive cognitive map of the task space, supporting adaptive, goal-directed behaviours.
Francesca Mignacco
CUNY Graduate Center & Princeton University
Nonlinear manifold capacity theory with contextual information
Neural systems efficiently process information through high-dimensional representations. Understanding the underlying physical principles presents a fundamental challenge at the interface of theoretical neuroscience and machine learning. A commonly adopted approach involves analysing statistical and geometrical attributes of neural representations as population-level mechanistic descriptors of task implementation. One such population-geometry metric is the invariant object classification capacity. So far, however, this metric has been limited to linearly separable settings. Here, we propose a theoretical framework that overcomes this limitation by leveraging contextual information about the input. We derive an exact formula for the context-dependent capacity that depends on manifold geometry and context correlations. We test our theoretical predictions on synthetic and real manifolds and find good agreement with numerical simulations. The increased expressivity of our framework allows us to capture representation untangling in deep networks at early stages of the layer hierarchy. Our method is data-driven and widely applicable across datasets and models.
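For orientation, the linearly separable baseline that this work generalises is the classical Gardner result (stated here as background, not as the paper's new formula): P random points in N dimensions are separable at margin \(\kappa\) up to load

\[
  \alpha(\kappa) \;=\; \frac{P}{N}
  \;=\; \left( \int_{-\kappa}^{\infty} \frac{e^{-t^{2}/2}}{\sqrt{2\pi}}\,(t+\kappa)^{2}\, dt \right)^{-1},
  \qquad \alpha(0) = 2 .
\]

Manifold capacity theory replaces points with object manifolds, whose geometry then enters the capacity; the present framework additionally conditions on context.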
Angus Chadwick
University of Edinburgh
Rotational dynamics enables noise-robust working memory
Working memory is fundamental to higher-order cognitive function, yet the circuit mechanisms through which memoranda are maintained in neural activity after removal of sensory input remain subject to vigorous debate. Prominent theories propose that stimuli are encoded either in stable, persistent activity patterns configured through recurrent attractor dynamics or in dynamic, time-varying patterns of population activity brought about through non-normal or feedforward network architectures. However, the optimal dynamics for working memory, particularly in the face of ongoing neuronal noise, have not been resolved. Here, we address this question within the analytically tractable setting of linear recurrent neural networks. We develop a novel method to optimise continuous-time linear RNNs driven by Gaussian noise to solve working memory tasks, without requiring forward simulation or backpropagation through time. Applying this optimisation method yields a novel and previously overlooked mechanism for working memory maintenance that combines non-normal and rotational dynamics. To test whether these dynamics are merely a consequence of our optimisation method, we derive analytical expressions for the updates generated by backpropagation through time, which produce near-identical learning dynamics to those of our method. Finally, we show that the optimised networks replicate core features of experimentally observed neural population activity in prefrontal cortex, including “dynamic coding” (as quantified both by cross-temporal decoding analysis and by switching of single-neuron selectivity over the delay period) despite a stable representational geometry. Taken together, our findings suggest that memoranda are stored and maintained during working memory using a combination of non-normal and rotational dynamics, which supports a stable and optimally noise-robust representation of working memory contents within a time-varying, dynamic population code.
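The setting can be made concrete in a few lines of Python (a sketch of the problem, not of the authors' optimisation method): for a linear RNN dx/dt = Ax + noise, the steady-state noise covariance P solves the Lyapunov equation AP + PAᵀ + Q = 0, and the example below shows why noise-robustness is non-trivial once dynamics are non-normal.

import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def noise_covariance(A, Q):
    # returns P solving A P + P A^T = -Q (steady-state covariance)
    return solve_continuous_lyapunov(A, -Q)

Q = np.eye(2)                              # isotropic input noise
A_normal = -0.1 * np.eye(2)                # normal: slow uniform decay
A_ff = np.array([[-0.1, 2.0],
                 [ 0.0, -0.1]])            # non-normal feedforward chain

for name, A in [("normal", A_normal), ("non-normal", A_ff)]:
    P = noise_covariance(A, Q)
    print(f"{name}: total noise variance = {np.trace(P):.1f}")
# The non-normal network amplifies noise along the same directions in
# which it transiently amplifies signal, so an optimal memory circuit
# must balance the two; the abstract's proposal combines non-normality
# with rotation to achieve that balance.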
Carla Griffiths
Sainsbury Wellcome Centre
Neural mechanisms of auditory perceptual constancy emerge in trained animals
While considered part of generalisation, the neural mechanisms of perceptual constancy remain challenging to elucidate. To examine its neural basis, we trained four ferrets in a Go/No-Go task in which they identified the target word ‘instruments’ in a stream drawn from 54 probe words. Once the animals were trained, we varied the perceived pitch across the whole stream. Gradient-boosted trees revealed that pitch was a minor factor in choice uncertainty. Using an LSTM, we decoded neural responses for each target-versus-probe word combination (trained = 715, naive = 674 units, across 4 animals), using both the whole word and cumulative 40 ms windows over time. A unit that encodes acoustics will vary its peak decoding time across probe words, whereas a categorical response will keep a constant peak. We found that the target word was robustly represented in trained animals, as assessed by higher and more invariant decoding scores. Units from trained animals could generalise over multiple distractor words and showed less variance in peak time when decoding over time compared with naive units. Overall, neural responses became robust and generalisable to the target, supporting the idea that nodes that take on individualised roles become adaptable to novel environments.
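The decoding logic can be sketched as follows (a schematic of cumulative-window decoding; the architecture, sizes and variable names are assumptions, not the authors' code):

import torch
import torch.nn as nn

class WordDecoder(nn.Module):
    """LSTM classifier: target word vs probe word from binned responses."""
    def __init__(self, n_units, n_hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_units, n_hidden, batch_first=True)
        self.readout = nn.Linear(n_hidden, 2)

    def forward(self, x):                       # x: (trials, bins, units)
        _, (h, _) = self.lstm(x)
        return self.readout(h[-1])

n_trials, n_bins, n_units = 128, 20, 30        # 20 x 40 ms bins = 800 ms
responses = torch.randn(n_trials, n_bins, n_units)   # placeholder data
decoder = WordDecoder(n_units)

for k in range(1, n_bins + 1):                 # cumulative 40 ms windows
    logits = decoder(responses[:, :k, :])      # decode from bins 0..k only
    # fit and score per window; the window with the highest accuracy
    # gives the peak decoding time for this word pairing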
Harsha Gurnani
University of Washington
Feedback controllability constrains learning timescales of motor adaptation
Previous work exploring the structure of primary motor cortex (M1) activity has largely assumed autonomous dynamics, and related work on learning in brain-computer interfaces (BCIs) has focused on local mechanisms (such as M1 synaptic plasticity). However, recent experimental evidence suggests that M1 activity is continuously modified by sensory feedback and produces corrections for noise and external perturbations, pointing to a critical need to model the interaction between feedback and intrinsic M1 dynamics. Here we propose that, for fast adaptation to BCI decoder changes, M1 dynamics can be effectively modified by changing inputs, including through flexible remapping of sensory feedback. Using recurrent network models of BCI under feedback control, we show how the rate of such adaptation is constrained by pre-existing structured dynamics. Lastly, we show that the geometry of low-dimensional network activity can affect the design and robustness of BCI decoders. By incorporating adaptive controllers upstream of M1, our work highlights the need to model input-dependent latent dynamics and clarifies how constraints on learning arise from both the statistical characteristics and the underlying dynamical structure of neural activity.
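A toy version of the input-remapping idea (an illustration under strong assumptions, not the authors' model): fix the recurrent dynamics A, and restore cursor behaviour after a decoder swap purely by re-aiming the upstream commands.

import numpy as np

rng = np.random.default_rng(0)
n = 6
A = 0.9 * np.linalg.qr(rng.standard_normal((n, n)))[0]   # fixed, stable recurrence
B = rng.standard_normal((n, 2))                          # input channels
D_new = rng.standard_normal((2, n))                      # perturbed BCI decoder

def cursor_gain(D):
    # steady-state map from a constant input u to the cursor readout D x*,
    # with x* = (I - A)^{-1} B u
    return D @ np.linalg.solve(np.eye(n) - A, B)

remap = np.linalg.inv(cursor_gain(D_new))       # re-aimed command mapping
print(np.round(cursor_gain(D_new) @ remap, 6))  # ~identity: behaviour restored
# Learning is then about finding `remap` upstream of M1, and its speed is
# constrained by how controllable the fixed dynamics A make the readout.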
Arash Golmohammadi
Department for Neuro- and Sensory Physiology, University Medical Center Göttingen
Heterogeneity as an algorithmic feature of neural networks
To gain tractability, theorists reduce reality to abstract models. Yet, a priori, it is unclear whether such a reduction discards fundamental features of the phenomenon under investigation. Consequently, despite overwhelming evidence for neuronal heterogeneity, research on biological networks predominantly focuses on networks of homogeneous neurons. We relaxed this constraint by systematically controlling the level of heterogeneity in otherwise identical networks. Several networks were trained to perform diverse cognitive tasks, such as memory, prediction, and processing. We demonstrated that, even in small networks, heterogeneous ones outperform their homogeneous counterparts. These results suggest that heterogeneity may be more than a theoretical complication: it may be an algorithmic advantage, especially in systems with finitely many units. Given its ubiquity, biological organisms have likely evolved to exploit this trait under environmental constraints and parasitic pressure. As such, we predict that heterogeneity should be a robust feature. Indeed, this prediction aligns with recent experimental findings showing that cellular heterogeneity profiles are invariant across age and external perturbation.
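The manipulation can be sketched in a few lines (a schematic; the distributions and constants are illustrative assumptions, not the authors' setup): otherwise identical rate networks whose neuronal time constants are drawn with a controlled spread sigma, with sigma = 0 recovering the homogeneous control.

import numpy as np

def make_time_constants(n, tau_mean=20e-3, sigma=0.0, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    # log-normal spread keeps taus positive; the -sigma^2/2 term keeps the
    # mean fixed at tau_mean while sigma sets the heterogeneity level
    return tau_mean * np.exp(sigma * rng.standard_normal(n) - sigma**2 / 2)

def step(x, W, inp, tau, dt=1e-3):
    # leaky rate dynamics: tau_i dx_i/dt = -x_i + tanh((W x)_i) + inp
    return x + dt / tau * (-x + np.tanh(W @ x) + inp)

n = 100
W = np.random.default_rng(1).standard_normal((n, n)) / np.sqrt(n)
for sigma in (0.0, 0.5, 1.0):    # homogeneous -> increasingly heterogeneous
    tau = make_time_constants(n, sigma=sigma)
    x = np.zeros(n)
    for _ in range(200):
        x = step(x, W, inp=0.1, tau=tau)
    # train and evaluate each network on the same task to compare performance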
Sacha Sokoloski
University of Tübingen
Analytically-tractable hierarchical models for neural data analysis and normative modelling
Latent variable models (LVMs) are useful in neuroscience both for learning distributions of neural data and for modelling how the brain engages in optimal inference. In practice we often rely on approximation schemes, such as variational methods, to implement inference and learning with LVMs, yet these schemes can introduce difficult-to-analyze biases and errors, and ideally we would avoid them unless strictly necessary. To this end, we present a general theory of hierarchical LVMs for which learning and inference can be implemented exactly. In particular, we derive necessary and sufficient conditions for exact inference and learning in a large class of exponential family LVMs. We then show how these models can be stacked to create novel, hierarchical models that retain their tractable properties. Moreover, we derive general inference and learning algorithms for these models, such as expectation-maximization and Bayesian smoothing, and show that many well-known algorithms are special cases of these general solutions. Finally, we use our theory to develop several novel models, including (i) a hierarchical probabilistic population code with a novel prior that combines a multivariate normal distribution with a Boltzmann machine, and (ii) a hierarchical Gaussian mixture model for clustering high-dimensional data. In summary, we show how to build complex LVMs without relying on unnecessary approximations. In future work we will explore training complex LVMs with variational techniques, while minimizing approximations by using our analytically tractable models as components.
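As one concrete special case of the exact algorithms described, expectation-maximization for a plain Gaussian mixture runs entirely in closed form (a minimal sketch; the abstract's hierarchical models generalise updates of this kind):

import numpy as np

def em_step(X, pi, mu, var):
    # E-step: exact posterior responsibilities p(z | x), diagonal covariances
    d2 = (X[:, None, :] - mu[None]) ** 2 / var[None]
    logp = np.log(pi) - 0.5 * (d2 + np.log(2 * np.pi * var[None])).sum(-1)
    r = np.exp(logp - logp.max(1, keepdims=True))
    r /= r.sum(1, keepdims=True)
    # M-step: closed-form maximisation of the expected log-likelihood
    Nk = r.sum(0)
    pi = Nk / len(X)
    mu = (r[:, :, None] * X[:, None, :]).sum(0) / Nk[:, None]
    var = (r[:, :, None] * (X[:, None, :] - mu[None]) ** 2).sum(0) / Nk[:, None]
    return pi, mu, var

rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
pi, mu, var = np.ones(2) / 2, rng.standard_normal((2, 2)), np.ones((2, 2))
for _ in range(50):
    pi, mu, var = em_step(X, pi, mu, var)
print(np.round(mu, 1))   # recovers cluster means near (-2, -2) and (2, 2)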
Alejandro Chinea Manrique de Lara
Universidad Nacional de Educación a Distancia
Cetaceans' Brain Evolution: The Intriguing Loss of Cortical Layer IV and the Thermodynamics of Heat Dissipation in the Brain
During the transition from the Eocene to the Oligocene epoch, ocean temperatures cooled, and this cooling is believed to have affected cetacean brain evolution. Compared with other mammals, the most intriguing feature of cetacean brains is the lack of layer IV across the entire cerebral cortex. A novel interpretation of the evolutionary and functional significance of this loss is presented using the intelligence and embodiment hypothesis. This hypothesis, grounded in evolutionary neuroscience, postulates the existence of a common information-processing principle associated with naturally evolved nervous systems, a principle that serves as the foundation from which intelligence can emerge and that underlies the efficiency of the brain’s computations. The adaptive function of this neuronal trait is shown to be related to increased heat dissipation in the cerebral cortex, as indicated by the statistical-physics model accompanying the hypothesis. This supports a previous hypothesis linking thermogenesis to the evolution of large brain sizes in cetaceans, while arguing that these results are not at odds with the possibility of levels of cognitive complexity beyond those of most other mammals.