Biocomputation
The prevailing modern scientific paradigm of the brain is a computational one. But if the brain is a computer—which is an 'if'—it must have operating principles, abilities and limitations that are radically different to those of artificial computers. In this session, talks will explore diverse topics within quantitative neuroscience that consider the brain as a device for computation, broadly conceived.
Session Chair
Professor Dan V. Nicolau Jr (King’s College London)
TBA
Invited Talks
Professor Dan Nicolau Sr (McGill)
Setting the baseline of what intelligence could be: the case of space searching by populations of filamentous fungal hyphae
Professor Andrew Adamatzky and Dr. Panagiotis Mougkogiannis (University of the West of England)
Towards proteinoid neuromorphic computers
Dr Ilias Rentzeperis (Spanish National Research Council)
Modelling a continuum of simple to complex cell behavior in V1 with the INRF paradigm
Contributed Talks
Professor Marcelo Bertalmío (Spanish National Research Council)
Modeling challenging visual phenomena by taking into account dynamic dendritic nonlinearities
Dr Steeve Laquitaine (Swiss Federal Institute of Technology)
Using a large-scale biophysically detailed neocortical circuit model to map spike sorting biases
Jia Li (KU Leuven)
Self-organization of log-normally distributed connection strength
Hanna Derets (University of Waterloo)
Distance Metrics and Minimization of Epsilon Automata, with Applications to the Analysis of EEG Microstate Sequences
Invited Talks
Professor Dan Nicolau Sr
McGill
Setting the baseline of what intelligence could be: the case of space searching by populations of filamentous fungal hyphae
The ubiquity of filamentous fungi suggests that they are efficient at searching space in microconfining environments, a capability we probed experimentally in microfluidic networks. Filamentous fungi appear to perform space searching using a hierarchical, three-layered system. The space-searching output of a single hypha is the result of a tug-of-war between three independent "software" algorithms; the spatial output of multiple, closely confined hyphae integrates this lower-level "software" and runs a higher-level program that is itself the result of a competition between two software programs; finally, the spatial distribution of the mycelium is simply the result of quasi-independent space searching by subpopulations of hyphae. The optimality of space searching by filamentous fungi suggests that it exhibits elements of intelligence, both in its biological algorithms and in the hierarchical architecture of its information processing, which balances complexity with specialisation.
Professor Andrew Adamatzky and Dr. Panagiotis Mougkogiannis
University of the West of England
Towards proteinoid neuromorphic computers
Proteinoids, also known as thermal proteins, are synthesized by heating amino acids to their melting point, initiating polymerization into polymeric chains. When placed in an aqueous solution, these proteinoids swell into hollow microspheres. These microspheres exhibit inherent bursts of electrical potential spikes, and their electrical activity patterns change when exposed to light. Furthermore, they can connect with one another through pores and tubes, forming networks with adjustable growth patterns. We propose to use assemblies of these proteinoid microspheres to develop unconventional computing devices. Our laboratory experiments demonstrate the implementation of Boolean gates and speech recognition using solutions containing proteinoid microspheres. The presentation delves into fascinating aspects of establishing proto-neural networks using proteinoid microspheres.
Dr Ilias Rentzeperis
Spanish National Research Council
Modelling a continuum of simple to complex cell behavior in V1 with the INRF paradigm
A recent model of neural summation, the intrinsically nonlinear receptive field (INRF), has shown very promising results in overcoming the limitations of the linear RF, explaining several phenomena in visual neuroscience and perception. The INRF formulation falls outside the classical paradigm of the so-called standard model of vision: it is not expressed as a cascade of linear and nonlinear operations, but as a highly nonlinear process modelling the effects of nonlinear integration of inputs by dendrites and of action potentials propagating from the soma back to the dendrites. Here, we focus on the capacity of the INRF formulation to model simple and complex responses in V1. Traditionally, the standard model has explained this dichotomy with a hierarchical model of connections, in which LGN cells aligned along a particular axis feed into simple cells, and simple cells with the same orientation selectivity but shifted receptive-field phases project to complex cells. Physiological studies have contested the exclusivity of the hierarchical model, indicating that both simple and complex cells can arise in parallel from direct geniculate inputs (the parallel model). Finally, the recurrent model considers the connectivity patterns in the sensory cortex and proposes that simple and complex cell responses are different instantiations of the same circuit, differing only in the gain of the recurrent connections. Here we show that the INRF formulation can model a continuum of simple to complex cell behavior in a simple and biologically plausible manner, incorporating each of the above-mentioned models as special cases.
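To make the departure from a cascade of linear and nonlinear stages concrete, a minimal one-dimensional sketch of an INRF-style operator can be written as m*I − λ w*σ(I − g*I): a linear term minus a weighted, pointwise-nonlinear comparison of the input with a locally filtered copy of itself. The kernel sizes, the tanh nonlinearity, and all parameter values below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """Normalized 1-D Gaussian kernel."""
    x = np.arange(size) - size // 2
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def inrf_response(I, m, w, g, lam=1.0, sigma_fn=np.tanh):
    """Schematic 1-D INRF: a linear term minus a nonlinear surround term.

    I        : input signal (1-D array)
    m, w, g  : kernels for the linear term, the weighting of the
               nonlinear term, and the local reference signal
    lam      : weight of the nonlinear term (lambda)
    sigma_fn : pointwise nonlinearity sigma
    """
    linear = np.convolve(I, m, mode='same')
    shift = np.convolve(I, g, mode='same')              # local reference signal
    nonlinear = np.convolve(sigma_fn(I - shift), w, mode='same')
    return linear - lam * nonlinear

# With lam = 0 the operator reduces to a classical linear receptive field.
I = np.random.default_rng(0).normal(size=256)
m = gaussian_kernel(21, 3.0)
w = gaussian_kernel(21, 6.0)
g = gaussian_kernel(21, 1.5)
r_linear = inrf_response(I, m, w, g, lam=0.0)
r_inrf = inrf_response(I, m, w, g, lam=2.0)
```

Note that the nonlinearity acts inside the summation, on the difference between the input and its locally filtered copy, so the operator cannot be rewritten as a linear filter followed by a pointwise nonlinearity.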
Contributed Talks
Professor Marcelo Bertalmío
Spanish National Research Council
Modeling challenging visual phenomena by taking into account dynamic dendritic nonlinearities
The so-called standard model of vision is grounded on the notion of a linear neural summation process followed by a nonlinearity. But the assumption that neural summation can be modeled as being linear is an over-simplification, whose adequacy is contested by a growing number of neuroscience studies that show the limitations of the standard model in predicting neuron responses to complex stimuli.
Recently, an alternative neural summation model has been proposed, the Intrinsically Nonlinear Receptive Field (INRF), based on considering the dynamic and input-dependent nature of dendritic nonlinearities. As a result, INRF falls outside the standard model.
In this talk we will show how the INRF framework can be used to build visual perception models that explain very challenging experimental data: (1) Plaid masking: unlike INRF, modern models still cannot explain plaid masking and oblique masking concurrently with the same set of parameters; (2) Motion perception: with a single, simple model, we reproduce both basic and complex motion perception phenomena, whereas classical models need additional stages to predict, individually, some of the phenomena presented; (3) Image and video quality assessment: we obtain state-of-the-art results with a novel type of metric based on INRF; (4) Brightness perception and visual illusions: INRF-based models explain several brightness perception phenomena and visual illusions with better accuracy than classic neural field approaches and without the need to adjust the parameters for different inputs.
Dr Steeve Laquitaine
Swiss Federal Institute of Technology
Using a large-scale biophysically detailed neocortical circuit model to map spike sorting biases
Accurate spike sorting on extracellular recordings is critical to decipher neural codes. The development of spike sorting algorithms is supported by evaluation against hybrid simulation datasets. Because such datasets are generated by adding a limited set of spike waveforms, sorted from actual recordings, to a simulated background, they may underestimate biological variability; moreover, they rely on strong assumptions about firing rate statistics. We evaluated the state-of-the-art spike sorter Kilosort 3 against data from simulations of a recently published, large-scale model of rodent neocortical circuitry (Isbister, 2023). The model comprises 30,190 morphologically detailed neurons spanning all six cortical layers. It captures biological diversity in 60 morphological and 11 electrical neuron types and features realistic connectivity. We simulated extracellular recordings from Neuropixels 1.0, a cutting-edge neural probe that can simultaneously record from hundreds of neurons. The resulting traces resembled actual recordings: they comprised spikes with a high signal-to-noise ratio, low-amplitude multi-unit activity, and background noise, and spike shapes varied with cell positions relative to electrode contacts. We then evaluated Kilosort 3 against the true spike trains of the model neurons that can, in theory, be isolated, and against several existing synthetic datasets. In agreement with Buzsáki & Mizuseki (2014), our model had a lognormal firing rate distribution with a peak below 1 Hz. After confirming published accuracies on existing datasets, we found significantly lower performance of Kilosort 3 on our simulated data, particularly for sparse firing units, leading to an overestimated mean firing rate. Our results indicate that spike sorting undersamples sparse firing units, which predominate in the cortex.
Jia Li
KU Leuven
Self-organization of log-normally distributed connection strength
Studies of brain network connectivity in various species have revealed a consistent long-tailed, typically lognormal, distribution of connection strengths. This ubiquitous phenomenon is considered fundamental to both the structural and functional organization of the brain. However, why connection strengths exhibit such a distribution remains elusive. In this work, we propose an algorithm that changes network connections in a self-organizing way. The algorithm combines structural plasticity and synaptic plasticity by randomly choosing, at each step, between rewiring connections and adjusting connection strengths; the parameter p_rewire is the probability of rewiring at a given step. Connections are rewired by adaptive rewiring, while connection strengths are adjusted by a Hebbian rule and stabilized with multiplicative synaptic normalization. Starting from random networks with normally distributed strengths, our algorithm robustly gives rise to a lognormal-like weight distribution, and we find that p_rewire controls the dispersion of this distribution. We further introduce a small proportion of random rewiring into the algorithm; without disrupting the formation of the lognormal weight distribution, convergent-divergent units then emerge in the network, as they do when no weight adjustment is applied.
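The loop described above can be sketched as follows. This is a simplified stand-in, not the authors' algorithm: the random edge relocation below replaces the adaptive rewiring used in the talk, the activity signal is a placeholder, and the learning rate and density are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def step(W, activity, p_rewire=0.2, eta=0.05):
    """One update: rewire with probability p_rewire, else adjust weights.

    Rewiring moves an existing connection to a random vacant slot (a
    stand-in for adaptive rewiring). Weight adjustment applies a Hebbian
    increment to existing edges, then multiplicative normalization so
    that each node's total outgoing strength is rescaled to 1.
    """
    n = W.shape[0]
    if rng.random() < p_rewire:
        src = np.argwhere(W > 0)                          # existing edges
        dst = np.argwhere((W == 0) & ~np.eye(n, dtype=bool))  # vacant slots
        i = src[rng.integers(len(src))]
        j = dst[rng.integers(len(dst))]
        W[j[0], j[1]] = W[i[0], i[1]]
        W[i[0], i[1]] = 0.0
    else:
        mask = W > 0
        # Hebbian: strengthen edges whose endpoints are co-active
        W[mask] += eta * np.outer(activity, activity)[mask]
        # multiplicative synaptic normalization, row by row
        row_sums = W.sum(axis=1, keepdims=True)
        row_sums[row_sums == 0] = 1.0
        W *= 1.0 / row_sums
    return W

n = 100
W = (rng.random((n, n)) < 0.1) * np.abs(rng.normal(1.0, 0.2, (n, n)))
np.fill_diagonal(W, 0.0)
for _ in range(2000):
    activity = np.abs(rng.normal(size=n))   # placeholder network activity
    W = step(W, activity)

weights = W[W > 0]   # inspect the surviving weight distribution
```

Repeated multiplicative rescaling of Hebbian increments is what pushes the weights toward a long-tailed, lognormal-like shape, since each weight evolves as a product of many random factors.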
Hanna Derets
University of Waterloo
Distance Metrics and Minimization of Epsilon Automata, with Applications to the Analysis of EEG Microstate Sequences
An ε-machine is a probabilistic finite state automaton constructed from a series of discrete observations and representing the minimal statistics sufficient to emulate the behavior of the observed process. Crutchfield et al. [1] developed a causal-state splitting reconstruction algorithm for ε-machines that can be applied to any sequence of observations encoded using the letters of an input alphabet. This technique has previously been applied to the analysis of neural spatiotemporal dynamics using EEG microstate sequences [2], studying properties such as statistical complexity (number of causal states), entropy rate, and the algebraic structure of the corresponding semigroups.
This work concentrates on the development and implementation of an analogous ε-machine construction method based on grouping discrete histories of observations into equivalence classes (the causal states) instead of splitting them. This model is then applied to the analysis of EEG data from two groups of participants (meditators and meditation-naïve healthy controls) under several cognitive conditions (mind-wandering, verbalization, visualization) using clock-, event-, and peak-based temporal modes. The analysis is carried out using such measures and tools as the distance between ε-machines (using novel distance metrics), the likelihood that an EEG microstate sequence is accepted by the ε-machine, the separation of groups of machines, and comparisons of the partitions of n-grams into causal states. These newly developed techniques can also be applied to the analysis of neural data from other neuroimaging methods (e.g. fMRI), or of other dynamical-systems data that can be represented as temporal sequences of activation patterns encoded with discrete symbols.
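The grouping idea can be illustrated with a toy sketch: estimate each history's empirical next-symbol distribution and merge histories whose distributions are close. This is only a caricature of the method described above; a full ε-machine reconstruction uses statistical hypothesis tests over variable-length histories, and the function name, the fixed history length, and the total-variation threshold `tol` are all illustrative assumptions.

```python
import numpy as np
from collections import defaultdict

def causal_states(seq, hist_len=2, tol=0.1):
    """Group length-`hist_len` histories into approximate causal states.

    Two histories are merged when their empirical next-symbol
    distributions differ by less than `tol` in total variation
    distance.
    """
    alphabet = sorted(set(seq))
    counts = defaultdict(lambda: np.zeros(len(alphabet)))
    for i in range(len(seq) - hist_len):
        h = tuple(seq[i:i + hist_len])
        counts[h][alphabet.index(seq[i + hist_len])] += 1
    dists = {h: c / c.sum() for h, c in counts.items()}

    states = []      # each state: a list of histories with similar futures
    for h, d in dists.items():
        for state in states:
            rep = dists[state[0]]                    # state representative
            if 0.5 * np.abs(d - rep).sum() < tol:    # total variation distance
                state.append(h)
                break
        else:
            states.append([h])
    return states

rng = np.random.default_rng(0)
# an i.i.d. coin flip: all histories share one future, so one causal state
flips = list(rng.integers(0, 2, size=4000))
states_iid = causal_states(flips, hist_len=2, tol=0.2)
# a period-2 sequence: the two phases need two distinct causal states
states_per = causal_states(list("AB" * 200), hist_len=1, tol=0.2)
```

The number of states recovered this way corresponds to the statistical complexity measure mentioned above: one state for a memoryless process, two for a period-2 process.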