Virtual Session
Virtual Talks
Shubham Choudhary (Harvard): Implicit generative models using kernel similarity matching
Quilee Simeon (MIT): Scaling Properties for Artificial Neural Network Models of a Small Nervous System
Stefano De Giorgis
Aslan Satary Dizaji (AutocurriculaLab & Neuro-Inspired Vision & LangTechAI): A Multi-agent Reinforcement Learning Study of Evolution of Communication and Teaching under Libertarian and Utilitarian Governing Systems
Yuanxiang Gao (Institute of Theoretical Physics, Chinese Academy of Sciences): A computational model of learning flexible navigation in a maze by layout-conforming replay of place cells
Michael A. Popov (OMCAN, Mathematical Institute University of Oxford UK): Cognitive Acausality Principle and a new kind of Phenomenological Mathematics
Camilla Simoncelli (University of Nevada Reno, USA): Modeling and correcting for individual differences in color appearance: seeing through another’s eyes
Galen Pogoncheff (University of California, Santa Barbara): Beyond Sight: Probing Alignment Between Image Models and Blind V1
Harvard
Implicit generative models using kernel similarity matching
Understanding how the brain encodes representations of a given stimulus is a key question in neuroscience and has influenced the development of artificial neural networks with brain-like learning abilities. Recently, learning representations by capturing the similarity between input samples has been studied to answer this question in the context of learning downstream features from the input. However, this approach has not been studied in the generative setting, which is crucial for explaining top-down interactions in sensory processing and consistent with the predictive abilities of our neural circuitry. We propose a similarity matching framework for generative modeling. We show that representation learning under this scheme can be achieved by maximizing the similarity between the input kernel and a latent kernel, which leads to an implicit generative model arising from learning the kernel structure in the latent space. We argue that the framework can be used to learn input manifold structures, potentially giving insights into task representations in the brain. Finally, we suggest a neurally plausible architecture to learn the model parameters, linking representation learning via similarity matching with predictive coding.
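The core idea of matching an input kernel with a latent kernel can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the sizes, learning rate, and simple gradient-ascent loop are all assumptions made for the sketch.

```python
import numpy as np

# Minimal sketch of kernel similarity matching (sizes and learning rate are
# illustrative, not from the paper): gradient ascent on the objective
#   tr(Kx Kz) - 0.5 * tr(Kz Kz),  with Kx = X^T X and Kz = Z^T Z,
# which is maximized when the latent kernel Kz captures the dominant
# structure of the input kernel Kx.
rng = np.random.default_rng(0)
d, k, n = 10, 3, 50                       # input dim, latent dim, samples
X = rng.standard_normal((d, n))
Z = 0.1 * rng.standard_normal((k, n))     # small random initial latents

Kx = X.T @ X                              # input similarity kernel (n x n)

def cosine(A, B):
    return float(np.sum(A * B) / (np.linalg.norm(A) * np.linalg.norm(B)))

align_before = cosine(Kx, Z.T @ Z)
lr = 1e-3
for _ in range(500):
    Kz = Z.T @ Z                          # latent similarity kernel
    Z += lr * (Z @ (Kx - Kz))             # gradient of the objective (up to a factor of 2)
align_after = cosine(Kx, Z.T @ Z)
```

After training, the latent kernel's cosine similarity to the input kernel rises well above its initial value, the sense in which representation learning here "matches" kernels.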
Massachusetts Institute of Technology
Scaling Properties for Artificial Neural Network Models of a Small Nervous System
The nematode worm C. elegans offers a unique opportunity to explore in silico, data-driven models of a whole nervous system, thanks to its transparency and well-characterized nervous system, for which extensive measurement data from wet-lab experiments are available. This study explores scaling properties that may govern learning the underlying neural dynamics of this small nervous system using artificial neural network (ANN) models. We investigate self-supervised, next-time-step neural activity prediction accuracy as a function of dataset and model size. For data scaling, we observe a log-linear reduction in mean-squared error (MSE) with more neural activity data. For model scaling, MSE shows a nonlinear relationship with ANN model size. Additionally, we find that the dataset- and model-size scaling properties are affected by the choice of model architecture but not by the experimental source of the C. elegans neural data. While our results do not achieve long-horizon predictive models of C. elegans nervous system dynamics, they indicate that recording more neural data is a promising approach toward better predictive ANN models of a small nervous system.
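A log-linear data-scaling trend of the kind reported here is typically quantified by fitting a straight line to MSE against log dataset size. The sketch below uses entirely synthetic numbers (the slope and intercept are made up, not the paper's results) to show the fit:

```python
import numpy as np

# Synthetic illustration of a log-linear data-scaling law: if validation MSE
# falls linearly in log10(N), a straight-line fit recovers the scaling slope.
# The values below are idealized stand-ins, not measured results.
n_samples = np.array([1e3, 3e3, 1e4, 3e4, 1e5])
mse = 2.0 - 0.38 * np.log10(n_samples)    # exactly log-linear by construction

# polyfit returns coefficients highest-degree first: (slope, intercept)
slope, intercept = np.polyfit(np.log10(n_samples), mse, deg=1)
```

Note that a purely log-linear trend cannot extrapolate indefinitely, since MSE is bounded below by zero; the fit only summarizes behavior over the measured range.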
AutocurriculaLab & Neuro-Inspired Vision & LangTechAI
A Multi-agent Reinforcement Learning Study of Evolution of Communication and Teaching under Libertarian and Utilitarian Governing Systems
Laboratory experiments have shown that communication plays an important role in solving social dilemmas. Here, by extending AI-Economist, a mixed-motive multi-agent reinforcement learning environment, I intend to answer the following descriptive question: which governing system facilitates the emergence and evolution of communication and teaching among agents? To answer this question, AI-Economist is extended with a voting mechanism to simulate three different governing systems along the individualistic-collectivistic axis, from Full-Libertarian to Full-Utilitarian. In the original AI-Economist framework, agents can build houses individually by collecting material resources from their environment. Here, AI-Economist is further extended to include communication with possible misalignment - a variant of the signalling game - by letting agents build houses together if they name mutually complementary material resources with the same letter. Moreover, another extension adds teaching with possible misalignment - again a variant of the signalling game - by designating half of the agents as teachers, who know how to use mutually complementary material resources to build houses but cannot build actual houses, and the other half as students, who lack this information but can build those houses if the teachers teach them. I found strong evidence that a collectivistic environment such as the Full-Utilitarian system is more favourable for the emergence of communication and teaching or, more precisely, for the evolution of language alignment. Moreover, I found some evidence that the evolution of language alignment through communication and teaching under collectivistic governing systems makes individuals more advantageously inequity-averse. As a result, there is a positive correlation between the evolution of language alignment and equality in a society.
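The "same letter" coordination at the heart of the communication extension can be illustrated with a stripped-down Lewis signalling game. This is a drastic simplification of the multi-agent RL setup described above, with hypothetical parameters and a simple Roth-Erev reinforcement rule standing in for the full learning algorithm:

```python
import numpy as np

# Minimal two-agent signalling sketch (illustrative, not the paper's setup):
# each agent names a resource with one of two letters; a house is built
# (reward 1) only if the letters match. Reinforcing successful choices
# (Roth-Erev learning) drives the agents toward a shared naming convention,
# i.e. toward language alignment.
rng = np.random.default_rng(1)
n_letters = 2
prop = np.ones((2, n_letters))            # choice propensities per agent

for _ in range(2000):
    probs = prop / prop.sum(axis=1, keepdims=True)
    choices = [rng.choice(n_letters, p=probs[i]) for i in range(2)]
    reward = 1.0 if choices[0] == choices[1] else 0.0
    for i in range(2):
        prop[i, choices[i]] += reward     # reinforce letters that led to success

final_probs = prop / prop.sum(axis=1, keepdims=True)
agreement = float(final_probs[0] @ final_probs[1])   # prob. both pick the same letter
```

Because only matched rounds are rewarded, positive feedback locks both agents into one arbitrary letter, a toy version of the alignment whose emergence the governing systems modulate in the full environment.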
Institute of Theoretical Physics, Chinese Academy of Sciences
A computational model of learning flexible navigation in a maze by layout-conforming replay of place cells
Recent experimental observations have shown that the reactivation of hippocampal place cells (PCs) during sleep or wakeful immobility depicts trajectories that can go around barriers and can flexibly adapt to a changing maze layout. However, existing computational models of replay fall short of generating such layout-conforming replay, restricting their usage to simple environments, like linear tracks or open fields. In this paper, we propose a computational model that generates layout-conforming replay and explains how such replay drives the learning of flexible navigation in a maze. First, we propose a Hebbian-like rule to learn the inter-PC synaptic strength during exploration. Then we use a continuous attractor network (CAN) with feedback inhibition to model the interaction among place cells and hippocampal interneurons. The activity bump of place cells drifts along paths in the maze, which models layout-conforming replay. During replay in sleep, the synaptic strengths from place cells to striatal medium spiny neurons (MSNs) are learned by a novel dopamine-modulated three-factor rule to store place-reward associations. During goal-directed navigation, the CAN periodically generates replay trajectories from the animal’s location for path planning, and the trajectory leading to maximal MSN activity is followed by the animal. We have implemented our model in a high-fidelity virtual rat in the MuJoCo physics simulator. Extensive experiments have demonstrated that its superior flexibility during navigation in a maze is due to the continual re-learning of inter-PC and PC-MSN synaptic strengths.
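Why a Hebbian-like rule yields layout-conforming structure can be seen in a toy version: if synapses are strengthened only between place cells that fire in succession, and a wall prevents some cells from ever firing in succession, the learned weights encode the maze layout. The grid size, wall placement, and counting-based update below are illustrative assumptions, not the paper's actual rule:

```python
import numpy as np

# Toy sketch of layout-conforming Hebbian learning: place cells tile a 5x5
# maze with a wall between columns 2 and 3 (rows 0-3, with a gap at row 4).
# During a random walk, synapses between successively visited cells are
# strengthened, so learned weights are zero across the wall.
size = 5
def idx(r, c):
    return r * size + c

adj = np.zeros((size * size, size * size), dtype=bool)
for r in range(size):
    for c in range(size):
        for dr, dc in ((0, 1), (1, 0)):
            r2, c2 = r + dr, c + dc
            if r2 >= size or c2 >= size:
                continue
            if dc == 1 and c == 2 and r < 4:
                continue                          # wall segment blocks this move
            adj[idx(r, c), idx(r2, c2)] = adj[idx(r2, c2), idx(r, c)] = True

rng = np.random.default_rng(2)
W = np.zeros((size * size, size * size))          # inter-PC synaptic strengths
pos = idx(0, 0)
for _ in range(20000):                            # random exploration of the maze
    nxt = rng.choice(np.flatnonzero(adj[pos]))
    W[pos, nxt] += 1.0                            # Hebbian: successive co-activation
    pos = nxt
W /= W.max()                                      # normalize to [0, 1]
```

An activity bump propagating through such a weight matrix can only drift along traversable paths, which is the mechanism behind layout-conforming replay in the full model.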
OMCAN, Mathematical Institute University of Oxford UK
Cognitive Acausality Principle and a new kind of Phenomenological Mathematics
Assuming that humans are able to build models of external reality and models of internal experience at the same time, we must posit the existence of a very fundamental form of human reflection (a consciousness correlate) which Immanuel Kant defined as “transcendental reflection” (A163/B319). Today, celebrating Kant’s 300th anniversary, we would like to recall that the psychiatrists Vladimir Bekhterev and Carl Jung and the quantum theorist Wolfgang Pauli reinvented Kantian transcendental reflection as the cognitive acausality principle (CAP) in the 20th century. It is shown that CAP could be considered a platform for a new kind of Phenomenological Mathematics with applications in number theory, physics, and AI. (Please see the extended version in the attached PDF abstract.)
University of Nevada Reno (USA)
Modeling and correcting for individual differences in color appearance: seeing through another’s eyes
Individual differences in color vision arise at many levels, from the spectral sensitivities of the cones to individual color judgments. Peripheral sensitivity differences are routinely corrected for, and differences in color appearance do not depend on factors like the density of preretinal screening pigments or the cone ratios. We developed a procedure that directly adjusts images for the varied color percepts of different observers, based on measurements of variations in hue scaling, in the unique and binary hues, and in the achromatic point. Chromaticities in the image are first mapped onto the average scaling function. The corresponding hue percepts are then used to estimate the chromatic axis that would produce the same hue percept in an individual, based on their individual scaling function. Thus two observers – each looking at a different image tailored to their specific hue percepts – should describe the colors in the images in similar ways. This model allows us to visualize the range of phenomenal color experience when the physical stimulus is the same; the correction compensates for high-level differences in color perception and could be used to factor out perceptual differences in color calibration or data visualization.
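The two-step mapping – stimulus through the average scaling function, then back through the inverse of an individual's function – can be sketched with one-dimensional hue angles. The scaling functions below are hypothetical stand-ins (an identity for the average observer, a smooth monotone warp for the individual), not measured data:

```python
import numpy as np

# Sketch of the correction: a hue-scaling function maps stimulus hue angle
# (degrees) to perceived hue. A stimulus angle theta is mapped through the
# average observer's function f_avg, and that percept is pushed through the
# inverse of a hypothetical individual's function f_obs, yielding the stimulus
# that gives this individual the average observer's percept.
angles = np.linspace(0.0, 360.0, 361)
f_avg = angles.copy()                                   # average observer: identity here
f_obs = angles + 15.0 * np.sin(np.radians(angles))      # monotone individual warp

def correct(theta):
    perceived = np.interp(theta, angles, f_avg)         # percept of the average observer
    return np.interp(perceived, f_obs, angles)          # invert f_obs (monotone, so valid)

theta = 90.0
corrected = correct(theta)
# Viewing the corrected stimulus, the individual perceives what the average
# observer perceives at theta:
roundtrip = np.interp(corrected, angles, f_obs)
```

The inversion by `np.interp` is only valid because the scaling function is assumed strictly monotone over the range shown; real hue-scaling data would need unwrapping around the circular hue axis.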
University of California, Santa Barbara
Beyond Sight: Probing Alignment Between Image Models and Blind V1
Neural activity in the visual cortex of blind humans persists in the absence of visual stimuli. However, little is known about the preservation of visual representation capacity in these cortical regions, which could have significant implications for neural interfaces such as visual prostheses. We present a series of analyses on the shared representations between evoked neural activity in the primary visual cortex (V1) of a blind human with an intracortical visual prosthesis, and latent visual representations computed in deep neural networks (DNNs). In the absence of natural visual input, we examine neural activity induced by electrical stimulation and mental imagery. We use representational similarity and linear encoding analyses to quantitatively demonstrate alignment between latent DNN activations and neural activity measured in blind V1. We additionally propose a proof-of-concept approach towards enhancing the interpretability of neurons recorded in blind V1 by studying maximally exciting images (MEIs). Across 69 DNNs, blind V1 alignment was positively correlated with 1) DNN alignment with neural activity in the visual cortex of sighted primates (Pearson r = 0.43, p < 0.01) and also 2) DNN ImageNet accuracy (Pearson r = 0.49, p < 0.01). MEIs predicted for multiple neuron recording sites were observed to share visual features with the electrically evoked visual perceptions that elicited a strong response at that site. The results of these studies suggest the presence of natural visual processing in blind V1 during electrically evoked visual perception and present unique directions in mechanistically understanding and interfacing with blind V1. (Accepted at ICLR 2024 Re-Align Workshop)
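The representational similarity analysis underlying the alignment scores can be illustrated compactly. The data below are synthetic (the "neural" responses are a noisy linear readout of random DNN features, an assumption made so that a clearly positive alignment is expected), and the sizes are arbitrary:

```python
import numpy as np

# Toy representational similarity analysis (RSA): build representational
# dissimilarity matrices (RDMs) for DNN features and for neural responses,
# then Pearson-correlate their upper triangles to get an alignment score.
rng = np.random.default_rng(3)
n_stim, n_feat, n_chan = 40, 32, 64
dnn = rng.standard_normal((n_stim, n_feat))             # DNN features per stimulus
readout = rng.standard_normal((n_feat, n_chan)) / np.sqrt(n_feat)
neural = dnn @ readout + 0.1 * rng.standard_normal((n_stim, n_chan))

def rdm(X):
    """Dissimilarity between stimuli: 1 - Pearson correlation of response rows."""
    return 1.0 - np.corrcoef(X)

iu = np.triu_indices(n_stim, k=1)                       # off-diagonal entries only
alignment_r = float(np.corrcoef(rdm(dnn)[iu], rdm(neural)[iu])[0, 1])
```

Comparing RDMs rather than raw activations is what makes the analysis applicable across systems with different dimensionalities, such as DNN layers and V1 recording sites.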