Poster Session 2 (virtual)

Fidelis.ai

Identifying the active properties of layer 5 myelinated axons with automated and robust optimization of action potential propagation

Rapid saltatory conduction over myelinated axons is shaped by a confluence of biophysical mechanisms. These include the linear conductive and capacitive properties of the myelin sheath and underlying axon, as well as many nonlinearly conducting, differentially distributed ion channels across the neuron, notably of sodium (NaV) and potassium (KV) subtypes. To understand the precise contribution of each active mechanism to shaping action potential conduction in structures that are difficult to explore experimentally, a computational modelling approach is necessary. Here, using axo-somatic patch-clamp recordings, detailed morphological reconstructions, and experimentally verified membrane mechanisms, we developed a large-scale, robust, parallelized simulation approach for action potential propagation throughout neocortical myelinated pyramidal neurons with axons up to ~800 μm in length.
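
To make the modelling setup concrete, the following minimal sketch (Python with NEURON, using the stock hh mechanism as a stand-in for the experimentally verified NaV/KV channel models; geometry and parameter values are illustrative assumptions, not the authors' fitted values) builds a soma plus a simplified chain of myelinated internodes and nodes of Ranvier and records the propagating spike:

    from neuron import h
    h.load_file('stdrun.hoc')

    soma = h.Section(name='soma')
    soma.L = soma.diam = 20
    soma.insert('hh')                    # stand-in for detailed NaV/KV mechanisms

    prev, nodes = soma, []
    for i in range(10):                  # simplified node/internode chain
        inter = h.Section(name=f'internode{i}')
        inter.L, inter.diam = 75, 1
        inter.cm = 0.04                  # myelin lowers effective capacitance
        inter.insert('pas')
        node = h.Section(name=f'node{i}')
        node.L = node.diam = 1
        node.insert('hh')
        node(0.5).hh.gnabar = 1.2        # high nodal NaV density (illustrative)
        inter.connect(prev(1))
        node.connect(inter(1))
        prev = node
        nodes.append(node)

    stim = h.IClamp(soma(0.5))
    stim.delay, stim.dur, stim.amp = 1, 0.5, 2
    t = h.Vector()
    t.record(h._ref_t)
    v_first, v_last = h.Vector(), h.Vector()
    v_first.record(nodes[0](0.5)._ref_v)
    v_last.record(nodes[-1](0.5)._ref_v)
    h.finitialize(-70)
    h.continuerun(10)
    # conduction velocity ~ inter-node distance / spike-peak latency difference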

Without bias toward any particular action potential feature, we reproduce single recorded spikes across major metrics: threshold, half-width, peak amplitude, after-hyperpolarization, after-depolarization, and conduction velocity. To determine parameter contributions with statistical robustness and to disentangle non-unique solutions, we seed initial parameters according to a Monte Carlo fractional factorial method. To do so, we created software based on NEURON and ran our simulations on supercomputers at the NSG (SDSC and UT Austin) and EBRAINS (JSC).
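
One way to realize such Monte Carlo fractional factorial seeding (a sketch with hypothetical parameter names and bounds, not the authors' exact design) is to jitter random samples around the corners of a half-fraction two-level design:

    import numpy as np

    # Hypothetical bounds for a few channel densities and passive properties
    bounds = {'gnabar_node': (0.5, 3.0), 'gkbar_node': (0.01, 0.3),
              'g_pas': (1e-5, 1e-3), 'cm_myelin': (0.02, 0.1)}
    names = list(bounds)
    lo = np.array([bounds[n][0] for n in names])
    hi = np.array([bounds[n][1] for n in names])

    # Half-fraction of the 2^4 factorial corners (even-parity subset)
    corners = np.array([[int(b) for b in f'{i:04b}'] for i in range(16)])
    half = corners[corners.sum(axis=1) % 2 == 0]

    # Monte Carlo jitter around each corner yields distinct optimization seeds
    rng = np.random.default_rng(0)
    seeds = []
    for c in half:
        base = lo + c * (hi - lo)
        for _ in range(5):
            jitter = rng.uniform(-0.05, 0.05, len(names)) * (hi - lo)
            seeds.append(np.clip(base + jitter, lo, hi))
    seeds = np.array(seeds)      # each row is one initial parameter vector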

Our latest optimization results indicate that it is now feasible to find unsupervised, unbiased, and statistically robust values for the biophysical parameters underlying action potential propagation, not only in myelinated axons but also throughout dendritic and axonal trees. An exciting future direction involves extending our axon-tree results to fully reconstructed axon networks in single cells, to broaden our understanding of axonal computation.

Dr Aslan Satary Dizaji

AutocurriculaLab & Neuro-Inspired Vision

Dimensionality of Intermediate Representations of Deep Neural Networks with Biological Constraints

It is generally believed that deep neural networks and the animal brain solve vision tasks by changing the dimensionality of input stimuli across a deep hierarchy. However, the evidence so far shows that, in object recognition tasks, deep neural networks first expand and then compress the dimensionality of stimuli, while primate brains do the opposite, first compressing and then expanding the dimensionality of stimuli across the deep hierarchy. In this project, it is shown that if two biological constraints, namely the non-negativity of activities and energy efficiency in both activities and weights, are imposed on deep neural networks, the dimensionality trends of deep neural networks and the primate brain match each other: in both cases the dimensionality of stimuli is first compressed and then expanded. This result shows how neuroscience can lead to a better understanding of artificial intelligence.
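
As an illustration of how such dimensionality trends can be measured, the sketch below (a toy example; the participation ratio is one standard effective-dimensionality measure, and its use here, like the layer sizes, is an assumption) tracks dimensionality across layers of a random network with non-negative (ReLU) activities:

    import numpy as np

    def participation_ratio(acts):
        """Effective dimensionality of a (samples x units) activation matrix."""
        eig = np.clip(np.linalg.eigvalsh(np.cov(acts.T)), 0, None)
        return eig.sum() ** 2 / (eig ** 2).sum()

    rng = np.random.default_rng(0)
    acts = rng.standard_normal((500, 50))     # toy input stimuli
    dims = [participation_ratio(acts)]
    for shape in [(50, 200), (200, 200), (200, 100)]:
        w = rng.standard_normal(shape) * 0.1  # small weights ~ energy efficiency
        acts = np.maximum(acts @ w, 0.0)      # non-negativity of activities (ReLU)
        dims.append(participation_ratio(acts))
    print(dims)                               # dimensionality across the hierarchy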

Dr Michael Popov

OMCAN network; University of Oxford

Round Numbers and Representational Alignment: The Fundamentalness of Ramanujan's Theorems

Humans build not only models of external reality but also models of subjective experience (unconscious, imaginary feelings, inner states). This is a less well-known (at least in LLM alignment) sort of diverging representation, one that plays an important role in the clinical psychology of consciousness pathologies. According to clinical observations, aligned representations that spontaneously emerge from the unconscious are usually shaped by a special class of numbers called "round numbers" or "Jungian archetypal numbers" of the form 2, 4, 10, 12, 13, 14, 28, 32, 40. It is quite surprising that Srinivasa Ramanujan, in a short article entitled "Proof that almost all numbers n are composed of about log log n prime factors" (1917), attempted to find a mathematical theory of the round numbers in terms of his theory of superior highly composite numbers (1915). In agreement with Ramanujan, round numbers (from his field observations on "taxicab number theory") are composed of a considerable number of comparatively small prime factors, and they are exceedingly rare. In today's mathematics, Ramanujan's concept of the radical of an integer can be connected with the famous unsolved ABC problem; moreover, it may suggest a new approach to its resolution! Round numbers have remarkable characteristics (described by two of Ramanujan's theorems); hence, it is expected that Ramanujan's mathematics can essentially refine current psychological theories of consciousness as well as improve the superalignment concept in LLMs.
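
The number-theoretic quantities involved can be made concrete with a small sketch (Python; omega(n) is the number of distinct prime factors, whose normal order is about log log n, and the radical is the quantity appearing in the ABC conjecture; the sample numbers are taken from the list above plus Ramanujan's taxicab number 1729):

    import math

    def prime_factors(n):
        fs, d = [], 2
        while d * d <= n:
            while n % d == 0:
                fs.append(d)
                n //= d
            d += 1
        if n > 1:
            fs.append(n)
        return fs

    def radical(n):
        """Product of the distinct prime factors, as in the ABC conjecture."""
        r = 1
        for p in set(prime_factors(n)):
            r *= p
        return r

    for n in [4, 10, 12, 28, 40, 1729]:
        fs = prime_factors(n)
        print(n, fs, 'omega =', len(set(fs)),
              'log log n ~', round(math.log(math.log(n)), 2),
              'rad =', radical(n))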

Arvind Saraf

Attention Tag

Simulating the (Equanimous) Subconscious Mind

This talk proposes a modular software simulation model of the human mind as a reinforcement learning agent, with novel components around subconscious thinking & a reward function ("happiness score"). The proposed model incorporates prior work on modeling the human mind as a reinforcement learning (RL) agent, continual learning, & attention mechanisms. The RL agent receives sensory inputs from the experience of the external world & generates physical responses (movement of arms, legs, and other body parts, speech, etc.) to maximize the reward function ("happiness score"). The model is multimodal in its inputs, hierarchical (across senses, systems & timescales), and continually learning with controlled habit formation (plasticity). The blocks of this system are Sensory (input), Compute, Reward function, Subconscious, Attention, and Motor (output), each with well-defined inputs & outputs. The proposed model is a conjecture. We pose open questions & suggestions on implementing such a model in software, mapping prior public research onto it, vetting the model against known behavior, and allowing independent work on the happiness score & the subconscious module. A better understanding of what makes us happy could allow for better treatments & parallel advancements in computing & machine learning, leading to an overall better life. The model helps us understand the best environments for children & adults to learn in, including moral realignment. This work is inspired by some of the author's readings of ancient literature on meditation. More details are at https://arvindsaraf.medium.com/modelling-the-mind-e4237435f4b1
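
A minimal skeleton of the proposed architecture (Python; all module interfaces and the trivial placeholder bodies are assumptions for illustration, not the author's implementation):

    from dataclasses import dataclass, field

    @dataclass
    class MindAgent:
        habits: dict = field(default_factory=dict)   # controlled plasticity store

        def sensory(self, raw):            # multimodal input block
            return {'percepts': raw}
        def attention(self, percepts):     # gates what reaches deliberate compute
            return percepts
        def subconscious(self, percepts):  # fast habitual responses, if any
            return self.habits.get(str(percepts))
        def compute(self, attended):       # deliberate planning
            return {'act': 'respond', 'to': attended}
        def happiness(self, state):        # reward function ("happiness score")
            return 0.0
        def motor(self, action):           # physical output (speech, movement)
            return action

        def step(self, raw):
            p = self.sensory(raw)
            action = self.subconscious(p) or self.compute(self.attention(p))
            return self.motor(action), self.happiness({'in': p, 'out': action})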

Michael Yifan Li

Stanford University

Learning to Learn Functions

Humans can learn complex functional relationships between variables from small amounts of data. In doing so, they draw on prior expectations about the form of these relationships. In three experiments, we show that people learn to adjust these expectations through experience, learning about the likely forms of the functions they will encounter. Previous work has used Gaussian processes—a statistical framework that extends Bayesian nonparametric approaches to regression—to model human function learning. We build on this work, modeling the process of learning to learn functions as a form of hierarchical Bayesian inference about the Gaussian process hyperparameters.
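
A compact sketch of this idea (Python with scikit-learn; empirical-Bayes pooling of per-task hyperparameters is used here as a simplification of full hierarchical Bayesian inference, and the task distribution is synthetic):

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(1)
    scales = []                            # per-task length-scale estimates
    for task in range(20):                 # a sequence of smooth-function tasks
        x = rng.uniform(0, 10, (15, 1))
        y = np.sin(x).ravel() + 0.1 * rng.standard_normal(15)
        gp = GaussianProcessRegressor(kernel=RBF(1.0) + WhiteKernel(0.1))
        gp.fit(x, y)                       # type-II ML over GP hyperparameters
        scales.append(gp.kernel_.k1.length_scale)

    # 'Learning to learn': pooled scales form an empirical prior for a new task
    gp_new = GaussianProcessRegressor(kernel=RBF(float(np.mean(scales)))
                                      + WhiteKernel(0.1))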

University of Delhi

Computational Modeling of Hyperpolarizing Astrocytic Influence on Cortical Up-Down State Transitions

Understanding emergent behaviors arising from the network properties of neural systems frequently requires a combination of experimental and theoretical approaches, as exemplified by cortical Up-Down dynamics. These dynamics are characterized by periods of neuronal firing alternating with periods of silence, with spontaneous transitions between the two states even in the absence of external inputs. Building on an existing model of Up-Down dynamics in which bistability is imposed by a depolarising astrocyte population, we introduce a hyperpolarising astrocyte population to improve biological relevance. We created a computational rate model that includes populations of depolarising astrocytes, hyperpolarising astrocytes, and neurons. To optimize model parameters, we used Elementary Effects Analysis (EEA), followed by linear stability analysis to locate bistable regimes in the parameter hyperspace. The addition of hyperpolarising gliotransmission perturbed the model dynamics, indicating their sensitivity to qualitatively different architectures. Nevertheless, a bistable regime was identified in the dynamics continuum. According to the EEA, the strength of cell-population coupling is a low-sensitivity parameter, possibly due to neuroplastic changes. The threshold of the excitatory populations and the strength of adaptation, the model's primary intrinsic mechanisms for the emergence of Up-Down dynamics, are high-sensitivity parameters. Our model makes it possible to test biologically relevant theories of hyperpolarising gliotransmission, for which data remain scant. The differential sensitivities of the model parameters provide directions for further investigation into the mechanisms governing the emergence of Up-Down dynamics, neural information processing, and plasticity.
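
A minimal sketch of Elementary Effects (Morris) screening of the kind used here (Python; the three-parameter toy model standing in for the astrocyte-neuron rate model is an assumption):

    import numpy as np

    def elementary_effects(model, lo, hi, r=20, delta=0.1, seed=0):
        """Mean |elementary effect| per parameter ranks its sensitivity."""
        rng = np.random.default_rng(seed)
        k = len(lo)
        ee = np.zeros((r, k))
        for t in range(r):
            x = rng.uniform(lo, hi)
            base = model(x)
            for i in range(k):
                xp = x.copy()
                xp[i] = min(xp[i] + delta * (hi[i] - lo[i]), hi[i])
                ee[t, i] = (model(xp) - base) / delta
        return np.abs(ee).mean(axis=0)       # mu* per parameter

    def toy_model(p):                        # stand-in for, e.g., Up-state duration
        threshold, adaptation, coupling = p
        return 3 * threshold - 2 * adaptation ** 2 + np.tanh(coupling)

    mu_star = elementary_effects(toy_model, np.zeros(3), np.ones(3))
    print(mu_star)                           # larger => higher-sensitivity parameter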

NYU

Coherence influences the dimensionality of communication subspaces

The brain relies on communication between specialized cortical areas to accomplish complex cognitive tasks. There are two leading hypotheses for communication between cortical areas: 1) the communication through coherence (CTC) hypothesis, which posits that coherent oscillations are required for information propagation; and 2) the communication subspace (CS) hypothesis, which advances the idea that low-dimensional subspaces of population activity are responsible for communication across cortical areas. There is a clear divide between these two mechanisms; the CTC hypothesis, in particular, has been surrounded by considerable skepticism, with many authors reducing cortical oscillations to an epiphenomenon. Here, we reconcile these two mechanisms of communication through a spectral decomposition of communication subspaces. In our main result, we predict that coherence influences the dimensionality of the CS: dimensionality is lowest, and prediction performance is highest, at frequencies that exhibit a peak in coherence between a source and a target population. We arrive at these results by developing an analytical theory of communication for circuits described by stochastic dynamical systems exhibiting fixed-point solutions. We directly compute the predictive performance for the mean-subtracted activity of a target population from that of a source population and show that our predictions agree with experimental results. Then, via a band-pass-filtered version of the covariance matrix, we arrive at our main result. Hence, our theory makes experimentally testable predictions of how oscillations influence interareal communication while advancing a new hypothesis for the functional role of oscillatory activity in the brain.
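
The flavor of the analysis can be sketched numerically (Python; reduced-rank regression in the style of communication-subspace analyses, applied to band-pass-filtered activity, as a simplified empirical counterpart of the analytical theory; all settings are illustrative):

    import numpy as np
    from scipy.signal import butter, filtfilt

    def rrr_r2(X, Y, rank):
        """Reduced-rank regression R^2 for predicting target Y from source X."""
        B = np.linalg.lstsq(X, Y, rcond=None)[0]
        U, s, Vt = np.linalg.svd(X @ B, full_matrices=False)
        Yhat = U[:, :rank] * s[:rank] @ Vt[:rank]   # best rank-r fitted values
        return 1 - ((Y - Yhat) ** 2).sum() / ((Y - Y.mean(0)) ** 2).sum()

    def cs_dim(X, Y, fs, band, frac=0.95):
        """Rank needed to reach 95% of full predictive performance in a band."""
        b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype='band')
        Xb = filtfilt(b, a, X - X.mean(0), axis=0)  # mean-subtracted activity
        Yb = filtfilt(b, a, Y - Y.mean(0), axis=0)
        full = rrr_r2(Xb, Yb, min(Xb.shape[1], Yb.shape[1]))
        for r in range(1, Yb.shape[1] + 1):
            if rrr_r2(Xb, Yb, r) >= frac * full:
                return r
        return Yb.shape[1]

Comparing the rank returned by cs_dim across frequency bands tests the stated prediction: dimensionality should be lowest in bands where source-target coherence peaks.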

Shirin Vafaei

Osaka University

Brain-grounding of word embeddings for improving brain decoding of visual stimuli

Developing algorithms for accurate and comprehensive decoding of the neural representations of objects is one of the fundamental goals in neuroscience. Recent studies have demonstrated the feasibility of using neuroimaging and machine learning techniques to decode the neural activity evoked by visual stimuli (Horikawa and Kamitani 2017, Gauthier and Levy 2019). However, prediction accuracy depends strongly on how the labels of the visual stimuli are represented in the algorithms (Gauthier and Levy 2019). In current studies, labels are defined by word embedding vectors derived from neural network latent spaces that encode "distributional semantics" based on patterns of co-occurrence of words in large text corpora (Pennington, Socher, and Manning 2014, Mikolov et al. 2013). On the other hand, semantic meaning in the brain is conveyed through various modalities such as perception, imagination, action, hearing, or reading, and therefore the semantic space of the human brain, or brain space (Huth et al. 2012), is formed by incorporating information from diverse sources. In this study, we propose that by integrating features from the brain space into commonly used word embedding spaces, we can obtain new brain-grounded, more brain-like vector representations of labels; using these, decoders can better learn to map neural activity patterns to their corresponding embedding vectors, compared to cases where the original word embeddings are adopted.
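
A schematic of such a decoding pipeline (Python with scikit-learn; random arrays stand in for the neural patterns, text embeddings, and brain-space features, and concatenation is just one simple choice of grounding, not necessarily the authors' method):

    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)
    X = rng.standard_normal((150, 500))       # neural activity patterns
    E_text = rng.standard_normal((150, 300))  # e.g., GloVe/word2vec label vectors
    E_brain = rng.standard_normal((150, 50))  # features from the brain space

    # Brain-grounding by concatenating text- and brain-derived features
    E = np.hstack([E_text / E_text.std(0), E_brain / E_brain.std(0)])

    dec = Ridge(alpha=10.0).fit(X[:100], E[:100])    # train the decoder
    pred = dec.predict(X[100:])

    # Identify held-out stimuli by nearest neighbor in the embedding space
    sims = pred @ E[100:].T
    print((sims.argmax(1) == np.arange(50)).mean())  # identification accuracy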

Asit Pal

NYU

Feedback-Dependent Communication Subspace in a Multistage Recurrent Circuit Model Implementing Normalization

The brain relies on communication between cortical areas to achieve perceptual, cognitive, and motor function. Communication is supported by long-range reciprocal connections and, crucially, feedback connections have been hypothesized to influence perception by mediating attentional modulation. Here, we present a hierarchical recurrent neural circuit model (ORGaNICs) with feedback that implements divisive normalization exactly at each stage of its hierarchy (i.e., the response of each stage is given by the normalization equation exactly). We simulate exogenous attention by modulating the input gains of the sensory neurons, and endogenous attention by modulating the activity of the sensory neurons via the feedback drive.
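
The steady-state normalization equation that each stage is said to achieve can be sketched directly (Python; the recurrent ORGaNICs dynamics that implement it are not shown here, and the parameter values are illustrative):

    import numpy as np

    def normalized_response(drive, gain=1.0, sigma=0.5, n=2.0):
        """Divisive normalization; 'gain' models attentional input modulation."""
        d = (gain * drive) ** n
        return d / (sigma ** n + d.sum())

    contrasts = np.logspace(-2, 0, 9)
    for g in (1.0, 2.0):                # without vs. with (exogenous) attention
        resp = [normalized_response(np.array([c]), gain=g)[0] for c in contrasts]
        print('gain', g, np.round(resp, 3))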

We find that both exogenous and endogenous attention elevate the contrast response curve, and that attention has a larger effect on gain in higher cortical areas, consistent with neurophysiological measurements of attentional modulation in sensory cortex and psychophysical measurements of changes in performance with attention. However, the response gain differs between the two attention mechanisms. Additionally, we investigate the coherence between the hierarchical stages, observing a peak in the gamma band whose intensity increases with feedback (endogenous attention). Finally, we characterize the dimensionality of the subspaces of population activity within (private subspace) and between (communication subspace) sensory cortical areas, predicting that communication is enhanced by increased feedback (endogenous attention). In summary, our hierarchical recurrent neural network provides a robust and analytically tractable framework for exploring normalization, attention, and inter-areal communication.

University of Hyderabad

Gender Disparities in Spatial Cognition: The Influence of Stereopsis and Mental Rotation

Perceiving lines and objects in visual space is an example of spatial ability. Spatial abilities include the awareness of objects or external stimuli, depth perception, and motion (Colby et al., 1999). Stereopsis, a visual function that should be intact when assessing visuospatial abilities, allows a person with normal binocular vision to estimate the three dimensions of an object relative to the visual environment (Dunford et al., 1971). Mental rotation improves spatial comprehension through the ability to visualize an object's rotation in three dimensions. This study investigated the impact of the mental rotation task and stereopsis on gender disparities in spatial cognition. A total of n=60 individuals, including both males and females, participated in the stereoscopic evaluation and mental rotation task trials. The results of Experiment 1 indicated that participants from both gender groups exhibited similar binocular function parameters and normal depth perception. In Experiment 2, when spatial skills were assessed using the mental rotation task, male participants demonstrated superior spatial abilities and outperformed female participants. Furthermore, the male advantage in mental rotation tasks extended across various levels of rotation, suggesting that differences in the degree of this advantage between tests may be influenced by varying levels of complexity. Additionally, the study suggests that developing prototype-based tasks for spatial abilities can enhance spatial cognitive functions.

University of Oxford

(Non-)Convergence Results for Predictive Coding Networks

Predictive coding networks (PCNs) are (un)supervised learning models, coming from neuroscience, that approximate how the brain works. One major open problem around PCNs is their convergence behavior. In this paper, we use dynamical systems theory to formally investigate the convergence of a particular model of PCNs that has recently been turned into supervised-learning model, used in machine learning. In doing so, we put this model on a mathematically rigorous basis by developing a precise mathematical framework for this type of PCNs and show that for sufficiently small weights and initializations, PCNs converge. We provide the theoretical assurance that previous implementations, whose convergence was assessed solely by numerical experiments, can indeed capture the correct behavior of PCNs. Outside of the identified regime of small weights and small initializations, we show via a counterexample that PCNs can diverge.