Neural Data
The neural basis of cognition is not going to be solved by maths alone. We need rich behavioural data and flourishing collaborations between experimentally and theoretically minded folk. In this session, talks will explore new and exciting neural data—whether it be fMRI, electrophysiology, or otherwise—that may or may not yet have an explanation, with a particular focus on data that points to new computational paradigms in brain processing.
Session Chairs
Dr Valeria Fascianelli (Zuckerman Institute, Columbia)
Dr Francesca Mignacco (Princeton)
Keynote Talks
Professor Wolfgang Maass (Technische Universität Graz)
Professor Eve Marder (Brandeis University)
Invited Talks
Professor Alex Cayco-Gajic (ENS)
Dr Jenelle Feather (Flatiron Institute)
Dr Joao Barbosa (Institute for Neuromodulation)
Dr John Vastola (Harvard University)
Spotlight Talks
Isabel M Cornacchia (University of Edinburgh)
Philipp Werthmann (Institute for Neuromodulation)
Rajat Saxena (Norwegian University of Science and Technology)
Svenja Küchenhoff (University of Oxford)
Alon Baram (University of Oxford)
Ruben Tammaro (University of Tübingen, Germany)
Keynotes
Technische Universität Graz
Impact of BTSP (Behavioral Time Scale Synaptic Plasticity) on Models for Cognition
I will discuss three applications of BTSP to models in cognitive science: associative memory, models for binding (hyperdimensional computing), and models for online planning. Details can be found in:
Wu, Y., & Maass, W. A simple model for Behavioral Time Scale Synaptic Plasticity (BTSP) provides content addressable memory with binary synapses and one-shot learning. Nature Communications, 2025.
Yu, C., Wu, Y., Wang, A., & Maass, W. BTSP endows Hyperdimensional Computing with attractor features. bioRxiv, 2025.
Yang, Y., Stoeckl, C., & Maass, W. A surprising link between cognitive maps, successor-relation based reinforcement learning, and BTSP. bioRxiv, 2025.
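The core idea of the first reference, content-addressable memory built from binary synapses with one-shot storage, can be illustrated with a classical Willshaw-style associative memory. This is a sketch of the general principle only, not the BTSP model from the paper, and all parameter values are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, M = 200, 20, 15   # neurons, active units per pattern, stored patterns

# Sparse binary patterns, stored in one shot: a binary synapse is switched
# on whenever its pre- and post-synaptic units are co-active in a pattern.
patterns = np.zeros((M, N), dtype=int)
for p in patterns:
    p[rng.choice(N, size=K, replace=False)] = 1

W = np.zeros((N, N), dtype=int)
for p in patterns:
    W |= np.outer(p, p)   # one-shot, binary ("on/off") storage

def recall(cue, threshold):
    """Content-addressable retrieval: threshold the synaptic drive."""
    return (W @ cue >= threshold).astype(int)

# Cue with half of a stored pattern's active units.
target = patterns[0]
active = np.flatnonzero(target)
cue = np.zeros(N, dtype=int)
cue[active[: K // 2]] = 1

retrieved = recall(cue, threshold=K // 2)  # recovers the full pattern
```

Because patterns are sparse, the chance that a non-member unit receives drive from every cue unit is vanishingly small, so a half-pattern cue retrieves the complete stored pattern.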
Invited Speakers
Carnegie Mellon University
Optimizing stimuli to test models of perception
Abstract (Coming Soon)
ENS
A distributed learning framework for correcting and consolidating motor memories
Abstract (Coming Soon)
Institut de Neuromodulation and Neurospin
Leveraging in silico experiments to unveil distributed computations during flexible behavior
Previous work investigating the neural dynamics underlying context-dependent decision-making typically analyses a single brain region (most often PFC) or a single recurrent neural network (RNN). However, evidence suggests that the information required to solve these tasks is distributed across multiple regions. Here, we investigate the neural dynamics across six brain regions of the non-human primate brain where such distributed information has been observed. By examining within-region geometry and dynamics, we identified significant differences not captured by classical decoding analyses. Using surrogate causal perturbations on multi-regional RNNs trained on condition-averaged data, we explored how inter-area interactions shaped these different neural representations. Our findings reveal that even when task inputs were withheld from frontal regions during testing, these regions still encoded stimulus information and generated response codes, similar to brain data. Conversely, delivering inputs only to frontal regions or blocking across-region interactions led to network dynamics that represented stimuli but failed to solve the task and lacked attractor states for current contexts. Gradually disconnecting regions led to an abrupt breakdown of task-solving capabilities, analogous to spatial bifurcation phenomena. Perturbation experiments highlighted the differential contributions of various regions, offering predictive insights for future experimental validation. These results underscore the critical role of inter-regional communication in task performance and provide a framework for understanding distributed neural processing.
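The intuition behind lesioning across-region projections can be illustrated with a toy two-region network (an invented sketch, not the trained multi-regional RNNs from the talk): each region is locally stable on its own, and activity can only be sustained through inter-region feedback, so cutting the across-region connections abolishes the persistent dynamics.

```python
import numpy as np

n = 50  # units per region

# Two regions, each locally stable in isolation (local gain < 1), that can
# sustain activity only through inter-region feedback. All parameter values
# here are invented for illustration.
local, coupling = 0.5, 0.8
A = local * np.eye(n)
B = coupling * np.eye(n)
W_full = np.block([[A, B], [B, A]])          # intact network
W_cut = np.block([[A, 0 * B], [0 * B, A]])   # inter-area projections lesioned

# The leading eigenvalue decides whether activity persists or decays.
lam_full = np.max(np.abs(np.linalg.eigvals(W_full)))  # local + coupling > 1
lam_cut = np.max(np.abs(np.linalg.eigvals(W_cut)))    # local < 1

def final_norm(W, steps=30):
    """Run saturating dynamics x <- tanh(W x) from a uniform state."""
    x = np.ones(2 * n) / np.sqrt(2 * n)
    for _ in range(steps):
        x = np.tanh(W @ x)
    return np.linalg.norm(x)
```

With the coupling intact, activity settles into a sustained state; with the across-region block zeroed, the same initial condition decays to silence, even though no within-region weight changed.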
Harvard
Single-cell Learning, Habituation, and Design Principles for Intracellular Memory Storage
It is increasingly believed that even organisms without brains or nervous systems can learn and store memories, but the mechanisms responsible—which could include intracellular processes like gene regulation, post-translational modifications, and epigenetic marks—remain unclear. How might intracellular memory work? In this talk, we present experimental and theoretical progress on this question. Experimentally, we use the habituation behavior of the single-celled protist Stentor coeruleus as a model system to study memory. Stentor normally contracts in response to a mechanical stimulus, but may stop responding (i.e., habituate) after sufficiently frequent stimulation. Its contraction response can return if it is left unstimulated for sufficiently long, which allows it to be subjected to additional bouts of stimulation. Stentor is interesting because how quickly it habituates to a mechanical stimulus depends on whether it has habituated to a similar stimulus in the recent past (e.g., in a previous bout), which hints at the existence of at least two intracellular learning processes operating on distinct time scales. On the theory side, we use an information-theoretic framework and systems-biological knowledge about intracellular processes to propose three design principles relevant to any intracellular memory device that is intended to learn quickly, forget slowly, and be robust to noise.
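The signature described above, faster re-habituation after a rest period, falls out of any model with two memory traces on distinct time scales. Here is a minimal toy sketch of that idea (invented dynamics and parameters, not the Stentor model from the talk): response strength is suppressed by a fast trace that decays during rest and a slow trace that persists across bouts.

```python
def run_bout(n_stim, m_f, m_s, a_f=0.25, a_s=0.05, d_f=0.7, d_s=0.99):
    """One bout of stimulation. m_f / m_s are fast / slow memory traces
    that accumulate with each stimulus and suppress the response."""
    responses = []
    for _ in range(n_stim):
        responses.append(max(0.0, 1.0 - m_f - m_s))  # response strength
        m_f = d_f * m_f + a_f   # fast trace: learns and forgets quickly
        m_s = d_s * m_s + a_s   # slow trace: learns and forgets slowly
    return responses, m_f, m_s

# Bout 1 from a naive state: the organism starts fully responsive.
r1, m_f, m_s = run_bout(20, 0.0, 0.0)

# Rest period: the fast trace decays almost completely, the slow one persists,
# so responding recovers but habituation history is not entirely erased.
m_f, m_s = 0.01 * m_f, 0.8 * m_s

# Bout 2: habituation proceeds faster because the slow trace remains.
r2, _, _ = run_bout(20, m_f, m_s)
```

The first stimulus of bout 2 already evokes a much weaker response than the first stimulus of bout 1, mimicking the dependence on recent habituation history described in the abstract.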
Spotlight Talks
University of Edinburgh
Dynamical systems principles underlie the ubiquity of biological data manifolds
The manifold hypothesis posits that low-dimensional geometry can be ubiquitously found in high-dimensional data. In the biological sciences, such data often arise from complex interactions between molecules, neurons or agents, most naturally described as dynamical systems. However, while dynamical systems models offer mechanistic descriptions of the processes generating the data, the geometric perspective remains largely empirical, as it relies on dimensionality reduction methods to extract manifolds from data. The link between the dynamic and geometric views on biological systems has therefore remained an open question.
In this work, we argue that many manifolds observed in high-dimensional biological systems emerge naturally from the structure of their underlying dynamics. We provide a mathematical framework to characterise the conditions for a dynamical system to be manifold-generating. Using the framework, we verify in datasets across various biological scales, ranging from neuronal interactions to collective behaviours, that such conditions are often met. Next, we apply this framework to investigate the relationship between the dynamics and geometry of large-scale recordings of neural population activity. We identify a manifold in the visual cortex that remains stable over learning, despite the neural dynamics changing. Finally, to gain theoretical insight, we explore how structure and randomness shape manifold generation in low-rank dynamical systems.
Overall, we show that the low-dimensional manifolds seen in biological data result from dynamical systems principles, thus providing theoretical foundations for experimentally observed geometry.
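The link between low-rank dynamics and data manifolds can be seen in a minimal simulation (a generic sketch of the phenomenon, not the paper's mathematical framework; all parameters are arbitrary): a rank-2 recurrent network confines its drive to a two-dimensional subspace, so trajectories from many initial conditions collapse onto a low-dimensional manifold that PCA recovers from the high-dimensional activity.

```python
import numpy as np

rng = np.random.default_rng(2)
n, rank, gain = 100, 2, 2.0

# Rank-2 symmetric connectivity: the recurrent drive lives in span(m).
m, _ = np.linalg.qr(rng.standard_normal((n, rank)))
W = gain * (m @ m.T)

# Nonlinear dynamics from many initial conditions; keep post-transient states.
states = []
for _ in range(30):
    x = rng.standard_normal(n)
    for t in range(50):
        x = np.tanh(W @ x)
        if t >= 10:
            states.append(x.copy())
X = np.array(states)

# PCA: despite living in 100 dimensions, the collected states concentrate
# on a low-dimensional manifold generated by the low-rank dynamics.
Xc = X - X.mean(axis=0)
sv = np.linalg.svd(Xc, compute_uv=False)
var = sv ** 2 / np.sum(sv ** 2)
top5 = var[:5].sum()   # nearly all variance in the first few components
```

The tanh nonlinearity bends the subspace into a curved manifold, which is why a handful of principal components, rather than exactly `rank`, capture the variance.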
Institute for Neuromodulation
Estimating flexible across-area communication with neurally-constrained RNNs
Previous work investigating the neural dynamics underlying context-dependent decision-making typically analyses a single brain region (most often PFC) or a single recurrent neural network (RNN). However, evidence suggests that the information required to solve these tasks is distributed across multiple regions. Here, we investigate the neural dynamics across seven brain regions of the non-human primate brain where such distributed information has been observed. By examining within-region geometry and dynamics, we identified significant differences not captured by classical decoding analyses. Using surrogate causal perturbations on multi-regional RNNs trained on condition-averaged data, we explored how inter-area interactions shaped these different neural representations. Our findings reveal that even when task inputs were withheld from frontal regions during testing, these regions still encoded stimulus information and generated response codes, similar to brain data. Conversely, delivering inputs only to frontal regions or blocking across-region interactions led to network dynamics that represented stimuli but failed to solve the task and lacked attractor states for current contexts. Gradually disconnecting regions led to an abrupt breakdown of task-solving capabilities, analogous to spatial bifurcation phenomena. Perturbation experiments highlighted the differential contributions of various regions, offering predictive insights for future experimental validation. These results underscore the critical role of inter-regional communication in task performance and provide a framework for understanding distributed neural processing.
Norwegian University of Science and Technology
Enriched Experience Alters Population Coding and Synaptic Connectivity in Neocortex
Cognitive reserve is rooted in the lifetime accumulation of knowledge and helps the brain resist cognitive decline, but its neural correlates remain understudied. We used environmental enrichment as a proxy for knowledge acquisition and exposed young-adult mice to either an enrichment track (ET) or a control track (CT) for ten weeks. ET mice were exposed to a unique configuration of objects daily, allowing for a rich repertoire of experiences. In contrast, CT mice ran on a track with simple ramp hurdles that remained the same throughout. High-density silicon probe recordings from ET mice during awake rest and slow-wave sleep revealed increased population and lifetime sparsity and orthogonality in neocortex (NC) layer 5/6, but not in layer 2/3 NC or CA1. We observed an increase in reciprocal excitatory and inhibitory connections, accompanied by a reduction in the number of unidirectional excitatory-excitatory connections in NC. We propose a theoretical model linking these changes in synaptic connectivity to more selective cell-assembly formation and increased sparsity. Overall, these findings shed light on how enriched experience reshapes population coding dynamics and synaptic connectivity, leading to more efficient brain function and improved behavior.
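The abstract does not specify its sparsity measures; one common choice in this literature is the Treves-Rolls sparseness, sketched below. "Population" sparsity applies the formula across neurons within a time bin, "lifetime" sparsity applies it across time bins of a single neuron.

```python
import numpy as np

def treves_rolls_sparsity(r):
    """Sparsity in [0, 1]: 0 for a uniform rate vector, approaching 1 as
    activity concentrates in ever fewer units (Treves-Rolls sparseness)."""
    r = np.asarray(r, dtype=float)
    s = r.mean() ** 2 / np.mean(r ** 2)   # Treves-Rolls sparseness measure
    return (1 - s) / (1 - 1 / r.size)

# Population sparsity: apply across neurons within one time bin.
dense = np.ones(100)       # every neuron equally active
sparse = np.zeros(100)
sparse[:5] = 10.0          # five neurons carry all the activity

s_dense = treves_rolls_sparsity(dense)    # 0.0
s_sparse = treves_rolls_sparsity(sparse)  # close to 1
# Lifetime sparsity: the same formula applied to one neuron's rates over time.
```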
University of Oxford
Algorithmic representations in the human brain that underlie schema generalisation
Our brain has the remarkable capacity to generalise previously learned knowledge to novel situations. When attending Neuromonster for the first time, it will draw on generalisable knowledge about conferences abstracted from previous experiences, apply it to the current context in Split, and form a plan of useful behaviour. Recent single-cell recordings in rodent medial prefrontal cortex (mPFC) demonstrate an algorithmic representation, termed Structured Memory Buffers (SMB), that uses a combination of exactly these two elements to encode the animals’ future actions: the abstract task structure and the current context.
Here, using ultra-high field fMRI and iEEG, we test for this representation and its properties in humans. Using RSA and a computational model, we find the SMB-like representations in human mPFC and OFC, as well as a second representation, a pure abstraction of “location in task”, in human EC and OFC.
Preceding the SMB-like representation, action plans are ‘loaded’ into mPFC once subjects are given all action-relevant information, presumably through replay. To test this hypothesis, we next identified sharp-wave ripples in intracranial EEG data of human epilepsy patients (n=5) solving the same task. We find that ripple rate is increased when subjects are planning vs. when they are executing the task. Together, our findings suggest an algorithm for how human mPFC encodes future actions, and provide evidence for a replay-based mechanism of loading that representation.
University of Oxford
An abstract relational map emerges in the human medial prefrontal cortex with time
Understanding the structure of a problem, such as the relationships between stimuli, supports fast learning and flexible reasoning. Rodent and theoretical work has suggested that abstraction of structure away from sensory details occurs gradually, over time, in cortex. However, direct evidence of such explicit relational representations in humans is scarce, and its relationship to consolidation mechanisms is under-explored. Here, using fMRI, we found such a relational map in the human medial prefrontal cortex (mPFC). Importantly, this map emerged on the time scale of days – presumably through consolidation mechanisms.
Participants extensively trained on two graphs with different contexts, where nodes are visual objects and edges are associations between them. The objects and the graph structure were identical in both contexts, but pairwise distances between objects were decorrelated through their distribution on the graphs. This allowed us to simultaneously test where the brain represents the task-relevant and irrelevant graphs, in a task where relevance rapidly switches. Crucially, we could also test for an abstracted graph representation, reflecting the underlying common structure. To assess the role of consolidation in this abstraction, we examined these representations in two scanning sessions, at least 24 hours apart. In two fully independent analyses we found strong evidence that an abstract representation in mPFC emerged between the two scanning sessions. In addition, the medial temporal lobe represented both the relevant and irrelevant graphs, in both scanning sessions. These results shed new light on neural representations underlying the remarkable human ability to draw accurate inferences from little data.
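Testing for a graph representation with RSA boils down to correlating a neural dissimilarity matrix with a model dissimilarity matrix built from graph distances. A generic sketch of that logic on synthetic data (not the authors' analysis pipeline; the ring structure, voxel counts, and noise level are invented):

```python
import numpy as np

rng = np.random.default_rng(4)

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the activity patterns of each pair of conditions."""
    return 1 - np.corrcoef(patterns)

def rsa_score(patterns, model_rdm):
    """Correlate the upper triangles of the neural and model RDMs."""
    iu = np.triu_indices_from(model_rdm, k=1)
    return np.corrcoef(rdm(patterns)[iu], model_rdm[iu])[0, 1]

# Model RDM: circular distances between 8 locations on a ring graph.
k = 8
theta = 2 * np.pi * np.arange(k) / k
d = np.abs(theta[:, None] - theta[None, :])
model = np.minimum(d, 2 * np.pi - d)

# Synthetic voxel patterns that embed the ring geometry, plus noise.
coords = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # (8, 2)
P = rng.standard_normal((2, 60))                           # random embedding
patterns = coords @ P + 0.1 * rng.standard_normal((k, 60))

score = rsa_score(patterns, model)  # high when the geometry matches the model
```

In practice the neural RDM comes from voxel or electrode patterns per condition, and the score is compared against a permutation null rather than read off directly.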
University of Tübingen, Germany
Latent Connectivity Patterns in Prefrontal Microcircuits Reveal Independent Laminar Populations
Neurons in the mammalian cortex are organized in layers that differ in structure, function and connectivity. Anatomically, cortical microcircuits follow a stereotyped organization, leading to the hypothesis of a conserved computational motif. However, evidence for a consistent functional pattern of interactions remains elusive, especially in higher-order areas. A common method to infer neuronal interactions is Granger Causality (GC), yet its validity for laminar recordings - where electrodes are closely spaced - remains uncertain.
We investigated this by analyzing GC connectivity between laminar local field potentials (LFPs) recorded from the prefrontal cortex of non-human primates, using a source-mixing model. Our findings suggest that GC-based connectivity estimates may not reflect true inter-laminar interactions but instead arise from the spatial spread of recurrent activity within distinct laminar populations. Using cross-validated non-negative matrix factorization (cv-NNMF), we identified three dominant connectivity patterns, whose spatial arrangement corresponded well with physiological markers of laminar organization. This suggests that many GC-derived interactions reflect signal mixing rather than genuine connectivity.
These insights call for caution when interpreting GC in laminar recordings. However, they also open new avenues for improving analytical approaches - such as source separation techniques - to better reveal true inter-laminar interactions.
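Cross-validating a non-negative factorisation is usually done by holding out matrix entries and scoring reconstruction on them. The sketch below implements that generic recipe with masked multiplicative updates on synthetic data; it is an illustration of the idea, not the authors' exact cv-NNMF procedure, and all sizes and noise levels are invented.

```python
import numpy as np

def masked_nmf(X, rank, mask, iters=200, seed=0):
    """Multiplicative-update NMF fit only to entries where mask is True."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, rank)) + 0.1
    H = rng.random((rank, m)) + 0.1
    Mk = mask.astype(float)
    for _ in range(iters):
        WH = W @ H
        W *= ((Mk * X) @ H.T) / ((Mk * WH) @ H.T + 1e-9)
        WH = W @ H
        H *= (W.T @ (Mk * X)) / (W.T @ (Mk * WH) + 1e-9)
    return W, H

rng = np.random.default_rng(3)
# Synthetic connectivity-like matrix with 3 underlying patterns plus noise.
X = rng.random((40, 3)) @ rng.random((3, 30)) + 0.01 * rng.random((40, 30))

# Hold out 20% of entries and score each candidate rank on them.
mask = rng.random(X.shape) > 0.2
errors = {}
for rank in range(1, 7):
    W, H = masked_nmf(X, rank, mask)
    resid = (X - W @ H)[~mask]
    errors[rank] = np.sqrt(np.mean(resid ** 2))
# Held-out error typically bottoms out near the true number of patterns.
```

Held-out entries never enter the updates, so ranks beyond the true structure stop improving (or worsen) the held-out error, which is what makes the component count identifiable.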