Poster session 1 (physical)

NYU

The geometry and role of sequential activity in olfactory processing

Animals depend on their senses for survival. Mice, which rely on olfaction to navigate the world, can rapidly identify odors within a single sniff across a wide range of concentrations. In the mouse olfactory bulb (OB), mitral and tufted cells (MTCs) respond to odors by changing both the rate and timing of spikes relative to inhalation, resulting in reliable, odor-specific sequences that evolve over a single sniff. However, it remains unknown how sequential MTC activity is organized. Specifically, what defines the order in which MTCs fire, and what information is encoded in these sequences?

To address this, we performed 2-photon (2P) calcium imaging of hundreds of MTCs expressing the fast calcium indicator jGCaMP8f, permitting us to monitor the sub-sniff timing of responses to a diverse battery of odors. We constructed a space of MTC tuning using the pairwise correlations between MTC odor responses averaged over a single sniff. We then analyzed the propagation of sequences in this space and discovered that sequences originated in a set of similarly tuned neurons and propagated to more distantly tuned neurons, so that the latency of MTC activation was linearly related to distance in tuning space. Analyzing the concentration dependence of sequence propagation, we found that the early part of the sequences carried concentration-invariant information, while later MTC responses were inconsistent across concentrations. Finally, inspired by the discovery that similarly tuned MTCs are activated sequentially across odors, we constructed and analyzed a computational model of sequence-based unsupervised training of synapses from MTCs to the piriform cortex. This model revealed that sequential activity across the entire sniff permits perceptual generalization to novel odors and representational alignment between animals.
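
A minimal sketch of the tuning-space analysis described above, using placeholder data rather than the recorded MTC responses: pairwise correlations of sniff-averaged responses define a correlation distance between cells, and activation latency is regressed against distance from the earliest-firing cells. The array shapes and the seed-set size are illustrative assumptions.

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)
n_cells, n_odors = 200, 30
responses = rng.normal(size=(n_cells, n_odors))  # sniff-averaged odor responses (placeholder)
latency = rng.uniform(0.0, 0.3, size=n_cells)    # activation latency per cell (placeholder)

# Tuning space: correlation distance between the cells' odor response profiles
corr = np.corrcoef(responses)        # (n_cells, n_cells) pairwise correlations
dist = 1.0 - corr                    # correlation distance

# Seed set: the earliest-activated cells; distance of every cell to that set
seed = np.argsort(latency)[:10]
dist_to_seed = dist[:, seed].mean(axis=1)

# Test whether latency grows linearly with distance in tuning space
fit = linregress(dist_to_seed, latency)
print(f"slope={fit.slope:.3f}, r={fit.rvalue:.3f}, p={fit.pvalue:.2g}")
```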

Sabahaddin Taha Solakoglu

Hacettepe University Institute for Neurological Sciences and Psychiatry

Analysis and comparison of synaptic inputs from three brain regions onto mPFC dendrites in stress-resilient and stress-vulnerable mice

Stress decreases branching and spine numbers in the apical dendrites of layer II/III and layer V pyramidal neurons of the medial prefrontal cortex (mPFC). In light of recent studies suggesting that dendritic branches are the basic functional unit of neural computation, it is important to know from which brain regions these atrophied dendrites receive input. Using 3-color enhanced green fluorescent protein reconstitution across synaptic partners (eGRASP), we mapped the distribution of synaptic inputs from the ventral tegmental area (VTA), ventral hippocampus (VH), and basolateral amygdala (BLA) onto mPFC pyramidal neurons and studied differences between stress-susceptible (SS) and stress-resilient (SR) mice, classified by their social interaction scores following 10-day chronic social defeat stress.

We acquired confocal images spanning the full mPFC cortical column. After processing the dendritic and synaptic signals (spectral unmixing, deconvolution, 3D reconstruction, and segmentation), we obtained the spatial locations of those signals. We are currently quantifying synapse numbers and the distances between synapses on individual dendritic branches. Additionally, we plan to present differences in the spatial distribution of synapses between the SS and SR groups, both along dendritic branches and in 3D volume. Finally, we will discuss possible mathematical and computational approaches to interpret and model how dendritic computation changes with stress susceptibility. Our findings will help to clarify the mechanisms underlying stress susceptibility, i.e., why some individuals develop mental disorders when faced with stressors.
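
One way the synapse counts and inter-synapse distances mentioned above could be quantified; the puncta coordinates, branch assignments, and sample sizes here are placeholders, not the actual segmentation output.

```python
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(1)
# Each detected synapse: a branch assignment and a 3D position in microns (placeholder values)
branch_id = rng.integers(0, 5, size=100)
coords = rng.uniform(0.0, 50.0, size=(100, 3))

for b in np.unique(branch_id):
    pts = coords[branch_id == b]
    d = cdist(pts, pts)                 # pairwise 3D distances within the branch
    np.fill_diagonal(d, np.inf)         # ignore self-distances
    nn = d.min(axis=1)                  # nearest-neighbor distance per synapse
    print(f"branch {b}: n={len(pts)}, mean NN distance={nn.mean():.1f} um")
```

The same per-branch summaries could then be compared between SS and SR animals with a standard two-sample test.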

LMU Munich

Do Artificial Neural Networks Understand Each Other?

Language is a major means of exchanging information between agents. But how can we be sure that the first agent has the same understanding of what a cat or a dog is as the second agent? The idea pursued in this work is to have the first agent analyze an image and produce a text describing that image. The second agent then receives the text and simulates an image based on it; this latter process can be thought of as an embodiment process. We propose that communication is successful if the image simulated by the second agent is close to the image the first agent received as input. Specifically, we assess mutual understanding in state-of-the-art multimodal generative models, focusing on text-to-image generative models such as DALL-E and image-to-text captioning models such as Flamingo. We evaluate their ability to reconstruct images based solely on language communication. Our findings indicate that both models exhibit a certain level of understanding and that this process improves the performance of both models. Our work highlights the effectiveness of language, i.e., symbolic sentences, for conveying information: image content can be effectively compressed into a sentence, and a sentence can be reconstructed as an image. This latter step can be considered a form of grounded cognition or embodiment.
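
A minimal sketch of the communication loop described above; `caption_model`, `text_to_image_model`, and `embed` are hypothetical stand-ins for models such as Flamingo, DALL-E, and an image encoder, and the cosine similarity used here is only one possible closeness criterion.

```python
import numpy as np

def caption_model(image):          # hypothetical: agent 1 maps an image to a text description
    raise NotImplementedError

def text_to_image_model(text):     # hypothetical: agent 2 simulates an image from the text
    raise NotImplementedError

def embed(image):                  # hypothetical: image -> feature vector for comparison
    raise NotImplementedError

def communication_score(image):
    """Agent 1 describes the image; agent 2 re-imagines it from the text alone.
    Communication counts as successful if the reconstruction is close to the original."""
    text = caption_model(image)
    reconstruction = text_to_image_model(text)
    a, b = embed(image), embed(reconstruction)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```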

Francesco Guido Rinaldi

SISSA

Intuitive Interpretation in Uncertain Environments: A Bayesian Perspective

In order to survive, animals constantly face decisions in which they must choose between competing interpretations of noisy and sparse sensory data. In statistics, multiple well-established formal frameworks exist to tackle this problem, known as model selection (MS). However, despite the abundance of theoretical methods, the way in which MS is performed by the brain is poorly understood. In this study, we investigate how naïve humans perform MS when asked to guess the number of hidden sources behind a noisy stream of sensory data. We describe human behavior in terms of the Bayesian MS framework: this allows us to quantify the strategies employed by humans and compare them to the mathematical optimum prescribed by the theory. Our group has previously used this methodology to demonstrate that humans, in accordance with all MS frameworks, are biased towards simpler models. Preliminary results from our new study confirm the presence of this bias in a task with a wider set of models to choose from. This new set-up allows us to better analyze the effect of model properties on human behavior. Specifically, we find that, in general, the number of parameters of a model linearly penalizes the choice of that model, as prescribed by MS criteria such as AIC and BIC. Moreover, the stimuli we used can be delivered through multiple sensory modalities. This will make it possible to determine whether human MS is a high-level cognitive function or whether each sensory modality has its own independent interpretation strategy.
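
A small worked example of the complexity penalty mentioned above: both AIC and BIC penalize a model linearly in its number of parameters k, so that among models with nearly equal fit the simpler one wins. The log-likelihood values below are illustrative, not task data.

```python
import numpy as np

def aic(log_likelihood, k):
    return 2 * k - 2 * log_likelihood

def bic(log_likelihood, k, n):
    return k * np.log(n) - 2 * log_likelihood

n = 100  # number of observations
for k, logL in [(1, -120.0), (2, -118.0), (4, -117.5)]:
    print(f"k={k}: AIC={aic(logL, k):.1f}, BIC={bic(logL, k, n):.1f}")
# k=1: AIC=242.0, BIC=244.6
# k=2: AIC=240.0, BIC=245.2
# k=4: AIC=243.0, BIC=253.4
# BIC favors the simplest model; AIC tolerates one extra parameter here.
```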

Declan Campbell

Princeton University

Unraveling geometric reasoning: A neural network model of regularity biases

Uniquely among primates, humans possess a remarkable capacity to manipulate abstract structure in service of task goals across a broad range of behaviors. Previous studies have demonstrated that this sensitivity to abstract structure extends to the visual perception of geometric forms. These studies have highlighted a uniquely human bias toward geometric regularity, whereby task performance is enhanced for more regular and symmetric forms compared to their geometrically irregular counterparts. Computational models of geometric reasoning have revealed that a wide array of neural network architectures fail to replicate the performance biases and generalization exhibited by humans. In this study, we propose a neural network architecture augmented with a simple relational inductive prior. When trained with an appropriate curriculum, this model demonstrates similar biases towards symmetry and regularity in two distinct tasks involving abstract geometric reasoning. Our findings indicate that neural networks, when equipped with the necessary training objectives and architectural elements, can replicate human-like regularity biases and generalization. This approach provides valuable insights into the neural mechanisms underlying geometric reasoning and offers a promising alternative to prevailing symbolic language-of-thought models in this domain.
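
A minimal sketch (not the authors' architecture) of one way to equip a network with a relational inductive prior: encode all ordered pairs of a shape's vertices with a shared MLP and sum the pairwise codes, Relation-Network style, so the representation is built from relations between parts rather than raw coordinates.

```python
import torch
import torch.nn as nn

class PairwiseRelationEncoder(nn.Module):
    def __init__(self, d_in=2, d_hidden=64, d_out=32):
        super().__init__()
        self.g = nn.Sequential(nn.Linear(2 * d_in, d_hidden), nn.ReLU(),
                               nn.Linear(d_hidden, d_out))

    def forward(self, vertices):                  # vertices: (batch, n_vertices, d_in)
        b, n, d = vertices.shape
        vi = vertices.unsqueeze(2).expand(b, n, n, d)
        vj = vertices.unsqueeze(1).expand(b, n, n, d)
        pairs = torch.cat([vi, vj], dim=-1)       # all ordered vertex pairs
        return self.g(pairs).sum(dim=(1, 2))      # permutation-invariant relational summary

# Example: embed a batch of quadrilaterals given as 2D vertex coordinates
quads = torch.rand(8, 4, 2)
print(PairwiseRelationEncoder()(quads).shape)     # torch.Size([8, 32])
```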