To integrate or to differentiate, that is the question.
At each moment, the brain receives sensory information that it must either integrate or keep separate. For example, seeing a bird and hearing its song allows the brain to infer, through multisensory integration, that both signals come from a single cause (the bird). Conversely, seeing a cat while hearing a bird requires dissociating the visual and auditory sources into two distinct representations. This ability of our brain to correctly determine whether or not two sensory signals share the same origin (the bird alone, or the bird and the cat) is well explained if we consider that our brain is Bayesian: it constantly computes the probabilities of the possible causes of what it observes. Schematically, in our first example, the brain retains the hypothesis "the song and the sight of the bird come from the same animal" because it assigns it the highest probability among all the possible causes.
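To make this computation concrete, here is a minimal sketch in Python of Bayesian causal inference over two sensory cues (for example, their estimated positions or onset times): it compares the evidence for "one common cause" against "two independent causes". All numbers below (noise levels, prior width, the 50/50 prior on a common cause) are illustrative assumptions, not values from the study.

import numpy as np
from scipy.stats import norm

# Toy Bayesian causal inference over two sensory cues (illustrative values only).
def p_common_cause(x_aud, x_vis, sigma_aud=1.0, sigma_vis=1.0,
                   prior_mu=0.0, prior_sigma=10.0, p_common=0.5):
    # Grid over the hypothetical source of the signals (e.g. its position).
    s = np.linspace(prior_mu - 5 * prior_sigma, prior_mu + 5 * prior_sigma, 2001)
    ds = s[1] - s[0]
    prior = norm.pdf(s, prior_mu, prior_sigma)

    # Evidence for "one common cause": both cues arise from the same source s.
    ev_common = np.sum(norm.pdf(x_aud, s, sigma_aud) *
                       norm.pdf(x_vis, s, sigma_vis) * prior) * ds

    # Evidence for "two independent causes": each cue has its own source.
    ev_aud = np.sum(norm.pdf(x_aud, s, sigma_aud) * prior) * ds
    ev_vis = np.sum(norm.pdf(x_vis, s, sigma_vis) * prior) * ds
    ev_separate = ev_aud * ev_vis

    # Bayes' rule: posterior probability that the cues share one cause.
    return ev_common * p_common / (ev_common * p_common +
                                   ev_separate * (1 - p_common))

print(p_common_cause(0.2, -0.1))  # similar cues: a common cause is likely
print(p_common_cause(0.2, 8.0))   # discrepant cues: a common cause is unlikely

When the two cues are close, the "common cause" hypothesis wins and the signals should be integrated; when they are far apart, the "independent causes" hypothesis wins and the signals should be kept separate.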
But how does our brain manage to selectively combine the signals that need to be combined within a continuous stream of multisensory information? How does it detect correlations? And how does it handle spatio-temporal conflicts (I see a cat at the same time as I hear a bird, or I hear a bird before I see it)? The neural mechanisms that allow information to be integrated or segregated during the analysis of complex scenes remain largely unknown, in part because Bayesian models are mostly static: they do not take into account how sensory information evolves over time, and they do not generate predictions about neural dynamics.
A model at the neuron level?
Converging evidence shows that multisensory integration is stronger when the input signals are correlated in time and space. A dynamic algorithm called the "multisensory correlation detector" could be a good model of how the neurons involved in these operations integrate (causal inference) and segregate (temporal ordering) information. In a study published in Nature Communications, the Cognition and Brain Dynamics team (UNICOG/NeuroSpin) tested whether the two types of parallel neuronal dynamics predicted by the model (one explaining integration, the other segregation of multisensory signals) take place at the scale of neuronal populations. The team developed a multivariate, time-resolved encoding-model approach combined with non-invasive, high-temporal-resolution neuroimaging (magnetoencephalography, MEG). Participants judged whether sequences of auditory and visual signals originated from the same source (causal inference) or whether one sensory modality preceded the other (temporal ordering), while their brain activity was recorded with MEG. First, the researchers confirm that the multisensory correlation detector does explain the behavioral judgments of causal inference and temporal order. Second, they find a strong match between brain activity in temporoparietal cortices and the multisensory correlation detector. Finally, they describe an asymmetry in the quality of this match, which is better during the causal inference task than during the temporal order judgment task. A simplified sketch of such a detector is given below.
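As an illustration of the kind of computation such a detector performs, here is a simplified Python sketch, loosely following the Hassenstein-Reichardt-style architecture commonly used for this class of model: each unisensory input is low-pass filtered, two mirror-symmetric subunits multiply the filtered signals, and their product provides a correlation readout (related to causal inference) while their difference provides a lag readout (related to temporal order). The time constants, stimuli and exact combination rule below are assumptions for illustration, not the parameters fitted in the study.

import numpy as np

def lowpass(signal, tau, dt=0.001):
    # Leaky (exponential) low-pass filter, applied sample by sample.
    out = np.zeros_like(signal, dtype=float)
    for t in range(1, len(signal)):
        out[t] = out[t - 1] + (dt / tau) * (signal[t - 1] - out[t - 1])
    return out

def mcd_outputs(aud, vis, tau_fast=0.05, tau_slow=0.15, dt=0.001):
    # Each modality is filtered with a fast and a slow time constant.
    a_fast, a_slow = lowpass(aud, tau_fast, dt), lowpass(aud, tau_slow, dt)
    v_fast, v_slow = lowpass(vis, tau_fast, dt), lowpass(vis, tau_slow, dt)
    # Two mirror-symmetric subunits: one favours "audition leads",
    # the other "vision leads" (the slower filter acts as a delay).
    u_av = a_slow * v_fast
    u_va = v_slow * a_fast
    mcd_corr = u_av * u_va   # correlation unit -> causal inference
    mcd_lag = u_av - u_va    # lag unit -> temporal order
    return mcd_corr, mcd_lag

# Toy stimuli: three 20-ms beeps and three 20-ms flashes, vision lagging by 80 ms.
dt = 0.001
t = np.arange(0.0, 1.0, dt)
aud = np.zeros_like(t)
vis = np.zeros_like(t)
for onset in (100, 400, 700):
    aud[onset:onset + 20] = 1.0
    vis[onset + 80:onset + 100] = 1.0

corr, lag = mcd_outputs(aud, vis, dt=dt)
print(corr.mean(), lag.mean())  # time-averaged correlation and lag readouts

In the study, it is the temporal dynamics of such correlation and lag readouts that were compared, via the encoding model, with the behavioral judgments and with the MEG activity recorded during the two tasks.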
Overall, the results suggest the existence of multisensory correlation detectors in the human brain, explaining how and why causal inference is driven by the temporal correlation of multisensory signals.
European funding
This work was carried out in the framework of the ERC Starting Grant MINDTIME, coordinated by Virginie van Wassenhove.