How does the brain predict?
Being able to correctly predict what is going to happen allows us to make better decisions, to perceive the world around us more accurately, and to react more quickly to events. In many situations, our brain manages to make near-optimal predictions based on what it has already observed. How does it do this when it must regularly cope with randomness and with complex, often latent (i.e. hidden) relationships between the elements of the environment?
On the one hand, we can consider that the brain looks for the ideal solution: it uses Bayesian inference, which estimates the probability of the causes of events from previous observations in order to make statistically optimal predictions. But Bayesian inference can only be applied if we know the statistical model that characterizes our observations, and it is often very difficult to compute.
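To make this concrete, here is a minimal sketch of Bayesian prediction in the simplest possible setting: observing a binary sequence and predicting the next outcome. It assumes a Bernoulli model with a uniform Beta(1, 1) prior, an assumption made purely for illustration; the environments studied in this line of work have richer, latent structure.

```python
def bayesian_predict(observations):
    """Return P(next observation = 1) under a Beta-Bernoulli model."""
    alpha, beta = 1, 1  # Beta(1, 1): uniform prior over the hidden probability
    for x in observations:
        if x == 1:
            alpha += 1  # posterior update after observing a 1
        else:
            beta += 1   # posterior update after observing a 0
    # The posterior mean of the Bernoulli parameter is the
    # statistically optimal prediction under this model.
    return alpha / (alpha + beta)

print(bayesian_predict([1, 1, 0, 1]))  # → 0.666...
```

Even in this toy case, optimality depends on knowing the right statistical model; with hidden changes in the environment, the exact computation quickly becomes intractable.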
On the other hand, we can consider that the brain quickly looks for an acceptable (and therefore not necessarily ideal) solution by working heuristically. In many situations, however, such heuristics are not accurate enough and do not generalize.
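A classic example of such a heuristic is the "delta rule": nudge the current estimate toward each new observation with a fixed learning rate. The sketch below (the fixed rate of 0.1 is an arbitrary choice for illustration) shows why it is fast and simple, and also why it can fail: a fixed learning rate cannot adapt when the volatility of the environment changes.

```python
def delta_rule_predict(observations, learning_rate=0.1):
    """Heuristic estimate of P(next observation = 1) via a delta rule."""
    estimate = 0.5  # neutral initial guess
    for x in observations:
        # Move the estimate a fixed fraction of the way toward
        # the latest observation, forgetting older data gradually.
        estimate += learning_rate * (x - estimate)
    return estimate

print(delta_rule_predict([1, 1, 0, 1]))  # → 0.58195
```

Unlike the Bayesian solution, this rule carries no notion of uncertainty and no model of latent structure, which is precisely where it stops generalizing.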
TOWARDS A GENERALIZABLE NEURAL NETWORK MODEL?
Two researchers from UNICOG/NeuroSpin (The Computational Brain team) therefore asked whether there is a generalizable and biologically plausible model that makes simple, efficient predictions across different environments and reproduces the qualitative features of the brain's near-optimal predictions. To do this, they examined recurrent artificial neural network models trained on sequence prediction tasks analogous to those used in human cognitive studies.
They show that a specific recurrent neural network architecture finds simple and accurate solutions in several environments. This architecture relies on three mechanisms:
- the gating of connections according to the state of the network, which allows multiplicative interactions between network units,
- lateral connections that allow the activities of different recurrent units of the network to interact with each other,
- and the learning of these connections during training.
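The three mechanisms above can be sketched with a GRU-style gated recurrent unit. This is an illustrative stand-in, not the authors' exact model: the sigmoid gate implements state-dependent, multiplicative gating; the recurrent weight matrices implement lateral connections between units; and all four matrices are the quantities that training would adjust (here they are just randomly initialized).

```python
import numpy as np

def gated_step(h, x, params):
    """One update of a small gated recurrent network."""
    Wg, Ug, Wh, Uh = params
    # (1) Gating: a gate computed from the current state and input
    # multiplies activities -> multiplicative interactions between units.
    g = 1.0 / (1.0 + np.exp(-(Wg @ x + Ug @ h)))
    # (2) Lateral connections: Uh lets recurrent units interact.
    h_cand = np.tanh(Wh @ x + Uh @ h)
    # (3) Learning: Wg, Ug, Wh, Uh are what training would adjust.
    return (1.0 - g) * h + g * h_cand

rng = np.random.default_rng(0)
n, d = 4, 1  # 4 recurrent units, scalar input
params = [rng.standard_normal((n, d)), rng.standard_normal((n, n)),
          rng.standard_normal((n, d)), rng.standard_normal((n, n))]
h = np.zeros(n)
for x in [1.0, 0.0, 1.0]:  # a short binary sequence, as in the tasks above
    h = gated_step(h, np.array([x]), params)
print(h.shape)  # (4,)
```

Note how the gate `g` rescales the update to the state: this is how such a network can change its effective learning rate from one moment to the next without changing any connection weights.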
Like the human brain, such networks develop internal representations of their changing environment (including estimates of latent variables and of the accuracy of those estimates), exploit multiple levels of latent structure, and adapt their effective learning rate to changes without altering their connection weights. Because gating and recurrence are ubiquitous in the brain, this "gated recurrence" model could serve as a generic building block for prediction in real-world environments. These novel and remarkable results open new avenues for cognitive modeling.
European funding: This work was carried out in the framework of the ERC Starting Grant NEURAL-PROB, awarded to Florent Meyniel.
Contact (Joliot researcher):
Florent Meyniel
florent.meyniel@cea.fr