On hearing a melody, the awake brain uses the regularities in a sound sequence to predict the sounds that will follow. This predictive ability is organized hierarchically across a set of brain areas. If a sound disrupts the regularity of the sequence, the brain emits a series of prediction error signals responsible, among other things, for novelty and surprise responses. Previous electroencephalography studies have described at least two successive error signals: the Mismatch Negativity (MMN) and the P300. The MMN has previously been observed in people in an unconscious state (including coma patients), whereas the P300 is specific to conscious processing, since it reflects the integration of information across a large-scale cerebral network extending beyond the auditory regions.
During sleep, environmental sounds are not consciously perceived. However, we do not know at what level the brain stops integrating these sounds, nor whether it can still extract the available regularities and use them for prediction. This specific aspect of brain function has been tested by a team at NeuroSpin (Inserm/CEA), in collaboration with the Centre du sommeil et de la vigilance (Sleep and Vigilance Center) at the Hôtel-Dieu in Paris (AP-HP), the ICM (Institut du cerveau et de la moelle épinière, the Brain and Spinal Cord Institute), the Collège de France and the Paris-Sud and Paris-Descartes Universities. The researchers used electro- and magnetoencephalography (EEG/MEG) to study the prediction error signals (MMN and P300) in subjects while awake and while asleep.
The researchers asked the volunteers to fall asleep inside the magnetoencephalography machine at NeuroSpin while being played a series of repetitive sounds. The results confirm that the P300 is a marker specific to conscious sound processing, since it vanishes as soon as the subject falls asleep, at the point where they stop responding to sounds. The MMN, on the other hand, was observed during all stages of sleep (slow-wave sleep and paradoxical, or REM, sleep). However, this signal is only partly preserved: certain brain areas that would normally be active during wakefulness no longer respond to the sound stimulus. The activity peak produced by a prediction error during wakefulness disappears during sleep. Only passive sensory adaptation phenomena persist, located in the primary auditory areas.
The researchers have therefore demonstrated that, owing to a breakdown in communication between brain areas, the brain is no longer capable of predictive coding during sleep. It is, however, still able to represent sounds within the auditory areas and become habituated to them if they are frequently repeated, which explains why an alarm wakes us up but the regular ticking of a clock does not.
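The clock-versus-alarm contrast can be illustrated with a toy simulation of passive sensory adaptation. This is a minimal sketch under simple assumptions; the function name, adaptation rate, and response floor are illustrative choices, not values from the study: the response to a repeated sound decays with each repetition (habituation), while any change in the sound restores the full response.

```python
def simulate_responses(sequence, adapt_rate=0.6, floor=0.1):
    """Toy model of passive auditory adaptation (illustrative only).

    Returns one response amplitude per sound: repeats of the same
    sound are progressively attenuated, while a novel sound elicits
    a full-amplitude response again.
    """
    responses = []
    amplitude = 1.0
    prev = None
    for sound in sequence:
        if sound == prev:
            # Habituation: repeated sound, response decays toward a floor
            amplitude = max(floor, amplitude * adapt_rate)
        else:
            # Novel sound: response recovers fully (the "alarm" case)
            amplitude = 1.0
        responses.append(amplitude)
        prev = sound
    return responses

# A ticking clock: the same sound repeated, so the response fades
ticks = simulate_responses(["tick"] * 6)

# An alarm after repeated ticks: the response rebounds to full amplitude
alarm = simulate_responses(["tick"] * 5 + ["alarm"])
```

In this sketch the sixth tick evokes only a floor-level response, whereas the alarm sound, arriving after the same five ticks, evokes a full response again, mirroring how adaptation in the auditory areas lets a monotonous stimulus fade while a deviant one still stands out.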
Reconstruction, using magnetoencephalography, of the cortical sources of the error signals. The signals marking a prediction error (the intermediate MMN effect and the P300) vanish during sleep. Only the passive sensory adaptation mechanisms (the early and late MMN effects), confined to the auditory areas, remain intact. (Times are given in milliseconds and measure the response latency to the sound.)