Predicting syllables and silences: an ERP study

 

Vittoria Spinosa and Iria SanMiguel

 

Institute of Neurosciences and Dep. of Clinical Psychology and Psychobiology, University of Barcelona

 

The human brain processes self-generated stimuli differently from stimuli generated by external sources. In particular, neural responses to self-generated sounds are attenuated. In humans, the self-generated sound par excellence is language. Here, we investigate the neural mechanisms underlying the differential processing of self-generated sounds, which likely contribute to the self-monitoring of speech production. Current theories propose that the brain constructs an internal representation of the external world in order to guide our actions. Using this representation, we generate predictions about the sensory consequences of our motor acts. Neural responses to stimuli that match these predictions (e.g., predictable self-generated speech sounds) are attenuated, whereas error-related responses are elicited when our motor acts have unexpected sensory consequences. In the present study, we measured event-related potentials elicited by the auditory presentation of a syllable, self-triggered by the participant pressing one of two buttons. We manipulated the press-effect contingencies such that one button predicted the presentation of the sound and the other its absence. We thus investigated the differences between predicting the presence vs. the absence of a verbal stimulus after a motor act, as well as the violation of each of these predictions. The results corroborate the attenuation of neural responses to predicted self-generated sounds, and show differences between the error signals elicited by the unexpected presentation and the unexpected omission of the self-generated sound.
