Brain Polyphony: neuroscience, technology, computer science and creativity come together to support people with disabilities

Brain Polyphony seeks to promote multidisciplinary approaches and to bring basic research closer to patients and society.
Research
(09/07/2015)

The final aim of the Brain Polyphony project is to develop an alternative communication system that allows people with cerebral palsy to communicate with those around them. Scientists from the University of Barcelona (UB), the Centre for Genomic Regulation (CRG), the company Starlab and the Hospital del Mar Medical Research Institute of Barcelona (IMIM) are participating in the project.

This highly interdisciplinary team, led by Mara Dierssen, head of the Cellular & Systems Neurobiology Group at the CRG, is developing a system that translates brain waves into sound. The project seeks to promote multidisciplinary approaches and to bring basic research closer to patients and society, especially at an early stage. The scientists are carrying out the project together with healthy volunteers and the association Pro-Personas con Discapacidades Físicas y Psíquicas (ASDI) from Sant Cugat del Vallès.

“At the neuroscientific level, our challenge with Brain Polyphony is to correctly identify the electroencephalographic signals, that is, the brain activity, that correspond to certain emotions,” explains Mara Dierssen, head of the project. “The idea is to translate this activity into sound and then to use this system to allow people with disabilities to communicate with the people around them.” This alternative communication system based on sonification could be useful not only for patient rehabilitation but also for additional applications, such as diagnosis. “Of course, the technological and computational aspects are also challenging,” adds Dierssen. “We have to ensure that both the device and the software that translates the signals give us robust and reproducible signals, so that we can provide this communication system to any patient.”
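
The article does not describe the actual signal-processing pipeline, so the following Python sketch is purely illustrative of the general idea outlined above: derive a coarse emotion estimate from EEG activity and map it to sound parameters. The use of frontal alpha asymmetry as a valence proxy, the function names, the thresholds and the synthetic data are assumptions for this example, not the project's method.

import numpy as np

def bandpower(channel, fs, low, high):
    # Average spectral power of one EEG channel in the [low, high) Hz band.
    freqs = np.fft.rfftfreq(channel.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(channel)) ** 2
    band = (freqs >= low) & (freqs < high)
    return psd[band].mean()

def estimate_emotion(left_frontal, right_frontal, fs=256):
    # Crude valence estimate from frontal alpha (8-12 Hz) asymmetry; illustrative only.
    alpha_left = bandpower(left_frontal, fs, 8, 12)
    alpha_right = bandpower(right_frontal, fs, 8, 12)
    valence = np.log(alpha_right + 1e-12) - np.log(alpha_left + 1e-12)
    return "positive" if valence > 0 else "negative"

def emotion_to_sound(emotion):
    # Map an emotion label to simple sound parameters: (pitch in Hz, tempo in BPM).
    return {"positive": (440.0, 120), "negative": (220.0, 60)}[emotion]

# Synthetic data standing in for two frontal EEG channels sampled at 256 Hz.
rng = np.random.default_rng(0)
left, right = rng.standard_normal(1024), rng.standard_normal(1024)
print(emotion_to_sound(estimate_emotion(left, right)))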

The sound of our brain

Other signal-transduction systems based on brain-computer interfaces are currently being tested for people with disabilities. However, most of these systems require a certain level of motor control, for example through eye movement, which represents a major constraint for people with cerebral palsy.

Until now, the device has been tested mainly with healthy people, but the most recent tests with people with disabilities have yielded pleasantly surprising results. It is the only system that creates sound based on a person’s emotions, as measured by electroencephalography signals and cardiac response, with no requirement for motor control by the patient.
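
As a hedged illustration of how EEG and cardiac measurements might be combined into an emotion label without any motor input from the user, here is a minimal Python sketch; the scores, thresholds and labels are assumptions chosen for the example, not the system's actual classifier.

from dataclasses import dataclass

@dataclass
class Reading:
    eeg_valence: float     # e.g. a frontal-asymmetry score, arbitrary units
    heart_rate_bpm: float  # from the cardiac sensor

def classify(reading, resting_hr=70.0):
    # Place the reading in a simple valence/arousal quadrant.
    arousal_high = reading.heart_rate_bpm > resting_hr + 10
    valence_positive = reading.eeg_valence > 0
    if valence_positive and arousal_high:
        return "excited"
    if valence_positive:
        return "calm"
    if arousal_high:
        return "distressed"
    return "sad"

print(classify(Reading(eeg_valence=0.4, heart_rate_bpm=88)))  # -> excited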

The device was also presented at the 2015 Sónar festival in Barcelona, where it enhanced the artistic expression of the event by allowing users to “hear” the music created by their emotions. Efraín Foglia, professor in the Department of Design and Image at the UB and researcher in the BR::AC research group (Barcelona Research Art & Creation), pointed out: “The mere fact that we are able to hear our brains ‘talk’ is a complex and interesting experience. With Brain Polyphony, we are able to hear music broadcast directly from the brain. This is a new form of communication that will take on a unique dimension if it also enables people with cerebral palsy to communicate.”

Unlike other existing systems for sonifying brain signals, Brain Polyphony allows us to “hear” brain waves directly. “For the first time, we are using the actual sound of the brain waves: we shift them up by octaves, amplifying them until they reach the range audible to the human ear, so that what we hear is really what is happening in our brain. The project aims to obtain this sound and to identify a recognizable pattern for each emotion that we can translate into code words. And all of this happens in real time,” explained David Ibáñez, researcher and project manager at Starlab.
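
To make the octave-shifting idea in the quote concrete, here is a minimal Python sketch. The target range of roughly 110 to 880 Hz is an assumption for illustration; the article does not specify the range the team actually uses.

def octave_shift_to_audible(freq_hz, target_low=110.0, target_high=880.0):
    # Raise (or lower) a frequency by whole octaves until it falls in the target range.
    if freq_hz <= 0:
        raise ValueError("frequency must be positive")
    shifted = freq_hz
    while shifted < target_low:
        shifted *= 2.0   # up one octave
    while shifted > target_high:
        shifted /= 2.0   # down one octave
    return shifted

# A 10 Hz alpha rhythm becomes 160 Hz (10 -> 20 -> 40 -> 80 -> 160),
# i.e. the same oscillation four octaves higher.
for f in (2.0, 10.0, 40.0):
    print(f, "Hz ->", octave_shift_to_audible(f), "Hz")

Because each step is an exact doubling, the shifted tone keeps the structure of the original oscillation, which is consistent with the claim that what we hear is really what is happening in the brain.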