Under the auspices of the Artsoundscapes project, the building of the Immersive Psychoacoustics Laboratory (immpaLAB) was completed in July 2020, and the facility will be inaugurated very soon. It is a twenty-five-square-meter facility located in the Faculty of Psychology of the University of Barcelona and coordinated by Prof. Carles Escera, head of the Brainlab-Cognitive Neuroscience Research Group and member of the Department of Clinical Psychology and Psychobiology. The facility will allow the team in charge of the project's Psychoacoustics and Neuropsychology research lines (RL2 and RL3) to carry out experiments in which participants feel immersed in a chosen acoustic environment: auditory stimulation carrying the sonic signature of selected rock art sites is rendered through a 3D loudspeaker array, in order to explore the enhanced emotional dimensions, and even altered states of consciousness, that these singular spaces may trigger in listeners.
Figure 1. immpaLAB's semi-anechoic chamber and 3D loudspeaker array.
Traditional psychoacoustics addresses the auditory perception of the physical characteristics of sound. More recently, the term emoacoustics has been coined to refer to the study of the emotional response to sound and its influence on sound perception itself (Asutay et al., 2012). The Artsoundscapes Psychoacoustics research line aims to cover both topics in the study of rock art soundscapes.
Over the past two decades, psychoacoustics and the archaeoacoustics of rock art sites have been converging through a number of human auditory experiments. One example is the work of Joakim Goldhahn at the rock art site near the river Nämforsen in Sweden, where the increased intensity of the water's sound close to the engravings is thought to have helped shamans achieve trance, as they believed the mountains were ensouled (Goldhahn, 2002). Another is the series of experiments that Riitta Rainio and colleagues carried out at rock art sites in northern Finland (Rainio et al., 2018). The general aim of the Psychoacoustics research line of the Artsoundscapes project is to explore the human auditory experience at rock art sites in order to better understand the role of sound in the life of ancient societies.
Since the late 1970s, psychoacoustics has sought to move from "simple" artificial stimuli, such as pure tones, to excerpts of complex natural sound. Experimental conditions had to evolve accordingly, in order to faithfully represent real-world listening situations (Yost, 2015). An immersive auditory experience facilitates the participants' emotional reactions and their feeling of being present in the soundscape, both of which increase with spatialized sound (Västfjäll, 2003; Fan et al., 2015). This is why the creation of the immpaLAB was one of the objectives of the Artsoundscapes project: to set up an artificial environment capable of reproducing as accurately as possible the natural sound reflections of the soundscapes of interest, giving the participant a fully immersive experience of the acoustics of rock art sites.
The immpaLAB facilities consist of two separate spaces: the immersive psychoacoustics cabin and the control room. The cabin required soundproofing with layers of different materials on the floor, ceiling and walls that 1) acoustically isolate the cabin from the outside in both directions, and 2) reduce its reverberation time to suit our experimental needs. The cabin houses a 3D loudspeaker array composed of a circular truss and several cross-bars, supported by a structure of metal columns. Eight loudspeakers are arranged in a central ring at the height of the head of a participant sitting in the center; together with four more loudspeakers at the top of the structure (upper ring) and four at the bottom (lower ring), they form a 16-channel spherical array, complemented by a 2-channel subwoofer located in a corner of the cabin.
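To make the geometry of the array concrete, here is a minimal Python sketch of such a layout. The ring elevations assumed for the upper and lower rings (+45° and −45°) and the unit radius centered on the listener's head are illustrative assumptions, not measured values of the actual installation.

```python
import math

def ring_positions(n, elevation_deg, radius=1.0, offset_deg=0.0):
    """Return (x, y, z) positions for n loudspeakers evenly spaced
    on a horizontal ring at the given elevation angle."""
    positions = []
    for k in range(n):
        az = math.radians(offset_deg + 360.0 * k / n)
        el = math.radians(elevation_deg)
        x = radius * math.cos(el) * math.cos(az)
        y = radius * math.cos(el) * math.sin(az)
        z = radius * math.sin(el)
        positions.append((x, y, z))
    return positions

# Assumed elevations: middle ring at ear height (0 deg),
# upper ring at +45 deg, lower ring at -45 deg.
layout = (ring_positions(8, 0.0)      # middle ring, 8 speakers
          + ring_positions(4, 45.0)   # upper ring, 4 speakers
          + ring_positions(4, -45.0)) # lower ring, 4 speakers
print(len(layout))  # 16 full-range channels (subwoofers handled separately)
```

A layout like this is what the rendering software needs in order to know which gain to apply to each channel when placing a virtual sound source.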
The control room, adjacent to the cabin, is the workspace from which the psychoacousticians and neuroscientists in charge of the experiments can 1) control the audio rendered through the loudspeaker array, and 2) communicate with the participant via a camera and a microphone installed in the cabin. The wide range of potential experimental methodologies translates into an equally wide range of tasks the participant may perform in the cabin, from answering questions on a tablet, to controlling the reverberation of the rendered sound themselves, or simply doing nothing while the experimenter records their electroencephalogram (EEG).
The control room is equipped with a computer connected to an amplifier, which in turn has cabled connections to each of the 18 loudspeakers of the cabin's array (including the two subwoofer channels). The audio signal travels from the computer to the amplifier over an Ethernet cable. The encoding of the signal and its routing to each of the array's channels is done with a virtual audio interface, Dante Virtual Soundcard® by Audinate, which supports up to 64 input and output channels. Sending the signals to each of the interface's transmitter channels requires audio management software; the one used in the immpaLAB is Bidule® by Plogue Inc.
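As an illustration of what this per-channel routing amounts to in software, the following Python sketch builds an 18-channel buffer and places a mono test tone on one chosen channel. The 48 kHz sample rate and the channel ordering are assumptions, and the sketch stands in for, rather than reproduces, what Bidule and Dante Virtual Soundcard actually do.

```python
import numpy as np

FS = 48_000          # assumed sample rate
N_CHANNELS = 18      # 16 array loudspeakers + 2 subwoofer channels

def route_to_channel(mono, channel, n_channels=N_CHANNELS):
    """Place a mono signal on one channel of an otherwise silent
    multichannel buffer, mimicking per-channel software routing."""
    out = np.zeros((len(mono), n_channels), dtype=np.float32)
    out[:, channel] = mono
    return out

# One second of a 440 Hz test tone sent to channel 0
# (hypothetically, the first middle-ring loudspeaker).
t = np.arange(FS) / FS
tone = (0.5 * np.sin(2 * np.pi * 440 * t)).astype(np.float32)
buffer = route_to_channel(tone, channel=0)
print(buffer.shape)  # (48000, 18)
```

In practice the routing software also mixes a source across several channels at once to position it between loudspeakers, but the single-channel case shows the basic mechanism.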
Figure 2. immpaLAB detail of ceiling truss and loudspeaker.
Figure 3. Detail of loudspeakers.
The objective of the first behavioral experiment to be conducted in the immpaLAB is to discern whether there is a relationship between the occurrence of rock art and the affective responses of participants listening to the sonic signature of the actual site where the paintings are located. The first sonic signatures to be used are the acoustic data recorded during the Artsoundscapes fieldwork campaign in Siberia in the summer of 2019, when the so-called impulse responses of the visited sites were recorded. Impulse responses (IRs) are measurements of the sound propagation between a sound emission point and a receiver device located in the same environment (Farina et al., 2007). They capture the acoustic signature of interior and exterior spaces and allow acousticians both to derive a number of acoustical parameters and to render the spaces through the process of auralization (Vorländer, 2008), in our case through the lab's 3D loudspeaker array.
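The core operation behind IR-based rendering is convolution: convolving a "dry" signal with a measured impulse response imprints the acoustics of the measured space on it. The Python sketch below illustrates this with synthetic stand-ins (a click train for the dry signal, decaying noise for the IR) rather than real field data.

```python
import numpy as np
from scipy.signal import fftconvolve

fs = 48_000  # assumed sample rate

# Hypothetical dry signal: a half-second click train.
dry = np.zeros(fs // 2)
dry[::fs // 10] = 1.0

# Synthetic stand-in for a measured impulse response:
# exponentially decaying noise, loosely imitating reflections.
rng = np.random.default_rng(0)
ir = rng.standard_normal(fs // 4) * np.exp(-np.linspace(0, 8, fs // 4))
ir /= np.max(np.abs(ir))

# Auralization step: the convolution "places" the dry sound
# in the space described by the impulse response.
wet = fftconvolve(dry, ir)
print(wet.shape[0])  # 35999, i.e. len(dry) + len(ir) - 1
```

With a real measured IR per loudspeaker channel, the same operation yields the spatialized signals fed to the array.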
Auralization is the process of simulating the experience of acoustic phenomena by rendering them as a sound field in a virtualized space. Using this technique, psychoacousticians can measure the individual reactions and subjective interpretations that listeners attribute to sound, and the affective responses it evokes in them. In this particular experiment, we will explore the emotional dimension using validated scales on which participants rate how strongly they felt each affective descriptor while listening to the stimuli.
One affective scale that, although not exclusive to auditory research, is commonly used in experiments on the categorization of music and of the acoustic properties of spaces is the circumplex model of affect (Russell, 1980). It is a graphical model in which affective descriptors are located on a circle divided into four quadrants by two axes representing opposite pairs of descriptors, "negative valence-positive valence" and "high arousal-low arousal", referring to the way the participant feels while listening. Participants indicate at which point of the circle they feel located.
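A participant's response on the circumplex can be reduced to an angle and a quadrant label for analysis. The sketch below shows one way to do this in Python; the example emotion words attached to each quadrant are illustrative, not part of the formal model.

```python
import math

def circumplex_quadrant(valence, arousal):
    """Classify a circumplex response (valence on the x axis,
    arousal on the y axis) into one of the four quadrants."""
    angle = math.degrees(math.atan2(arousal, valence)) % 360
    if angle < 90:
        label = "positive valence / high arousal (e.g. excited)"
    elif angle < 180:
        label = "negative valence / high arousal (e.g. tense)"
    elif angle < 270:
        label = "negative valence / low arousal (e.g. depressed)"
    else:
        label = "positive valence / low arousal (e.g. calm)"
    return angle, label

angle, label = circumplex_quadrant(0.8, 0.6)
print(label)  # positive valence / high arousal (e.g. excited)
```

Keeping the raw angle as well as the quadrant label preserves the continuous nature of the model for later statistical analysis.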
Similarly, several other testing instruments, known collectively as semantic differential scales (Susini et al., 2011), are also based on categorizing the stimulus on scales with opposite descriptors, which may be affective (like those of the circumplex model discussed above) or not. Examples include the scales of expression, sleepiness and sharpness, among others, used by Västfjäll in 2012.
Another scale worth considering, this time specific to music-related studies, is the Geneva Emotional Music Scale, or GEMS (Zentner et al., 2008). This scale has three versions, GEMS-9, GEMS-25 and GEMS-45, which differ in the number of emotional labels evaluated and hence in their level of precision. For example, the descriptors "moved", "filled with wonder" and "allured" in the GEMS-25 version are summarized as "wonder" in the GEMS-9 version. Other affective labels used in these scales are "tension", "power", "transcendence", "joy", etc., all of them referring to how strongly, from 1 to 5, the participant experiences each emotion while listening to a given musical stimulus.
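Collapsing the finer-grained GEMS versions into the coarser ones is a simple averaging of item ratings per factor. The Python sketch below illustrates the idea; only the "wonder" grouping comes from the example in the text, while the "transcendence" items are hypothetical placeholders, not the scale's actual item list.

```python
# Illustrative item-to-factor mapping. The "wonder" entry follows the
# example in the text; the "transcendence" items are hypothetical.
GEMS_FACTORS = {
    "wonder": ["moved", "filled with wonder", "allured"],
    "transcendence": ["inspired", "feeling of spirituality"],
}

def factor_scores(ratings, factors=GEMS_FACTORS):
    """Collapse per-item ratings (1-5) into factor scores by averaging."""
    scores = {}
    for factor, items in factors.items():
        values = [ratings[item] for item in items if item in ratings]
        if values:
            scores[factor] = sum(values) / len(values)
    return scores

ratings = {"moved": 4, "filled with wonder": 5, "allured": 3,
           "inspired": 2, "feeling of spirituality": 4}
print(factor_scores(ratings))  # {'wonder': 4.0, 'transcendence': 3.0}
```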
Psychoacousticians can also benefit from the study of several physiological variables as a complement to the behavioral instruments mentioned above. This is the case of heart rate and skin conductance, both established correlates of several affective labels, such as anger, disgust, fear, joy, surprise and sadness, as assessed by Ménard et al. in 2015.
In the experiment described above we will use static sound sources and focus on the acoustical properties of a particular space. The immpaLAB, however, offers many possibilities to play with sound localization and sound source movement. Most importantly, we plan to implement the use of virtual reality applications in the near future. Thanks to the photogrammetry carried out in the fieldwork campaigns to register the shape of the surfaces where the rock art is located, we can reproduce a 3D view of the actual landscape to increase the participants' immersion in the environment.
We look forward to receiving participants as soon as possible, so that anyone who wants to take part in our experiments can enjoy the immersive auditory experience of the immpaLAB.
Asutay, E., Västfjäll, D., Tajadura-Jiménez, A., Genell, A., Bergman, P., & Kleiner, M. (2012). Emoacoustics: A study of the psychoacoustical and psychological dimensions of emotional sound design. AES: Journal of the Audio Engineering Society, 60(1–2), 21–28.
Goldhahn, J. (2002). Roaring Rocks: An Audio-Visual Perspective on Hunter-Gatherer Engravings in Northern Sweden and Scandinavia. Norwegian Archaeological Review, 35, 29–61.
Rainio, R., Lahelma, A., Äikäs, T., Lassfolk, K., & Okkonen, J. (2018). Acoustic Measurements and Digital Image Processing Suggest a Link Between Sound Rituals and Sacred Sites in Northern Finland. Journal of Archaeological Method and Theory, 25(2), 453–474.
Yost, W. (2015). Psychoacoustics: A Brief Historical Overview. Acoustics Today, 11(3), 46–53.
Västfjäll, D. (2003). The subjective sense of presence, emotion recognition, and experienced emotions in auditory virtual environments. Cyberpsychology and Behavior, 6(2), 181–188.
Fan, J., Thorogood, M., Pasquier, P., & Riecke, B. E. (2015). Automatic recognition of eventfulness and pleasantness of soundscape. ACM International Conference Proceeding Series, 07-09-Octo.
Farina, A., Capra, A., Conti, L., Martignon, P., & Fazi, F. (2007). Measuring Spatial Impulse Responses in Concert Halls and Opera Houses Employing a Spherical Microphone Array. 19th International Congress on Acoustics, (September), 2–7.
Ménard, M., Hamdi, H., Richard, P., & Daucé, B. (2015). Emotion Recognition Based on Heart Rate and Skin Conductance. In Proceedings of the 2nd International Conference on Physiological Computing Systems, 26–32.
Russell, J. (1980). A circumplex model of affect. Journal of Personality and Social Psychology, 39(6), 1161–1178.
Susini, P., Lemaitre, G., & McAdams, S. (2011). Psychological measurement for sound description and evaluation. In Measurement with Persons: Theory, Methods, and Implementation Areas (pp. 227–253).
Vorländer, M. (2008). Auralization. Berlin: Springer.
Zentner, M., Grandjean, D., & Scherer, K. R. (2008). Emotions Evoked by the Sound of Music: Characterization, Classification, and Measurement. Emotion, 8(4), 494–521.