Time and space in experimental music – Màster en Art Sonor

Summary

This module deals with far more practical questions than Psychoacoustics and Experimental Music did. That makes sense: there we had to lay the foundations of a language for referring accurately to the behavior of sound in its natural medium, space, and that required a great deal of theory. By space, however, we understand not only the traditional spatial dimensions, the only ones through which we move voluntarily, but also time, since sounds and music also unfold in time, whatever time ultimately turns out to be; after all, physicists still argue about whether it actually exists.

With these considerations in mind, we have structured the subject into four large blocks: Environment and Diffusion, Gesture, Generation, and Time.

Syllabus

  • 1. Space. Soundscape
    • 1.1. Concept and usefulness
      • 1.1.1. Methodological dimension of the soundscape
      • 1.1.2. Narrative dimension of the soundscape
    • 1.2. Sound diffusion. Speakers as a musical instrument
      • 1.2.1. Types of speakers and forms of spatial stimulation
      • 1.2.2. Directionality
      • 1.2.3. Speaker designs
        • 1.2.3.1. Stereophony
        • 1.2.3.2. Quadraphony and octophony
        • 1.2.3.3. Special diffusion systems
          • 1.2.3.3.1. Gmebaphone
          • 1.2.3.3.2. Acousmonium
          • 1.2.3.3.3. Kupper Domes
        • 1.2.3.4. Ambisonics and Vector Base Amplitude Panning – VBAP
        • 1.2.3.5. Wave Field Synthesis – WFS
        • 1.2.3.6. Manifold-Interface Amplitude Panning – MIAP
  • 2. Interactive systems. Interaction and real time. Hardware devices
    • 2.1. In the virtual world
      • 2.1.1. Between applications on the same computer
        • 2.1.1.1. Sound apps
        • 2.1.1.2. Sound, visual and other applications
      • 2.1.2. Between dedicated computers
        • 2.1.2.1. Computers dedicated to sound applications
        • 2.1.2.2. Computers dedicated to sound, visual and other media applications
      • 2.1.3. On the network
    • 2.2. In the physical world
      • 2.2.1. Sensors
        • 2.2.1.1. Microphones
        • 2.2.1.2. Mechanical captors
        • 2.2.1.3. Electromagnetic radiation captors
          • 2.2.1.3.1. Photoresistors
          • 2.2.1.3.2. Infrared detectors
          • 2.2.1.3.3. Video cameras and microscopes
      • 2.2.2. Actuators
        • 2.2.2.1. Speakers
        • 2.2.2.2. Plates
        • 2.2.2.3. Solenoids
        • 2.2.2.4. Motors
        • 2.2.2.5. Other actuators
  • 3. Sonification
    • 3.1. Image-driven sound
      • 3.1.1. Color identification
      • 3.1.2. Luminosity identification
      • 3.1.3. Identification of movement
      • 3.1.4. Color location
      • 3.1.5. Location of luminosity
      • 3.1.6. Location of movement
    • 3.2. Data-driven sound
      • 3.2.1. GPS
      • 3.2.2. Internet. The case of Carnivore
      • 3.2.3. SRTM – NASA ground elevation data
      • 3.2.4. Cassini – Dynamic Explorer – Terrestrial Electromagnetic Field Data
      • 3.2.5. Stock values
      • 3.2.6. Sea-state data
      • 3.2.7. Climate data
      • 3.2.8. Demographics of the planet
      • 3.2.9. Geological activity data
      • 3.2.10. Decoding two-dimensional matrices and QR codes
  • 4. Generating images from sound
    • 4.1. Displaying musical parameters
      • 4.1.1. Pitch/Time
      • 4.1.2. Dynamics/Time
      • 4.1.3. Timbre/Time
      • 4.1.4. Space/Time
    • 4.2. Displaying sound parameters
      • 4.2.1. Frequency/Time
      • 4.2.2. Amplitude/Time
      • 4.2.3. Spectrum/Time
      • 4.2.4. Location/Time
      • 4.2.5. Encoding sounds in two-dimensional matrices and QR codes
  • 5. Joint generation of sound and image
    • 5.1. User-independent processes
      • 5.1.1. Classic numerical behaviors
        • 5.1.1.1. Series of famous numbers
        • 5.1.1.2. Famous functions
      • 5.1.2. Fractals
      • 5.1.3. Cellular automata and Conway matrices
      • 5.1.4. Parametric surfaces. Matrix mixing
      • 5.1.5. Computational agents
        • 5.1.5.1. Boids
        • 5.1.5.2. Springs
        • 5.1.5.3. Fireflies
      • 5.1.6. Genetic algorithms
    • 5.2. User-dependent processes. Interactivity and real time. Real-time human action on computational parameters
      • 5.2.1. Interactive perspective on the use of fractals
      • 5.2.2. Interactive perspective on the use of cellular automata and Conway matrices
      • 5.2.3. Interactive perspective on the use of parametric surfaces and matrix mixing
      • 5.2.4. Interactive perspective on the use of computational agents
        • 5.2.4.1. Boids
        • 5.2.4.2. Springs
        • 5.2.4.3. Fireflies
      • 5.2.5. Convolution and flows
  • 6. Time
    • 6.1. Timelines
    • 6.2. Directionality
    • 6.3. Closed work
      • 6.3.1. Musical forms
    • 6.4. Open work
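As an informal illustration of the data-driven sonification covered in section 3.2 (not part of the course materials), the sketch below rescales an arbitrary data series onto a frequency range and renders each value as a short sine tone. The frequency range, tone duration, and sample data are all arbitrary assumptions chosen for the example.

```python
import math

def map_to_frequency(values, f_min=220.0, f_max=880.0):
    """Linearly rescale a data series onto a frequency range in Hz.
    f_min and f_max are arbitrary choices for this sketch."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero for constant data
    return [f_min + (v - lo) / span * (f_max - f_min) for v in values]

def sine_tone(freq, duration=0.25, sr=44100, amp=0.5):
    """Render one sine tone as a list of float samples in [-amp, amp]."""
    n = int(duration * sr)
    return [amp * math.sin(2 * math.pi * freq * i / sr) for i in range(n)]

# Example: sonify a toy data series (e.g. hypothetical sea-state readings),
# one quarter-second tone per data point, concatenated in order.
data = [1.2, 0.8, 2.5, 3.1, 2.0]
freqs = map_to_frequency(data)
samples = [s for f in freqs for s in sine_tone(f)]
```

The same mapping pattern applies to any of the data sources listed above (GPS, elevation, stock values); only the rescaling target changes, e.g. mapping to amplitude or spatial position instead of frequency.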

Bibliography

BARLOW, C., F. BARRIÈRE, J. M. BERENGUER, et al. Time in electroacoustic music. Bourges: Actes 5, Mnemosyne, 1999-2000.

BARRETT, N., “Spatio-Musical Composition Strategies”. Organised Sound, Vol. 7 (3) (CUP), 2002, pp. 313-323.

BAYLE, F., Musique acousmatique, propositions… positions. Paris: Buchet/Chastel – INA-GRM, 1993.

BOULANGER, R., The Csound Book: Perspectives in Software Synthesis, Sound Design, Signal Processing, and Programming. The MIT Press, 2000.

CHOWNING, J., “The Simulation of Moving Sound Sources”. Journal of the Audio Engineering Society, 19 (1), 1971, pp. 2-6 (Computer Music Journal, June 1977, pp 48–52).

CLOZIER, Ch., “The Gmebaphone Concept and the Cybernéphone Instrument”. Computer Music Journal, Vol. 25 (4), 2001, pp. 81–90.

COLE, H., Sounds and Signs: Aspects of Musical Notation. Oxford University Press, 1974.

COLLINS, N., Handmade Electronic Music: The Art of Hardware Hacking. Routledge, 2006.

DAVIS, M. F., “History of Spatial Coding”. Journal of the Audio Engineering Society, Vol. 51 No. 6, June 2003, pp. 554–569.

DOHERTY, D., “Sound Diffusion of Stereo Music over a Multi Loudspeaker Sound System: from First Principles onwards to a Successful Experiment”. Journal of Electroacoustic Music (SAN), Vol. 11, 1998, pp. 9-11.

EMMERSON, S. (ed.), The Language of Electroacoustic Music. London: Macmillan Press, 1986.

GHAZALA, R., Circuit-Bending: Build Your Own Alien Instruments. John Wiley & Sons, 2005.

GRITTEN, A. and E. KING (eds.), Music and Gesture. Aldershot: Ashgate, 2006.

KEANE, D., Tape Music Composition. Oxford University Press, 1981.

MANNING, P., “Computers and Music Composition”. Proceedings of the Royal Musical Association, Vol. 107, David Greer (ed.), 1980–1, pp. 119–131.

OWSINSKI, B., The Mastering Engineer’s Handbook. MixBooks, 2000.

SCHAEFFER, P., De la musique concrète à la musique même. Paris: Mémoire du Livre, 2002.

WANDERLEY, M. M., Non-Obvious Performer Gestures in Instrumental Music. Heidelberg: Springer, 1999.