
Ross Bencina, Danielle Wilde & Somaya Langley.

In July 2007, together with Ross Bencina and Somaya Langley, I undertook a residency at STEIM, the Studio for Electro-Instrumental Music, in Amsterdam. The goal of the residency was to explore and experiment with new methods for controlling and performing computerised sound using whole-body gesture. I co-directed the project with Ross Bencina, and directed the movement aspects of the exploration, including overlaps with sound generation; I also contributed to sound development with ideas and responses and, with Ross, designed and built custom electronics.

Approach

Our overall approach was multifaceted and reflected the interests of the collaborators. Considerations included: physicality in the space; sonic and compositional form, structure and aesthetics; conceptual semantics; and sensor technologies and applications. These concerns formed the basis for devising experiments, some of which were undertaken without interactive technology. For example, in the early phases of the residency we experimented with movement-only composition, and later some of the sound mappings were prototyped by improvising movement to pre-recorded sound.

The residency focused on two sensor technologies: 3-axis accelerometers (deployed using seven Nintendo Wii Remotes), and a custom wireless ultrasonic range-finding system developed to measure the distance between performers. The sensor systems drove various sound synthesis algorithms running in a custom version of AudioMulch, with the Lua scripting language used to specify the mappings between sensor data and sound. Max/MSP, with the help of the aka.wiiremote Max external, was used to translate the various sensor data into Open Sound Control (OSC) messages.
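Many of the mappings described below rely on lowpassed sensor signals and acceleration magnitudes. As a rough sketch of how such helpers might be written in Lua (the function names and the 100 Hz sensor rate are assumptions; the actual AudioMulch scripting interface is not reproduced here):

```lua
-- Hypothetical helpers of the kind the mappings below depend on.

-- One-pole lowpass: smooths a control signal at a given cutoff frequency.
local function make_lowpass(cutoff_hz, sample_rate)
  local a = math.exp(-2 * math.pi * cutoff_hz / sample_rate)
  local y = 0
  return function(x)
    y = a * y + (1 - a) * x
    return y
  end
end

-- Magnitude of a 3-axis accelerometer sample.
local function magnitude(ax, ay, az)
  return math.sqrt(ax * ax + ay * ay + az * az)
end

-- Example: smooth the acceleration magnitude at 1 Hz, assuming 100 Hz sensor data.
local smooth = make_lowpass(1, 100)
-- control = smooth(magnitude(ax, ay, az))
```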

During our residency we gave a public presentation of a selection of our experimental outcomes. Edited videos of these outcomes can be viewed by clicking on the images below (MP4 video). A detailed discussion of each experiment, together with additional technical information about the mappings used, can be found in the NIME paper.

Head Scrape

A hyper-instrument in which a sound generator is triggered by the motion of one performer’s head, and the resulting sound is processed by a bank of resonators whose frequencies are modulated by the motion of a second performer. When a highpassed version of the first performer’s acceleration exceeds a threshold, a gate is opened, causing a granular glitching sound to be generated. The processing performer wears two sensors, each controlling an amplitude-modulated delay line and a bank of spaced resonators. The modulation rate and resonator frequencies are modulated by lowpassed velocity magnitude, while performer velocity controls the amount of signal entering the filter bank.
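A minimal sketch of the gating logic in Lua, assuming a normalised acceleration signal; the cutoff frequency, sensor rate and threshold are invented for illustration:

```lua
-- Sketch: the gate opens when a highpassed copy of the first performer's
-- acceleration exceeds a threshold. A one-pole highpass is the input minus
-- its lowpassed version. All constants here are assumptions.
local function make_highpass(cutoff_hz, sample_rate)
  local a = math.exp(-2 * math.pi * cutoff_hz / sample_rate)
  local y = 0
  return function(x)
    y = a * y + (1 - a) * x
    return x - y
  end
end

local highpass = make_highpass(5, 100)  -- assumed 5 Hz cutoff, 100 Hz sensor rate
local THRESHOLD = 0.3                   -- assumed threshold, in g

local function glitch_gate_open(accel_magnitude)
  return math.abs(highpass(accel_magnitude)) > THRESHOLD
end
```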

Motion Shatter

A smooth, continuous drone of Tibetan monks chanting is fed through a granulator. As the performer spins in a circle, holding the sensor in an outstretched hand, the sound becomes less smooth. Spinning faster causes the sound to become gritty and eventually to break up; the performer must spin in circles in an increasingly desperate manner to effect a complete cessation of sound. The controlling signal (lowpassed acceleration magnitude) reduces grain durations (from approximately 500 ms down to 10 ms) while increasing the randomised inter-onset time from 2.6 ms to 500 ms, causing the sound to slowly break up with increased centripetal acceleration.
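The grain-parameter mapping might be sketched as two interpolations over the ranges given above; the 0..1 normalisation of the control signal and the use of linear interpolation are assumptions:

```lua
-- Sketch of the Motion Shatter mapping: faster spinning (a higher control
-- value) shortens grains and widens the randomised inter-onset time.
local function lerp(a, b, t) return a + (b - a) * t end

local function grain_params(control)              -- lowpassed accel magnitude, 0..1
  local duration_ms   = lerp(500, 10, control)    -- approx 500 ms down to 10 ms
  local interonset_ms = lerp(2.6, 500, control)   -- 2.6 ms up to 500 ms
  return duration_ms, interonset_ms
end
```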

Leg Ratchet

Sensors are attached to the performer’s lower legs, each leg controlling a similar synthesis patch. The patch iterates a pulse, generated by gating a granular texture, with pulse rate, transposition and gain modulated by performer acceleration. When the sensor is at rest the pulse is slow, silent, and lower in pitch; movement of the legs results in accelerated pulses or rhythmic modulation. At some point an error was made which resulted in the performer having to move one leg to make sound, and the other leg to stop its corresponding sound. This opened up as-yet-unconsidered possibilities and provided a rich space for performer experimentation.
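One way to read the per-leg mapping as code; only the at-rest behaviour is specified above, so all numeric ranges here are illustrative:

```lua
-- Sketch of the per-leg pulse mapping: at rest the pulse is slow, silent and
-- low; movement speeds it up, transposes it upward and brings it in.
local function pulse_params(accel)            -- lowpassed acceleration magnitude, 0..1
  local rate_hz        = 0.5 + accel * 7.5    -- 0.5 Hz at rest, up to an assumed 8 Hz
  local transpose_semi = accel * 12           -- up to an octave above the rest pitch
  local gain           = accel                -- silent at rest
  return rate_hz, transpose_semi, gain
end
```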

Blades of Grass

Each performer wears a Wii Remote aligned to their spine, associated with a synthesis patch consisting of processed noise with a resonant filter swept according to the angle and direction in which they are leaning. Sensor tilt direction is fed through a triangular shaper, which produces a periodic sweep as the performer rotates the tilt of their spine. This is multiplied by the amount the performer is leaning and mapped to the resonant filter’s cutoff frequency.
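A sketch of the cutoff mapping, treating tilt direction as an angle and the lean amount as a normalised value; the shaper period and cutoff range are assumptions:

```lua
-- Sketch: tilt direction is folded through a triangle shaper so that rotating
-- the spine's tilt produces a periodic sweep; the sweep is scaled by how far
-- the performer leans and mapped to the resonant filter cutoff.
local function triangle(phase)               -- phase in 0..1 -> triangle wave in 0..1
  local p = phase % 1
  return p < 0.5 and 2 * p or 2 - 2 * p
end

local function cutoff_hz(tilt_angle, lean)   -- angle in radians, lean 0..1
  local sweep = triangle(tilt_angle / (2 * math.pi))
  return 200 + sweep * lean * 3800           -- assumed 200..4000 Hz range
end
```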

Speed Harmonics

The performer wears a sensor on each forearm. The sound world consists of two resonant, harmonically tuned oscillator banks, one controlled by each arm. As the speed of the arms increases (sometimes requiring spinning the whole body), white noise and additional bass are faded in, and comb filters are swept across the spectrum creating a swooshing sound. Sensor velocity (lowpassed at 4 Hz) sweeps the comb filter between 400 and 4000 Hz with increased performer velocity, while velocity lowpassed at 1 Hz controls the introduction of the white noise and bass boost through a sweeping shelf filter. The filtered velocity signal is also quantised into 10 steps and used to select one of the harmonics of the oscillator bank: the velocity signal is applied to an envelope follower associated with the selected harmonic, which boosts or sustains the current harmonic level. When the velocity no longer excites a particular harmonic, it slowly fades to silence.
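The comb sweep and the quantised harmonic selection might be sketched as follows; the exponential sweep curve and the 0..1 normalisation are assumptions (only the endpoints are given above):

```lua
-- Sketch of two Speed Harmonics mappings.
local function comb_frequency(velocity)    -- velocity lowpassed at 4 Hz, 0..1
  return 400 * (4000 / 400) ^ velocity     -- assumed exponential sweep, 400..4000 Hz
end

local function select_harmonic(velocity)   -- the filtered velocity signal, 0..1
  local step = math.floor(velocity * 10)   -- quantise into 10 steps
  return math.min(step + 1, 10)            -- 1-based index into the oscillator bank
end
```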

Tone Change

Two performers each perform with two Wii Remotes, one held in the hand and the other attached to the hip. Each Wii Remote is associated with two sine wave oscillators, one slightly detuned from the other, with the detune offset increasing from 0.01 to 20 Hz with increased performer velocity. The amplitude of each oscillator pair is modulated by an envelope follower tracking performer velocity. The polarity of the filtered Z velocity is also tracked: when the sensor has been at rest and starts moving again in the opposite direction, a new random note from a diatonic scale is chosen. Thus the performers start and stop to change notes, and move in various ways to articulate their tones, creating slowly modulating random chord sequences.
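A sketch of the note-change logic and detune mapping; the rest threshold, scale and base note are assumptions:

```lua
-- Sketch of the Tone Change note logic: a new random diatonic note is chosen
-- when the sensor has been at rest and then resumes moving in the opposite
-- direction.
local diatonic = {0, 2, 4, 5, 7, 9, 11}       -- major-scale semitone offsets (assumed)
local REST_THRESHOLD = 0.02                   -- assumed rest threshold
local last_sign, at_rest, note = 0, true, 48  -- assumed base note

local function update_note(z_velocity)        -- filtered Z-axis velocity
  if math.abs(z_velocity) < REST_THRESHOLD then
    at_rest = true
  else
    local sign = z_velocity > 0 and 1 or -1
    if at_rest and sign ~= last_sign then
      note = 48 + diatonic[math.random(#diatonic)]   -- pick a new random note
    end
    last_sign, at_rest = sign, false
  end
  return note
end

-- Detune offset for the second oscillator of each pair, 0.01..20 Hz.
local function detune_hz(velocity)            -- normalised performer velocity, 0..1
  return 0.01 + velocity * (20 - 0.01)
end
```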

Vocal and Multi-Modal Prototyping (Jerk Glitch)

The aim of vocal and multi-modal prototyping was to challenge our usual ways of thinking about movement and sound, and to begin to understand the kinds of relationships we might make between them. We began by exploring a range of processes to develop appropriate sounds. Working individually, we identified sounds from the Freesound creative commons database, which we used as a basis for discussing and understanding the qualities of sonic space each of us desired to create. This was followed by free-form sound generation using the voice only; physical performance-making sessions during which we vocalised sounds suggested by movement; and free-form movement and sound generation using the voice and entire body. Through this iterative process we generated a substantial amount of material and were able to take concrete steps towards formalising a gesture-sound vocabulary.

Open Questions

In each of the experimental outcomes we strove to maintain a balance in the relationship between movement and resultant sound that was easy to perceive for audience and performer alike. The mappings discussed were intentionally simple; the development of more complex mappings is a clear direction for further investigation. Engaging the body in performance necessarily raises notions of the body as interface and, for the audience, of physical theatre, or theatre of the body. We feel that it is difficult to escape a theatrical mode of interpretation when confronted with a musical performer without an instrument, which of course also invites a dramaturgical mode of composition. We consider the dialogue between musical and theatrical creation to be a significant area for future development in whole-body gesture-sound performance. As previously observed by Bahn et al., performing with the whole body involves skills not always possessed by musicians; some of the authors are now considering training in this area to continue the research. Finally, the sensor technology employed so far was adopted as a pragmatic prototyping aid. We are now investigating options for smaller, wearable sensor platforms.