
Haptic Control of Multistate Generative Music Systems (2013-14)


Bret Battey, Marinos Giannoukakis and Lorenzo Picinali

This project, funded by a grant from De Montfort University's Revolving Investment Fund for Research (RIF), established proof-of-concept and a core software toolkit for using force-feedback interfaces to enable a performer to both sense and direct the behaviour of semi-autonomous generative-music systems.

Project Context and Premise

Most digital-music interfaces, unlike musical instruments, do not provide feedback to the performer’s body about the state and behaviour of what is being controlled. Without that feedback, a primary means for informing effective and affective shaping of music is lost to the performer.

Researchers have considered force-feedback controllers as a solution, primarily focusing on emulating instrumental models such as piano actions, or on controlling physical-modelling or granular synthesis.

Yet digital-music performers often are not generating individual ‘notes’ as would a traditional instrumentalist. Instead, they are controlling semi-autonomous behaviours generated by computer systems. These behaviours may be simple looping mechanisms or more elaborate computer algorithms. With such systems, there is a high degree of technical mediation between the actions of the musician and the sounds generated. Here, too, a performer may experience an unsatisfactory sense that physical actions and the felt sense of the control interface are strongly divorced from musical results.

In that context, we propose that haptic interfaces could be used to enable performers to have a more fully embodied engagement with generative-music systems. The performer could feel important aspects of the state of the generative system. The performer could ‘push’ or ‘pull’ the system to other states, amplifying the performer’s ability to refine their control over the system in a musically expressive fashion. More speculatively, we propose further that the performer could feel and shape the subjective/expressive ‘tension’ of the musical texture, driving the haptic feedback through (for example) hierarchical models of tension in harmonic progressions. In this case, we might take a step further and radically reconsider a generative music system and its haptic interface as embodying a set of musical potentials through a system of imaginary physics.

Aims

With the ultimate goal of submitting an EPSRC grant proposal, our aims were to:

  1. Create a core software toolkit for connecting generative music systems to the chosen haptic devices
  2. Establish proof-of-concept via a sequence of test cases
  3. Refine the conceptual framework for the research
  4. Create and perform a short musical work using the system

Outcomes

Some of the results can be found in a video of the summary lecture given in the Research Seminar Series of the DMU Faculty of Technology, and in Battey, B., Giannoukakis, M., & Picinali, L. (2015) “Haptic Control of Multistate Generative Music Systems”. Proceedings of the International Computer Music Conference, University of North Texas. Permalink: http://hdl.handle.net/2027/spo.bbp2372.2015.018

Summary Technical Approach

Giannoukakis developed haptics servers for a Sensable Phantom Omni (now Geomagic Touch) interface using the OpenHaptics toolkit. Open Sound Control provided the link between the server and Max software for implementing the music algorithms and sound generation.
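
The sketch below illustrates that link in minimal form: a C++ haptics server sending a contact flag and a force value to Max/MSP over OSC. It assumes the liblo OSC library, and the port and message paths are placeholders rather than the project's actual namespace.

    #include <lo/lo.h>

    int main() {
        // Max/MSP assumed to be listening for OSC messages on localhost, port 7400.
        lo_address max = lo_address_new("127.0.0.1", "7400");

        // Values a haptics servo loop might report each tick (illustrative only).
        int contact = 1;             // 1 = stylus is touching the virtual wall
        float forceMagnitude = 2.5f; // current force on the arm, in newtons

        // Message paths are hypothetical, not the project's actual namespace.
        lo_send(max, "/haptics/contact", "i", contact);
        lo_send(max, "/haptics/force", "f", forceMagnitude);

        lo_address_free(max);
        return 0;
    }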

Test Cases

The above video shows the final proof-of-concept in the form of a complex multistate arpeggiator, designed by Giannoukakis and performed by Battey. (Our excuse for the cheesy MIDI piano sound is that we were after efficient proof-of-concept!)

The following videos provide examples of a few of the test cases that ultimately led to the complex arpeggiator, starting with very simple interactions and gradually developing towards control of multistate generative systems.

Simple Contact

A simple virtual wall on the Z-axis was the first test case, where touching the wall triggers a sawtooth tone. By applying sufficient force, the user can push or pull the interface pointer through the wall. This was implemented as an OpenHaptics Frictionless Plane module, which creates a force vector repelling the user from the plane in proportion to the penetration distance, using F = kx, where k is the plane stiffness (0.75) and x is the penetration vector. The pop-through force threshold was 5 N. No damping was applied. A simple indication of whether the wall was being touched was sent to Max/MSP via OSC.

Above: Simple contact. Touching a virtual wall triggers a sawtooth tone.
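
A minimal sketch of the plane force law just described, using the stiffness and pop-through values from the text (0.75 and 5 N); the function and its pop-through state handling are illustrative stand-ins for the actual OpenHaptics module.

    #include <cstdio>

    struct PlaneState { bool poppedThrough = false; };

    // Z-axis force (N) for a stylus at position z (mm); the wall sits at z = 0,
    // with the free side at z > 0 and penetration at z < 0.
    double planeForceZ(double z, PlaneState& state) {
        const double k = 0.75;            // plane stiffness
        const double popThreshold = 5.0;  // newtons required to pass through

        if (z >= 0.0) {                   // stylus on the free side: no force,
            state.poppedThrough = false;  // and the wall re-arms
            return 0.0;
        }
        if (state.poppedThrough) return 0.0;  // already pushed through the wall

        double force = -k * z;            // F = kx: repulsion grows with penetration
        if (force > popThreshold) {       // pushed hard enough to pop through
            state.poppedThrough = true;
            return 0.0;
        }
        return force;
    }

    int main() {
        PlaneState s;
        std::printf("%.2f N\n", planeForceZ(-2.0, s));  // 2 mm penetration -> 1.5 N
        return 0;
    }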

A small alteration immediately created a more expressive system: the force values for the haptic arm were returned to Max/MSP and mapped to signal amplitude and the cutoff of a low-pass filter, such that the sound became louder and brighter the harder one pushed against the wall. This provided a simple but effective correspondence between haptic tension and sonic experience:

Above: Simple contact, plus arm force values mapped to amplitude and low-pass filter cutoff, and the X-axis mapped to MIDI notes.
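
In code form, that mapping might look like the sketch below. The 0-5 N input range and the output curves are assumptions for illustration; in the project the mapping was performed in Max/MSP on force values received over OSC.

    #include <algorithm>
    #include <cmath>
    #include <cstdio>

    struct ToneParams { double amplitude; double cutoffHz; };

    // Louder and brighter the harder the stylus presses against the wall.
    ToneParams mapForceToTone(double forceNewtons) {
        // Normalise force into 0..1 over an assumed 0-5 N working range.
        double t = std::clamp(forceNewtons / 5.0, 0.0, 1.0);
        ToneParams p;
        p.amplitude = t;                              // linear gain, 0..1
        p.cutoffHz  = 200.0 * std::pow(2.0, 5.0 * t); // ~200 Hz up to ~6.4 kHz
        return p;
    }

    int main() {
        ToneParams p = mapForceToTone(2.5);
        std::printf("amp %.2f, cutoff %.0f Hz\n", p.amplitude, p.cutoffHz);
        return 0;
    }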

Envelope Amplitude Sensing

Granular and/or looping sound playback could be seen as a type of generative system, and one to which haptic control could be applied. Towards this end, the ability to feel the amplitude envelope of a sound was tested first. The code used an OpenHaptics model of a movable frictionless plane to apply forces to the haptic device on the Y (up-down) axis. The final force on the arm was F = I - vK, where I is the initial force, v is the velocity of the arm and K is the damping coefficient (0.001). Within Max/MSP, the amplitude-envelope range of a looped sample (-90 to 0 dBFS) was mapped to a force range of -3 N to 4 N and sent to the server to provide the initial force value. Notice that some negative forces were needed to pull the arm down; if the range was 0 N to 4 N, for example, the arm would continuously pull upward:

Above: Feel an amplitude envelope. The amplitude of the drum sound drives the arm up and down.
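
A sketch of the resulting force calculation, using the values given above (-90 to 0 dBFS mapped to -3 N to 4 N, K = 0.001). The function name is illustrative, and in the project the dB-to-force mapping was done in Max/MSP and sent to the haptics server over OSC.

    #include <cstdio>

    // Y-axis force that lets the arm "ride" the amplitude envelope of the sample.
    double envelopeForceY(double envelopeDb, double armVelocityY) {
        const double K = 0.001;                  // damping coefficient

        // Map -90..0 dBFS linearly onto -3..4 N. Negative forces actively pull
        // the arm down during quiet passages; an all-positive range would leave
        // the arm drifting continuously upward.
        double t = (envelopeDb + 90.0) / 90.0;   // 0..1
        double initialForce = -3.0 + 7.0 * t;    // -3 N .. 4 N

        return initialForce - armVelocityY * K;  // F = I - vK
    }

    int main() {
        std::printf("%.2f N\n", envelopeForceY(-12.0, 0.0));  // loud passage -> ~3.1 N
        return 0;
    }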

Having demonstrated the ability to feel the amplitude envelope, this function was extended to two additional test cases. In the first case, the user could freely scrub the time pointer of a sound granulator from left to right across the time domain of an audio sample, feeling the amplitude of the resulting sound in the Y-axis. The results suggest that this can be a simple but powerful way to provide fine control over the granulation time-pointer, particularly when approaching a strong attack transient in the source sound:

Above: Scrub and feel an amplitude envelope. The X-axis sets the time point for a granulator; pressing the interface button engages amplitude control of the up and down forces (a particularly interesting experience when approaching transients in the sound).
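
A small sketch of the scrubbing control flow: the X position sets a normalised time pointer for the granulator, and the amplitude-following Y force (computed as in the previous sketch) is applied only while the device button is held. The ±150 mm X range is an assumption, not the device's actual workspace.

    #include <algorithm>
    #include <cstdio>

    // Map the stylus X position (assumed -150..+150 mm) onto a normalised
    // 0..1 playback position within the source sample.
    double xToTimePointer(double xMm) {
        return std::clamp((xMm + 150.0) / 300.0, 0.0, 1.0);
    }

    // Gate the amplitude-following Y force on the interface button, so the
    // user chooses when to feel the envelope while scrubbing.
    double scrubForceY(bool buttonDown, double amplitudeForceY) {
        return buttonDown ? amplitudeForceY : 0.0;
    }

    int main() {
        std::printf("pointer %.2f, force %.1f N\n",
                    xToTimePointer(75.0), scrubForceY(true, 2.0));
        return 0;
    }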

Simple Arpeggiator

A 3D virtual spring was established, where distance from the spring center was mapped to the upper limit of a fixed-interval MIDI arpeggiator. For the standard spring model F = kx, the stiffness (k) was 0.05 N/mm and the distance limit (x) was 150 mm. For the damping function -vK, where v is the velocity vector, the damping coefficient K was 0.0015. The resulting spring tension formed a convincing perceptual parallel with the extent and pitch-range of the arpeggiator.

Above: Simple arpeggiator. A 3D virtual spring, where distance from the spring center is mapped to the upper limit of a fixed-interval MIDI arpeggiator.
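
A sketch of the spring and its musical mapping, using the stiffness, stretch limit and damping values given above. The vector type, function names and the two-octave arpeggiator range are illustrative assumptions.

    #include <algorithm>
    #include <cmath>
    #include <cstdio>

    struct Vec3 { double x, y, z; };

    // Spring force pulling the stylus back toward its anchor: F = kx with the
    // effective stretch capped at 150 mm, plus viscous damping -vK.
    Vec3 springForce(const Vec3& pos, const Vec3& anchor, const Vec3& velocity) {
        const double k = 0.05;           // stiffness, N/mm
        const double K = 0.0015;         // damping coefficient
        const double maxStretch = 150.0; // mm

        Vec3 d{anchor.x - pos.x, anchor.y - pos.y, anchor.z - pos.z};
        double len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
        double stretch = std::min(len, maxStretch);
        double scale = (len > 0.0) ? k * stretch / len : 0.0;

        return Vec3{d.x * scale - velocity.x * K,
                    d.y * scale - velocity.y * K,
                    d.z * scale - velocity.z * K};
    }

    // Map the stretch distance (0-150 mm) onto the arpeggiator's upper limit,
    // here assumed to span up to two octaves above the base note.
    int arpeggioUpperLimit(int baseNote, double stretchMm) {
        double t = std::clamp(stretchMm / 150.0, 0.0, 1.0);
        return baseNote + static_cast<int>(t * 24.0);
    }

    int main() {
        Vec3 f = springForce({100.0, 0.0, 0.0}, {0.0, 0.0, 0.0}, {0.0, 0.0, 0.0});
        std::printf("force x %.1f N, upper limit %d\n", f.x, arpeggioUpperLimit(48, 100.0));
        return 0;
    }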

Complex Multi-state Arpeggiator

Building on the same 3D virtual spring, this test case also enabled a true multistate function. When the spring stretch exceeds 100 mm in the virtual space, the spring influence breaks and a new spring is launched at the given position, and the base note of the arpeggiator rises one step in a chosen scale mode. If a button on the haptic arm is pressed when the spring snaps, the resultant semitone shift is downward, instead.

The 0-100 mm distance from the spring center was mapped to a stretch interval of 0-12. Each time the stretch interval changes, the base MIDI note is played and three notes are generated: b + 2s + c, where b is the base note, s is the stretch interval and c is a semitone interval (4, 7 or 12). Each resulting MIDI-note value is passed through its own delay, the timing of which is also determined by the distance from the spring center; the three delay-time ranges are 150-300 ms, 300-600 ms and 450-900 ms. Further, the distance from the spring center was inversely mapped to the amplitude of the notes (MIDI velocity range 114 to 49). Thus, as the spring is stretched, multiple musical parameters are altered: the interval between notes of the arpeggio increases, the tempo slows and the amplitude decreases, creating a greater sense of suspense, uncertainty and tentativeness. The launch of a new spring is often accompanied by an immediate contrast: a rapid cluster of new, high-velocity, close-interval notes. The authors found the overall feel and behaviour of the system provided an immediate, intuitive and compelling sense of expressivity.
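
The sketch below gathers the behaviour described in the two paragraphs above into a single control function: a stretch past 100 mm snaps the spring and steps the base note up (or down, with the button held), while a change of stretch interval otherwise emits the base note plus three delayed notes b + 2s + c, with delay times and velocities scaled by distance. The names are illustrative, re-anchoring of the spring itself is omitted, and the base-note step is simplified to a single semitone where the original stepped through a chosen scale mode.

    #include <algorithm>
    #include <cstdio>
    #include <vector>

    struct Note { int midiNote; int velocity; double delayMs; };

    struct ArpState {
        int baseNote = 48;     // current base MIDI note
        int lastStretch = -1;  // last stretch interval, to detect changes
    };

    // Called each control tick with the current distance from the spring anchor.
    std::vector<Note> updateArpeggiator(ArpState& st, double distanceMm, bool buttonDown) {
        std::vector<Note> out;

        if (distanceMm > 100.0) {
            // The spring snaps: a new spring would be launched at the current
            // position, and the base note steps up (down while the button is held).
            st.baseNote += buttonDown ? -1 : 1;
            st.lastStretch = -1;
            return out;
        }

        // Map the 0-100 mm stretch onto a stretch interval of 0-12 semitones,
        // and only generate notes when that interval changes.
        double t = std::clamp(distanceMm / 100.0, 0.0, 1.0);
        int stretch = static_cast<int>(t * 12.0);
        if (stretch == st.lastStretch) return out;
        st.lastStretch = stretch;

        // Velocity falls from 114 near the anchor to 49 at full stretch.
        int velocity = 114 - static_cast<int>(t * (114 - 49));

        const int intervals[3] = {4, 7, 12};              // semitone offsets c
        const double delayMin[3] = {150.0, 300.0, 450.0};

        out.push_back({st.baseNote, velocity, 0.0});      // base note sounds immediately
        for (int i = 0; i < 3; ++i) {
            out.push_back({st.baseNote + 2 * stretch + intervals[i],
                           velocity,
                           delayMin[i] * (1.0 + t)});     // 150-300, 300-600, 450-900 ms
        }
        return out;
    }

    int main() {
        ArpState st;
        std::vector<Note> notes = updateArpeggiator(st, 60.0, false);  // mid-stretch
        for (const Note& n : notes)
            std::printf("note %d vel %d delay %.0f ms\n", n.midiNote, n.velocity, n.delayMs);
        return 0;
    }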

Further, the multistate aspect of the design does, in fact, enable distinctive behaviours that are immediately amenable to creating musical macrostructure. The approach to and execution of the launch of a new spring becomes a highly malleable and expressive gesture-type that, together with the resulting rise or fall of the base pitch, can accumulate in time to provide a flexible basis for establishing higher-level dramatic form.

(See top of page for the demonstration video.)