Battey, B. (2020) "Technique and Audiovisual Counterpoint in the Estuaries Series". In Knight-Hill, Andrew, Ed., Sound and Image: Aesthetics and Practices. London: Routledge.
Abstract: The author’s Estuaries series of audiovisual compositions (2016-19) draws together investigations into novel methods of abstract animation, generative music and audiovisual counterpoint. This article explains key aspects of some of these concerns and how they contribute to particular aesthetic qualities of the works. The animation technique was based on visualisation of Nelder-Mead search processes that seek the brightest or darkest points in a source image. This process can create unpredictable and complex textures and temporal behaviors that can seem perceptually unified and coherent due to the consistent shape and behavioral vocabulary of the algorithm. The music was created with the assistance of the author’s Nodewebba software, which implements his concept of variable-coupled iterated maps. Algorithmic integration of these two techniques, and mixing discrete events with articulation of continuums, helped to establish a hierarchically organized “fluid” audiovisual counterpoint. Given the limits of automated linking of visual and musical algorithms that are not perceptually informed, the author argues for an “audiovisualisation-assisted composition”, where generative audiovisual materials serve as a starting point from which an artist elaborates and edits heuristically.
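A minimal sketch of the core idea behind the animation technique described above, with a synthetic Gaussian brightness field standing in for a source image (the peak location, start point, and use of SciPy's Nelder-Mead implementation are illustrative assumptions, not details from the works):

```python
# Sketch: a Nelder-Mead search seeking the brightest point of an "image".
# Logging each evaluated point yields the kind of trail that could be
# rendered as an animated figure.
import numpy as np
from scipy.optimize import minimize

def brightness(p):
    x, y = p
    # Synthetic brightness field: a single bright spot at (0.7, 0.3)
    return np.exp(-((x - 0.7) ** 2 + (y - 0.3) ** 2) / 0.05)

trail = []  # points the search visits; these would drive the animation

def objective(p):
    trail.append(tuple(p))
    return -brightness(p)  # minimise the negative to seek the brightest point

result = minimize(objective, x0=[0.4, 0.5], method="Nelder-Mead")
print(result.x)  # converges near the bright spot (0.7, 0.3)
```

Because the simplex reflects, expands, and contracts with a consistent geometric vocabulary, the logged trail has the coherent "behavioral vocabulary" the abstract refers to.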
Battey, B. and Fischman, R. (2016) "Convergence of Time and Space: Visual Music from an Electroacoustic Music Perspective". In Kaduri, Y., Ed., The Oxford Handbook of Sound and Image in Western Art. Oxford: Oxford University Press. DOI: 10.1093/oxfordhb/9780199841547.013.002
Abstract: This chapter considers the historical lineage and conceptual origins of visual music, addressing the turn to abstraction and absolute film in visual arts, particularly in the first half of the twentieth century, and the turn to mimesis and spatialisation in music, particularly through the acousmatic tradition after World War II. The latter context will be used to introduce relevant concepts from electroacoustic music. The authors propose the existence of a process of convergence between visual artists and musicians that prompted the former to embrace time through a shift away from mimesis towards abstraction, and the latter to adopt greater focus on space in shifting from abstraction towards mimesis. Together, these historical shifts serve as a preamble to the development of audiovisual art, revealing underlying theoretical commonalities regarding the articulation of time and space. These commonalities suggest fundamental dynamics of what Chion calls the audiovisual contract and strategies available to the visual music creator to create a synergy of sound and image. Some of these strategies are demonstrated in two case studies of works by the authors.
Battey, B. (2013) "Some Reflections on Autarkeia Aggregatum". In Kaduri, Y., Ed., Ear Sees, Eye Hears: On the Interconnections among Sound and Picture in Art. Jerusalem: Magnes Press. [Chapter is translated into Hebrew.]
Abstract: The author reflects on personal, aesthetic, theoretical and technical concerns reflected in his audiovisual composition "Autarkeia Aggregatum" (2005). He introduces his concept of "isomorphism of complex gestalts" to describe the nature of the sound and image relationships in the work. He also provides an analytical visual score of the beginning of the work, exploring the applicability of such scores to audiovisual compositions. He concludes by emphasising the importance of audiovisual composers being sensitive to the holistic, emergent perceptual properties unfolding in visual and sonic materials and their relationships.
Battey, B. (2013) "From Shapes in the Moment to Shapes in Time: Pioneers in Musical Organization of the Visual". In Kaduri, Y., Ed., Ear Sees, Eye Hears: On the Interconnections among Sound and Picture in Art. Jerusalem: Magnes Press. [Chapter is translated into Hebrew.]
Abstract: The exploration of “concrete”, non-instrumental sound sources in music in the 20th century has led to new insights into the importance of experiential “space” in music. At the same time — somewhat ironically — the application of musical thought to image has encouraged visual abstraction. Both this shift to “concrete” materials in music and the shift to “abstract” materials in visual arts have highlighted important principles for audio-visual creation. This essay considers briefly a handful of thinkers and artists who have explicitly brought a musically-informed thinking to developing time-based abstraction in visual arts, and have, in the process, revealed a concern common to both domains: the sculpting of experiential shapes out of time.
Battey, B. (2016) “Creative Computing and the Generative Artist”. In International Journal of Creative Computing 1(2-4), pp. 154-173. DOI: 10.1504/IJCRC.2016.076065
Abstract: This article addresses the research agenda of creative computing from the perspective of a generative artist/composer, someone for whom processes of software creation and art creation inseparably intertwine. Using his audiovisual work Clonal Colonies (2011) as a case study, the author addresses the dynamic of generative artistic creation when it is a process of discovery and dialog with artist-created, often-unpredictable software systems. He provides technical specifics regarding his use of his Variable-Coupled Map Networks approach for music and his Brownian Doughnut Warper visual algorithm. Finally, he proposes a set of principles applicable to the creation of generative artwork, considers how tools and systems could better support such work, and proposes that creative computing research also focus on helping creatives surmount the fundamental personal challenges encountered in creative work of all types.
Battey, B. (2015) "Towards a Fluid Audiovisual Counterpoint". In Ideas Sónicas 17(4), pp. 26-32.
Abstract: The author critiques and expands his concept of "fluid audiovisual counterpoint" by investigating the relevance of the species counterpoint tradition in music to conceptualizing audiovisual relationships in abstract video music. He points out both gains and limitations of such an approach, considering in particular issues of temporal and perceptual hierarchy, atomism, and gesture. He also considers how this might relate to Adam Basanta's 2013 proposed three-axes model of audiovisual relationship typologies. Ultimately, he proposes elements of a "phenomenological theory of audiovisual counterpoint", with fluidity being a subcategory. As an example, he considers how this subcategory might be informed by the relationship between the gestures of Indian classical vocalists and the music they are singing. Finally, the argument is made that complete systematization of audiovisual counterpoint is not possible, or even desirable, and that informing the artistic intuition by expanding possibilities and sensitivities would be the primary goal of an audiovisual counterpoint pedagogy.
Battey, B. (2013) "Artist's Statement: 'Sinus Aestum' — Wayfinding, Algorithm, Impermanence". In Soundtrack Journal 5(1), pp. 73-75. DOI: 10.1386/st.5.1.73_7
Abstract: The author briefly explores the aesthetic of his audiovisual composition Sinus Aestum by addressing issues of beauty, novelty, ways of being, wayfinding in creative dialog with computer systems, and degrees of isomorphism between sound and image.
Battey, B. (2004) "Bézier Spline Modeling of Pitch-continuous Melodic Expression and Ornamentation". Computer Music Journal, 28(4), pp. 25–39. DOI: 10.1162/0148926042728377
Abstract: A set of techniques is presented for analysis and computer rendering of melodies such as those found in Indian classical music, in which subtle control of the continuum between scale steps is fundamental to expression. Pitch, amplitude, and spectral centroid curves are extracted from a recorded performance. Critical inflection points on these curves are identified by a psychoacoustic criterion. Constrained Bézier splines are fit to the data between those points using a non-linear solver with linear interpolation to estimate function values, eliminating the need to optimize parameterization. The model can be used for musicological analysis, or the data can be repurposed towards expressive computer rendering of custom melodic gestures and musical textures.
Audio files (AIFF) for this article are available here [33 Mb].
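The curve-fitting at the heart of the method can be shown in reduced form. The sketch below is my simplification, not the article's solver: it fits a single cubic Bézier segment to mock pitch data by linear least squares, with endpoints pinned and a fixed uniform parameterisation (the article constrains the splines further and uses linear interpolation of function values so the parameterisation need not be optimised):

```python
# Least-squares fit of one cubic Bezier segment to sampled pitch data,
# endpoints pinned to the data. With a fixed parameterisation the interior
# control points can be solved as a linear least-squares problem.
import numpy as np

t = np.linspace(0.0, 1.0, 50)
pitch = 60 + 2 * np.sin(np.pi * t)   # mock pitch glide, in MIDI semitones

# Cubic Bernstein basis evaluated at each parameter value
B = np.stack([(1 - t) ** 3, 3 * t * (1 - t) ** 2,
              3 * t ** 2 * (1 - t), t ** 3], axis=1)

p0, p3 = pitch[0], pitch[-1]          # pin the endpoint control points
residual = pitch - B[:, 0] * p0 - B[:, 3] * p3
# Solve for the two interior control points in the least-squares sense
(p1, p2), *_ = np.linalg.lstsq(B[:, 1:3], residual, rcond=None)

fit = B @ np.array([p0, p1, p2, p3])
print(np.max(np.abs(fit - pitch)))    # small fitting error
```

A single segment suffices here because the mock data has one inflection-free arch; the article's pipeline segments the extracted curves at psychoacoustically critical inflection points first.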
Battey, B. (2004) "Musical Pattern Generation with Variable-Coupled Iterated Map Networks". Organised Sound 9(2), pp. 137–150. DOI: 10.1017/S1355771804000226
Abstract: This paper introduces the concept of variable-coupled iterated map networks and explores its application to generation of musical textures. Such networks comprise one or more interlinked nodes. A node is an iterated map function with a time-delay factor that schedules successive iterations. The map state broadcast by a node can drive the variables and time-delay factor of any other nodes in the network, including itself. Lehmer's Linear Congruence Formula, an iterated map normally used for production of pseudo-random numbers, is explored for its own potential as a pattern generator and is used as the iterated map in the network nodes in the examples presented. The capacity of the networks to produce richly gestural behaviors and mid-term modulation of behavior is demonstrated.
Audio files (MP3) for this article are available here [7.71 Mb].
Note that aspects of the technique are now available in my software Nodewebba.
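A toy sketch of the concept (the seeds, multipliers, and pitch mapping are my illustrative choices, not values from the paper or from Nodewebba, and the time-delay scheduling is omitted for brevity): two nodes iterate a Lehmer-style linear congruence map, with node A's state driving node B's multiplier, so the two pattern streams are coupled.

```python
# Two variable-coupled nodes, each a Lehmer linear-congruence map
# x -> (a * x) mod m. Node A's state drives node B's multiplier,
# producing interrelated rather than independent patterns.
M = 2 ** 16 + 1  # modulus (illustrative choice)

def lehmer(x, a, m=M):
    return (a * x) % m

xa, xb = 123, 456                  # node states (seeds)
notes = []
for step in range(8):
    xa = lehmer(xa, a=75)          # node A: fixed multiplier
    xb = lehmer(xb, a=3 + xa % 32) # node B: multiplier driven by node A
    # map each node's state into a MIDI pitch range
    notes.append((36 + xa % 48, 36 + xb % 48))

print(notes)
```

Routing a node's output to another node's variables, as in the last line of the loop, is the "variable coupling" that lets simple deterministic maps yield emergent joint patterning.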
Published Conference Papers
Battey, B. (2016) “Nodewebba: Software for Composing with Networked Iterated Maps”. Proceedings of the International Computer Music Conference, Utrecht.
Abstract: Nodewebba software provides a GUI implementation of the author’s “Variable-Coupled Map Networks” (VCMN) approach to algorithmic composition. A VCMN node consists of a simple iterated map, timing controls, and controls for mapping the node output to musical parameters. These nodes can be networked, routing the outputs of nodes to control the variables of other nodes. This can enable complex emergent patterning and provides a powerful tool for creating musical materials that exhibit interrelated parts. Nodewebba also provides API hooks for programmers to expand its functionality. The author discusses the design and features of Nodewebba, some of the technical implementation issues, and a brief example of its application to a compositional project.
Battey, B., Giannouakakis, M., & Picinali, L. (2015) “Haptic Control of Multistate Generative Music Systems”. Proceedings of the International Computer Music Conference, University of North Texas. Permalink: http://hdl.handle.net/2027/spo.bbp2372.2015.018
Abstract: Force-feedback controllers have been considered as a solution to the lack of sonically coupled physical feedback in digital-music interfaces, with researchers focusing on instrument-like models of interaction. However, there has been little research applied to the use of force-feedback interfaces to the control of real-time generative-music systems. This paper proposes that haptic interfaces could enable performers to have a more fully embodied engagement with such systems, increasing expressive control and enabling new compositional and performance potentials. A proof-of-concept project is described, which entailed development of a core software toolkit and implementation of a series of test cases.
Battey, B. (2011) "Sound Synthesis and Composition with Compression-Controlled Feedback". Proceedings of the International Computer Music Conference 2011, University of Huddersfield, UK. Permalink: http://hdl.handle.net/2027/spo.bbp2372.2011.010
Abstract: This paper introduces a method of sound synthesis that is based on the use of automatic gain control (AGC) in a time-delayed feedback loop. The approach, which the author calls "Compressed Feedback Synthesis" (CFS), can be conceptualized as a special expansion of a generalized comb filter, where feedback gain can be unity or greater. The system can be expanded with additional processing in the feedback loop to create highly flexible and sensually engaging sound materials. The use of CFS in the author's audiovisual composition Sinus Aestum is discussed, including specific solutions to the challenge of controlling such a system compositionally.
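A schematic sketch of the principle (the parameters and the simple envelope-follower gain control are my own illustration, not the paper's design): a comb-filter delay line whose feedback gain exceeds unity, kept bounded by gain control inside the loop.

```python
# A plain comb filter with feedback gain > 1 diverges; an AGC stage in the
# loop attenuates the signal once its envelope passes a target level,
# letting the loop self-oscillate at a bounded amplitude.
import math

delay_len = 100
buf = [0.0] * delay_len
fb_gain = 1.2            # above unity: unstable without gain control
target = 0.5             # AGC target level
env = 0.0                # envelope-follower state
out = []
for n in range(2000):
    # brief sine excitation, then silence
    excitation = math.sin(2 * math.pi * 220 * n / 44100) if n < 200 else 0.0
    y = excitation + fb_gain * buf[n % delay_len]
    env = 0.999 * env + 0.001 * abs(y)     # slow envelope follower
    y *= target / max(env, target)         # attenuate only when too loud
    buf[n % delay_len] = y
    out.append(y)

print(max(abs(v) for v in out))  # bounded despite the >1 feedback gain
```

With the loop gain self-regulating around unity, the output sustains after the excitation ends instead of decaying or exploding, which is what makes the structure usable as a synthesis source rather than just a filter.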
Francksen, K., Battey, B., and Breslin, J. (2009) "Creative Process and Pedagogy with Interactive Dance, Music and Image". Congress on Research in Dance Conference Proceedings, v.41, supplement S1, pp 307-311. DOI: 10.1017/S2049125500001266
Abstract: This lecture-demonstration reflects on a research-informed teaching project in which teaching staff in dance and music technology collaborated on technical and pedagogic research and artistic creation in interactive dance. Our primary aim was to throw light on how interactive technologies might challenge and develop the ways in which students in dance and music technology engage in creative practice. Through the exploration of a set of technologies and conceptual approaches, the research revealed very particular compositional structures and methods. Experimental sketches were developed with a particular focus on emergent behavior and richly behaviored audio-visual feedback systems that were both controlled by and influenced the dancers. The demonstration presents our approaches and offers methodologies and strategies for the use of new technologies in dance pedagogy.
Abstract: PICACS (Pitch Curve Analysis and Composition System) is a software prototype inspired in part by study of Indian classical music. The primary motivation of the system is to facilitate analysis, composition, and convincing computer rendering of music that is characterized by detailed, expressive shaping of the continuum between scale steps. The system creates models of pitch, amplitude, and spectral-centroid expression envelopes by utilizing nonlinear least-squares fitting of constrained Bézier spline curves. Details of the modeling system have been published elsewhere; this paper focuses on the unique challenges that arise in designing an editing system for such models to support compositional purposes. While some of the challenges are specific to the Bézier model, it is clear that many of the challenges would apply to any system that seeks to support modelling and editing of continuous expression using sets of linked curves and multiple layers of expression data.
Abstract: This paper describes the artistic and technical processes used in generating the visuals for the video music work Autarkeia Aggregatum. An algorithm is described that involves a grid of points extracted from a source image and submitted to rotational and Brownian noise movement schemes.
Permalink: http://dl.acm.org/citation.cfm?id=1179095
Battey, B. (2001) "An Animation Extension to Common Music". Proceedings of the Connecticut College Biennial Symposium on the Arts and Technology.
Abstract: The Animation Extension to Common Music (AECM) version 1 is a set of extensions to the Common Music (CM) infrastructure. These extensions allow musical event algorithms authored in CM to also generate scripts to control a computer animation environment. The current version of AECM works with Common Music 1.4 and generates MaxScript, the scripting language for 3-D Studio Max 2.5. While facilitating the use of algorithmic methods for generation of both audio and visual events, it can encourage reconceptualization of relationships between sound and image mediums. Examples are provided from the author’s recent work Writing on the Surface for computer-realized sound and image.
Unpublished Conference Papers
Mapping Hindustani-Vocalist Motions to Abstract Visuals
Seeing Sound: Visual Music Symposium, Bath Spa University, 2016 • Video available
Abstract: Arguably, the domain of audiovisual composition could gain from development of more formalised perspectives on how 'counterpoint' can be established between sound and image. The author's own efforts to gradually develop a more sophisticated sense of what he calls 'fluid audiovisual counterpoint' are addressed here through experiments inspired by the gestures of Indian classical vocalists. 3D motion-tracking data of a vocalist performing an alaap in raga Bhairavi have been mapped hierarchically to abstract 3D animation processes in an attempt to inspire and inform new conceptions of how complex audiovisual counterpoint may be formed and generated. While success has been achieved in some aspects, the author also discusses how the 'uncanny valley' problem, a recognised issue in character animation, may also apply when motion-tracking data controls visual abstraction.
Inquiries towards Fluid Audiovisual Counterpoint
Seeing Sound: Visual Music Symposium, Bath Spa University, 2011
Abstract: This paper explores the question of what would constitute a 'fluid audiovisual counterpoint'. The question is approached by considering fundamental principles implicit in species counterpoint, identifying limitations and strengths of this traditional pedagogical approach. It is proposed that, given these limits and strengths, the counterpoint idea can best be adapted to at least some audiovisual composition and analysis by pursuing the idea of a 'counterpoint of gestures', which would entail the management of alignment/nonalignment of gestural primary goal points and sub-goal points. Fluid audiovisual counterpoint, then, would entail minimal occurrence of primary goal points, as a metaphorical extension of the principles of 4th-species counterpoint. This idea is explored with reference to the author's audiovisual work Clonal Colonies and gesture during North Indian classical vocal performance.
Isomorphism of Complex Gestalts: The Audiovisual Composition Autarkeia Aggregatum
Abstract: This paper provides a perception- and practice-based analytical perspective on the relationship between abstract moving image and sound in the context of the author's composition Autarkeia Aggregatum. In particular, it investigates the challenge of understanding and developing audio-visual relationships composed of extended, continuous isomorphism between mediums, formed with careful attention to higher-order, emergent dynamics of each medium in relationship to rhetorical unfolding. To this end, it investigates the applicability of selected conceptual frameworks and analytical approaches arising from electroacoustic music studies. The approach promotes the idea of relating the two mediums, not by mapping one to the other, but by considering both as manifestations of underlying temporal dynamics (tensions, implications, and rhetorical relationships) that are not, in their essence, either sonic or visual.
Modulated Feedback: The Audio-Visual Composition Mercurius
SIGGRAPH, 2009, New Orleans
Archetypes of Dissolution: the Luna Series of Audio-Visual Compositions
Seeing Sound: Visual Music Symposium, Bath Spa University, 2009
Abstract: This paper addresses aesthetic and technical aspects of the author's Luna Series of audiovisual compositions. Particular attention is paid to the parallel unfolding of complex audio and visual streams ('Isomorphism of Complex Gestalts') and to an algorithmic process for the visuals (Brownian Dispersal Filter) that entails compounded rotational schemes in 3D space which are collapsed back into a 2D image.
Reviews
Battey, B. and Amelides, P. (2013) Review of Alessandro Cipriani and Maurizio Giri, "Electronic Music and Sound Design: Theory and Practice with Max/MSP, Volume 1", trans. David Stutz. Organised Sound, 18(2), pp. 231-232. DOI: 10.1017/S1355771813000150
Battey, B. (2000) Review of Perry R. Cook (Ed.), "Music, Cognition, and Computerized Sound: An Introduction to Psychoacoustics". Organised Sound 5(2), pp. 111-112. DOI: 10.1017/S1355771800212089
Guest Lectures, Symposia, Master Classes, Etc.
Artist's Talk – Phoenix, Leicester – Dec 9, 2019 A Wave is Not a Thing: Three Breaths in Empty Space
Sound and Image Colloquium, University of Greenwich, UK — Dec 10-12, 2017 Mixing Streams: Generative Techniques and Audiovisual Counterpoint in the Estuaries Series
Force-Feedback and Music Symposium, McGill University, Montréal, Canada — Dec 6-10, 2016 Force-Feedback Interactions with Generative Music
University of Montréal, Montréal, Canada — Dec 8, 2016 “Estuaries 1” (More Wayfinding, Algorithms, and Impermanence)
Leeds University — Mar 2016 Wayfinding, Algorithm, Impermanence
Punto y Raya Academy, Madrid, Spain — May 7-10, 2015 Workshop and Master Class
GLEAM 2014, University of Glasgow, UK — Oct 31, 2014 Clonal Colonies: Wayfinding, Algorithm, Impermanence
Collider Conversation, Kinetika Arts Fair, London, UK — Oct 19, 2014 Panel Discussion
International Seminar Arte en el Siglo XXI, National University of Rosario, Santa Fé, Argentina — Sep 10, 2014 Seminar
University of Montréal, Montréal, Canada — Feb 26, 2014 Seminar
York University, Toronto, Canada — Feb 24, 2014 Composing the Audiovisual: Wayfinding, Algorithm, Impermanence
Iceland Academy of the Arts, Reykjavik, Iceland – Jan 31, 2014 Visual Music in the Digital Age
Intermedial Perspectives: Practice – Technology – Pedagogy Symposium, De Montfort University, UK — July 2, 2013 Embodied Audiovisuality: Mapping Gestures of an Indian Classical Vocalist
Collider Artslab / Elektrical Séance, Northampton, UK — May 7, 2013 Clonal Colonies: Complex Systems and Audiovisual Composition
Royal Conservatory of Music, Stockholm, Sweden — April 26, 2013 Clonal Colonies: Complex Systems and Audiovisual Composition + Composition Master Class
Technarte International Conference on Art and Technology, Bilbao, Spain — April 5, 2013 Clonal Colonies: Complex Systems and Audiovisual Composition
Code-Control Max/MSP Conference, Leicester, UK — March 23, 2013 Clonal Colonies: Composing with Variable-Coupled Map Networks in Max/MSP
University of Kent, UK — December 11, 2012 Clonal Colonies: Complex Systems and Audiovisual Composition
Oberlin Conservatory — September, 2011 Seminar, Master Class, Solo Concert
Juilliard School of Music — September, 2011 Seminar
New York University — September, 2011 Seminar
Durham University, UK — February, 2011 Audiovisual Composition with Complex Gestalts (The Luna Series)
University of the Arts, Berlin — January, 2011 Seminar and Lecture: Audiovisual Composition with Complex Gestalts (The Luna Series)
Visible Bits, Audible Bytes Symposium, De Montfort University/Phoenix Square, Leicester — October, 2010 Matching Complex Audio and Visual Gestalts
Digital Arts Symposium — Phoenix Square Film and Digital Media, Leicester — November, 2009 cMatrix12 background and technique
Stanford University CCRMA/School of Music — April, 2009 Graduate seminar
University of the Pacific, Stockton, California — April, 2009 Seminar
Center for Digital Arts and Experimental Media, University of Washington, Seattle — April, 2009 Graduate seminar
University of Hull at Scarborough, UK, Creative Music Technology — 2008 Visual Music in the Digital Age
Sonic Arts Network Expo 2007, Plymouth University — 2007 Computer Analysis and Composition with Bézier Spline Expression Curve Models
Practice as Research Symposium, Bath Spa University — 2007 Visual Music in the Digital Age
Guest Lecturer Series, Nottingham University School of Music — 2007 Visual Music in the Digital Age
Research Seminar, MTI, De Montfort University — 2007 Computer Modeling of Pitch and Expression Curves in Indian Classical Music
Artistic Process Symposium, MTI, De Montfort University — 2006 Retrospective Thoughts on Creativity and Metacreativity
Daniel D. McCracken Computational Sciences Seminar Series, Central Washington University — 2004 Computer Modeling of Expression Curves in Hindustani Classical Music
Machine Design Group, University of Washington — 2003 Computer Modeling of Hindustani Melodic Ornament
Animation Arts Guest Lecture Series, University of Washington — 2002 Form and Expression
Washington Composers Forum — 2002 An American Computer Musician in India
Lalit Kala Kendra and Inter-University Institute for Astronomy and Astrophysics, University of Pune, India — 2002 Computer Music: The Fusing of Artistic and Technical Creation
Indian Institute of Technology, Mumbai, India — 2002 Computer Music: The Fusing of Artistic and Technical Creation
Indian Institute of Technology, Kanpur, India — 2002 Computer Music: The Fusing of Artistic and Technical Creation
Indian Statistical Institute, Kolkata, India — 2002 Computer Music: The Fusing of Artistic and Technical Creation
Indian Institute of Information Technology Management, Kerala, India — 2002 Computer Music: The Fusing of Artistic and Technical Creation
Indian Institute of Science, Bangalore, India — 2002 Computer Music: The Fusing of Artistic and Technical Creation
CARTAH Lecture Series, University of Washington, Seattle — 2000 Digital Music: Integration of Algorithmic Music and Animation
New Music Northwest, Evergreen State College, Washington — 1999 Chaotic Networks for Algorithmic Pattern Generation
Microsoft Advanced Technology Group, Redmond — 1994 Behind the Juggling Jukebox (with James Jay)
The Pernerstorfer Circle (1999) A hypermedia exploration of the tremendous confluence of arts and ideas in Vienna circa 1900, as embodied by the Pernerstorfer Circle, with emphasis on Gustav Mahler. Part of the University of Washington's Vienna 1900 web site.