In January and February 2010, SquareTangle's Adam Nash and John McCormick are artists in residence at the Ars Electronica Futurelab in Linz, Austria.
This residency is an investigation into generating an evolving virtual audiovisual environment populated by agent-based entities. These entities are spawned initially in response to the physical presence and motion of people, then evolve according to formal audiovisual parameters determined emergently by the environment, the entities themselves, and the interaction between these and the material world. The practical skills gained can be parlayed into a combined interactive installation and live audiovisual hybrid material/virtual performance artwork.
Detailed Project Concept:
Over the past two years, we have been working, both individually and in collaboration with other artists and technologists, to investigate the flow of data between the material world and synthetic worlds, in particular realtime 3D environments. Often referred to as “mixed reality”, such relationships represent “sites of alterity where material and incorporeal forces will continue to engender further connection and differentiation” (Munster, 2006, 20). This project investigates the feasibility of setting up a persistent realtime 3D environment populated with agent entities. These entities are spawned initially by motion (and audio) detection of humans in a physical space, and then evolve according to formal audiovisual parameters determined emergently by the relationships between: the virtual entities and each other; the virtual entities and the virtual environment; the virtual entities and the material environment; the virtual environment and other virtual environments; and the virtual environment and the material environment. The project is a multi-faceted, practical audiovisual experimental testing of the “cultural perception that material objects are interpenetrated by information patterns” (Hayles, 1999, 14).
We are already experienced in many techniques for moving data between material and virtual worlds: see, for example, Adam's 2008 work BabelSwarm, a collaboration with Justin Clemens and Christopher Dodds in which voice and text chat activated artificial life entities within Second Life, or our 2008 work Ways To Wave at 01SJ, where an acrylic flower-like sculpture with 24 “petals” controlled an audiovisual volumetric sculpture in Second Life. Given this, the purpose of the Futurelab investigation is to concentrate specifically on techniques that facilitate the creation, evolution and emergence of virtual, agent-based, audiovisual entities capable of engaging, to some degree, in an affect cycle with the material world.
We will work to incorporate into the system an ability to convert captured audio into an evolving sonic scape, one capable of responding dynamically to sonic stimulus as well as evolving according to given formal parameters, many of which will emerge during an iterative cycle of development and discovery. This alone would constitute a satisfactory outcome of the research process at Futurelab, since the principles isolated by the experimentation will be not only specifically applicable to the manipulation of sound, but generally applicable to digital media and synthetic environments. In this way, it is intrinsically related to the experimental development work in agency/AI described in the next paragraph.
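Purely as an illustrative sketch of the kind of mapping involved (all names, values and the amplitude-to-pitch rule are hypothetical, not a specification of the proposed system), a sonic parameter might both drift autonomously and respond to incoming audio amplitude:

```python
class SonicAgent:
    """Toy sketch: one evolving sonic parameter that responds to
    incoming audio amplitude while also drifting by its own rule."""

    def __init__(self, pitch=220.0, drift=1.01):
        self.pitch = pitch  # current base frequency (Hz)
        self.drift = drift  # autonomous evolution factor per step

    def step(self, amplitude):
        # Autonomous evolution: the pitch drifts on its own each step.
        self.pitch *= self.drift
        # Response to stimulus: louder input pushes the pitch upward;
        # silence lets the autonomous drift dominate.
        self.pitch += 100.0 * max(0.0, min(amplitude, 1.0))
        # Clamp to a bounded range so the scape stays playable.
        self.pitch = min(self.pitch, 2000.0)
        return self.pitch

agent = SonicAgent()
quiet = [agent.step(0.0) for _ in range(5)]  # drift only
loud = [agent.step(0.9) for _ in range(5)]   # drift plus stimulus
```

The point of the sketch is only the dual determination: the same parameter is shaped simultaneously by an internal rule and by external stimulus, which is the structure the Futurelab experimentation would explore with real captured audio.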
We will also work to develop agent-based entities that are capable of responding to, and determining, a range of formal parameters as stimulus, with the aim of creating a system that displays a formal evolutionary ability. In her study of Darwin and Deleuze, Chaos, Territory, Art (2008), Elizabeth Grosz discusses the role of music and the erotic in evolution as “an excess over survival”. She quotes Darwin and others to argue that music is pre-language; this does not imply some kind of idealised return to a fetishised natural state, but rather that music is an essential yet poorly understood factor in evolution. This must underlie our investigation into synthetic evolution. Through iterative experimentation, we will be able to propose certain evolutionary hypotheses drawn both from empirical observation of the iterations and from our existing understanding of the behavioural possibilities implicit in the formal systems of music, dance and programming. We can then test these hypotheses in the virtual environment, propose modified hypotheses as a result, and so on. Deleuze says that “when music sets up its sonorous system and its polyvalent organ, the ear, it addresses itself to something very different than the material reality of bodies. It gives a disembodied and dematerialized body to the most spiritual of entities” (Deleuze, 2005, 39). This could be an important clue both to the affect the works are able to achieve on the part of the artist and the interactor, and to the formal approach to the framing of hypotheses, because music (like dance and the erotic) is well established as a non-visual abstract formal system that is generally capable of affecting listeners.
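The iterative propose-and-test cycle described above can be sketched in its simplest possible form (a generic mutate-and-select loop; the fitness criterion, parameter count and rates are all placeholder assumptions, since the real criteria are precisely what the residency would discover emergently):

```python
import random

def mutate(params, rate=0.1, rng=random):
    # Each formal parameter (pitch, hue, speed, ...) drifts slightly.
    return [p + rng.uniform(-rate, rate) for p in params]

def evolve(population, fitness, generations=50, rng=None):
    """Toy evolutionary loop: keep the fitter half of the population,
    then refill it with mutated copies of the survivors. `fitness`
    stands in for whatever emergent audiovisual criterion the
    experimentation settles on."""
    rng = rng or random.Random(0)
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: len(population) // 2]
        population = survivors + [mutate(p, rng=rng) for p in survivors]
    return population

# Hypothetical criterion: prefer parameter vectors near a target "mood".
target = [0.5, 0.5, 0.5]
fit = lambda p: -sum((a - b) ** 2 for a, b in zip(p, target))

rng = random.Random(0)
pop = [[rng.random() for _ in range(3)] for _ in range(20)]
evolved = evolve(list(pop), fit, rng=rng)
```

Because the survivors are always retained, the best individual can never get worse between generations; the interesting experimental work lies in replacing the placeholder fitness function with criteria that emerge from observation, which is exactly the hypothesise-test-revise cycle described above.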
It is important for the artists and developers to ensure an approach that does not privilege any particular media-element of the medium: there is a danger that the virtual environment is approached primarily or fundamentally as a visual medium, or that the evolutionary environment is mistakenly approached as simply a simulation of known material evolution. Here Deleuze and Guattari potentially offer guidance when they talk of making a “body without organs upon which intensities pass, self and other – not in the name of a higher level of generality or a broader extension, but by virtue of singularities that can no longer be said to be personal, and intensities that can no longer be said to be extensive” (Deleuze and Guattari, 2004, 173). Adam's work with Fabio Zambetta on dynamical policy modelling for interactive storytelling (Zambetta, Nash, 2007) will form a background to our approach, but will by no means determine its parameters or outcome.
In other words, much of this development work is anticipated to be technically straightforward, with the artistic and technical engagement (the messiness, the dynamic reconfiguring of parameters and hypotheses) emerging from the iterative experimental process. The result will be a unique, evolving virtual audiovisual artwork, interacting with itself and stimulus from the material world and other synthetic networked realms.
Simply put, the initial outcome of the collaboration between myself, John McCormick and the Futurelab developers will be the evolutionary, agent-based audiovisual virtual environment. This can then be incorporated into a physical installation that potentially operates on three levels, any or all of which can be negotiated as necessary and appropriate:
1. An interactive audiovisual physically sited installation:
This installation is based around any given physical space, made motion-aware via our existing system of camera-based motion capture. The virtual audiovisual environment is displayed via a standard projection wall and sound system. Each visitor to the physical space spawns an entity, which they can see and hear as they experimentally interact with it. It is feasible, though not necessary, for the system to “remember” individual visitors.
The result is an engaging interactive installation that establishes a genuine affect cycle between the material and virtual environments via motion capture and display. Every visitor will see a different evolution of the work on every visit.
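The spawn-on-motion logic at the heart of this installation can be sketched as follows (a deliberately minimal frame-differencing illustration; the threshold, the flat frame representation and all names are hypothetical, and our actual system uses camera-based motion capture hardware rather than this toy pipeline):

```python
def motion_energy(prev, curr):
    # Sum of absolute pixel differences between two greyscale frames,
    # represented here as flat lists of intensity values.
    return sum(abs(a - b) for a, b in zip(prev, curr))

class Installation:
    """Toy sketch: spawn one entity per burst of detected motion."""

    def __init__(self, threshold=10):
        self.threshold = threshold
        self.entities = []
        self.prev = None

    def feed(self, frame):
        # Compare each incoming frame with the previous one; enough
        # change means a person is moving in the space, so spawn.
        if self.prev is not None:
            energy = motion_energy(self.prev, frame)
            if energy > self.threshold:
                self.entities.append({"born_at_energy": energy})
        self.prev = frame

inst = Installation(threshold=10)
still = [0] * 64           # an empty, unchanging space
moving = [5] * 64          # every pixel changed by 5 -> energy 320
inst.feed(still)
inst.feed(still)           # no motion, nothing spawned
inst.feed(moving)          # motion burst spawns an entity
```

In the installation itself the spawned entity would of course carry audiovisual parameters into the virtual environment rather than a bare dictionary; the sketch only shows the material-to-virtual trigger.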
2. A motion (and other data) capture-driven live performance environment:
This operates both as an interactive installation (as in point 1 above) and as a live performance environment for virtuosic motion-capture dancers and musicians. The performance season would play out as follows: each day, physical visitors to the space interact with the virtual environment, spawning audiovisual entities and interacting with them. At showtime, the performers, who are virtuosic users of the system, ‘jam’ (using sound, visuals and movement) with the environment as it has evolved during that day. This creates an audiovisual dance show that is a genuine collaboration between the performers, the visitors and the virtual entities/environment. Each show will be different, and a season of such shows would be an iterative, evolving improvisation whose parameters emerge through the interaction of all relevant elements of the system. This would afford us an extraordinary opportunity to carry out sustained practical experimental work into contemporary theories of agency and display.
SquareTangle are in possession of several motion capture systems, and a portable 10m dome with a hardware/software setup for full-dome projection. We would like to tour this show (after an initial season at Ars Electronica, if appropriate) using this dome setup. We are currently investigating the viability of operating the entire dome setup on sustainable energy, to create a zero-carbon-footprint system. This appears viable using tracking solar panels and batteries, but much more work is required in this area, which could possibly form part of our research at Futurelab.
3. An online virtual environment operating in a symbiotic feedback cycle between itself, its members, other synthetic worlds, and the material world:
This can be incorporated into outcomes 1 and/or 2, or can exist independently on the internet. This networked environment would be open to the full range of input possibilities afforded by Web 2.0 and our material/virtual nexus experience. An example would be a virtual entity that is spawned by a visitor to the online virtual environment. Once spawned, the visitor and their entity are able to engage in an affect cycle via such things as Twitter feeds, SMS messages, and so forth.
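As a purely illustrative sketch of such an affect cycle (the message-to-mood mapping, the class name and the valence rule are all hypothetical placeholders; no real Twitter or SMS API is invoked), an entity might shift an internal parameter with each incoming message and emit a response in return:

```python
from collections import deque

class NetworkedEntity:
    """Toy sketch: an entity's 'mood' parameter shifts with each
    incoming message (a stand-in for a Twitter feed or SMS channel)
    and emits a reply, closing a simple affect cycle."""

    def __init__(self):
        self.mood = 0.0
        self.outbox = deque()

    def receive(self, message):
        # Crude placeholder valence: exclamation marks excite the
        # entity, ellipses calm it.
        self.mood += message.count("!") * 0.2 - message.count("...") * 0.2
        # The reply travels back to the visitor, continuing the cycle.
        self.outbox.append(f"entity mood now {self.mood:+.1f}")

e = NetworkedEntity()
e.receive("hello entity!!")       # excites the entity
e.receive("so quiet today...")    # calms it again
```

The substance of the real work lies in what the entity's parameters drive audiovisually and how visitors respond to the replies; the sketch shows only the two-way message loop that makes the environment networked rather than merely displayed.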
SquareTangle are already in possession of the skills, software, hardware and procedural knowledge required for the realisation of any or all of these three potential performance outcomes, in particular: motion capture hardware, software and expertise; realtime 3D virtual environment construction and internal/external data flow; visual and sonic display technologies, including projectors, sound systems and database interfaces; and a portable 10-metre dome with associated full-dome projection technology for international touring if necessary. The synergy with Futurelab lies in the practical, experimental investigation into agent-based evolutionary virtual environments and entities. This represents an exciting opportunity for both myself and Ars Electronica to develop internationally significant knowledge in a transdisciplinary project involving media art, contemporary dance, music/sound/DSP, artificial intelligence and agent technology, and audiovisual display.
Justin Clemens, Christopher Dodds, Adam Nash. BabelSwarm, Lismore Gallery/Second Life, Australia, 2008.
Gilles Deleuze and Félix Guattari. A Thousand Plateaus: Capitalism and Schizophrenia. Translated by Brian Massumi, Continuum International Publishing Group, 2004.
Gilles Deleuze. Francis Bacon: The Logic of Sensation, Continuum, 2005.
Elizabeth Grosz. Chaos, Territory, Art: Deleuze and the Framing of the Earth, Columbia University Press, 2008.
N. Katherine Hayles. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics, University of Chicago Press, 1999.
Anna Munster. Materializing New Media: Embodiment in Information Aesthetics, UPNE, 2006.
Adam Nash, John McCormick. Ways To Wave, Superlight, 01SJ Biennial of Global Art, San Jose Museum of Art, 2008.
Fabio Zambetta, Adam Nash, Paul Smith. Two Families: Dynamical Policy Models in Interactive Storytelling. IE2007 Australasian Conference on Interactive Entertainment, 2007.