TIM PERKIS: PROJECTS
OPENFIELD
An internet-based sound environment by Tim Perkis. Performed on the internet and at Xerox PARC, November 1-8, 1994.

OpenField is an experiment in setting up an "acoustic ecology," in which agents defined by web clients take life and interact freely in a shared acoustic environment which is multicast on the internet. Users contact a WWW server to receive a form with which they set various parameters to design and play a musical agent. When the form is submitted, their agent is started up, creating a voice which goes into the mix. The entire mix is continuously multicast on the internet and can be monitored with VAT (the video and audio teleconferencing tool) at any site having an MBONE connection.

Consider the analogy of a natural environment: it's amazing to me how the various birds and insects "multiplex" the channel of the outdoor acoustic space, each species differentiating its signal from the others in the space. I'm interested in seeing what kind of ecology may emerge as multiple simultaneous users try to create agents which can be heard. The motivation to hear yourself, or the result of your own actions, is, I hope, enough to drive people to try to create agents with recognizable signatures. Without the option of playing loudly or continuously in order to be heard, each user must come up with a sound which is at once distinct from the other voices sounding, and in harmony with them, in the sense of basing its nature on the acoustic world the other voices define.

Another analogy to explain the focus of OpenField would be the set of sounds used by a language, or the set of graphic marks used by an alphabet: they develop a harmony by all evolving toward maximizing their mutual distinctiveness.

How it worked

I used a MIDI synthesizer and sampler, limiting my sound palette to short noise and tone bursts (chirps, creaks, clicks, etc.) and short vocal sounds, all of which are modulated in pitch and amplitude in continuous variation by the defined agent.
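To make the idea concrete, here is a minimal sketch of what one such agent voice might look like: an endless stream of short events drawn from a small set of basis sounds, with pitch and amplitude drifting in continuous variation. The function name, the sound labels, and all numeric ranges are illustrative assumptions, not the actual OpenField code.

```python
import random

def make_agent(basis_sounds, pitch_range=(48, 72)):
    """Yield an endless stream of (sound, pitch, amplitude) events.

    Pitch and amplitude wander by small random steps, clamped to a
    range, so the voice is modulated in continuous variation.
    """
    pitch = sum(pitch_range) / 2.0
    amp = 0.5
    while True:
        pitch = min(max(pitch + random.uniform(-2, 2), pitch_range[0]),
                    pitch_range[1])
        amp = min(max(amp + random.uniform(-0.1, 0.1), 0.1), 1.0)
        yield (random.choice(basis_sounds), round(pitch, 1), round(amp, 2))

# One hypothetical agent, contributing four short events to the mix:
voice = make_agent(["chirp", "creak", "click"])
events = [next(voice) for _ in range(4)]
```

In the real piece these events would trigger the MIDI synthesizer and sampler; here they are simply collected in a list.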
An agent consists of a semi-randomly assembled piece of code which has rudimentary hearing (it knows only how many other voices are sounding at the current instant), and which plays short sequences consisting of some subset of the set of basis sounds. Users define aspects of their agent's behavior by selecting the basis sounds for the agent's voice, along with some general behavioral parameters governing its timing and range.

The completed mix was sent out on the internet to sites connected to the MBONE, a subset of the internet which supports "multicasting": the transmission of data from one point to selected multiple receivers simultaneously.

In much of my recent musical composition work I've been working with genetic programming, in which populations of tiny computer programs in a special language are randomly mutated and bred under constraints which lead to evolution toward a goal. This work follows from that, in that the agents' control programs are created by a process of constrained randomness: generated at random, with statistical properties (the distribution of different opcodes) set by the user controls.
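The constrained-randomness idea can be sketched as follows: assemble a tiny control program by drawing opcodes from a distribution weighted by the user's settings, then interpret it with only one piece of "hearing" available, the count of other voices sounding. The opcode names, weights, and interpreter below are hypothetical illustrations, not the special language Perkis actually used.

```python
import random

# Hypothetical opcode set for an agent's control program.
OPS = ["play", "rest", "shift_pitch", "listen"]

def generate_program(weights, length=8, rng=random):
    """Assemble a random opcode sequence whose statistical properties
    (the distribution of different opcodes) are set by `weights`."""
    return rng.choices(OPS, weights=[weights[op] for op in OPS], k=length)

def run_step(op, state, voices_sounding):
    """Interpret one opcode. The agent's only hearing is
    voices_sounding: how many other voices sound at this instant."""
    if op == "play":
        state["events"].append(("note", state["pitch"]))
    elif op == "rest":
        state["events"].append(("rest", None))
    elif op == "shift_pitch":
        state["pitch"] += random.choice([-2, 2])
    elif op == "listen":
        # back off when the shared acoustic space is crowded
        if voices_sounding > 4:
            state["events"].append(("rest", None))
    return state

# A user biases the distribution toward playing notes:
weights = {"play": 4, "rest": 1, "shift_pitch": 2, "listen": 1}
program = generate_program(weights)

state = {"pitch": 60, "events": []}
for op in program:
    state = run_step(op, state, voices_sounding=3)
```

Mutating and breeding populations of such opcode sequences, rather than generating them once, is the genetic-programming variant mentioned in the text.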