Been thinking a bit more about mesh network busking. I’d love to have a go at composing something to be played over a few city blocks, and see how performers, people, sounds, spaces, traffic and so on all interact. And working out how to communicate that seems to be just as important as what it sounds like.
I think one of the strengths of the first few Music For Shuffle sketches is that everything is really legible. The iTunes UI is just as important as the audio content of the MP3s; it shows how the piece works, what’s going on during playback, and so on and so on.
So, when I get to do some mesh network busking, I want to make the invisible stuff – the network connections between the players; the systemic stuff – just as legible. Anyway. Enough typey typey. Here’s a quick exploration I did in PureData (audio) and Quartz Composer (visuals):
As you can (hopefully) see, it’s a piece for four instruments: piano, vibes, and two cellos. Each instrument is on a different ‘floor’, and randomly plays any one of 16 phrases (each recorded as an independent mp3) for an indefinite amount of time. The voices on each floor are visually connected via a simple polygon mesh; the vertices animate to various positions depending on which clips are triggering, and the colour of each face is mapped to how loudly its instrument is playing.
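(For the curious: the playback logic is simple enough to sketch in a few lines. This isn’t the actual PureData patch – just a hypothetical Python re-statement of the idea, where each voice repeatedly picks one of its 16 phrases at random, holds it for some arbitrary stretch of time, and reports a volume that the visuals could map to face colour. The duration and volume ranges are made up for illustration.)

```python
import random

INSTRUMENTS = ["piano", "vibes", "cello 1", "cello 2"]
NUM_PHRASES = 16  # each phrase recorded as an independent mp3

def next_event(instrument, rng):
    """Pick the next phrase for one voice: which mp3, how long, how loud."""
    phrase = rng.randrange(NUM_PHRASES)   # any one of the 16 phrases
    duration = rng.uniform(2.0, 20.0)     # seconds; stands in for 'indefinite'
    volume = rng.uniform(0.2, 1.0)        # what the face colour would be mapped to
    return {"instrument": instrument, "phrase": phrase,
            "duration": duration, "volume": volume}

if __name__ == "__main__":
    rng = random.Random()
    for inst in INSTRUMENTS:
        print(next_event(inst, rng))
```

The nice thing about stating it this plainly is that it shows how little machinery the piece needs – all the interest comes from the phrases themselves and the way the four random walks overlap.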
Anyway. Just a quick sketch. It’s making me think of that Goethe quote (“Architecture is frozen music”) in the context of read/write networks; Gabrieli writing for St Mark’s Square, Wagner and Bayreuth; the freedom and playfulness of Cage’s HPSCHD compared to the (literal) concreteness of the music in the Philips Pavilion; no doubt all sorts of stuff that has come out of the Bartlett over the past decade; states of matter and phase transitions (solid/liquid/gas/plasma music?); drones and quadcopters (quadcopter choirs, anyone?), maybe all mixed in with the rituals and conventions of busking. Bound to be something in there.