Music For Shuffle

Session #2

First performance for a paying audience! Back in mid-December 2015, I was invited by Switchboard Music to put together something for one of their weekly shows at the Center For New Music in San Francisco. They’re part of a lovely little cluster of groups and venues supporting emerging experimental music around the Bay Area. Huge thanks to Annie for the invite.

I decided to take what I’d tried in the studio a few years back (a randomized, animated score for piano, vibraphone and cello, displayed on a few iPads), and adapt it for performance. I wanted the whole thing to feel halfway between a gig and an art installation, with the audience wandering between and around the players.

The Center For New Music is a nice blank gallery-type space, so I positioned the musicians freely around it, and scattered some seating for the audience in amongst them. Rob Reich (piano) and Andrew Maguire (vibraphone) joined us again, and we had a new cellist, Helen Newby, who did an amazing job coming in totally cold.

We had no rehearsal, other than a brief chat. I asked them to interpret the score by thinking about going on a journey, with a start, middle and end, and to play around with simple/compound time. Other than that, it was up to them. I did a little intro spiel to the audience before we got started, encouraging people to walk about and peek over the shoulders of the players to see how they were interpreting the notation. Originally I’d thought about doing two or three smaller improvisations, but once the trio got cooking, I didn’t want to interrupt, so I just left them to it.

Here’s a recording. Apologies to the audiophiles for the sound quality – I know pretty much zero about mics, mixers and recording. Next time I’ll get the experts in to do it properly.

And here’s a tiny bit of video footage from it. It’s hard to make out, but the audience did stroll about during most of the performance, stopping next to each musician and watching them. It was lovely – it really did feel halfway between an installation and a gig, just as I’d hoped. It very nearly didn’t happen like that at all, though; it took a lifesaving intervention.

I stupidly hadn’t given much thought to lighting, and as the gig started, the space became quite dark and cloistered. It’s easy to underestimate how powerful the old ‘drop the lights’ cue is. Everyone immediately hushed up and settled in for the show, which was pretty much the opposite of what I was hoping would happen. My own fault, really. Should have seen that one coming.

But then, I was rescued. Huge, huge thanks to good pal Alice for being the first to get up and wander around in the dark (and for dragging my poor girlfriend Lauren along with her). After they led the charge, everyone else followed suit, standing in amongst the musicians, taking pictures and videos with their phones. It was great. Everyone should get the chance at least once in their life to stand next to a professional vibraphonist at work. Pro tip: always plant a friendly theatre professional in the audience to get everyone going.

Overall, I’m really pleased with how it went – hopefully I can keep the momentum up and plan another show soon. This whole area of networked, animated notation is wide open; there’s a lot of space for pottering around with ideas. And I’ll definitely give much more thought to lighting and movement around the space. I must ask Alice for some pointers, and read up on tips and tricks from museum/exhibit/signage designers.

I got some great feedback from the audience and musicians too. Andrew, the vibes player, wondered if I could send texts to the score during the performance. This could be lovely – rewriting expression markings on the fly or something. It could obviously extend to all sorts of other variables too: meter, tempo, key, whatever. It could be me directing things, or it could be randomized (or both). Rob, the pianist, suggested that he could use a MIDI keyboard to transmit the notes he was playing to the others’ iPads, as improv fuel. Fascinating stuff.
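Neither idea exists yet, but it’s fun to sketch. A live directive could be as simple as a little Codable value pushed out to each iPad. Here’s a rough Swift sketch – ScoreDirective and all its field names are hypothetical placeholders of mine, not anything in the app:

```swift
import Foundation

// Hypothetical shape for a live score directive – the type and field
// names are placeholders, not anything that exists in the app yet.
struct ScoreDirective: Codable {
    enum Kind: String, Codable {
        case expression   // e.g. "molto legato"
        case tempo        // e.g. "96" (BPM)
        case key          // e.g. "F# minor"
        case meter        // e.g. "7/8"
    }
    let kind: Kind
    let value: String
    let target: String?   // nil = broadcast to every player
}

// Rewriting an expression marking on the fly, aimed at just the cello:
let directive = ScoreDirective(kind: .expression, value: "molto legato", target: "cello")
let payload = try JSONEncoder().encode(directive)   // playground-style; wrap in do/catch in a real app
// `payload` would then get pushed to the iPads over whatever transport sticks.
```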

I used the gig as a kick up the backside to do some work under the hood on the notation software too. Up until now, I’d been using a simple JavaScript slideshow in a fullscreen browser, looping randomly through a set of hand-cooked PNGs. This time, I rebuilt everything in Swift, as a set of little iOS apps.

Compared to the first version, it doesn’t look that different on the surface. There’s no networking yet – each app just runs independently, which is fine for now (and a lazy way to add more randomness). I added a counter to each phrase (the number shown is the phrase number, not a time – I should revise this, as it confused pretty much everyone), and the notes appear in little animated cascades. No great shakes. But now that it’s all drawn in code, there’s a ton of room for wider exploration. It’s been making me think about building on things like Cornelius Cardew’s Treatise, or Milton Glaser’s lovely ads for Sony back in the 70s.
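For a flavour of what each iPad app does, here’s a minimal sketch of the loop. PhraseView and loadPhrases() are hypothetical stand-ins of mine – the real drawing code is more involved:

```swift
import UIKit

// Hypothetical: one phrase of notation, with a subview per note glyph.
final class PhraseView: UIView {
    var noteViews: [UIView] = []
}

// Stub – in the real app the phrases are drawn in code.
func loadPhrases() -> [PhraseView] { return [] }

final class ScoreViewController: UIViewController {
    private let phrases = loadPhrases()
    private var phraseNumber = 0   // shown on screen; easily mistaken for a timer

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        showNextPhrase()
    }

    private func showNextPhrase() {
        guard let phrase = phrases.randomElement() else { return }
        phraseNumber += 1
        view.subviews.forEach { $0.removeFromSuperview() }
        view.addSubview(phrase)

        // The cascade: fade each note glyph in with a small stagger.
        for (i, note) in phrase.noteViews.enumerated() {
            note.alpha = 0
            UIView.animate(withDuration: 0.3, delay: Double(i) * 0.1,
                           options: [], animations: { note.alpha = 1 },
                           completion: nil)
        }

        // Hold the phrase for a while, then pick another at random.
        DispatchQueue.main.asyncAfter(deadline: .now() + 20) { [weak self] in
            self?.showNextPhrase()
        }
    }
}
```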

I’m kind of relieved not to be thinking too directly about sound at the minute (apart from choosing the instruments and tonal palette). With notation, space, lighting and an audience, there’s just so much else to play with. Twas ever thus, I guess. Non-sonic musical objects and all that. It brings to mind that brilliant Charles Ives quote:

“My God! What has sound got to do with music! … what it sounds like may not be what it is.”

Perhaps modern music is getting to be less about the sanctity of brittle, crystalline recordings, and is strengthening ties back to its ancient roots: encoded instruction, pattern recognition, interpretation, expression, interaction and collaboration. Maybe we’ll see more (human and/or software-based) players interpreting the intent of (human and/or software-based) composers in (human and/or software-based) performance spaces.

Anyway. Enough reckons. I learned loads here. Must do more shows, and less headphone music. I think the next steps are to try and get the software networked, so I can pipe messages into the score in real time. Then we’ll see what happens.
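If I do go down that route, Apple’s MultipeerConnectivity framework looks like a plausible starting point for ad-hoc links between iPads in a room. Here’s a rough sketch of what piping messages in might look like – ScoreBroadcaster and the service name are made up, and the sender’s browser/invitation side is left out:

```swift
import MultipeerConnectivity
import UIKit

// Sketch only: broadcasts plain-text score messages to nearby iPads.
// "mfs-score" and ScoreBroadcaster are hypothetical names of mine.
final class ScoreBroadcaster: NSObject, MCSessionDelegate, MCNearbyServiceAdvertiserDelegate {
    private let peerID = MCPeerID(displayName: UIDevice.current.name)
    private lazy var session = MCSession(peer: peerID, securityIdentity: nil,
                                         encryptionPreference: .required)
    private lazy var advertiser = MCNearbyServiceAdvertiser(peer: peerID,
                                                            discoveryInfo: nil,
                                                            serviceType: "mfs-score")

    func start() {
        session.delegate = self
        advertiser.delegate = self
        advertiser.startAdvertisingPeer()
        // A sender would also run an MCNearbyServiceBrowser to find these peers (omitted).
    }

    // Push a message to every connected iPad.
    func send(_ message: String) throws {
        try session.send(Data(message.utf8), toPeers: session.connectedPeers, with: .reliable)
    }

    // Accept every invitation – it's a gig, not a bank.
    func advertiser(_ advertiser: MCNearbyServiceAdvertiser,
                    didReceiveInvitationFromPeer peerID: MCPeerID,
                    withContext context: Data?,
                    invitationHandler: @escaping (Bool, MCSession?) -> Void) {
        invitationHandler(true, session)
    }

    // The only delegate callback that matters here: update the score on arrival.
    func session(_ session: MCSession, didReceive data: Data, fromPeer peerID: MCPeerID) {
        if let message = String(data: data, encoding: .utf8) {
            print("Score message:", message)   // e.g. swap an expression marking
        }
    }

    // Required no-ops.
    func session(_ session: MCSession, peer peerID: MCPeerID, didChange state: MCSessionState) {}
    func session(_ session: MCSession, didReceive stream: InputStream, withName streamName: String, fromPeer peerID: MCPeerID) {}
    func session(_ session: MCSession, didStartReceivingResourceWithName resourceName: String, fromPeer peerID: MCPeerID, with progress: Progress) {}
    func session(_ session: MCSession, didFinishReceivingResourceWithName resourceName: String, fromPeer peerID: MCPeerID, at localURL: URL?, withError error: Error?) {}
}
```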