Future Music

65daysofstatic

Much more than your typical instrumental rock band, 65daysofstatic embrace electronic music to create a vivid fusion of thrashing post-rock and cerebral architectures. Danny Turner chats to founding member Paul Wolinski


Formed in 2001, Sheffield-based rock band 65daysofstatic have always had an experimental edge. Their debut album, The Fall of Math, broke the mould by pairing visceral processed guitars with bristling Warp-era sound design. The band continued to evolve on subsequent albums, merging glitchy beats and driving riffs with atypical ambient backdrops.

Always guaranteed to do the unexpected, in 2011 the band released an alternate soundtrack to the 1972 sci-fi movie Silent Running. This was followed a few years later by a soundtrack to the video game No Man’s Sky, where they employed generative music techniques responsive to in-game player actions.

65daysofstatic’s latest project sees them extend their explorative approach to the live stage. Drawing on experiments in algorithmic music and live coding techniques, their recent Decomposition Theory tour was an attempt to take generative audio beyond the comforts of a studio setting.

FM: It’s unusual for a rock band to be so forward-thinking, technology-wise…

Paul Wolinski: “I think it says so much about people’s expectations of bands that we only recently appear to be known for this. When we first started, I couldn’t play guitar. I had an Akai sampler, Cubase on a laptop and it was all samples and beats. Joe Shrewsbury had a guitar and some effects pedals, but we didn’t even have a drummer, so it was all about sheets of noise, samples and loops with guitars on top. Rob Jones joined as the drummer for the first EPs, but it was all electronic until we became a live band and those elements were added. We listened to as much guitar music as we did Aphex Twin and Kid606. We found both of these worlds exciting, but New Order, who will forever be my favourite, were putting guitars and electronics together decades before us, so it didn’t seem that revolutionary. We also ended up going down the rock club circuit rather than playing in dance clubs, which reinforced the direction we were going in.”

You’ve always stayed true to those principles; rock and electronic elements sitting side-by-side as equal partners. Is that the philosophy?

“We all have our specialities I suppose. Simon Wright, the bass player, and I are more geeky – although that’s not to say that Rob and Joe can’t use the technology. There’s no leader of 65 and no front man; it’s always been more than the sum of its parts. What we’re getting better at as we remove our egos from the process is being more comfortable with not having to combine those rock elements if they’re not necessary. Our latest project was much more electronic, and that was fine because we were going for quite a specific sound.”

A sound determined by moving much further into the box… What software are you using?

“I try not to get trapped into being too loyal to one program, but Native Instruments’ Kontakt is one of our big workhorses for synth samples and some of their piano libraries are really handy. I find Ableton Live is brilliant. You can put drums in there, but you can also put whole other instruments like Max For Live patches into the drum rack as just one of the drums and make MIDI patterns that are so incredibly deep in how they’re routed. Ableton have bought Max now, haven’t they? So that’s only going to get more tightly woven together. I really enjoy trying to get off the grid as much as possible. I don’t know why; I’m not against that, but we’ve just got so used to writing to fixed tempos and click tracks that it’s reached a point where we’re now trying to push ourselves somewhere different creatively.”

You’ve toured relentlessly. Is playing live a continuation of your studio experiments rather than a departure from them?

“Yes, but I don’t know if I’d use the word continuation. In the past few years, playing live has been a way of re-contextualising the songs. We don’t just want to recreate the studio. When we release an album, the songs have been composed specifically in the context of being listened to as a record. The mix is more subtle and there will be quieter bits, but during a live show the mix will be less subtle and the beats will be heavier. For example, there will be a lot more bite to the kick drums and guitars. I suppose we’re taking advantage of the volume and the ritual of a live show, but the fun part of playing live is turning these songs away from being faithful recreations of the studio.”

How did 2011’s alternate soundtrack to the sci-fi film Silent Running materialise?

“The idea of doing soundtrack work was appealing to us quite early on, but we were useless at networking and self-promotion and didn’t really have any contacts. In the end, Glasgow Film Festival asked us if we wanted to do a live score for a film of our choice. Our music’s already instrumental; we often get described as being cinematic and the songs are quite linear and have a narrative of sorts. We chose Silent Running, which is a good film, but I’m not sure it stood up to the amount of times we had to watch it [laughs]. One of the reasons we chose it was because none of the existing soundtrack overlaps with the dialogue. To us, the dialogue and other sounds are all part of the overall composition, so we could get as close to making a soundtrack as possible, and actually ended up with enough material to release it as an album.”

And your most recent release was the soundtrack to the video game No Man’s Sky…

“There are similarities between soundtrack work and video games in that you’re having to soundtrack action, evoke certain moods and bring to life someone else’s vision. But the big difference with No Man’s Sky is that they wanted this infinitely long soundtrack. There’s very little in terms of scripted events or a narrative; you just kind of get thrown onto a planet and can fly through space and explore everything. It’s like Second Life, but without the social element, so it’s quite lonely and solitary.”

Did the game designers give you a lot of guidance on how the music should relate to the user experience?

“They wanted the soundtrack to be dynamic, generative and respond to whatever the player does. But without the scripted events or closed environments you get in other games, it was difficult to know how to make music for it when a player could do anything at any time. It was a wonderful but huge learning curve working with the sound designer and audio director, who had built a system that we could compose within. It demanded that we think in a less linear way about music and try to create soundscapes that weren’t the textural, granulated Brian Eno-esque layers you’d normally associate with that field of work. We still wanted to have the big soaring sci-fi film themes and melodies, but somehow make generative music that was catchy and noticeable when it needed to be.”

Did you have to write specific parts that corresponded to the character’s movements or events, or is that element based on them taking samples from the music you’ve provided?

“We got increasingly involved as the project went on, but at the same time we were working outside of the core mechanics of the game engine itself. We were given abstract notions and told to make soundscapes that could be more or less ‘interesting’. Whatever that was tied to was completely out of our hands, but we had to have some sort of system with the game designers where if the game was going to suddenly get more interesting, we’d have to bring in elements in a certain order for them to create combinations of samples. We recorded hours and hours of music, but it was always very carefully catalogued. In the end, we released it as a double album, but it still doesn’t have everything on it that was actually in the game. The songs had narrative arcs that were rarely heard in the game because they’d been rebuilt and turned into liquidy, dynamic and responsive combinations of sounds that changed depending on what the player was doing.”
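
The specifics of that system belong to the game’s own audio engine, but the layering idea can be sketched very simply: an abstract ‘interest’ value, supplied by the game rather than the composer, pulls an ordered set of stems in and out of the mix. The stem names and thresholds below are purely illustrative, not taken from No Man’s Sky.

```python
# Minimal sketch of interest-driven layering: an external "interest" value
# decides which pre-ordered stems should be sounding. Names and thresholds
# are hypothetical, invented for this illustration.
from dataclasses import dataclass

@dataclass
class Stem:
    name: str
    threshold: float  # interest level at which this layer enters

# Ordered from an always-present bed up to the most "interesting" layers
STEMS = [
    Stem("ambient_bed", 0.0),
    Stem("pulse", 0.25),
    Stem("arp", 0.5),
    Stem("lead_theme", 0.75),
]

def active_layers(interest: float) -> list[str]:
    """Return the stems that should sound at a given interest level (0-1)."""
    interest = max(0.0, min(1.0, interest))
    return [s.name for s in STEMS if interest >= s.threshold]

if __name__ == "__main__":
    for level in (0.1, 0.4, 0.9):
        print(level, active_layers(level))
```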

In terms of the generative sound, did you use existing software programs or create your own?

“A mixture of both. For No Man’s Sky specifically, we used a piece of software called FMOD quite a lot. I think it’s primarily designed for sound designers making video games, but it’s actually quite familiar to anyone who’s used Ableton Live. It’s the kind of thing I’d imagine readers of your magazine would be into because it’s free unless you’re a games developer, so you can download it online and it looks just like a DAW until you get into it and start creating pools of audio and adding arbitrary parameters to them that aren’t necessarily based in time. The idea is that a sound designer would incorporate all the audio from a database into a game engine, but there’s nothing to stop you using it as a standalone noise-making piece of software. We used that to prototype a lot of the ideas for No Man’s Sky, but we’ve also become ever more deeply involved in using Max For Live. The possibilities for creating generative stuff in there are limitless really. The software’s great to use in the studio because it doesn’t matter if it crashes, but using it on stage is a bit more nerve-wracking.”

Would you say generative music implies a loss of creative control, or letting the technology take over to create its own vocabulary?

“Yeah, I think so. Paul Weir, who’s the audio director of the game, has a nice definition. Generative music and procedural audio get mixed together quite a lot and he sees procedural audio as something that is explicitly digitally synthesized in an environment, whereas generative music can be a collection of samples or WAV files and the generation comes from the logic you put on top of it. At a basic level, it could be called controlled randomisation, and then you can add layers of logic on top that make it ever-more intricate. That’s where we’re trying to make some progress with the music right now.”
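
As a rough illustration of that definition, and not the band’s own code, here is controlled randomisation in miniature: a pool of samples with one layer of logic deciding what plays next. The file names, weights and rules are invented for the example.

```python
# Controlled randomisation: the "generation" lives in the logic applied on
# top of a pool of pre-recorded samples, as described above. Sample names,
# weights and the no-repeat/fill rules are illustrative assumptions.
import random

POOL = {
    "kick_a.wav": 4,
    "kick_b.wav": 2,
    "snare_sparse.wav": 3,
    "glitch_fill.wav": 1,
}

def next_sample(history: list[str]) -> str:
    """Weighted random choice with a layer of logic: never repeat the
    previous sample, and only allow a glitch fill on every 4th step."""
    candidates = {
        name: weight for name, weight in POOL.items()
        if not history or name != history[-1]
    }
    if len(history) % 4 != 3:
        candidates.pop("glitch_fill.wav", None)
    names, weights = zip(*candidates.items())
    return random.choices(names, weights=weights, k=1)[0]

if __name__ == "__main__":
    sequence: list[str] = []
    for _ in range(8):
        sequence.append(next_sample(sequence))
    print(sequence)
```

Each run produces a different but recognisably constrained pattern; adding more rules on top is the “ever-more intricate” logic Wolinski describes.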

What purpose would you ascribe to generative music? Is it primarily a way to circumnavigate writer’s block or is that too simplistic?

“I think it’s early days in terms of watching it creep into mainstream popular music, and I use the term ‘popular music’ in an inclusive and positive way. I think 65daysofstatic is popular music relative to the world of academia that’s been doing generative music research for the last decade or more. As the barrier to entry gets lower and people like us can start to get involved and understand it at an amateur level as programmers, you can use it in the same way that you might use a synth preset as a starting point to create a sound of your own. So you can totally use generative music technology to quickly spit out loops that you know are unique, and it’s certainly a few steps more detailed than dragging Apple Loops out of GarageBand, because you can tweak the parameters and know that you’re going to end up with something much more interesting.”

What’s the downside?

“That you can use it in a really lazy way to create a homogenous musical future where everything sounds like it comes from the same algorithm. I think that’s the key, always understanding that whenever anyone talks about generative music, the algorithms involved are biased, and that’s what the composer needs to embrace and manage. Some of the cutting-edge stuff that’s happening in that world is really interesting – for example, the machine learning stuff that Google is doing where computers are spitting out classical piano music. Don’t get me wrong, it’s amazing, but it’s just computer programmers feeding algorithms with classical music from the last 200 years; the computer learns what they’re like and makes some more. What’s much more interesting is to get the algorithm-making away from the computer scientists that are inventing this stuff and into the hands of artists who prefer to feed it Euclidean beats from African drumming, opera or samples of the dustbin lorry outside your house. Using the power of the computer to generate these unanticipated consequences and then curating the output is the creative aspect, because a lot of the time the output is actually boring or sounds horrible.”
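
For readers curious about the Euclidean beats mentioned above, here is a generic sketch of the idea: spreading a number of hits as evenly as possible across a number of steps, which yields many familiar rhythms. This is a standard construction, not anything specific to 65daysofstatic.

```python
# Euclidean rhythm sketch: distribute `hits` onsets as evenly as possible
# across `steps` positions using a simple accumulator ("bucket") method.
def euclidean(hits: int, steps: int, rotation: int = 0) -> list[int]:
    """Return a list of 0/1 steps with `hits` onsets spread evenly."""
    pattern = []
    bucket = 0
    for _ in range(steps):
        bucket += hits
        if bucket >= steps:
            bucket -= steps
            pattern.append(1)   # onset lands here
        else:
            pattern.append(0)   # rest
    return pattern[rotation:] + pattern[:rotation]

if __name__ == "__main__":
    print(euclidean(3, 8))   # 3 hits over 8 steps: a rotation of the 3+3+2 tresillo
    print(euclidean(5, 16))  # a common 5-in-16 pattern
```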

Tell us about Decomposition Theory… You delved into algorithmic recording techniques for that and there’s a generative aspect too?

“Yeah, this is the first thing we’ve done since No Man’s Sky. That was wonderful and took up a lot of our time, but we learned a lot about the pragmatic nature of it being a huge video game that was also somebody else’s vision. We were only a very small subset, which meant that by the end of the project there were loads of ideas we wanted to try out that didn’t fit into the parameters we were working with there. This was our way of testing out all these things we’d learnt for ourselves and making a live show, partly because it can get confusing when you start pulling apart what songs are and moving away from them having any kind of definitive form.”

An album can’t sound different every time you listen to it, but a live show can…

“Yes, and maybe there’s a way of getting around that in the future and inventing ways of distributing generative music, but we thought we’d build a live show around that, which will sound different every time. It’s not just based on wild improvisation, but on using all the techniques we would normally use when writing songs, keeping our preferences in terms of melody and noise, but putting them into this system that will reimagine and remix them in real time, where the collaboration between us and the algorithms happens on stage, and we’ll try to visualise the process in quite a demonstrative way.”

What challenges did you encounter performing generative music in a live environment?

“It was pretty stressful. The age-old question of how to perform electronic music live still doesn’t have a definitive answer. You’ve always got to make these choices because it can only ever be so live unless you’re literally there with a modular synth that’s turning the electricity into audio. If you’re using laptops and samples you can get caught in the argument of whether it’s cheating to press play on an Ableton timeline and twiddle some filters, and does the audience even care if you’re doing that? I’m certainly not a purist, because if you’re pressing play and the audience is having a good time then there’s as much room for that as us doing live coding and algoraves. The complication that Decomposition Theory threw up for us is that we’re a live band that’s normally pretty active, jumping around and falling over and stuff, so watching us with our heads down behind laptops was not very exciting.”

How did you circumnavigate that problem?

“We couldn’t. The less we did on stage, the better the music was [laughs]. There was a huge 20 minutes of the set that didn’t require guitar, so Joe left the stage. Standing on stage with nothing to do is a horrible feeling, but playing guitar just because you’re there isn’t what we were after. There were no rules to any of this and we could do whatever we wanted, so we tried to make the performance as un-performance-like as possible, which was quite brave for us because it felt weird. But musically it was more effective, and when Joe did come back on, his guitar erupted above the electronics and it sounded fantastic.”

Did you achieve what you set out to?

“Ultimately, we were really pleased that we could get it up and running using crazy, complicated patches in Max For Live to create these logic states and intelligently switch between them. We spent weeks making those patches and getting them working, and once we did it sounded alright, but it didn’t sound as good as if we were all manually firing clips in Ableton and responding to the music as human beings. So there were a lot of interesting problems where the tensions between the idea and the execution forced us to make decisions, but because of the kind of people we are, we always did what sounded best for the audience regardless of the intentions or the process behind the show.”
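
Those Max For Live patches aren’t reproduced here, but the ‘logic states’ concept can be sketched in plain code as a small state machine: each section of a piece has its own pool of clips and a rule for when to move on. The state names, clips and transition rules below are hypothetical.

```python
# Toy "logic states" machine: sections of a piece are states, each with its
# own clip pool, and simple rules decide when to switch. Everything named
# here is invented for illustration.
import random

STATES = {
    "intro": {"clips": ["drone_1", "drone_2"],        "next": ["build"]},
    "build": {"clips": ["arp_1", "perc_1", "perc_2"], "next": ["peak", "intro"]},
    "peak":  {"clips": ["lead_1", "noise_wall"],      "next": ["outro", "build"]},
    "outro": {"clips": ["drone_1", "piano_tail"],     "next": []},
}

def perform(start: str = "intro", bars_per_state: int = 4) -> list[str]:
    """Walk the state machine, firing one randomly chosen clip per bar."""
    state, fired = start, []
    while True:
        for _ in range(bars_per_state):
            fired.append(f"{state}:{random.choice(STATES[state]['clips'])}")
        options = STATES[state]["next"]
        if not options:          # terminal state reached
            return fired
        state = random.choice(options)

if __name__ == "__main__":
    print(perform())
```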

In tandem with the hardware, is this where the software really comes into its own, enabling you to create entirely new soundscapes?

“What you can do with Logic, throwing data about and creating instructions in code or Max patches, is much higher level than what you can do with a modular synth, unless you have loads of modules, but then you can do sound-based stuff on a modular to a much greater degree of unpredictability. I don’t have a stance on the argument of which sounds better. Plenty of times we’ve used a Kontakt piano when there’s a grand piano in the recording studio because the Kontakt one sounded better. It’s just about finding a way to push any particular thing that can’t be pushed using other tools, if that makes sense.”

So what hardware did you use on stage to facilitate the generative experiments you’ve been working on?

“We have one dedicated master laptop and the whole Ableton Live project plays one song at a time as if we were in the studio rather than one big live set. Within Ableton, there will be tons of different patches driving an MFB Tanzbär drum machine and the Dave Smith Mopho. The guitar rig is totally untethered, so Joe can make direct noise through his pedals, but we also had that feed going via a soundcard into the main computer and back out into a different amplifier. That meant we could catch the guitar within all of the patched master controls to control loops, feed it through various digital side chains or just add tempo-driven effects. The rest of the band went through bass and various guitar amps and effects pedals, and we were also feeding some of the electronic hardware and soft synths through the amps in an effort to get as much out of the box as possible. We’re quite fond of re-amping software to make it a bit scuzzy.”
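
The band’s actual Ableton project and patches aren’t shown here, but as a rough sketch of the master-laptop-driving-hardware idea, a generative script can clock and trigger outboard gear over MIDI using the mido library. The port name and note choices are placeholders.

```python
# Rough sketch of driving outboard gear (e.g. a drum machine or mono synth)
# from a generative script via MIDI clock and notes. Requires the `mido`
# and `python-rtmidi` packages; the port name is a placeholder.
import time
import mido

PORT_NAME = "Your MIDI Interface"   # hypothetical; list real ports with mido.get_output_names()
BPM = 120
PULSES_PER_BEAT = 24                # MIDI clock resolution

def run(bars: int = 2) -> None:
    with mido.open_output(PORT_NAME) as out:
        out.send(mido.Message("start"))
        pulse_interval = 60.0 / BPM / PULSES_PER_BEAT
        for pulse in range(bars * 4 * PULSES_PER_BEAT):   # assumes 4/4
            out.send(mido.Message("clock"))
            if pulse % PULSES_PER_BEAT == 0:              # note on every beat
                out.send(mido.Message("note_on", note=36, velocity=100))
            elif pulse % PULSES_PER_BEAT == 12:           # half-beat gate length
                out.send(mido.Message("note_off", note=36, velocity=0))
            time.sleep(pulse_interval)
        out.send(mido.Message("stop"))

if __name__ == "__main__":
    run()
```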

And you’re using a lot of modular on-stage too?

“We have a whole host of modular gear, including some that Si’s made using Veroboard or home-etched boards. That includes a DIY 8-step sequencer with a CV/gate/trigger output synced to the Tanzbär via CV clock. The modular power supply comes from a Tiptop Audio uZeus, and we used the Doepfer A-190-2 MIDI-to-CV interface and Mutable Instruments Shades for audio/CV processing. We were also using stuff like the Intellijel Dixie II VCO and a Frequency Central System X Oscillator, which is based on the Roland 100M VCO.”

Your sound has a gritty edge to it. Is it a tricky balance to integrate so much distortion without overpowering the more subtle elements?

“It is difficult. I think we’ve come to terms with the fact that we don’t mix our own records. We always demo ourselves and know our way around every step, but if there’s some quiet white noise hidden in the background of a piano tune, then at one point it was definitely loud white noise taking up all of the frequencies, and we knew that was going to get reduced at some point during the final mix. The last two records were mixed by Tony Doogan in Glasgow, and I don’t know how he does it. We’re in a lucky position to have a label and the means to get someone with fresh ears to help us pull everything into place. I’m very aware that’s a privilege.”

“You can totally use generative music technology to quickly spit out unique loops”
