April 17, 2011

Purity Of Procedure...

by Chris Randall

Ever since I switched from Pd to Max and didn't have to deal with incessant Problems, I've spent a lot more time working on my own music in this environment, rather than just using it for product development and such.

Lately, I've taken to making songs that run with little input from me, and contain no samples whatsoever. These tracks are built entirely from Max rudiments, and my only concession to convenience is in using reverb and compression plug-ins. (The main reason is that I'm not really able to make a compressor or reverb sub-patcher that is anywhere near the quality of Eos, ValhallaRoom, or DMG's upcoming Compassion compressor.)

These tracks use no external hardware of any sort (which sets them apart from, say, Procedure 1 off the most recent Micronaut EP, which uses external effects units) and, as I mentioned, no samples or pre-recorded material of any kind. They are, other than the 3 VSTs I'm using, entirely Max synthesis.

Now, the reason I'm bringing this up isn't really about the musical result, which is subjective, of course. As you all know full well, one of my foibles is placing strict limits on my creative process, and this is about the furthest I've taken that particular course of action. It is fundamentally the same as drawing a single Oblique Strategies card at the start of a session, one that says "Make Things Hard On Yourself." I mean, on the track from the screenshot above (which isn't even close to done) I spent about 5 hours coming up with the first sound, the kick drum. Working in a DAW, I'd have spent maybe 5 minutes in the Kick Drum Samples folder (called, in my case, Foot Samples, of course, since I'm funny like that) to find something that worked, carved it up for a bit with some EQ and compression, called it done, and moved on to the next thing.
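For anyone curious what "building a kick from rudiments" means in practice: the classic recipe is a sine oscillator with a fast exponential pitch sweep and an exponential amplitude envelope. This is not Chris's actual patch, just a minimal stdlib-Python sketch of the general technique, with all parameter values invented:

```python
import math

def kick(sr=44100, dur=0.5, f_start=150.0, f_end=50.0, amp_decay=8.0):
    """A minimal synthesized kick drum: a sine wave whose frequency
    glides exponentially from f_start down to f_end, shaped by an
    exponential amplitude envelope. Returns a list of float samples."""
    n = int(sr * dur)
    out = []
    phase = 0.0
    for i in range(n):
        t = i / sr
        # pitch glide: starts at f_start, settles toward f_end
        f = f_end + (f_start - f_end) * math.exp(-t * 30.0)
        phase += 2 * math.pi * f / sr
        out.append(math.exp(-t * amp_decay) * math.sin(phase))
    return out
```

The same idea maps directly onto a cycle~ object driven by envelope generators in Max; tuning the two decay rates against each other is where the hours go.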

Anyhow, the net result of this process, aside from the music itself, is that I have a fundamental understanding, at the physical level, of each and every sound in the track. This makes the creative process quite odd, I find, being so close to the sound generation. Obviously, none of us are strangers to these general techniques, but we usually deal with them at a much more abstracted level (e.g. the front panel of a synthesizer). Whether I could heartily recommend this course of action is open to question as well, but it is an interesting way to work. I think the closest analog would be building a track sound by sound using a modular synth, but even then, once the track has been laid down, you've unpatched it and moved on. Here, the entire song is a living, breathing sculpture with all of its parameters bare for tweaking.

But it ain't what I'd call a Streamlined Digital Workflow, I'll say that much.

Anyhow, has any other AI reader done something similar, so willfully obtuse? Obviously, making wrawk music or jazz with traditional instrumentation is a nearly identical process, but embracing physics instead of pure math. Or at least I think it is. I'm willing to be dissuaded from that viewpoint.


Page 2 of 3

Apr.18.2011 @ 1:36 AM
Another Csound guy here. Back in 1998 and 1999, I worked with the Csound / Common Music combination that was officially sanctioned in the computer music program I snuck into at the University of Washington. I was building up "drum" sounds with FM, additive synthesis, and a noiseband additive synthesis I invented. No samples allowed, no pre-cooked anything.

For one of my tracks, I used Common Music as a compositional language to spit out around 88,000 note events, playing a bunch of windowed sine waves for homebrew granular synthesis. I also used algorithmic rhythms, in order to get fast non-repeating drum patterns that still stayed in rhythm. I was trying to sound like Aphex Twin, and didn't really succeed.
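The windowed-sine-grain approach described above can be sketched compactly outside Csound as well. Here is a hypothetical Python stand-in (Common Music would normally generate the score; the grain counts, frequencies, and durations here are invented for illustration):

```python
import math
import random

def grain(freq, dur, sr=44100):
    """One Hann-windowed sine grain -- the basic unit of this style
    of granular synthesis."""
    n = int(sr * dur)
    return [math.sin(2 * math.pi * freq * i / sr) *
            0.5 * (1 - math.cos(2 * math.pi * i / max(n - 1, 1)))
            for i in range(n)]

def granular_cloud(n_grains=200, length=2.0, sr=44100, seed=1):
    """Scatter short grains at random offsets and frequencies into an
    output buffer -- a crude stand-in for a score of thousands of
    note events driving windowed sine waves."""
    rng = random.Random(seed)
    out = [0.0] * int(sr * length)
    for _ in range(n_grains):
        g = grain(freq=rng.uniform(200, 2000),
                  dur=rng.uniform(0.01, 0.05), sr=sr)
        start = rng.randrange(len(out) - len(g))
        for i, x in enumerate(g):
            out[start + i] += 0.05 * x  # mix grain into the buffer
    return out
```

Scale n_grains into the tens of thousands and you're in the same territory as those 88,000 note events, just without the compositional language doing the scheduling.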

Still, the tracks I worked on never came from one single process, where rhythm, melody, and timbre are all part of the same patch. One of my old co-workers, Tim Stilson, used to set up patches that would do this. Tim was a genius at physical modeling, and would come up with these crazy algorithms where the rhythms would be generated by a physically informed process, and then create sounds through fairly simple patches that would blow my mind. A typical sound might use ramp oscillators with various thresholds to trigger impulses (via sample and hold ugens), which would be sent into a resonant 2-pole filter, the output of which would be both FMing and AMing a sine oscillator.
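That signal path can be roughed out in a few lines. The following is not Tim's actual algorithm, just a minimal Python approximation of the topology described: a ramp oscillator fires an impulse at each threshold crossing, the impulses ring a resonant 2-pole filter, and the filter output both FMs and AMs a sine carrier (every parameter value is made up):

```python
import math

def ramp_impulse_patch(sr=44100, dur=1.0, ramp_hz=3.0, threshold=0.7,
                       filt_hz=220.0, q=20.0, carrier_hz=440.0):
    """Sketch of a ramp->impulse->resonator->FM/AM-sine signal path."""
    n = int(sr * dur)
    # resonant 2-pole (ringing) filter coefficients
    r = math.exp(-math.pi * filt_hz / (q * sr))
    a1 = -2.0 * r * math.cos(2.0 * math.pi * filt_hz / sr)
    a2 = r * r
    ramp, above = 0.0, False
    y1 = y2 = 0.0
    phase = 0.0
    out = []
    for _ in range(n):
        ramp = (ramp + ramp_hz / sr) % 1.0
        past = ramp >= threshold
        x = 1.0 if (past and not above) else 0.0  # impulse on each crossing
        above = past
        y = x - a1 * y1 - a2 * y2                 # filter rings at filt_hz
        y2, y1 = y1, y
        m = max(-1.0, min(1.0, y))                # clamp modulation signal
        # the filter output both FMs and AMs the sine carrier
        phase += 2.0 * math.pi * (carrier_hz + 200.0 * m) / sr
        out.append((0.5 + 0.5 * abs(m)) * math.sin(phase))
    return out
```

The rhythm, the timbre, and the modulation all fall out of one process, which is exactly what made those patches so striking.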

Apr.18.2011 @ 2:27 AM
I get the most gratification from a good tune (good lyrics hung on a nice melody wrapped around a lovely rhythm).

But I've always been interested in algorithmically based music. And I think when I hear a richly creative album, it often also has an 'obtuse' outset.

Whether it's 'only recorded on a 4-track cassette/iPhone/Max MSP', the constraints create a particular aesthetic.

After all, you can only transcend a framework if you got one. The only difference is whether you are aware of the framework or not.

Apr.18.2011 @ 6:21 AM

Apr.18.2011 @ 6:52 AM
I like how DGillespie said, "If" you finish, not "when".

I recently screwed up a project on a deadline [40 minutes of sound accompaniment for a dance/movement performance] by trying to switch to a Max/Jitter workflow somewhere around halfway into the project.

Had to abandon Max at the last minute in favor of more familiar and "easier" tools... just couldn't get it all together fast enough. The final project was weakened because of all the wasted time. And equally because the process itself got all unfocused.

Too bad, too, since given enough time the algorithmic/"physically-informed" process would have been cool. Way cooler. Most important, I would have been more satisfied with the result.

Apr.18.2011 @ 7:26 AM
Not quite the same thing, but I worked on a piece where every track was generated from an instance of NUSofting's Marimka, plus some fx, just to see what I would come up with. It turned out fairly interesting.


Apr.18.2011 @ 9:43 AM
Chris Randall
@moljen: "...you can only transcend a framework if you got one." Quote of the day right there.

I wouldn't ever bother with Csound, because I have at my disposal a fairly vast library of commercial-quality DSP code I can easily piece together. It actually occurred to me that I could simply _code_ self-playing songs and distribute them as multi-platform executables.

Kind of odd, but it might be cool.
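As a thought experiment, a "self-playing song" really can be a single small program. Here's a hypothetical stdlib-Python sketch that renders a deterministic note pattern straight to a WAV file, no samples, no plug-ins (the scale, pattern, and envelope are all invented for illustration):

```python
import math
import struct
import wave

def render_song(path="song.wav", sr=44100, bpm=120, bars=2):
    """Render an algorithmic eighth-note melody to a 16-bit mono WAV.
    The whole 'song' is just this function; run it and play the file."""
    beat = 60.0 / bpm
    n = int(sr * beat * 4 * bars)
    scale = [220.0, 261.63, 293.66, 329.63, 392.0]  # rough A-minor-ish set
    frames = bytearray()
    for i in range(n):
        t = i / sr
        step = int(t / (beat / 2))            # eighth-note grid
        f = scale[(step * 3) % len(scale)]    # deterministic note pattern
        env = math.exp(-4.0 * ((t % (beat / 2)) / (beat / 2)))
        x = 0.4 * env * math.sin(2 * math.pi * f * t)
        frames += struct.pack('<h', int(x * 32767))
    with wave.open(path, 'wb') as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(sr)
        w.writeframes(bytes(frames))
    return n
```

Compile the same idea to native code per platform and you have the multi-platform executable version of the track; the "score" is just the arithmetic in the loop.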


Apr.18.2011 @ 1:02 PM
Csound was great in its day, but it lacks a lot of features (like for() loops) that would make things easier to program.

As far as self-playing songs, seems like the iPhone/iPad would be a natural way of distributing these...

Apr.18.2011 @ 2:08 PM
mike kiraly
This example isn't directly related to yours, but the phrase "willfully obtuse" summed up how I felt about it.

I met a gentleman last week whose personal projects were re-constructing classical pieces using only 1 key in 1 instance of Absynth. He wanted to be able to hold down a single key and use the infamous 68-breakpoint envelopes to manipulate the pitch and timbre of that extended note to reproduce entire scores. He had been working on a Beethoven piece for a month straight. I thought it was definitely "willfully obtuse", but I had a ton of respect for the process and the intent.

Apr.18.2011 @ 9:03 PM
I know many Max users who have a masterful command of Max's inventory/taxonomy and the minutiae involved in getting things to work properly in the current version(s) of MSP/Jitter. I think perhaps Chris is more productive in his Max attempts than 98% of them, as he's actually generated (and sold) output in a reasonable timeframe. Sadly, I'm closer to the other 98% and would also sit there in minutiae for far too long to be productive... at least when creating musical patches.

I'm a huge fan of being able to tie a single controller (knob/slider/etc) to multiple parameters and tuning the values that are applied to each so that they correlate (something doesn't get filtered out of existence at the expense of tweaking another sound in tandem) and work together expressively. This is where Max currently applies for me, turning Live + Max4Live into a glorified Logic Environment.

I'm vastly underusing the MSP side, I know... But without Max4Live to sit between Live & Resolume or VDMX, my current personal undertakings in minutiae wouldn't be possible, so I suppose it's more a matter of where my interests lie when using the tool.

Apr.19.2011 @ 1:55 PM
@seancostello: True, vanilla Csound doesn't have for loops, but a lot of the newer frontends support Python scores (Cecilia comes to mind), which adds a lot of power there.

Personally, I find it refreshing to work in more "abstracted" environments like Max and MATLAB. Working at this level often alerts you to crutches in your compositional process as you think about the things you miss the most from other environments, and often shows you the BS you can live without. At the same time, you'll often find it influencing your past workflows. Working with sampling in Max changed my approach to working with Kontakt, etc.

What I've also found, though, is no matter how abstract and weird you think your compositional process may be, someone else has done something weirder.

You should also take a moment to think about the fact that many of our non-electronic friends think our simplest tools are over-complex bullshit.



