Chris Randall: Musician, Writer, User Interface Designer, Inventor, Photographer, Complainer. Not necessarily in that order.
 

Tags: Dropping Science


August 12, 2018

Poppin' Tags 001...

by Chris Randall
 



One of my favorite things on YouTube is the Rhythm Roulette videos on the Mass Appeal channel; I generally love process videos anyhow, and the particular strictures of the Rhythm Roulette format really push my buttons. If you're not familiar, the tl;dr version is that a hip-hop producer goes to a record store, puts on a blindfold, picks three records at random, and then has to make a beat from what they got. It's worth noting that they hardly ever use the records they get for the percussion parts; usually, it's Rando Trap Drums from PetaBytes O' Beatz 43, and I give a big point deduction for that. The ability to squeeze a drum kit out of any given piece of music should be mandatory.

So, for my own Poppin' Tags, I made that a rule: nothing in the song that wasn't originally on the three records. My other rule, which is a departure from Rhythm Roulette, is that it needs to be a full song at the end of the day, not just a beat.

Anyhow, with all that out of the way, the above video is the result. I actually filmed the entire process, but it was almost 3 hours long, and that was silly; who wants to sit and watch me edit samples for 3 hours? So I just cut out little bits here and there. The track, in its finished form (along with four others, to make a full 5-song EP), will be released on Bandcamp when they're all done.



In further news, there is now an Analog Industries merch store... Not much in it right now, but as things progress, it'll fill up. Check out the offerings.

 
June 2, 2018

A Grid-Based Lifestyle: Sound Experiments 003

by Chris Randall
 



Yeah, it's been a minute since an AI video, but we're gonna get back to that now. Readers may remember a series of experiments I did back in 2011/2012 with touchscreen-based control paradigms (here, here, here, and here, with some absolutely stellar discussions about usability in the comments). Those were admittedly somewhat early days for the entire concept; the iPad had only been out a couple months when I started those experiments, and the idea of an app-based control paradigm was a fairly new thing.

Fast forward to 2018: shit has progressed a bit, and people are generally used to using touchscreens for control. The reason for the video above isn't really about experimenting with the control paradigm, since that's pretty well-trod territory by now; I'm coming at things from a different angle. I've used a monome for years now, and I've written a Max4Live step sequencer for that platform that is pretty much only useful to me, and that I'm very happy with. However, I was using it last weekend, and I got to thinking that it would be dope if I could record control gestures along with the beats. Obviously, the monome itself is kind of shit for that sort of thing, so I first "ported" the control logic for the monome to a JUCE app, so I could run it full screen on a touchscreen monitor. When I did this, I was able to break out all the unlabeled control buttons to dedicated buttons, and improve the pattern memory and such.

After that, I gave each lane a four-bar gesture recorder; there are three gesture planes in all, and the X, Y, and Z planes can each be assigned to any parameter in Live. (In the quick demo above, I generally have them going to effects sends and suchlike.) The sequence memory and control is hosted in the M4L patch, but the gesture recording and playback is hosted in the JUCE app. Note that this is running on a separate computer entirely from the one hosting the Live session. (It is, in point of fact, that little Intel NUC, stuck to the back of the monitor with double-stick tape. It is communicating with M4L on the host computer via OSC over my home wi-fi network.)
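For the nerds: the plumbing for that last bit is dead simple. Here's a minimal sketch of what firing a gesture frame from a JUCE app at an M4L patch over OSC might look like; the port, address pattern, and GestureFrame struct are hypothetical stand-ins for illustration, not the actual code from my app.

```cpp
// Minimal sketch: sending gesture playback frames from a JUCE app to
// Max4Live over OSC. The port, address pattern, and GestureFrame
// struct here are hypothetical, not the real patch.
#include <juce_osc/juce_osc.h>

struct GestureFrame { float x, y, z; };   // one sample of a recorded gesture

class GestureSender
{
public:
    bool connect (const juce::String& hostIp)
    {
        // Assumes the M4L patch is listening with [udpreceive 9000]
        return sender.connect (hostIp, 9000);
    }

    // Called once per playback tick for each lane
    void sendFrame (int lane, const GestureFrame& f)
    {
        // e.g. "/gesture/3 0.42 0.77 0.10" -> mapped to Live parameters
        sender.send (juce::OSCAddressPattern ("/gesture/" + juce::String (lane)),
                     f.x, f.y, f.z);
    }

private:
    juce::OSCSender sender;
};
```

On the Live side, something like [udpreceive] feeding a [route /gesture/3] unpacks the message and shoves the three values at whatever parameters the lane is mapped to.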

There are actually 10 lanes of gesture recording; in the video above, if you look closely, you can see them labeled D1 - D6 (the drum lanes), Bass, and S1 - S3. I don't actually use the non-drum ones in the video, but they're there and working. There are 8 banks of 8 patterns, and each pattern has its own gesture memory.
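If you're curious how all that memory hangs together, here's a naive sketch of the data layout. The 10 lanes and the 8x8 banks/patterns arrangement are as described above; the step count and all the names are illustrative guesses, not my app's actual internals.

```cpp
// Naive sketch of the sequencer memory: 8 banks of 8 patterns, each
// pattern carrying step data and a gesture take per lane. Everything
// except the 8/8/10 counts from the post is hypothetical.
#include <array>
#include <vector>

constexpr int kNumBanks    = 8;
constexpr int kNumPatterns = 8;   // per bank
constexpr int kNumLanes    = 10;  // D1-D6, Bass, S1-S3
constexpr int kNumSteps    = 16;  // arbitrary guess for illustration

struct GestureTake
{
    // Four bars of recorded X/Y/Z frames at some fixed control rate
    std::vector<std::array<float, 3>> frames;
};

struct Lane
{
    std::array<bool, kNumSteps> steps {};  // which steps trigger
    GestureTake gesture;                   // this pattern's gesture memory
};

struct Pattern
{
    std::array<Lane, kNumLanes> lanes;
};

using Bank = std::array<Pattern, kNumPatterns>;
std::array<Bank, kNumBanks> patternMemory;  // the whole 8x8 grid
```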

I could easily add enough buttons that I could control the session entirely from the app and not have a Push2 there, but there's no sense in re-inventing the wheel. The purpose of the experiment was to prove the concept of fast, intuitive, real-time control of a Live session from a separate computer's touchscreen, and I'm pretty happy with things so far. The next step is to try to put together a whole song (or several songs) to perform; this isn't great for writing, as it requires a lot of prior preparation, but for performing, I think there's a lot of potential to be explored.

Side note: the two synth pads I play towards the end are both Quanta. That fairly major undertaking is reaching its final stages, and the synth is perfectly usable in a session now. So yay for that!

 
April 18, 2018

Dante Inferno...

by Chris Randall
 



A month or so ago, I got turned on to the concept of Dante, which is an audio-over-Ethernet protocol used primarily by big production houses and in live sound reinforcement. (I imagine it was invented by someone who had to run a 48-channel snake from the stage to the front-of-house every night, and got sick of being covered in spilled-beer-and-shoe-dirt slime.) In the simplest terms, think of a network, but instead of files and web sites, it serves digital audio. You use normal IT shit like switches and CAT 6 cable, but your goal is shunting massive channel counts of digital audio at ludicrously low latencies instead.

I had known about it for years (you can't help but see the references if you're looking at high-end AD/DA converters, since the usual suspects in that world all have Dante capability). I didn't really think about the ramifications until a friend beat me over the head with the concept. My ultimate goal in my home studio/office is simplicity. The fewer cables I have run, and the fewer conversions I have to deal with, the happier I am. It is especially attractive to me because I run multiple computers of different flavors, and having their I/O talk to each other in more-or-less real time would be excessively handy.

Once I was tipped to the potential, the full OCD Experience kicked in, and I started thinking about replacing my current rat's nest of I/O and monitor control. The main attraction for me is ultra-low-latency computer-to-computer connections. I've always thought it was dumb to convert to AES or SPDIF for a digital computer-to-computer audio interaction, and so do the Dante people. Nominally, a computer-to-computer connection would have to be done the old-fashioned way, with expensive converter boxes in the path. But Audinate (the company that invented Dante and is the Keeper Of The Holy Scriptures regarding the format) got that sorted in a big way, with three pieces of software that make Dante in a home studio an attractive option.



The first is Dante Controller, which is essentially a virtual patchbay that lets you connect Dante sources and destinations. Dante gear all has a Gigabit Ethernet port, and you basically just run everything to a normal Gigabit Ethernet switch in a star fashion. Dante Controller sees everything on the network, and lets you set clock masters and routings and shit. Controller is free.



The second piece of software is the Dante Virtual Soundcard. This is an ASIO and CoreAudio/WDM driver that works like any other sound card driver; it has 64 ins and outs (which is mildly comical in something like Live. Did you know the I/O panel scrolls? Neither did I) and ludicrously low latency. Any computer running the Virtual Soundcard and connected to the Ethernet rig throws its I/O to the network, and shows up in the Dante Controller patchbay. The driver is US$29 per computer.



The third piece of software is the one that seems like magic to me, and which is useful whether or not you have a Dante system. I can only assume it isn't better known because Audinate's business model doesn't lend itself to marketing to hobbyists and home studio folks. It is called Dante Via, and it basically lets you route _any_ audio source in your computer to any other. Think of it as Soundflower or Audio Hijack, but on pharmaceutical-grade methamphetamines. You can run either this or the Virtual Soundcard. It shows up as an ASIO destination in software that supports that, or a WDM/CoreAudio destination elsewhere. Instead of 64 I/O, you get 8 channels, but otherwise it is more or less the same as far as the DAW is concerned. You just drag-and-drop your sources to your destinations, and you can mix-and-match anything as you see fit and turn any piece of software or I/O into a sender/receiver. Dante Via is US$49 per computer (which is incredibly cheap considering what you get), or US$59 for a combo of the Virtual Soundcard and Via. There is a 15-day demo of Via on the site.

I purchased a Focusrite RedNet X2P to be my main monitor controller. It comes with a pair of Focusrite's high-end mic pres (their good stuff, not the prosumer Scarlett series). It can be powered off a PoE Ethernet switch, so it's just one CAT6 cable to the switch and that's that. It is built like a god damn main battle tank, and is one of those Just Plug It In And Go Because It's Really Well Made For People That Don't Want To Dick Around kind of things. My Adam nearfields run from that, and my desk situation is sorted. I connected the Skull Canyon NUC and my main computer to the switch via their Gigabit Ethernet ports, and with the former running the Virtual Soundcard and the latter running Via (I also use that computer for games, and would like to hear YouTube videos and shit), I have VCV Rack running on the NUC, with multi-channel audio at no noticeable latency running into Live, and Ableton Link providing wireless sync over my home wi-fi network. (Note that the X2P comes with a license for the Virtual Soundcard, as well as some Focusrite plugins I'll probably never use, but never say never, right?)

It is early days for Dante in a small studio, and many of the promised I/O solutions aren't quite here yet. That said, I can easily plug a high-end converter or mic pre rack into the system and it'll just show up. Audinate is releasing a 2-channel class-compliant USB dongle so you can plug iOS devices or visiting laptops into the network with ease, and there are several 2-channel AES, SPDIF, and analog solutions to bring legacy gear into the fold. ProCo and Radial also have similar small-and-cheap solutions either already released or in the works. I've only had this rig working for a day, so I can't really speak to its robustness, but it performed flawlessly with the above VCV-on-another-computer situation, as well as an hour or so with World Of Warcraft and a brief writing session in Live. I gave the mic pres on the X2P a quick check to make sure they worked, but I can't really speak to their all-around applications at this point. I'll put up another post with further thoughts after a month or so with this system.

 
March 29, 2018

Let's Talk About MPE In A Roundabout Way...

by Chris Randall
 



Long-time readers will know how I generally feel about "alternative MIDI controllers" when they come down the pike. I have two metrics for a new MIDI controller:

• Is it actually better than a piano-style keyboard?

• Would you look like a giant fucking douchebag on stage playing it?

The first one isn't so hard to overcome, I don't think. It can further be divided into sub-categories that are context-sensitive. Modern music making breeds Jacks-of-all-trades, and if the device has buttons that have notes in some semblance of order, anyone who can push buttons in a rhythmic fashion can make music with one. It's just a matter of learning where the notes are. So then the question is: are the buttons in some sort of order that makes sense to me? Speaking strictly for myself, I've played keyboards and guitar for at least an hour or two nearly every single day of my life since I was in my tweens. (Honestly, you'd think I'd be better at it, but I plateaued somewhere in the mid-80s.) I'll be 50 here in a couple months, and my hands _hurt_ when I play a keyboard for more than a minute or two. So ergonomics are a big consideration for me, whereas they might not be for a 20-year-old who has full command of his or her digits.

The second point is harder to deal with. It's called a "show" and not a "hear" for obvious reasons, and looking like a giant fucking douchebag is going to negatively impact shareholder value, as far as live performance is concerned. The number of swing-and-a-miss controllers I've seen at NAMM, where I don't even bother to get a demo because whatever the device is instantly triggers my "man, I'd look like a douche playing that" sensors... Definitely in the hundreds. Lasers and spheres and light-up rings and any number of other douchey configurations. Of course you can use any one of these things to make music, and time + dedication = virtuosity, but ultimately you want to look at least a little bit cool doing it. I'm old enough to accept that "cool" is a moving target, one I can't necessarily hit any more, but even so...

Anyhow, let's go ahead and get to the point. Starting early last year, people began writing the Audio Damage info line asking for MPE versions of our synths. Despite being deeply entrenched in music tech, I only had a vague notion of what MPE was ("something for alternative controllers or something" was my general understanding.) After I'd received several of these, I knuckled down to learn about the format, and was intrigued enough to drop Roger Linn a line and ask if I could borrow a Linnstrument for experimenting. I know Roger pretty well, having had a booth next to him at many trade shows, and he very kindly sent a Linnstrument 128 to me. The only MPE-capable synths I owned at the time were Madrona Labs Kaivo and Aalto (as well as the Animoog synth for iOS) so I booted them up, figured out how the hell to get Live to pass MPE, and sat down to experiment. For reasons lost to the dark past, I decided to film my very first play-about with the Linnstrument and Aalto.



As you can see, it clicked pretty quickly. After a few days with it, I decided to move my Kontrol 49 off the desk and see how this felt as my main controller. After a couple weeks, the Kontrol 49 went in the Closet Of Forgotten Toys, and I wrote Roger to tell him I'd be buying this one. A couple more weeks, and I'd talked Adam into buying one too, and now the product we're unveiling at Superbooth is fully MPE-aware. There's no zealot like a convert.

I took to the Linnstrument pretty quickly because the notes generally follow a guitar layout, so I knew where everything was, and it was only a matter of getting used to the dynamics. Having three axes of control once you've hit the note is remarkably expressive, and since, at the end of the day, MPE is just MIDI Plus, it more or less works with everything, while synths that are designed to take advantage of the format (e.g. the aforementioned Kaivo and Aalto) really shine in new and interesting ways. I won't bother giving a technical description of MPE; Reverb has already done a fairly good breakdown of that here, and there's no reason for me to reinvent the wheel. Long story short: picture a pad controller with aftertouch (like the Push 2), and make it so that after you whack the note, you can move your finger on the X or Y axis as well, and send MIDI CCs with that. Then give each note on the synth its own MIDI channel, so you can apply those MIDI CCs to an individual note without modding the entire patch.
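If you want to see what that looks like on the wire, here's a rough sketch using JUCE's MidiMessage helpers. The channel rotation and axis assignments follow the standard MPE convention (master channel 1, member channels 2-16, per-note pitch bend for X, CC74 for Y, channel pressure for Z); the little helper class itself is just mine, for illustration.

```cpp
// Sketch of per-note MPE traffic via juce::MidiMessage. The channel
// scheme follows the MPE lower-zone convention; the helper class is
// illustrative, not code from any shipping synth.
#include <juce_audio_basics/juce_audio_basics.h>

class MpeNoteSender
{
public:
    void playNote (juce::MidiBuffer& out, int note, float velocity)
    {
        const int ch = nextMemberChannel();   // one note per channel
        out.addEvent (juce::MidiMessage::noteOn (ch, note, velocity), 0);

        // After the strike, the three axes ride on that channel alone:
        out.addEvent (juce::MidiMessage::pitchWheel (ch, 8192), 0);          // X: bend (centered here)
        out.addEvent (juce::MidiMessage::controllerEvent (ch, 74, 64), 0);   // Y: CC74 "timbre"
        out.addEvent (juce::MidiMessage::channelPressureChange (ch, 0), 0);  // Z: pressure

        // ...none of which touches notes sounding on other channels.
    }

private:
    int nextMemberChannel()
    {
        const int ch = 2 + rotor;     // member channels 2-16
        rotor = (rotor + 1) % 15;     // channel 1 stays the master
        return ch;
    }

    int rotor = 0;
};
```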

There are really only four MPE controllers worth talking about right now, in my opinion: the Roli Seaboard series, the aforementioned Linnstrument, the Madrona Labs Soundplane, and the Haken Continuum. So basically your choices are "guitar-like" with the Linnstrument and Soundplane, or "keyboard-like" with the Seaboard and Continuum.

The upshot of all this, and my takeaway: I can fit five octaves of extremely expressive MIDI control in a space that is smaller than the typical PC keyboard, and I don't look like an idiot doing it, nor did I have to learn anything new, since I can already play guitar. This is a net win no matter how you math it out. As I hinted, we'll be unveiling an MPE-capable product at Superbooth, and we will have both a Linnstrument and a Seaboard Rise there to try out with it. (We'd have a Soundplane too, except that is somewhat larger, and our booth is small, and Randy will be there anyhow.)

I'd like to hear about your experiences with MPE or alternative controllers, especially playing live. I haven't played the Linnstrument on stage yet, but I'm comfortable enough with it that I would feel pretty confident doing so at this stage.

 
August 16, 2017

Some Thoughts On iOS Music Software...

by Chris Randall
 



When we started porting our products to JUCE 5, one side effect (and I mean that in the literal sense, as it was in no way a deciding factor to do this) was that I could check a couple boxes in the project manager and poop out a standalone app and AudioUnit V3 for iOS. I was already familiar with the iOS app submission process due to my previous experiences (remember Phaedra?) and Audio Damage already had a paid-up developer account so we could get the Apple signing certificate we need for signing our OS X installers and AAX plugins. So we figured we'd put one up and see where it went.

As it turns out, we sort of lucked into an empty socket, as only two of our desktop peers (VirSyn and Waldorf) had really taken the platform seriously, and the market was mostly owned, with a couple notable exceptions, by companies building specifically for iOS, who didn't have a lot of experience in making plugins for the much more robust and demanding professional music production market. This is in no way a bad thing, as the market had some very inventive tools that you don't see in the desktop world. But we just happened to stumble (through no planning of our own) into a situation where our product line, which has heavy competition in desktop DAWs, simply didn't exist on iOS.

Speaking strictly for myself, after I gave up on Phaedra, I kind of set the iPad aside as a music-making tool, and hadn't really thought about it again until late May of this year, when I saw I could build AUv3. I then had to acquire an iPad Pro and go learn what AUv3 was, and after some trial and error, and much crossing of fingers and scratching of backstays, we shipped Rough Rider (for free) in mid-June. Since then, we've tried to maintain something of a parity between the desktop and iOS platforms. Naturally, it didn't turn out to be as easy as I thought it would, and usually the iOS versions require rather extensive reworking of the user interface because of the space constraints. (The notable exception is Replicant 2, where I decided to use the iOS AUv3 aspect ratio for the desktop version as well.)

Since we started building them, I've necessarily spent a lot of time testing and poking and prodding, and had to acquire most of the top-shelf iOS audio software. I've made a couple tracks now using only the iPad Pro, and I'm willing to say that, while there is a whole raft of weird little Apple-isms to deal with (better than taming OMS on OS 9, but not as bad as trying to use MIDI in OS 7), in general I'm confident you can go to the store and buy an iPad Air 2 or Pro 9.7 (my recommended devices for AUv3 hosting right now, as the Pro 10.5 and 12.9 have some nasty RAM allocation bugs), spend about a hundred dollars on software, and have something equivalent to a fully-kitted MPC. Which, as we all know, is a perfectly viable platform for making full tracks.

Can it do everything a blown-out MacBook Pro or Surface Book can do? No, it can not. And Apple spends a lot of time trying to make seamless experiences that hide the machinations we need to know about to get the most out of our machines. Forget about using huge sample libraries, as these devices simply don't have the RAM or horsepower to pull that off. But for scratchpad recording and electronic music production, it's hard to describe the vibe. "Fun," I guess? Not a word I usually use with music-making, which, as a former professional musician, I take seriously and equate to work. New and exciting, definitely, and for creativity that is important.

Anyhow, to make a long story short, yesterday we released Pumphouse, which the video above talks about. Most of the iOS DAWs don't have sidechaining, and love it or hate it, that's an important facet of modern electronic music production. (I don't make EDM at all, and I use it all the time on my pads and basslines.) So we came up with a simple workaround: we gave Rough Rider a 16-step sequencer, so you can trigger an envelope to side-chain compress the input in a rhythmic fashion. We had thought this would be an iOS-only release, and had no intention of releasing it for desktops, as that effect is easily attainable elsewhere. However, we wrote the plug as a VST (simply because it is much faster to develop audio software on a desktop than on an iOS device), and it would be the work of a day or two to "back-port" it to all our supported platforms. So if this is something you're interested in, let me know in the comments. If you have any questions about iOS music production, I'll be happy to answer them to the best of my knowledge, or point you at the appropriate resource.
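For anyone curious what "trigger an envelope to side-chain compress the input" actually means in code, here's a stripped-down sketch of the general technique: a step sequencer firing a decay envelope that ducks the gain and lets it recover. This is the textbook idea with a simple one-pole decay, not Pumphouse's actual DSP.

```cpp
// Stripped-down sketch of step-triggered ducking: a 16-step sequencer
// fires a decay envelope that pushes the gain down and lets it
// recover. General technique only; not the actual Pumphouse DSP.
#include <array>
#include <cmath>

struct StepDucker
{
    std::array<bool, 16> steps {};    // which 16th notes trigger the duck
    float depth       = 0.8f;         // how far down we duck (0..1)
    float decayMs     = 120.0f;       // envelope recovery time
    double sampleRate = 44100.0;

    void onStep (int step)            // called by the host's beat clock
    {
        if (steps[step % 16])
            env = 1.0f;               // slam the envelope open
    }

    float processSample (float in)
    {
        // One-pole exponential decay back toward zero
        const float coeff = std::exp (-1.0f / (0.001f * decayMs * (float) sampleRate));
        env *= coeff;
        return in * (1.0f - depth * env);   // duck, then recover
    }

private:
    float env = 0.0f;
};
```

In practice the onStep() call would come from whatever transport info the host provides (the musical-context block in AUv3, the process context in a VST), but the ducking math is the same either way.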
 
