Chris Randall: Musician, Writer, User Interface Designer, Inventor, Photographer, Complainer. Not necessarily in that order.
September 30, 2015


by Chris Randall

This article on Engadget caught my eye this morning. The tl;dr version: Deadmau5 is streaming his studio work (and gaming, I guess) on Twitch.

Now, everyone reading this is no doubt familiar with my love of process, and while I don't much care about Mr. Zimmerman's process in particular, I like the general idea of sharing your work while you do it. I'm a visual thinker, and I get way more out of watching someone do something than I do from reading an article or instruction manual. The vast majority of my learning comes from watching process videos and talks on YouTube. In point of fact, my favorite YouTube channels are Jimmy DiResta and I Like To Make Stuff, both of which are (while not music related, even a tiny bit) 100% about process.

I don't have any particular problem with people watching me work; in point of fact, the results are generally better because of the audience. (As long as I'm not doing vocals. That's a different story.) My question about this idea is this: is it something other people find interesting? I mean, would you sit on your couch for an hour and watch someone patch a Euro system or program beats on Twitch or YouTube Live? I don't generally watch music production process videos myself, because they are (and I am in no way tooting my own horn here; just stating a fact) usually put up by people far less experienced than I am in electronic music production.

It wouldn't be that much trouble for me to pull this off. I have a commercial broadband connection here you can drive a truck through, and the technical knowledge to provide pretty good video and audio streams. However, I honestly have no idea if it's something you guys would be interested in, and thus worth the trouble.

(It would, however, be an excellent impetus to keep my office clean.)

September 10, 2015

Stereo Effecting...

by Chris Randall

Yeah, long time no post. I know. I'm a terrible person. Email me if you need my lawyer's address. Extra super-duper busy around here this summer, as we prep for our fall releases. We have several hardware products coming out before the end of the year, and it doesn't leave me much time to do the interesting things that I normally talk about here.

Speaking of, we're pleased to announce the release of ADM10 Kompressor and ADM11 Dimensions. These are interesting inasmuch as they are the first direct ports of our plug-in products to hardware; previous releases have all been sort of based on various aspects of our plugin line, but these two are the first to basically go whole-hog. Kompressor contains the entirety of our Rough Rider plug-in, while Dimensions contains the entirety of Fluid. Both have extra hardware-appropriate features, but the DSP code is almost verbatim.

These also mark the debut of our ad-ab-03 platform, which is the replacement for our initial ad-ab-01 platform (from which all of our 3-knob effects were derived.) We have ceased manufacture of the -01 based products; they are still readily available in various retailers around the world, and we have enough stock on hand to honor warranty replacements and what-not, but we won't be making or shipping any more.

ad-ab-03 is far superior to the -01 models. It has, as you can plainly see in the above picture, four panel-bolted BI 100K pots, custom Rogan knobs, and a USB port for firmware updates. We've double-stacked the boards so we could squeeze out 2HP more, resulting in a 6HP module. All in all, this new platform shows everything we've learned about making hardware in the last couple of years, and is a sizeable improvement.

Some products from the existing line will be ported to the new system. Some will not. We haven't come to a firm decision in that regard. We'll keep you informed, though.

In any event, both Dimensions and Kompressor are available for immediate shipment from the Audio Damage store, as well as many of our retail partners. If your favorite store doesn't carry them, be sure to ask, and they'll contact us directly. We also have Sequencer 1 back in stock for direct orders.

In other news, yes, we have several more hardware releases before the holidays. At least one of those will be a Eurorack module. Read that however you want. I've been working 12-hour days for the last week, and I'm going to take the evening off to try out this new version of Reaktor.

June 27, 2015

The Origin Of The Species...

by Chris Randall

Steve Hamann asked an interesting pair of questions on Twitter this morning: "What is the origin of the floating hands and electronic gear music video?" And he followed that up with this: "For a lot of people it seems to have become a musical end unto itself, I wonder where/when it started?"

I am obviously a strong proponent of this particular form of expression. My first YouTube upload, in 2007, was only the first in a long string of Hands videos in my channel; roughly two thirds of my uploads fit into this category. These sorts of videos are de rigueur these days for any aspiring synthNerd, and Audio Damage even makes a product specifically for making them with your modular synth and iPhone.

If you've been a long-time reader of this site (10 YEARS NEXT MONTH HOLY SHIT!!), you've had these videos inflicted on you many times. To address the second part of Steve's musings, speaking strictly for myself, the video is absolutely the musical end, and I generally write the music specifically for the video. This started happening in the beginning of 2011; before that, any video upload had generally been an afterthought, but this one is the first where making the video was the goal in and of itself:

Many others followed, of course, and while the early ones were recorded and mixed, with the audio released elsewhere, I've gradually gotten to the point where the video is the release entire, and I don't actually include any downloadable audio content. I'll admit I hadn't actually thought about the "why" of this until today, and I don't have a good answer. In pondering it, I think a lot of the reason I put up the videos (aside from demonstrating the cool shit we make) is that they show off my skill as a musician, inasmuch as I'm capable of demonstrating skill, and serve as "proof" that no trickery was involved. I think with the growing popularity of modular synths, and the dick-swinging inherent in that group of instruments, they also serve as a nice set of bona fides: "look at all this dope shit I have, and here's proof that I'm good at using it."

So, that's me. But it leaves Steve's questions sort of unanswered: where's the Ur-Hands video? And why do other people do it? I'm personally curious as to why people put up so many really shitty ones. One of the Facebook groups I belong to is basically just a constant stream of really terrible sound (I won't even call it music) captured with cell phone mic audio. These videos are essentially worthless, whether as a demonstration of prowess or a snapshot of a musical moment. Your thoughts? Can we find the first Hands video on YouTube?

May 4, 2015


by Chris Randall

For reasons passing understanding, I've decided I'm going to do my next release 100% Euro. So in my munificent free time over the last couple weeks, I've been trying out different workflows to make that experience relatively painless, and by that, I mean that I'm looking into ways to keep the context from getting in my way. My normal course of action would be to do everything in individual passes, and edit/mix/arrange in the DAW, with additional production done digitally.

For this release, I'm hoping to avoid that and do entire songs in one pass. So I went to the trouble of assembling a Eurorack instrument-unto-itself; I'm still fooling a bit with the exact layout, but I pretty much have it down right now to something I can make complete tracks with. It's a 12U Monorocket case with a pair of Sequencer 1s and an obvious collection of modules. Taken as a single collective instrument, it needs to be learned and mastered. Which is what I'm doing right now.

The video above shows where I am in the process. This is by no means a musical statement, but rather just a recording of me exploring some different methodologies for performing with this thing, and learning what it is capable of. You'll note the thing I'm fiddling with off to the side, which is a Boss RRV-10 that I circuit-bent some years ago. It has made appearances in many of my videos, but it never occurred to me to "play" it in real time. So I'm experimenting with that here. The monome is hooked up to the Earthsea module, and I'm just using it to play the little melody that is running into the RRV-10.

So, it's coming along. I think I'm getting close to being ready to patch some music into life. I will be recording these tracks "stemmed" (in a manner of speaking) direct to 8-track tape, then mixing them down to my MTR-11 1/4" deck. I will _try_ to make a video of each performance, but I'm not making any promises there. Stopping a good creative flow to set up the camera and deal with all that nonsense is kind of a drag.

For an earlier snapshot of the current journey, you can have a listen to this (no video, sorry.) Unlike the above song, I made this before I made the decision to not multi-take things, so it is actually 4 passes, rather than one.

April 21, 2015

Crandall's Simple Steps To Avoid UI Suck...

by Chris Randall

When I gave my talk on UI design for music software at UCSB, I attempted at the end to distill my rant to its essence and provide a simple set of guidelines for UX and UI in plug-ins and apps for musicians. While some of this seems self-evident, I came up with these steps with the idea of providing some insight into our world for engineers and academics who might not have any experience with professional musicians.

These are by no means Rules™ that must be adhered to, but rather some simple tips to keep your software product from looking like Pd. Basically. I break them all the time, but I have 75+ commercial products under my belt, so I get to do what I want. :-)

Musicians are generally either in a dark studio/spare bedroom/basement or on stage, and generally working in the evening or at night. Looking at a bright white slab of screen can be irritating, and occasionally painful. A UI for musicians should be lighter-colored elements on a dark background. The accepted guideline for contrast is 4.5:1 for normal text and 3:1 for large text. In real-world RGB terms, assuming a black background, your text should be at least #959595 or lighter. (I prefer lighter.) Dark text and elements on a light background just suck for the most part, but if you go that route, maintain the same contrast ratio for any element that provides information to the user.
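If you want to sanity-check those numbers yourself, the contrast math is simple. Here's a quick sketch in Python using the standard WCAG 2.x relative-luminance and contrast-ratio formulas (the function names are mine, not from any library):

```python
def srgb_to_linear(c):
    # Linearize one 0-255 sRGB channel per the WCAG 2.x definition
    c = c / 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb):
    # Weighted sum of the linearized R, G, B channels
    r, g, b = (srgb_to_linear(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    # Lighter luminance over darker, offset by 0.05; ranges 1:1 to 21:1
    l1, l2 = relative_luminance(fg), relative_luminance(bg)
    lighter, darker = max(l1, l2), min(l1, l2)
    return (lighter + 0.05) / (darker + 0.05)

# #959595 text on a pure black background
print(round(contrast_ratio((0x95, 0x95, 0x95), (0x00, 0x00, 0x00)), 1))
```

Run that and #959595 on black comes out around 7:1, so it clears the 4.5:1 floor with plenty of headroom; anything lighter only does better.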

Traditional fonts (and yes, I include the venerated Helvetica in this group) were not designed for readability on high-resolution computer screens at small sizes. They were designed for signs and newspapers. Don't use them. Make the effort with a modern display font, designed for modern systems. I am a DIN whore, I won't deny. But you can do far worse than Source Sans Pro, which is free as in your mom, and was made by Adobe specifically for user interfaces on modern high-resolution displays. (Google also offers quite a few modern display fonts for UI work that are free-ish.)

Unless, in addition to being a top-notch DSP engineer, you're also highly skilled at using 3D modeling software to make user interfaces, Don't Do It. There is a place for skeuomorphism in audio software: it's usually reserved for interfaces meant to ape vintage gear, to provide the user with a familiar experience. So I won't dismiss it out of hand. But it's something best left to pros. You're far better off just making a circle with a little line on it for a knob. It's hard to fuck that up.
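For what it's worth, the "circle with a little line on it" knob really is just a bit of trigonometry. A minimal sketch in Python: the 270-degree sweep matches a typical hardware pot, but the function name and the screen-coordinate convention (y grows downward, as in most UI toolkits) are my assumptions:

```python
import math

def knob_indicator(value, cx, cy, radius, sweep_deg=270.0):
    # Map a normalized parameter value (0..1) to the endpoint of the
    # indicator line, sweeping clockwise from lower-left to lower-right
    # like a typical pot. The angle is measured from 12 o'clock.
    theta = math.radians(-sweep_deg / 2.0 + value * sweep_deg)
    x = cx + radius * math.sin(theta)
    y = cy - radius * math.cos(theta)  # minus: screen y increases downward
    return (x, y)

# At half travel the pointer sits straight up from a knob centered
# at (100, 100) with a 40-pixel radius
print(knob_indicator(0.5, 100.0, 100.0, 40.0))
```

Draw the circle, then a line from (cx, cy) to that endpoint, and you have a knob that scales to any size and resolution without a single bitmap.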

When an engineer or academic sort is intent on making a piece of commercial (or professional, at least) music software, he/she tends to get the DSP done first, then put a UI on it during or after the process. This results in a product that doesn't have a holistic feel. It is far better to codify your initial DSP idea, then design and code a full UI, then fit the DSP in. There's no law that says you can't change the UI during or after the process, but it really helps make a better product when you're building to a set goal, rather than "seeing where things lead." Sometimes, that's unavoidable, but you should really see where things lead before you draw the first pixel.

Designing products for musicians, if you're not one, results in bad products. You wouldn't want to buy a car designed by someone who doesn't drive, would you? There are... well, I won't say "standards," but there are ways of doing things in the music world that can go against normal UX conventions, and if you've never made music for money, in the studio or on stage (preferably both), then you should get somebody in your Circle of Trust who has, and does. And I don't mean at the beta-test stage. I mean as soon as you have the UI coded. Fitting the DSP into a musician-friendly context is much better than trying to make a scientific/academic chunk of DSP musician-friendly by brute force after the fact.

Anyhow, these are just ideas that some may find helpful. If I'm way off the reservation, or other designers that read this blog have some different (or better) ideas, by all means hit up the comments.
