Chris Randall: Musician, Writer, User Interface Designer, Inventor, Photographer, Complainer. Not necessarily in that order.
 

Archives: June 2018


June 27, 2018

Grains, Stacks On Stacks...

by Chris Randall
 



Well, that took a minute, but we are hugely pleased to announce that Quanta, our hybrid granular / subtractive softsynth, is now available for purchase in the Audio Damage store. What seemed like a moderately complicated project back in October of last year turned into an eight-month exercise in applying the Kübler-Ross model to plugin development. Nominally, we try to have a bit of restraint in our designs, and hope to apply the K.I.S.S. principle to most things we make, within reason. But for Quanta, we took a more subtractive approach, and stuck everything up to and including several kitchen sinks in it, then carved away until a firm foundation remained.

Obvs, you can get the highlights, bullet points, and marketing hyperbole in spades over on the AD site, so I'll use this space to add some color commentary about the process. This all started when we were shopping for a new product idea, and I pondered that many (most?) granular synths for desktop were fairly complex and un-fun -- not to disparage them at all, mind; it is a complicated process -- and it might be an "interesting exercise" and furthermore "pretty fun" to make something more like Borderlands, which doesn't really exist outside of the iOS environment. Adam concurred, and we initially thought about using the grain engine we had built years ago for Discord. It rapidly became obvious that it wasn't up to the task of a big polysynth, so Adam went into his DSP Fortress of Solitude, and came out a month or so later with a full granular oscillator that you're using pretty much as-is in Quanta.

Now that we had that, it was largely a matter of wrapping things around it to make it into a fun instrument to use. After getting several requests to add MPE functionality to some of our existing products, I dropped Roger Linn a line and got us a couple Linnstruments to work with (detailed in an earlier post, if you recall) and decided that, since there aren't any MPE granular synths, it might be an interesting addition. This exploded the modulation sources to proportions big enough to require a mod matrix. And since we had a mod matrix anyhow, might as well throw an Imperial fuckton (1.3 metric fucktons) of other mod in the thing. Which is how it ended up with four 99-step arbitrary function/breakpoint generators.
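For the curious, the core idea behind that sort of breakpoint/function generator is simple enough to sketch: a list of time/level points, interpolated as a phase sweeps across them. The little C++ below is just an illustration of the concept; the names, the plain linear interpolation, and the structure are for this sketch, not Quanta's actual code (which does rather more than this).

```cpp
#include <algorithm>
#include <vector>

// Sketch of a breakpoint / arbitrary function generator.
// Each point is a (time, level) pair with time normalized to 0..1;
// the output is linearly interpolated between adjacent points.
// Quanta's generators allow up to 99 points per function.
struct Breakpoint { float time; float level; };

class FunctionGenerator
{
public:
    void addPoint (float time, float level)
    {
        points.push_back ({ time, level });
        std::sort (points.begin(), points.end(),
                   [] (const Breakpoint& a, const Breakpoint& b) { return a.time < b.time; });
    }

    // Evaluate the function at a normalized phase (0..1), e.g. driven
    // by note time or a tempo-synced ramp.
    float valueAt (float phase) const
    {
        if (points.empty())                   return 0.0f;
        if (phase <= points.front().time)     return points.front().level;
        if (phase >= points.back().time)      return points.back().level;

        for (size_t i = 1; i < points.size(); ++i)
        {
            if (phase < points[i].time)
            {
                const Breakpoint& a = points[i - 1];
                const Breakpoint& b = points[i];
                const float t = (phase - a.time) / (b.time - a.time);
                return a.level + t * (b.level - a.level);   // linear segment
            }
        }
        return points.back().level;
    }

private:
    std::vector<Breakpoint> points;
};
```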

So, with a mod matrix and a pretty good grain engine, and some excellent new mod sources, it became a design project, winnowing all that down to the most usable feature set and parameter ranges, and making it all look pretty. I'm not the best judge of such things, being somewhat too close to it to actually see if there's any forest all these trees are sitting in, but I do know that I love and will definitely use the result, and I can't say that about all of our products.

The one other real departure from our normal modus operandi is that we have decided to add a demo version. We have gotten on fine over the years not having demos, and frankly, any lost sales were more than made up for by the fact that demos are a pain in the ass to make and support. But since we spent so much time on this (about three times as long as a normal product), it necessarily has to cost a bit more than our normal run, and we can't, in good conscience, ask you guys to buy something that expensive on blind faith. Hence, a demo. It is fully functional, but cannot save (neither presets nor its state in a project), and stops making sound after 20 minutes.

Anyhow, head over and give it a spin, and let me know if you have any questions about the decisions we've made, or comments on its overall usefulness. This is, by far, the most complex software product we've made, and we're sweating a bit. It's a large slab of work for a 2.5 person company. Quanta is $79 introductory price, and will go up to $99 on August 1.

 
June 2, 2018

A Grid-Based Lifestyle: Sound Experiments 003

by Chris Randall
 



Yeah, it's been a minute since an AI video, but we're gonna get back to that now. Readers may remember a series of experiments I did back in 2011/2012 with touchscreen-based control paradigms (here, here, here, and here, with some absolutely stellar discussions about usability in the comments). Those were admittedly somewhat early days for the entire concept; the iPad had only been out a couple months when I started those experiments, and the idea of an app-based control paradigm was a fairly new thing.

Fast forward to 2018, and shit has progressed a bit, and people are generally used to using touchscreens for control. The video above isn't really about experimenting with the control paradigm, since that's pretty well-trod territory by now; I'm coming at things from a different angle. I've used a monome for years now, and I have a Max4Live step sequencer I've written for that platform that is pretty much only useful to me, and that I'm very happy with. However, I was using it last weekend, and I got to thinking that it would be dope if I could record control gestures along with the beats. Obviously, the monome itself is kind of shit for that sort of thing, so I first "ported" the control logic for the monome to a JUCE app, so I could run it full screen on a touchscreen monitor. When I did this, I was able to break out all the unlabeled control buttons to dedicated buttons, and improve the pattern memory and such.

After that, I gave each lane a four-bar gesture recorder; each recorder captures three gestures (the X, Y, and Z planes), and each plane can be assigned to any parameter in Live. (In the quick demo above, I generally have them going to effects sends and suchlike.) The sequence memory and control is hosted in the M4L patch, but the gesture recording and playback is hosted in the JUCE app. Note that this is running on a separate computer entirely from the one hosting the Live session. (It is, in point of fact, that little Intel NUC, stuck to the back of the monitor with double-stick tape. It is communicating with M4L on the host computer via OSC over my home wi-fi network.)
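If you're wondering what the plumbing looks like: JUCE ships an OSC module, so the sending side of a link like this boils down to a few lines. The sketch below is just to show the shape of it; the address pattern, port, and message layout are made up for illustration, not the actual protocol running in the video.

```cpp
#include <juce_osc/juce_osc.h>

// Sketch of the sending side of the gesture link: the touchscreen app
// streams each lane's X/Y/Z gesture values over UDP/OSC to the machine
// hosting the Live set, where an M4L patch maps them to parameters.
// Host, port, and the "/gesture/N" address are illustrative.
class GestureSender
{
public:
    bool connectToHost (const juce::String& hostIp, int port = 9000)
    {
        return sender.connect (hostIp, port);   // plain UDP; fine over local wi-fi
    }

    // Called at some control rate while a gesture is recorded or played back.
    void sendGesture (int lane, float x, float y, float z)
    {
        juce::OSCAddressPattern address ("/gesture/" + juce::String (lane));
        sender.send (address, x, y, z);          // three floats per message
    }

private:
    juce::OSCSender sender;
};
```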

There are actually 10 lanes of gesture recording; in the video above, if you look closely, you can see them labeled D1 - D6 (the drum lanes), Bass, and S1 - S3. I don't actually use the non-drum ones in the video, but they're there and working. There are 8 banks of 8 patterns, and each pattern has its own gesture memory.
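The storage for all that is nothing exotic, by the way; it's basically nested arrays. Here's a rough sketch of the layout described above (the step count and the names are illustrative, not lifted from the actual patch):

```cpp
#include <array>
#include <vector>

// Rough sketch of the pattern memory: 8 banks of 8 patterns, each
// pattern holding 10 lanes (D1-D6, Bass, S1-S3), and each lane carrying
// its step data plus a four-bar X/Y/Z gesture recording.
struct GestureFrame { float x, y, z; };

struct Lane
{
    std::array<bool, 16> steps {};        // step on/off (step count assumed here)
    std::vector<GestureFrame> gesture;    // four bars of recorded control frames
};

struct Pattern
{
    std::array<Lane, 10> lanes;           // one gesture recorder per lane
};

struct Session
{
    std::array<std::array<Pattern, 8>, 8> banks;   // 8 banks x 8 patterns
};
```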

I could easily add enough buttons that I could control the session entirely from the app and not have a Push2 there, but there's no sense re-inventing the wheel. The purpose of the experiment was to proof-of-concept fast, intuitive real-time control of a Live session from a separate computer's touchscreen, and I'm pretty happy with things so far. The next step is to try to put together a whole song (or several songs) to perform; this isn't great for writing, as it requires a lot of prior preparation. But for performing, I think there's a lot of potential to be explored.

Side note: the two synth pads I play towards the end are both Quanta. That fairly major undertaking is reaching its final stages, and the synth is perfectly usable in a session now. So yay for that!
 
