Chris Randall: Musician, Writer, User Interface Designer, Inventor, Photographer, Complainer. Not necessarily in that order.
Tags: Touch It
June 18, 2017
by Chris Randall
There are two major side benefits of switching to JUCE for our plugin dev. The first, you've already met: AAX versions essentially for free.
The second, you're about to meet: iOS versions for moderate effort. JUCE 5 projects on OS X have two targets in addition to the bevy of plugin formats: AUv3 and Standalone. Both of these are essentially pointless on OS X, where the AUv3 is an actual step backwards, lacking all but the most basic ability to talk to the host DAW. Standalones have their purpose, but mostly as synths. A standalone effect is about as useful as... well... nothing really comes to mind. I'll have to ponder for a bit to come up with something that useless.
Switch that target from OS X to iOS, and we're on to something. AUv3 is the only audio plugin format allowed on iOS, and standalones actually have some merit. The screenshot above is Rough Rider 2 running as an AUv3 insert effect in GarageBand. These AUv3 builds will work in any host that can stomach them; right now that list is mildly limited: GarageBand, Audiobus 3, Cubasis (full version), and some others. The situation will improve quite a bit, in my opinion, when Intua drops BeatMaker 3 on July 15.
Digressions aside, the only difference between Rough Rider 2 for iOS AUv3 and Rough Rider 2 AU/AAX/VST/VST3 is some mild fiddling with the UI to get it to cooperate in that context. It will run on any device that can run iOS 9.3, which is pretty much anything from iPad 3 / iPad Mini 2 / iPhone 6 on.
Rough Rider 2 is available now in the App Store, and like any good drug dealer, we give you the first taste for free. If you run into any issues at all, don't hesitate to drop us a line.
Grind is next in line, and is currently awaiting TestFlight review so the testers can get a piece of that action, but it is pretty much done. Once that's released, we're going to turn our attention back to desktops for a bit, so we can see how things shake out. I don't want to release everything for iOS, and then find out I did something terribly wrong. But once we're sure that things generally work, we'll push out Dubstation 2 and Eos 2 in short order. I don't expect any trouble building either for iOS.
If you're an iOS musician, I'd like to hear how you feel about pricing. I'm of two minds on this; obviously, these are internally identical to the desktop plugins, and required a bit of extra work, so they should be priced accordingly. On the other hand, the iOS music ecosystem doesn't really have a place for a similar pricing model, and we're in a situation where people are expected to effectively double the price of their purchase to get a 12th format to go with the other 11 they already own.
I went through every AUv3 product I could find on the App Store, and I feel that, in general, plugins seem to be in the $5 to $10 neck of the woods. There are some outliers, but on the whole, that seems to be the case. I'm okay with this in general.
The other option would be to make it free, and have an In-App Purchase to unlock all the features. This isn't terribly complicated, but it does add some frustration to the proceedings, both on my part and on the consumer's part. So I'm less likely to look favorably on this, unless someone can offer a compelling argument in its defense.
July 23, 2016
by Chris Randall
This week's Tech Time is a more sophisticated version of something I touched on in the last Push 2 / Modular video: making polysynth patches from a mono analog synth. I show my whole workflow, from initial sample to laying it in the mix.
The channel is starting to get a little momentum; I'll do my Cranky Old Man video on Sunday.
July 10, 2016
by Chris Randall
A slightly less esoteric Tech_Talk this week: my own personal workflow for dumping Eurorack tracks to Live for further production. This is fairly basic, I think, but I do get a lot of people asking me how I do it, so I guess it's something people find interesting.
Looking for topics for next week's Tech_Talk and Weekly. Hook a brother up!
April 2, 2015
by Chris Randall
As you've no doubt seen if you follow me on Twitter or Facebook or down the street in real life or whatever, we've been busy shipping our new module, Odio, which is a little 2-in, 2-out audio interface for using iOS devices in your Eurorack kit. Entirely coincidental to this, but alarmingly convenient, Chris Carlson finally updated his excellent Borderlands granular synth for iPad to v2.0.
Never was a match more made in Heaven than Odio and Borderlands, as Marcus Fischer's post yesterday on his excellent Dust Breeding blog proved. I thought a little video demo of using Borderlands with Odio might be apropos, so after my shipping chores were done for the day, I busted the above out.
In the video, the analog synth melody is coming from my full boat of WMD/SSF modules I just got this week, the hi-hat is coming from our Mad Hatter module, and the speech synthesis is courtesy of a prototype module I'm currently researching that runs a full emulation of the TI speech synthesis chip. (In other words, those aren't samples, at least in the traditional sense of the word.) The whole mess is sequenced via Sequencer 1.
You can hear in the beginning that the synth line is passing through the iPad via the Audiobus app. After a moment, I instance Borderlands. Since Borderlands is a synth, and not a real-time effect, the audio is interrupted. I then flip to the Borderlands instance, and record a couple measures of the synth line.
After fooling with that for a bit, I then move the speech synthesizer's output from the mixer to Odio, and record a bit of it, while I'm twiddling knobs on the prototype. What happens after that should be pretty self-evident from watching the video.
Anyhow, I think the video shows how powerful the Odio/Borderlands combination is. If you'd like to see videos of other iPad-based software in use, let me know your specific requests.
February 4, 2014
by Chris Randall
My attempts at creative endeavors over the weekend were utterly and completely foiled by hardware (not software!) problems. My Maschine Studio got crashy all of a sudden, I had to whittle a new tape loop for the echo, the old Doepfer modules/Stackables problem reared its head, and basically everything was conspiring to keep me from making music.
However, creativity struck last night and I was able to pull things together on a track I've been making on and off for a couple weeks now. Just for fun, I let the GoPro Hero 3+ run while I was trying to come up with a part on the DK Synergy for this track. So the video above is actually a snapshot of my writing process, not a finished and arranged song or a real-time improv like most of my videos (although it does smell like that). When I'm doing a track with full production that isn't real-time, I like to separate the parts out in the Clip view in Live, blow up the UI so I can run it from the touchscreen, and dick around with different arrangement ideas while I'm trolling for sounds, and that's essentially what I'm doing in the above video.
Sidebar: the DK Synergy is a strange and wonderful beast, and I dearly love owning and playing it, but Jesus fuck the fan in that thing is loud. Something needs to be done about it.
I'm intrigued to learn about your writing process. Since electronic music is arguably more about sound design than songwriting, do you play parts first, then do sound programming like me? Or do you come up with cool sounds, then figure out how to use them? Or some other method entirely?