way hey and away we’ll go

I keep apologizing and making excuses for the recent foray into the waters of things of an acoustical and guitarical nature. I mean, even the title of this blogsite here says it’s about music electronica. Alas, the adventure continues and, frankly, I have a feeling that this trend will continue. In my head, the music I hear is a good mix of all of this. Guitars and overdrives, synth bleeps, delay pedals, droney sounds and field recordings, drum machines and even ukuleles, banjos and accordions. I have no idea where this will lead in the end.
But for here and now, it’s leading below the surface of the sea. This is a melody called “Hieland Laddie” that I found in a book called The Folksinger’s Wordbook, compiled by Fred and Irwin Silber. This wordbook is what it says it is: about 400 pages of lyrics from various periods and locales. The chords for Streets of Laredo were found there, in fact. Folk songs can be corny, but they can also be a rich source of melodies and ideas. Hieland Laddie is a traditional Scottish tune with about a million variations on the lyrics. When I first played it a few months ago I imagined a storyline where a woman is missing her love as he is out to sea, and stands upon a tower looking over the horizon. The song is in Dm, and the first part, Dm, Am, Gm, is melancholy and full of longing. Then the verse kicks in, all in major chords, and I imagine seeing a ship come over the horizon. “Is that the ship I wait for? Is that the ship that will carry my love back to me?” Alas, as the minor chords kick back in, of course, it’s not, and we can imagine her love is more than likely dead and at the bottom of the ocean. After standing there for years and years and waiting and waiting, she loses hope and drowns herself. Going a bit further, she’s now dead but her ghost still haunts the tower and still waits for her love.

When I was in Maine a few weeks ago, I took some field recordings at Nubble Lighthouse in York. I wasn’t sure what I was going to do with these, but it’s always good to have twenty minutes or so of waves crashing and gulls making noise in one’s archives. So tonight I’m walking the dog and listening to Ugly Casanova’s Sharpen Your Teeth and thinking about that recording of the sea. Like these things do, it just suddenly made sense to put this Hieland Laddie tune over it with plenty of reverb and whatever else I could find that would work.
The thing came together in about an hour. I recorded the guitar and laid it down in Ableton over the field recording. I added a low verby bass with the Teenage Engineering OP-1, which I’ve been playing with a lot lately, and which is really such a versatile little gizmo. I added the bass drum last, and it took some work to get it to sit in the mix nicely. As with most of the stuff I post, I see this as more or less a sketch. I’d like to work on this some more and add an accordion and some modular synth somewhere.
Enjoy the tune.

a TonePad soundtrack

There are many little matrix synthesizers for the iPhone and iPad. I have about a dozen on my iPhone, but the only one I really ever use is TonePad Pro. I spent an hour or so recording some bits and pieces a few months ago and used one of them for a soundtrack to a little movie I made testing out my GoPro camera on my bike. It doesn’t quite fit the mood of the film perfectly, but at least it’s an alternative to the thrashing or stadium anthem rock that seems to be the de facto choice. If Kraftwerk made mountain bike soundtracks, maybe it would be like this.

everything goes in your ear

While this music thing is a hobby in the strictest possible sense, I rationalize a lot of it by telling myself that I obsess over synthesizers and amps and guitars and effects pedals for the purpose of creating soundtracks. This is far more reasonable to me than imagining playing on stage and driving around the Mid-Atlantic in a van.
Every now and then I actually put this logic to use and make a soundtrack. My picture book, Everything Goes: In the Air, will be out in about three weeks and to that end I’m ramping up the publicity and teasers leading up to publication in September. I put together a video promo for the book.

In the past, most of my soundtracks have been created with various synths, either the modular or software. In this case, I used a guitar. While I have two really nice guitars and gear at the house, this soundtrack had to be done rather quickly yesterday afternoon. So I took the cheap Squier Stratocaster that I have here at the studio and plugged it directly into Ableton Live via an audio interface. I used Ableton’s Amp and Cabinet effects to get the sound I was looking for, and added some EQ Eight, compression via Audio Damage’s Rough Rider, and Uhbik-A for slight reverb on a return channel. The percussion is made up of samples from a goofy little Casio keyboard I found years ago at a garage sale, sequenced in Ableton’s Impulse.

While on vacation in Maine last week, I spent a lot of time listening to Luna’s 1994 album “Bewitched,” and the influence is, to me, definitely there. I’m rather surprised that it sounds a lot like Vampire Weekend as well.

Laredo, the Streets of

If you read this blog, I am pretty certain that you don’t tune in for the sounds of a lonely guitar on the high plains, or cowboy tunes strummed in a saloon. You’re here because, like me, you’re kind of a synth nerd and you want to read about and listen to the finer points of control voltages and weird software.
The thing is, if you read this blog, you also know that I’ve developed an affection for the esoteric instrument called “the guitar.” When I’ve had a few hours to kill in the music room, it’s more than likely been spent practicing pentatonics and navigating the fretboard rather than wiggling knobs and patching cables. This will not always be the case, I promise. Stick with me here for a bit. I’ve fallen for a diversion, but it will lead somewhere, eventually, that will come back around to beepy buzzy beats.

In addition to learning the guitar for the last eighteen months, I’ve also been trying to understand and learn about recording. I’ve been ingesting old Tape Op magazines and any information I can find on the internet about recording methods and gear. As I do this, I realize that I need to expand the purpose and mission of this website to include these newfound links and information and, well, gear. I recently installed a patchbay, I now have a decent little mixer, and I’m all set up to where it’s very easy now for me to press “record” and create some tracks. Like many music hobbyists, I find that I end up reading about this stuff more than actually using it (there are legitimate reasons for this that I’ll get into some other time), and last Sunday I decided that I need to actually play some music and actually record it.

Just a couple nights earlier I picked up a very large book at a local bookstore called The Folksinger’s Wordbook, edited by Fred and Irwin Silber. As I browsed it, I ran across The Streets of Laredo, which is an old cowboy song that I like. I believe the first version I remember hearing was by Marty Robbins, and more recently I discovered an odd cover by The Blue Aeroplanes. So here I had the lyrics as well as the basic guitar chords. True to the old saying that all you need are three chords and the truth, the chords were D7, A7 and G. That’s it.

On Sunday, I set up some mic stands, opened a few recording tracks in Ableton, and set forth. First was the acoustic guitar rhythm track, played to a click at 115 bpm. D7, A7, G. Done. I played around with some positions for the microphone, an Audio-Technica AT-2035, until I found something I liked, and I used a software compressor and EQ to get something I thought sounded good. Once this was down, I spent fifteen minutes or so recording some picking on the same acoustic guitar. Over the next hour, I recorded two or three tracks of accordion, some whistling, some singing, and various electric guitar tracks. The electric guitar was mic’d with an Electro-Voice 635, which is odd in that it seems to have a very low output.

I plan to test this specifically, because I also noticed that my Onyx 1220i mixer seems to have a wildly different gain structure than the MOTU Ultralite I use. When I had a mic plugged into the Ultralite, I left the input gain/trim at 0, which I assume is unity, and it recorded a nice, clear sound at a good level. With the Onyx mixer, I had to turn the gain up near its maximum to get a usable signal at all, and it was still significantly quieter than the Ultralite. The last little bit of the gain knob ramps the input from barely audible to overdriven. I don’t yet know enough to tell whether this is how it’s supposed to be — like they’re using different input gain methods — or whether it’s even an issue. With digital recording, I’ve read that tracking levels aren’t as vital as they are in analog, since the noise floor is so low and the sample rate is so high (48 kHz in this case). I can (and did) raise the level in mixing with no noticeable noise. So maybe it’s no big deal.
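
Just to sanity-check that logic for myself, here’s a quick back-of-the-envelope sketch in Python. The numbers are assumptions (I’m guessing 24-bit files, and the -30 dBFS peak is just a hypothetical quiet take), but it shows why boosting a clean digital track later in the mix doesn’t really cost anything:

```python
import math

# Back-of-envelope: why a quiet-but-clean digital take can be boosted in the mix.
# Assumes 24-bit linear PCM; the actual bit depth of my session isn't stated above.

def dynamic_range_db(bit_depth: int) -> float:
    """Theoretical dynamic range of linear PCM: roughly 6.02 dB per bit."""
    return 20 * math.log10(2 ** bit_depth)

bit_depth = 24
peak_dbfs = -30.0    # hypothetical track recorded well below full scale
makeup_gain = 20.0   # gain added later in the mix

noise_floor = -dynamic_range_db(bit_depth)   # about -144 dBFS
# Boosting in the mix raises the signal and its noise floor together,
# so the signal-to-noise ratio is unchanged by the boost.
snr = peak_dbfs - noise_floor

print(f"Theoretical dynamic range at {bit_depth}-bit: {dynamic_range_db(bit_depth):.1f} dB")
print(f"Signal-to-noise before and after a {makeup_gain:.0f} dB boost: {snr:.1f} dB")
```

The boost raises the recorded noise right along with the signal, so the ratio between them doesn’t change, which squares with turning those quiet tracks up in the mix and hearing no extra hiss.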

Once all the recording was done, I gave myself a break for a day and went back on Monday to listen to the tracks. It was easy to pull out the whistling, singing, and all of the electric guitar, since it was all pretty bad. The whistling and singing were terribly out of tune (I’m not good at either, though I would like to be) and the electric guitar parts were uninspired. I knew my window of time to record was running out before the house got noisy with people, and I was not into it.

In the end, what I got was a bass line of accordion with two acoustic guitar parts: the rhythm strumming and some surprising lead picking. Surprising because I don’t remember all of what I did here, and it’s all one take with no edits. It’s simple and might not warrant all 1,000 words of this post. But I’m pretty pleased, and it led to a week (so far) of thinking about how I might add the modular synth to something like this, and really giving some thought to recording techniques and some more gear that I could use (like a compressor). After mixing this, I thought of some stuff I’d like to try and some changes I’d like to make, and I plan to get back in, maybe this weekend, and visit The Streets of Laredo again.

voices for your digital lifestyle

I’m back.
The studio is hooked up, everything seems to work, and as proof I was able to take part in this week’s Disquiet Junto project. It’s the 24th assignment that Marc has sent out, and I haven’t been able to participate since about number nine.

This week’s Junto went like this:

This week’s project is about “functional music.” You will make four individual sounds that serve as alerts for digital communications. They will be in these categories:

1. email arrival
2. incoming phone call
3. new IM received
4. calendar event alert

The goal is that the four alerts will work together as a suite — that is, that they will complement each other, yet be distinct and recognizable from each other.

The term “functional music” threw me, but I went with my first intuition and made evil robot voices. The process began with recording my eleven-year-old daughter reading the four alerts into a Zoom digital recorder. I then sampled those phrases into my Teenage Engineering OP-1 and pitched them down a few steps. The OP-1 is such a nice little sampler. This was then plugged into the mixer and run through a Korg Kaoss Pad, recording a variety of effects into Wave Editor on the Mac. I wasn’t entirely thrilled with any of it, really, but when I added Sonic Charge’s Bitspeek plug-in to the vocals, it became what I heard in my head.
The alert beeps were made with Ableton’s Operator. I tried it first with a VCO on my modular synth, but the result sounded way too analog-ish. Operator is cold and digital.
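
For what it’s worth, pitching a sample down is simple math if you think of it the way a sampler does (slow the playback, lower the pitch). This little sketch isn’t anything from the OP-1 itself, just the general semitone-to-playback-rate relationship:

```python
# Minimal sketch of sampler-style pitch shifting: resample / slow down playback.
# Illustrative only; the OP-1's internals may do this differently.

def playback_ratio(semitones: float) -> float:
    """Playback-rate multiplier for a pitch shift of the given number of semitones."""
    return 2 ** (semitones / 12)

for steps in (-2, -4, -7):
    r = playback_ratio(steps)
    print(f"{steps:+d} semitones -> play back at {r:.3f}x speed "
          f"(a 1.0 s phrase stretches to {1 / r:.2f} s)")
```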

I’m aware that no one in their right mind would ever use these in their actual phone. These alerts sound pretty great but for daily use would be annoying as hell. I might install them on my iPhone for a day (anyone know how to do this?). If you’re interested in doing the same, here are the four individual 16-bit WAV files in a zip archive.

I’m writing a long post about the studio hook-up. Stay tuned.

little help?

I’m in the midst of planning a major overhaul to my little home studio. Currently the modular synth is central, plugged into my MOTU interface. Then there’s the Vox guitar amp with its pile of stompboxes on the floor in front of it. Sometimes I mic this directly into the MOTU Ultralite. Sometimes I run the guitar direct into the MOTU with a few pedals as inserts. As you might imagine, and as I mentioned in my last post, this is a huge pain in the ass and means that when I get some time in the studio I spend half of that time pulling cables and repatching, and I end up finding the paths of least resistance and only playing with what I know.
So I just got a Mackie 1220i and a patchbay, as well as a couple of mic stands and a pile of cables. I plan to patch the usual suspects to the patchbay and into the mixer. The mixer will send its main outs to the MOTU and then to the computer. However, there are enough points on the bay to go ahead and patch the MOTU’s inputs to it as well, so now and then I may just bypass the mixer. The idea here is to hook up my modular synth, the Alesis Micron, and the Stanton turntable to three of the stereo inputs to the mixer via the bay. I’ll have the remaining stereo input patched to the MOTU Ultralite’s output. The patchbay will allow me to patch in an iPod, or my little OP-1, or whatever else I need patched at any given time when I’m not using the Micron, turntable, or computer output. The four mono/preamp inputs will also be open via the patchbay for either modular synth inputs, guitar direct inputs, or mics. I plan to run the aux send/returns and inserts to the bay as well.
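
Sketched out as a rough map, the plan looks something like this. The channel numbers and groupings below are just placeholders I’m using to think it through, not the actual layout of the bay:

```python
# Rough sketch of the intended routing, not the real patchbay layout.
# Channel assignments here are hypothetical placeholders.

routing_plan = {
    "mixer stereo line inputs": {
        "5/6": "modular synth (via patchbay)",
        "7/8": "Alesis Micron (via patchbay)",
        "9/10": "Stanton turntable (via patchbay)",
        "11/12": "MOTU Ultralite main outs (computer playback)",
    },
    "mixer mono/preamp inputs": "left open on the bay for modular, guitar DI, or mics",
    "mixer main outs": "MOTU Ultralite inputs -> computer",
    "also on the bay": [
        "MOTU inputs (to bypass the mixer when needed)",
        "aux sends/returns and channel inserts",
        "iPod, OP-1, or whatever needs patching that day",
    ],
}

for section, detail in routing_plan.items():
    print(f"{section}: {detail}")
```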

So with all that in mind, anyone have any advice? I’m not currently certain whether the bay I have is normalled, half-normalled, or something else. Any advice on this part of it?
I’d also mention that if you know what’s what with balanced and unbalanced connections, and grounding issues, feel free to comment to your heart’s content.

thanks.

curly noodles


The music room in my house is currently in flux. I’ve realized that I have an ongoing frustration with the fact that when I want to do something more than merely play the guitar through the amp or poke around on the modular synth, it usually takes as long to set up the audio path as I actually have time in the studio. So I’m unplugging cables, replugging cables, setting up series of stompboxes, unraveling wires, and so on. And I’m sure that it’s related that when I get all this ready to go and sit down with the headphones on it takes another ten minutes to figure out why I’m not hearing anything (it’s usually because the audio in the MOTU Ultralite is being routed to the main outs rather than the headphone outs, or else the channel I’m using is muted).
So in hopes of fixing this and making it all a bit more fun and efficient, I’ve spent some time this last few weeks learning about things like mixers and patchbays. I recently ordered, and received this afternoon, a Mackie 1220i mixer, and an acquaintance gave me a 48-point patchbay as well as a wad of patch cables. I’ve diagrammed it all out and when I imagine being able to plug in anything to anything and inserting effects into any path, the possibilities really start getting interesting. Due to real life issues and deadlines, I’ll not get to test this theory and put it all together for about a month. But I’ll document this work and write a post or two about the process and results.
On a related note, while looking for mixers and patchbays I came across a used Zoom H4n digital recorder. This is a giant leap of an upgrade from the M-Audio Microtrack I currently use for recording duties. This beast deserves its own post, which I’ll get to at some point. The night before it arrived, however, I decided to bid good riddance to the Microtrack (it’s for sale if anyone is interested) and record some playing around with the guitar and some pedals through my Vox Night Train amp. The path here is G&L ASAT Classic -> MXR Tremolo -> Teese RMC3fl Wah pedal -> Strymon Timeline -> Strymon Blue Sky -> amp. The Microtrack sits in front of the cabinet (a 1×12 Egnater) and, as you can tell, picks up every bit of hum my system creates.

I’m just noodling here and mainly playing with the reverse mode and looping on the Timeline.

layering reality

not necessarily friends

When Marc Weidenbaum first began the Disquiet Junto project five weeks ago, my first thought was that it seemed like a good idea, but there was no way that I was going to be able to take part every week. Just due to the normal schedule of life — work, kids, partner, dog, “things to do,” friends… I couldn’t conceive of how I’d find the time every week to sit down, basically escape from weekend life and responsibilities, and make a complete track (update — I didn’t make this week’s due to the above issues).
However, a funny thing has happened. Having these projects has led to really thinking about process and workflow and goals in a way that fiddling around with gear previously never did. In my day-job I draw pictures every day, and in twenty years I’ve become a believer in deadlines. When I used to teach, I would tell students that if it weren’t for deadlines I’d never complete anything. It’s also kind of a running in-joke that a work is never “done.” Rather, one just has to find a good stopping point, and in my case the deadline is always that stopping point.
Screwing around with gear often creates interesting results, and I often post the results here on Dance Robot Dance. Quite often those results are twenty-second gems buried in eighty minutes of dreck. That signal-to-noise ratio isn’t really acceptable when one has to somehow fit it in between preparing dinner for the family, doing laundry, going to Ikea, walking the dog, and it has to be done by Monday night.
The genius of this project is that it’s an assignment. A specific goal is in mind, which in all five cases has been something I’d never have attempted on my own (field recording? me?) except for the Junto. Limitations are the key not only to the parameters of the projects, but to the workflow and process as well. I’ve written before that when staring at the sonic potential that is my studio desk, and multiplying that potential toward infinity when software is considered, the very act of beginning can be daunting. The analogy I use is Photoshop. Given a piece of paper and a pencil, one can focus on the thing one wants to draw and on that creative end. One draws a line with a goal in mind. One can erase that line, again with the goal in mind, but chances are that there won’t be a lot of wanking with the tools. When faced with a new open file in, say, Photoshop, one knows one can use any of millions of colors and a smorgasbord of tools, and, even more importantly, one can erase and undo forever, never having to commit to anything. With the aforementioned time limitations imposed by “real life,” this Disquiet Junto project just doesn’t allow for that.
So let’s review: by giving the assignment, the project takes away the lack of direction and focus inherent in sitting down and futzing with musical gear. And by requiring the piece to be done by Monday night, it takes away the possibility for indecision and mental masturbation inherent in never having to make anything permanent. For each project I’ve chosen a specific set of tools, sometimes at the beginning of a project and sometimes in the middle of the work, and really focused on what that tool does and how it contributes to what I need, which in turn gets me to discover the limitations and personalities of my neat-o tools, which leads to better tracks and more interesting results.
The time limitation also encourages one to use what one knows rather than, again, putz around for hours trying out new things. On its face this might seem like an unacceptable limitation, given the want for creativity and breaking new ground. But what it really does is take us back to that pencil-and-paper analogy. It’s easy to worry oneself into a corner with the idea that one isn’t “good enough” to record, or play live, or whatever. But when it comes down to just making a song and getting it out there, one uses what one has. Right? This plays a big part in this most recent Junto, which I’ll explain in a moment.

This Junto’s assignment was thus:

Plan: The fifth Junto project is about amplifying the inherent musicality of everyday life. Of all the Junto projects so far, this one may call for the lightest touch. Of course, achieving a light touch may require the most amount of work. The project will be accomplished by adding sounds (notes, riffs, tones, beats, noises, processing, drones, what have you) to a foundation track that consists of an original, unedited field recording.

Pre-Production: First, you will make an audio field recording from everyday life. This track will serve as the foundation for your piece. This recording can be made anywhere — on the bus, or while riding a bicycle, or sitting in a field, or waiting in the lobby of a building, or in the kitchen, wherever. There are only two rules regarding the field recording: (1) Do not include intelligible voices unless you are certain that recording people, wherever you are, is legal. (2) Do not edit the field recording, except to fade in and out to achieve the desired length. Chances are you’ll record quite a bit, and then select your favorite segment. You might even, after starting work on one foundation track, make decisions about what constitutes a good foundation and then go and make a new field recording.

Length: Keep the work to between two and five minutes.

Sensibility: In the end, the foundation field recording track should remain fairly discernible in the mix.

I happened to be walking out of a grocery store when I read this email, and since I knew I’d be looking for something to record as a foundation that had some significance, I opened FiRe on my iPhone and hit record. I recorded the drive home, and became enamored with the tick tick of the turn signal as a rhythmical base. Once I took a listen to the recording, I was bummed that it sounded awful. The internal mic of the iPhone just didn’t cut it. I don’t usually mind inherent flaws in equipment, but this had a lot of noise, was very low-level, and had a weird distortion throughout. So the next day, when I had to go to the grocery store again, this time with my 13-year-old son, I carried my little M-Audio digital recorder along for the ride. I recorded the entire trip — shopping, paying, and the drive home. But the drive home, again, with that tick tick of the turn signal, was what I fell for and ended up using.
The music I recorded was based on a D G A progression that I’d learned that week in my guitar lesson. We’re dealing with triads, and these chords use just the 1st, 2nd, and 3rd strings, with the first D chord starting on the fifth fret. It’s a simple little thing but sounds nice and worked well. After recording the first set of chords at 84 bpm (which, by the way, is the BPM of the turn signal in a 2001 Honda Civic), I just played against that in my headphones for about a half hour. Plucks, strums, rings, different settings on the amp and pedals, different patterns within the chords… just trying to get different sounds so that I could edit it all together later.
In the end, the parts I used were either straight from the guitar to the amp (a Vox Night Train) or with a Real McCoy RMC3 Teese Wahwah, set just so the filter is on a bit, which really gives this G&L ASAT a nice tone and even overdrives a little.

These are some of the guitar parts, isolated.

[audio:http://dancerobotdance.com/audio/junto05guitarsoverlay.mp3]
This first one is two overdubbed parts. I really like the overlaying.

[audio:http://dancerobotdance.com/audio/junto05guitarrock.mp3]
There are two variations on the same thing here — only the first one is in the final track.

Lastly, after playing the guitar parts, I had the inspiration to drag my accordion out of its case and see if it might work out. I’m happy to say that it worked out brilliantly. It’s no lie to say that in the year I’ve been taking guitar lessons I’ve learned more about my accordion than in the ten years previous. My accordion lessons ten years ago were about reading music and developing technique for playing. They were never really about understanding how music works and why it’s structured the way it is. That’s a topic for another post, I realize, because I could cover a lot of ground with that.

Here are the accordions near the end of the track, isolated. The very last bit you hear is some editing in Ableton to have the accordions jive with the beeping of the car when the door is opened.
[audio:http://dancerobotdance.com/audio/junto05accordions.mp3]

So here’s the finished piece.

In the end, I don’t think it comes together as well as I’d like. But that’s part of the nature of this Junto project. To me, it’s like sketching. Just get it down. Yes, I could have edited the original field recording. I could have worried about the levels differently. I could have rewritten and edited parts to make it hold together. But instead it was time to make dinner for the kids and get some work done. And move on to the next project*, having learned a lot from this one.

rose's water ice

* After all this, I didn’t get the next week’s project done. I’m way under-water with my current children’s book deadline. Number seven is due tomorrow night and I suspect I’ll be able to get to it. I hope so.

Someone Else's Remains

This is my fourth Disquiet Junto piece.
The project was to remix Marcus Fischer’s Nearly There, with tracks lent by Marcus. Most of the original sounds were created with an ebow on a lap harp, which in turn made for some nice source material, if maybe a little close in feel and timbre to the whistles and glass of the previous two weeks.
I created a four-track Ableton project, almost randomly assigned these stems of Marcus’ to three of the tracks, and then found a couple of small rhythmic parts to assign to the fourth track. I used a Novation Launchpad to sort it all out, and quickly decided that I’d “perform” this the same way that last week’s project was performed. That is, set it up, hit “record,” mess with knobs (via the OP-1, which makes a sweet MIDI controller), and then hit stop. Whatever happens is what happens. The main difference from last week is that this project would be processed entirely in software. The software I used was Uhbik’s tremolo and reverb, and Audio Damage’s Automaton on the percussive sounds (which is the source of the glitchy bzz and hiccups you can hear throughout). The OP-1 was assigned to control the four mixer faders, and again, the Launchpad launched the clips. As I mentioned last week, live stuff (not Live stuff) is new to me, and I’m interested in finding a good workflow that will allow and encourage me to play live somewhere, someday.

The track is thus:

The project raised some interesting questions for me, regarding the nature of a remix. I don’t have the headspace to explore this thoroughly right now, but I’ll see if I can get some more down before the end of the weekend. The basic idea is that there are three ways to go with a remix:

• Limiting the remix to the original tracks and sounds only. No matter what you do with the track, the use of the original sounds will capture the spirit of the original track in some way, whether intentional or not.

• Using sounds from anywhere, including the original stems or not, but paying attention to the composition of the original in order to stay true. Otherwise, what is it that makes it a remix? On some tracks, you could keep just one representative part, like a unique vocal, with everything else new and from elsewhere, and it’s still recognizable as a remix.

• The third seems to be not worrying about any of this, and just making whatever it is you want, when you happen to have been given some source material. If one is remixing a pop song, or almost any song with vocals, this still seems inherently destined to capture the spirit of the original in some way. But on a piece like Marcus’, the sounds matter less, to me, than the composition. That is, many of the stems sound like outtakes from previous Juntos, to be honest, and could possibly have come from anywhere. I’m not sure it’s the individual sounds that make Marcus’ work what it is. Maybe it is, but it’s not what I take from it. It’s not like a particular guitar part, or a vocal styling…

Again, just thinking out loud here. I’m curious about others’ feelings on this (and on the tracks I’ve been posting in general). Hit the comment button.

teenagers

I’m in the middle, or maybe the end, of a significant sell-off of my modular synth. I’ve unloaded about a third of the modules, most of them rarely used, designed for esoteric functions that at some point I thought I needed. I spent part of the proceeds on a Teenage Engineering OP-1 last week, and I now understand the hype. This is a terrific little machine. It samples, it loops, it’s a synth, it plays drums, it sends and receives MIDI, it’s got some nice sequencers for creating that MIDI… It’s actually closer in workflow to the modular than I expected, and to that end the first thing I made with it sounds more like it may have come from the modular gear than, say, something made in Ableton.
After last week’s Disquiet Junto project with the glass, I still had the glass samples living in the OP-1 as a tape recording and in the synth sampler. So before I erased these — I wanted to make something with the ukulele, which I’ll post later — I turned on the record player bit and “performed” this little piece. All in the box, using the OP-1’s effects, the tape loops, the sampler, and the “digital” synth.

I’ll write more about this device, I’m sure.