curly noodles


The music room in my house is currently in flux. I’ve realized that I have an ongoing frustration with the fact that when I want to do something more than merely play the guitar through the amp or poke around on the modular synth, it usually takes as long to set up the audio path as I actually have time in the studio. So I’m unplugging cables, replugging cables, setting up series of stompboxes, unraveling wires, and so on. And I’m sure that it’s related that when I get all this ready to go and sit down with the headphones on it takes another ten minutes to figure out why I’m not hearing anything (it’s usually because the audio in the MOTU Ultralite is being routed to the main outs rather than the headphone outs, or else the channel I’m using is muted).
So in hopes of fixing this and making it all a bit more fun and efficient, I’ve spent some time these last few weeks learning about things like mixers and patchbays. I recently ordered, and received this afternoon, a Mackie 1220i mixer, and an acquaintance gave me a 48-point patchbay as well as a wad of patch cables. I’ve diagrammed it all out, and when I imagine being able to plug anything into anything and insert effects into any path, the possibilities really start getting interesting. Due to real-life issues and deadlines, I won’t get to test this theory and put it all together for about a month. But I’ll document the work and write a post or two about the process and results.
On a related note, while looking for mixers and patchbays I came across a used Zoom H4n digital recorder. This is a giant leap of an upgrade from the M-Audio Microtrack I currently use for recording duties. This beast deserves its own post, which I’ll get to at some point. The night before it arrived, however, I decided to bid good riddance to the Microtrack (it’s for sale if anyone is interested) and record some playing around with the guitar and some pedals through my Vox Night Train amp. The path here is G&L ASAT Classic -> MXR Tremolo -> Teese RMC3fl Wah pedal -> Strymon Timeline -> Strymon Blue Sky -> amp. The Microtrack sits in front of the cabinet (a 1×12 Egnater) and, as you can tell, picks up every bit of hum my system creates.

I’m just noodling here and mainly playing with the reverse mode and looping on the Timeline.

Someone Else's Remains

This is my fourth Disquiet Junto piece.
The project was to remix Marcus Fischer’s Nearly There, with tracks lent by Marcus. Most of the original sounds were created with an ebow on a lap harp, which in turn made for some nice source material, if maybe a little close in feel and timbre to the whistles and glass of the previous two weeks.
I created a four-track Ableton project, almost randomly assigned these stems of Marcus’ to three of the tracks, and then found a couple of small rhythmic parts to assign to the fourth track. I used a Novation Launchpad to sort it all out, and quickly decided that I’d “perform” this the same way that last week’s project was performed. That is, set it up, hit “record,” mess with knobs (via the OP-1, which makes a sweet MIDI controller), and then hit stop. Whatever happens is what happens. The main difference from last week is that this project would be processed entirely in software. The software I used was Uhbik’s Tremolo and Reverb, and Audio Damage’s Automaton on the percussive sounds (which is the source of the glitchy bzz and hiccups you can hear throughout). The OP-1 was assigned to control the four mixer faders, and again, the Launchpad launched the clips. As I mentioned last week, live stuff (not Live stuff) is new to me, and I’m interested in finding a good workflow that will allow and encourage me to play live somewhere, someday.
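The controller routing here is simple in principle. Below is a minimal sketch in Python of how incoming MIDI control-change messages can become fader positions; the CC numbers and the `handle_cc` function are made up for illustration, since the actual OP-1 assignments are whatever you map them to.

```python
# Hypothetical CC assignments; the real OP-1 controller mapping is configurable
FADER_CC = {1: 0, 2: 1, 3: 2, 4: 3}  # MIDI CC number -> mixer track index

def handle_cc(faders, cc, value):
    """Scale an incoming 0-127 CC value to a 0.0-1.0 fader position."""
    if cc in FADER_CC:
        faders[FADER_CC[cc]] = value / 127.0
    return faders
```

Four knobs, four faders; clip launching from the Launchpad is the same idea, just with note messages instead of CCs.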

The track is thus:

The project raised some interesting questions for me, regarding the nature of a remix. I don’t have the headspace to explore this thoroughly right now, but I’ll see if I can get some more down before the end of the weekend. The basic idea is that there are three ways to go with a remix:

• Limiting the remix to the original tracks and sounds only. No matter what you do with the track, the use of the original sounds will capture the spirit of the original track in some way, whether intentional or not.

• Using sounds from anywhere, including the original stems or not, but paying attention to the composition of the original in order to stay true. Otherwise, what is it that makes it a remix? On some tracks, you could keep just one representative part, like a unique vocal, with everything else new and from elsewhere, and it’s still recognizable as a remix.

• The third seems to be not worrying about any of this, and just making whatever you want, given that you happen to have been given some source material. If one is remixing a pop song, or almost any song with vocals, this still seems inherently destined to capture the spirit of the original in some way. But on a piece like Marcus’, the sounds are, to me, less important than the composition. That is, many of the stems sound like outtakes from previous Juntos, to be honest, and could have come from anywhere. I’m not sure it’s the individual sounds that make Marcus’ work what it is. Maybe it is, but that’s not what I take from it. It’s not like a particular guitar part, or a vocal styling…

Again, just thinking out loud here. I’m curious about others’ feelings on this (and on the tracks I’ve been posting in general). Hit the comment button.

teenagers

I’m in the middle, or maybe the end, of a significant sell-off of my modular synth. I’ve unloaded about a third of the modules, most of them rarely used, designed for esoteric functions that at some point I thought I needed. I spent part of the proceeds on a Teenage Engineering OP-1 last week, and I now understand the hype. This is a terrific little machine. It samples, it loops, it’s a synth, it plays drums, it sends and receives MIDI, it’s got some nice sequencers for creating that MIDI… It’s actually surprisingly closer in workflow to the modular than I expected, and to that end the first thing I made with it sounds more like it may have come from the modular gear than, say, something made in Ableton.
After last week’s Disquiet Junto project with the glass, I still had the glass samples living in the OP-1 as a tape recording and in the synth sampler. So before I erased these — I wanted to make something with the ukulele, which I’ll post later — I turned on the record function and “performed” this little piece. All in the box, using the OP-1’s effects, the tape loops, sampler, and the “digital” synth.

I’ll write more about this device, I’m sure.

glass half empty, glass half full

music room tools

This is a process post about the third Disquiet Junto project, called “The Extended Glass Harp.” For this project, Marc wrote the following:

This project is in honor of Benjamin Franklin, after whose Junto Society our little group was named.

In an effort to expand and refine the glass harp, Franklin developed his own lathe-like glass harmonica, which he called the “armonica.” Marie Antoinette took lessons on it and Beethoven composed for it, but Franklin’s invention proved expensive and fragile, and it had a limited lifetime. And it may have given its frequent users lead poisoning.

You are *not* being asked to build a Franklin armonica. But like Franklin, we are going to expand on the glass harp. In our case, we are going to do so digitally.

You’re being asked to use the more common instrument, the glass harp. That involves the familiar “rubbing the top of a wine glass that has water in it” approach:

http://en.wikipedia.org/wiki/Glass_harp

The Junto assignment is to record a live performance on the glass harp, and to employ live processing in the performance. There should be no post-production. And there is no length limit for the piece, though I would suggest that anything over 15 minutes may limit the size of your potential audience.

I’ve never recorded anything live, per se, in my music room before. I use my microphones to record sounds, of course, which then get processed and played at times. But the idea of no post-production immediately created a bit of anxiety. This project was posted on Thursday evening last week, and I took the weekend to consider what I might do and how I might do it, and to run my head through various audio chains. One limitation I knew I wanted was to keep the entire project limited to hardware tools I have. That is, effects pedals, the modular synth, and my (brand new!) OP-1 synth (which I’ll post more about at a later date). The first two Junto projects were done almost entirely in the box. That is, with software, and I wanted to stay away from that for this assignment.

music room tools

I thought first about what I have that could record samples and, especially, loops. That would be my modified EHX Stereo Memory Man, my Boss RC-3 looper, the Tyme Sefari on the modular, and, after playing with it all weekend and being more than a little surprised at the capabilities of this thing, the OP-1. I decided that I’d take an hour or two on Monday, set everything up and cable it together, and press ‘record.’ I rehearsed a bit, recording the glass into the Tyme Sefari, testing the switches on the Stereo Memory Man, checking for feedback with the microphone (I ended up using headphones; if anyone knows some ways to record live without feedback problems, leave a comment!). I’d like to say that when I was ready, I started recording, but part of this project was that, never having done anything like this, I knew “ready” was relative. There was no audience, unless you count my fiancée and our dog in the bedroom next door. Nevertheless, I was nervous. I had some idea of what was going to happen, but I also knew that I’d literally play it by ear, and make a lot of decisions on the fly. That’s one thing about these Junto projects, and this one in particular. I know my gear fairly well, especially the hardware (software is infinitely more complex and, what with menus and MIDI, is often a mystery to me). But recording live like this really brings out the strengths and weaknesses, and uncovers possibilities that one might not have considered previously.

music room tools

What you hear here, then, is as follows. The microphone was connected to a mixer, with the Stereo Memory Man on an FX send channel. After beginning to record with Wave Editor on the laptop, I made the initial sound by rubbing the lip of the wine glass as I quietly switched on the looper of the Stereo Memory Man. The SMM records 30 seconds of audio, but I took just five or six seconds, as it’s hard to play a wine glass with one hand while switching on a looper with the other. You can hear the click of the looper switch on the audio, and then the loop begins. After a few seconds of this, I played the second wine glass, which had a higher pitch. This overdubs the first sound, so you can hear the changes on the loop (0:38).
At this point, I began sampling that loop to the Tyme Sefari on the modular synth. I had a button on a joystick module set up to start the recording with a gate signal. Concurrently, a four-step sequencer was affecting the sample rate of the Tyme Sefari, which changed the effective pitch of the sample as it was recorded, and changed it again as it was played back. This creates a random-sounding sequence of bloops and digital whirrs, which you can hear beginning at 1:15. The Tyme Sefari plays back this sequence for some time, through the Pittsburgh Analog Delay module, and then through a Strymon Blue Sky reverb before going into the audio interface and to Wave Editor. With slight changes to the delay times and the sample rate of the playback, small changes are introduced to the sounds for the next several minutes.
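The pitch shifting here falls straight out of the sample rate: read a buffer back faster and everything goes up, slower and it goes down. A toy sketch in Python of variable-rate playback with linear interpolation (the function name is mine; the Tyme Sefari does all this in 8-bit hardware):

```python
def play_at_rate(buf, rate):
    """Read a buffer with a variable step; rate 2.0 is an octave up, 0.5 an octave down."""
    out, pos = [], 0.0
    while pos < len(buf) - 1:
        i = int(pos)
        frac = pos - i
        # Linear interpolation between neighboring samples
        out.append(buf[i] * (1 - frac) + buf[i + 1] * frac)
        pos += rate
    return out
```

Stepping `rate` through a few values in sequence is essentially what the four-step sequencer was doing to the Sefari’s sample-rate input.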
As this played back, I removed the Stereo Memory Man from the chain and replaced it with the Teenage Engineering OP-1 synth. This thing is, again, brand new to me and I wasn’t at all sure that it would be appropriate for this project. As I spent time with it over the weekend, however, I realized that live sampling into its synth engine would work well, and that if the line-in was active, it would pass the audio through to its outputs as well. The sampled audio could then be “played” via the keyboard or, more appropriately for these purposes, one of its four sequencers. Its Pattern Sequencer was going to work best here, since it would create a very regular sequence that would repeat, and to which I could add notes as it repeated. Its output was muted as I recorded the playing of the glass again (that makes three different pitches total). It needs six seconds to fill its sample memory, and as soon as it was done I began the sequence. Starting with just one note playing on the first downbeat, I turned the volume up as the sequence went through the Tyme Sefari (but not sampled by the TS, merely passed along the dry channel). I cross-faded the random sequence from the TS with this regular sequence using the wet/dry mix on the Tyme Sefari, to the point that all you hear for the last four or so minutes is the OP-1 sequence.
At around 9:45 I began removing notes from the sequence up to the point that it was done at 11:01. It’s funny, as I thought I’d recorded maybe six or seven minutes of audio, tops. I was pretty surprised when I saw it was 11:01. It’s easy to get carried away when things are going well.

Here’s the audio.

As I said earlier, these projects are leading to new workflows and results that I would not have otherwise come across. I like the results of all three so far, and I think they’re quite a departure from most of the sounds I make and post. Looking forward to number four.
Check out the entire Disquiet Junto group on SoundCloud. There is a lot of really interesting work there.

the horn and the whistle

it was a seagull

A week ago I posted about the ice piece I did for the first Disquiet Junto assignment, and wrote that I’d post about the second assignment “tomorrow.” Tomorrow came and went and here we are. The second project for Marc Weidenbaum’s Disquiet Junto went like this:

Disquiet Junto Project #2: “Duet for Fog Horn & Train Whistle”

Instructions:
Create an original piece of music under five minutes in length utilizing just these two samples:
Fog Horn: http://www.freesound.org/people/schaarsen/sounds/69663/
Train Whistle: http://www.freesound.org/people/ecodios/sounds/119963/
You can only use those two samples, and you can do whatever you want with them.

The horn is this:
[audio:69663__schaarsen__sfx-nebelhorn.mp3]

And the whistle is this:
[audio:119963__ecodios__distant-train-1.mp3]

Unlike the previous assignment, the ice, which was like pulling teeth, this one fell together in minutes. I knew immediately I wanted to draw out that horn as a long drone. I cut a section out of the middle and looped it back to front several times, taking care that the crossing points didn’t click (which didn’t matter, really, since the reverb hides artifacts like that so well). I duplicated this track and panned the two identical tracks hard left and hard right, and dropped them each several steps in pitch. As I listened to them together, I started playing with changing the pitch of sections intermittently to see how they’d sound. I liked it very much. I wish now I’d had some method or rule for these pitch changes, but really it was just a matter of guessing and listening. It ends up sounding somewhat random, which is what I was looking for, I suppose. I thought it sounded rather orchestral, which a few commenters pointed out as well.
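The back-to-front looping trick is easy to picture in code. Here’s a rough sketch in plain Python (the function and crossfade length are my own; a DAW edit does the same thing with fade handles at each seam):

```python
def crossfade_loop(sample, n_repeats, fade_len):
    """Tile a sample end-to-front, crossfading each seam so it doesn't click."""
    out = list(sample)
    for _ in range(n_repeats - 1):
        # Blend the tail of what we have with the head of the next repeat
        seam = [out[len(out) - fade_len + i] * (1 - i / fade_len)
                + sample[i] * (i / fade_len)
                for i in range(fade_len)]
        out = out[:-fade_len] + seam + list(sample[fade_len:])
    return out
```

The linear ramp keeps the join continuous, which is why the crossing points don’t click even before reverb smears them over.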
I then layered in the whistle track, hoping the two tones would play nicely. I pitched the whistle down an octave, which was fine if a bit boring, until it got to the very end of the track, where there are two little bumps, as if the person recording the whistle touched the mic or recording device. After enjoying the percussion of the ice on the previous Disquiet piece so much, I kept driving down that road. I cut the little clicks out and deleted the rest of the whistle. The bumps became percussion, played at various speeds and rhythms. One of my favorite plug-ins, which happens to be a free one, is the delay you can hear on the percussion track. It created the feedback that ends up becoming that high static squeal at the end, with the frequency being turned down on the fade-out.
And that’s really all it took.

I completed the third Junto project last night, which was a “live” project. More on that soon. Maybe tomorrow, maybe not.

ice and fog

A break from guitars today. Back to electronic seizure music, as the fiancée describes it. This post starts out with a lot of talky talk (seizure-inducing possibly) but bear with me. It pays off.

along PA state hwy 903

Say you’re standing in the middle of a field, and you know you have to go somewhere, and you can actually go anywhere, any direction you want. It can be difficult to decide exactly what to do, right? However, if you know where you need to end up, your choices are narrowed somewhat. Furthermore, if there is a clear path drawn, or visible obstacles that would force you into choosing between two or maybe three directions, the decision is easier still. Starting a piece of music with the sheer number of tools, techniques, and possibilities that are available (and I mean readily available, not just to those who have a pile of gear) actually ends up causing problems, more often than not.

An analogy here is drawing or painting pictures. I’ll use this analogy since it’s what I do for my day job. If I sit down and I know I have to create a specific picture, my job is usually pretty simple. If I don’t know what I have to make, but I’m limited to making it with, say, one red and one blue pencil, then that limitation is itself a direction. If, on the other hand, I have unlimited tools and the freedom to draw anything I want to draw, any way I want to draw it, then I more or less freeze up and stare at walls. When I used to teach illustration and I’d introduce Photoshop to a student who’d previously been adept at making work with a small tackle-box of oils, watercolors, and pencils, that student would usually, suddenly, not know what to do. Why? Because with Photoshop, you have access to more colors, more tools, more possibilities than you could ever have dreamed of previously. As an instructor, I found myself spending a lot of time with students creating limitations. Finding destinations that made it possible to begin the journey, as it were. Later on, students graduate and limitations are forced upon them as illustrators by a particular client’s needs, by the purpose of the work being created, and by the deadline. And even now, when I carve out a day to work on “personal” work as an artist, I will usually sit at my table, staring at the proverbial blank canvas, wondering what to do next.

And it’s the same with music. When I get a few hours to hide in the little room at my house where I keep my music gear, I rarely have any direction in mind, but I do have a lot of possibilities. I could practice guitar, which I’ve been doing more than anything else lately, and try setting up a sequence of effects pedals that might set me off in some unexpected direction. I might try playing an old 45rpm record through my modular synth and see what happens. I might randomly dial in an eight-note phrase on one of my sequencers, and let it run with subtle changes to timbre or pitch, and hope it’s awesome. I might consider the amount of time I have, maybe an hour, maybe an afternoon, and think about what I can accomplish in that time. Anything that will take a lot of programming or re-patching might be off limits.

The point of all of this is that I’m creating arbitrary limitations. Building imaginary fences that allow me to focus on a smaller idea, a more manageable state of things, and maybe actually get something done. When I go back and look at my SoundCloud page, or at the work I’ve posted here on Dance Robot Dance, I can give a list, usually a long one, of limitations that were either forced upon me or that I created myself, and that led to whatever it is that worked. (To that end, by the way, I put a huge chunk of my modular synth up for sale today, hoping to narrow the possibilities down to the parts I like most and force myself to work better with less, avoiding some wall-staring. If you’re in the market for a bunch of Eurorack synth modules, drop me a line or comment and I’ll let you know what I have.)

Two weeks ago Marc Weidenbaum emailed me about a really interesting project he’s calling Disquiet Junto. Disquiet being his website, and a “junto” being the name of a society that Benjamin Franklin formed here in Philadelphia during the early 1700s as “a structured forum of mutual improvement,” as Marc described it in his initial email about the project. The idea here is that on Friday, Marc posts an idea for a piece of music, which is in fact a simple limitation, and gives until the next Monday at midnight to have the piece posted to the Disquiet Junto group on the music-sharing site SoundCloud.

I’ve done a few remix projects with Marc, which are sometimes difficult for the reasons I allude to above. Remixing a song is, to me, standing in a field and being able to go anywhere. I can do a lot of different things with a song, especially when I have several weeks in which to work on it. However, this new thing is right up my alley and I immediately signed up.

The first instructions/limitations came on January 5. “Please record the sound of an ice cube rattling in a glass, and make something of it.” The first part was easy enough. I took my little recorder down to the kitchen and recorded about six minutes of this.

[audio:http://dancerobotdance.com/audio/ice-sample.mp3]

This is when things could go awry, as the possibilities for what I could do with this audio are somewhat endless. I planned to record it through various hardware, but I only had time that Sunday night to plug the recorder into my modified EHX Stereo Memory Man w/Hazarai and see what might happen. The rest of the piece was done at my studio the next day, where I don’t have any music gear, so I was limited to software. This isn’t exactly a limitation, since between Max for Live, Reaktor, Reason, and any number of plug-ins, anything can happen. I only gave myself about a half hour, so I opened up a few trusty Reaktor ensembles and went to work. Here is a tiny piece of the resulting audio. Left to right is the EHX Stereo Memory Man pedal, a Reaktor ensemble called Resynth, and another Reaktor ensemble called SyncSkipper.

It wasn’t too hard from here to pick out parts I liked and start sequencing them in Ableton Live. It didn’t glue itself together until I located a two-note guitar thing I’d made over the weekend through the WMD Geiger Counter. This created the loop that everything else could bounce off of. The percussion is made up of small and micro-small edits from these files I created, and once it was drowned in reverb (I’ve been really reverb-happy lately) I saved and exported. Here’s what I posted to the Disquiet Junto group.

Go to the group page and listen to the other entries if you can. There’s an incredible range of what people can do with the sound of ice in a glass.

True to his word, Marc sent out a note yesterday with this week’s Junto limitation: He sent links to two samples on freesound.org, and asked the Junto to create pieces under five minutes in length, using only these two samples, but allowing anything to be done with them. This one went much quicker, and I’ll tell you all about it tomorrow.

it's just a guitar

I’m gonna talk about the guitar here, dropping the analogy from the last post. Upon further reflection, I’m not having an affair, really, and I’m not breaking up with my synth. I suppose the closer example would be that I’m perfectly happy taking on multiple, um, lovers.
What brought me to the guitar, and has kept me with it for a year now, is the same thing that moved me from software to hardware in 2009. Playing an instrument is better than playing a computer. Certainly the computer is an instrument of sorts, and that argument is an interesting one. But dude, it’s not the same. A relationship is a valid analogy, frankly. Programming software to make music is to playing a real instrument what an online affair is to a real human relationship. Simplified, easy, without the messiness, but ultimately far less satisfying as well. The modular synth hardware is, to me, somewhere in between. More tactile than a computer, obviously, but also closer to programming than playing an accordion or a guitar. I pick up a guitar and in about eleven seconds I’m playing. I turn on the modular and I consider what I plan to do, and start patching. Even with my relatively small synth, this takes some thought. It ain’t rock-n-roll.
It wasn’t too long after this guitar discovery of mine that I bought a Doepfer A119 for my synth, which is a preamp and envelope follower. With it, the modular basically becomes a large effects box and the guitar a versatile oscillator, and I can easily run the guitar through its ring modulator, filters, FM the sound, and anything else I can do with the synth. The A119 produces gates as well, so events can be triggered with each string pluck or chord strum. I’ll definitely write more about this with examples.
I also now have a growing collection of pedals. These reproduce many functions of the modular, when it comes to driving the guitar through them. For instance, I can patch up an auto-wah with a filter, a VCA, and a Maths. But as with the guitar itself, plugging straight into a pedal and then the amp is just easier and more immediate. And since much of what I like about this route is the immediacy, I have pedals. Fuzz, tremolo, chorus, vibrato, delays of course… I love ’em and pretty soon I’ll write all about ’em.

What this is leading to is that the whole thing is coming around full-circle, see. Last week I was listening to some samples of Christopher Willits’ guitar on CDM and started thinking about tools to play with loops and samples in an interesting way. I have Max for Live and Reaktor, and both provide a wealth of tools with which one can mangle and shuffle and wreck loops in fascinating ways. I decided it was a good time to spring for Audio Damage’s Automaton, mainly for the immediacy (natch) and for the pretty graphics. I’ve loved Conway’s Game of Life ever since I saw Brian Eno talk about it at a tech conference in San Francisco in 1995. It blew my mind back then, and I love how software designers have used it in sequencers in different ways (in fact, I think it’s a good subject for a future post). Reaktor has a drum machine called Newscool, one of my favorite ensembles in that package. Audio Damage went another direction, using the Game of Life not to create notes and sounds, but to eat them.
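For anyone who hasn’t run into it, the Game of Life rule set is tiny, which is part of why it makes such a tidy sequencer engine. A minimal sketch in Python; the column-as-step, row-as-voice mapping is my own illustration, not how Automaton or Newscool actually wire it up:

```python
def life_step(grid):
    """One generation of Conway's Game of Life on a small wrapped grid."""
    rows, cols = len(grid), len(grid[0])
    nxt = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            n = sum(grid[(r + dr) % rows][(c + dc) % cols]
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr, dc) != (0, 0))
            # A cell is born with exactly 3 neighbors; survives with 2 or 3
            nxt[r][c] = 1 if n == 3 or (grid[r][c] and n == 2) else 0
    return nxt

def triggers_for_step(grid, step):
    """Treat each column as a sequencer step: live cells pick which voices fire."""
    return [r for r, row in enumerate(grid) if row[step]]
```

Step the grid once per clock tick and fire a sound for each live cell in the current column; oscillators, gliders, and die-offs become evolving rhythms for free.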

It’s just a start, and a too-long one at that (more than four minutes), but below is a simple 16-note guitar loop (B-E-G#-B-E-G#-B-E, A-E-G#-A-E-G#-A-E) recorded into Ableton, and then attacked by robot monkeys. Along with Automaton, I used Max for Live’s Buffer Shuffler (trying to see how much overlapped with Audio Damage’s Replicant — the answer is a little) which is what is creating the backwards recording sounds and some of the misplaced parts of the phrase.

Marc Weidenbaum’s Disquiet picked this up this morning, which is always a bonus.

The unfortunate reality, now, is that today is Christmas Eve and the hour or so I’m taking this morning to write about this is the last hour or so I’ll have for the next week, at least, to do anything not resembling family adventures and holiday cheer. But my resolution for 2012 is twofold.
1. Make more sounds.
2. Write about it here.

Happy Holidays.
[audio:http://dancerobotdance.com/audio/automaton_drd.mp3]

the tyme sefari will eat your children


I think I may have mentioned that I recently went through a small identity crisis with my modular synth. See, one kind of bad thing about a modular is that it is never “complete.” That is, when you get a Juno or an Access Virus or an MS-20, that is your synth. Strengths, weaknesses, limitations and all. With a modular, what’s great about it is that one can add and subtract and make it bigger, and look, a new module was just released, so what the hell, I’ll buy a new case… and it never ends. Do the filters sound too “Moogy”? Okay, get one based on a different circuit. You get the idea.
So I had some neat stuff in my little kit, but it wasn’t inspiring me, and the music I was hearing didn’t sound like music I wanted to make. I’d missed the Hertz Donut since I sold it earlier this year, and the (wonderful) e350 Morphing Terrarium didn’t really get funky with the vocal sounds that I hoped it would. Basically, everything was so nice and wonderful that there wasn’t much with which I could make myself laugh. Stuff with, let’s call it, personality.

In a fit of malaise, I decided to sell off a bunch of stuff and replace it with another bunch of stuff. On the chopping block were the STG Wavefolder, the e350, a little Malekko VCA, and the Tip Top Audio Z2040 filter. I replaced the filter with a Doepfer A120 which, I feel, has a very similar 4-pole fat low-pass sound, but includes a 1v/oct CV input so that it can track better, and it is a lot cheaper (helping to fund the purchases I wanted to make). New to the system are another Hertz Donut, the Flame Talking Synth I wrote about in my last post here, and lastly but not leastly, the Harvestman Tyme Sefari.
The Tyme Sefari is a digital looper/delay/buffer thing that records audio fed into it with an 8-bit chip, and then plays that recording back in various ways. Some knobs give the user the illusion of control over these various ways, and learning to go along with the quirks of this device is the secret to getting something out of it that you would like. It’s always on the verge of sounding like an Atari trying to kill a radio, and it’s an understanding of how it works that keeps these tendencies just on the other side of that threshold.

The first hour or so I had it, I was kind of all “whaa?” and “crap” and stuff. I could kind of make out bits of what I was feeding into it, but it was mostly just crushed noise. I went away for the evening and read the internets about it, and when I came back I had a better grasp of what the hell it is. I started with some slow, simple blippy sequences fed into the input. It’s got a mix output with a knob for choosing how much of the signal is wet/dry, as well as a delay out which is 100% wet. Therefore, of course, I had to feed the wet signal to one channel (right) and the mix out, dialed to 100% dry, to the left channel. What it does is this: when the ‘record’ switch is on, it records whatever is jacked into it, fills the memory, and then plays back what it has recorded. While it’s playing that back, it’s also recording new data, so when both record and play are engaged, it’s a fairly seamless low-fidelity echo of what you’re feeding it. It’s got a loop switch which begins looping whatever is in its memory at that moment, and loop start and end knobs that change the beginning and end points of the playback (imagine you record five seconds of stuff; normally it plays starting at 0 and ends at 5, but with the knobs you can have it play only what is between 1 and 3, for instance). It also has a direction switch which reverses the playback. All of these controls can be started and stopped with gates as well.
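To make the loop start/end and direction controls concrete, here’s a toy model in Python. The class and method names are mine, and the real module is CV-controlled 8-bit hardware rather than a tidy object, but the buffer logic is the same idea:

```python
class LoopBuffer:
    """Toy model of a sampling looper: fill a buffer, then play back
    between movable start/end points, optionally in reverse."""

    def __init__(self, size):
        self.buf = [0.0] * size

    def record(self, samples):
        # Overwrite the buffer from the top, like flipping the record switch on
        for i, s in enumerate(samples[:len(self.buf)]):
            self.buf[i] = s

    def play(self, start_frac, end_frac, reverse=False):
        # Start/end knobs as fractions of the buffer: 0.2-0.6 of a
        # five-second recording plays only seconds 1 through 3
        a = int(start_frac * len(self.buf))
        b = int(end_frac * len(self.buf))
        window = self.buf[a:b]
        return window[::-1] if reverse else window
```

Narrowing the start/end window while the loop runs is where a lot of the glitchy stutter comes from: you’re carving ever-smaller slices out of the same memory.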
Most of the effects you hear on these first two tracks are modulations of the sample rate and changes to the direction of the recorded loop. The first track uses a nice sine wave from a straight analog oscillator (the Malekko “Unkle” Oscillator), which makes it really easy to hear what’s being screwed with by the Sefari. The second track is exactly the same, just replacing the osc with the Flame Talking Synth, for giggles. (This track is a prime example of what I imagine when I sit down to make “music.” I love these digital sounds.) The reverb on these tracks is from the Strymon Blue Sky pedal, which I bought a month or two back and need to write about soon. It’s a terrific reverb.

The third and fourth tracks are a bit different. They are songs from a children's album called "Happy Birthday" that I bought a while back at a Salvation Army.

Albums like this provide great source material for electronic mangling and chopping. When the songs include creepy talking teddy bears, it’s even better. The turntable is jacked directly into the audio in of the Sefari, and then recording took place and knobs were turned.

Hopefully the set-up as it is will stay for a bit, as I’m really excited about and happy with everything I’ve got in there right now.

learning to talk

I just installed a new module yesterday called the Flame Talking Synth. I'd been eyeballing this thing for some time, first being intrigued by a standalone MIDI version Flame has had out for some time, but turned off by the price tag and, well, the fact that it was MIDI-based. I love talking synth sounds and it's always fun to find ways of making stuff sound like a screwed-up robot. This Eurorack version wasn't exactly cheap either, so when it was first released a few months back I decided to get the E350 Morphing Terrarium instead, knowing it had formant sounds in its wavetables, and believing I could get something close with that. Well, the formant sounds are lovely on the E350, but it's not a screwed-up robot. So in the recent purge, where I traded out or replaced seven modules from my synth, I sold the E350 and went ahead and grabbed this Flame Talky thing.

The Flame Talking Synth is based around a digital chip called the Speakjet. It's sophisticated in interesting ways, and it's got some interesting limitations as well. The module has three modes that each produce quite different sounds. These tracks focus on "phoneme" mode and "word" mode (the third is "synth," which is not about the speech but has its own sound and nuances). "Phoneme" mode has dozens of simple speech sounds (for example, "tu," "eyrr," "uh," "aw," as well as sounds labeled things like "biological 2" and "Pistol Shot"), which you can hear quite a lot in the carnival track below. "Word" mode allows the synth to say actual words like "techno" and, yes of course, "robot." This is cool and all, but what's fun is that these words are selected using CV, so they are playable the way a note on a keyboard is playable. For instance, G2 on a keyboard would "play" the word "robot." But since using a sequencer on a modular synth like mine is not an exact science, a lot of what happens is, let's say, gibberish-like. In the carnival track you can hear a couple of spots where it leaps into word mode, but I can't understand a thing it's saying.
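The word-selection behavior is basically a lookup table indexed by voltage, the way a keyboard note selects a pitch. Here's a hypothetical sketch — the word list and voltage mapping are invented for illustration, not the Flame's actual table:

```python
# Hypothetical CV-to-word lookup (invented values, not the Flame's real
# table). At 1V/octave, each semitone is 1/12 V, so a steady note lands
# on one word slot -- but a wobbly or unquantized sequencer CV can land
# anywhere, which is where the gibberish comes from.
WORDS = ["techno", "robot", "error", "system"]  # invented word list

def word_for_cv(volts):
    semitone = round(volts * 12)        # quantize CV to the nearest semitone
    return WORDS[semitone % len(WORDS)]
```

So `word_for_cv(1/12)` picks the second word the way one semitone up picks the next note; an unquantized CV just off that voltage can flip the result entirely.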

These tracks were recorded in the first twenty minutes after I installed the module. Basically, it's random sounds created by running the Noisering and the Choices joystick into various CV inputs, controlling the pitch, the speed at which the thing "speaks," the bend of the phonemes, and the actual words and sounds it makes. It's just heaven. It came with a nice detailed manual that I've since read, and I'm looking forward to attempts to actually get it to say things, and maybe even sing.

These tracks are, as mentioned earlier, the Flame Talking Synth controlled with the Noisering, the Choices joystick, and a little bit of Pressure Points. The first track is fed directly into the also-new Pittsburgh Analog Delay module, which I'll get a little deeper into real soon. The second track is run first through a ring modulator (µMod by Intellijel) and then to the delay. The third is self-explanatory. Ha ha.

modified Stereo Memory Man with Hazarai

Last summer a friend of mine gave me an old Boss DD3 digital delay pedal. I’d been looking to add a delay module to my modular synth and this guitar pedal fit the need pretty well. After playing with it for a few weeks I was wishing that it had some way to synchronize the delays with the beat of the synth. If you’ve ever used delay plug-ins with a DAW you know what I’m talking about. Most plugs that I’ve used allow one to choose delay times in milliseconds or in times related to the beat: quarters, eighths, dotted sixteenths, triplets, etc. Having some beat-synched delay taps hopping around the track really can add a lot in the way of syncopation. Having any delay, synched or not, is great. But that extra thing is what I was looking for.
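Those beat-related delay times in DAW plug-ins are just arithmetic on the tempo. A quick sketch (the function and note names are my own, but the math is standard):

```python
# Delay times in milliseconds for beat-synced taps at a given tempo.
# One quarter note is 60,000 ms / BPM; everything else is a ratio of it.
def delay_ms(bpm, note="quarter"):
    quarter = 60_000 / bpm
    ratios = {
        "quarter": 1.0,
        "eighth": 0.5,
        "dotted_sixteenth": 0.375,      # sixteenth (0.25) * 1.5
        "triplet_eighth": 1 / 3,        # three per quarter note
    }
    return quarter * ratios[note]

# At 120 BPM a quarter note is 500 ms, an eighth 250 ms:
print(delay_ms(120), delay_ms(120, "eighth"))
```

A trigger input sidesteps this math entirely — the pedal just fires on the clock — but the ratios show what "beat-synched taps" means in milliseconds.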

I noticed that several pedals have a tap-tempo switch, which gets close but isn't quite right for the synth. Tapping tempo is perfect for a guitar player who can subtly change speed to keep time with tapping a pedal. But the timing of a synth is much more machine-like in nature and would work best with the same clock that's timing the entire patch. If you're running a sequencer, LFO, and envelope from a clock trigger, that same trigger could drive the taps of a delay and keep everything in time.

In an email to Navs, a musician in Germany, I happened to mention that a pedal with a trigger input would be a great thing. He replied with a link to a post he'd made on his own site about a year earlier. In this post, he writes about a musician, Rechner7, also in Germany, who had modified an Electro-Harmonix Stereo Memory Man with Hazarai (Hazarai is a Yiddish term meaning something along the lines of "everything and the kitchen sink"). Rechner7 had not just added a trigger jack, but he'd added three of them, with a switch to choose between the 2nd and 3rd inputs, as well as an on/off switch for the loop button which makes that particular function much easier to use. I'd never soldered a thing in my life, but onto Craigslist I went, and a week or two later had a SMMH pedal.

After studying Rechner7's photos and a few emails back and forth, I understood a bit more of what was going on. Trigger/Gate input C on his plans is always on, and there's an on/off/on switch that chooses to add input B or A to the signal at C. This allows a steady beat into C with odd or random beats into the other two inputs, which can add a lot of fun/chaos to the delay signal. The SMMH doesn't repitch when the delay is sped up or slowed down (its only weak link in my opinion), so having these two inputs is terrific for quickly adding new taps or off-beat taps. He also added a little high-pass filter circuit (found about a third of the way down on Doepfer's website here) which keeps a slow gate from inadvertently engaging the loop function. On the SMMH, the tap-tempo switch engages the loop if pushed for more than a half-second. What this means is that a long gate (half second or more) would do the same. So the high-pass filter only allows gates that are shorter. The exact length is decided by a capacitor and some math. (I apparently didn't do the math correctly because mine still slips into loop mode now and then. I need to fix this.) There's a switch that bypasses this filter for inputs B and A in case one wants to throw the thing into loop mode. Lastly, Rechner7 also suggested I add a transistor to the input circuits, which keeps unwanted voltage from traveling back to the trigger source on the modular.
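The "capacitor and some math" comes down to the familiar RC time constant: the filter only passes gates shorter than roughly τ = R·C. A rough sketch with invented component values (not the actual Doepfer circuit's parts):

```python
# Rough RC time-constant check for the gate-length filter. Component
# values are invented for illustration, not the actual Doepfer circuit.
def passes_filter(gate_len_s, r_ohms, c_farads):
    tau = r_ohms * c_farads             # time constant in seconds
    return gate_len_s < tau             # only short gates get through

# With a hypothetical 100k resistor and 2.2 uF cap, tau is 0.22 s, so a
# 0.1 s gate passes and a 0.5 s gate (which would trip loop mode) doesn't:
print(passes_filter(0.1, 100_000, 2.2e-6))
print(passes_filter(0.5, 100_000, 2.2e-6))
```

Getting a cap slightly too large would raise τ past the half-second threshold, which would explain a pedal that still slips into loop mode now and then.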

I wired this all up on a breadboard before doing any permanent damage to my new pedal, and was quite surprised when it worked. Confidence flowing, I took the step of drilling six new holes into the aluminum case of the SMMH. This was rather thrilling in a DIY sort of way. There was no going back now.

101114_smmh mod_022

It took the better part of the next day to get the wiring done and everything in place, and I'll be the first to admit that my electronics work isn't the prettiest. But the results are exactly what I wanted. The only change from Rechner7's design was that I designated the always-on jack as input 'A' rather than 'C', which just made more sense to me.

101115_smmh mod_035

Here’s a short track where the different delay timings are really apparent.

One thing I'd not considered was that when the delay lands exactly on the beat, it's not that interesting. So I find that using the Rotating Clock Divider from 4ms is necessary. A typical patch uses the /3 output from the RCD as the main clock, with the /1 and /2 outputs running into the inputs of the SMMH, giving me triggers on the eighth notes and triplets. Then I might have something more unusual running to my input C for some chaos tossed in.
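The divider arithmetic behind that patch can be sketched quickly. Assuming a steady master clock, each /N output just fires every N ticks, so using /3 as the "beat" makes /1 land three times per beat and /2 land between beats:

```python
# Trigger times (in seconds) from a clock-divider output, given the master
# clock period. This is generic divider math, not the RCD's firmware.
def trigger_times(period_s, divisor, duration_s):
    step = period_s * divisor           # /N fires once every N master ticks
    times, t = [], 0.0
    while t < duration_s:
        times.append(round(t, 3))
        t += step
    return times

# Master clock ticking every 0.2 s; /3 gives a main beat every 0.6 s,
# while /1 fires three times in that span:
print(trigger_times(0.2, 3, 1.3))
print(trigger_times(0.2, 1, 0.7))
```

Since every output is derived from the same master clock, the delay taps can't drift against the sequencer — which is the whole point of clocking the pedal instead of tapping tempo.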

Edit: I should probably mention that on the video up there, the same rather boring eight-step sequence is spit out by the synth throughout the entire video. All the syncopations and funny beats and extra rhythms are created entirely by the Stereo Memory Man being clocked by the µStep, a little trigger sequencer from Intellijel. The dry signal is on the left channel and the wet is on the right, so you can listen to just one or the other and hear the differences.

Since completing this mod, I noticed that Rechner7 had done a similar modification to an EHX Deluxe Memory Boy as well. I'd been thinking about adding an analog delay pedal to my arsenal, and found a used one a few weeks ago. About the same time, Pittsburgh Modular announced an analog delay module for Eurorack that may end up being more what I'm looking for, even without tap tempo, so I'm holding off on drilling holes into the Memory Boy in case I need to let it go.