SuperCollider-inspired web audio coding environments

SuperCollider is an audio environment that gets a lot of things right in terms of hacking around with multichannel sound, live coding and composing the different structures you need for music.

So it's no surprise that in the world of Web Audio currently being born, various people are getting inspired by SuperCollider. I've seen a few people make pure-JavaScript systems which emulate SuperCollider's language style. So here's a list:

I think there's at least one I've forgotten. Please let me know if you spot others, I'd be interested to keep tabs.

So there are obvious questions: Is this a duplication of effort? Should these people get together and hack on one system? Is any one of them better than the others? I don't know if any of them is better, but one thing I do know: it's still very early days in the world of Web Audio. (The underlying APIs aren't even implemented fully by all major browsers yet.) I'm sure some cool live coding web systems will emerge, and they may or may not be based on the older generation. But there's still plenty of room for experimentation.

Wednesday 18th December 2013 | supercollider | Permalink

Notes on how we ran the SuperCollider Symposium 2012

I've just uploaded my notes on how we ran the SuperCollider Symposium 2012 (10-page PDF). sc2012 was a great event and it was a privilege to work with so many great people in putting it together. I hope these notes are useful to future organisers, providing some detailed facts and figures to give you some idea of how we did it.

The document includes details of timing, budgeting and promotional aspects. I also include some notes about outreach, which I think is important to keep in mind. It's important for community-driven projects to bring existing people together and to attract new people - and for something like SuperCollider, which doesn't have any institution funding it and pushing it forwards, these international gatherings of users are absolutely vital on both counts. Happily, both of these aims can be achieved by putting on diverse shows featuring some of the best SuperCollider artists in the world :)

Shout outs to all the other organisers, who put loads of their own time and enthusiasm in (see the "credits" page), and hi to everyone else I met at the symposium.

(If you weren't there, see also Steve's great photos of sc2012.)

Wednesday 2nd May 2012 | supercollider | Permalink

Cross-correlation signal detection in SuperCollider

If you had to communicate a digital message by playing sound across a noisy room, how would you do it?

That's basically one of the problems signal processing engineers have worked on for decades, and there are many good ways to do it (the same principles allow mobile phones to communicate in a "noisy" radio spectrum too). One thing you can do is spread-spectrum signalling, using a particular signature spread across various different frequency bands. For example, an upwards "chirp" can be more robust than a simple bleep at a single frequency.

I just found this code example I put on the sc-users mailing list. It's quite nice - you use an upwards-chirp and a downwards-chirp, and detect which one has happened, even though we deliberately add loads of noise. (The code uses a technical trick, where we can do cross-correlation signal detection by convolution with the time-reversed signal.)

Server.default = s = Server.internal;
s.boot;
~dursamps = 16384;
~dursecs  = ~dursamps / s.sampleRate;

(
SynthDef(\chirpup, { |dur=0.1|
       var sig = SinOsc.ar(Line.ar(1000, 10000, dur, doneAction: 2));
       Out.ar(0, Pan2.ar(sig * 0.1));
}).add;
SynthDef(\chirpdn, { |dur=0.1|
       var sig = SinOsc.ar(Line.ar(10000, 1000, dur, doneAction: 2));
       Out.ar(0, Pan2.ar(sig * 0.1));
}).add;
SynthDef(\chirprecord, { |dur=0.1, in=0, buf=0|
       var sig = In.ar(in);
       RecordBuf.ar(sig, buf, loop: 0, doneAction: 2)
}).add;
)

~group_chirp    = Group.new(s);
~group_analysis = Group.after(~group_chirp);

////////////////////////////////////
// analysis

// OK now let's create a buffer, then put a frame of data in
~upbuf = Buffer.alloc(s, ~dursamps);
~dnbuf = Buffer.alloc(s, ~dursamps);
(
s.bind{
       Synth(\chirpup , [\dur, ~dursecs], ~group_chirp);
       Synth(\chirprecord, [\dur, ~dursecs, \buf, ~upbuf], ~group_analysis);
}
)
// ...wait for one to finish before doing the next:
(
s.bind{
       Synth(\chirpdn , [\dur, ~dursecs], ~group_chirp);
       Synth(\chirprecord, [\dur, ~dursecs, \buf, ~dnbuf], ~group_analysis);
}
)

// time-reverse the recorded chirps (convolving with a time-reversed signal gives us cross-correlation)
~upbuf.loadToFloatArray(action: {|data| ~upRbuf = Buffer.loadCollection(s, data.reverse)});
~dnbuf.loadToFloatArray(action: {|data| ~dnRbuf = Buffer.loadCollection(s, data.reverse)});

//////////////////////////////////////
// detection

(
SynthDef(\chirpdetector, { |in=0, upbuf=0, dnbuf=0, framesize=100, out=0|
       var sig = In.ar(in);
       var conv = [upbuf, dnbuf].collect{|buf| Convolution2.ar(sig, buf, 0, framesize) };
       // Smooth the convolution outputs into detection envelopes:
       var smooth = 0.1 * Amplitude.ar(conv, 0, 0.2);
       // Print whenever a detection envelope crosses the threshold:
       smooth[0].poll(smooth[0] > 1, "UP");
       smooth[1].poll(smooth[1] > 1, "DN");
       Out.ar(out, smooth);
}).add;
)

~detectbus = Bus.audio(s, 2);
x = Synth(\chirpdetector, [\framesize, ~dursamps, \upbuf, ~upRbuf, \dnbuf, ~dnRbuf, \out, ~detectbus], ~group_analysis);
~detectbus.scope

// Let's add some noise too, to make it a difficult task:
~noise = {WhiteNoise.ar(0.3)}.play(~group_chirp);

// Now we trigger different types of chirp and watch how the output leaps...
// RUN THIS LINE OVER AND OVER, waiting in between:
Synth([\chirpdn, \chirpup].choose, [\dur, ~dursecs], ~group_chirp);

The original discussion is here on the sc-users mailing list.

Wednesday 15th June 2011 | supercollider | Permalink

Dubstep bass in SuperCollider

Dubstep bass in SuperCollider, improvable:

//s.boot

{
    var trig, note, son, sweep;

    trig = CoinGate.kr(0.5, Impulse.kr(2));

    note = Demand.kr(trig, 0, Dseq((22,24..44).midicps.scramble, inf));

    sweep = LFSaw.ar(Demand.kr(trig, 0, Drand([1, 2, 2, 3, 4, 5, 6, 8, 16], inf))).exprange(40, 5000);

    son = LFSaw.ar(note * [0.99, 1, 1.01]).sum;
    son = LPF.ar(son, sweep);   
    son = Normalizer.ar(son);
    son = son + BPF.ar(son, 2000, 2);

    //////// special flavours:
    // hi manster
    son = Select.ar(TRand.kr(trig: trig) < 0.05, [son, HPF.ar(son, 1000) * 4]);
    // sweep manster
    son = Select.ar(TRand.kr(trig: trig) < 0.05, [son, HPF.ar(son, sweep) * 4]);
    // decimate
    son = Select.ar(TRand.kr(trig: trig) < 0.05, [son, son.round(0.1)]);

    son = (son * 5).tanh;
    son = son + GVerb.ar(son, 10, 0.1, 0.7, mul: 0.3);
    son.dup
}.play

Saturday 23rd October 2010 | supercollider | Permalink

Efficiency in SuperCollider: pausing synths

When I use SuperCollider I often create synths and when they're done I free them, but I don't often pause them. For example, if you've got a synth playing back a recorded audio Buffer, and the Buffer ends, hey ho, no problem, I can leave it running and send a trigger when I want the sound to play again. But since I'm currently working with mobile devices I need to be more careful about that kind of thing.

Here's an example of what I mean. Run this code block to define two synthdefs, a playback one and a "supervisor":

(
s.waitForBoot{
SynthDef(\supervisor, { |targetid=0, trigbus=0, rate=0.25|
    var cycle, pausegate;
    cycle = LFPulse.kr(rate);
    Out.kr(trigbus, Trig1.kr(cycle, 0));
    Pause.kr(cycle, targetid); 
}).send(s);


SynthDef(\player, { |buf=0, trigbus=0|  
    // Some meaningless extra cpu-load so we can see the difference
    var heavycruft = 400.collect{[WhiteNoise, PinkNoise, BrownNoise].choose.ar}.squared.mean;
    Out.ar(0, PlayBuf.ar(1, buf, trigger: In.kr(trigbus)) + (heavycruft * 0.00001))
}).send(s);
}
)

Now when I use the player synth like normal...

b = Buffer.read(s, "sounds/a11wlk01.wav")   
t = Bus.control(s);
x = Synth(\player, [\buf, b, \trigbus, t])

...I can see that the CPU load on my machine shoots up to 40% (because of the pointless extra load I deliberately added into that second SynthDef), and it stays there even when the Buffer ends.

(Often you'd tell your synth to free itself when the Buffer ends using doneAction:2 - that works fine, but in this case I want the synth to keep running so I can retrigger it.)

Having run the above code, now start the supervisor running:

y = Synth(\supervisor, [\targetid, x.nodeID, \trigbus, t], x, addAction: \addBefore)

This should repeatedly pause-and-restart the playback. When the sound is silent, the CPU load drops from 40% down to 0.1%. So, clearly it makes a saving!

So in order to do this I had to use Pause.kr() in the supervisor, and also to tell the supervisor which node to pause/unpause (using x.nodeID). The supervisor is also sending triggers to a control bus, and the player is grabbing those triggers to reset playback to the start.
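
(As an aside - not part of the setup above - you can also pause and resume a node manually from the language side, using the node's run message:)

x.run(false); // pause the player node from sclang (sends an /n_run message to the server)
x.run(true);  // resume it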

Friday 10th September 2010 | supercollider | Permalink

sc140: squeezing entire pieces of music into tweet-sized snippets

The New Scientist has a nice article about my sc140 project to highlight the amazing music that some people have been creating using the medium of Twitter.

This is all made possible by a rather lovely piece of software called SuperCollider, which is a programming language specialised for sound and music. The really nice thing about that is that it has the benefits of "traditional" music software on the one hand (for example, standard effects like echo and reverb are easily at hand), and also the benefits of a modern programming language on the other: it's much more flexible than traditional music software, because you can tell the computer abstract things like "play these 7 notes, then a random burst of noise, then 12 random notes of your own choosing, then repeat the whole thing 4 or 5 times". (That description doesn't sound like it'd make a great piece of music, does it? :) Well let's just call it an illustrative example.)
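
Just to illustrate, here's a rough sketch of that description in SuperCollider - the \blip and \burst SynthDefs are throwaway ones I've made up for the example:

(
SynthDef(\blip, { |freq = 440|
    var env = EnvGen.kr(Env.perc(0.01, 0.2), doneAction: 2);
    Out.ar(0, Pan2.ar(SinOsc.ar(freq) * env * 0.2))
}).add;
SynthDef(\burst, {
    var env = EnvGen.kr(Env.perc(0.01, 0.4), doneAction: 2);
    Out.ar(0, Pan2.ar(WhiteNoise.ar(0.3) * env))
}).add;
)

(
Pseq([
    Pbind(\instrument, \blip, \midinote, Pseq([60, 62, 64, 65, 67, 69, 71]), \dur, 0.25), // "play these 7 notes"
    Pbind(\instrument, \burst, \dur, Pseq([0.5])),                                        // "then a random burst of noise"
    Pbind(\instrument, \blip, \midinote, Pwhite(48, 84, 12), \dur, 0.25)                  // "then 12 random notes"
], [4, 5].choose).play;  // "repeat the whole thing 4 or 5 times"
)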

Programming languages can be a bit scary to learn at first, and I think it's perfectly reasonable that more traditional music software is still used for a lot of things. Often, music producers have a standard way of working and a standard set of effects that they pick off the shelf, twiddle some knobs, and there you go, no problem.

But as a fairly experimental musician I long ago got fed up of those restrictions and searched for something that would give me a lot more freedom. The flip-side of that freedom is that sometimes you have to spend more time building the tools you want before you use them. But I've been able to do a lot with SuperCollider, to the extent that it's now my main tool in my PhD work where I'm processing beatboxing sounds in real-time and transforming them for interactive music purposes.

The SuperCollider tweets thing is interesting and fun - 140 characters is a very strong limitation, so to create even a decent sound, let alone a whole piece of music, is quite an achievement. To manage it, SuperCollider tweeters often resort to clever little programming tricks to squeeze as much as possible into that tiny space.
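
To give a flavour (a made-up example of mine, not one of the actual sc140 entries), something like this fits in a tweet and makes an evolving texture of filtered pings:

play{RLPF.ar(Dust.ar(12),LFNoise1.kr(0.3).exprange(100,4000),0.1).dup*0.5}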

So I do worry that the sc140 project makes SuperCollider look almost completely impenetrable, these 140 characters of obscure alien code! To demonstrate that SuperCollider's not actually too scary to learn, watch this SuperCollider in 60 seconds video.

Thursday 19th November 2009 | supercollider | Permalink

Reverse-engineering the rave hoover

One of the classic rave synth sounds is the "hoover", a sort of slurry chorussy synth line like the classic Dominator by Human Resource. I decided to recreate that sound from scratch, by reverse engineering the sound and recreating it in SuperCollider. Here's how to do it:

Step 1: Listening

Analytical listening doesn't come naturally but it gets easier with practice. Listen to the target sound again and again, and try and get a feel for the different technical aspects of it. Does it sound bright or dull? Is there vibrato, tremolo, chorusing?

When I listened to the opening sound in Dominator I made the following notes:

  • There's a main sound (the slightly fuzzy chorussy synth) plus also some added bass underneath. They could be part of the same sound-maker but to me they sound like separate elements, although the pitch of the two is clearly locked together. The bass might be produced by a separate bass oscillator, or maybe just by boosting the bass frequencies in the main synth, or maybe the main synth just has some octave-doubling in it.
  • It's probably created by playing notes on a piano-style keyboard but adding a massive slur (a "portamento") to the pitch values so that they blur into each other rather than staying separate and steady. I can hear possibly a three-note descending line as being the underlying pattern?
  • What sort of oscillator (sine, square, triangle, saw) might be the basis of the main synth line? It's a buzzy sound and fairly classic analogue synth-sounding so it's probably one of those at base. My first guess was "square" but when I implemented it (described later) this wasn't sounding right, and I settled on "saw" as being much closer to the original sound. (But see the "Update" at the bottom of this article - it turns out the original synth used a hybrid of saw+square!)
  • You can hear some kind of pulsation in the sound, even when it reaches the "stable" bit of the line. The pulsation has a rate of about 3 cycles per second (3 Hz). I feel like there is vibrato happening as well as chorusing. Vibrato means directly wobbling the pitch at a slowish rate. A chorus effect (in a synth like this) causes the slow-rate pulsations by blending oscillators at slightly different frequencies: if you play an oscillator at 100 Hz and one at 103 Hz, you get a sound with a 3 Hz pulsation (== 103 - 100) as the oscillators drift in and out of phase with each other. (There's a quick sketch of this beating effect just after this list.)
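
Here's that beating effect in isolation - a minimal sketch of my own, not part of the original sound:

  // Two sines 3 Hz apart: listen for the 3 Hz pulsation as they drift in and out of phase
  x = { (SinOsc.ar(100) + SinOsc.ar(103)) * 0.1 ! 2 }.play;
  x.free;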

Step 2: Visualise pitch, spectrum, waveform

There are various programs that can visualise the contents of audio. I like Sonic Visualiser. Just download it, use it to open an audio file, and then you have lots of nice tools like spectrograms, chromagrams, spectrums, pitch trackers....

So that's what I did. By visualising the original ("time-domain") signal and its spectrogram, I could first estimate the durations of things. The vibrato in the sound was indeed about 3 Hz: 0.335 seconds per cycle in my measurement. The duration of the main line was 1.929 secs, or 0.2412 seconds per note if you divide that time into 8 equal measures. (I decided that the line was probably sequenced from a series of 8 notes, although it's hard to tell.)
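
Checking those numbers in sclang:

  0.335.reciprocal  // -> 2.985..., i.e. roughly a 3 Hz pulsation
  1.929 / 8         // -> 0.2411..., i.e. about 0.24 seconds per note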

If you find a pitch detector in the Transform menu you can get a computerised estimate of the fundamental frequency ("f0") and how it changes over time. In the following I've used two different pitch trackers (Yin, and Yin-with-fft) and completely coincidentally, the one marked in green seems to be finding the bass note while the one marked in purple seems to be finding the main synth note (with a few errors):

OK, so it's quite likely that our bass notes are around 70--80 Hz and our synth notes are around 280--300 Hz. This fits with my expectations after having some experience in this, but try synthesising specific tones for yourself if you're unsure:

  x = {SinOsc.ar(290, 0, 0.1)}.play

In the picture above, notice how both curves seem to run closely in parallel, and also, in between the repetitions they seem to do a "scoop" right down to some very low pitch indeed.

OK. Next let's look at harmonics. If you choose Pane > Melodic Range Spectrogram you get to see this frequency analysis:

The lowest, thickest, yellowest band at the bottom is the fundamental of the bassline. It matches up with the curve we got from the pitch-tracker.

If the bassline were a pure sinewave then it would have no harmonics stacked above it, but that's not the case here. The next line above is fainter, and the one above it fainter still, and these are the first two harmonics of the bass tone (having pitch f0 * 2 and f0 * 3, respectively).

Just above there we see the fundamental of the main synth sound, at about 290 Hz, which has plenty of harmonics stacked above it too.

How do we know which is a fundamental, and which harmonics belong with it? I had a bit of luck with the pitch-trackers, but it's not always that easy. The fundamental is usually the lowest, the strongest, and the harmonics have a frequency which is an integer multiple of the fundamental. In this case it becomes a little more complicated, because I've asserted that there are two different synth tones (the bass does sound separate, to me). The trace at about 290 Hz is pretty strong, making it a good candidate for being the fundamental of our main mid-range synth.

If you look at the 290ish trace you can also see that the strength of that note seems to be wibbling in and out in a very stable pattern (about 3 Hz) - in other words it looks like a pattern of blobs rather than a steady yellow curve. So there must be some kind of modulation happening. From a spectrogram I wouldn't be sure if this was due to tremolo, chorusing or vibrato. But in combination with listening, I feel pretty confident that the chorusing is doing most of that.

So what do we know so far? Looks pretty clear that we have a main synth playing notes around the 290-300 Hz mark, with chorusing and probably some vibrato, and plenty of harmonics. We also have a bass synth whose fundamental is one-quarter of that frequency, i.e. two octaves below, and which also has some harmonics.

Step 3: Trying to reconstruct

1: A Saw wave with chorusing

We want a Saw wave with chorusing at 3.87 Hz. So how do we do that? Simply take the main frequency and add/subtract multiples of 3.87 to create a set of different frequencies. The following simple SuperCollider patch does that, as well as printing the frequencies for you to see:

x = {
  var freq = 400;
  freq = freq + (3.87 * [-1, 0, 1]);
  "Frequencies: %".format(freq).postln;
  Saw.ar(freq).mean * 0.1
}.play

Listen out for the regular pulsing sound, which should be happening 3.87 times per second.

In the above we created 3 Saw oscillators, using the array expansion starting from [-1, 0, 1]. Try using [-2, -1, 0, 1, 2] to increase it to 5 different oscillators, and see how the sound changes.

This chorussy sound is going to be the basis of the main synth but it'll need some tweaking before it sounds like the Dominator...

2: Vibrato

You know, at first I was sure there was some vibrato in there. You can add vibrato easily in the above patch, by inserting this line just before the Saw line:

  freq = freq + LFPar.kr(1, 0, 10); // strong vibrato @ 1 Hz

...but I'm actually not sure if I was right, or if it's actually just chorusing that causes the wobblyness in the sound. Anyway, we will add a little bit and see how we go.

3: Portamento

The pitch needs to slide around so that when we change midinote (e.g. hit a different note on a midi keyboard), it converges to that note within 0.1 seconds or so instead of changing instantly. It also needs to slide down to zero and back up again in that characteristic way which contributes so strongly to the overall feel of the synth line. (Sliding down to zero is probably what happens when you release the key on the original synth?)

In SuperCollider there are plenty of ways of doing portamento but I haven't quite hit the perfect one yet for this sound. The standard way is to use the "lag" message, e.g.

  freq = freq.lag(0.1, 0.2);

which would apply an exponential lag so that the frequency takes 0.1 seconds to reach a new target if we're increasing the frequency, or 0.2 seconds if we're decreasing the frequency. There's also Ramp.kr() which can apply a linear lag rather than exponential. But so far, I haven't quite found a lag shape/time that seems to match what the original is doing. However, either of these two ways works fine and certainly gets us very close.

Here's a simple example of portamento:

x = {
  var freq = Duty.kr(0.3, 0, Dseq([50, 55, 56, 52, 56, 28].midicps, inf));
  //freq = freq.lag(0.4, 0.3);
  Saw.ar(freq) * 0.1
}.play

Play it once, then un-comment that middle line to activate the portamento and play it again.

4: How do we do the bass?

I wasn't sure if the bassy part was made with a simple siney bass oscillator, or with a saw wave similar to the main synth, so I had to try both out.

As a simple example, here's a saw wave with a bassline added one octave below (freq * 0.5) using a second Saw:

x = {
  var freq = Duty.kr(0.3, 0, Dseq([50, 55, 56, 52, 56, 28].midicps, inf));
  Saw.ar(freq) + Saw.ar(freq * 0.5) * 0.1
}.play

Now, change the bass oscillator from Saw to SinOsc and see what difference it makes to the sound. To my ears, one of these options kind of merges the two sounds while the other produces parallel sounds with two different characters.
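
(In case it helps, here's the SinOsc variant spelled out:)

x = {
  var freq = Duty.kr(0.3, 0, Dseq([50, 55, 56, 52, 56, 28].midicps, inf));
  Saw.ar(freq) + SinOsc.ar(freq * 0.5) * 0.1
}.play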

5: What notes do we play?

The pitch-tracker stuff from earlier can be useful in deciding which notes to play. If we have a frequency like 290 Hz, we can use that directly in SuperCollider, or we can convert it to a midi note number using 290.cpsmidi and we get an answer of about 62.
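
For instance:

  290.cpsmidi  // -> 61.78..., i.e. roughly midi note 62
  62.midicps   // -> 293.66..., the frequency of the note D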

Another nice feature in Sonic Visualiser is the Chromagram, which takes spectrogram data and warps+wraps it so that you can see how much energy there is at frequencies corresponding to traditional western musical notes:

From this image we can be quite confident that D is the note that our sustained sound reaches, during the phrase. And this matches up with what we guessed from the pitch - midi note 62 is the note named "D3".

So, with a lot of guesswork I ended up with the following list of 8 midi notes:

  [40, 67, 64, 62, 62, 62, 62, 62]

The "40" is an almost-arbitrary low note which drags the pitch down so it can slur back up again. Then we slur up to 67 ("G3"), then 64 ("E3") before our held note at 62.

Step 4: Compare your version against the original

If you put all the above components together you can create something approximating the sound at the start of the Dominator track. I did this, then I recorded the result to an audio file and loaded that into Sonic Visualiser as well. Then I could visually compare the results: Do the pitch traces look similar and curve in the same way? Do the harmonic strengths on the spectrogram seem to match up, or are some too strong? Is the overall "spectral slope" (relative strength of low vs high frequencies) OK? Does the chromagram show that I'm producing the same notes?

Step 5: Tweak, test, iterate

Of course you can then iterate this. Take your results and improve your model. The main tweaks I had to make, on top of my first attempt, were:

1: Bass not loud enough. I hadn't realised how much the bass amplitude almost dwarfs the amplitude of the main synth. Probably this is because the main synth captures my attention more easily, so in my listening I give it prominence despite it being relatively weak.

2: Choice of bass oscillator. Generating the bass sound via the Saw oscillator (the same as the main synth) just sounds totally wrong. Simply synthesising using a sine-oscillator, plus a couple of quieter sine-oscillators for the harmonics which I saw on the spectrogram, gets it much better. (This is the classic additive synthesis approach, BTW: whatever you want, add sinewaves together until you've got it... there's a tiny sketch of this just after the list.)

3: EQ. The EQ balance is slightly different - my version needs more mid-range boost. Of course, most records have EQ added to their sound during production, they don't just use whatever comes straight out of the synth, so a little bit of this tweaking was likely to be needed.

4: Vibrato too soon, too much. The vibrato was a bit heavy, so I lowered the depth of the vibrato. In particular, it sounds good on the "held" note but when applied to the changing note it just sends the pitch wibbling around in an out-of-tune-sounding way, so I needed the vibrato to only come in after the main pitch has been held steady for a while.

5: Portamento. I haven't managed to get the portamento just right yet. As described above, the pitch curve seems to happen slightly differently in the original, compared against my version. There must be some simple combination of parameters that gets it, but I haven't quite hit it yet.

6: Goes bad when goes deep. The sound was pretty good on the notes and the way it slurred downwards, but after it had slurred downwards it became a rough kind of sound completely absent from the original. How to fix? Well in the end I added a low-pass filter connected to the main frequency, such that as the frequency drops right down, this low-pass filter squashes everything until at the bottom of the trough there's no sound remaining.
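
Regarding tweak 2: here's a tiny standalone sketch of that additive bass idea (the frequencies and amplitudes are illustrative; the full SynthDef below does the properly pitch-locked version):

x = {
  var f0 = 72.5;  // roughly one-quarter of the ~290 Hz main-synth note
  SinOsc.ar(f0 * [1, 2, 3], 0, [0.2, 0.06, 0.04]).sum ! 2
}.play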

The final sound:

It isn't perfect but it's got most of the elements there.

There's some difference in the onset which I haven't yet got right: when the note kicks in and the pitch slurs up from zero, the original sound has more punch to it, as if perhaps there's some resonance happening in the filters due to the freq sweep, or perhaps just some ADSR shaping or suchlike.

(
SynthDef(\dominator, { |freq=440, amp=0.1, gate=1|
    var midfreqs, son, vibamount;

    // Portamento:
    freq = freq.lag(0.2, 0.6);
    // you could alternatively try:
    //  freq = Ramp.kr(freq, 0.2);

    // vibrato doesn't fade in until note is held:
    vibamount = EnvGen.kr(Env([0,0,1],[0.0,0.4], loopNode:1), HPZ1.kr(freq).abs).poll;
    // Vibrato (slightly complicated to allow it to fade in):
    freq = LinXFade2.kr(freq, freq * LFPar.kr(3).exprange(0.98, 1.02), vibamount * 2 - 1);

    // We want to chorus the frequencies to have a period of 0.258 seconds
    // ie freq difference is 0.258.reciprocal == 3.87
    midfreqs = freq + (3.87 * (-2 .. 2));

    // Add some drift to the frequencies so they don't sound so digitally locked in phase:
    midfreqs = midfreqs.collect{|f| f + (LFNoise1.kr(2) * 3) };

    // Now we generate the main sound via Saw oscs:
    son = Saw.ar(midfreqs).sum 
        // also add the subharmonic, the pitch-locked bass:
        + SinOsc.ar(freq * [0.25, 0.5, 0.75], 0, [1, 0.3, 0.2] * 2).sum;

    // As the pitch scoops away, we low-pass filter it to allow the sound to stop without simply gating it
    son = RLPF.ar(son, freq * if(freq < 100, 1, 32).lag(0.01));

    // Add a bit more mid-frequency emphasis to the sound
    son = son + BPF.ar(son, 1000, mul: 0.5) + BPF.ar(son, 3000, mul: 0.3);

    // This envelope mainly exists to allow the synth to free when needed:
    son = son * EnvGen.ar(Env.asr, gate, doneAction:2);

    Out.ar(0, Pan2.ar(son * amp))
}).memStore;
)

// This plays the opening sound:
p = Pmono(\dominator, \dur, 0.24, \midinote, Pseq([40, 67, 64, 62, 62, 62, 62, 62], inf)).play;
p.stop;

// And this plays the main synth line in the track:
p = Pmono(\dominator, \dur, 0.24, \midinote, Pseq([55, 52, 67, 55, 40, 55, 53, 52], inf)).play;
p.stop;

If you don't have SuperCollider... well why don't you have SuperCollider? Anyway I've uploaded a recording of the reconstructed Dominator patch. See if you can improve it, or recreate some other synth.


Update:

This article triggered various discussions online, and Wouter Snoei produced a version which sounds even nicer. He found the manual for the actual synth used, which described the unusual "pwm'ed sawtooth" waveform used in the synth (a Roland Alpha Juno 2). See the article More Dominator Deconstruction for Wouter's synth, and a graph of the unusual wave shape.

Saturday 13th June 2009 | supercollider | Permalink

SuperCollider Symposium 2009

Just back from the SuperCollider Symposium at Wesleyan. It felt like I was nonstop (did 2 talks, 2 workshops, 2 gigs! Hopefully no-one got sick of me) but there was lots of fascinating stuff to hear about and discuss. Some things:

Sciencey: Chris Kiefer's neural network classes were interesting; he described a continuous-time recurrent neural network, which is a type of neural network that can respond to a sequence of inputs with a sequence of outputs, and showed how it can be used to evolve interesting synth parameters. The Gamelan research done at Graz was also fascinating. They clearly did a lot of work, including musicological and DSP analysis but also cultural considerations: for example, in the West we may be predisposed to search for a "fundamental" in a musical note, but Gamelan tunings are pretty different and it's not clear if Indonesian ears hear it the same way - all of which means it's tricky to build a good model of the instrumental sounds that captures what it should. Anyway they showed a generative Gamelan system which certainly sounded lovely to me, although I'm in no position to judge its accuracy.

Installationy: Dan St Clair's installation was fantastic, little speakers programmed to play birdsong versions of catchy songs. He puts them in a tree and lets them blend into the environment, and then there's a strange moment when you're walking along and you hear birdsong, but then you realise it's singing "Like A Virgin". He used a harmonic analysis technique to morph actual birdsong recordings into the intended melodies. I also liked Renate Wieser's installation, which was an auditory game based on platonic ideas about the yearning for truth! Using a gamepad you do a kind of balancing act and try to lift off and ascend up a scale that passes through stages such as "economist" and "radical".

Musicky: Lots of good stuff. I'll mention Sam Pluta's video piece which was composed of short fragments of well-known films, arranged so that all the sonic-best-bits came together to produce the musical work. The timbral sequences formed from movie "silences" (not really silent) were the best bits. It was impressive enough before I looked down and noticed that Sam was controlling it live in real-time, using a nice live looping-but-not-looping technique which he described in a later talk.

Photo by hecanjog, CC-BY-NC-SA

Others have said this: the diversity was striking, not just from music to science to art, but also the diversity of musical styles and of programming styles. It seems that SuperCollider not just allows but encourages a wide variety of approaches. This can make it tricky to teach or learn SuperCollider, but ends up with a nice range of results.

Some links to further web evidence from the symposium:

Tuesday 14th April 2009 | supercollider | Permalink

YMCA (NYC) and whether it's fun to stay at the

For budget accommodation in New York, I got a tip to try the YMCA. It was $90/night for a single room, which is not cheap although it is cheap for Manhattan. But did it live up to the claims? Let's examine the evidence:

  • "You can get yourself clean" - hmm well the communcal showers on my floor were grotty and a bit smelly, so I never really felt all that clean.
  • "You can have a good meal" - yes the café served good solid food for an OK price, including good breakfast stodge such as bacon+egg buttie or "home fries" whatever they are.
  • "You can do whatever you feel" - well the use of the swimming pool and the sauna is nice, you wouldn't normally get that in a cheap place. And free wi-fi too which was handy.

I didn't sleep that great, cos the place was kind of draughty and noisy with the sound of various groups of European teenagers doing whatever they feel. But it was cheap and cheerful and not bad at all.

Tuesday 14th April 2009 | supercollider | Permalink

SuperCollider mobile device prototype interface

A demo of my prototype interface for mobile touchscreen devices to play a SuperCollider synth. The interface is made using GTK.

Friday 24th October 2008 | supercollider | Permalink

New SuperCollider UGen plugins

I've updated my SuperCollider "MCLD UGens pack" with some recent additions. Here's what's new:

  • GaussClass.kr - a Gaussian classifier
  • SOMTrain.kr - updated Self-Organising Map learner
  • FFTSlope.kr - calculate the spectral slope of a signal
  • PV_Conj - find the complex conjugate of an FFT
  • ListTrig2.kr - a variant on ListTrig, tweaked by nescivi (thanks)
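
Here's a usage sketch for one of them, from memory - treat the exact argument list as an assumption and check the helpfiles included in the pack:

b = Buffer.alloc(s, 1024);
(
{
    var chain = FFT(b, SoundIn.ar(0));
    // Assumed usage: FFTSlope.kr(chain) returns a running estimate of the spectral slope
    FFTSlope.kr(chain).poll(10, "slope");
    Silent.ar
}.play
)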

Download links: download for Mac or download for Linux.

Wednesday 20th August 2008 | supercollider | Permalink

SuperCollider on Eee

I've just managed to compile and run SuperCollider on my new Asus Eee PC (tiny little linux PC). It took a lot less hacking than I expected! :)

Photo of SC and Jack running on the Asus Eee

Now with video too

I'm hoping to write a proper little SuperCollider-on-Eee tutorial at some point, but the basic points are:

  1. I thought I might have to remove the default operating system and install a Linux of my own choosing, but the default Xandros seems to be sufficient.
  2. You need to follow the eeeuser.com instructions on "enabling repositories" which makes it easy to install various bits of software (things like jack, fftw, scons).
  3. From that point you just follow the standard instructions for installing SuperCollider on Linux

I haven't done any of the useful stuff like getting the text editor or graphical interface stuff installed btw.

UPDATE: HERE's my detailed walk-through on installing SuperCollider

Friday 25th January 2008 | supercollider | Permalink

SuperCollider Symposium 2007: Club night

The club night was a real climax of the SC Symposium - 8 very varied and all really wicked acts. Including:

  • tn8 who sang a SuperCollider love song ("I love you sooooo much more than Max"), but then she went CRAZY! Standing on a table she demanded to know if there were any Max/MSP users in the audience, zapping them with a massive water-pistol that triggered zappy sounds in SC. Then she carried on but forced SC to crash, and pushed her laptop off the table onto the floor (to screams of horror from the audience). Then (understandably) the sound cut out. Blimey.
  • Timeblind really rocked. I love the texture of his stuff - lots of beats and sounds kind of trying to push through, but none of them ever completely takes over, so it's always really rich-textured. I like the reggae and dancehall flavour too. I danced my little socks off.
  • Sam Pluta lulled us into a false sense of security for a while before bursting into a load of really delicious glitchy noise (made out of gamelan samples sped up to about a hundred times their normal speed), with really well-matched visuals.
  • redFrik's music was fun and mischievous, a perfect use of the retro 8-bit emulated sounds he was using. He was also making live improv visuals at the same time, and it almost distracted me from the music, watching him fiddle about with a video camera and his laptop.

redFrik

My set was fun to do, and went down pretty well, especially given the technical problems (caused by a severely-curtailed soundcheck). The guitar was inaudible, and some people assumed I must have been using it to control something, given that it didn't seem to be making any sound...

The final track (beatbox improv pushed through an evolving synth network) overloaded the laptop, causing the SC client to lock up entirely. It was a treat for the SC geeks who could watch the problem unfolding, and especially nice that I could continue to make music through the server despite the stuck client. I think the end result sounded pretty good actually, despite not being what I had originally intended...

Tuesday 25th September 2007 | supercollider | Permalink

SuperCollider Symposium 2007: Concerts

The concerts throughout the symposium were dead good. My #1 highlight was the performance by Electronic Hammer, a trio of two laptoppists and a percussionist. I'm a sucker for rhythmic stuff and theirs was really really well performed, including some composed pieces plus an improvisation at the end which developed quite quickly into something really cool.

Electronic Hammer, photographed by G G Karman

Also interesting was an extended-flute piece by Marko Ciciliani. I like it when people make instruments do weird things, and the flautist really did make some great and unexpected noises out of that flute.

Wavefield synthesis concert

The wavefield synthesis concert was fascinating, especially as I hadn't experienced wavefield synthesis before - the 2D spatial clarity of the sound was really impressive. The pieces were interesting and varied, although as Tom Hall knows, his piece will be remembered especially fondly because of the drunk who wandered in and started muttering and rambling, on and off. It went surprisingly well with the music, and lots of people asked Tom afterwards if he'd planned it :)

Tuesday 25th September 2007 | supercollider | Permalink

SuperCollider Symposium 2007: Developments

We got some good dev work done at the symposium, not just in the dev meetings but throughout the week in conversations and dabblings.

One of the big developments was the official announcement that MIT Press will be publishing a SuperCollider book. It's a great recognition of where SuperCollider has got to, and of course I'm really looking forward to writing my chapter...!

We also agreed to tighten things up a bit and have formal "point releases" (3.1, 3.2, etc), which will help people understand compatibility issues more clearly, etc. It also means the book can be targeted to a particular release, again giving more clarity.

Lots of other non-headline things have been agreed/fixed/developed. SC 3.1 is gonna be wicked.

Tuesday 25th September 2007 | supercollider | Permalink

SuperCollider Symposium 2007: Talks

The quality of the talks at the SC symposium this year was really impressive. Some highlights:

  • Juan S Lach's talk about dissonance curves was really illuminating, including an overview of the psychoacoustics of dissonance and some nice consideration of the interaction between instrument timbre, dissonance, and musical keys. His demos (using a couple of special SC scripts of his) were really helpful.
  • Jason Dixon talked with some insight about the dynamics of collective laptop improvisation, and some aspects of what make it go wrong or go right.
  • Nick Collins demonstrated his generative synthpop system which can generate a wide variety of extremely impressively "real-sounding" synthpop tracks, by building each of the instrument lines with a contextual awareness of what the other instruments are doing. He didn't have time to go through the detail of how it worked (looking forward to a future paper on that topic), but we were treated to a round of generative synthpop karaoke (with Takeko on vocals)! Generative synthpop karaoke!
  • Wouter Snoei described the wavefield synthesis system he'd been involved in building - including some of the tricks needed to get 8 audio servers (on 2 computers) in the exact sync required to drive the 192-speaker system.
  • Hanns Holger Rutz's "Eisenkraut" (a Java audio editor that uses SC as an extremely flexible effects processor) was really impressive, especially the interface, which looked very usable and very well thought-through.
  • Bjorn Erlach and Luc Dobereiner demonstrated Faust, a functional language for creating audio plugins. I like its approach, and the fact that it can create efficient plugins for a number of different systems, but I still haven't found a case where I want to use it. Its language is a bit alien and at present it lacks a few features that I'd be interested in (e.g. FFT stuff).

Tuesday 25th September 2007 | supercollider | Permalink

SuperCollider Symposium 2007: The ferry

I took the ferry from Britain to the Netherlands, and I'm v glad that I did. It's really relaxed, much less stressful than the airport, plus you can wander around the ship instead of being confined.

A view from the ferry

Ferry facilities are pretty tacky (the duty-free shop, the "teenager zone" which is a couple of Pacman machines, the roulette table), and I'm surprised they didn't provide internet access in the cabins (surely that's something they can beat the airlines at?), but it's still quite pleasant. The food was surprisingly good too: I had a really nice beef stir-fry, cooked to order.

Ferry reaching Holland

The ferry ticket cost me about £100 return, including the trains at each end and an overnight cabin, so that's a pretty good deal. I even got free breakfast in the morning, which I hadn't realised was included.

Tuesday 25th September 2007 | supercollider | Permalink