How to analyse pan position per frequency of your sound files

Someone on the Linux Audio Users list asked how they could analyse a load of FLAC files to work out whether it was true of their music collection that bass frequencies (below about 150 Hz, say) tend to be centre-panned. Here's my answer.

First of all: coincidentally, I know that Pedro Pestana recently published a nice analysis of exactly this phenomenon at the AES 53rd Conference. He looked at hundreds of number-one singles to determine the relationship between panning and frequency in the habits of producers/engineers for popular tracks. The paper isn't open access, unfortunately, but there you go.

So anyway, here's a Python script I just wrote: a script to analyse your audio files and plot the distribution of panning per frequency. And here's how it looks when I analyse the excellent Rumour Cubes album:

(Just to stress, this is a simple analysis. It looks only at the spectral representation of the complete mix; it doesn't infer anything clever about the component parts of the mix.)
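If you want the gist without reading the full script, here's a minimal sketch of this kind of analysis (not the exact script linked above): take the STFT of each channel of a stereo file, estimate a pan position for every time-frequency bin from the left/right magnitude balance, and plot how pan is distributed against frequency. The filename is a placeholder, and it assumes the numpy, soundfile and matplotlib libraries:

    # A rough sketch of the approach (not the exact script linked above).
    # Assumes a stereo input file; "somefile.flac" is a placeholder.
    import numpy as np
    import soundfile as sf
    import matplotlib.pyplot as plt

    FRAMELEN = 1024
    HOP = 512

    def stft(sig):
        "Magnitude spectrogram of a mono signal, shape (frames, bins)."
        win = np.hanning(FRAMELEN)
        nframes = 1 + (len(sig) - FRAMELEN) // HOP
        frames = [sig[i * HOP : i * HOP + FRAMELEN] * win for i in range(nframes)]
        return np.abs(np.fft.rfft(frames, axis=1))

    audio, samplerate = sf.read("somefile.flac")
    left, right = stft(audio[:, 0]), stft(audio[:, 1])

    # Pan estimate per bin: -1 = hard left, 0 = centre, +1 = hard right.
    # Very quiet bins are skipped, since they'd just add noise to the picture.
    total = left + right
    mask = total > (total.max() * 1e-3)
    pan = (right[mask] - left[mask]) / total[mask]

    # Each bin's frequency, repeated for every frame, then masked to match.
    freqs = np.tile(np.fft.rfftfreq(FRAMELEN, 1.0 / samplerate), (left.shape[0], 1))[mask]

    plt.hexbin(pan, freqs / 1000.0, gridsize=50, bins='log')
    plt.xlabel("Pan position (left - right)")
    plt.ylabel("Frequency (kHz)")
    plt.title("Distribution of panning per frequency")
    plt.show()

The same idea works with any per-bin pan estimator; the magnitude-balance one above is just about the simplest choice.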

See any patterns? The pattern I was looking for is a bit subtle, but it's right down at the bottom below 100 Hz (i.e. 0.1 kHz on the scale): the bass tends to "pinch in" and not get panned around so much as the other stuff.

This analysis by Daniel Jones of Radiohead's "Lotus Flower" shows the effect more clearly.

This is what's generally observed, and widely known in mixing-engineer "folklore": pan your bass to the centre, and do what you like with the rest. Not everyone agrees on the reasons: some people say it's because off-centre bass can cause the needle to skip out of the groove on vinyl records; some say it's because we can't really perceive spatialisation very well at low frequencies; some say it's just to maximise the energy in the mix. I have no comment on what the reasons might be, but it's certainly folk wisdom for various audio people, and empirically you can test it for yourself by analysing some of your music collection.
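If you want a single number rather than a plot for that test, one hypothetical quick check (continuing from the sketch above, reusing its stft() and FRAMELEN) is to compare the spread of pan positions below and above the bass cutoff:

    # Compare pan spread below vs above 150 Hz, reusing stft(), np and sf
    # from the sketch above; the filename is again a placeholder.
    audio, samplerate = sf.read("somefile.flac")
    left, right = stft(audio[:, 0]), stft(audio[:, 1])
    total = left + right
    mask = total > (total.max() * 1e-3)
    pan = (right - left) / np.where(mask, total, 1.0)

    lowbins = np.fft.rfftfreq(FRAMELEN, 1.0 / samplerate) < 150.0
    low_spread = np.std(pan[:, lowbins][mask[:, lowbins]])
    high_spread = np.std(pan[:, ~lowbins][mask[:, ~lowbins]])
    print("Pan std.dev. below 150 Hz: %.3f, above: %.3f" % (low_spread, high_spread))

If the folklore holds for a track, the first number should come out noticeably smaller than the second.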

NOTE: Code and image updated 2014-02-08, thanks to Daniel Jones (see comments below) for spotting an issue.

Friday 7th February 2014 | sound | Permalink

Embedded acoustic environments (Barry Truax)

This weekend I was at the Symposium on Acoustic Ecology. It was an interesting event, but here I just want to note one specific thing from Barry Truax, who gave a keynote as well as presenting a new composition.

Truax has a pretty nice way of talking about acoustic structure at different scales. As a composer he's been an important proponent of granular synthesis, and as a teacher his way of talking about sound meshes rather neatly with the granular approach.

One issue he brought out in his keynote is how, over the past 100 years, our ways of listening have changed, along with our sophistication as listeners. He's not just talking about professional or arty listeners, but all of us. In the past, our "acoustic environment" was pretty much synonymous with our immediate environment more generally. This (Truax argues) is one of the reasons that people in the 1910s seemed to be fooled by the sound of an opera singer on a phonograph record, a sound which to us comes across as a feeble imitation. But recording technologies have allowed us to abstract the acoustic environment from our immediate environment: we now have a facility with embedded acoustic environments that is so sophisticated as to be casual. We know how to relate to the person sitting next to us on the tube listening to headphones; we understand the voices on the radio, why they have different reverb from the room we're sitting in, and why they can't hear us; we understand what is being hinted at when the narrator in a radio play doesn't seem to be in the same room as the characters.

Later that night, at the concert, there was a great example of embedded acoustic environments. We were listening to a multi-channel electronic concert, in a huge ex-ship-building shed ("No. 3 slip") in a dockyard. This hangar allowed plenty of sound in from outdoors, and so as the music played, it was... ahem... "augmented" by various other sounds: the dockyard's big clock chiming the hours; the firework-like sounds of artillery fire in a naval training ground; and also a heavily-echoed "Call Me" by Blondie!

I don't believe any of this was deliberate ;) but it's a great example of an embedded acoustic environment - and furthermore, of the challenge it presents to electronic composers. Composers need to be aware that the environment they're constructing will usually be played back over speakers which don't form the entirety of the listeners' acoustic environment, but only a sub-system of it. (Is this challenge equivalent to a demand to always be site-specific? Not quite, but related.) Some of the composers last night, I think, did not rise to this challenge, and it showed. But some of them did. Barry Truax was premiering a new piece called "Earth and Steel", written specifically for this place, and it worked brilliantly; it was very affecting.

Sunday 10th November 2013 | sound | Permalink