Photos (c) Jan Trutzschler von Falkenstein (CC-BY-NC) · Rain Rabbit · Samuel Craven · Gregorio Karman (CC-BY-NC)

Research

I am a research fellow, conducting research into automatic analysis of bird sounds using machine learning.
 —> Click here for more about my research.

Music


Other

Older things: Evolutionary sound · MCLD software · Sponsored haircut · StepMania · Oddmusic · Knots · Tetrastar fractal generator · Cipher cracking · CV · Emancipation fanzine

Blog

This season, I'm lead organiser for two special conference sessions on machine listening for bird/animal sound: EUSIPCO 2017 in Kos, Greece, and IBAC 2017 in Haridwar, India. I'm very happy to see the diverse selection of work that has been accepted for presentation - the diversity of the research itself, yes, but also the diversity of research groups and countries from which the work comes.

The official programmes haven't been announced yet, but as a sneak preview here are the titles of the accepted submissions, so you can see just how lively this research area has become!

Accepted talks for IBAC 2017 session on "Machine Learning Methods in Bioacoustics":

A two-step bird species classification approach using silence durations in song bouts

Automated Assessment of Bird Vocalisation Activity

Deep convolutional networks for avian flight call detection

Estimating animal acoustic diversity in tropical environments using unsupervised multiresolution analysis

JSI sound: a machine-learning tool in Orange for classification of diverse biosounds

Prospecting individual discrimination of maned wolves’ barks using wavelets

Accepted papers for EUSIPCO 2017 session on "Bird Audio Signal Processing":

(This session is co-organised with Yiannis Stylianou and Herve Glotin)

Stacked Convolutional and Recurrent Neural Networks for Bird Audio Detection (preprint)

Densely Connected CNNs for Bird Audio Detection (preprint)

Classification of Bird Song Syllables Using Wigner-Ville Ambiguity Function Cross-Terms

Convolutional Recurrent Neural Networks for Bird Audio Detection (preprint)

Joint Detection and Classification Convolutional Neural Network (JDC-CNN) on Weakly Labelled Bird Audio Data (BAD)

Rapid Bird Activity Detection Using Probabilistic Sequence Kernels

Automatic Frequency Feature Extraction for Bird Species Delimitation

Two Convolutional Neural Networks for Bird Detection in Audio Signals

Masked Non-negative Matrix Factorization for Bird Detection Using Weakly Labelled Data

Archetypal Analysis Based Sparse Convex Sequence Kernel for Bird Activity Detection

Automatic Detection of Bird Species from Audio Field Recordings Using HMM-based Modelling of Frequency Tracks

Please note: this is a PREVIEW - sometimes papers get withdrawn or plans change, so these lists should be considered provisional for now.

science · Fri 16 June 2017

On Wednesday we had a "flagship seminar" from Prof Andy Hopper on Computing for the future of the planet. How can computing help in the quest for sustainability of the planet and humanity?

Lots of food for thought in the talk. I was surprised to come out with a completely different take-home message than I'd expected - and a different take-home message than I think the speaker had in mind too. I'll come back to that in a second.

Some of the themes he discussed:

Then Hopper also talked about replacing physical activities with digital ones (e.g. shopping), which led him on to the topic of the Internet, worldwide sharing of information and so on. He argued (correctly) that a lot of these developments will benefit low-income countries even though they were essentially made by and for richer countries - and also that there's nothing patronising in this: we're not "developing" other countries to be like us, we're just sharing things, and whatever innovations come out of African countries (for example) might be enabled by things like the Internet without anyone losing their own self-determination.

Hopper called this "wealth by proxy"... but it doesn't have to be as mystifying as that. It's a well-known idea called the commons.

The name "commons" originates from having a patch of land which was shared by all villagers, and that makes it a perfect term for what we're considering now. In the digital world the idea was taken up by the free software movement and open culture such as Creative Commons licensing. But it's wider than that. In computing, the commons consists of the physical fabric of the Internet, of the public standards that make the World Wide Web and other Internet actually work (http, smtp, tcp/ip), of public domain data generated by governments, of the Linux and Android operating systems, of open web browsers such as Firefox, of open collaborative knowledge-bases like Wikipedia and OpenStreetMap. It consists of projects like the Internet Archive, selflessly preserving digital content and acting as the web's long-term memory. It consists of the GPS global positioning system, invented and maintained (as are many things) by the US military, but now being complemented by Russia's GloNass and the EU's Galileo.

All of those are things you can use at no cost, and which anyone can use as bedrock for some piece of data analysis, some business idea, some piece of art, including a million opportunities for making a positive contribution to sustainability. It's an unbelievable wealth, when you stop to think about it, an amazing collection of achievements.

The surprising take-home lesson for me was: for sustainable computing for the future of the planet, we must protect and extend the digital commons. This is particularly surprising to me because the challenges here are really societal, at least as much as they are computational.

There's more we can add to the commons; and worse, the commons is often under threat of encroachment. Take the Internet and World Wide Web: it's increasingly becoming centralised under the control of a few companies (Facebook, Amazon), which is bad news generally, but also presents a practical systemic risk. This was seen recently when Amazon's AWS service suffered an outage. AWS powers so many of the commercial and non-commercial websites online that this one outage took down a massive chunk of the digital world. As another example, I recently had problems when Google's ReCAPTCHA system locked me out for a while - so many websites use ReCAPTCHA to confirm that there's a real human filling in a form that if ReCAPTCHA decides to give you the cold shoulder, you instantly lose access to a weird random sample of services, some of which may be important to you.

Another big issue is net neutrality. "Net neutrality is like free speech" and it repeatedly comes under threat.

Those examples are not green-related in themselves, but they illustrate that out of the components of the commons I've listed, the basic connectivity offered by the Internet/WWW is the thing that is, surprisingly, perhaps the flakiest and most in need of defence. Without a thriving and open internet, how do we join the dots of all the other things?

But onto the positive. What more can we add to this commons? Take the African soil-sensing example. Shouldn't the world have a free, public stream of such land-use data, for the whole planet? The question, of course, is who would pay for it. That's a social and political question. Here in the UK I can bring the question even further down to the everyday. The UK's official database of addresses (the Postcode Address File) was... ahem... sold off privately in 2013. This is a core piece of our information infrastructure, and the government - against a lot of advice - decided to sell it as part of privatisation rather than make it open. Related is the UK Land Registry data (i.e. who owns what parcel of land), which is not published as open data but is stuck behind a paywall - all very inconvenient for data analysis, investigative journalism etc.

We need to add this kind of data to the commons so that society can benefit. In green terms, geospatial data is quite clearly raw material for clever green computing of the future, to do good things like land management, intelligent routing, resource allocation, and all kinds of things I can't yet imagine.

As citizens and computerists, what can we do?

  1. We can defend the free and open internet. Defend net neutrality. Support groups like the Mozilla Foundation.
  2. Support open initiatives such as Wikipedia (and the Wikimedia Foundation), OpenStreetMap, and the Internet Archive. Join a local Missing Maps party!
  3. Demand that your government does open data, and properly. It's a win-win - forget the old mindset of "why should we give away data that we've paid for" - open data leads to widespread economic benefits, as is well documented.
  4. Work towards filling the digital commons with ace opportunities for people and for computing. For example satellite sensing, as I've mentioned. And there's going to be lots of drones buzzing around us collecting data in the coming years; let's pool that intelligence and put it to good use.

If we get this right, 20 years from now our society's computing will be green as anything, not just because it's powered by innocent renewable energy but because it can potentially be a massive net benefit - data-mining and inference to help us live well on a light footprint. To do that we need a healthy digital commons which will underpin many of the great innovations that will spring up everywhere.

IT · Fri 19 May 2017

This was gorgeous. I hadn't realised that the sweet butternut and the salty halloumi would play so well off each other.

Serves 2, takes 45 minutes overall but with a big gap in the middle.

First get the oven pre-heated to 180 C. While it's warming get the butternut ready to go in the oven. Chop it into bitesize pieces, roughly the size of 2cm cubes but no need to be exact. Then put the pieces in a roasting tin. Take the tines of rosemary off the stalk, chop them up and sprinkle them over the squash, then drizzle generously with olive oil. Chop the garlic into two pieces (no need to skin them - we're not eating them, just using them to add flavour) and place the pieces strategically among the squash. Then put this all into the oven, to roast for maybe 40 minutes.

When there's about 10 minutes left, heat up a griddle pan and a frying pan on the hob. Don't add any oil to either of the pans.

Take the asparagus stalks, toss them in olive oil and lay them on the griddle. Don't move them about.

Put the pine nuts into the hot dry frying pan. You'll want to shuffle these about for the next few minutes, watching them carefully - they need to get a bit toasty but not burn. While you're doing that you can cut the halloumi into bitesize pieces, about 2cm cube size. Turn the asparagus over to cook the other side and add the halloumi to the pan too. (I hope they fit in the pan with the asparagus...) After a couple of minutes you can turn the halloumi over.

Get the tin out of the oven a couple of minutes before you serve it. Find and discard the garlic.

To serve, place the asparagus on each plate, then next to it put the squash and the halloumi. Sprinkle the pine nuts over the squash and halloumi. Finally, squeeze a little lemon over everything.

recipes · Sun 14 May 2017

I don't always agree with The Economist magazine but it's interesting. It thinks bigger than many of the things you can buy on an average news stand. The current issue has an article about Britain and Marx, which happens to end with a clear and laudable shopping-list of things our country needs to do to ensure the health of the economy and of workers' conditions. Let me quote:

"The genius of the British system has always been to reform in order to prevent social breakdown. This means doing more than just engaging in silly gestures such as as fixing energy prices, as the Conservatives proposed this week (silly because this will suppress investment and lead eventually to higher prices).

politics · Sun 14 May 2017

People love to take the vegans down a peg or two. I guess they must unconsciously agree that the vegans are basically correct and doing the right thing, hence the defensive mud-slinging.

There's a bullshit article "Being vegan isn’t as good for humanity as you think". Like many bullshit articles, it's based on manipulating some claims from a research paper.

The point that the article is making is summarised by this quote:

"When applied to an entire global population, the vegan diet wastes available land that could otherwise feed more people. That’s because we use different kinds of land to produce different types of food, and not all diets exploit these land types equally."

This is factually correct, according to the original research paper which itself seems a decent attempt to estimate the different land requirements of different diets. The clickbaity inference, especially as stated in the headline, is that vegans are wrong. But that's where the bullshit lies.

Why? Look again at the quote. "When applied to an entire global population." Is that actually a scenario anyone expects? The whole world going vegan? In the next ten years, fifty years, a hundred? No. It's fine for the research paper to look at full veganism as a comparison against the 9 other scenarios they consider (e.g. 20% veggy, 100% veggy), but the researchers are quite clear that their model is about what a whole population eats. You can think of it as what "an average person" eats, but no, it's not what "each person should" eat.

The research concludes that a vegetarian diet is "best", judged on this specific criterion of how big a population the USA's farmland can support. And since that's for the population as a whole, and there's no chance that meat-eating will entirely leave the Western diet, a more sensible journalistic conclusion is that we should all be encouraged to be a bit more vegetarian, and the vegans should be celebrated for helping to balance out those meat-eaters.

Plus, of course, the usual conclusion: more research is needed. This research was just about land use, it didn't include considerations of CO2 emissions, welfare, social attitudes, geopolitics...

The research illustrates that the USA has more than enough land to feed its population, and that this could be boosted considerably if we all transitioned to eating a bit less meat. Towards the end of the paper, the researchers note that if the USA moved to a vegetarian diet, "the dietary changes could free up capacity to feed hundreds of millions of people around the globe."

science · Fri 05 May 2017

It's asparagus season, plus I have a half-used packet of ready-cooked chestnuts. Wait a moment - maybe those flavours can come together over a risotto. Yes they can.

Note: I would have started with some leek or onion to help get things going - if I'd had some.

Quantities are to serve 1, but scale it as you like. Took about 30 mins.

Rinse the asparagus, snip off the hardest end bits and chop the rest into bite-size pieces (about half an inch).

In a good-sized saucepan heat up 1 knob of butter. When it's melted add the rice and the asparagus and give it a good stir. Let it cook for a minute or so before you add a small cup-worth of stock and/or wine. Stir the rice gently as it absorbs the liquid. Eventually when pretty much all is absorbed add more liquid, and continue stirring. Continue this way for about 20 minutes, until all the liquid is added and the rice is approaching being nicely soft.

In a small frying pan heat up a big knob of butter. When it's melted and ready to sizzle add the halved chestnuts. Stir-fry them around for 3-5 minutes until coloured and smelling nice, then add the chestnuts and the butter to the risotto, stirring them in. Chop the parsley finely and add that too, stirring.

You'll want the chestnuts to spend about 5 minutes in the risotto to meld the flavours together. Then add a good twist of pepper, stir, and serve with plenty of shaved parmesan on top.

recipes · Wed 26 April 2017

People who do technical work with sound use spectrograms a heck of a lot. This standard way of visualising sound becomes second nature to us.

As you can see from these photos, I like to point at spectrograms all the time.

(Our research group even makes some really nice software for visualising sound which you can download for free.)

It's helpful to transform sound into something visual. You can point at it, you can discuss tiny details, etc. But sometimes, the spectrogram becomes a stand-in for listening. When we're labelling data, for example, we often listen and look at the same time. There's a rich tradition in bioacoustics of presenting and discussing spectrograms while trying to tease apart the intricacies of some bird or animal's vocal repertoire.

But there's a question of validity. If I look at two spectrograms and they look the same, does that mean the sounds actually sound the same?

In the strict sense, we already know that the answer is "no". We audio people can construct counterexamples pretty easily, in which there's a subtle audio difference that's not visually obvious (e.g. phase coherence - see this delightfully devious example by Jonathan le Roux). But it could perhaps be even worse than that: similarities might not just be made easier or harder to spot; in practice they could actually be differently arranged. If we have a particular sound X, it could audibly be more similar to A than to B, while visually it could be more similar to B than to A. If that were indeed true, we'd need to be very careful about performing tasks such as clustering sounds or labelling sounds while staring at their spectrograms.
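
To make the phase point concrete, here's a rough sketch in Python/NumPy. It isn't Le Roux's actual demo, just a generic illustration under assumed parameters (16 kHz sample rate, 30 harmonics, quadratic "Schroeder-like" phases): two harmonic complexes whose components have identical magnitudes but different phase relations. Their magnitude spectrograms come out near-identical, while their waveforms have very different temporal structure - and pairs like this can be audibly distinguishable, at least at low fundamentals.

    # Not Le Roux's demo - a generic sketch of the phase point, with
    # illustrative parameter choices throughout.
    import numpy as np

    sr, f0, n_harm = 16000, 100, 30            # sample rate, fundamental, number of harmonics
    t = np.arange(2 * sr) / sr                 # two seconds of time axis

    def harmonic_complex(phases):
        """Equal-amplitude harmonics of f0, summed with the given phases."""
        return sum(np.sin(2 * np.pi * f0 * (k + 1) * t + phases[k])
                   for k in range(n_harm)) / n_harm

    peaky = harmonic_complex(np.zeros(n_harm))                        # all phases aligned
    spread = harmonic_complex(np.pi * np.arange(n_harm)**2 / n_harm)  # quadratic ("Schroeder-like") phases

    def magspec(x, win=1024, hop=256):
        """Bare-bones STFT magnitude with a Hann window."""
        w = np.hanning(win)
        frames = [x[i:i + win] * w for i in range(0, len(x) - win, hop)]
        return np.abs(np.fft.rfft(frames, axis=1))

    a, b = magspec(peaky), magspec(spread)
    print("largest spectrogram difference (relative):", np.abs(a - b).max() / a.max())
    print("waveform crest factors:", np.abs(peaky).max() / peaky.std(),
          np.abs(spread).max() / spread.std())

The printed spectrogram difference should come out small (on the order of a few per cent), while the crest factors differ by roughly a factor of two: same picture, very different waveform.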

So - what does the research literature say? Does it give us guidance on how far we can trust our eyes as a proxy for our ears? Well, it gives us hints but so far not a complete answer. Here are a few relevant factoids which dance around the issue:

Really, what does this all tell us? It tells us that looking at spectrograms and listening to sounds differ in so many ways that we definitely shouldn't expect the fine details to match up. We can probably trust our eyes for broad-brush tasks such as labelling sounds that are quite distinct, but for the fine-grained comparisons (which we often need in research) we should definitely be careful, and use actual auditory perception as the judge when it really matters. How do we know when this is needed? In most cases, it's still a question of judgment.

My thanks go to Trevor Agus, Michael Mandel, Rob Lachlan, Anto Creo and Tony Stockman for examples quoted here, plus all the other researchers who kindly responded with suggestions.

Know any research literature on the topic? If so do email me - NB there's plenty of literature on the accuracy of looking or of listening in various situations; here the question is specifically about comparisons between the two modalities.

science · Mon 24 April 2017

My blog has been running for more than a decade, using the same cute-but-creaky old software made by my chum Sam. It was a lo-fi PHP and MySQL blog, and it did everything I needed. (Oh and it suited my stupid lo-fi blog aesthetics too, the clunky visuals are entirely my fault.)

Now, if you were starting such a project today you wouldn't use PHP and you wouldn't use MySQL (just search the web for all the rants about those technologies). But if it isn't broken, don't fix it. So it ran for 10 years. Then my annoying web provider TalkTalk messed up and lost all the databases. They lost all ten years of my articles. SO. What to do?

Well, one thing you can do is simply drop it and move on. Make a fresh start. Forget all those silly old articles. Sure. But I have archivistic tendencies. And the web's supposed to be a repository for all this crap anyway! The web's not just a medium for serving you with Facebook memes, it's meant to be a stable network of stuff. So, ideal would be to preserve the articles, and also to prevent link rot, i.e. make sure that the URLs people have been using for years will still work...

So, job number one, find your backups. Oh dear. I have a MySQL database dump from 2013. Four years out of date. And anyway, I'm not going back to MySQL and PHP, I'm going to go to something clean and modern and ideally Python-based... in other words Pelican. So even if I use that database I'm going to have to translate it. So in the end I found three different sources for all my articles:

  1. The old MySQL backup from 2013. I had to install MySQL software on my laptop (meh), load the database, and then write a script to iterate through the database entries and output them as nice markdown files.
  2. archive.org's beautiful Wayback Machine. If you haven't already given money to archive.org then please do. They're the ones making sure that all the old crap from the web 5 years ago is still preserved in some form. They're also doing all kinds of neat things like preserving old video games, masses and masses of live music recordings, and more. ... Anyway, I can find LOTS of old archived copies of my blog items. There are two problems with this though: firstly, they don't capture everything and they didn't capture the very latest items; and secondly, the material is not stored in "source" format but in its processed HTML form, i.e. the form you actually see. So to address the latter, I had to write a little regular-expression-based script to snip the right pieces out and put them into separate files (there's a sketch of that kind of script just after this list).
  3. For the very latest stuff, much of it was still in Google's web cache. If I'd thought of this earlier, I could have rescued all the latest items, since Google is, I think, the only service that crawls fast enough and widely enough to have captured all the little pages on my little site. So, just as with archive.org, I can grab the HTML files from Google and scrape the content out using regular expressions.
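
Here's that sketch. It's hypothetical in its details - the folder names and the HTML marker it matches are invented, since the real script had to match whatever my old PHP templates actually emitted - but the shape of the job is the same: read each saved page, snip out the post bodies with a regular expression, and write them to separate files.

    # Hypothetical sketch of scraping post bodies out of archived HTML pages.
    # The folder names and the wrapper markup matched here are invented.
    import re
    from pathlib import Path

    POST_RE = re.compile(r'<div class="blogpost">(.*?)</div>', re.DOTALL)

    def extract_posts(html):
        """Return the inner HTML of each blog post found in one page."""
        return POST_RE.findall(html)

    outdir = Path("rescued")
    outdir.mkdir(exist_ok=True)
    for page in sorted(Path("wayback_pages").glob("*.html")):
        html = page.read_text(encoding="utf-8", errors="replace")
        for i, body in enumerate(extract_posts(html)):
            (outdir / f"{page.stem}_{i}.html").write_text(body, encoding="utf-8")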

That got me almost everything. I think the only thing missing is one blog article from a month ago.

Next step: once you've rescued your data, build a new blog. This was easy because Pelican is really nice and well-documented too. I even recreated my silly old theme in their templating system. I thought I'd have problems configuring Pelican to reproduce my old site, but it's basically all done, even the weird stuff like my separate "recipes" page which steals one category from my blog and reformats it.

Now how to prevent linkrot? The Pelican pages have URLs like "/blog/category/science.html" instead of the old "/blog/blog.php?category=science", and if I'm moving away from PHP then I don't really want those PHP-based links to be the ones used in future. I need to catch people who are going to one of those old links and point them straight at the new URLs. The really neat thing is that I could use Pelican's templating system to output a little lookup table: a CSV file listing all the URL rewrites needed. Then I wrote a tiny little PHP script which uses that file and emits HTTP redirect messages. ... and relax. A URL like http://www.mcld.co.uk/blog/blog.php?category=science is back online.

HTTP status code 302: Found
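
For the record, the lookup-and-redirect logic is nothing clever. The real thing is a few lines of PHP; sketched in Python as a CGI script it would look something like this, where the filename "redirects.csv" and its two columns (old query string, new URL) stand in for whatever format the Pelican template actually emits:

    #!/usr/bin/env python3
    # Sketch of the redirect logic (the real script is a few lines of PHP).
    # "redirects.csv" and its two columns - old query string, new URL - stand
    # in for whatever the Pelican template actually writes out.
    import csv, os, sys

    query = os.environ.get("QUERY_STRING", "")     # e.g. "category=science"

    with open("redirects.csv", newline="") as f:
        table = {old: new for old, new in csv.reader(f)}

    if query in table:
        sys.stdout.write("Status: 302 Found\r\n")
        sys.stdout.write("Location: " + table[query] + "\r\n\r\n")
    else:
        sys.stdout.write("Status: 404 Not Found\r\n")
        sys.stdout.write("Content-Type: text/plain\r\n\r\nNot found\r\n")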

IT · Sat 22 April 2017
