[Photo of Dan]

Email: dan.stowell [aatt]
Twitter: @mclduk
Blog: here

Dan Stowell

EPSRC Research Fellow

I'm a researcher in machine listening - which means using computation to understand sound signals. Currently my research is all about bird sounds, but I have also worked on voice, music and environmental soundscapes.

I am an EPSRC research fellow based at Queen Mary University of London, which gives me five years to research "structured machine listening for soundscapes with multiple birds". I am developing automatic processes to analyse large collections of sound recordings: detecting the bird sounds they contain, how those sounds vary, how they relate to each other, and how the birds' behaviour relates to the sounds they make.
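
To give a flavour of what "detecting the bird sounds" can involve at its very simplest, here is a minimal sketch - not my actual research pipeline - that flags time frames whose energy in a typical songbird frequency band rises well above an estimated noise floor. The filename and the 2-8 kHz band are illustrative assumptions:

    # Minimal illustrative sketch: flag time frames with band-limited
    # energy well above the noise floor. Not the actual research code.
    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import spectrogram

    rate, audio = wavfile.read("recording.wav")   # hypothetical input file
    audio = audio.astype(float)
    if audio.ndim > 1:
        audio = audio.mean(axis=1)                # mix stereo down to mono

    freqs, times, spec = spectrogram(audio, fs=rate, nperseg=1024)

    band = (freqs >= 2000) & (freqs <= 8000)      # assumed songbird band
    band_energy = spec[band].sum(axis=0)

    # Simple adaptive threshold: median noise floor plus a wide margin
    threshold = np.median(band_energy) * 4.0
    detected = times[band_energy > threshold]
    print(f"{len(detected)} frames flagged as possible bird sound")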

I use various machine learning methods, including Bayesian graphical models, point process models, flow networks, random forests, feature learning, and probability hypothesis density (PHD) filters.
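
To make one of those methods a little more concrete, here is a toy sketch of a random forest classifying clips from simple spectral summary features. It runs on stand-in random data, and the feature choices and the two imaginary species labels are assumptions for illustration only, not anything from the actual research:

    # Toy illustration of one listed method (a random forest) on stand-in
    # data. Features and species labels are illustrative assumptions.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    def clip_features(spec):
        """Crude per-clip summary features from a spectrogram-like array."""
        peaks = spec.argmax(axis=0)               # peak bin per time frame
        return np.array([spec.mean(), spec.std(), peaks.mean(), peaks.std()])

    # Stand-in "spectrograms" for 200 clips of two imaginary species
    X = np.array([clip_features(rng.random((128, 50))) for _ in range(200)])
    y = rng.integers(0, 2, size=200)              # 0 = species A, 1 = species B

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    print(cross_val_score(clf, X, y, cv=5).mean())  # near chance on random data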

Publications

Full publication listing on my QMUL homepage.

In the media

Science: "Computer becomes bird enthusiast"

BBC: "Software can decode bird songs"

BBC Radio 4: Featured on The World Tonight talking about automatic bird identification and Warblr

RTE Radio 1: Conversation about automatic birdsong identification on The Mooney Show: MP3 link

Collaborators

I'm pleased to be working with some great people across different research fields. This includes researchers in my home group, the Centre for Digital Music, plus QMUL zoologist colleagues including the David Clayton lab and Alan McElligott.

Outside QMUL, collaborators include Richard Turner (Cambridge, UK), Thierry Aubin (Univ Paris Sud, France), Marc Naguib (Wageningen UR, Netherlands), Manfred Gahr and Lisa Gill (Max Planck Institute for Ornithology, Germany).

Why does it matter?

What's the point of analysing bird sounds? Well...

One surprising fact about birdsong is that it has a lot in common with human language, even though it evolved separately. Many songbirds, as they grow up, go through stages of vocal learning similar to the stages we go through. And each species is slightly different, which is useful for comparing and contrasting. So biologists are keen to study songbird learning processes - not only to understand more about how human language evolved, but also to understand more about social organisation in animal groups, and so on. I'm not a biologist, but I'm going to be collaborating with some great people to help improve the automatic sound analysis in their toolkit - for example, by analysing much larger audio collections than they could possibly analyse by hand.

Bird population and migration monitoring is also important. UK farmland bird populations have declined by 50% since the 1970s, and woodland birds by 20% (source). We have great organisations such as the BTO and the RSPB, which coordinate professionals and amateurs to monitor bird populations each year. If we can add improved automatic sound recognition to that effort, we can add more detail to the monitoring. For example, many birds are shifting location year on year in response to climate change (source) - that's the kind of pattern you can detect better when you have more data and better analysis.

Sound is fascinating, and still surprisingly difficult to analyse. What is it that makes one sound similar to another sound? Why can't we search for sounds as easily as we can for words? There's still a lot that we haven't sorted out in our scientific and engineering understanding of audio. Shazam works well for music recordings, but don't be lulled into a false sense of security by that! There's still a long way to go in this research topic before computers can answer all of our questions about sounds.