I'm a researcher in machine listening - which means using computation to understand sound signals. Currently my research is all about bird sounds, but I have also worked on voice, music and environmental soundscapes.
I am an EPSRC research fellow based at Queen Mary University of London, giving me five years to research "structured machine listening for soundscapes with multiple birds". I am developing automatic processes to analyse large collections of sound recordings - detecting the bird sounds they contain, how those sounds vary, how they relate to each other, and how the birds' behaviour relates to the sounds they make.
- D. Stowell and D. Clayton, Acoustic event detection for multiple overlapping similar sources. Proceedings of IEEE WASPAA, 2015.
- D. Stowell, D. Giannoulis, E. Benetos, M. Lagrange and M. D. Plumbley, Detection and Classification of Audio Scenes and Events. IEEE Transactions on Multimedia 17(10), 1733-1746, 2015.
- D. Stowell and M. D. Plumbley, Automatic large-scale classification of bird sounds is strongly improved by unsupervised feature learning. PeerJ 2:e488, 2014.
- D. Stowell and M. D. Plumbley, Large-scale analysis of frequency modulation in birdsong databases. Methods in Ecology and Evolution, 2014.
- D. Stowell and M. D. Plumbley, Segregating event streams and noise with a Markov renewal process model. Journal of Machine Learning Research 14, 1891-1916, 2013.
Full publication listing on my QMUL homepage.
In the media
Science: "Computer becomes bird enthusiast"
RTE Radio 1: Conversation about automatic birdsong identification on The Mooney Show: MP3 link
PhD students
- Veronica Morfi: "Machine transcription of wildlife bird sound scenes"
- Pablo Alvarado Duran: "Physically and Musically Inspired Probabilistic Models for Audio Content Analysis"
I'm pleased to be working with some great people across different research fields. This includes researchers in my home group, the Centre for Digital Music, plus QMUL zoologist colleagues including the David Clayton lab and Alan McElligott.
Why does it matter?
What's the point of analysing bird sounds? Well...
One surprising fact about birdsong is that it has a lot in common with human language, even though it evolved separately. Many songbirds go through stages of vocal learning similar to ours as they grow up. And each species is slightly different, which is useful for comparing and contrasting. So, biologists are keen to study songbird learning processes - not only to understand more about how human language evolved, but also to understand more about social organisation in animal groups, and so on. I'm not a biologist, but I'm going to be collaborating with some great people to help improve the automatic sound analysis in their toolkit - for example, by analysing much larger audio collections than they could possibly analyse by hand.
Bird population/migration monitoring is also important. UK farmland bird populations have declined by 50% since the 1970s, and woodland birds by 20% (source). We have great organisations such as the BTO and the RSPB, who organise professionals and amateurs to help monitor bird populations each year. If we can add improved automatic sound recognition to those efforts, we can bring more detail to this monitoring. For example, many birds are shifting location year-on-year in response to climate change (source) - that's the kind of pattern you can detect better when you have more data and better analysis.
Sound is fascinating, and still surprisingly difficult to analyse. What is it that makes one sound similar to another sound? Why can't we search for sounds as easily as we can for words? There's still a lot that we haven't sorted out in our scientific and engineering understanding of audio. Shazam works well for music recordings, but don't be lulled into a false sense of security by that! There's still a long way to go in this research topic before computers can answer all of our questions about sounds.
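To make the "what makes one sound similar to another?" question a little more concrete: one very simple (and very incomplete) notion of similarity compares the average spectral shape of two recordings. The sketch below is purely illustrative - synthetic tones and plain NumPy, not a method from my research - but it shows both why such measures are easy to start and hard to finish: a slightly detuned tone scores as near-identical, while the measure ignores timing, pitch contour and everything else that matters for birdsong.

```python
import numpy as np

def spectral_profile(signal, n_fft=1024):
    """Average magnitude spectrum over short frames: a crude 'timbre' summary."""
    hop = n_fft // 2
    frames = [signal[i:i + n_fft] for i in range(0, len(signal) - n_fft, hop)]
    window = np.hanning(n_fft)
    spectra = [np.abs(np.fft.rfft(frame * window)) for frame in frames]
    profile = np.mean(spectra, axis=0)
    return profile / (np.linalg.norm(profile) + 1e-12)  # unit-normalise

def spectral_similarity(a, b):
    """Cosine similarity between average spectra: 1.0 means identical shape."""
    return float(np.dot(spectral_profile(a), spectral_profile(b)))

sr = 22050
t = np.arange(sr) / sr  # one second of audio
tone_440 = np.sin(2 * np.pi * 440 * t)  # A4
tone_445 = np.sin(2 * np.pi * 445 * t)  # slightly detuned A4
noise = np.random.default_rng(0).standard_normal(sr)  # broadband noise

print(spectral_similarity(tone_440, tone_445))  # high: very similar spectra
print(spectral_similarity(tone_440, noise))     # much lower: dissimilar
```

Real systems go far beyond this - perceptually motivated features, models of temporal structure, learned representations - which is exactly where the open research questions live.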