I am an Associate Professor of AI & Biodiversity working in the Netherlands (Tilburg University, Naturalis Biodiversity Centre, JADS). My research is about machine listening and computational bioacoustics - which means using computation (especially machine learning) to understand animal sounds and other sound signals.
I develop automatic methods to analyse large collections of sound recordings - for example, detecting the bird sounds they contain, how those sounds vary, how they relate to each other, and how birds' behaviour relates to the sounds they make. The research focuses on the machine learning and signal processing methods that can help answer these questions. I also work with others to apply these methods to biodiversity monitoring.
I am also a Fellow of the Alan Turing Institute. I work with OpenClimateFix and OpenStreetMap on addressing climate change through solar panel mapping.
I use various machine learning methods, including Gaussian processes, point process models, feature learning... sure, yes, and deep learning too, lots of deep learning.
Selected publications
- V. Morfi, R. Lachlan and D. Stowell, Deep perceptual embeddings for unlabelled animal sound events. Journal of the Acoustical Society of America 150, 2 (2021).
- D. Stowell, J. Kelly, D. Tanner et al. A harmonised, high-coverage, open dataset of solar photovoltaic installations in the UK. Sci Data 7, 394 (2020).
- D. Stowell, T. Petruskova, M. Salek, P. Linhart, Automatic acoustic identification of individual animals: Improving generalisation across species and recording conditions, Journal of the Royal Society Interface 16 (153), article 20180940, 2019.
- W.J. Wilkinson, M. Riis Andersen, J.D. Reiss, D. Stowell, A. Solin, End-to-End Probabilistic Inference for Nonstationary Audio Analysis. In Proceedings of the International Conference on Machine Learning (ICML 2019), 2019.
- V. Morfi and D. Stowell, Deep Learning for Audio Event Detection and Tagging on Low-Resource Datasets, Applied Sciences, 8(8), article 1397, 2018.
- D. Stowell, Y. Stylianou, M. Wood, H. Pamuła, H. Glotin, Automatic acoustic detection of birds through deep learning: the first Bird Audio Detection challenge, Methods in Ecology and Evolution 2018.
- D. Stowell. Computational Bioacoustic Scene Analysis. In Computational Analysis of Sound Scenes and Events, T. Virtanen, M. D. Plumbley, and D. P. W. Ellis (eds.), Springer, Oct. 2017.
- D. Stowell, L. F. Gill, and D. Clayton. Detailed temporal structure of communication networks in groups of songbirds. Journal of the Royal Society Interface, 13(119), 2016.
- D. Stowell and M. D. Plumbley, Segregating event streams and noise with a Markov renewal process model. Journal of Machine Learning Research 14, 1891-1916, 2013.
Full publication listing on my Google Scholar profile, plus preprints on arXiv.
Selected chairing/organising
- Chair of VIHAR 2019 two-day research workshop
- Co-chair of ICASSP 2019 special session on wildlife bioacoustics
- Co-chair of ICEI 2018 session on "Analysis of ecoacoustic recordings: detection, segmentation and classification"
- Chair of IBAC 2017 session on "Machine Learning Methods in Bioacoustics"
- Chair of EUSIPCO 2017 special session on "Bird Audio Signal Processing"
- Lead organiser of the Bird Audio Detection challenge
Datasets
All are published under open data licences:
- Few-shot bioacoustic event detection (2021) - animal calls (mammal, bird)
- UKPVGeo (2020) - Solar panels and solar farms in the UK - geographic open data - more info in our journal article
- NIPS4bPLUS (2019) - Detailed transcriptions of NIPS4B 2013 Bird Challenge Training Dataset
- Automatic acoustic identification of individual birds (2018) - audio of little owl, chiffchaff, tree pipit, labelled with species AND individual identity
- Bird Audio Detection challenge 2018 - large datasets of 10-second clips, annotated for presence/absence of birds
- Bird Audio Detection challenge 2016 - large datasets of 10-second clips, annotated for presence/absence of birds
- Call timing data in zebra finch groups (2016)
- British Library dawn chorus species annotations (2014)
- FM and frequency stats of songbird sound recordings (2013)
- DCASE 2013 - datasets for "acoustic scene classification", "sound event detection". (We later also released the testing sets: scene classification, event detection)
- Freefield1010 (2013) - tagged 10-second audio clips of soundscapes
- beatboxset1 (2008) - beatboxing audio data set
In the media
- Nature: interviewed in March 2019 for the article "AI empowers conservation biology"
- Climate Home News: "One million solar panels! If only we knew where they were"
- Science: "Computer becomes bird enthusiast"
- BBC: "Software can decode bird songs"
- BBC Radio 4: featured on The World Tonight talking about automatic bird identification and Warblr
- RTE Radio 1: conversation about automatic birdsong identification on The Mooney Show: MP3 link
PhD students
- Shubhr Singh: "Novel mathematical methods in deep learning for audio"
- Ines de Almeida Nolasco: "Automatic acoustic identification of individual animals in the wild"
- Veronica Morfi: "Machine transcription of wildlife bird sound scenes"
- Pablo Alvarado Duran: "Physically and Musically Inspired Probabilistic Models for Audio Content Analysis"
- Will Wilkinson: "Physically and Musically Inspired Probabilistic Models for Audio Content Analysis"
- Delia Fano Yela: "Signal Processing Methods for Source Separation in Music Production"
Why does it matter?
What's the point of analysing animal sounds? And why do it with computational methods? Well...
One surprising fact about birdsong is that it has a lot in common with human language, even though it evolved separately. As they grow up, many songbirds go through stages of vocal learning similar to our own. And each species is slightly different, which is useful for comparing and contrasting. So biologists are keen to study songbird learning processes - not only to understand more about how human language evolved, but also to understand social organisation in animal groups, and so on. I'm not a biologist, but I collaborate regularly with some great people to improve the automatic sound analysis in their toolkit - for example, by analysing much larger audio collections than they could possibly analyse by hand.
Bird population and migration monitoring is also important. UK farmland bird populations have declined by 50% since the 1970s, and woodland birds by 20% (source), and similar patterns are being recorded worldwide. We have great organisations such as the BTO and the RSPB (in the UK) or Sovon (in the Netherlands), who coordinate professionals and amateurs to monitor bird populations each year. If we can add improved automatic sound recognition to that effort, we can bring extra detail to the monitoring. For example, many bird species are shifting location year-on-year in response to climate change (source) - exactly the kind of pattern you can detect better with more data and better analysis.
Sound is fascinating, and still surprisingly difficult to analyse. What is it that makes one sound similar to another sound? Why can't we search for sounds as easily as we can for words? There's still a lot that we haven't sorted out in our scientific and engineering understanding of audio. Shazam works well for music recordings, but don't be lulled into a false sense of security by that! There's still a long way to go in this research topic before computers can answer all of our questions about sounds.
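One way to make "sound similarity" concrete is to compare short-time spectral statistics. The sketch below is a toy illustration only (not any published method; the function names, parameter values, and test signals are my own choices): it summarises each recording by its average log-magnitude spectrum and compares those signatures with cosine similarity, which is enough to rank two nearby tones as more alike than a tone versus broadband noise.

```python
import numpy as np

def spectral_signature(signal, n_fft=1024, hop=512):
    """Summarise a recording by its average log-magnitude spectrum."""
    window = np.hanning(n_fft)
    frames = [signal[i:i + n_fft] * window
              for i in range(0, len(signal) - n_fft, hop)]
    mags = np.abs(np.fft.rfft(np.array(frames), axis=1))
    return np.log1p(mags).mean(axis=0)

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors (1.0 = same direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

sr = 16000
t = np.arange(sr) / sr                            # one second of audio
tone_a = np.sin(2 * np.pi * 2000 * t)             # a 2000 Hz tone
tone_b = np.sin(2 * np.pi * 2050 * t)             # a slightly higher tone
noise = np.random.default_rng(0).normal(size=sr)  # broadband noise

sim_close = cosine_similarity(spectral_signature(tone_a), spectral_signature(tone_b))
sim_noise = cosine_similarity(spectral_signature(tone_a), spectral_signature(noise))
# The two tones should score as more similar than the tone-vs-noise pair.
```

Such crude measures are exactly why the research problem is hard: real systems need perceptually meaningful features or learned embeddings, and invariance to time-shifts and recording conditions, before similarity scores start matching human judgements about sound.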