I am a research fellow, conducting research into automatic analysis of bird sounds using machine learning.
→ Click here for more about my research.
Serves 1 (fairly big portion), takes 25 mins.
In a deep pan which has a lid, heat up about 1 tbsp vegetable oil while you chop the onion. Chop about three-quarters of the onion into rough pieces of whatever size, and slice the remaining quarter into nice rings, about half a centimetre thick.
Put the rough-chopped onion in the pan and give it a good fry to get it softened. Add the three spices and stir around. Then add the chorizo - not too much, it's mainly for flavour. Let this cook for two minutes or so.
Then add the lentils and stir, then add enough boiling water to only-just-cover. Put a lid on, turn the heat right down, and let this bubble for 15 minutes.
In the final five minutes, heat about 2 tbsp of vegetable oil in a frying pan. Make sure the onion rings are separated into circles, and put them in the pan to fry briskly for 5 minutes, turning halfway. While these are getting a little crispy, chop the mangetouts roughly into maybe 3 pieces each and chuck them into the stew, and also add the bits of parsley and the lemon juice.
When the onion rings are ready, simply put the stew in a bowl and sprinkle the onion rings on top.
Just before the Brexit referendum I was wondering how Brexit would affect the kind of people coming to work with us. That's a long-term effect and very hard to measure. But, like most of the country, I hadn't thought deeply about the direct practical consequences of an Exit vote - in this case, the consequences for research that would show themselves within the first month.
The effect is on EU funding in particular. Since the UK hasn't actually left the EU, you might think that things carry on "as normal" until that point - existing projects continue, and you can even apply for new projects. (In fact, that's essentially the official guidance so far.) The problem is that collaborative EU grants are the lifeblood of a lot of research, and they're also very competitive. I know of at least one colleague who's been taken off a grant proposal (which is being organised by someone in another EU country), because a UK partner now means a risk factor that could easily cause a reviewer or a programme administrator to mark the proposal down.
Similarly, at least two colleagues who have been leading EU grant proposals are now in a difficult situation. After having put a lot of work into preparing a proposal, do they submit and risk being marked down as a risk factor? Do they rewrite the proposal with Brexit backup strategies? Do they stop and wait to see what happens?
(I'm not writing this down to change anyone's mind about Brexit, by the way. Just documenting.)
Less concrete, but in my own first-hand experience: we were intending to invite a good researcher to come and work with us under the Marie Sklodowska Curie scheme (which funds researchers to spend time in another country), but I'm not sure how we can do that now. The funding is still there, but apart from the "risk factor" effect mentioned above, the potential researchers would obviously need to know how it affects their right to work in the UK (will they need a visa?) and what career options might follow on afterwards; and there's pretty much nothing we can say in answer to such questions.
This Guardian article, "UK scientists dropped from EU projects because of post-Brexit funding fears" puts the same phenomena in a wider context. This quote, for example:
Joe Gorman, a senior scientist at Sintef, Norway's leading research institute, said he believed UK industry and universities would see "a fairly drastic and immediate reduction in the number of invitations to join consortiums. [...] I strongly suspect that UK politicians simply don’t understand this, and think it is 'business as usual', at least until negotiations have been completed. They are wrong, the problems start right now."
OK, a brief PAUSE on my vegetarian year as we report back on the latest iteration of our ongoing project to develop the bacup.
This time I had some of my black bean chorizo in the fridge so we did bacups with a layer of black bean chorizo, then an egg on top, then some grated cheese on the egg.
After pre-cooking the bacups we wanted to take them out of the little pots so they would crisp up properly in stage two, but they weren't holding together enough for that, so we kept them in the pots. We added the fillings, blasted them for about 10 minutes in the oven, and look at this lovely result:
The egg was almost perfectly cooked, with a slightly runny yolk, protected from the oven by the cheese - and complemented perfectly by the dark chorizo-y bean mix. The top of the bacon was crispy, even if not crispy all the way through. Best bacups on record, IMHO - taste-wise, at least, even though we still need to work on the structural issues.
Many "Remainers" are writing to their MPs, emphasising the referendum was "advisory" and sometimes demanding a second referendum. I think both of those are damaging and alienating ideas to cling to, they won't help to fix our politics.
The ideal way forward is for the next PM to get some informal outline of the likely shape of a UK/EU deal, and put it to Parliament at the same time as a bill authorising them to trigger Article 50. We know that the Leave campaign's fairytale deal is not going to be on the table. So, armed with better information, Parliament could then choose to enable or refuse the triggering of Article 50. I don't know what the outcome would be; but it would be the right way for our democracy to work, and it leaves open a route "back into the EU" now that some of the referendum's consequences (not least, for the future of the UK as a union) are crystal clear.
Arch1 is a tiny little music venue in East London. It's got a great atmosphere, a great acoustic, and it's run by one man, a lovely fella called Rob. It's just what you want.
And now this:
A flash flood caused by the storms, and all of a sudden it's taken out lots of expensive equipment (amplifiers, mixer, drumkit, and the handsome little piano at the back) as well as obviously ruining the place.
The venue is a labour of love, Rob's been working at it 7 days a week, and it needs our support. Please support the crowdfunder to save this venue.
So, fine, there's a letter in The Times signed by over 5500 scientists arguing that UK science would suffer in the event of Brexit. They talk about funding, and collaboration, and shared infrastructure. There are cited sources for their evidence. I agree with the letter. I even signed it. But it's so boring and abstract. And all the talk of finances just disappears into the mist of the general economic to-and-fro.
Then the other day it hit me:
In our research group, in the exact office I work in every day, we have researchers from all sorts of countries, but mostly from the EU. Would they all be here if the UK had divorced itself from the EU? I don't think so. Have you seen the bureaucracy that an American has to go through to work or study in the UK? (I say "American" (meaning USA) to emphasise that the burden is there even for the richer countries.) I don't know how they maintain the energy to go through that!
So if it was much more hassle to study here than in Paris, Barcelona, Berlin, it's clear to me that we'd lose some proportion of those scientific minds coming over to collaborate or to study. I'm not even talking about the people who are directly funded by the EU, and nor am I assuming some massive limitation on free movement. We'd lose out from the multiple little frictions of no longer being part of the big club that makes it so easy to share people who have good ideas.
Some people would counter this with suggestions about collaborating with other countries instead: the Commonwealth, China, India. Well guess what? We already do plenty of that too. It's not a zero-sum game.
So yes, our excellent science definitely benefits from the free movement of people in the EU. But if I say it like that, it sounds so abstract again. Think of the great people I've encountered in my research career, and the great ideas they've come up with and developed together: which of them would not be there?
I just read this new paper, "Metrics for Polyphonic Sound Event Detection" by Mesaros et al. Very relevant topic since I'm working on a couple of projects to automatically annotate bird sound recordings.
I was hoping this article would be a complete and canonical reference that I could use as a handy citation to refer to in any discussion of evaluating sound event detection. It isn't that, for a single reason:
Just so you know, the context of that paper is the DCASE2016 challenge. For the purposes of the challenge, they've released a public Python toolbox with their evaluation metrics, and that's a great way to go about things. This paper, then, is oriented around the evaluation paradigm used in DCASE2016.
In that paradigm, they evaluate systems which supply a list of inferred event annotations that are entirely binary: on or off. They're not probabilistic, or ranked, or annotated with certainty/uncertainty. Fair enough - this happens a lot, and it's a perfectly justifiable way to set up the contest. However, in many of my scenarios we work with systems that output a probabilistic or rankable set of events. You can turn this into a definite annotation simply by thresholding, but what we'd really like to do is evaluate the fully "nuanced" output.
Why? Why should evaluation care about whether a system labels an event confidently or weakly? Well, it's all about what happens downstream. An example: imagine you have an automatic system for detecting events, and you apply it to a dataset of 1000 hours of audio. No automatic system is perfect, and so you often want to either (a) only focus on the strongly-detected items in later analysis, or (b) ask a human expert to go through the results to cross-check them. In the latter case, the expert does not have time to listen to all 1000 hours; instead you'd like to prioritise their work, for example by focussing on the annotations that are the most ambiguous. This kind of work is very likely in the applications I'm working with.
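To make that triage idea concrete, here's a minimal sketch (the clip names and probabilities are invented for illustration): if each detection carries a probability, ranking by distance from 0.5 puts the most ambiguous ones at the front of the expert's queue.

```python
def triage(detections, budget):
    """Pick the `budget` most ambiguous detections for expert review.

    detections: list of (clip_id, probability) pairs from a detector.
    A probability near 0.5 means the system is least certain, so those
    detections are the most valuable ones for a human to cross-check.
    """
    return sorted(detections, key=lambda d: abs(d[1] - 0.5))[:budget]

# Hypothetical detector output over four audio clips:
detections = [("clip_a", 0.97), ("clip_b", 0.51),
              ("clip_c", 0.12), ("clip_d", 0.40)]
print(triage(detections, 2))  # the two detections closest to 0.5
```

The same sorted list, read from the other end, gives option (a): keeping only the most confidently-detected items.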
The statistics the paper focuses on (F-measure, precision, recall, accuracy, error rate) are all based on definite binary annotations, so they don't make use of the nuance. I'm generally an advocate of the "area under the ROC curve" (AUC) statistic, which doesn't tell the whole story but helps make use of the nuance by averaging over a whole range of possible detection thresholds.
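AUC needn't be hard to compute, either. Here's a minimal sketch (the per-segment scores and ground-truth labels are made up for illustration) using the rank-sum equivalence, which gives the same number as integrating the ROC curve over all possible thresholds:

```python
def auc(scores, labels):
    """Area under the ROC curve, via the rank-sum (Mann-Whitney U) view:
    the probability that a randomly chosen positive segment scores higher
    than a randomly chosen negative one, with ties counting half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Invented detector scores per audio segment, and binary ground truth:
scores = [0.9, 0.8, 0.35, 0.6, 0.1, 0.05]
labels = [1, 1, 1, 0, 0, 0]
print(auc(scores, labels))  # 8 of the 9 positive/negative pairs ranked correctly
```

Note that the ranking is all that matters: rescaling the scores leaves AUC unchanged, which is exactly why it sidesteps the choice of threshold.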
A nice example of a paper which uses AUC for event detection is "Chime-home: A dataset for sound source recognition in a domestic environment" by Foster et al. The above paper does mention this in passing, but doesn't really tease out why anyone would use AUC or how it differs from the DCASE2016 paradigm.
I want to be clear that the Mesaros et al. paper is not "wrong" or anything like that. I just wish it had a section on evaluating ranked/probabilistic outputs, why that might matter, and what metrics come in useful. Similarly, the sed_eval toolbox doesn't have an implementation of AUC for event detection. It would presumably be fairly straightforward to add to its "segment-wise" metrics. Maybe one day!
On Monday I did a bit of what-you-might-call standup - at the Science Showoff night. Here's a video - including a bonus extra bird-imitation contest at the end!