I am a research fellow, conducting research into automatic analysis of bird sounds using machine learning.
Veggieburgers have come a long way - and London's vegetarian scene has quite a lot going on in the burger department! - so I wanted to try a few of them. I particularly wanted to work out, for my own cooking, what's a good strategy for making a veggieburger that is hearty, tasty, and, well, frankly, worth ordering. For me, your old-fashioned beanburger isn't going to do it.
So here's my list of top veggieburgers in London so far, mostly in East London. It's not an exhaustive survey, and I might update it, but I wanted to get at least 5 really solid recommendations and then write this up. I'm surprised to find that most of them are not only vegetarian but vegan. Here's the top 5! You should definitely eat these:
...so that's the top list so far. Plenty more to try. Here's something that surprised me: although plenty of nouveau popup veggieburger stalls are popping up all over, those aren't where I seem to find the best burgers. They get the Twitter hype but they don't seem to have got their recipes to match it. The list I just gave you is mostly well-established places (and two of them are omni).
Some other burgers I tried:
Mooshies (Brick Lane) - I had the "where's the beef" burger, which is made of black bean and quinoa, plus all the trimmings. I was disappointed that the burger patty didn't have much bite to it - it was more like "vegan mince" than "vegan burger", splodging everywhere at the slightest provocation. The flavour was nice, the black beans providing a dark enough flavour (more beef-like than a nut-burger or a veg-burger) and the quinoa provided a good bit of structured feel on the tongue. So I think black-bean-and-quinoa is a good idea, but you really need to put those together with something that'll make a good firm patty. Flavour-wise, the mayo left me with an overly vinegary aftertaste, so I hope they do something about that too.
On a second visit we had the pulled-pork bbq jackfruit which was fine (but my recipe's better ;), and the onion bhaji burger which was really nice - crispy and full of flavour.
Greedy Cow (Mile End) - nice fresh-tasting vegetable burger with a nice crispy exterior to the patty. Definitely tasted like they care about their veggieburger. I liked it and for an omni place it's very good indeed, but it's not up in the top league since I'm more interested in the more "meaty" angle on a veggieburger.
The Hive Wellbeing (Bethnal Green) - The burger patty was oddly small, about half the size of the bun, which was silly. The patty itself had a lovely clear fresh pumpkin-seed flavour and texture (it's made of mushroom and courgette too). Nice chutney underneath. The flavours overall are savoury and sharp (the mustard mayo contributes to that) - good, though potentially not for everyone. On a second visit, I found again that the burger was made of good stuff but was kinda awkwardly put together - proportions a little bit off - maybe it's a good recipe, inattentively prepared? It seems to me it could be in the running to be the best, if it were given a bit more care.
Vurgers (popup) - I had the "BBQ nut" one, because it looked likely to be the most substantial. It was a decent meal and hefty enough, but way way overseasoned - so much sugar, so much salt, so much acid. I know that "BBQ" largely implies they'll overdo those things, but there's supposed to be more than those blunt notes. The burger was filling and decent but didn't have much bite - it kept its shape through inertia rather than strength, if you get what I mean. Certainly good enough to mention, but not getting near the top, and served in a poncey-seeming location with price to match.
In the early twentieth century when the equations of quantum physics were born, physicists found themselves in a difficult position. They needed to interpret what the quantum equations meant in terms of their real-world consequences, and yet they were faced with paradoxes such as wave-particle duality and "spooky action at a distance". They turned to philosophy and developed new metaphysics of their own. Thought-experiments such as Schrödinger's cat, originally intended to highlight the absurdity of the standard "Copenhagen interpretation", became standard teaching examples.
In the twenty-first century, researchers in artificial intelligence (AI) and machine learning (ML) find themselves in a roughly analogous position. There has been a sudden step-change in the abilities of machine learning systems, and the dream of AI (which had been put on ice after the initial enthusiasm of the 1960s turned out to be premature) has been reinvigorated - while at the same time, the deep and widespread industrial application of ML means that whatever advances are made, their effects will be felt. There's a new urgency to long-standing philosophical questions about minds, machines and society.
So I was glad to see that Neil Lawrence, an accomplished research leader in ML, published an article on these social implications. The article is "Living Together: Mind and Machine Intelligence". Lawrence makes a noble attempt to provide an objective basis for considering the differences between human and machine intelligences, and what those differences imply for the future place of machine intelligence in society.
In case you're not familiar with the arXiv website I should point out that articles there are un-refereed: they haven't been through the peer-review process that guards the gate of standard scientific journals. And let me cut to the chase - I'm not sure which journal he was targeting with this paper, but if I were a reviewer I wouldn't have recommended acceptance. Lawrence's computer science is excellent, but here I find his philosophical arguments disappointing. Here's my review:
A key difference between humans and machines, notes Lawrence, is that we humans - considered for the moment as abstract computational agents - have high computational capacity but a very limited bandwidth to communicate. We speak (or type) our thoughts, but really we're sharing the tiniest shard of the information we have computed, whereas modern computers can calculate quite a lot (not as much as a brain does) but they can communicate with such high bandwidth that the results are essentially not "trapped" in the computer. For Lawrence this is a key difference, making the boundaries between machine intelligences much less pertinent than the boundaries between natural intelligences, and suggesting that future AI might not act as a lot of "agents" but as a unified subconscious.
Lawrence quantifies this difference as the numerical ratio between computational capacity and communicative bandwidth. Embarrassingly, he then names this ratio the "embodiment factor". The embodiment of cognition is an important idea in much modern thought-about-thought: essentially, "embodiment" is the rejection of the idea that my cognition can really be considered as an abstract computational process separate from my body. There are many ways we can see this: my cognition is non-trivially affected by whether or not I have hay-fever symptoms today; it's affected by the limited amount of energy I have, and the fact I must find food and shelter to keep that energy topped up; it's affected by whether I've written the letter "g" on my hand (or is it a "9"? oh well); it's affected by whether I have an abacus to hand; it's affected by whether or not I can fly, and thus whether in my experience it's useful to think about geography as two-dimensional or three-dimensional. (For a recent perspective on extended cognition in animals see the thoughts of a spiderweb.) I don't claim to be an expert on embodied cognition. But given the rich cognitive affordances that embodiment clearly offers, it's terribly embarrassing and a little revealing that Lawrence chooses to reduce it to the mere notion of being "locked in" (his phrase) with constraints on our ability to communicate.
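To make the orders of magnitude concrete, here's a rough back-of-the-envelope sketch of the ratio Lawrence describes. All the numbers below are my own loose illustrative guesses, not figures from his paper; treat them as placeholders for the general shape of the argument:

```python
# Sketch of the compute/communication ratio Lawrence describes.
# All figures are order-of-magnitude guesses for illustration only,
# NOT values taken from the paper.

def ratio(compute_bits_per_s, comm_bits_per_s):
    """Computational capacity divided by communication bandwidth."""
    return compute_bits_per_s / comm_bits_per_s

# A human: enormous neural computation, but speech or typing
# carries only on the order of tens of bits per second.
human = ratio(compute_bits_per_s=1e16, comm_bits_per_s=100)

# A networked computer: fast, but its gigabit link keeps pace,
# so results are not "trapped" inside it.
machine = ratio(compute_bits_per_s=1e12, comm_bits_per_s=1e9)

print(f"human:   {human:.0e}")    # vastly more computed than shared
print(f"machine: {machine:.0e}")  # computation flows straight out
```

Whatever the exact numbers, the human ratio comes out many orders of magnitude larger, which is the asymmetry the whole argument rests on.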
Lawrence's ratio could perhaps be useful, so to defuse the unfortunate trivial reduction of embodiment, I would like to rename it "containment factor". He uses it to argue that while humans can be considered as individual intelligent agents, for computer intelligences the boundaries dissolve and they can be considered more as a single mass. But it's clear that containment is far from sufficient in itself: natural intelligences are not the only things whose computation is not matched by their communication. Otherwise we would have to consider an air-gapped laptop as an intelligent agent, but not an ordinary laptop.
The argument that the boundaries between AI agents dissolve also rests on another problem. In discussing communication Lawrence focusses too heavily on 'altruistic' or 'honest' communication: transparent communication between agents that are collaborating to mutually improve their picture of the world. This focus leads him to neglect the fact that communicating entities often have differing goals, and often have reason to be biased or even deceitful in the information shared.
The tension between communication and individual aims has been analysed in a long line of thought in evolutionary biology under the name of signalling theory, which considers, for example, the conditions under which "honest signalling" is beneficial to the signaller. It's important to remember that the different agents each have their own contexts and their own internal states/traits (maybe one is low on energy reserves, and another is not), which affect communicative goals even if the overall avowed aim is common.
In Lawrence's description the focus on honest communication leads him to claim that "if an entity's ability to communicate is high [...] then that entity is arguably no longer distinct from those which it is sharing with" (p3). This is a direct consequence of Lawrence's elision: it can only be "no longer distinct" if it has no distinct internal traits, states, or goals. The elision of this aspect recurs throughout, e.g. "communication reduces to a reconciliation of plot lines among us" (p5).
Unfortunately the implausible unification of AI into a single morass is a key plank of the ontology that Lawrence wants to develop, and also key to the societal consequences he draws.
Lawrence considers some notions of human cognition including the idea of "system 1 and system 2" thinking, and proposes that the mass of machine intelligence potentially forms a new "System Zero" whose essentially unconscious reasoning forms a new stratum of our cognition. The argument goes that this stratum has a strong influence on our thought and behaviour, and that the implications of this on society could be dramatic. This concept has an appeal of neatness but it falls down too easily. There is no System Zero, and Lawrence's conceptual starting-point in communication bandwidth shows us why:
Disturbingly, Lawrence claims "System Zero is already aligned with our goals". This starts from a useful observation - that many commercial processes such as personalised advertising work because they attempt to align with our subconscious desires and biases. But again it elides too much. In reality, such processes are aligned not with our goals but with the goals of powerful social elites, large companies and so on; if they happen to align with our "system 1" goals, that is a contingent matter.
Importantly, the control of these processes is largely not democratic but controlled commercially or via might-makes-right. Therefore even if AI/ML does align with some people's desires, it will preferentially align with the desires of those with cash to spend.
On a positive note: Lawrence argues that our limited communication bandwidth shapes our intelligence in a particular way: it makes it crucial for us to maintain "models" of others, so that we can infer their internal state (as well as our own) from their behaviour and their signalling. He argues that conversely, many ML systems do not need such structured models - they simply crunch on enough data and they are able to predict our behaviour pretty well. This distinction seems to me to mark a genuine difference between natural intelligence and AI, at least according to the current state of the art in ML.
He does go a little too far in this as well, though. He argues that our reliance on a "model" of our own behaviour implies that we need to believe that our modelled self is in control - in Freudian terms, we could say he is arguing that the existence of the ego necessitates its own illusion that it controls the id. The argument goes that if the self-model knew it was not in control,
"when asked to suggest how events might pan out, the self model would always answer with "I don't know, it would depend on the precise circumstances"."
This argument is shockingly shallow coming from a scientist with a rich history of probabilistic machine learning, who knows perfectly well how machines and natural agents can make informed predictions in uncertain circumstances!
I also find unsatisfactory the eagerness with which various dualisms are mapped onto one another. The most awkward is the mapping of "self-model vs self" onto Cartesian dualism (mind vs body); this mapping is a strong claim and needs to be argued for rather than asserted. It would also need to account for why such mind-body dualism is not universal, either across history or across cultures.
However, Lawrence is correct to argue that "sentience" of AI/ML is not the overriding concern in its role in our society; rather, its alignment or otherwise with our personal and collective goals, and its potential to undermine human democratic agency, is the prime issue of concern. This is a philosophical and a political issue, and one on which our discussion should continue.
This season, I'm lead organiser for two special conference sessions on machine listening for bird/animal sound: EUSIPCO 2017 in Kos, Greece, and IBAC 2017 in Haridwar, India. I'm very happy to see the diverse selection of work that has been accepted for presentation - the diversity of the research itself, yes, but also the diversity of research groups and countries from which the work comes.
The official programmes haven't been announced yet, but as a sneak preview here are the titles of the accepted submissions, so you can see just how lively this research area has become!
A two-step bird species classification approach using silence durations in song bouts
Automated Assessment of Bird Vocalisation Activity
Deep convolutional networks for avian flight call detection
Estimating animal acoustic diversity in tropical environments using unsupervised multiresolution analysis
JSI sound: a machine-learning tool in Orange for classification of diverse biosounds
Prospecting individual discrimination of maned wolves’ barks using wavelets
(This session is co-organised with Yiannis Stylianou and Herve Glotin)
Stacked Convolutional and Recurrent Neural Networks for Bird Audio Detection
Densely Connected CNNs for Bird Audio Detection
Classification of Bird Song Syllables Using Wigner-Ville Ambiguity Function Cross-Terms
Convolutional Recurrent Neural Networks for Bird Audio Detection
Joint Detection and Classification Convolutional Neural Network (JDC-CNN) on Weakly Labelled Bird Audio Data (BAD)
Rapid Bird Activity Detection Using Probabilistic Sequence Kernels
Automatic Frequency Feature Extraction for Bird Species Delimitation
Two Convolutional Neural Networks for Bird Detection in Audio Signals
Masked Non-negative Matrix Factorization for Bird Detection Using Weakly Labelled Data
Archetypal Analysis Based Sparse Convex Sequence Kernel for Bird Activity Detection
Automatic Detection of Bird Species from Audio Field Recordings Using HMM-based Modelling of Frequency Tracks
Please note: this is a PREVIEW - sometimes papers get withdrawn or plans change, so these lists should be considered provisional for now.
I'm writing this on the morning of the day of voting for the 2017 election.
Opinion polls are notorious here in the UK for having a complex relationship with reality. What I expect will happen is that the Tories will win, but with an embarrassingly modest lead. From the last election they had a working majority of 12 seats. The polls in April suggested a Labour wipe-out was on the cards; Theresa May grabbed that opportunity, and then took it on herself to throw it away. It's hard to see her doing anything now but failing to make good on that early promise.
Theresa May called this election for entirely selfish reasons. She wanted her own mandate, yes, but she'd previously said it wasn't needed. She called the election, as she said herself, taking advantage of the moment to get herself a lovely big majority. It's highly likely that this gambit will fail and that she'll be back in a position rather similar to the starting position, in which case she'll have wasted two months of all of our time - and, crucially, two months out of the two-year time limit when she was supposed to be negotiating Brexit.
So even if the Tories win, Theresa May is likely to have failed badly. Jeremy Corbyn, however, has defied the expectations of the pundits and built up organic support for Labour. I was sceptical about him and in particular about his election strategy, but it seems really to have worked, and he's shown himself to be a much better leader than many of us thought. Will his parliamentary party finally get behind him after the election? We shall see.
There's another thing we can thank May-vs-Corbyn for. Putting aside for the moment differences of policy, this election seems to me to be a victory for
And it's been the first election in my adult life in which the two big parties have actually represented a meaningful choice of two options. In previous years, New Labour and the Tories may have come from different stock but their political visions were so close as to be redundant. Corbyn's Labour have offered not just a coherent vision, but a genuine alternative. I don't expect them to be able to win, but given that they're fighting uphill against a hell of an onslaught of negative media, it's been heartening to see their principled and engaged version of political campaigning reap massive rewards, building a massive swing from 25% up to almost 40% (that's according to voting-intention polls). I don't consider myself a Labour voter but Corbyn's made it plausible to consider that a possibility.
I had expected this election to be dispiriting but it has been heartening.
On Wednesday we had a "flagship seminar" from Prof Andy Hopper on Computing for the future of the planet. How can computing help in the quest for sustainability of the planet and humanity?
Lots of food for thought in the talk. I was surprised to come out with a completely different take-home message than I'd expected - and a different take-home message than I think the speaker had in mind too. I'll come back to that in a second.
Some of the themes he discussed:
Then Hopper also talked about replacing physical activities by digital activities (e.g. shopping), and this led him on to the topic of the Internet, worldwide sharing of information and so on. He argued (correctly) that a lot of these developments will benefit the low-income countries even though they were essentially made by-and-for richer countries - and also that there's nothing patronising in this: we're not "developing" other countries to be like us, we're just sharing things, and whatever innovations come out of African countries (for example) might have been enabled by (e.g.) the Internet without anyone losing their own self-determination.
Hopper called this "wealth by proxy"... but it doesn't have to be as mystifying as that. It's a well-known idea called the commons.
The name "commons" originates from having a patch of land which was shared by all villagers, and that makes it a perfect term for what we're considering now. In the digital world the idea was taken up by the free software movement and open culture such as Creative Commons licensing. But it's wider than that. In computing, the commons consists of the physical fabric of the Internet, of the public standards that make the World Wide Web and other Internet actually work (http, smtp, tcp/ip), of public domain data generated by governments, of the Linux and Android operating systems, of open web browsers such as Firefox, of open collaborative knowledge-bases like Wikipedia and OpenStreetMap. It consists of projects like the Internet Archive, selflessly preserving digital content and acting as the web's long-term memory. It consists of the GPS global positioning system, invented and maintained (as are many things) by the US military, but now being complemented by Russia's GloNass and the EU's Galileo.
All of those are things you can use at no cost, and which anyone can use as bedrock for some piece of data analysis, some business idea, some piece of art, including a million opportunities for making a positive contribution to sustainability. It's an unbelievable wealth, when you stop to think about it, an amazing collection of achievements.
The surprising take-home lesson for me was: for sustainable computing for the future of the planet, we must protect and extend the digital commons. This is particularly surprising to me because the challenges here are really societal, at least as much as they are computational.
There's more we can add to the commons; and worse, the commons is often under threat of encroachment. Take the Internet and World Wide Web: they're increasingly being centralised into the control of a few companies (Facebook, Amazon), which is bad news generally, but also presents a practical systemic risk. This was seen recently when Amazon's AWS service suffered an outage. AWS powers so many of the commercial and non-commercial websites online that this one outage took down a massive chunk of the digital world. As another example, I recently had problems when Google's "ReCAPTCHA" system locked me out for a while - so many websites use ReCAPTCHA to confirm that there's a real human filling in a form, that if ReCAPTCHA decides to give you the cold shoulder then you instantly lose access to a weird random sample of services, some of which may be important to you.
Another big issue is net neutrality. "Net neutrality is like free speech" and it repeatedly comes under threat.
Those examples are not green-related in themselves, but they illustrate that out of the components of the commons I've listed, the basic connectivity offered by the Internet/WWW is the thing that is, surprisingly, perhaps the flakiest and most in need of defence. Without a thriving and open internet, how do we join the dots of all the other things?
But onto the positive. What more can we add to this commons? Take the African soil-sensing example. Shouldn't the world have a free, public stream of such land use data, for the whole planet? The question, of course, is who would pay for it. That's a social and political question. Here in the UK I can bring the question even further down to the everyday. The UK's official database of addresses (the Postcode Address File) was... ahem... was sold off privately in 2013. This is a core piece of our information infrastructure, and the government - against a lot of advice - decided to sell it as part of privatisation, rather than make it open. Related is the UK Land Registry data (i.e. who owns what parcel of land) which is not published as open data but is stuck behind a pay-wall, all very inconvenient for data analysis, investigative journalism etc.
We need to add this kind of data to the commons so that society can benefit. In green terms, geospatial data is quite clearly raw material for clever green computing of the future, to do good things like land management, intelligent routing, resource allocation, and all kinds of things I can't yet imagine.
As citizens and computerists, what can we do?
If we get this right, 20 years from now our society's computing will be green as anything, not just because it's powered by innocent renewable energy but because it can potentially be a massive net benefit - data-mining and inference to help us live well on a light footprint. To do that we need a healthy digital commons which will underpin many of the great innovations that will spring up everywhere.
This was gorgeous. I hadn't realised that the sweet butternut and the salty halloumi would play so well off each other.
Serves 2, takes 45 minutes overall but with a big gap in the middle.
First get the oven pre-heated to 180 C. While it's warming get the butternut ready to go in the oven. Chop it into bitesize pieces, roughly the size of 2cm cubes but no need to be exact. Then put the pieces in a roasting tin. Take the rosemary needles off the stalk, chop them up and sprinkle them over the squash, then drizzle generously with olive oil. Chop the garlic into two pieces (no need to skin them - we're not eating them, just using them to add flavour) and place the pieces strategically among the squash. Then put this all into the oven, to roast for maybe 40 minutes.
When there's about 10 minutes left, heat up a griddle pan and a frying pan on the hob. Don't add any oil to either of the pans.
Take the asparagus stalks, toss them in olive oil and lay them on the griddle. Don't move them about.
Put the pine nuts into the hot dry frying pan. You'll want to shuffle these about for the next few minutes, watching them carefully - they need to get a bit toasty but not burn. While you're doing that you can cut the halloumi into bitesize pieces, about 2cm cube size. Turn the asparagus over to cook the other side and add the halloumi to the pan too. (I hope they fit in the pan with the asparagus...) After a couple of minutes you can turn the halloumi over.
Get the tin out of the oven a couple of minutes before you serve it. Find and discard the garlic.
To serve, place the asparagus on each plate, then next to it put the squash and the halloumi. Sprinkle the pine nuts over the squash and halloumi. Finally finish with a squeeze of lemon over the top.
I don't always agree with The Economist magazine but it's interesting. It thinks bigger than many of the things you can buy on an average news stand. The current issue has an article about Britain and Marx, which happens to end with a clear and laudable shopping-list of things that our country needs to do to ensure the health of the economy and of worker's conditions. Let me quote:
"The genius of the British system has always been to reform in order to prevent social breakdown. This means doing more than just engaging in silly gestures such as as fixing energy prices, as the Conservatives proposed this week (silly because this will suppress investment and lead eventually to higher prices).
People love to take the vegans down a peg or two. I guess they must unconsciously agree that the vegans are basically correct and doing the right thing, hence the defensive mud-slinging.
There's a bullshit article "Being vegan isn’t as good for humanity as you think". Like many bullshit articles, it's based on manipulating some claims from a research paper.
The point that the article is making is summarised by this quote:
"When applied to an entire global population, the vegan diet wastes available land that could otherwise feed more people. That’s because we use different kinds of land to produce different types of food, and not all diets exploit these land types equally."
This is factually correct, according to the original research paper which itself seems a decent attempt to estimate the different land requirements of different diets. The clickbaity inference, especially as stated in the headline, is that vegans are wrong. But that's where the bullshit lies.
Why? Look again at the quote. "When applied to an entire global population." Is that actually a scenario anyone expects? The whole world going vegan? In the next ten years, fifty years, a hundred? No. It's fine for the research paper to look at full-veganism as a comparison against the 9 other scenarios they consider (e.g. 20% veggy, 100% veggy), but the researchers are quite clear that their model is about what a whole population eats. You can think of it as what "an average person" eats, but no it's not what "each person should" eat.
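To see why "best for a whole population" and "best per person" come apart, here's a toy version of the carrying-capacity comparison. The numbers are entirely hypothetical, not the paper's data; the point is only the mechanism: different land types aren't interchangeable, and a diet that uses no grazing land leaves that land idle.

```python
# Toy sketch of the carrying-capacity idea (hypothetical numbers,
# NOT data from the research paper). Land types aren't
# interchangeable: grazing-only land can't grow crops.

# Available land, in arbitrary units.
available = {"cropland": 100.0, "grazing": 60.0}

# Per-capita land needs by diet (hypothetical). A vegan diet uses
# no grazing land at all, so that land feeds nobody.
diets = {
    "omnivore":   {"cropland": 0.50, "grazing": 0.50},
    "vegetarian": {"cropland": 0.35, "grazing": 0.10},
    "vegan":      {"cropland": 0.40, "grazing": 0.00},
}

def carrying_capacity(needs, available):
    """People fed is limited by the scarcest required land type."""
    return min(
        available[land] / per_person
        for land, per_person in needs.items()
        if per_person > 0
    )

for name, needs in diets.items():
    print(f"{name}: {carrying_capacity(needs, available):.0f} people")
```

With these made-up figures the vegetarian diet feeds the most people, the vegan diet comes second (the grazing land sits unused), and the omnivore diet comes last - the same qualitative ranking the paper reports, applied to a population-wide average diet rather than to any individual.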
The research concludes that a vegetarian diet is "best", judged on this specific criterion of how big a population the USA's farmland can support. And since that's for the population as a whole, and there's no chance that meat-eating will entirely leave the Western diet, a more sensible journalistic conclusion is that we should all be encouraged to be a bit more vegetarian, and the vegans should be celebrated for helping balance out those meat-eaters.
Plus, of course, the usual conclusion: more research is needed. This research was just about land use, it didn't include considerations of CO2 emissions, welfare, social attitudes, geopolitics...
The research illustrates that the USA has more than enough land to feed its population and that this could be really boosted if we all transition to eat a bit less meat. Towards the end of the paper, the researchers note that if the USA moved to a vegetarian diet, "the dietary changes could free up capacity to feed hundreds of millions of people around the globe."