How to package SuperCollider for Debian and Ubuntu Linux

SuperCollider works on Linux just great. I've been responsible for one specific part of that in recent years: when a new release of SuperCollider is available, I put it into the official Debian package repository - which involves a few obscure admin processes - and that means that in the next releases of Debian AND Ubuntu it'll be available really easily.

These are my notes to help others understand how it's done.

First a few words: on Ubuntu you can also make SuperCollider available through an Ubuntu "PPA", and there's even a sort-of-official PPA where you can get it. Some people like this because it happens much quicker (there's less official approval needed). I strongly advise maintainers: it's really valuable to go through the official Debian repository, even though it's slower. There's no need to feel rushed! Getting it into Debian often means fixing a few little packaging quirks to make sure it installs nicely and interoperates nicely, and your work will result in much wider benefit. It's OK to do the PPA thing as well, of course, but you mustn't rely purely on the PPA. (You may as well do the Debian thing and then repurpose the same codebase for the PPA.)

The things I'm going to cover, i.e. the things you'll need to know/do in order to get a new SuperCollider release into Debian, include:

  1. joining the debian-multimedia team
  2. Debian's lovely way of using git together with buildpackage
  3. importing a fresh source code release of SC into the git repository
  4. compiling it, checking it, releasing it

But I'm mainly going to do this as a step-by-step walkthrough, NOT a broad overview. Sorry if that means some things seem unexplained.

DO IT IN DEBIAN

I'm an Ubuntu user normally, but to keep things clean I do this work in a Debian virtual machine, using VirtualBox. Ubuntu is based on Debian, so you might think you could do it directly in Ubuntu, but in practice it tends to go wrong because you end up specifying the wrong versions of package dependencies etc.

Of course, Ubuntu "inherits" packages from Debian, so after we push the Debian package it will magically appear in Ubuntu too.

In the Debian VM you'll also need these packages, which you can install with apt as normal:

  • build-essential
  • git-buildpackage
  • cmake

You'll also need to install whatever is needed ordinarily to compile SuperCollider - check the readme. (There's a tool mk-build-deps which can help with this, as long as the dependencies haven't changed since the previous SC.)
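
To make that concrete, something along these lines should get the tooling in place (the exact build-dependency list is whatever the current SuperCollider readme asks for, so treat this as a sketch; mk-build-deps comes from the devscripts package):

sudo apt install build-essential git-buildpackage cmake devscripts
# once you've cloned the packaging repo (next section), this reads debian/control
# and installs SuperCollider's declared build dependencies for you
sudo mk-build-deps -i -r debian/control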

GET THE DEBIAN-FLAVOURED CODE

The Debian "multimedia team" has a special git repository of their own, which contains the released version of SuperCollider plus the debian scripts and metadata.

Here are shell commands for fetching the git repo and specifically checking out the three branches that are used in debian's git-buildpackage workflow:

git clone https://anonscm.debian.org/git/pkg-multimedia/supercollider.git
cd supercollider
git checkout -b upstream origin/upstream
ls
git checkout -b pristine-tar origin/pristine-tar
ls
git checkout master
ls

If you do those "ls" commands you get a rough idea of what's in the 3 branches:

  1. "upstream": this should be the exact same as the contents of the "-Source-linux.tar.bz2" sourcecode downloaded from the main SuperCollider release.
  2. "master": this is the same as upstream EXCEPT that it has the special "debian" folder added, which contains all of the magic to compile and bundle up SuperCollider correctly.
  3. "pristine-tar": the file layout in here is very different from the others. It's simply an archive of all the source code tar files, created automatically.

This might seem a bit arcane, but don't change it - the debian "git-buildpackage" scripts expect the git repo to be laid out EXACTLY like this.

A shortcut that actually pulls all three branches is provided by gbp:

gbp clone https://anonscm.debian.org/git/pkg-multimedia/supercollider.git

but I'm doing it explicitly because it's kinda useful to get a bit of an idea what's going on in those three branches.

IMPORTING A FRESH SOURCE CODE RELEASE

Let's imagine the main SuperCollider team have released a new version, including putting a new source code download on the website. IMPORTANT: it needs to be the "-Source-linux.tar.bz2" version, because that strips out some Windows- and Mac-specific stuff. Some people don't care whether there's extra Windows and Mac cruft in a zip file, but the Debian administrators do care, because they monitor the code in the repository to make sure there's no non-free material in there etc.

Do this every time there's a new release of SuperCollider to bundle up:

  1. Run uscan which checks the SuperCollider website for a new source code download. If it finds one it'll download it, and it'll also automatically repack it (removing some crufty files that are either not needed or lead to copyright complications). It puts the resulting tar.gz in ../tarballs. You can run uscan --verbose and it'll show some text details that might help you understand what actions the program is actually doing.
  2. Run gbp import-orig --pristine-tar --sign-tags ${path-to-fetched-tarball}. The path, for me at least, is ../tarballs/ followed by the actual tarball file. Make sure it's the "repack" one. The procedure will check with you what the upstream version number is. Is it "3.8.0~repack"? No, it's "3.8.0".
  3. Refresh the patches. What this means is: the debian folder has a set of patches that it uses to modify the SuperCollider source code, to fix problems. These patches might not apply exactly to the new code, so we need to go through them:

    export QUILT_PATCHES=debian/patches
    quilt push -a
    quilt refresh
    quilt pop      # repeat refresh-and-pop until all are popped
    

    Did you run the last two lines again and again? Eventually it says "No patches applied".

    After this, it's a good idea to do a git commit debian to store any changes you made to the patches in a git commit of their own.

    You may need to remove a patch - typically, this happens if it's been "upstreamed". To do that, you can git rm the patch itself as well as edit its name out of debian/patches/series, then commit that. You may also find you need to make a new patch, to fix some issue such as getting the latest code to build properly on all the architectures that Debian supports.

    (Recently the debian admins have started using gbp pq to look after the patches. Maybe that's useful. I haven't got into it yet.)

  4. Create a changelog entry.

    gbp dch -a debian

Here's something that might be surprising: the changelog file is what tells Debian which version of SC it's building. If it sees 3.7.0 as the top item, it tries to build from 3.7.0 source. It doesn't matter what's been happening in the git commits, or which source code you have downloaded. So if you're importing a new version, you have to make sure to add a new entry to the top of the changelog. Hopefully the pattern is obvious from the file itself, but you can also look at general Debian packaging guidelines to understand it more.
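
To give a rough idea of the format, a new entry at the top of debian/changelog looks something like this (gbp dch generates most of it for you; the version string here is only illustrative, and should match whatever upstream version gbp import-orig recorded, plus the Debian revision):

supercollider (3.8.0-1) unstable; urgency=medium

  * New upstream release
  * Refresh patches against the new source

 -- Your Name <you@example.org>  Tue, 03 Jan 2017 10:00:00 +0000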

NOW TEST THAT IT BUILDS

First let's get gbp to build a debian style source package. (You may be wondering: we started with SuperCollider's original source code bundle, why are we now building a source code bundle? This is different, it makes a .dsc file that could be used to tell the Debian servers how to compile a binary.)

The main reason I'm telling you to do this is that it performs some of the build process but not the actual compiling. So it's a good way to check for any errors before doing the hardcore building:

gbp buildpackage -S --git-export-dir=../buildarea

This will also run lintian to check for errors in rule-following. Debian's rules are quite strict and you'll probably find some little error or other, which you should fix, commit, and then try again.
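
If you want to re-run those checks by hand after fixing something, you can point lintian directly at the generated files - for example (the filename is whatever the source build produced in ../buildarea):

lintian ../buildarea/supercollider_*.changes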

You can also then build proper binary debs:

gbp buildpackage --git-export-dir=../buildarea

This might take a long time. Eventually... it should produce some .deb files. It might even ask you to sign them.

THINGS YOU MAY HAVE TO CHANGE

  1. SuperCollider uses the Boost C++ libraries. SuperCollider comes bundled with a recent version of Boost, and the exact version gets updated now and again. Debian makes this a little more complicated - they don't want to use the bundled version, instead they want to use the version that's packaged in Debian. So if you look in debian/control you'll see some "libboost" build-dependencies specified. If SuperCollider's Boost requirement has changed, you may need to update these to get it building properly (there's a hypothetical snippet just after this list). You'll also need to use apt install to fetch those Boost dependencies.
  2. If SC source code files get renamed or the folders change, it's fairly common that you need to edit one of the text files in the debian folder to point it at the right thing.
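
As a purely hypothetical illustration of the first point, the Build-Depends field in debian/control contains lines along these lines, and it's the version constraints here that may need bumping when SuperCollider moves to a newer Boost (the actual package names and versions depend on what the new release needs):

Build-Depends: debhelper (>= 9),
               cmake,
               libboost-dev (>= 1.62.0),
               libboost-filesystem-dev (>= 1.62.0),
               libboost-program-options-dev (>= 1.62.0)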

NOW TEST THAT IT RUNS

You've successfully made the .deb files, i.e. the actual installable binaries. Install them on your system. You can do it using dpkg -i like you would with most deb files, or for convenience you can use the debi command which makes sure you're installing the whole set of packages you've built:

debi ../buildarea/supercollider_3.8.0-1_*.changes

NOTE that this installing step is "real" installing. If you're working in a VirtualBox VM like I am, then you're probably not worried about whether you'll be overwriting your existing SC. Otherwise do bear in mind - this install will overwrite/upgrade the SC that's installed on your system.

Once installed, run it, make sure the thing is OK.
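
For a quick command-line sanity check, something like this should work - a minimal sketch, assuming the newly installed sclang is on your PATH (it's also worth launching the IDE, scide, to check the GUI side):

echo '"hello from the packaged sclang".postln; 0.exit;' > /tmp/sc-smoketest.scd
sclang /tmp/sc-smoketest.scd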

NOW PUBLISH THIS STUFF

In order to get this stuff actually live on the official Debian package system, you need to do a few things. You'll need to join the Debian Multimedia team as a guest (see their webpages for more info on that), and once you've done that it gives you permission to push your git changes up to their server:

git push origin master upstream pristine-tar
git push --follow-tags

Then after that you need to do a "request for upload" - i.e. asking one of the Debian Multimedia team members with upload rights to give it a quick check and publish it. You do this via the Debian Multimedia team mailing list. It's also possible to get upload rights yourself, but that's something I haven't gone through.

CONCLUSION

So there we have it. Thanks to Felipe Sateler and other Debian crew for lots of help inducting me into this process.

Interested in helping out? Whether it's SuperCollider or some other audio/video linux tool, the Debian MultiMedia team would love you to join!

P.S. some added notes from Mauro about using Docker as part of this

| supercollider |

IBAC 2017 India - bioacoustics research

I'm just flying back from the International Bioacoustics Congress 2017, held in Haridwar in the north of India. It was a really interesting time. I'm glad that IBAC was successfully brought to India, i.e. to a developing country with a more fragmented bioacoustics community (I think!) than in the west, and getting to know some of the Indian countryside, the people, and the food was ace. Let me make a few notes about research themes that were salient to me:

  • "The music is not in the notes, but in the silence between" - this Mozart quote which Anshul Thakur used is a lovely byline for his studies - as well as others' - on using the durations of the gaps between units in a birdsong, for the purposes of classification/analysis. Here's who investigated gaps:
    • Anshul Thakur showed how he used the gap duration as an auxiliary feature, alongside the more standard acoustic classification, to improve quality.
    • Florencia Noriega discussed her own use of gap durations, in which she fitted a Gaussian mixture model to the histogram of log-gap-durations between parrot vocalisation units, and then used this to look for outliers. One use that she suggested was that this could be a good way to look for unusual vocalisation sequences that could be checked out in more detail by further fieldwork.
    • (I have to note here, although I didn't present it at IBAC, that I've also been advocating the use of gap durations. The clearest example is in our 2013 paper in JMLR in which we used them to disentangle sequences of bird sounds.)
    • Tomoko Mizuhara presented evidence from a perceptual study in zebra finches, that the duration of the gap preceding a syllable plays some role in the perception of syllable identity. The gap before? Why? - Well, one connection is that the gap before a syllable might reflect the time the bird takes to breathe in, and thus there's an empirical correlation between the gap and the syllable that follows, whether the bird is using that cue purely empirically or for some more innate reason.
  • Machine learning methods in bioacoustics - this is the session that I organised, and I think it went well - I hope people found it useful. I won't go into loads of detail here since I'm mostly making notes about things that are new to me. One notable thing though was Vincent Lostanlen announcing a dataset "BirdVox-70k" (flight calls of birds recorded in the USA, annotated with the time and frequency of occurrence) - I always like it when a dataset that might be useful for bird sound analysis is published under an open licence! No link yet - I think that's to come soon. (They've also done other things such as this neat in-browser audio annotation tool.)

  • Software platforms for bioacoustics. When I do my research I'm often coding my own Python scripts or suchlike, but that's not a vernacular that most bioacousticians speak. It's tricky to think what the ideal platform for bioacoustics would be, since there are quite some demands to meet: for example ideally it could handle ten seconds of audio as well as one year of audio, yet also provide an interface suitable for non-programmers. A few items on that theme:

    • Phil Eichinski (standing in for Paul Roe) presented QUT's "Eco-Sounds" platform. They've put effort into making it work for big audio data, managing terabytes of audio and optimising whether to analyse the sound proactively or on-demand. The false-colour "long duration spectrograms" developed by Michael Towsey et al are used to visualise long audio recordings. (I'll say a bit more about that below.)
    • Yves Bas presented his Tadarida toolbox for detection and classification.
    • Ed Baker presented his BioAcoustica platform for archiving and analysing sounds, with a focus on connecting deposits to museum specimens and doing audio query-by-example.
    • Anton Gradisek, in our machine-learning session, presented "JSI Sound: a machine learning tool in Orange for classification of diverse biosounds" - this is a kind of "machine-learning as a service" idea.
    • Then a few others that might or might not be described as full-blown "platforms":
      • Tomas Honaiser wasn't describing a new platform, but his monitoring work - I noted that he was using the existing AMIBIO project to host and analyse his recordings.
      • Sandor Zsebok presented his Ficedula matlab toolbox which he's used for segmenting and clustering etc to look at cultural evolution in the Collared flycatcher.
      • Julie Elie mentioned her lab's SoundSig Python tools for audio analysis.
    • Oh by the way, what infrastructure should these projects be built upon? The QUT platform is built using Ruby, which is great for web developers but strikes me as an odd choice because very few people in bioacoustics or signal processing have even heard of it - so how are the team / the community going to find the people to maintain it in future? Yves Bas's toolbox is C++ and R, which makes sense for R users (fairly common in this field). BioAcoustica - I'm not sure if it's open-source, but there's an R package that connects to it. I'm not an R user myself; I much prefer Python, because of its good language design, its really wide user base, and its big range of uses, though I recognise that it doesn't have the solid stats base that R does. People will debate the merits of these tools for ever onwards - we're not going to all come down on one decision - but it's a question that I often come back to: how best to build software tools to ensure they're usable and maintainable and solid.

So about those false-colour "long duration spectrograms". I've been advocating this visualisation method ever since I saw Michael Towsey present it (I think at the Ecoacoustics meeting in Paris). Just a couple of months ago I was at a workshop at the University of Sussex and Alice Eldridge and colleagues had been playing around with it too. At IBAC this week, ecologist Liz Znidersic talked really interestingly about how she had used them to detect a cryptic (i.e. hard-to-find) bird species. It shows that the tool helps with "needle in a haystack" problems, including those where you might not have a good idea of what needle you're looking for.

In Liz's case she looked at the long-duration spectrograms manually, to spot calling activity patterns. We could imagine automating this, i.e. using the long-duration spectrogram as a "feature set" to make inferences about diurnal activity. But even without automation it's still really neat.

Anyway back to the thematic listings...

  • Trills in bird sounds are fascinating. These rapidly frequency-modulated sounds are often difficult and energetically costly to produce, and this seems to lead to them being used for specific functions.
    • Tereza Petruskova presented a poster of her work on tree pipits, arguing for different roles for the "loud trill" and the "soft trill" in their song.
    • Christina Masco spoke about trills in splendid fairywrens (cute-looking birds, those!). They can be used as a call but can also be included at the end of a song, which raises the question of why they're getting "co-opted" in this way. Christina argued that the good propagation properties of the trill could be a reason - there was some discussion about differential propagation and the "ranging hypothesis" etc.
    • Ole Larsen gave a nice opening talk about signal design for private vs public messages. It was all well-founded, though I quibbled with his comment that strongly frequency-modulated sounds would be for "private" communication because, if they cross multiple critical bands, they might not accumulate enough energy in a "temporal integration window" to trigger detection. This seems intuitively wrong to me (e.g.: sirens!) but I need to find some good literature to work this one through.
  • Hybridisation zones are interesting for studying birdsong, since they're zones where two species coexist and individuals of one species might or might not breed with individuals of the other. For birds, song recognition might play a part in whether this happens. It's quite a "strong" concept of perceptual similarity, to ask the question "Is that song similar enough to breed with?"!
    • Alex Kirschel showed evidence from a suboscine (and so not a vocal learner) which in some parts of Africa seems to hybridise and in some parts seems not to - and there could be some interplay with the similarity of the two species' song in that region.
    • Irina Marova also talked about hybridisation, in songbirds, but I failed to make a note of what she said!
  • Duetting in birdsong was discussed by a few people, including Pedro Diniz and Tomasz Osiejuk. Michal Budka argued that his playback studies with Chubb's cisticola showed they use duetting for territory defence and signalling commitment but not for "mate-guarding".
    • Oh and before the conference, I was really taken by the duetting song of the grey treepie, a bird we heard up in the Himalayan hills. Check it out if you can!

As usual, my apologies to anyone I've misrepresented. IBAC has long days and lots of short talks (often 15 minutes), so it can all be a bit of a whirlwind! Also of course this is just a terribly partial list.

A postscript, for joining-the-dots purposes: I'll link back to my previous blog about IBAC 2015 in Murnau, Germany.

| science |

Food I found in India

I just spent a couple of weeks in northern India (to attend IBAC). Did I have some great food? Of course I did.

No fancy gastro stuff, just tried the local cuisine. Funny thing is, as an English person who's lived in East London for more than a decade, I already knew plenty about the food! Or at least I knew more than the other international delegates did. The Indian names, the difference between naan and chapatti and poori, for example, or what a palak paneer is.

Credit's got to go to Sarab Sethi for showing me his favourite street food when we were in downtown Haridwar - aloo tiki, which is fried potatoes and yogurt with sweet and savoury sauces. Kind of like an Indian equivalent of chips with mayo+ketchup, maybe... or maybe that doesn't do it justice! And thanks Phil for the photo:

In the north India region, two staples of the local cuisine seem to be dhal makhani (buttery thick lentils) and paneer (cheese) curry, which we were served often. To be honest, though, those are very rich, too much to eat every day. On quite a few days I went for a lovely dosa instead - a crispy curry-filled pancake, which is from south India originally, and also not an everyday meal but I do like it.

I did have some top dishes locally though. At a dhaba (a roadside caff) we had some excellent-tasting chana (chickpeas) - no idea how they were flavoured so well, but they were. The curried brinjal (aubergine) dishes were good too, nice and fresh-tasting.

In Delhi (at the start of the trip) I found a well-reputed eatery called Khaki di Hatti. They do massive naan breads - seriously massive, I recommend you agree to order the "baby" one when they suggest it! - and I had a great spinach kofte dish there. It was kofte ("meatballs", you might say, though vegetarian in this case) made of breadcrumbs and who-knows-what-else, in a lovely savoury spinach sauce.


Eating veggie is easy in this region, since so many people are veggie. But then so is eating non-veggie. Lots of places are "pure veg" and in most other places it's really clearly labelled.

| food |

A pea soup

A nice fresh pea soup can be great sometimes, and also a good thing to do with leftovers. This worked well for me when I had some leftover spring onions, creme fraiche and wasabi. You can of course leave out the wasabi, or swap the creme fraiche for cream or a dab of milk, or you could add watercress perhaps.

  • A small knob of butter
  • 4 spring onions
  • 3-4 handfuls frozen peas (no need to defrost them!)
  • A dab of wasabi paste
  • About 75ml creme fraiche
  • Black pepper

Boil a kettle.

In a smallish pan melt the butter. Chop the spring onions, and fry the white bits gently to soften them, about 4 minutes. Then add the green bits of the spring onions, as well as the peas and the tiny dab of wasabi.

Turn up the heat and also add the boiling water, just enough to cover things. Once you've brought the pan to the boil you can turn it right down low, put a lid on it, and let it bubble gently for approx 10 minutes, no need for more.

Take the pan off the heat, and with a hand blender you can whizz up the pan's contents to blend it to a smooth soup. Add the black pepper and creme fraiche and stir it through.

| recipes |

Best veggieburgers in East London

Veggieburgers have come a long way - and London's vegetarian scene has quite a lot going on in the burger department! - so I wanted to try a few of them. I particularly wanted to work out, for my own cooking, what's a good strategy for making a veggieburger that is hearty, tasty, and, well, frankly, worth ordering. For me, your old-fashioned beanburger isn't going to do it.

So here's my list of top veggieburgers in London so far, mostly in East London. It's not an exhaustive survey, and I might update it, but I wanted to get at least 5 really solid recommendations and then write this up. I'm surprised to find that most of them are not only vegetarian but vegan. Here's the top 5! You should definitely eat these:

  1. Arancini (Dalston) "vegan burger" - Really tasty and substantial fast food, made of their risotto stuff. A good whack of umami from the aubergine sauce, and good chips too. Highly recommended burger. As this other blogger puts it: "This is a burger where all the elements are nicely thought through - good bun, fresh salad, lots of good chutney [and the burger is] one note in a well-played burger orchestra."
  2. Mildred's (Camden and elsewhere - Dalston coming soon) "smoked tofu, lentil, piquillo pepper burger" - A great burger with plenty of bite to it, nice smoky burgery flavour. I think the tofu provides the heft while the lentils add texture. It was a little bit dry so I had to add tomato ketchup (nowt wrong with that), and with that in place it was the full package. Just look at it!
  3. Eat17 (Homerton): This was a curious place, housed in the corner of a Spar supermarket, and also serving the Castle cinema upstairs. Anyway they gave me a nice black bean + quinoa burger, with a good crispy exterior. Presumably it's got some beetroot in it too, given the rich purpley colour. The flavour of the burger itself could be a bit heavier, and the chips were too salty as always, but otherwise a very good showing, especially coming from a non-veggie place.
    (BTW I think this is the only non-vegan burger in the top five.)
  4. Black Cat Cafe (Hackney): "beefish burger" I think they called their vegan burger, but actually to me it had a pleasing chorizo-like taste. The burger is firm and hefty (good), the chips are good, and their vegan mayo tastes great.
  5. Vx (Kings Cross): a tasty and filling cheezburger, near to Kings Cross station - handy! They're not trying to be fancy-fancy - in fact this burger has quite a McDonaldsy taste to it, except that the burger patty (made of seitan) is more generous/filling than a McD. Good stuff.

...so that's the top list so far. Plenty more to try. Here's something that surprised me: although there are plenty of nouveaux popup veggie burger stalls popping up and down all around, those aren't where I seem to find the best burgers. They get the twitter hype but they don't seem to have got their recipes to match the hype. The list I just gave you is mostly well-established places (and two of them are omni).

-

Some other burgers I tried:

  • Mooshies (Brick Lane) - I had the "where's the beef" burger, which is made of black bean and quinoa, plus all the trimmings. I was disappointed that the burger pattie had not much bite to it - it was more like "vegan mince" than "vegan burger", splodging everywhere at the slightest provocation. The flavour was nice, the black beans providing a dark enough flavour (more beef-like than a nut-burger or a veg-burger) and the quinoa provided a good bit of structured feel on the tongue. So I think black-bean-and-quinoa is a good idea, but you really need to put those together with something that'll make a good firm patty. Flavour-wise, the mayo left me with an overly vinegary aftertaste, so I hope they do something about that too.

    On a second visit we had the pulled-pork bbq jackfruit which was fine (but my recipe's better ;), and the onion bhaji burger which was really nice - crispy and full of flavour.

  • Greedy Cow (Mile End) - nice fresh-tasting vegetable burger with a nice crispy exterior to the patty. Definitely tasted like they care about their veggieburger. I liked it and for an omni place it's very good indeed, but it's not up in the top league since I'm more interested in the more "meaty" angle on a veggieburger.

  • The Hive Wellbeing (Bethnal Green) - The burger pattie was oddly small, about 1/2 the size of the bun, which was silly. The pattie itself had a lovely clear fresh pumpkin-seed flavour and texture (it's made of mushroom and courgette too). Nice chutney underneath. The flavours overall are savoury and sharp (also from the mustard mayo too) - good, though potentially not for everyone. On a second visit, I found again that the burger was made of good stuff but was kinda awkwardly put together - proportions a little bit off - maybe it's a good recipe, inattentively prepared? It seems to me it could be in the running to be the best, if it were given a bit more care.

  • Vurgers (popup) - I had the "BBQ nut" one, because it looked likely to be the most substantial. It was a decent meal and hefty enough, but way way overseasoned - so much sugar, so much salt, so much acid. I know that "BBQ" largely implies they'll overdo those things, but there's supposed to be more than those blunt notes. The burger was filling and decent but didn't have much bite - it kept its shape through inertia rather than strength, if you get what I mean. Certainly good enough to mention, but not getting near the top, and served in a poncey-seeming location with price to match.

| food |

Review of "Living Together: Mind and Machine Intelligence" by Neil Lawrence

In the early twentieth century when the equations of quantum physics were born, physicists found themselves in a difficult position. They needed to interpret what the quantum equations meant in terms of their real-world consequences, and yet they were faced with paradoxes such as wave-particle duality and "spooky action at a distance". They turned to philosophy and developed new metaphysics of their own. Thought-experiments such as Schrodinger's cat, originally intended to highlight the absurdity of the standard "Copenhagen interpretation", became standard teaching examples.

In the twenty-first century, researchers in artificial intelligence (AI) and machine learning (ML) find themselves in a roughly analogous position. There has been a sudden step-change in the abilities of machine learning systems, and the dream of AI (which had been put on ice after the initial enthusiasm of the 1960s turned out to be premature) has been reinvigorated - while at the same time, the deep and widespread industrial application of ML means that whatever advances are made, their effects will be felt. There's a new urgency to long-standing philosophical questions about minds, machines and society.

So I was glad to see that Neil Lawrence, an accomplished research leader in ML, published an article on these social implications. The article is "Living Together: Mind and Machine Intelligence". Lawrence makes a noble attempt to provide an objective basis for considering the differences between human and machine intelligences, and what those differences imply for the future place of machine intelligence in society.

In case you're not familiar with the arXiv website, I should point out that articles there are un-refereed; they haven't been through the peer-review process that guards the gate of standard scientific journals. And let me cut to the chase - I'm not sure which journal he was targeting with this paper, but if I were a reviewer I wouldn't have recommended acceptance. Lawrence's computer science is excellent, but here I find his philosophical arguments disappointing. Here's my review:

Embodiment? No: containment

A key difference between humans and machines, notes Lawrence, is that we humans - considered for the moment as abstract computational agents - have high computational capacity but a very limited bandwidth to communicate. We speak (or type) our thoughts, but really we're sharing the tiniest shard of the information we have computed, whereas modern computers can calculate quite a lot (not as much as a brain does) but can communicate with such high bandwidth that the results are essentially not "trapped" in the computer. For Lawrence this is a key difference, making the boundaries between machine intelligences much less pertinent than the boundaries between natural intelligences, and suggesting that future AI might not act as a lot of "agents" but as a unified subconscious.

Lawrence quantifies this difference as the numerical ratio between computational capacity and communicative bandwidth. Embarrassingly, he then names this ratio the "embodiment factor". The embodiment of cognition is an important idea in much modern thought-about-thought: essentially, "embodiment" is the rejection of the idea that my cognition can really be considered as an abstract computational process separate from my body. There are many ways we can see this: my cognition is non-trivially affected by whether or not I have hay-fever symptoms today; it's affected by the limited amount of energy I have, and the fact I must find food and shelter to keep that energy topped up; it's affected by whether I've written the letter "g" on my hand (or is it a "9"? oh well); it's affected by whether I have an abacus to hand; it's affected by whether or not I can fly, and thus whether in my experience it's useful to think about geography as two-dimensional or three-dimensional. (For a recent perspective on extended cognition in animals see the thoughts of a spiderweb.) I don't claim to be an expert on embodied cognition. But given the rich cognitive affordances that embodiment clearly offers, it's terribly embarrassing and a little revealing that Lawrence chooses to reduce it to the mere notion of being "locked in" (his phrase) with constraints on our ability to communicate.

Lawrence's ratio could perhaps be useful, so to defuse the unfortunate trivial reduction of embodiment, I would like to rename it "containment factor". He uses it to argue that while humans can be considered as individual intelligent agents, for computer intelligences the boundaries dissolve and they can be considered more as a single mass. But it's clear that containment is far from sufficient in itself: natural intelligences are not the only things whose computation is not matched by their communication. Otherwise we would have to consider an air-gapped laptop as an intelligent agent, but not an ordinary laptop.

Agents have their own goals and ambitions

The argument that the boundaries between AI agents dissolve also rests on another problem. In discussing communication Lawrence focusses too heavily on 'altruistic' or 'honest' communication: transparent communication between agents that are collaborating to mutually improve their picture of the world. This focus leads him to neglect the fact that communicating entities often have differing goals, and often have reason to be biased or even deceitful in the information shared.

The tension between communication and individual aims has been analysed in a long line of thought in evolutionary biology under the name of signalling theory - for example, the conditions under which "honest signalling" is beneficial to the signaller. It's important to remember that the different agents each have their own contexts, their own internal states/traits (maybe one is low on energy reserves, and another is not) which affect communicative goals even if the overall avowed aim is common.

In Lawrence's description the focus on honest communication leads him to claim that "if an entity's ability to communicate is high [...] then that entity is arguably no longer distinct from those which it is sharing with" (p3). This is a direct consequence of Lawrence's elision: it can only be "no longer distinct" if it has no distinct internal traits, states, or goals. The elision of this aspect recurs throughout, e.g. "communication reduces to a reconciliation of plot lines among us" (p5).

Unfortunately the implausible unification of AI into a single morass is a key plank of the ontology that Lawrence wants to develop, and also key to the societal consequences he draws.

There is no System Zero

Lawrence considers some notions of human cognition including the idea of "system 1 and system 2" thinking, and proposes that the mass of machine intelligence potentially forms a new "System Zero" whose essentially unconscious reasoning forms a new stratum of our cognition. The argument goes that this stratum has a strong influence on our thought and behaviour, and that the implications of this on society could be dramatic. This concept has an appeal of neatness but it falls down too easily. There is no System Zero, and Lawrence's conceptual starting-point in communication bandwidth shows us why:

  • Firstly, the morass of machine intelligence has no high-bandwidth connection to System 1 or to System 2. The reason we talk of "System 1 and System 2" coexisting in the same agent is that they're deeply and richly connected in our cognition. (BTW I don't attribute any special status to "System 1 and System 2", they're just heuristics for thinking about thinking - that doesn't really matter here.) Lawrence's own argument about the poverty of communication channels such as speech also goes for our reception of information. However intelligent, unified or indeed devious AI becomes, it communicates with humans through narrow channels such as adverts, notifications on your smartphone, or selecting items to show to you. The "wall" between ourselves as agents and AI will be significant for a long time.
    • Direct brain-computer interfacing is a potential counterargument here, and if that technology were to develop significantly then it is true that our cognition could gain a high-bandwidth interface. I remain sceptical that such potential will be non-trivially realised in my lifetime. And if it does come to pass, it would dissolve human-human bottlenecks as much as human-computer bottlenecks, so in either case Lawrence's ontology does not stand.
  • Secondly AI/ML technologies are not unified. There's no one entity connecting them all together, endowing them with the same objective. Do you really think that Google and Facebook, Europe and China, will pool their machine intelligences together, allowing unbounded and unguarded communication? No. And so irrespective of how high the bandwidth is within and between these silos, they each act as corporate agents, with some degrees of collusion and mutual inference, sure, but they do not unify into an underlying substrate of our intelligence. This disunification highlights the true ontology: these agents sit relative to us as agents - powerful, information-rich and potentially dangerous agents, but then so are some humans.

Disturbingly, Lawrence claims "System Zero is already aligned with our goals". This starts from a useful observation - that many commercial processes such as personalised advertising work because they attempt to align with our subconscious desires and biases. But again it elides too much. In reality, such processes are aligned not with our goals but with the goals of powerful social elites, large companies etc, and if they are aligned with our "system 1" goals then that is a contingent matter.

Importantly, the control of these processes is largely not democratic but controlled commercially or via might-makes-right. Therefore even if AI/ML does align with some people's desires, it will preferentially align with the desires of those with cash to spend.

We need models; machines might not

On a positive note: Lawrence argues that our limited communication bandwidth shapes our intelligence in a particular way: it makes it crucial for us to maintain "models" of others, so that we can infer their internal state (as well as our own) from their behaviour and their signalling. He argues that conversely, many ML systems do not need such structured models - they simply crunch on enough data and they are able to predict our behaviour pretty well. This distinction seems to me to mark a genuine difference between natural intelligence and AI, at least according to the current state of the art in ML.

He does go a little too far in this as well, though. He argues that our reliance on a "model" of our own behaviour implies that we need to believe that our modelled self is in control - in Freudian terms, we could say he is arguing that the existence of the ego necessitates its own illusion that it controls the id. The argument goes that if the self-model knew it was not in control,

"when asked to suggest how events might pan out, the self model would always answer with "I don't know, it would depend on the precise circumstances"."

This argument is shockingly shallow coming from a scientist with a rich history of probabilistic machine learning, who knows perfectly well how machines and natural agents can make informed predictions in uncertain circumstances!

I also find unsatisfactory the eagerness with which various dualisms are mapped onto one another. The most awkward is the mapping of "self-model vs self" onto Cartesian dualism (mind vs body); this mapping is a strong claim and needs to be argued for rather than asserted. It would also need to account for why such mind-body dualism is not a universal, across history nor across cultures.

However, Lawrence is correct to argue that "sentience" of AI/ML is not the overriding concern in its role in our society; rather, its alignment or otherwise with our personal and collective goals, and its potential to undermine human democratic agency, is the prime issue of concern. This is a philosophical and a political issue, and one on which our discussion should continue.

| science |

Sneak preview: papers in special sessions on bioacoustics and machine listening

This season, I'm lead organiser for two special conference sessions on machine listening for bird/animal sound: EUSIPCO 2017 in Kos, Greece, and IBAC 2017 in Haridwar, India. I'm very happy to see the diverse selection of work that has been accepted for presentation - the diversity of the research itself …

| science |

Jeremy Corbyn has already won

I'm writing this on the morning of the day of voting for the 2015 election.

Opinion polls are notorious here in the UK for having a complex relationship with reality. What I expect will happen is that the Tories will win but with an embarrassingly modest lead. From the last …

| politics |

Computing for the future of the planet: the digital commons

On Wednesday we had a "flagship seminar" from Prof Andy Hopper on Computing for the future of the planet. How can computing help in the quest for sustainability of the planet and humanity?

Lots of food for thought in the talk. I was surprised to come out with a completely …

| IT |

Roast squash, halloumi and pine nuts with asparagus

This was gorgeous. I hadn't realised that the sweet butternut and the salty halloumi would play so well off each other.

Serves 2, takes 45 minutes overall but with a big gap in the middle.

  • 1/2 a butternut squash
  • 1 sprig rosemary
  • 2 cloves garlic
  • olive oil
  • a generous …
| recipes |
