
The H-index cannot be relied on for recruitment

The H-index is one of my favourite publication statistics. It's really simple to define: a person's H-index is the largest number H such that they have H publications each cited at least H times. It's robust to outliers: if you have a million publications with no citations, or one publication with a million citations, this doesn't influence the outcome much - it's the "core" of your H most-cited publications that matters. This makes it quite a nice heuristic for the academic impact of a body of work. A common source of the H-index is Google Scholar, which automatically calculates it for each scholar who has an account, and influential academics with long publication records typically have a high H-index.
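As a concrete illustration, the definition boils down to a few lines of code (my own sketch, not any official implementation):

```python
def h_index(citations):
    """Largest h such that h publications have at least h citations each."""
    cites = sorted(citations, reverse=True)  # most-cited first
    h = 0
    for rank, count in enumerate(cites, start=1):
        if count >= rank:
            h = rank  # this paper is inside the "core"
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # -> 4
print(h_index([1_000_000]))       # one blockbuster paper -> 1
```

Note how little the outliers move it: a single million-citation paper still only contributes an H of 1.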

However, the H-index should not be used as a primary measure for evaluating academics, e.g. for recruitment or promotion.

Why?

The main reason is it's straightforward, in fact almost trivial, to manipulate your own H-index. You can make it artificially high.

Google Scholar doesn't exclude self-citations from its counting. It even counts self-citations in preprints, so the citations might not even be peer-reviewed. You could chuck a handful of hastily-written preprints into arXiv just before you apply for a job. (Should Google exclude self-citations? Yes, in my opinion: it's trivially easy given that they have ground truth of which academic "owns" which paper. However, that wouldn't remove the vulnerability, because pairs of authors could go one step further and conspire to cross-cite each other, and so on.) Self-citations are often valid, but they're also often used by academics to promote their own previous papers, so it's a grey area.

Google Scholar often automatically adds papers to a person's profile, using text matching to guess if the author matches. I've seen real examples in which an academic's profile included extremely highly-cited papers... that were not by them. In fact they were from completely different research topics! Google's text-matching isn't perfect, and like most text-matching it often has a problem with working out which names are actually the same author.

You can further manipulate your H-index by choosing how to publish: you can divide research outputs into multiple smaller publications rather than single integrated papers.

Or you can do that after the fact, by tweaking your options in Google about whether two particular publications should be merged into one record or not. (Google has this option, since it often picks up two slightly-different versions of the same publication.)
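Here's a toy numeric illustration of the splitting effect (the citation counts are invented purely for illustration, and it assumes each split-off paper keeps a share of the citers):

```python
def h_index(citations):
    """Largest h such that h publications have at least h citations each."""
    cites = sorted(citations, reverse=True)
    return max((rank for rank, c in enumerate(cites, 1) if c >= rank), default=0)

# One integrated paper cited 9 times contributes at most 1 to H...
print(h_index([9, 1, 1]))        # -> 1
# ...but split into three papers that each attract 3 citations:
print(h_index([3, 3, 3, 1, 1]))  # -> 3
```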

Most of the vulnerabilities I've listed relate to Google's chosen way of implementing the H-index; however, at least some of them will apply however it is counted.

The H-index is a heuristic. It's OK to look at it as a quick superficial statistic, or even to use it as part of a general assessment making use of other stats and other evidence. But I'm increasingly seeing academic job adverts that say "please submit your Google Scholar H-index". This should not be done: it sends a public signal that this number is considered potentially decisive for recruitment (which it shouldn't be), creating a strong incentive to game the value. It also entrenches a monopoly position for a private company, demanding that academics create Google accounts in order to be eligible for a job. Academia is too important to have single points of failure centred on single companies (witness the recent debates around Elsevier!).

When trying to sift a large pile of applications, people like to have simple heuristics to help them make a start. That's understandable. It's naive to think that one's opinion isn't influenced by the first-pass heuristics - and so it's vital that you use heuristics that aren't so trivially gameable.

| science |

New journal article: Automatic acoustic identification of individuals in multiple species

New journal article from us!

"Automatic acoustic identification of individuals in multiple species: improving identification across recording conditions" - a collaboration published in the Journal of the Royal Society Interface.

For machine learning, the main takeaway is that data augmentation is not just a way to create bigger training sets: used judiciously, it can mitigate the effect of confounds in the training data. It can also be used at test time to check a classifier's robustness.
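As a sketch of the test-time idea (not the method from the paper - the noise augmentation, toy classifier and function names here are all my own invention for illustration):

```python
import random

random.seed(0)

def augment(x, noise_level=0.05):
    """One simple augmentation: add a little Gaussian noise to the signal."""
    return [xi + random.gauss(0.0, noise_level) for xi in x]

def robustness_check(classify, x, n_trials=20):
    """Fraction of augmented copies on which the prediction is unchanged."""
    base = classify(x)
    agree = sum(classify(augment(x)) == base for _ in range(n_trials))
    return agree / n_trials

# Toy classifier: thresholds the mean signal energy.
classify = lambda x: int(sum(v * v for v in x) / len(x) > 0.5)
x = [random.gauss(0.0, 1.0) for _ in range(1000)]  # energy well above 0.5
print(robustness_check(classify, x))
```

A classifier whose predictions flip under mild, label-preserving perturbations is likely leaning on something fragile in the data.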

For bioacoustics, the main takeaway is that previous automatic acoustic individual ID studies may have been overconfident in their claimed accuracy, due to dataset confounds - and we provide methods to try and quantify such issues, even without gathering new data.

This journal article is the output of a nice collaboration we've been working on, to try and bring machine learning closer to solving the problems zoologists really need solved. It's been very pleasant working on these ideas with Pavel Linhart and Tereza Petrusková (I didn't actually meet Martin Šálek!). The problem of detecting individual animals' vocal signatures is not yet solved, but I hope this paper nudges us part of the way there, and helps the field get there more efficiently through careful use of audio datasets.

| science |

Where are the solar panels in Britain?

Where are all the solar panels in Britain? Are they in the south? The sunny east? The countryside, the city?

The UK's energy regulator, Ofgem, publishes open data about the solar PV installations that it knows about. In the latest "feed-in tariff" (FiT) data, there are about 800,000 of them. The "installed capacity" adds up to about 4.9 gigawatts, about half of which comes from big industrial field-scale installations and half from domestic rooftop solar.

It would be handy to know where the solar panels are - for example, if you're searching for solar panels to map...

For privacy purposes, Ofgem don't publish exact locations, nor unique IDs, in their big spreadsheet. So the data aren't perfect for mapping, but they do give us the postcode district for 90% of these 800 thousand. So, using that postcode info, I've taken their data and simply plotted them on a choropleth. Let's take a look!
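The aggregation step is simple: group the rows by postcode district and tally counts and capacity. A minimal stdlib sketch (the column names and values here are made up - Ofgem's actual spreadsheet headers differ):

```python
import csv
import io
from collections import Counter, defaultdict

# A stand-in for the FiT spreadsheet; hypothetical headers and rows.
raw = """postcode_district,installed_capacity_kw
NG7,3.5
NG7,4.0
E3,250.0
Unknown,3.0
"""

counts = Counter()
capacity = defaultdict(float)
for row in csv.DictReader(io.StringIO(raw)):
    counts[row["postcode_district"]] += 1
    capacity[row["postcode_district"]] += float(row["installed_capacity_kw"])

print(counts["NG7"], capacity["NG7"])  # 2 installations totalling 7.5 kW
```

Each district's totals can then be joined to district boundary polygons to colour the choropleth.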

Before you look, please note that I'm plotting the raw numbers per postcode district, and NOT normalising the data to account for the size of the district. This partly explains why the plots look "dark" in regions (such as London) that are chopped up into lots of small districts. Smaller districts should have fewer things in them... but on the other hand, smaller districts are supposed to equate to a higher density of households, so maybe the postcode district is a good unit of analysis after all.

Here are the plots - three plots showing, respectively, the raw number of installations per district, the total installed capacity in each district, and finally to get an idea of household density I also plot the number of households there are in each district according to census data:

And here's a CSV spreadsheet of the summary FiT numbers I used to plot these. Sorry for not showing (Northern) Ireland - it's not in the data I found.

(The CSV and the images are all derived from Ofgem's FiT data which are published under the Open Government Licence.)

Note that there are A LOT of caveats about this data. About 10% of the solar installations (80 thousand!) have their postcode district listed as "unknown". Also, some postcodes are allegedly not quite right (e.g. some of them are the postcode of the person who registered, not the location of the installation itself). Some of the installations listed might have been discontinued, and we don't really have any way of knowing. Oh, and... the postcode area data I'm using seems to have some omissions, hence the occasional white gap in Britain. But notwithstanding all that, this gives us some indication of the distribution.

One thing that pops out to me is that these three plots don't seem very correlated. I'd have expected them all to be highly correlated. For some reason there seems to be a relatively high number of small-capacity installations stretching from Yorkshire down into Essex. There's plenty of regional variation and clustering, which may be due to geographical/weather differences, or perhaps to local initiatives.

| openstreetmap |

Spotting solar panels in London

Jack had this great idea to find the locations of solar panels and add them to OpenStreetMap. (Why's that useful? He can explain: Solar PV is the single biggest source of uncertainty in the National Grid's forecasts.)

I think we can do this :) The OpenStreetMap community have done lots of similar things, such as the humanitarian mapping work we do, collaboratively adding buildings and roads for unmapped developing countries. Also, some people in France seem to have done a great job of mapping their power network (info here in French). But how easy or fast would it be for us to manually search the globe for solar panels?

(You might be thinking "automate it!" Yes, sure - I work with machine learning in my day job - but it's a difficult task even for machine learning to get to the high accuracy needed. 99% accurate is not accurate enough, because that equates to a massive number of errors at global scale, and no-one's even claiming 99% accuracy yet for tasks like this. For the time being we definitely need manual mapping or at least manual verification.)
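A quick back-of-envelope check of that claim (the rooftop count below is my own illustrative assumption, not a measured figure):

```python
# Illustrative only: suppose a detector scans this many building rooftops
# worldwide, at "99% accuracy", i.e. a 1% error rate.
n_rooftops = 300_000_000
error_rate = 0.01
errors = int(n_rooftops * error_rate)
print(errors)  # -> 3000000 misclassified rooftops
```

Millions of wrong detections would swamp the genuine ones, which is why manual verification still matters.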

(Oh, or you might be thinking "surely someone officially has this data already?" Well you'd be surprised - some of it is held privately in one database or other, but not with substantial coverage, and certainly almost none of it has good geolocation coordinates, which you need if you're going to predict which hours the sun shines on each panel. Even official planning application data can be out by kilometres, sometimes.)

Solar panel aerial image examples

Jerry (also known as "SK53" on OSM) has had a look into it in Nottingham - he mapped a few hundred (!) solar panels already. He's written a great blog article about it.

This weekend here in London a couple of us thought we'd have a little dabble at it ourselves. We assumed that the aerial imagery must be at least as good as in Nottingham (because that's what London people think about everything ;) so we had a quick skim. Now, the main imagery used in OSM is provided by Bing, and unfortunately our area doesn't look anywhere near as crisp as in Nottingham.

We also went out and about (not systematically) and noticed some solar panels here and there, so we've a bit of ground truth to put alongside the aerial imagery. Here I'm going to show a handful of examples, using the standard aerial imagery. The main purpose is to get an idea of the trickiness of the task, especially with the idea of mapping purely from aerial imagery.

It took quite a lot of searching in aerial imagery to find any hits. Within about 30 minutes we'd managed to find three. Often we were unsure, because the distinction between solar panels, rooftop windows or other rooftop infrastructure is hard to spot unless you've got crystal-clear imagery. We swapped back and forth with various imagery sources, but none of the ones we had available by default gave us much boost.

While walking around town we saw a couple more. In the following image (of this location), the building "A" had some stood-up solar panels we saw from the ground; it also looks like "B" had some roof-mounted panels too, but we didn't spot them from the ground, because they don't stick up much.

Solar example

Finally this picture quite neatly puts 3 different examples right next to each other in one location. At first we saw a few solar panels mounted flush on someone's sloping roof ("A"), and you can see those on the aerial - though my certainty comes from having seen them in real life! Then next to it we saw some stood-up solar panels on a newbuild block at "B", though you can't actually see it in the imagery because the newbuild is too new for all the aerials we had access to. Then next to that at "C" there definitely looks to be some solar there in the aerials, though we couldn't see that from the ground.

Solar example

Our tentative learnings so far:

  • We will need to use a combination of aerial mapping and on-the-ground "solar spotting" from people.
  • Whether on-the-ground or aerial, it's often hard to get a clear idea of the size of the installation. Failing that, maybe it's fine to map them as points rather than areas. People can come along later and tell us the actual power ratings, efficiencies etc.
  • We will need to make the most of heuristics about where to find solar panels. For example Jerry notes that social housing is relatively likely to install solar, and I've noticed it on plenty of schools too; there may also be vendor/official lists (e.g. planning applications) out there - we'll need multiple sources to get a well-rounded coverage.

See Jerry's blog for more learnings.

There are plenty of virtuous feedback loops in here: the more we do as a community, the better we'll get (both humans and machines) at finding the solar panels and spotting the gaps in our data.

| openstreetmap |

Suggested reading: getting going with deep learning

Based on a conversation we had in the Machine Listening Lab last week, here are some blogs and other things you can read when you're - say - a new PhD student who wants to get started with applying/understanding deep learning. We can recommend plenty of textbooks too, but here it's mainly blogs and other informal introductions. Our recommended reading:

PRACTICAL:

ADVANCED:

| science |

Favourite audio/science/misc software to install on Linux

I was setting up a new laptop recently. If you're not familiar with Linux you probably don't know how amazing the ecosystem of free software is, available almost instantly. Yes, sure, the software is free, but what's actually impressive is how well it all stitches together through "package managers". I use Ubuntu (based on Debian), and Debian provides this amazing jiu-jitsu wherein you can just type

sudo apt install sonic-visualiser

and hey presto, you get Sonic Visualiser nicely installed and ready to go.

So what that means for me is that when I'm setting up a new computer, I don't need to go running around clicking on a million websites, clicking through download links and licence agreements. I can just copy over the list of all my favourite software packages, and apt will install them for me in just a few steps.
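That reinstall step can be sketched in a few lines (my own sketch; "packages.txt" is a hypothetical filename for a saved list like the one below, inline comments included):

```python
import pathlib
import shlex

# A saved package list, in the same comment-annotated style as this post.
pathlib.Path("packages.txt").write_text(
    "git                # version control\n"
    "\n"
    "sonic-visualiser   # audio analysis\n"
)

def read_packages(path):
    """Package names from the list, ignoring blank lines and '#' comments."""
    names = []
    for line in pathlib.Path(path).read_text().splitlines():
        name = line.split("#", 1)[0].strip()
        if name:
            names.append(name)
    return names

pkgs = read_packages("packages.txt")
# Build the one-shot install command (quote names defensively):
print("sudo apt install " + " ".join(shlex.quote(p) for p in pkgs))
```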

For whatever reason - for my own recollection, at least - here's a list of lots of great packages I tend to install on my desktop/laptop. General useful stuff, plus things that an audio hacker, Python machine-learning developer, and computer science academic might use. I'll add some comments to highlight notable things:

# file sharing, synchronisation
syncthing          # for fabulous dropbox-without-dropbox file synchronising
syncthing-gtk
git
transmission-gtk

# graphics/photo editing
cheese
darktable
gimp            # great for bitmap (e.g. photo) editing
imagemagick
inkscape        # great for vector graphics
openshot        # great for video editing

# for a nice desktop environment:
pcmanfm
gnome-tweak-tool
caffeine-indicator   # helps to pause screensaver etc when you need to watch a film, give a talk, etc
xcalib               # I use this to invert colours sometimes

# academic
jabref
r-base
texlive
texlive-latex-extra
texlive-bibtex-extra
texlive-fonts-extra
texlive-fonts-recommended
texlive-publishers
texlive-science
texlive-extra-utils  # for texcount (latex word counting)
graphviz
gnuplot
latexdiff          # Super-useful for comparing original text against the re-submission text...
poppler-utils      # PDF manipulation
psutils
bibtex2html
pandoc

# for python programming fun
jupyter-notebook
virtualenv
python-matplotlib
python-nose
python-numpy
python-pip
python-scipy
python-six
python-skimage
python3-numpy
cython
ipython
ipython3

# for music playback
mopidy
mopidy-local-sqlite
ncmpcpp
pavucontrol
paprefs
brasero
banshee
qjackctl
jack-tools
jackd2
mixxx
mplayer
vlc

# music/audio file manipulation
audacity
youtube-dl
ffmpeg
rubberband-cli
sndfile-tools
sonic-visualiser
sox
id3v2
vorbis-tools
lame
mencoder
mp3splt

# audio programming libraries
libsndfile1
libsndfile1-dev
libfftw3-dev
librubberband-dev
libvorbis-dev

# for blogging / websiting:
pelican
lftp

# office
simple-scan
ttf-ubuntu-font-family
thunderbird-locale-en-gb
orage
xul-ext-lightning  # alt calendar software

# misc programming stuff
ansible
ant
build-essential
ccache
cmake
cmake-curses-gui
debhelper
debianutils
default-jdk
default-jre
devscripts
git-buildpackage
vim-gtk

# system utilities
apparmor
apport
anacron
nmap
hfsprogs
printer-driver-hpijs
dconf-editor
chkrootkit
dmidecode
zip
zsh             # zsh is so much better than bash
gparted
htop
baobab
wireshark-qt
bzip2
curl
dnsutils
dos2unix
dvd+rw-tools
less
openssh-server
openvpn
screen
unrar
unzip
wget

| IT |

Veganuary 2019 - the results

For a cook, Veganuary was a really interesting challenge. A whole month of being vegan! Here are some things I learnt:

  1. How to make a chia egg - it's a replacement for egg white made of... crushed seeds. Probably the main party-trick I learnt, since I already knew some of the …
| food |

Vegan chorizo carbonara

OK, "vegan chorizo carbonara" - I think neither the Italians nor the Mexicans will forgive me for this one! But it's a veganuary experiment and I like it.

Thanks to veganuary I'm learning about chia egg, and here it really does work to provide the gloopy egg-like saucing. The chia also …

| recipes |

Black bean chorizo

I've been using "black bean chorizo" in my cooking for years. It's based on Hugh Fearnley-Whittingstall's "tupperware chorizo" recipe - it makes a densely-flavoured black bean paste, not as firm as real chorizo but with the same kind of flavour depth.

It keeps in the fridge for a long time (let's …

| recipes |

Veganuary - some vegan recipe tips

Going to try veganuary? We're going for it this year.

Here are some great recipes I made/found recently, all vegan. Maybe they'll help you to enjoy January extra-special:

  1. Pad thai - do it Vegan Black Metal Chef style! (We used peanut butter instead of peanuts which makes it easier to …
| food |
