Inspired by this version
This is easy to make. The tricky thing though is "pulling" the jackfruit at the end of the cooking - it's a bit laborious, but it makes a massive difference to the texture of the finished dish. Please don't skip it! You could ask your guests to pull their own platefuls if that works.
Serves 3-4. (It keeps fine in the fridge and tastes good next day...)
- 1 tin young (green) jackfruit (drained)
- The dry spices:
- 1 tbsp cayenne pepper
- 1 tbsp paprika
- 1 tbsp cumin
- 1 onion (diced)
- 2 cloves garlic (finely chopped, or pressed)
- 1/2 or 1 red chilli, according to preference (thinly sliced)
- 1 box brown button mushrooms (or whatever mushrooms you prefer) (chopped into chunks)
- 1 red pepper (sliced)
- 1 fairly large carrot (diced)
- 1 packet chopped tomatoes
- 1 splash red wine
- 1 packet kidney beans, drained
- 100g dark chocolate
- 1 small handful fresh coriander
Preheat a medium grill. Coat the jackfruit pieces in the three dried spices. Put them on kitchen foil on a baking tray, and put them under the grill for about 15 minutes, turning halfway through. This helps dry the jackfruit out.
Meanwhile, in a large deep pan, start the base of the stew: fry the diced onion (gently, don't let it burn at all) for about 5 minutes, then add the garlic and the chilli. Stir. Let them fry for a minute or so before adding the mushrooms on top and stirring again. Optionally you can let this cook a bit more to get a touch of colour on the mushrooms, but either way, add the carrots, stir, and then add the chopped tomatoes, the red wine, and a splash of hot water (enough to loosen it to an ordinary stew thickness).
This is going to bubble for a good half hour, on a medium-low heat, with the lid on (take the lid off near the end if it needs to thicken up). You can start the rice cooking perhaps, if that's what'll be accompanying it.
Meanwhile, optionally you can griddle the red pepper to get some colour on it. Or just add it to the stew directly.
When there's about 5 minutes left, take the lid off the stew, and add the chocolate broken into pieces. Let it melt and then stir it through. It should make the stew thicken up and become more of a dark brown colour.
The main final thing you need to do is "pull" the jackfruit. With a pair of forks - ideally strong ones! - grab each piece of jackfruit one by one with a fork, and with the second fork rake at it to make it come apart into stringy pieces. (Don't do this in a non-stick pan, you'll ruin it with the forks.)
Chop the coriander roughly and mix it in to the stew just before you serve it.
Serve with rice, crusty bread, slices of lime... as you like.
Aubergine makes a great simple vegetarian/vegan alternative to battered fish and chips. Cook it like this.
Serves 2. Takes 25 to 30 minutes.
- 2 parts plain flour (roughly 100g)
- 1 part cornflour (roughly 50g)
- 1/4 tsp salt
- 1 bottle beer (ideally a flavourful pale ale or bitter) - you won't need all of it
- 1 medium but fattish aubergine, or whatever you need to get two fillet-like pieces out of it
And to serve:
- Oven chips
- Mushy peas
Preheat the oven for the oven chips. (The aubergine will be going in too, later.)
Mix the flours and salt in a medium-sized bowl, and then start to pour the beer in, stirring with a fork to get everything combined and beat the lumps out. Try to use as little beer as possible to get the batter smooth - you want it to stay nice and thick so it'll form a thick coating.
Cut your aubergine(s) into big fillets. It'll depend on the size and shape of your aubergines, but for the medium-sized fattish ones I buy you can cut one in half lengthways and that gives two nice pieces, one for each person. But! You need nice big fillet-like pieces with both sides having flat white flesh exposed - so that the batter sticks better, and so that it's easier to fry. So if one side of a fillet is unblemished purple round skin, cut a thin slice off. You can discard that slice or you can keep it to batter+fry as scraps later.
Put the oven chips in the oven.
Heat a frying pan with a decent amount of oil for shallow-frying. Be a bit generous.
Dip the aubergine pieces in the batter, turning them around to coat them properly. Then immediately pop them into the hot oil.
Let them fry about 5 minutes on one side - don't move them around much, just let them fry to get a good coating. Just before you turn them over for the other side, take the leftover batter and pour a little bit on top of the aubergine, to replenish the raw batter on the uncooked side. Then you can turn the aubergine fillet over and fry the other side for 5 minutes.
When the aubergine fillets have fried to a nice golden crust on each side, take them out of the pan and put them on a baking tray, and pop them in the oven alongside the chips, to cook for another ten minutes. This will get the inside of the fillets nice and cooked and yielding.
During the last ten minutes, you can warm up the mushy peas or sort out whatever you want as accompaniment. You can also batter and fry those leftover pieces of aubergine. Or just fry some of the batter to make scraps.
Warning - the aubergine fillets retain heat really really well. Beware of burning your mouth when you tuck in to them!
I seem to know more and more vegetarians. (The official stats say vegetarianism is growing and growing in the UK, so it's no surprise.) Me too. And for some reason, I was wondering: what does everyone tend to eat for an ordinary weekday evening meal? Although it's nice to share recipes and maybe even food photos, that's a completely different thing from the mid-week everyday cooking.
So what do people have on a normal Wednesday...? Well. I asked them all. This Wednesday. Here are the answers I got:
- "Lemon and spinach risotto"
- "Pasta with courgettes, toasted pine nuts and goats cheese. Then I had some strawberries and blueberries. Then I had a pepsi max and some Milka because I'm a fatty!"
- "Golden beets + lettuce in lemon vinaigrette, cauliflower rice, grenaille potatoes, olive oil-based chocolate cake"
- "Soba noodles"
- "Runner bean and aubergine Keralan curry"
- "Pad thai! Home made. No fish sauce. Ridiculously delicious."
- "Cereal (sorry! my bad!)"
- "Phoned in a curry :)"
- "Pizza with my vegan cheese"
- "Veg casserole and dumplings"
- "Griddled halloumi, fried red onion, couscous and salad"
- "One-pot gnocchi with halloumi, peppers, courgette, leeks and tomatoes & side salad"
- "Quorn chicken & leek pie, roast potatoes, peas & sweetcorn"
- "Veggie moussaka made with Quorn mince and aubergine"
- "Homemade patatas bravas with salsa and sour cream and tomato & cucumber salad with croutons"
- "Soup made from beans, tomatoes, carrots, swede, and broccoli (read: contents of the veg drawer)"
- "Cheddar cheese sandwich, using sourdough spelt bread, rocket and cucumber, with a small amount of mayo"
- "Thai green curry (quorn and general veg) and soba noodles on the side because I had no rice"
- "Skyr with muesli (too busy to cook)"
- "Thai takeout over riced cauliflower instead of rice"
- "Lentil lasagne and spinach salad"
Mmmmmmmmmmmmmmmmmmmm niiiiiiiiiiiice. That all sounds lovely.
Have we learnt anything? Well, Quorn gets a decent showing (I notice that because I'm not into it myself), popping up 3 times versus halloumi's 2. At a rough guess, about half of the meals rely on cheese. I kinda think I rely too much on cheese for veggie cookery. But there's a pretty good spread in the types of protein and the types of carbs people have in their meals. Anyway. It all sounds very nice.
ICEI 2018 special session "Analysis of ecoacoustic recordings: detection, segmentation and classification" - full programme
I'm really pleased about the selection of presentations we have for our special session at ICEI 2018 in Jena (Germany), 24th-28th September. The session is chaired by Jérôme Sueur and me, and is titled "Analysis of ecoacoustic recordings: detection, segmentation and classification".
Our session is special session S1.2 in the programme and here's a list of the accepted talks:
- AUREAS: a tool for recognition of anuran vocalizations
William E. Gómez, Claudia V. Isaza, Sergio Gómez, Juan M. Daza and Carol Bedoya
- Content description of very-long-duration recordings of the environment
Michael Towsey, Aniek Roelofs, Yvonne Phillips, Anthony Truskinger and Paul Roe
- What male humpback whale song chorusing can and cannot tell us about their ecology: strengths and limitations of passive acoustic monitoring of a vocally active baleen whale
Anke Kügler and Marc Lammers
- Improving acoustic monitoring of biodiversity using deep learning-based source separation algorithms
Mao-Ning Tuanmu, Tzu-Hao Lin, Joe Chun-Chia Huang, Yu Tsao and Chia-Yun Lee
- Acoustic sensor networks and machine learning: scalable ecological data to advance evidence-based conservation
Matthew McKown and David Klein
- Extracting information on bat activities from long-term ultrasonic recordings through sound separation
Chia-Yun Lee, Tzu-Hao Lin and Mao-Ning Tuanmu
- Information retrieval from marine soundscape by using machine learning-based source separation
Tzu-Hao Lin, Tomonari Akamatsu, Yu Tsao and Katsunori Fujikura
- A Novel Set of Acoustic Features for the Categorization of Stridulatory Sounds in Beetles
Carol Bedoya, Eckehard Brockerhoff, Michael Hayes, Richard Hofstetter, Daniel Miller and Ximena Nelson
- Noise robust 2D bird localization via sound using microphone arrays
Daniel Gabriel, Ryosuke Kojima, Kotaro Hoshiba, Katsutoshi Itoyama, Kenji Nishida and Kazuhiro Nakadai
- Fine-scale observations of spatiotemporal dynamics and vocalization type of birdsongs using microphone arrays and unsupervised feature mapping
Reiji Suzuki, Shinji Sumitani, Naoaki Chiba, Shiho Matsubayashi, Takaya Arita, Kazuhiro Nakadai and Hiroshi Okuno
- Articulating citizen science, automatic classification and free web services for long-term acoustic monitoring: examples from bat monitoring schemes in France and UK
Yves Bas, Kevin Barre, Christian Kerbiriou, Jean-Francois Julien and Stuart Newson
We also have poster presentations on related topics:
- Towards truly automatic bird audio detection: an international challenge
- Assessing Ecosystem Change using Soundscape Analysis
Diana C. Duque-Montoya, Claudia Isaza and Juan M. Daza
- MatlabHTK: a simple interface for bioacoustic analyses using hidden Markov models
- MAAD, a rational unsupervised method to estimate diversity in ecoacoustic recordings
Juan Sebastian Ulloa, Thierry Aubin, Sylvain Haupert, Chloé Huetz, Diego Llusia, Charles Bouveyron and Jerome Sueur
- Underwater acoustic habitats: towards a toolkit to assess acoustic habitat quality
Irene Roca and Ilse Van Opzeeland
- Focus on geophony: what weather sounds can tell
Roberta Righini and Gianni Pavan
- Reverse Wavelet Interference Algorithm for Detection of Avian Species and Characterization of Biodiversity
Sajeev C Rajan, Athira K and Jaishanker R
- Automatic Bird Sound Detection: Logistic Regression Based Acoustic Occupancy Model
Yi-Chin Tseng, Bianca Eskelson and Kathy Martin
- A software detector for monitoring endangered common spadefoot toad populations
Guillaume Dutilleux and Charlotte Curé
- PylotWhale a python package for automatically annotating bioacoustic recordings
Maria Florencia Noriega Romero Vargas, Heike Vester and Marc Timme
You can register for the conference here - early discount until 15th Sep. See you there!
I've been trying out an e-ink reader for my academic work.
"e-ink" - these are greyscale, paper-like displays. You see the image by reflected light, almost the same way you read a printed page, not by emitted light like a TV/laptop screen. This should be better in lots of ways: easier on your eyes, low-power, and you can read outside. The low power comes because it doesn't need a full jolt of energy 50 times a second as an LED display does: if the image doesn't change, no power is needed, the image stays there for free.
Why for academic work? A LARGE portion of my everyday work consists of looking at academic article PDFs, scribbling on them, then giving/implementing feedback. This comes from students, collaborators, reviews for journals/conferences, and from editing my own work as I do it. Some people can do this kind of stuff directly on a laptop screen. I'm afraid I can't. It's less effective, less detailed when I do that - so, for years, I've been printing things out, scribbling notes on them, then throwing them away afterwards.
If an e-ink reader can replace all that, maybe that's a good thing for the environment?
Note: it takes a lot of resources to build an e-reader. At what point is it "better" to print thousands of pages of paper, versus manufacture one e-reader? I don't know.
You can't use any old e-reader for academic reviewing: it needs to be large enough to render an A4 PDF well (ideally, full A4 size), and needs some way of annotating. The one I'm trying has a stylus that you can use to scribble, and it works. Surprisingly good so far.
NOW THE NEXT STEP:
We sometimes have sunny days, you know. For some reason this often happens when we've a workshop or conference organised. "Why don't we have the session outside on the grass?" I'm tempted to say. The answer would be... because you can't really look at people's slideshow slides out on the grass. Pass a laptop around? Broadcast the slides to everyone's smartphones? Redraw everything from scratch on a flipchart? Meh.
What I'd like to see is an e-ink screen, large enough to host a seminar with. The resolution doesn't need to be all that high - certainly not as high as is needed for reviewing PDFs. It just needs to be big. It would be great if there was a stylus or some other way of scribbling on the screen too.
Most academic slides are not animated. So an e-ink type screen is much more suitable than an LED screen, and would use much much less power. (Ever noticed the amount of cooling needed for those LED advertising signs in the street? Crazy power consumption.)
This week we've been at the LVA-ICA 2018 conference, at the University of Surrey. A lot of papers presented on source separation. Here are some notes:
- Evrim Acar gave a great tutorial on tensor factorisation. Slides here
- Hiroshi Sawada described a nice extension of "joint diagonalisation", applying it in synchronised fashion across all frequency bands at once. He also illustrated well how this method reduces to some existing well-known methods, in certain limiting cases.
- Ryan Corey showed his work on helping smart-speaker devices (such as Alexa or whatever) to estimate the relative transfer function which helps with multi-microphone sound processing. He made use of the wake-up keywords that are used for such devices ("Hi Marvin" etc), taking advantage of the known content to estimate the RTF for "free" i.e. with no extra interaction. He DTW-aligned the spoken keyword against a dictionary, then used that to mask the recorded sound and estimate the RTF.
- Stefan Uhlich presented their (Sony group's) strongly-performing SiSEC sound separation method. Interestingly, they use a variant of DenseNet, as well as a BLSTM, to estimate a time-frequency mask. Stefan also said that once the estimates have been made, a crucial improvement was to re-estimate them by putting the estimated masks through a multichannel Wiener filtering stage.
- Ama Marina Kreme presented her new task of "phase inpainting" and methods to solve it - estimating a missing portion of phases in a spectrogram, when all of the magnitudes and some of the phases are known. I can see this being useful in post-processing of source separation outputs, though her application was in engine noise analysis with an industrial collaborator.
- Lucas Rencker presented some very nice ideas in "consistent dictionary learning" for signal declipping. Here, "consistent" means that the reconstruction should fill in the clipped regions in a way that matches the clipping - if some part of the signal was clipped at a maximum of X, then its reconstruction should take values greater than or equal to X there. Here's his Python code of the declipping method. Apparently the state of the art in this task is a method called "A-SPADE" by Kitic (2015). Pavel Zaviska presented an analysis of A-SPADE and S-SPADE, improving the latter but not beating A-SPADE.
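To make that "consistency" constraint concrete, here's a toy sketch (my own illustration, not from Lucas's code): given a signal clipped at a known level, a candidate reconstruction is consistent if it matches the observed samples exactly and sits at or beyond the clip level wherever clipping occurred.

```python
import numpy as np

def is_consistent(reconstruction, clipped, clip_level):
    """Check the declipping consistency constraint: wherever the
    observed signal sits at the positive rail, the reconstruction
    must be >= clip_level (and <= -clip_level at the negative rail);
    elsewhere it must match the observed samples exactly."""
    pos = clipped >= clip_level    # samples clipped at the positive rail
    neg = clipped <= -clip_level   # samples clipped at the negative rail
    ok_pos = np.all(reconstruction[pos] >= clip_level)
    ok_neg = np.all(reconstruction[neg] <= -clip_level)
    ok_rest = np.allclose(reconstruction[~pos & ~neg],
                          clipped[~pos & ~neg])
    return bool(ok_pos and ok_neg and ok_rest)

# A toy sine wave, clipped at +/-0.8: the true signal is,
# by construction, a consistent reconstruction of its clipped version.
x = np.sin(np.linspace(0, 2 * np.pi, 50))
clipped = np.clip(x, -0.8, 0.8)
print(is_consistent(x, clipped, 0.8))
```

A declipping algorithm that enforces this constraint can't "undershoot" in the clipped regions, which is exactly the point of the consistent formulation.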
An interesting feature of the week was the "SiSEC" Signal Separation Evaluation Challenge. We saw posters of some of the methods used to separate musical recordings into their component stems, but even better, we were used as guinea-pigs, doing a quick listening test to see which methods we thought were giving the best results. In most SiSEC work this is evaluated using computational measures such as signal-to-distortion ratio (SDR), but there's quite a lot of dissatisfaction with these "objective" measures since there's plenty that they get wrong. At the end of LVA-ICA the organisers announced the results: surprisingly or not, the listening test broadly showed a strong correlation with the SDR measures, though there were some tracks for which this didn't hold. More analysis of the data to come, apparently.
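For reference, the SDR measure in its most basic form is just the energy ratio between the reference signal and the estimation error, in dB. A minimal sketch (my own illustration; the BSS Eval toolkit used in SiSEC computes a more elaborate version that allows for a distortion filter):

```python
import numpy as np

def sdr(reference, estimate):
    """Basic signal-to-distortion ratio in dB: energy of the
    reference over energy of the estimation error."""
    err = reference - estimate
    return 10 * np.log10(np.sum(reference ** 2) / np.sum(err ** 2))

ref = np.sin(np.linspace(0, 1, 1000))
print(sdr(ref, ref + 0.01))  # a nearly-perfect estimate scores a high SDR
```

The listening-test complaint is essentially that this error-energy view weights all distortions equally, while human listeners care much more about some artefacts than others.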
From our gang, my students Will and Delia presented their posters and both went really well. Here's the photographic evidence:
- Delia Fano Yela's poster about source separation using graph theory and Kernel Additive Modelling - read the preprint here
- Will Wilkinson's poster "A Generative Model for Natural Sounds Based on Latent Force Modelling" - read the preprint here
Also from our research group (though not working with me), Daniel Stoller presented a poster as well as a talk, getting plenty of interest for his deep learning methods for source separation - preprint here.
I've always thought fake meat was a bit silly. When I recently started eating more veggie food I promised myself I wouldn't have to eat Quorn pieces, those fake chicken pieces that taste bland and (unlike chicken) don't respond to cooking. They don't caramelise, they don't get melty tender, they …
Excited today to get a delivery of the new mail-order vegan cheese from my friend's new London cheezmakery, Black Arts Vegan! It came beautifully packed, see:
Their first cheese is a vegan mozzarella. We unpacked the cheese and had a taste - yes, a good clear taste like standard mozzarella. But …
Tamarind is ace. It imparts a deep, rich and sweet flavour to curries. Buy a block and put it in your fridge, it keeps for months, and you can hack a piece off and chuck it in your curry just like that. That's what I did in this lovely chana …
I'm very happy to publish a video of this installation piece that Sarah Angliss and I collaborated on a couple of years ago. We used computational methods to transcribe a dawn chorus birdsong recording into music for Sarah's robot carillon:
We presented this at Soundcamp in 2016. We'd also done …