Cosmos

FANTASTICALLY ELUSIVE BIRDS AND HOW TO FIND THEM

JOHN BIRMINGHAM discovers how the fledgling science of ecoacoustics is transforming conservation.

-

Unless you’re a fugitive, an ecologist or a crocodile, swamps are terrible. Yes, yes, yes – very important ecological super niches and all that. But the mud, the quicksand, the insects, the predators; the deep, abiding discomforts of brute creation; the steam-press heat and humidity; the particular stink of marsh gas bubbles and the generalised stench of rot and genesis – it all pretty much sucks. Unless you’re Liz Znidersic.

“For me it’s bliss,” she says. “It’s muddy. It bites. Every mozzie there is as big as a small bird. You drive through a cloud of them [and] it’s like bullets hitting the windscreen.”

But it’s bliss.

Specifically, the sort of bliss an audiophile supernerd feels when slipping on a new pair of Sennheiser Orpheus cans – they’ll set you back a hundred thousand dollarydoos full retail – to listen to a virgin wax pressing of their favourite band at 33rpm. Dr Znidersic likes to listen.

Specifically, she likes to listen for cryptic bird species, the sneaky ones, who hide out in the world’s gnarliest wetlands, staying silent for weeks at a time, almost as though they know she’s out there, listening for them, and they will be damned if they’ll give her the satisfaction of a single tweet.

Until recently, hunting for fugitive species, especially birds, in remote and punishing wilderness was expensive, difficult and more often than not futile. A researcher might embed themselves in the big muddy for a couple of weeks, listening and recording, but the scope was finite, the returns contingent, and the points of failure many.

Advances in audio technology, batteries and, increasingly, solar-cell power began to fundamentally and rapidly rewrite the equation. Small recording units with weeks or even months of power could be placed throughout an area of interest and left to record the soundscape of the local environment 24/7.

In some ways sound is an even richer resource for studying ecologies than direct or recorded visual observations. Like the reader of a first-person novel, a camera sees only what is in the light cone ahead of it. Sound travels – sometimes over great distances. Where many species are furtive and even deliberately clandestine in their movements, the needs of the genetic line still demand they reach out to potential mates. They do so through calling.

There are other advantages. Any number of researchers can listen to the same recordings any number of times, to improve interpretation. The data can be revisited by future researchers with better analytic tools, and of course the passive arrays of recording devices create much less disturbance in the local ecology than the continued presence of even one human observer.

Is there a downside? But of course.

Even a single day of audio from a single digital recorder generates a vast and almost impenetrably dense trove of data that simply cannot be analysed minute for minute. At least not by human beings.

This was where Znidersic found herself as a postgrad with an interest in natural resource management. A traditionalist who had thrilled to the idea of sitting for a week in a swamp, increasingly dizzy from blood loss to mosquitoes (“It’s bliss!”), she was told that acoustics were the future and she’d better get with it.

“It was just thrust on me that one of the tools I had to use was acoustics, and I was like, ‘Aw, gawd’. I started to collect all this data. I really didn’t have much faith in it.”

Not just data. Lots of data. Terabytes, to be specific. Her supervisor, Professor David Watson at Charles Sturt University, Albury-Wodonga, was already in contact with Professor Paul Roe at Queensland University of Technology’s School of Computer Science, binding up the early threads of a cross-disciplinary approach that would eventually create the data science of ecoacoustics.

Recalls Roe: “We started thinking about it over 13 years ago. I came from an e-science background and I knew how data was revolutionising many sciences, and the value of preserving data. I’d created an e-science centre and we had several projects, including some involving sensors and some in bioinformatics. David Watson had been working for me on some bioinformatics projects and we brought some of the ideas together.”

Watson suggested Znidersic reach out to the Queenslanders and soon she found herself in the laboratory of a bearded gentleman called Towsey, who wanted nothing at all to do with swampworlds of sucking ooze and venomous snakes, even if there were mysterious and cryptic little feathered fellows to be found in there.

Michael Towsey had his own problems.

“I was being given these 24-hour recordings,” he says. “That’s a huge block of sound. I couldn’t even open them with the software I had at the time. I was under a lot of pressure. My job was on the line. Twenty-four-hour recordings, and I can’t even open the file. I haven’t got a clue what’s going on!”

Years later he still sounds stressed.

But necessity and some desperation being the mother of invention, he came up with the idea of breaking the recordings into one-minute segments. He could open a one-minute audio file, no problemo. Analyse the heck out of it, too. The 1,440 minutes he could then stitch back together for a whole day’s output.
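The chunking step is easy to sketch. The fragment below is a toy illustration only – not Towsey’s actual code – and uses a deliberately low, made-up sample rate so the arrays stay small:

```python
import numpy as np

SAMPLE_RATE = 100        # toy rate for the demo; real recorders run far higher
SEGMENT_SECONDS = 60     # the one-minute working unit

def split_into_minutes(samples, sr=SAMPLE_RATE):
    """Chop a long recording into one-minute segments (any trailing
    partial minute is dropped), so each piece can be analysed on its own."""
    seg_len = sr * SEGMENT_SECONDS
    n_segments = len(samples) // seg_len
    return samples[:n_segments * seg_len].reshape(n_segments, seg_len)

# A full 24-hour recording...
day = np.zeros(SAMPLE_RATE * 60 * 60 * 24, dtype=np.float32)
# ...becomes 1,440 tractable one-minute pieces.
print(split_into_minutes(day).shape[0])   # 1440
```

Each of those 1,440 pieces can then be analysed independently and the results stitched back together for a whole day.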

Sorted.

But only for Towsey.

The output was a spectrogram: a fuzzy monochrome blur of data visualisation that represented 24 hours of audio recording – thousands, maybe tens of thousands of instances of insect chirrups, bird calls, frog croaks, rainfall, aircraft and the strangely terrifying growl of the koala – presented as the sort of impenetrable greyscale scan an oncologist might hand to a life-long smoker with a sad shake of the head. The eureka moment was colour.

And indices.


And realising that you could apply filters to an audio spectrogram the same way you could to a photograph.

There’s a lot happening in any given minute of audio recording in a wilderness – and at particular times of the day and night, that complexity explodes. Filtering out the complexity wasn’t possible for ecologists like Liz Znidersic sitting in the mud or, to be honest, sitting back in the sound lab listening through headphones.

But it was possible with enough processing power. Originally, this power was harnessed to create software to identify the calls of individual species in the chaotic, unconstrained soundscape of the wilderness. Think of it as Shazam for bird calls. Or frog croaks, or whatever. But in the same way the little smartphone app that tells you the name of the absolute banger playing in the bar can be overwhelmed by background noise, even the best, most expensive species audio “recognisers” fail. And of course, they are silent on the content of recordings in which the uncooperative or absent species remain silent.

Treating long-duration recordings as soundscapes, rich with different categories of audio sources, opened the path to sorting those categories into biophony (produced by living animals), geophony (the sounds of wind, rain or crashing waves we’ve all been listening to through pandemic lockdowns) and anthrophony (for any human contributions).

Once you know what to ignore, 24 hours (or 1,440 minutes) of recording shakes out into a simpler (but still complex) audio map of biophonic sound sources. And those sources can be filtered by different indices to create different patterns of acoustic structure.

“It’s like photography,” Towsey explains. “You can put filters on a camera lens and you’ve got the same scene but you’ve got three different views of it because the filter removes this and that. We’re not filtering a photograph, however – we’re filtering sound. If I apply three different filters, or indices, I get three entirely different views into the sound world.”

Towsey’s breakthrough moment arrived when he realised that if he took these different indices, or views, into the sound world, and assigned them to the red, green and blue channels of the visual image, he suddenly had a false-colour spectrogram, instead of a grey haze.
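The trick can be sketched in a few lines. This is an illustrative reconstruction, not Towsey’s software: assume three acoustic-index matrices have already been computed, one value per minute per frequency bin, and simply normalise each onto one colour channel:

```python
import numpy as np

def false_colour(index_r, index_g, index_b):
    """Stack three acoustic-index matrices (minutes x frequency bins)
    into the red, green and blue channels of a single image."""
    def normalise(m):
        lo, hi = m.min(), m.max()
        return (m - lo) / (hi - lo) if hi > lo else np.zeros_like(m)
    return np.dstack([normalise(index_r), normalise(index_g), normalise(index_b)])

# Stand-in indices for one day: 1,440 minutes x 256 frequency bins
# (random numbers here; real indices would come from the audio itself).
rng = np.random.default_rng(0)
shape = (1440, 256)
image = false_colour(rng.random(shape), rng.random(shape), rng.random(shape))
print(image.shape)   # (1440, 256, 3): one RGB pixel per minute/frequency cell
```

In the real pipeline the three inputs would be spectral indices such as acoustic complexity or temporal entropy; the random stand-ins are just to show the shape of the idea.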

“It’s exactly the same thing except that instead of visual filters, I’m applying sound filters. You put them all together and bingo! The first one I saw, I was just amazed at the amount of detail I could see in a 24-hour recording. Compared to what had previously been possible it was… it was… well, let’s just say it was a lovely feeling.”

The spectral acoustic indices Towsey was growing so excited about on our Zoom call describe precise features of how acoustic energy is distributed across the frequency bins of those one-minute segments – the ones the limits of his software had originally forced him to create.

Liz Znidersic was more poetic.

“Look at the false-colour spectrogram the way an ecologist looks at the landscape,” she says. “We learn

WIRED FOR SOUND: THE A2O

The Australian Acoustic Observatory (A2O) is a continental-scale acoustic sensor network, designed to collect data over five years from 90 sites across seven different Australian ecoregions.

Funded by an Australian Research Council (ARC) Linkage Infrastructure, Equipment and Facilities grant of $1.8 million, A2O is futuristic and… well, hard to explain.

“It’s not actually the traditional sort of scientific project where you say, ‘hey, we’ve got a question – let’s try and answer that question’,” says Professor David Watson, of Charles Sturt University, who’s one of the A2O’s five chief investigator managers.

“We call it an observatory … because it’s borrowing the astronomers’ use of the word and that is: ‘Hey, we’ve all got questions. So let’s all pool our money. Let’s get a big grant. Let’s buy some big kit. And let’s all address our questions with that kit.’”

The “kit” at each site is four solar-powered acoustic sensors (which retail for about $1,300), purpose-built by Brisbane-based Frontier Labs.

The number of sensors, Watson says, reflects some built-in redundancy “if something happens – if a cow leans on a machine, if a bushfire comes through”. Two sensors are placed near wet habitat and two in a relatively dry zone. That’s about 360 sensors across the entire network.

“Each machine collects about a gigabyte of data a day,” says Watson. “And so when you start doing some sums, [the data set] gets really big, really quick.”
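The sums are easy to reproduce. Taking the article’s round figures – about 360 sensors at roughly a gigabyte a day each, over the five-year program – the archive lands in the hundreds of terabytes:

```python
# Back-of-envelope estimate using the article's round figures.
sensors = 360                 # four sensors at each of 90 sites
gb_per_sensor_per_day = 1     # "about a gigabyte of data a day"
days = 5 * 365                # the five-year program

total_gb = sensors * gb_per_sensor_per_day * days
print(total_gb, "GB, i.e. roughly", total_gb // 1000, "terabytes")
```

Really big, really quick indeed.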

So who in science is accessing the data, and what are they doing with it? Watson says the current project of Liz Znidersic (see main story) is a great example.

“Liz is working with a whole bunch of sneaky birds,” he says. Watson says Znidersic is using A2O kit to “eavesdrop” on those calls, and then, with her collaborator Michael Towsey, using false-colour spectrograms (“a really fancy approach”) to reduce the complex soundscape into something that humans can analyse and query.

But the sensors are collecting information about “way more than just birds”, Watson says. He cites insects, raindrops, mammals (“goats make a lot of weird sounds”) and anthrophony – human sounds: “In desert areas that are very still, you can [record] planes from a very long way away.”

So what are the wider possibilities for research from the A2O? Watson says that only now, three years in, are they starting to be revealed. He thinks that, just as GIS (geographical information systems) took many years to develop “as a whole way of doing spatial science”, it’s going to take “decades, decades” for all manner of scientists to realise the possibilities of sound data. “That’s where we are now,” he says. “I get like three emails a day from people who say, ‘hey, I want to do this’.” To which he responds: “If you really want to do that, you’re going to need to work out how to do it because no one has.”

All the data collected is open access, so there’s plenty of time to work out how to do things.

to read and understand parts of that landscape. We see eucalyptus, reeds, whatever, and we associate certain species with it.

“A false-colour spectrogram is a representation of vocalisations during a 24-hour period. It’s a soundscape, so you’ve got all the different species vocalising in different frequencies. Some species vocalise all in the one area, creating what we call the ‘cocktail party effect’. Everyone is trying to talk above each other so they can be heard.”

By drilling down into the frequencies, however, represented in the false-colour spectrograms by flares of red, blue or green, certain species revealed themselves.

“I’d been using camera traps and call playback methods to detect the cryptic wetland birds,” says Znidersic. “David Watson suggested I apply acoustic monitoring to see if it was useful. I collected all these terabytes of data and was at a loss what to do with it. Michael then analysed the terabytes into the false-colour spectrograms. Visually they were beautiful; intellectually they were mind-bending. Once I started to read the spectrograms as a visual landscape, seeing species and groups of taxa clearly, I was literally taken on an amazing trip into sound and colour.”

She remembers walking into Towsey’s lab at QUT, listening to him talk about false-colour spectrograms, and staring at them for a very long time, trying to interpret them. “And I remember saying to Michael, ‘I think I can see Lewin’s rail calls in there.’ That was new, because we hadn’t gone down to species-specific level. And I remember he turned to me and said: ‘Well, you better prove it. You get the spreadsheet together and present that to me.’”

Towsey smiles at the memory.

“We weren’t married at that stage,” he tells me. Reader, they are now.

In a 2018 paper in the Journal of Ecoacoustics called “Long-duration, false-colour spectrograms for detecting species in large audio data-sets”, the two scientists describe Lewin’s rail as “a furtive wetland-dependent bird which inhabits thick vegetation, calls rarely and is seldom seen”. A chubby little red-helmeted introvert, Lewin’s rail likes foraging for invertebrates at the edge of shallow water and minding its own damn business.

There are eight subspecies of Lewin’s rail, although one of them, Lewinia pectoralis clelandi, was last seen in the southern reaches of Western Australia in 1932, so its chances of a comeback aren’t looking good – the last known live Tasmanian tiger was still walking around four years later.

Rails are among the most threatened bird species in the world, largely due to invasive species. They’re also shy.

Until passive recording and long-duration false-colour (LDFC) spectrograms, one of the main ways of finding Lewin’s rail was call playback, where an ecologist “plays” the sound of a bird and waits for the real thing to respond. Unfortunately, the response of the Lewin’s rail to such an entreaty might just be to abandon its nest and get the hell away from an unexpected competitor.

As Towsey and Znidersic wrote with their co-authors: “Repeated use of call playback (which simulates a territorial intrusion) may negatively affect resident pairs, resulting in territory abandonment or nest failure. Additionally, this methodology also requires a costly extended survey effort to enable high confidence levels inferring absence. Lewin’s rail vocalisation repertoire changes temporally from an acoustically simple contact call to a complex call repertoire with harmonic elements that is thought to be associated with breeding. Vocalisations are also sporadic, and of either short or long duration. To establish their current distribution and evaluate their population status, a monitoring approach is needed that can reliably detect small numbers of individuals unobtrusively.”

In other words, the species doesn’t like to be shouted at and it talks rarely, often using funny voices. It’s a perfect candidate for the new approach.

The researchers chose Tasman Island for their study, a forbidding, sort of oval-shaped plateau rising nearly 300m above sea level off the stormy south-east coast of Tasmania. Populated from the early 20th century until 1977 by lighthouse keepers who grazed sheep and left behind a murderous litter of feral cats, Tasman has been largely denuded of its original forest cover. But its steep cliffs and tabletop heights are still home to tens of thousands of fairy prions, little penguins, swamp harriers and shearwaters – their numbers enhanced over the past decade by a cat-eradication program declared successfully completed in 2011.

When Tasman’s avian choristers get rolling they generate quite the cocktail party effect.

Znidersic placed a Wildlife Acoustics SM3 sensor on the island for 10 days in November 2015, resulting in 240 hours of recording. Recognition software eventually delivered 49 positive identifications, or “instances”, including 18 easy hits, 12 more difficult examples and 19 “very difficult instances”. To compare: 70 confirmed observations of the Tasmanian subspecies of Lewin’s rail in the preceding 20 years. Then 49 instances in 10 days.

“That’s the novel and amazing thing about this,” Znidersic says. “I can sit in a marsh for two weeks and not hear this bird call. You leave the recorder out there for a month and – what do you know? – as soon as I leave, it calls.”

It was, she thinks, the true crossing over of the disciplines.

“Something really tangible for an on-ground outcome. That’s where I come from. How can we find the birds, minimise impact on species and the environment, competently infer absence? If they’re not there, we’re really confident they’re not there.”

But they were there.

“There’s a huge future coming out of this one tool,” Znidersic promises. “We’re looking at how can we answer different ecological questions using false-colour spectrograms. We’ve been working on a project for a couple of years looking at pre- and post-feral cat eradication on an island. We’re also looking at pre- and post-fire.”

But it’s not close to being a mature technology, her partner warns.

“From an acoustics point of view, there’s still a lot to be done,” says Towsey. “There isn’t really any effective software for automated bird recognition. Environmental recording is extremely difficult. You can and you do get everything. We get gunshots, we get human speech, we get wind, rain; we’ve had bat wings knocking the microphone. Anything that can happen in the acoustic world does happen in environmental recording.”

Towsey sees the future in what he calls content description: a simple explanation, minute by minute, of the content.

“You know, this minute contains bird calls, this minute has frogs and insects,” he says. “The advantage of that is, it can be put into a text database and text databases are very efficient for search. So an ecologist would be able to search a database and pull out all the minutes that contain frogs, or frogs in a particular bandwidth. You’d be searching these terabytes of acoustic recording simply by searching the text database. That’s what I started on, and it’s probably a bigger job than I’m going to be able to finish.”
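Towsey’s content-description idea reduces to an ordinary text search. A minimal sketch – invented labels, with an in-memory SQLite database standing in for a real archive:

```python
import sqlite3

# One content-description row per minute of recording (labels are invented).
rows = [
    (0, "birds"),
    (1, "frogs,insects"),
    (2, "rain"),
    (3, "frogs"),
    (4, "birds,frogs"),
]

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE minutes (minute INTEGER, content TEXT)")
db.executemany("INSERT INTO minutes VALUES (?, ?)", rows)

# An ecologist pulls out every minute containing frogs with one text
# query, instead of trawling terabytes of raw audio.
frog_minutes = [m for (m,) in db.execute(
    "SELECT minute FROM minutes WHERE content LIKE '%frogs%' ORDER BY minute")]
print(frog_minutes)   # [1, 3, 4]
```

The audio itself stays in bulk storage; only the lightweight text descriptions need to be searched.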

Bigger still perhaps is the idea that started to emerge from discussions between Paul Roe and David Watson over 13 years ago. As Roe’s IT specialists and data scientists cooperated with ecologists such as Znidersic on increasing numbers of projects, and as the cost of computer storage and hardware came down, their original idea of an acoustic observatory moved from speculation to execution.

The Australian Acoustic Observatory (A2O) now exists as a continent-spanning network of sensors continually recording across multiple ecosystems (see “Wired for Sound: The A2O”).

To understand the importance of such data, imagine the same technology had been available to Sir Joseph Banks when he first arrived in Australia aboard Cook’s Endeavour. We would not just have his detailed notes and paintings of the undisturbed wilderness, but an exquisitely detailed, super-dense audioscape of the existing ecologies. That is what A2O is gifting to future generations.

“It will provide badly needed baseline data, which for much of the world we don’t have,” Roe explains. “In many regions we simply don’t know what species are where. Without such information we have no idea how our world is changing in response to bushfires, invasive species and climate change, nor can we effectively audit the environment or monitor remediation strategies to know if they are working.”

Roe sees a future in which acoustics will increasingly be used with other sensors, often remotely, to scale environmental monitoring.

“And this will in turn support environmental accounting and green banking,” he says. “I expect there to be progress in integrating different data streams from different devices to yield comprehensive information on how our world is changing. Ecoacoustics is transforming ecology into a big-data science, a data-driven science, rather like bioinformatics.”


The accelerating slide into climate chaos also drives Liz Znidersic. “Eighty per cent of the world’s wetlands are disappearing or have gone,” she says. “And we need water in this world. Wetlands are critical, so to monitor a wetland with very basic ecological indicators is essential to us. It will play a vital part in our world existing into the future.”

Her husband leans forward, his brow furrowed like a man who has just been handed a massive data dump he can’t even begin to open, let alone analyse.

“The interface between ecology and computer science is still not easy,” Michael Towsey says. “There’s very few people who can step across those boundaries. But increasingly it is happening and ecoacoustics is being recognised as a discipline in itself. That’s only happened in the last 10 years. There’s even a journal.” Validation, at last.

Towsey still doesn’t find bliss in brute creation. The gators and crocs growling at night, the insects hitting the windscreen like bullets – none of it is as agreeable to him as a nice air-conditioned lab, with a decent coffee machine and a really interesting data set.

“But,” Towsey finishes, “the point that Liz made about wetlands being crucial to ecology of the continent is what drives this work. It’s the raison d’être for our careers.”

JOHN BIRMINGHAM is a Brisbane-based bestselling writer of non-fiction, science fiction and fantasy. His most recent book is The Shattered Skies. This story is part of our New Ways of Seeing series, enabled by a grant from the CAL Cultural Fund.

The Lewin’s rail makes three different calls. Here, we show spectrograms of the Grunt-wheeze and the Kek-kek. The timescale tics are seconds; the dotted horizontal lines represent 1000-hertz intervals.

Clockwise from above: Musk duck, Biziura lobata; Australasian bittern, Botaurus poiciloptilus; eastern spinebill, Acanthorhynchus tenuirostris. Opposite at right: partners in ecology and data, Liz Znidersic and Michael Towsey.

A false-colour spectrogram created from a 24-hour recording at Waterhouse Conservation Area, on the north coast of Tasmania, in late September 2020. Spectrograms are visual representations of audio files, where every mark is an acoustic activity – ranging from human sounds such as overflying aircraft to species-specific sound signatures which can be used for identification.

Below, from top: Fan-tailed cuckoo, Cacomantis flabelliformis; yellow-tailed black cockatoo, Calyptorhynchus funereus. Opposite, data being removed from one of the acoustic observatories; the A2O aim is to listen in across the country (opposite bottom) to gain a more comprehensive understanding of species distribution.

Above, from left: Australian reed warbler, Acrocephalus australis; white-plumed honeyeater, Lichenostomus penicillatus; New Holland honeyeater, Phylidonyris novaehollandiae.

Spectrogram from a 24-hour recording at Macquarie Marshes in NSW, on a 45°C day. When bird activity is this prolific, it creates the “cocktail party effect”, where species call at louder and louder volumes over a wide range of frequencies, creating immense difficulty for identification. Species creating the dense cacophony below include the white-plumed honeyeater, sacred kingfisher, Australian white ibis, black swan, peaceful dove, blue-faced honeyeater, grey shrike-thrush, white-winged chough and apostlebird.

Clockwise from below: Blue-faced honeyeater, Entomyzon cyanotis; male and female superb fairywrens, Malurus cyaneus; apostlebird, Struthidea cinerea.

These spectrograms represent the same soundscape in a bush reserve – note the dawn chorus around 5am. They look different because the information has been output through different sound filters. At top, the red shows fluctuating sound amplitude; green indicates bursts of sound energy (versus sound that is evenly spread through the minute); blue shows the number of distinct acoustic events in each minute. Below, red emphasises background noise (say, due to distant traffic noise or insect/frog chorusing); the green displays the overall loudness of sound; and blue picks out sound dispersal.
