Guest blog post by bioacoustics PhD student Chloe Malinka, @c_malinka
We at Coding for Conservation would like to let you know about a recent publication, authored by researchers from the Marine Bioacoustics lab at Aarhus University, the Sea Mammal Research Unit, the Bahamas Marine Mammal Research Organisation, and Ocean Instruments.
(A very rough first draft of this paper was originally posted here as a blog post in July 2018. Due to the interest it received, and to make the method more accessible, we decided to develop it into a manuscript for publication. A couple of field seasons later, here we are, ready to share our publication with you…)
I recently had an opportunity to study the bioacoustics of a deep-diving toothed whale. I was interested in collecting passive acoustic recordings on an array containing multiple hydrophones. With this, I planned to detect echolocation clicks, classify them, and localise them.

However, this presented me with a challenge: how do I deploy an array and collect recordings at several hundred meters of depth, where I anticipate my animals of interest to be? With traditional star and towed arrays, the cables all connect to recording gear on the boat, so all channels usually get recorded on the same soundcard. If I want to go deep, ~1000 m of cable is heavy and expensive, which means a big boat is needed, which is also expensive. …If only I had an autonomous array that I could deploy into the deep, without having to worry about its connection to a boat. Furthermore, I would need this array to be vertically oriented, and as straight as possible, to minimise errors in the acoustic localisations.
I looked around the lab, and I came across a few (okay, 14) SoundTraps. These are autonomous hydrophones made by a company based out of New Zealand (Ocean Instruments). I’ve used these devices many times before and appreciated their user-friendliness, low noise floor, and large dynamic range.

I got in touch with their director, who had the foundations in place for a “Transmitter / Receiver” setup. So long as all the devices on an array are connected by a cable, the “Transmitter” sends an electrical pulse to all of the “Receivers” on the array, at a rate of one pulse per second. The Transmitter and each Receiver record the sample number at which they sent or received each pulse. This information is stored and can later be used to time-align the audio recordings from all devices on the array, to sample-level accuracy. In other words, we now have a way to treat these autonomous devices as if they were collecting audio data on the same soundcard.
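To make the idea concrete, here is a minimal Python sketch of how stored pulse logs like these could be used to map one device's sample counter onto another's. This is not the published analysis library (linked below), just an illustration of the principle; the function names, sample rate, and one-to-one pulse pairing are assumptions made for the example.

```python
import numpy as np

def fit_clock_map(tx_pulse_samples, rx_pulse_samples):
    """Fit a linear map from the Transmitter's sample counter to a
    Receiver's sample counter, using the once-per-second sync pulses.

    tx_pulse_samples : sample numbers at which the Transmitter sent each pulse
    rx_pulse_samples : sample numbers at which this Receiver logged the same pulses
    (assumes the two sequences are already paired one-to-one)
    """
    # A first-order fit absorbs both the start-time offset and any slow
    # clock drift between the two devices' sample counters.
    slope, offset = np.polyfit(tx_pulse_samples, rx_pulse_samples, 1)
    return slope, offset

def rx_sample_to_tx_sample(rx_sample, slope, offset):
    """Convert a sample index on the Receiver to the Transmitter's timebase."""
    return (rx_sample - offset) / slope

# Hypothetical example: simulate 10 minutes of 1 Hz pulses at an assumed
# sample rate, with a small clock offset and drift on the Receiver.
tx_pulses = np.arange(600) * 576_000            # illustrative 576 kHz sample rate
rx_pulses = tx_pulses * 1.000002 + 3_417        # simulated drift + offset
slope, offset = fit_clock_map(tx_pulses, rx_pulses)

# A click detected at sample 1_234_567 on this Receiver can now be placed
# on the common (Transmitter) timebase and compared with detections on
# every other device at sample-level precision.
print(rx_sample_to_tx_sample(1_234_567, slope, offset))
```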
How did I do this, and how can you do it, too? Check out our publication here:
Malinka CE, Atkins J, Johnson M, Tønnesen P, Dunn C, Claridge D, Aguilar de Soto N, & PT Madsen (2020) “An autonomous hydrophone array to study the acoustic ecology of deep-water toothed whales.” Deep Sea Research I. https://doi.org/10.1016/j.dsr.2020.103233
Highlights:
– We developed an autonomous, deep-water, large-aperture vertical hydrophone array from off-the-shelf components, to address the challenge of recording time-synchronised, high sample rate acoustic data at depth.
– Array recordings can be used to quantify the source parameters of toothed whale clicks (see the sketch after this list).
– We report on the design and performance of the portable and lightweight array.
– Step-by-step directions on how to construct the array, as well as an analysis library for time synchronisation, are provided.
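As a side note on the second highlight: once a click has been localised, its apparent source level can be estimated by back-calculating from the received level with a transmission loss model. The sketch below uses the standard spherical-spreading-plus-absorption form, SL = RL + 20·log10(r) + α·r; the numbers and function name are hypothetical and are not taken from the paper.

```python
import numpy as np

def back_calculated_source_level(received_level_db, range_m, absorption_db_per_m):
    """Estimate the apparent source level (dB re 1 uPa @ 1 m) of a click from
    its received level, assuming spherical spreading plus frequency-dependent
    absorption: SL = RL + 20*log10(r) + alpha*r.
    """
    transmission_loss = 20.0 * np.log10(range_m) + absorption_db_per_m * range_m
    return received_level_db + transmission_loss

# Hypothetical numbers: a click received at 140 dB re 1 uPa from an animal
# localised 800 m away, with an illustrative absorption of 0.01 dB/m,
# back-calculates to roughly 206 dB re 1 uPa @ 1 m.
print(back_calculated_source_level(140.0, 800.0, 0.01))
```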

The publication also links to the time synchronisation library on GitHub, some research data on which the library can be trialled, and a step-by-step build-and-deployment guide in the Supplementary Materials.
We genuinely hope that making the instructions, software, and analysis library openly available will make it easier for other researchers to employ this method.
Questions, comments, or need access to the publication? Get in touch.