The Evolution of Dolphin Research: Embracing New Technology

Dolphin research has come a long way from the days of underwater film cameras and limited shot counts. Today, technology plays a pivotal role in enhancing our understanding of these animals, from advanced sound processing to high-definition video and innovative underwater communication systems.

From the Sky

Drones are revolutionizing marine mammal research by providing a bird’s-eye view of habitats without disturbing the animals. They help researchers track movements, observe behaviors, and collect data from hard-to-reach areas. This technology allows for safer, more efficient studies, ultimately improving our understanding and conservation of marine mammals.

Last year, WDP published a scientific study exploring the occurrence of mixed-species groups of dolphins, specifically the common bottlenose dolphin and the Atlantic spotted dolphin, along the southeast coast of Florida. While these species have been seen interacting in other regions, this study is the first to document them together in Florida. Using a DJI Mavic Pro 2 drone, we observed their behavior and highlighted the need for further investigation into the reasons behind these mixed-species groups.

Read the study here.


Aerial view of dolphin behavior captured via drone.

We also used drones to record Lamda's release following his rescue and rehabilitation.

Watch here.

Read the full story here.

Photo of Lamda.

Navigating Dolphin Sounds with ASPOD

One of the major challenges in dolphin communication research is identifying which dolphin produces specific sounds. On land, researchers use triangulation methods with multiple microphones. Underwater, we employ similar techniques using hydrophones. Our ASPOD (Acoustic Source Positioning Overlay Device) combines a video camera with hydrophones to collect data on vocalizing dolphins. After processing, we can visualize which dolphin made which sound, aiding our understanding of their communication patterns.
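The triangulation idea described above, locating a sound source from small differences in when it reaches each hydrophone, can be illustrated with a short sketch. This is not ASPOD's actual processing; the array geometry, sound speed, and brute-force search below are all illustrative assumptions.

```python
import numpy as np

# Hypothetical square hydrophone array (positions in meters) and a
# nominal sound speed for seawater; neither reflects ASPOD's real setup.
SENSORS = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
SOUND_SPEED = 1500.0  # m/s, approximate speed of sound in seawater

def tdoa(source, sensors=SENSORS, c=SOUND_SPEED):
    """Arrival-time differences at each sensor relative to sensor 0."""
    dists = np.linalg.norm(sensors - source, axis=1)
    return (dists - dists[0]) / c

def locate(observed, extent=6.0, step=0.1):
    """Grid-search the plane for the point whose predicted time
    differences best match the observed ones (least-squares mismatch)."""
    best, best_err = None, np.inf
    for x in np.arange(-extent, extent + step, step):
        for y in np.arange(-extent, extent + step, step):
            candidate = np.array([x, y])
            err = np.sum((tdoa(candidate) - observed) ** 2)
            if err < best_err:
                best, best_err = candidate, err
    return best

true_source = np.array([3.0, 4.0])
estimate = locate(tdoa(true_source))
# The estimate lands on the grid point nearest the true position.
```

Real systems solve the same geometry analytically or with gradient-based fitting rather than a grid search, but the principle is identical: the pattern of arrival-time differences pins down where the vocalizing dolphin is.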

Harnessing Machine Learning for Language Decoding

As artificial intelligence has gained traction, we have integrated machine learning into our research. These algorithms help us analyze our extensive sound datasets, allowing us to categorize sounds that were previously difficult to decipher. Using a user interface called UHURA, we can inject sound files and search for patterns in dolphin communication, correlating vocalizations with behaviors captured on video.
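As a generic illustration of the kind of unsupervised categorization described above, here is a minimal k-means sketch. The feature vectors, values, and algorithm choice are assumptions for illustration only, not a description of the actual UHURA pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend each whistle has already been reduced to a small feature
# vector (e.g. start frequency, end frequency, contour slope);
# these synthetic values stand in for two recurring whistle types.
type_a = rng.normal([8.0, 12.0, 0.4], 0.2, size=(20, 3))
type_b = rng.normal([4.0, 6.0, 1.1], 0.2, size=(20, 3))
whistles = np.vstack([type_a, type_b])

def kmeans(data, k, iters=20):
    """Minimal k-means: assign points to the nearest centroid, update
    each centroid to the mean of its members, repeat."""
    centroids = data[rng.choice(len(data), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(
            np.linalg.norm(data[:, None] - centroids[None], axis=2), axis=1)
        centroids = np.array([
            data[labels == i].mean(axis=0) if np.any(labels == i)
            else centroids[i]  # keep an empty cluster's centroid in place
            for i in range(k)])
    return labels

labels = kmeans(whistles, k=2)
# With well-separated feature clusters, the two whistle types end up
# in different categories.
```

Grouping an archive this way lets researchers ask which recurring sound categories co-occur with which behaviors on video, which is the correlation step described above.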

Exploring Dolphin Families

Hayley Knapp, a graduate student working with Denise Herzing, is studying dolphin genetics to expand on and update the work of former WDP doctoral student Michelle Green, Ph.D., now an assistant professor of instruction at the University of South Florida.

Her project, titled “From Poop to Parent: Examining Paternity in Dynamic Atlantic Spotted Dolphin Populations in the Bahamas,” looks at DNA from dolphin feces to find out who the fathers are. This research focuses on calves born after 2013, after dolphins moved from Little Bahama Bank off Grand Bahama Island to Great Bahama Bank off Bimini. By identifying these fathers, Hayley hopes to gain insights into the social lives of these dolphin communities.
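Paternity analysis of this kind typically works by exclusion: at each genetic marker, any calf allele that could not have come from the mother must have come from the father, so a candidate male who carries none of those alleles at some marker can be ruled out. The sketch below is purely illustrative; the loci, allele values, and individuals are invented, and real analyses handle ambiguous cases statistically rather than with simple set logic.

```python
# Invented genotypes at three hypothetical microsatellite loci; each
# individual carries up to two alleles (shown as allele lengths) per locus.
calf   = {"L1": {120, 124}, "L2": {88, 92}, "L3": {150}}
mother = {"L1": {118, 120}, "L2": {92, 96}, "L3": {150, 154}}
candidate_males = {
    "male_A": {"L1": {124, 126}, "L2": {88, 90}, "L3": {150, 152}},
    "male_B": {"L1": {118, 126}, "L2": {88, 90}, "L3": {150, 152}},
}

def excluded(male, calf, mother):
    """A male is excluded if, at any locus, he carries none of the
    alleles the father must have contributed (i.e. calf alleles the
    mother could not have supplied)."""
    for locus, calf_alleles in calf.items():
        obligate = calf_alleles - mother[locus]  # empty = uninformative locus
        if obligate and not (obligate & male[locus]):
            return True
    return False

for name, genotype in candidate_males.items():
    status = "excluded" if excluded(genotype, calf, mother) else "compatible"
    print(name, status)  # male_A compatible, male_B excluded
```

Applied across many loci and many candidate males, exclusion narrows the field until (ideally) one compatible father remains for each calf.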

A group of dolphins — and poop!

CHAT

Many of you have followed the evolution of our work with dolphins. From 1997 to 2000, we piloted an underwater keyboard with wild spotted dolphins, which helped us make some progress but ultimately showed us we needed a better system. In the fall of 2010, we relaunched this project in collaboration with Georgia Tech, developing cutting-edge technology specifically for our research.

Our goal is to use wearable computers to facilitate two-way communication experiments with dolphins. With the CHAT system, one researcher broadcasts a sound associated with a toy that dolphins enjoy. When a second researcher detects this sound, they pass the toy back to the first researcher. This interaction reinforces the link between the sound and the toy, allowing us to see if dolphins will mimic the sound to request the toy. The wearable computer uses pattern recognition technology to identify these imitated sounds.
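One way to picture the pattern-recognition step, matching a heard whistle against the stored, computer-generated labels, is template matching on frequency contours with dynamic time warping, which tolerates an imitation being faster or slower than the original. This is only a sketch of the general idea; the contours, label names, distance measure, and threshold below are all assumptions, not CHAT's actual recognizer.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D frequency
    contours, normalized by the combined contour length."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1],
                                 cost[i - 1, j - 1])
    return cost[n, m] / (n + m)

# Hypothetical stored labels: frequency contours (kHz) of synthesized
# sounds, each associated with a toy.
templates = {
    "scarf": np.linspace(5.0, 12.0, 40),                          # rising sweep
    "sargassum": 8.0 + 3.0 * np.sin(np.linspace(0, 2 * np.pi, 50)),
}

def classify(contour, threshold=0.5):
    """Return the best-matching label, or None if nothing is close enough."""
    label, dist = min(((name, dtw_distance(contour, tpl))
                       for name, tpl in templates.items()),
                      key=lambda pair: pair[1])
    return label if dist < threshold else None

heard = np.linspace(5.2, 11.8, 36)  # a slightly-off rising sweep
print(classify(heard))  # → scarf
```

In this picture, a close-enough match to a stored label is what would cue the researchers that a dolphin may have imitated the sound to request the toy.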

Over time, the wearable computers have shrunk in size, evolving from bulky boxes worn on the chest to more compact devices that fit comfortably on the wrist.

In our latest CHAT study, published in the journal Animal Behavior and Cognition, we used a system that labeled objects with computer-generated sounds to enhance interactions with young female dolphins. While these dolphins showed impressive vocal skills and flexibility, they didn't seem to understand the labels, reinforcing recent findings about the complexities of dolphin communication.

Read the full study here.