Science for a Blue Planet

Featuring cutting-edge work, discoveries, and challenges of our scientists, our partners, and the larger conservation science community.

Hope in the Cloud?

By Dr. Grant Ballard, Chief Science Officer

Being in the midst of a technological revolution isn’t always as exciting as we might hope. In fact, the promise of the revolution can be hard to appreciate when the outcome seems to be mostly an endless proliferation of distractions (think: custom-designed advertisements, courtesy of artificial intelligence, that you click on because they are things you actually like, but probably don’t need). Distractions are perhaps more attractive when the alternative is focusing on climate change, biodiversity loss, and other potentially existential crises. But maybe some aspects of technological innovation can help us out of the mess we’re in?

A volunteer community scientist deploying an audio recorder for the Soundscapes to Landscapes project. Credit: Rose Snyder/Point Blue

One of the biggest technological changes of the 21st century is the advent of “cloud-based” computing, which became noticeable around 2006 and has advanced rapidly since. Cloud computing differs from the previous paradigm, known as “on-premise” computing, in that the machines doing the work are remote (and virtualized, a topic perhaps best left for the curious reader’s own research). Because ubiquitous high-speed Internet interconnects new computing and data resources as they come online, it doesn’t much matter where the machines are located. That makes the computing power and storage capacity of these large banks of machines essentially infinitely scalable compared to an individual personal computer or a locally networked server.

The advances that cloud computing has enabled in other disciplines are well known: navigation systems, translation services, internet search, and digital personal assistants all rely on harnessing the computing power and data storage of thousands of computers at once. But what are the opportunities in conservation science?

Community scientists and Point Blue staff training AI models to recognize bird songs. Photo credit: Point Blue.

The biggest opportunity we know of is using artificial intelligence (which relies largely on cloud computing) to complete tasks that humans can do quite capably at small scales but cannot do at scale: identifying species by their sounds, counting seals resting on ice in a satellite image, or counting birds nesting in a colony. Thanks to parallel advances in sensor technology, we have deployed a vast and growing array of instruments, and the data streams those sensors provide are nearly unusable without the assistance of artificial intelligence.

For example, we are using a group of three flying robots (a.k.a. drones) to collect as many as 6,000 images of a single penguin colony within a few hours. It would be challenging to find people willing to count all the penguins in those 6,000 images even once, much less every year (and it would be nice to get results from willing participants within a few hours rather than a year or two later!). Instead, we’re now using a combination of cloud computing and artificial intelligence to count the penguins for us. We’re also using the same technology to continue the long-term California Gull census and productivity estimates at Mono Lake, which we have been conducting since 1983. These studies are vital to informing water management in the Eastern Sierra.

Point Blue staff and partners conducting drone surveys of California Gull breeding colonies at Mono Lake. Photo Credit: Dennis Jongsomjit, Point Blue

More recently, we are attempting to use AI to enable underwater robots to identify and map a rapidly moving coral disease in the Caribbean so that we can better protect areas that are resilient. We hope that the deeper-water reefs (100–200 feet), which are hard for human divers to access, will be largely unaffected, but many hundreds of miles of reef would ideally be mapped and monitored to know for sure – a task humans could never achieve without technological assistance. Next year, we will help develop automated monitoring of the Greater Cape Floristic Province in South Africa, one of the 20 most biodiverse regions on the planet, with the assistance of AI. We have also used scalable cloud computing to enable hundreds of thousands of volunteers to participate in our research, helping us track the health of the global emperor penguin population and calculate the first global estimate of Weddell seals, highlighting conservation priorities for these species. (Weddell seals, like the polar bear and the recently listed emperor penguin, rely on sea ice for their survival, making them susceptible to a changing climate.)
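For readers curious what this kind of workflow looks like in code, here is a deliberately tiny sketch. It is only an illustration, not Point Blue’s actual pipeline: `detect_penguins` stands in for a trained detection model, and the “survey” is five made-up images rather than 6,000 real ones. The point is the shape of the workflow – fan the images out to many workers (in the cloud, many machines), then combine the per-image counts.

```python
from concurrent.futures import ThreadPoolExecutor

def detect_penguins(image):
    # Stand-in for a trained AI detection model. In a real cloud
    # workflow, this call would run on a remote worker; here we just
    # read a prepared count from the fake "image".
    return image["count"]

def survey_count(images, workers=8):
    # Fan the images out across parallel workers, then sum the
    # per-image counts into one colony total.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(detect_penguins, images))

# A tiny simulated survey: five images instead of 6,000.
survey = [{"count": n} for n in (120, 95, 143, 88, 101)]
print(survey_count(survey))  # → 547 penguins across the five images
```

Because each image is counted independently, doubling the number of machines roughly halves the wall-clock time – which is why results can arrive in hours rather than years.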

The catch is that we need to train the AI to make sense of the data before it can tackle that data at scale. Paradoxically, this often means substantial human contributions before we let the machines do their work. In many cases, numerous community scientists contribute to the training so that the AI has access to as much verified data as possible. For example, our Soundscapes to Landscapes project recruited hundreds of volunteers to develop the data that trained its AI models.
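To make the idea of “training” concrete, here is a deliberately tiny sketch. It is not the Soundscapes to Landscapes system – the real models are deep neural networks trained on spectrograms of recordings – but it shows the core loop: volunteer-verified labels teach a model what each species “sounds like,” and new clips are then classified against what was learned. The two-number features and the species names are invented for illustration.

```python
def train(labeled_clips):
    """Average the features of each species' volunteer-verified clips
    to get one "centroid" per species: the model's memory of that sound."""
    sums, counts = {}, {}
    for features, species in labeled_clips:
        s = sums.setdefault(species, [0.0] * len(features))
        for i, x in enumerate(features):
            s[i] += x
        counts[species] = counts.get(species, 0) + 1
    return {sp: [x / counts[sp] for x in s] for sp, s in sums.items()}

def classify(model, features):
    """Label a new clip with the species whose centroid is closest."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(model, key=lambda sp: dist(model[sp]))

# Volunteer-verified examples: (fake sound features, species label).
training = [
    ([0.9, 0.1], "wrentit"), ([0.8, 0.2], "wrentit"),
    ([0.1, 0.9], "towhee"), ([0.2, 0.8], "towhee"),
]
model = train(training)
print(classify(model, [0.85, 0.15]))  # → "wrentit"
```

Notice that the model is only as good as its training labels – which is exactly why the patient, repetitive work of human verification matters so much.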

We are now at a point where we need to focus on organizing such teams of people to do this work, with the promise that the AI will eventually take over and start generating knowledge (e.g., numbers of nesting penguins, then trends in numbers of nesting penguins) from all the data being collected. The work of training AI systems is quite repetitive and boring (or meditative, perhaps) – just the kind of work I would normally call “a perfect task for a machine.” But it relies on a human understanding of context and of the subtleties of visual or sound information that computers don’t grasp until they are shown thousands of examples. And even then, this is not the same as human “understanding” – it is based on mathematical models assembled from trillions of 0s and 1s. It works only because cloud computing is fast enough to crunch all the numbers and there is enough storage available to hold all the sensor data.

Field technician Parker Levinson preparing to launch a drone in Antarctica. Photo credit: Annie Schmidt/Point Blue.

Over the coming five years, Point Blue will build out several AI “workflows” – from sensor deployment to knowledge output – in collaboration with biologists, community scientists, computer scientists, systems engineers, quantitative ecologists, and others. And, importantly, we will need to demonstrate the conservation impact of these systems to justify ongoing investments. I am hopeful that by then we will have entered a new paradigm, in which the AIs we need most have been trained and we have ready access to richer information that drives conservation. Along the way, we have a big opportunity to join with communities around the world who share our vision: that because of the work we are doing now, in the decades to come there will still be thriving wildlife, thriving human populations, and the ecosystems they both rely upon.