
Results

Thank you!

This project is now complete, and the results have been published as an open-access paper in PLOS ONE.
Read about the results here!

Abstract
Repeated counts of animal abundance can reveal changes in local ecosystem health and inform conservation strategies. Unmanned aircraft systems (UAS), also known as drones, are commonly used to photograph animals in remote locations; however, counting animals in images is a laborious task. Crowd-sourcing can reduce the time required to conduct these censuses considerably, but must first be validated against expert counts to measure sources of error. Our objectives were to assess the accuracy and precision of citizen science counts and make recommendations for future citizen science projects. We uploaded drone imagery from Año Nuevo Island (California, USA) to a curated Zooniverse website that instructed citizen scientists to count seals and sea lions. Across 212 days, over 1,500 volunteers counted animals in 90,000 photographs. We quantified the error associated with several descriptive statistics to extract a single citizen science count per photograph from the 15 repeat counts and then compared the resulting citizen science counts to expert counts. Although proportional error was relatively low (9% for sea lions and 5% for seals during the breeding seasons) and improved with repeat sampling, obtaining the 12+ volunteer counts required to reduce error was prohibitively slow, taking on average 6 weeks to estimate animals from a single drone flight covering 25 acres, despite strong public outreach efforts. The single best algorithm was 'Median without the lowest two values', demonstrating that citizen scientists tended to under-estimate the number of animals present. Citizen scientists accurately counted adult seals, but accuracy was lower when sea lions were present during the summer and could be confused for seals. We underscore the importance of validation efforts and careful project design for researchers hoping to combine citizen science with imagery from drones, occupied aircraft, and/or remote cameras.
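
To make the aggregation step concrete, here is a minimal Python sketch of the 'Median without the lowest two values' statistic and the proportional-error comparison against an expert count. The repeat counts shown are invented for illustration, and the paper's actual implementation may differ in its details.

```python
import statistics

def median_without_lowest_two(counts):
    """Aggregate the repeat volunteer counts for one photo by dropping
    the two lowest values and taking the median of what remains (the
    best-performing statistic described in the abstract)."""
    return statistics.median(sorted(counts)[2:])

def proportional_error(citizen_count, expert_count):
    """Proportional error of the aggregated citizen science count
    relative to the expert count for the same photo."""
    return abs(citizen_count - expert_count) / expert_count

# Hypothetical example: 15 repeat counts of one photo, expert count of 45
repeat_counts = [41, 43, 38, 45, 44, 40, 42, 44, 39, 46, 43, 41, 44, 42, 45]
estimate = median_without_lowest_two(repeat_counts)
print(estimate, round(proportional_error(estimate, expert_count=45), 3))
```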


What if I'm giving you bad data?

This project launched in August of 2019, and one thing we've heard over and over again is: "What if I'm wrong?" This is a valid question, and it's one that we had when we were creating this project. First and foremost, it’s important to keep in mind that we used to census these animals by travelling out to the island and counting by hand. This was a nearly impossible task considering how much the animals move and how many are out of sight on offshore rocks. So, no matter what, the counts we are getting now will be much more accurate than the ones we had before!

However, we also understand that our project is unlike many other Zooniverse projects in that it features less widely known species, lower-resolution photos that are sometimes blurry, and small animal subjects. All of these factors present obstacles that our volunteers must overcome, and they give extra weight to the question of data credibility. So, we set out to prove that our volunteers are successful at counting seals and sea lions the only way we know how: with cold, hard data!

Using the sea lion counts from 100 photos, we compared volunteer counts with our expert counts and calculated the percent error for each. We then created a series of graphs comparing the number of classifiers per photo with the percent error, and found that as the number of people counting each photo increases, the error decreases. We have posted one of these graphs on our Talk page for your reference. After crunching some numbers, we found that by dropping the highest count values and taking the average of the rest, we end up with a predicted 0% error! Interestingly, this means that our volunteers generally overestimate the number of animals in a photo. But more importantly, it shows the power of crowd-sourced science in projects like ours: 0% error is impossible to beat!
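
To illustrate the kind of calculation described above, here is a hedged Python sketch that applies a 'drop the highest values and average the rest' rule and estimates how the percent error changes with the number of classifiers per photo. The number of values dropped, the sample counts, and the resampling approach are assumptions for illustration, not the exact parameters of our analysis.

```python
import random
import statistics

def drop_highest_then_mean(counts, n_drop=2):
    """Aggregate one photo's volunteer counts by discarding the highest
    values and averaging the rest (n_drop = 2 is an illustrative choice)."""
    return statistics.mean(sorted(counts)[:-n_drop])

def percent_error(estimate, expert):
    return 100.0 * (estimate - expert) / expert

def mean_abs_error_by_n(counts, expert, n_trials=500):
    """Mean absolute percent error when only n of the repeat counts are
    used, showing how error shrinks as more people count the same photo."""
    results = {}
    for n in range(3, len(counts) + 1):
        errors = [abs(percent_error(drop_highest_then_mean(random.sample(counts, n)), expert))
                  for _ in range(n_trials)]
        results[n] = statistics.mean(errors)
    return results

# Hypothetical example: 15 repeat counts for one photo, expert count of 50
counts = [47, 52, 49, 55, 51, 48, 53, 50, 54, 49, 52, 46, 51, 50, 53]
print(mean_abs_error_by_n(counts, expert=50))
```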

The truth is, individually we may be wrong, but when we work together we get incredibly close to the truth! Here at the Costa Lab, we have so much confidence in our volunteers, and now the data reflect that confidence too. Thank you to all our volunteers who have stuck with this project and continue to prove exactly how amazing citizen scientists can be!

Want to learn more? Talk to us!


Preliminary Data

How will my data be used?

Each time a volunteer presses the green Done button on a photo, the points marking each animal's head are added to our data set. We take the exact locations of these points and stitch together the images from one day into a complete graph of the island. This way, we can see all of the animals at once and begin to look for patterns in their distribution.
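
As a rough sketch of what happens behind the scenes, the Python below maps each clicked head location from photo pixel coordinates into island-wide coordinates and groups the points by survey day, so one complete graph can be drawn per flight. The field names and the simple offset-and-scale mapping are assumptions for illustration; the real stitching relies on the drone's georeferenced imagery.

```python
from collections import defaultdict

def photo_to_island(x_px, y_px, photo_origin, metres_per_pixel):
    """Convert a clicked point from photo pixel coordinates into island
    coordinates, given where that photo sits in the stitched mosaic.
    (A simplification for illustration only.)"""
    origin_x, origin_y = photo_origin
    return origin_x + x_px * metres_per_pixel, origin_y + y_px * metres_per_pixel

def points_by_survey_day(click_rows):
    """Group island-coordinate points by survey date so each flight day
    can be plotted as one complete graph of animal locations. Each row
    is assumed to carry the click position, the photo's origin in the
    mosaic, a pixel scale, and the survey date."""
    grouped = defaultdict(list)
    for row in click_rows:
        point = photo_to_island(row["x"], row["y"],
                                (row["photo_x"], row["photo_y"]),
                                row["m_per_px"])
        grouped[row["date"]].append(point)
    return grouped
```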

Below are three graphs to illustrate what this looks like. It takes 800-1,000 photos, stitched together, to make one complete graph of the island! Our CAMINO intern, Sarah Wood, has spent part of her summer counting each and every animal that you see below (that's more than 36,000!). She chose three dates at similar times of year: July 15, 2017, June 26, 2018, and July 5, 2019. We focus on June and July because this is when the most sea lions are present on the island and when the most sea lion pups are born.


July 15, 2017


June 26, 2018


July 5, 2019


What do these graphs mean?

At the moment, it’s difficult to say! With only three days from three separate years, we cannot confidently draw conclusions from these graphs alone. It’s up to our wonderful citizen scientists to help us complete our data set! Here at the lab, we aim to be as open as possible about our research over the course of this project, so check back here regularly for updates.

In the meantime, happy counting!

Thank you,
Sarah, Roxanne, and Patrick, your Año Nuevo Animal Count team