
CloudCatcher will be featured on the Science Scribbler Twitch Channel! Go to https://www.twitch.tv/sciencescribbler

CloudCatcher Results

CloudCatcher was initially run as a beta project in April 2020 and launched as a full public project, with the 'Catch That Cloud' workflow, in March 2023. In May 2025, we launched the new 'You Cannot Be Cirrus' workflow.

Over 3,000 citizen scientists have contributed to the project so far!

Thank you to everyone who classified images.

Beta test results

The results from the beta project were used to investigate how accurate this method is at identifying clouds in satellite images. We wanted to see whether the data that we get from CloudCatcher would be good enough to use for real satellite validation of cloud masks in the future.

The results of this investigation were published in October 2024 as a research article in the Royal Meteorological Society journal Weather! The full article can be found at https://doi.org/10.1002/wea.7635.

The article presents the motivation and initial results of the CloudCatcher project. It shows that people are good at spotting cloud, achieving an overall accuracy of 94% over ocean and 90% over land, and a probability of correctly identifying cloud of 99% over ocean and 94% over land.
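To make those two headline numbers concrete: overall accuracy is the fraction of all scenes classified correctly, while the probability of correctly identifying cloud (often called the probability of detection, or hit rate) is the fraction of genuinely cloudy scenes that were flagged as cloudy. The Python sketch below shows how both are computed from confusion-matrix counts; the numbers are hypothetical, chosen only to illustrate the arithmetic, and this is not the project's analysis code.

```python
def accuracy(tp, tn, fp, fn):
    """Overall accuracy: fraction of all scenes classified correctly."""
    return (tp + tn) / (tp + tn + fp + fn)

def probability_of_detection(tp, fn):
    """Probability of detection (hit rate): fraction of truly cloudy
    scenes that were flagged as cloudy."""
    return tp / (tp + fn)

# Hypothetical counts for illustration only (not the published data):
# tp = cloudy scenes correctly flagged cloudy, fn = cloudy scenes missed,
# tn = clear scenes correctly flagged clear, fp = clear scenes flagged cloudy.
tp, tn, fp, fn = 880, 60, 40, 20
print(f"accuracy: {accuracy(tp, tn, fp, fn):.0%}")                           # 94%
print(f"probability of detection: {probability_of_detection(tp, fn):.0%}")   # 98%
```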

We can see from the data that people are very good at spotting thick cloud but often miss the semi-transparent type. We are working on a second workflow that will run alongside 'Catch That Cloud', using different types of 'false colour' image to make cirrus cloud easier to spot.

Results from first public workflow 'Catch That Cloud'

In this first public workflow, we wanted to check the performance of the satellite cloud mask over land, and put together a dataset of randomly selected images from all over the globe. We also put together a UK-only dataset, to see if that might be popular with our UK audience.

Our citizen scientists classified almost 5,000 images as part of this workflow!

We learnt from our beta run that citizen scientists were really good at spotting opaque clouds in satellite images. The results of our first public workflow therefore focus on the scenes where citizen scientists agreed there was cloud.

For a scene to be classified as cloudy by citizen scientists, over 90% of volunteers had to categorise it as either 'cloudy' or 'both'. The pie charts below show the number of scenes from each workflow that were identified as cloudy by our volunteers. We can see that there is a higher proportion of cloudy scenes in the UK workflow than in the Global dataset.

The percentage of scenes identified as cloudy in the Global Workflow (left) and UK Workflow (right)
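In code, the 90% consensus rule above is straightforward to apply per scene. The sketch below is a minimal illustration, not the project's actual aggregation pipeline; the scene IDs, vote counts and answer labels are all hypothetical.

```python
# Hypothetical per-scene vote counts; keys and numbers are illustrative.
votes = {
    "scene_001": {"cloudy": 28, "clear": 1, "both": 3},
    "scene_002": {"cloudy": 10, "clear": 15, "both": 5},
}

def is_consensus_cloudy(counts, threshold=0.9):
    """A scene counts as cloudy when over `threshold` of volunteers
    chose either 'cloudy' or 'both'."""
    total = sum(counts.values())
    return (counts["cloudy"] + counts["both"]) / total > threshold

cloudy_scenes = [scene for scene, counts in votes.items()
                 if is_consensus_cloudy(counts)]
print(cloudy_scenes)  # ['scene_001']
```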

We now have a collection of citizen scientist-identified cloudy scenes for both workflows. We can use these to validate the computer cloud mask by identifying scenes that our citizen scientists categorised as cloudy but the computer cloud mask flagged as clear. These disagreements reveal where the computer cloud mask has made a mistake. From the Global and UK workflows respectively, there are 72 and 182 of these scenes, as shown in the bar plot below.

Breakdown of citizen scientist-identified cloudy scenes into computer-identified cloudy and clear scenes in the Global Workflow (left) and UK Workflow (right)
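Finding these disagreement scenes amounts to a simple comparison between the two sets of labels. The Python sketch below is illustrative only; the scene IDs and mask labels are hypothetical stand-ins for the real data.

```python
# Hypothetical inputs: scene IDs the volunteers agreed were cloudy,
# and the computer cloud mask's verdict for each scene.
citizen_cloudy = {"scene_001", "scene_004", "scene_007"}
computer_mask = {"scene_001": "cloudy",
                 "scene_004": "clear",
                 "scene_007": "clear"}

# Scenes volunteers called cloudy but the computer mask flagged clear:
# candidate cases where the computer cloud mask has made a mistake.
disagreements = sorted(
    scene for scene in citizen_cloudy if computer_mask.get(scene) == "clear"
)
print(disagreements)  # ['scene_004', 'scene_007']
```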

We examined the computer-generated cloud masks of these scenes to get an idea of what caused the discrepancy between the human and computer classifications, and to double-check that the citizen scientists were correct in their classifications.

In some cases, the computer cloud mask missed large areas of cloud.

True colour image of the scene (left); yellow pixels represent the computer-generated cloud mask (right)

In other cases, we saw that the computer cloud mask missed small areas of cloud.

True colour image of the scene (left); yellow pixels represent the computer-generated cloud mask (right)

There were a few notable cases where the cloud mask struggled to identify lines of smaller, cumulus clouds above land.

True colour image of the scene (left); yellow pixels represent the computer-generated cloud mask (right)

In one particular case, the computer cloud mask struggled to identify the cloud above a river.

True colour image of the scene (left); yellow pixels represent the computer-generated cloud mask (right)

Our citizen scientists have now identified 1,933 scenes as cloudy, and we have begun to build our human cloud mask!

A summary of the total number of users, scenes identified and scenes left unidentified can be found in the table below.

Summary table of Catch That Cloud results

These results to date have been presented at a scientific meeting, the Sentinel-3 Validation Team Meeting. There is still some analysis to do, and before we write up the results we want to screen this dataset for thin cloud. The next workflow, 'You Cannot Be Cirrus', will focus on the scenes that weren't confidently identified as cloudy.


You Cannot Be Cirrus!

In this workflow we use false colour images which show thin, transparent clouds more clearly.
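Roughly speaking, a false colour image is made by mapping three satellite bands, including ones outside the visible range, onto the red, green and blue channels of an ordinary image, so that thin cirrus stands out. The NumPy sketch below is a minimal illustration; the band choice (for example, shortwave-infrared into the red channel) and the simple min-max stretch are assumptions, not the project's actual recipe.

```python
import numpy as np

def false_colour(band_r, band_g, band_b):
    """Stack three satellite bands into an RGB image, stretching each
    band independently to the 0-1 range. Which band goes into which
    channel is the 'false colour' recipe."""
    channels = []
    for band in (band_r, band_g, band_b):
        lo, hi = band.min(), band.max()
        channels.append((band - lo) / (hi - lo + 1e-12))
    return np.dstack(channels)

# Hypothetical 100x100 band arrays standing in for real satellite data.
swir, nir, red = (np.random.rand(100, 100) for _ in range(3))
rgb = false_colour(swir, nir, red)  # e.g. SWIR -> R, NIR -> G, red -> B
print(rgb.shape)  # (100, 100, 3)
```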

Watch this space!