
FAQ

What is the Rubin Observatory?

The NSF-DOE Vera C. Rubin Observatory is a new facility that will be transformational for many areas of astrophysics. It is located on Cerro Pachón in Chile and houses the largest digital camera ever built, roughly the size of a small car. The Rubin Observatory will carry out the Legacy Survey of Space and Time (LSST), a survey that will run for 10 years. For more information, visit https://rubinobservatory.org/.

What is an alert?

An alert is essentially what it sounds like: a little alarm bell that suggests something interesting has happened. The Rubin Observatory sends out an alert whenever it detects something new or changed in the sky. We expect the Rubin Observatory to generate ~10 million alerts per night. However, some of these alerts will be false alarms (or "artefacts"). Your job is to identify these artefacts. In this project, you will primarily be shown alerts that originated from relatively nearby galaxies, as the default image subtraction software can struggle in these cases. More information on Rubin alerts can be found here and here.

What is "image subtraction"?

Image subtraction is the default method that the Rubin Observatory (and many other facilities) uses to identify changes in the sky. It works as follows: whenever a new image is taken (a "science image"), the pipeline compares it to what that patch of sky should look like based on previous observations (the "template image"). These templates are usually a combination of multiple previous images taken over a long period of time. Subtracting the template image from the science image produces a so-called "difference image". Any changes in the sky should then be clearly visible in this difference image. The Rubin Observatory quantifies this by calculating the signal-to-noise ratio (SNR), and if this is greater than some predefined threshold, that region is counted as a detection. This process usually works quite well, although it often struggles when the target is embedded in a bright background galaxy.
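As a rough illustration (and emphatically not the actual Rubin pipeline), the subtract-and-threshold step can be sketched in a few lines of Python. The array sizes, the noise level, and the SNR threshold of 5 are all made-up example values, and real pipelines also match the point spread functions and align the images before subtracting:

```python
import numpy as np

def find_detections(science, template, noise_sigma, snr_threshold=5.0):
    """Toy difference-imaging step: subtract the template from the
    science image and flag pixels whose signal-to-noise ratio exceeds
    a threshold. Assumes the images are already aligned and PSF-matched,
    which real pipelines have to do explicitly."""
    difference = science - template
    snr = difference / noise_sigma          # per-pixel signal-to-noise
    return difference, snr > snr_threshold  # boolean detection mask

# Tiny example: a flat 100-count background with one new 60-count source.
template = np.full((5, 5), 100.0)
science = template.copy()
science[2, 2] += 60.0                       # a "new" transient appears
diff, mask = find_detections(science, template, noise_sigma=10.0)
print(mask[2, 2])                           # True: SNR of 6 exceeds 5
```

Only the pixel containing the new source survives the cut; everywhere else the difference image is zero, which is exactly the behaviour the real pipeline aims for.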

Why is it more difficult to find signals when they are embedded in bright host galaxies?

For an intuitive explanation: imagine looking for a dim light bulb. It’s much easier to find in a dark room than in a room with all the lights on. Likewise, it is easier to hear crickets in a quiet field than at a noisy concert. The signal is still there, but it takes more effort to find it.

A more detailed explanation would begin by noting that, although image subtraction pipelines seem straightforward in theory, they are quite complex in practice. For them to work as expected, you need to know the point spread function of your images exactly, align your images perfectly, and have robust templates. When there is little background signal, you can get away with suboptimal conditions. However, in regions with a lot of background signal, even a small misalignment can produce large artefacts in the difference image.
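The misalignment effect can be seen in a toy one-dimensional sketch (purely illustrative; the profiles and the one-pixel shift are invented for the example). Subtracting a bright, steep galaxy profile from a copy of itself shifted by a single pixel leaves large spurious residuals, while the same shift on a faint, flat background leaves essentially nothing:

```python
import numpy as np

def subtraction_residual(profile, shift):
    """Subtract a profile from a shifted copy of itself and return the
    largest absolute residual. This mimics what a small alignment error
    does during image subtraction, in one dimension."""
    shifted = np.roll(profile, shift)
    return np.max(np.abs(shifted - profile))

x = np.arange(100)
# A steep, bright galaxy profile vs. a faint, nearly flat sky background.
bright_galaxy = 1000.0 * np.exp(-((x - 50) ** 2) / 20.0)
faint_sky = np.full(100, 5.0)

print(subtraction_residual(bright_galaxy, 1))  # large spurious residual
print(subtraction_residual(faint_sky, 1))      # exactly zero here
```

The signal from the misalignment scales with how steeply the brightness changes from pixel to pixel, which is why alerts embedded in bright galaxies are so much more prone to artefacts.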

Why is the science image of seemingly worse quality than the template image?

That is because the template is usually a combination of multiple previous images taken over a long period of time, which averages down the noise, while the science image is just a single exposure.

Why am I seeing the same image multiple times?

You will never be shown the exact same image twice. However, you might be shown the same region of the sky at a different date. The template image will look the same in these cases, but the other panels can differ slightly.

How will these classifications be used?

Our team will first try to figure out what we are actually seeing. From these images alone it is hard to tell exactly what each event is; a more detailed analysis is needed. It could be a variety of things, such as pre-SN variability or a common envelope event. See the 'Research' tab for more information on those phenomena. We will also use these classifications to identify interesting targets as they happen. By spotting an interesting target early, we can quickly follow it up with other telescopes while the event is still unfolding. This will allow us to collect more data and gain a deeper understanding of what is happening in real time.

What kind of images are we looking at now?

In this project, we show alerts from the Vera C. Rubin Observatory that come from relatively nearby galaxies. We plan to upload new targets on a regular basis as the observatory releases new alerts. This means that what you see has been observed very recently and that you might be the first person to look at it! While we expect the Rubin Observatory to eventually generate millions of alerts each night, it is still warming up and slowly ramping up to that number. Therefore, we currently also supplement our sample with data taken while the observatory was testing its systems (called "Data Preview 1"), which was taken between October and December 2024. On top of that, we also show simulated alerts to better understand what kinds of alerts are difficult to classify.

Why can't AI do this yet?

That's a great question! It is actually part of this project as well. We hope to eventually find a way for AI to do most of this work, with humans only looking at the interesting targets. However, we first need some examples to train the AI on.

Do you want to try some other citizen science projects?

Feel free to check out Experiment, a new Zooniverse-adjacent platform where we host more experimental citizen science projects. We're currently hosting projects like "Bar Brigade" and "Tidal Tales", where you can help us identify bars and merging galaxies!

Are there other citizen science projects that use Rubin LSST data?

Absolutely, you can find a full list of Rubin Zooniverse projects here. In particular, this project is very similar to Rubin Difference Detectives. However, our science goals are quite different. We are specifically focussing on faint alerts that come from nearby galaxies, with the aim of finding pre-supernova variability and common envelope events. The team behind Rubin Difference Detectives wants to train a machine-learning classifier to make future Rubin alerts more reliable, which will eventually also directly benefit us.

Background image credit: RubinObs/NOIRLab/SLAC/NSF/DOE/AURA.