FAQ
How rare are self-lensing events?
Very rare! If we're lucky, we expect to find about 10 examples in the whole SuperWASP dataset, which contains light curves of over 2 million stars. No conclusive examples of self-lensing by a black hole in a binary system have ever been found. If you help to find one, you will be making science history!
Why do you use simulated light curves?
The events we're looking for are so rare that we haven't yet observed any real examples. We use simulated light curves for two reasons. The first is to give you examples of what shapes to look for in the SuperWASP data, so that you won't miss a real lensing event when you see one. The second is that including simulated light curves lets us work out the lowest lensing magnification that Zooniverse volunteers can reliably detect.
How can I tell if a light curve is simulated?
Simulated subjects have special metadata entries that record the magnification factor of the lensing event ("Magnification") and the orbital period of the binary system ("Period (days)"). If you inspect the subject metadata by clicking on Subject Info and see these entries, the subject is simulated. We've also added a metadata item ("simulated", with possible values "Yes" or "No") that states directly whether the light curve is simulated. Please remember that even if a subject is simulated, you should only click "Yes" if you can actually see the lensing signal in the light curve. Some simulated signals may be far too faint to see, and clicking "Yes" for every simulated subject could cause us to miss real signals that are present in the data.
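If you ever work with downloaded subject metadata (for example as JSON), the check described above is easy to script. Here is a minimal sketch in Python: the dictionary of example values is entirely made up, and only the field names ("Magnification", "Period (days)", "simulated") come from this FAQ.

```python
# Illustrative sketch only: the example metadata below is made up,
# but the field names are the ones described in this FAQ.

def describe_subject(metadata):
    """Say whether a subject's metadata flags it as simulated."""
    if metadata.get("simulated") == "Yes":
        mag = metadata.get("Magnification")
        period = metadata.get("Period (days)")
        return f"Simulated: magnification {mag}, orbital period {period} days"
    return "Not flagged as simulated"

# Hypothetical example values
example = {"simulated": "Yes", "Magnification": 1.3, "Period (days)": 42.0}
print(describe_subject(example))
```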
Can this be done by a computer?
SuperWASP data are very noisy. That makes it very difficult to predict what the light curve of a lensing event would look like with enough precision to design an automatic detection algorithm, and if we don't predict the shape correctly, such an algorithm will miss real signals.
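To make that concrete, a standard automatic approach is a matched filter: cross-correlate the data with a template of the expected signal. The toy sketch below uses made-up noise levels and a simple Gaussian-bump template (not our real data or pipeline) to show how much detection power is lost when the assumed shape is wrong.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy light curve: flat baseline plus one lensing-like brightening,
# buried in noise. All numbers here are invented for illustration.
n = 2000
t = np.arange(n, dtype=float)
pulse = 0.5 * np.exp(-0.5 * ((t - 800.0) / 12.0) ** 2)
flux = 1.0 + pulse + rng.normal(0.0, 0.3, n)

def detection_score(data, width):
    """Crude matched-filter score: peak cross-correlation with a
    unit-norm Gaussian template, relative to the data scatter."""
    x = np.arange(-int(4 * width), int(4 * width) + 1, dtype=float)
    template = np.exp(-0.5 * (x / width) ** 2)
    template /= np.linalg.norm(template)
    score = np.correlate(data - np.median(data), template, mode="same")
    return score.max() / np.std(data)

# A template close to the true pulse shape scores much higher than a
# badly mismatched one: mis-predicting the shape costs sensitivity.
print("well-matched template:", detection_score(flux, 12.0))
print("mismatched template:  ", detection_score(flux, 150.0))
```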
Modern machine learning methods might be able to do the job. However, unless we're very careful, these algorithms tend to latch onto very subtle features of the data rather than the overall shapes of the peaks we're looking for. To use them, it's essential to have training data containing signals very similar to the real ones. The simulations we use in this project are not accurate enough to serve as training data: they give a rough idea of what the target signals look like, but they differ enough from reality that a machine learning algorithm trained on them could miss many real signals.
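As a toy illustration of that training-data problem, the sketch below (entirely invented data, with scikit-learn's LogisticRegression standing in for any learned classifier) trains on simulated bumps that always occur at one phase, then tests on "real" bumps at other phases. The classifier does well on signals like its training set and much worse on the slightly different ones.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
length = 64
t = np.arange(length, dtype=float)

def make_set(centers, n=400):
    """Half noise-only windows, half with a Gaussian bump at one of
    the given centres. All values are invented for this toy demo."""
    X = rng.normal(0.0, 0.3, (n, length))
    y = np.zeros(n, dtype=int)
    for i in range(n // 2):
        X[i] += 0.8 * np.exp(-0.5 * ((t - rng.choice(centers)) / 4.0) ** 2)
        y[i] = 1
    return X, y

# "Simulated" training signals always sit at the window centre...
X_train, y_train = make_set(centers=[32])
clf = LogisticRegression(max_iter=2000).fit(X_train, y_train)

# ...so the classifier aces signals like its training set but does far
# worse on "real" signals that peak at slightly different phases.
print("signals like the training set:", clf.score(*make_set(centers=[32])))
print("slightly different signals:   ", clf.score(*make_set(centers=[10, 50])))
```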
Why do humans have an advantage?
Humans are very good at extrapolating from a few examples and at spotting patterns that are similar to, but not identical to, the examples they've been shown. That gives humans a big advantage in projects like this one, where we don't have any real examples and we need volunteers to extrapolate from a handful of approximate simulations.