The exploration of space, arguably the most remarkable undertaking in human history, has expanded our scientific and technical knowledge, inspired current and future generations, and fundamentally transformed our perspective on who we are, where we live, and how we fit into the grand scheme of the universe. We continue to make new discoveries about our planetary neighbors on a regular basis.
One of the central challenges in space exploration is the severely limited amount of data that can be transmitted between Earth and a distant spacecraft. Because keeping humans alive in space is so expensive and difficult, most space missions are, in effect, highly specialized robots. It is easy for a robot to collect a great deal of data with a camera or scientific instrument: far too much to send home. Questions like "Do the images look reasonable?", "Which ten images are the most important to send to Earth?", and "What surprising changes or new findings did you observe?" are easy for a human expert to answer, but traditional programming makes encoding these abilities into robots very challenging. From the days of Explorer 1 all the way to today, most missions have followed the mantra "Send the data directly to Earth, don't collect more than you can send, and let the humans decide what to collect next." This approach is generally effective, but it can leave the spacecraft idle while it waits for new instructions, and it prevents the spacecraft from capturing and responding to dynamic or transient events (like dust devils or cryovolcanic plumes).
Using machine learning (ML), we can train a system that runs onboard and models what the scientists think is interesting. Then, in addition to what the scientists have explicitly requested, the spacecraft can continuously scan its environment for "likely interesting" findings to offer the scientists on the next communication pass. This is called onboard autonomy, and it is important to note that it does not (and cannot) replace the scientist. Instead, it offers the experts back home a list of potentially interesting observations, much as a mapping app on your phone might suggest nearby restaurants, or a shopping site might recommend highly rated similar products. The onboard ML has the luxury of processing the high-resolution instrument data in situ, without worrying about transmission bandwidth back to Earth. Then, once it has made its best guess at what scientists might like to see, it lets them know what they could request. This new framework of science-supporting ML autonomy can be summarized as "Observe as much as possible and return the best," rather than the old mantra: "Observe these targets and nothing more."
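To make "return the best" concrete, here is a minimal sketch of one way such an onboard prioritization loop might look in Python. It assumes a trained interestingness model is available as a scoring function and that the downlink budget for the next pass is known; all of the names here (`Observation`, `score_image`, `downlink_budget_bytes`) are hypothetical illustrations, not actual mission flight software.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Observation:
    """One onboard observation awaiting possible downlink (hypothetical)."""
    image_id: str
    size_bytes: int
    score: float = 0.0  # model-estimated scientific interest in [0, 1]


def prioritize(observations: List[Observation],
               score_image: Callable[[str], float],
               downlink_budget_bytes: int) -> List[Observation]:
    """Score every onboard observation with the ML model, then greedily
    fill the downlink budget with the highest-scoring candidates."""
    for obs in observations:
        obs.score = score_image(obs.image_id)

    # Rank by estimated interest, best first.
    ranked = sorted(observations, key=lambda o: o.score, reverse=True)

    # Greedily pack observations until the downlink budget is exhausted.
    selected, used = [], 0
    for obs in ranked:
        if used + obs.size_bytes <= downlink_budget_bytes:
            selected.append(obs)
            used += obs.size_bytes
    return selected
```

The key design point is that the model only *proposes*: the output is a ranked shortlist offered to scientists on the ground, who still make the final call about what to request.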
With the Content-based Object Summarization to Monitor Infrequent Change (COSMIC) project, we aim to inform the development of future Mars orbital spacecraft carrying cameras like the HiRISE instrument currently onboard the Mars Reconnaissance Orbiter (MRO). What if we could leave such a super-high-resolution imager on continuously, analyze all of the science data onboard, and send home a prioritized list of what is discovered? For COSMIC, this specifically means inventorying signs of recent and ongoing surface activity: new craters created by meteorite impacts, springtime blasts of escaping gas as dry ice hidden beneath the surface sublimates, and more. Scientists know that these phenomena are happening, but so far such discoveries have come from happy accidents or from careful campaigns tracking a handful of "likely" places. A system like COSMIC could monitor the whole planet, building a vast catalog of interesting events and landmark features.
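As a rough illustration of the kind of onboard analysis this points toward, the sketch below flags tiles of a newly acquired image that differ markedly from a co-registered reference image of the same site. The tile size and threshold are placeholder values, and simple pixel differencing stands in for what would realistically be a trained change-detection model; none of this represents the actual COSMIC implementation.

```python
import numpy as np


def changed_tiles(before: np.ndarray, after: np.ndarray,
                  tile: int = 64, threshold: float = 12.0):
    """Yield (row, col, score) for tiles whose mean absolute pixel
    difference exceeds the threshold: candidate change events (e.g., a
    fresh impact crater) to offer scientists for downlink.

    Assumes `before` and `after` are co-registered 2-D grayscale arrays
    of the same shape; both assumptions are illustrative.
    """
    rows, cols = before.shape
    for r in range(0, rows - tile + 1, tile):
        for c in range(0, cols - tile + 1, tile):
            diff = np.abs(after[r:r + tile, c:c + tile].astype(float) -
                          before[r:r + tile, c:c + tile].astype(float))
            score = float(diff.mean())
            if score > threshold:
                yield r, c, score
```

Flagged tiles could then feed the same prioritization loop sketched earlier, so that only the most compelling candidate changes compete for the limited downlink.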