Now live on Zooniverse! Explore Washington’s underwater forests and contribute directly to conservation research!
Check out our project update to learn more about the images you are seeing.
Important note: Subjects from Workflow 1 (Yes/No) are routed into Workflow 2 (multiple choice). This means Workflow 2 may not always have images available. Please check back regularly to continue contributing to the multiple choice workflow. Thank you!
The red dot is over a small feature, but most of the image shows a different category. What should I choose?
Please focus only on what the red dot is touching. The rest of the image is there to give you context, but your answer should be based on the center point.
The red dot seems to touch two categories. Which one should I pick?
Choose the category that covers most of the red dot. Go with your best judgment. There isn’t always a single perfect answer.
One category is growing on top of another. What should I choose?
Choose the category directly under the red dot.
Example: If Red algae - CCA is growing on a cobble and the red dot touches the Red algae - CCA, select Red algae - CCA.
Exception: small barnacles can sometimes look like shell on hard surfaces. If the dot falls on small barnacles attached to rock or cobble, label the surface they are attached to instead.
Is there a back button?
Yes! Within the multiple choice workflow, you can use the "Back" button to return and choose a different group before finishing the classification.
However, once a subject is marked “Done,” you cannot go back to change your answer. Don't worry if you accidentally misclassify something. Each image is reviewed by multiple volunteers, and we use everyone’s answers to reach a consensus.
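For the curious, the idea behind combining everyone's answers can be sketched in a few lines of Python. This is a toy illustration of simple majority voting; our actual aggregation method may be more sophisticated, and the function name and labels below are made up for the example:

```python
from collections import Counter

def consensus_label(votes):
    """Return the most common label among volunteer votes, plus the
    fraction of volunteers who agreed with it.
    (Illustrative sketch only; not the project's actual pipeline.)"""
    counts = Counter(votes)
    label, n = counts.most_common(1)[0]
    return label, n / len(votes)

# One stray vote does not change the consensus:
label, agreement = consensus_label(
    ["Kelp - holdfast", "Kelp - holdfast", "Red algae - CCA", "Kelp - holdfast"]
)
# label == "Kelp - holdfast", agreement == 0.75
```

This is why a single misclick is harmless: with several volunteers per image, the majority answer wins out.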
Why is my patch so blurry?
All camera lenses introduce some distortion, and this effect is amplified underwater. Blurring or color “bleeding” often occurs near the edges of the original image, where distortion is strongest. Do your best to decide, but if the area under the red dot truly can’t be identified, this is when you should choose the unknown category.
What are "edge cases" in the Field Guide?
While we aim to provide ample representations for each field guide entry, some categories exhibit notable variations in appearance. We include edge cases to make you aware of some of these differences. Life stages, growth patterns, water conditions, depth and camera positioning are some of the many factors influencing how the subject appears. Some edge cases are also there to demonstrate what falls within the category if the description is unclear. As always, if your subject has you stumped, submit your best guess and/or bring it to our attention with "Done & Talk."
How do you collect your images?
We outfit our ROVs with two downward-facing GoPro 12s, which take pictures of the seafloor. After collecting our images in the field, we clean them using Adobe Lightroom Classic. Every image must be brightened, color-corrected and fine-tuned before it is ready for you to extract meaningful data from. However, our team has recently been training machine learning models to automate this photo-editing process for us, which you can learn more about here.
What are some challenges you face in the field?
Weather conditions, current, visibility underwater and tides are just a few of the environmental challenges we face when out on (and below!) the water. We’ll also encounter software hiccups and hardware glitches in the field. Other boaters can cause problems by generating waves or ignoring our dive flag. There’s a lot that needs to go right for us to conduct our work as intended.
Where do you deploy the ROV?
We take our ROV to various sites across the Salish Sea. Most of our work happens just offshore of Seattle in Elliott Bay, where we have eight survey sites. We conduct surveys in the San Juan Islands annually, which is a much more challenging environment due to strong currents. We have also taken the ROV to Washington’s outer coast, where we hope to expand our research soon.
How deep can the ROV go?
The model we use can go down to 100 meters (328 feet). If we swap the acrylic electronics enclosure for an aluminum tube, the ROV can go down to 300 meters (984 feet). We typically fly our ROVs no deeper than 20 meters (65 feet), as below that depth there is not enough sunlight for kelp to grow. The max depth we have reached is 44 meters (145 feet). The ROV is also attached to a 150-meter (492-foot) tether, which further limits how deep (and far) we can go.
How fast can the ROV go?
Our ROVs have a maximum rating of 1.5 meters/second (5 feet/second). But we tend to go much slower, since we want to capture all the creatures hiding within the kelp or under rocks on the seafloor. We typically operate at a steady 0.15 meters/second (0.5 feet/second).
How long can the ROV stay underwater?
It depends on how the vehicle is powered. Our battery-powered model, ROV Nereo, can stay down continuously for a couple of hours. Our model that receives power from the boat, ROV Lutris, could stay down for about eight hours. Run times vary based on how hard the ROV needs to work to fight against the environment (currents, wave action, etc.).
How long did it take to build the ROV?
It took us about two days to build our ROVs. They’re assembled like a LEGO set!
You can see the full build for ROV Lutris that we did alongside Port of Seattle personnel via the timelapse videos linked here.
How much does the ROV cost?
The basic ROV build costs about $5,000, a far cry from the early days of ROV technology when models could easily cost millions.
What’s the coolest thing you’ve seen on the ROV?
How could we possibly narrow it down? Check out some of our highlights via the links below!
What does it mean to be an "open-source" classification model?
“Open-source” means the computer code used to build, train and run the model is publicly available. So, anyone can inspect how the model works, download the code and use or modify it for their own projects.
We use open-source tools such as CoralNet-Toolbox, which is built on open-source models from Ultralytics. This accessibility makes our methods transparent, reusable and easy for others to build upon.
Is the CoralNet-Toolbox model that you've trained via your ROV survey imagery accessible to others?
Yes! We have made that trained ML model publicly available. (Technically, it is "open-access," meaning anyone can access the content.) For example, on the README page of this GitHub repository, you will find a link to our trained model. You can also find that link here.
Why did you choose the YOLO11s-cls model?
The Ultralytics YOLO11s-cls is designed to assign one category per image, which matches our task. Each image patch in our project represents a small area of the seafloor, and the model predicts which category appears at the center point and assigns a label accordingly.
This type of model was originally trained on a very large image dataset, ImageNet, which contains over 14 million labeled images! Starting from that broad visual knowledge, we then fine-tune the model on a single computer using our own kelp forest imagery, so it can learn the specific patterns found in our ecosystem. It is fast, efficient and well-suited for classifying small image patches like the ones used in this project.
How is your YOLO model performing?
At the start of the project, the model agreed with expert classification about 84% of the time on an independent test dataset. Performance varies by category: highly textured, distinctive categories (like Kelp - holdfast) are predicted consistently well, whereas categories that look similar to one another (such as some of our substrate categories) are more challenging for the model. Crucially, we will use the data YOU help us gather here to improve the model's accuracy!
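That "about 84%" figure is an agreement rate: the fraction of test patches where the model's prediction matches the expert's label. Here is a tiny Python sketch of that calculation (the function name and toy labels are invented for illustration, not taken from the project's codebase):

```python
def agreement_rate(model_preds, expert_labels):
    """Fraction of patches where the model's predicted category
    matches the expert's label. (Hypothetical helper for illustration.)"""
    matches = sum(p == e for p, e in zip(model_preds, expert_labels))
    return matches / len(expert_labels)

# Toy example: 3 of 4 predictions match the expert labels.
preds  = ["Kelp - holdfast", "Sand", "Cobble", "Sand"]
labels = ["Kelp - holdfast", "Sand", "Sand",   "Sand"]
rate = agreement_rate(preds, labels)  # 0.75
```

On the real test set, the same calculation over thousands of patches yields the roughly 84% figure quoted above.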
I've been hearing a lot about AI, chatbots and data centers. How does your work with ML models relate to these subjects?
While ML is technically a form of artificial intelligence, it is NOT generative AI, which is the branch of AI that includes "chatbots." Our ML models cannot create new text or images. The model we use is simply a computer vision program that can be trained over time to detect certain visual patterns.
Because the task is focused on pattern recognition rather than generating new content, these models are much simpler and require far less computing power. In fact, our models can be trained and run on a single computer, allowing us to process large image datasets efficiently without the large energy and infrastructure demands associated with generative AI systems.
How does bull kelp reproduce?
Bull kelp blades (the long strands attached to the bulb) develop lighter patches called sori, which contain bull kelp spores. A sorus will eventually dislodge from the kelp blade and fall to the seafloor, where its spores are released and fertilized, then lie dormant until the next growing season.
Is bull kelp edible?
The stipe (the long “stem”) and blades are not only edible, but rich in vitamins and minerals. You can pickle it, use it as a thickening agent or toss it in soups and salads. Additionally, Indigenous communities throughout the Salish Sea have used bull kelp as tools and for cultural practices since time immemorial.
How do different groups lead recovery efforts?
One of our partners, Puget Sound Restoration Fund, has multiple bull kelp outplanting sites throughout Puget Sound. They lay several longlines along the seafloor with lab-grown bull kelp spores. The spores attach to the line and grow to maturity, releasing their own spores and steadily increasing the potential for sustained presence at that site in future generations. You can learn more about this work here!