Hello! The "Tag Along with Adler" team wanted to share some results of this project, as we have crossed the 90% completion mark! We previously reported early statistics and results on our Results page below.
We wanted to share that the tags you have added as part of the “Tag Along with Adler” workflows have been instrumental not only in our research and the expansion of the Adler’s catalogue language, but also in the creation of a new video game being used to test gamification and citizen science.
Update by Numbers: “Meta Tag Game”
In November 2021 we launched the “Meta Tag Game”, which again asks users to help the Adler Planetarium tag our collections, expanding how we describe our objects and how people like you can find them online. As part of this game you can see how language and word choice may affect your ability to find things. To begin, you will be shown an image of an object in the Adler’s collection along with all the words that make it searchable online: its metadata! Some of these terms were created by museum staff, some were created by AI and machine learning, and some were created by users like you as part of the “Tag Along with Adler” Zooniverse project! That’s right! Your work on this project is now an integral part of the “Meta Tag Game”. As the words fall, it’s your job to collect them: move any words you would use to describe the image to the right, and any words you don’t think describe the image to the left.
We thank you all for your participation, your time, and your enthusiasm for this project. We could not design a game like this without you, and we cannot wait to see how gamification impacts participation and language choice. As a thank you for all the amazing work you have done as part of “Tag Along”, we want to share the early beta version of this game with you all: https://meta-tag-game.herokuapp.com/ Please join in and let us know what you think using the survey on the game site. Thank you for all your participation and hard work; we could not do the work we do without you.
As of the 14th of June, 2021, the Tag Along with Adler project is 50% complete. 2,071 registered volunteers, and countless unregistered participants, have helped to contribute 58,472 classifications. The Tag Along with Adler project was launched with 1,091 Adler collections items, which were sent through both the “Verify AI Tags” and “Tag Images” workflows. Our team made the decision to release these subjects in sets of 100 across both workflows, helping the Adler process exports more quickly and encouraging completion of smaller sets across the project timeline!
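To illustrate the batching approach described above, here is a minimal sketch (not the project's actual tooling) of splitting 1,091 subjects into release sets of 100; the function name and IDs are invented for illustration.

```python
# Illustrative sketch: splitting a collection of subjects into
# sets of 100 for staged release across workflows.
def batch_subjects(subject_ids, batch_size=100):
    """Yield successive batches of subject IDs."""
    for start in range(0, len(subject_ids), batch_size):
        yield subject_ids[start:start + batch_size]

subjects = list(range(1091))            # 1,091 collection items, as in the project
batches = list(batch_subjects(subjects))
print(len(batches))                     # 11 sets: ten of 100 and one of 91
print(len(batches[-1]))                 # 91
```

Releasing smaller sets like this means each batch can be retired and exported independently, rather than waiting for the full collection to finish.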
So far we have processed 5 completed subject sets from each of the workflows, requiring 50 classifications per subject. These 1,000 subjects have already generated a tremendous amount of data and many additional entry points to the Adler’s collections. Let’s share some statistics below.
For the “Tag Images” workflow, Zooniverse participants created 100,389 individual tags for 500 images. We compared these individual tags to the current terms available in the Adler Planetarium Collections Search Catalogue, as well as against the terms created by the two AI models used as part of this project: the Google Cloud Vision API and the Metropolitan Museum of Art Tagger. These comparisons were important for us to be able to understand how the language our project participants used differed from the language of museum cataloguers and human-trained computer tagging models. As you can see below, a very small percentage of tags added by users were already in the Adler’s catalogue, and even fewer had been created by the AI models! For this first half of the project, the median percentage of added tags that were already in the Adler catalogue was 12.2%, and the median percentage of added tags that were also created by the AI models was 7.25%. For our team this was an exciting early assurance of the importance of using crowdsourcing for metadata creation, helping to provide early evidence that cataloguers’ language, and their choice of what to describe, differs from the public’s and is in many ways much more limited. It also helped to show that though AI holds some promise for metadata and tag creation, its success is still extremely dependent on the dataset used to train the AI model!
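The comparison above boils down to a set-overlap calculation: what fraction of crowd-added tags already appear in a reference vocabulary (the catalogue, or the AI-generated tags)? A hedged sketch follows; the tag values and the case-insensitive exact-match rule are assumptions for illustration, not the project's actual matching method.

```python
# Illustrative sketch of the tag comparison: the percentage of user
# tags that also appear in a reference set of terms.
def overlap_pct(user_tags, reference_tags):
    """Percentage of user tags also present in a reference set
    (case-insensitive exact match)."""
    user = {t.lower() for t in user_tags}
    ref = {t.lower() for t in reference_tags}
    if not user:
        return 0.0
    return 100 * len(user & ref) / len(user)

# Invented example data, not real project tags:
user = ["telescope", "brass", "lens", "tripod", "astronomy"]
catalogue = ["Telescope", "refractor", "astronomy"]
ai = ["metal", "brass"]
print(round(overlap_pct(user, catalogue), 1))  # 40.0
print(round(overlap_pct(user, ai), 1))         # 20.0
```

A low overlap percentage, as reported above, means most crowd-added tags are genuinely new search terms rather than duplicates of existing metadata.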
Similarly, for the “Verify AI Tags” workflow, Zooniverse participants created 79,023 individual tags for 500 images! As you can see below, once again, a very small proportion of tags added by users were already in the Adler’s catalogue, and even fewer had been created by the AI taggers! For this first half of the project, the median percentage of added tags that were already in the Adler catalogue was 13.4%, and the median percentage of added tags that were also created by the AI models was 4.5%.
A surprise for the research team was seeing that the tags added as part of the “Verify AI Tags” workflow had a much lower percentage of matches to AI-generated tags than the tags added as part of the “Tag Images” workflow! We had anticipated that seeing the tags alongside the image in the “Verify AI Tags” workflow would have an effect on the language and tags added by users, and it appears it may have. 20% fewer tags were added by participants of the “Verify AI Tags” workflow than by participants of the “Tag Images” workflow. On average (median), participants of the “Verify AI Tags” workflow added 3.1 tags per classification, whereas the median for participants of the “Tag Images” workflow was 4.02 tags per classification. Overall, these early statistics hint that being prompted by tags as part of the workflow may change the way participants tag an image, initially suggesting it may limit the number of tags added by participants!
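The per-classification statistic above is a straightforward median over tag counts. A small sketch with Python's standard library follows; the counts are made up for illustration and are not the project's real export data.

```python
# Illustrative sketch: median number of tags added per classification,
# computed with the standard library. Counts below are invented.
from statistics import median

verify_ai_tag_counts = [3, 2, 4, 3, 3, 1, 5]   # tags per classification
tag_images_tag_counts = [4, 5, 3, 4, 6, 4, 2]

print(median(verify_ai_tag_counts))   # 3
print(median(tag_images_tag_counts))  # 4
```

The median is a sensible choice here because a few very prolific taggers would pull a mean upward, while the median reflects a typical classification.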
Our team is enthusiastic about these early results, with over 100,000 tags already created that will help improve and diversify the cataloguing of our collections. Even more incredible is that these came from over 2,000 individual participants, helping the Adler expand whose voices are included in our collections and changing the way we describe our objects to better serve the public!
We look forward to seeing how the next half of this project progresses, and we thank you all for your continued participation and support of this evolving research project!