Abstract
Expert investigators bring advanced skills and deep experience to analyze visual evidence, but they face limits on their time and attention. In contrast, crowds of novices can be highly scalable and parallelizable, but lack expertise. In this paper, we introduce the concept of shared representations for crowd-augmented expert work, focusing on the complex sensemaking task of image geolocation performed by professional journalists and human rights investigators. We built GroundTruth, an online system that uses three shared representations (a diagram, grid, and heatmap) to allow experts to work with crowds in real time to geolocate images. Our mixed-methods evaluation with 11 experts and 567 crowd workers found that GroundTruth helped experts geolocate images, and revealed challenges and success strategies for expert-crowd interaction. We also discuss designing shared representations for visual search, sensemaking, and beyond.
Supplemental Material
Available for Download
This contains the subtitles file (GTCSCW_captions.sbv) for the video figure, as well as the video figure description (GT_CSCW_video_description).