Abstract
The ubiquity of the Internet and the widespread proliferation of electronic devices have resulted in flourishing microtask crowdsourcing marketplaces, such as Amazon MTurk. An aspect that has remained largely invisible in microtask crowdsourcing is that of work environments: the hardware and software affordances at the disposal of crowd workers, which they use to complete microtasks on crowdsourcing platforms. In this paper, we reveal the significant role of work environments in shaping crowd work. First, through a pilot study surveying the good and bad experiences workers have had with UI elements in crowd work, we identified the typical issues workers face. Based on these findings, we deployed over 100 distinct microtasks on CrowdFlower, addressed to workers in India and the USA in two identical batches. These tasks emulate the good and bad UI element designs that characterize crowdsourcing microtasks. We recorded hardware specifics such as CPU speed and device type, as well as software specifics including the browsers used to complete tasks, the operating systems on the devices, and other properties that define the work environments of crowd workers. Our findings indicate that crowd workers are embedded in a variety of work environments, which influence the quality of the work they produce. To confirm and validate these data-driven findings, we carried out semi-structured interviews with a sample of Indian and American crowd workers from the platform. Depending on the design of UI elements in microtasks, we found that some work environments support crowd workers better than others. Based on the overall findings from all three studies, we introduce ModOp, a tool that helps design crowdsourcing microtasks suited to diverse crowd work environments. We empirically show that using ModOp reduces the cognitive load of workers, thereby improving their user experience without affecting accuracy or task completion time.
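As a minimal sketch of how the work-environment signals described above (browser, operating system, CPU capability, device type) can be captured from within a browser-based microtask, using only standard Web APIs; the `WorkEnvironment` interface and `collectEnvironment` helper are illustrative assumptions, not the authors' actual instrumentation:

```typescript
// Hypothetical sketch of client-side work-environment capture for a
// browser-based microtask. Names are illustrative, not from the paper.
interface WorkEnvironment {
  userAgent: string;       // identifies browser and operating system
  logicalCores: number;    // coarse proxy for CPU capability
  screenWidth: number;     // viewport constraints shape how UI elements render
  screenHeight: number;
  devicePixelRatio: number;
  touchCapable: boolean;   // rough desktop vs. mobile/tablet signal
}

function collectEnvironment(): WorkEnvironment {
  return {
    userAgent: navigator.userAgent,
    logicalCores: navigator.hardwareConcurrency ?? 1,
    screenWidth: window.screen.width,
    screenHeight: window.screen.height,
    devicePixelRatio: window.devicePixelRatio,
    touchCapable: "ontouchstart" in window || navigator.maxTouchPoints > 0,
  };
}
```

A snippet like this could run once when a task page loads and be submitted alongside the worker's answers, letting the requester relate answer quality and completion time to the environment in which the task was completed.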