
Modus Operandi of Crowd Workers: The Invisible Role of Microtask Work Environments

Published: 11 September 2017

Abstract

The ubiquity of the Internet and the proliferation of electronic devices have resulted in flourishing microtask crowdsourcing marketplaces, such as Amazon MTurk. An aspect that has remained largely invisible in microtask crowdsourcing is that of work environments, defined as the hardware and software affordances at the disposal of crowd workers, which they use to complete microtasks on crowdsourcing platforms. In this paper, we reveal the significant role of work environments in shaping crowd work. First, through a pilot study surveying the good and bad experiences workers have had with UI elements in crowd work, we identified the typical issues workers face. Based on these findings, we then deployed over 100 distinct microtasks on CrowdFlower, addressing workers in India and the USA in two identical batches. These tasks emulated the good and bad UI element designs that characterize crowdsourcing microtasks. We recorded hardware specifics, such as CPU speed and device type, as well as software specifics, including the browsers used to complete tasks, the operating systems on the devices, and other properties that define the work environments of crowd workers. Our findings indicate that crowd workers are embedded in a variety of work environments that influence the quality of the work they produce. To confirm and validate our data-driven findings, we then carried out semi-structured interviews with a sample of Indian and American crowd workers from this platform. We found that, depending on the design of UI elements in microtasks, some work environments support crowd workers more than others. Based on the overall findings from all three studies, we introduce ModOp, a tool that helps design crowdsourcing microtasks suitable for diverse crowd work environments. We empirically show that using ModOp reduces the cognitive load of workers, thereby improving their user experience without affecting accuracy or task completion time.
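The environment-capture step described above (recording CPU, device type, browser, and operating system from within a task) maps onto standard Web APIs. The following is a minimal sketch in TypeScript of how such client-side signals could be gathered from a browser-based microtask; the `WorkEnvironment` shape and the logging endpoint are illustrative assumptions, not the paper's actual instrumentation.

```typescript
// Minimal sketch: collecting work-environment signals from a
// browser-based microtask using standard Web APIs. The endpoint and
// payload shape below are hypothetical.

interface WorkEnvironment {
  userAgent: string;       // identifies browser and operating system
  logicalCores: number;    // coarse CPU proxy; clock speed is not exposed by browsers
  screenWidth: number;
  screenHeight: number;
  devicePixelRatio: number;
  touchCapable: boolean;   // rough mobile-vs-desktop signal
}

function captureWorkEnvironment(): WorkEnvironment {
  return {
    userAgent: navigator.userAgent,
    logicalCores: navigator.hardwareConcurrency || 1,
    screenWidth: window.screen.width,
    screenHeight: window.screen.height,
    devicePixelRatio: window.devicePixelRatio,
    touchCapable: navigator.maxTouchPoints > 0,
  };
}

// Attach the captured environment to a task submission; the URL is a
// placeholder for whatever collection endpoint the task uses.
async function logEnvironment(taskId: string): Promise<void> {
  await fetch("https://example.org/api/work-environment", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ taskId, env: captureWorkEnvironment() }),
  });
}
```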

Published in: Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, Volume 1, Issue 3 (September 2017). EISSN: 2474-9567. Issue DOI: 10.1145/3139486.

      Copyright © 2017 ACM


Publisher: Association for Computing Machinery, New York, NY, United States.

Publication History: Received 1 February 2017; revised 1 May 2017; accepted 1 July 2017; published 11 September 2017.
