ABSTRACT
Online crowd labor markets often address issues of risk and mistrust between employers and employees from the employers' perspective, but less often from that of employees. Based on 437 comments posted by crowd workers (Turkers) on the Amazon Mechanical Turk (AMT) participation agreement, we identified work rejection as a major risk that Turkers experience. Unfair rejections can result from poorly designed tasks, unclear instructions, technical errors, and malicious Requesters. Because the AMT policy and platform provide little recourse to Turkers, they adopt strategies to minimize risk: avoiding new and known bad Requesters, sharing information with other Turkers, and choosing low-risk tasks. Through a series of ideas inspired by these findings (including notifying Turkers and Requesters of a broken task, returning rejected work to Turkers for repair, and providing collective dispute resolution mechanisms), we argue that making risk reduction and trust building a first-class design goal can lead to solutions that improve outcomes around rejected work for all parties in online labor markets.
Taking a HIT: Designing around Rejection, Mistrust, Risk, and Workers' Experiences in Amazon Mechanical Turk