ABSTRACT
Crowdsourcing systems are widely used to solve problems that require human intelligence. Despite the growing adoption of the crowdsourcing paradigm, there are no established guidelines or concrete recommendations for task design with respect to key parameters such as task length, monetary incentive, and the time required for task completion. In this paper, we propose tuning these parameters based on findings from extensive experiments on, and analysis of, categorization tasks. We examine the behavior of workers who complete categorization tasks in order to identify measures that make task design more effective.
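The three design parameters the abstract names (task length, monetary incentive, completion time) can be pictured as a small record that a task designer would tune. The following is a minimal hypothetical sketch, not taken from the paper: the class name, fields, and the derived hourly-rate check are illustrative assumptions only.

```python
from dataclasses import dataclass

@dataclass
class CategorizationTask:
    """Hypothetical container for the task-design parameters named in the abstract."""
    length_units: int        # task length, e.g. number of items to categorize
    reward_usd: float        # monetary incentive paid per completed task
    expected_minutes: float  # time required for task completion

    def effective_hourly_rate(self) -> float:
        # Derived quantity: one simple way to compare incentive settings
        # across task designs of different lengths.
        return self.reward_usd * 60.0 / self.expected_minutes

# Example: a 10-item task paying $0.20, expected to take 3 minutes.
task = CategorizationTask(length_units=10, reward_usd=0.20, expected_minutes=3.0)
print(round(task.effective_hourly_rate(), 2))  # prints 4.0
```

Such a derived rate is only a sanity check; the paper's point is that these parameters have no established tuning guidelines.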
Breaking Bad: Understanding Behavior of Crowd Workers in Categorization Microtasks