research-article
DOI:10.1145/2488388.2488421

Pick-a-crowd: tell me what you like, and I'll tell you what to do

Published: 13 May 2013

ABSTRACT

Crowdsourcing makes it possible to build hybrid online platforms that combine scalable information systems with the power of human intelligence to complete tasks that are difficult for current algorithms to tackle. Examples include hybrid database systems that use the crowd to fill in missing values or to sort items along subjective dimensions such as picture attractiveness. Current approaches to crowdsourcing adopt a pull methodology: tasks are published on specialized Web platforms where workers pick their preferred tasks on a first-come-first-served basis. While this approach has many advantages, such as simplicity and short completion times, it does not guarantee that a task is performed by the most suitable worker. In this paper, we propose and extensively evaluate a different crowdsourcing approach based on a push methodology. Our proposed system carefully selects which workers should perform a given task based on worker profiles extracted from social networks. Workers and tasks are automatically matched using an underlying categorization structure that exploits entities extracted from the task descriptions on the one hand, and categories liked by the user on social platforms on the other. We experimentally evaluate our approach on tasks of varying complexity and show that our push methodology consistently yields better results than standard pull strategies.
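The push assignment described in the abstract can be illustrated with a minimal sketch (not taken from the paper): assuming each task has already been mapped to a set of categories via the entities found in its description, and each worker's social-network "likes" are available as a set of categories, workers can be ranked for a task by a simple overlap score such as Jaccard similarity. The function names and data below are hypothetical.

# Illustrative sketch only: a hypothetical worker-task matcher, not the
# authors' actual model. Assumes task categories (derived from entities in
# the task description) and worker "likes" are already available as sets.
from typing import Dict, List, Set

def rank_workers(task_categories: Set[str],
                 worker_likes: Dict[str, Set[str]],
                 top_k: int = 5) -> List[str]:
    """Rank workers by the Jaccard overlap between the categories a task
    covers and the categories a worker has liked on a social platform."""
    def score(liked: Set[str]) -> float:
        if not liked or not task_categories:
            return 0.0
        return len(task_categories & liked) / len(task_categories | liked)
    ranked = sorted(worker_likes, key=lambda w: score(worker_likes[w]), reverse=True)
    return ranked[:top_k]

# Hypothetical example: a task about football clubs is pushed first to the
# workers whose liked categories overlap most with the task's categories.
task = {"Association football", "Sports clubs", "UEFA"}
workers = {
    "alice": {"Association football", "UEFA", "Travel"},
    "bob":   {"Cooking", "Travel"},
    "carol": {"Sports clubs", "Association football"},
}
print(rank_workers(task, workers))   # ['carol', 'alice', 'bob']

This only sketches the general idea of category-based worker selection; the paper evaluates its own matching scheme against pull-based assignment on tasks of varying complexity.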


Published in

      WWW '13: Proceedings of the 22nd international conference on World Wide Web
      May 2013
      1628 pages
      ISBN:9781450320351
      DOI:10.1145/2488388

Copyright © 2013 held by the International World Wide Web Conference Committee (IW3C2).

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      • Published: 13 May 2013


      Qualifiers

      • research-article

      Acceptance Rates

WWW '13 Paper Acceptance Rate: 125 of 831 submissions, 15%. Overall Acceptance Rate: 1,899 of 8,196 submissions, 23%.
