Research Article · Public Access

Linguistic Signals under Misinformation and Fact-Checking: Evidence from User Comments on Social Media

Published: 01 November 2018

Abstract

Misinformation and fact-checking are opposing forces in the news environment: the former creates inaccuracies to mislead people, while the latter provides evidence to rebut the former. Both kinds of articles are often posted on social media, where they attract user engagement in the form of comments. In this paper, we investigate linguistic (especially emotional and topical) signals expressed in user comments in the presence of misinformation and fact-checking. We collect and analyze a dataset of 5,303 social media posts with 2,614,374 user comments from Facebook, Twitter, and YouTube, and associate these posts with fact-check articles from Snopes and PolitiFact for veracity rulings (i.e., from true to false). We find that linguistic signals in user comments vary significantly with the veracity of posts; for example, we observe more misinformation-awareness signals and heavier emoji and swear-word usage on falser posts. We further show that these signals can help detect misinformation. In addition, we find that while some signals indicate positive effects after fact-checking, others point to potential "backfire" effects.
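At its core, the analysis described above links each social media post to a veracity ruling and then compares simple per-comment linguistic signals (such as emoji and swear-word rates) across rulings. The Python sketch below illustrates that kind of comparison under stated assumptions: the sample comments, the veracity labels, the emoji regex, and the tiny swear-word list are hypothetical placeholders, not the authors' data, lexicons, or code.

```python
import re
from collections import defaultdict

# Hypothetical sample of (veracity ruling of the fact-checked post, comment text).
# In the paper, rulings come from Snopes and PolitiFact and comments from
# Facebook, Twitter, and YouTube; these rows are stand-ins for illustration.
COMMENTS = [
    ("false", "This is such bs 😡😡 total hoax"),
    ("false", "fake news, don't share this crap"),
    ("true",  "Good reporting, thanks for the link"),
    ("true",  "Interesting read, well sourced"),
]

EMOJI_RE = re.compile("[\U0001F300-\U0001FAFF\u2600-\u27BF]")  # rough emoji ranges
SWEAR_WORDS = {"bs", "crap", "damn"}                           # tiny placeholder lexicon


def signal_rates(comment: str) -> dict:
    """Per-token rates of two simple linguistic signals in one comment."""
    tokens = comment.lower().split()
    n = max(len(tokens), 1)
    return {
        "emoji_per_token": len(EMOJI_RE.findall(comment)) / n,
        "swear_per_token": sum(t.strip(".,!?") in SWEAR_WORDS for t in tokens) / n,
    }


# Average each signal within each veracity group so the groups can be compared.
sums = defaultdict(lambda: defaultdict(float))
counts = defaultdict(int)
for veracity, text in COMMENTS:
    counts[veracity] += 1
    for name, value in signal_rates(text).items():
        sums[veracity][name] += value

for veracity, totals in sums.items():
    means = {name: round(total / counts[veracity], 3) for name, total in totals.items()}
    print(veracity, means)
```

On this toy sample the "false" group shows higher emoji and swear-word rates than the "true" group, mirroring the direction of the finding summarized in the abstract; a real analysis would use the full comment corpus, established lexicons, and statistical tests rather than raw means on a handful of examples.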

Published in

Proceedings of the ACM on Human-Computer Interaction, Volume 2, Issue CSCW (November 2018), 4104 pages
EISSN: 2573-0142
DOI: 10.1145/3290265

Copyright © 2018 ACM

Publisher

Association for Computing Machinery, New York, NY, United States
