ABSTRACT
Algorithmic decision-making systems are increasingly being adopted by government public service agencies. Researchers, policy experts, and civil rights groups have all voiced concerns that such systems are being deployed without adequate consideration of potential harms, disparate impacts, and public accountability practices. Yet little is known about the concerns of those most likely to be affected by these systems. We report on workshops conducted to learn about the concerns of affected communities in the context of child welfare services. The workshops involved 83 study participants, including families involved with the child welfare system, employees of child welfare agencies, and service providers. Our findings indicate that general distrust of the existing system contributes significantly to participants' low comfort with algorithmic decision-making. We identify strategies for improving comfort through greater transparency and improved communication, and we discuss the implications of our findings for accountable algorithm design in child welfare applications.
Supplemental Material
Available for download: derived comfort maps for all scenarios.
Toward Algorithmic Accountability in Public Services: A Qualitative Study of Affected Community Perspectives on Algorithmic Decision-making in Child Welfare Services