Assessing and Suing an Algorithm

Artificial intelligence algorithms are permeating nearly every domain of human activity, including processes that make decisions about interests central to individual welfare and well-being. How do public perceptions of algorithmic decisionmaking in these domains compare with perceptions of traditional human decisionmaking? What kinds of judgments about the shortcomings of algorithmic decisionmaking processes underlie these perceptions? Will individuals be willing to hold algorithms accountable through legal channels for unfair, incorrect, or otherwise problematic decisions?

Answers to these questions matter at several levels. In a democratic society, a degree of public acceptance is needed for algorithms to become successfully integrated into decisionmaking processes. And public perceptions will shape how the harms and wrongs caused by algorithmic decisionmaking are handled. This report shares the results of a survey experiment designed to contribute to researchers’ understanding of how U.S. public perceptions are evolving in these respects in one high-stakes setting: decisions related to employment and unemployment.

Key Findings

  • There was evidence for an algorithmic penalty, or a tendency to judge algorithms more harshly than humans for otherwise identical decisions related to employment or unemployment.
  • Respondents were more likely to perceive algorithmic decisionmaking as unfair, error-prone, and non-transparent than they were to perceive human decisionmaking as such. By contrast, there were no consistent differences in perceptions of bias.
  • Differences in views between minority (i.e., Hispanic and/or non-White) and majority (i.e., non-Hispanic White) respondents were not straightforward. Majority respondents penalized algorithms more heavily than minority respondents did in their assessments of algorithmic fairness, accuracy, and transparency. This was not the case for bias, where the differences between minority and majority respondents were reversed and very small.
  • Greater exposure to algorithmic decisionmaking corresponded to greater skepticism about the future and possibilities of algorithmic processes.
  • There was little evidence that people would be discouraged from seeking to hold relevant parties accountable for problematic decisions made by algorithms. To the extent that differences existed, respondents were slightly more likely to resort to legal processes when the problematic decision was made by an algorithm than when it was made by a human.

Research: Assessing and Suing an Algorithm: Perceptions of Algorithmic Decisionmaking

Elina Treyger, Jirka Taylor, Daniel Kim, Maynard A. Holliday, RAND


©2024. Homeland Security Review. Use Our Intel. All Rights Reserved. Washington, D.C.