Legal Alert

AI Users Beware: Baked-In Bias Abounds, and You Could Be Liable for It

by Jonathan P. Hummel and Jonathon A. Talcott
July 14, 2023

Summary

Senate Bill 5351, introduced in December 2022, would create liability for covered entities that incorporate artificial intelligence systems into decision-making processes that impact civil rights.

The Upshot

  • Research confirms what many may already suspect—AI systems are vulnerable to sources of bias such as data, algorithms, and human training.
  • This baked-in bias can create liability for businesses using AI decision-making tools.
  • AI users should apply heightened scrutiny and strengthen review processes for AI-driven decisions.

The Bottom Line

The Bill would create liability for covered entities that incorporate AI systems into decision-making processes impacting civil rights. Covered entities should apply increased scrutiny and diligence when deciding whether to incorporate AI systems into their decision-making processes, and should be on the lookout for baked-in bias. Under the Bill, a covered entity is “any person (including a partnership, corporation, Federal, State, or local agency, or entity) that is subject to a covered civil rights law.”

In December 2022, then-Senator Rob Portman (R-Ohio) introduced Senate Bill 5351, the “Stopping Unlawful Negative Machine Impacts through National Evaluation Act,” which was read twice and referred to the Committee on Commerce, Science, and Transportation. The Committee has taken no action yet with respect to the Bill, which aims to create liability for “[any] covered entity that uses artificial intelligence to make or inform a decision that has an impact on a person that is addressed by a covered civil rights law, including whether to provide a program or activity or accommodation to a person”; namely, that such entity “shall be liable for a claim of discrimination under the corresponding covered civil rights law in the same manner and to the same extent (including being liable pursuant to that law’s standard of culpability) as if the covered entity had made such decision without the use of artificial intelligence.” In other words, covered entities that use artificial intelligence cannot hide behind the AI system. For example, if a housing authority relies on or incorporates artificial intelligence into housing decisions, and the AI system makes housing decisions in a way that violates the Fair Housing Act, the housing authority may be liable.

Research confirms what many may already suspect—AI systems are vulnerable to sources of bias such as data, algorithms, and human training. As a consequence, these systems may inadvertently perpetuate inequalities and reinforce harmful stereotypes. Olga Akselrod, of the American Civil Liberties Union, stated, “[because] AI is built by humans and deployed in systems and institutions that have been marked by entrenched discrimination — from the criminal legal system, to housing, to the workplace, to our financial systems…, [b]ias is often baked into the outcomes the AI is asked to predict. Likewise, bias is in the data used to train the AI — data that is often discriminatory or unrepresentative for people of color, women, or other marginalized groups — and can rear its head throughout the AI’s design, development, implementation, and use.”
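
To make this mechanism concrete, the following is a minimal sketch (our illustration, not drawn from the alert or the Bill) of how bias baked into historical training data can propagate to a model’s predictions even when the protected attribute itself is excluded from the model’s inputs. All data, feature names, and numbers here are hypothetical.

```python
# Hypothetical sketch: bias in historical decisions leaks into a model
# trained only on facially neutral features, via a correlated proxy.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical protected attribute: 0 = group A, 1 = group B.
group = rng.integers(0, 2, n)

# A facially neutral proxy (e.g., zip code) correlated with group membership.
proxy = group + rng.normal(0.0, 0.5, n)

# A legitimately relevant feature, identically distributed across groups.
skill = rng.normal(0.0, 1.0, n)

# Historical outcomes penalized group B regardless of skill -- the
# "baked-in" bias the training data carries forward.
historical_approval = (skill - 0.8 * group + rng.normal(0.0, 0.5, n)) > 0

# Train on the "neutral" features only; the protected attribute is excluded.
X = np.column_stack([proxy, skill])
model = LogisticRegression().fit(X, historical_approval)

# The model reproduces the historical disparity through the proxy.
pred = model.predict(X)
for g, name in [(0, "group A"), (1, "group B")]:
    print(f"{name} predicted approval rate: {pred[group == g].mean():.2f}")
```

In this construction, group B’s predicted approval rate comes out markedly lower than group A’s even though the protected attribute never appears as a model input; the disparity survives through the proxy feature.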

The Bill calls for the establishment of “a program for conducting technology evaluations to assess and assist in mitigating bias and discrimination in artificial intelligence systems of covered entities with respect to race, sex, age, disability, and other classes or characteristics protected by covered civil rights laws.” The Bill also establishes broad “priority evaluation areas,” including “speech recognition and synthesis,” “recommendation systems” (including financial and criminal justice applications), “sensitive image recognition technology” (including facial-recognition and gait-recognition systems), “applications identified as high risk by previous technology evaluations and strategy documents,” and “any other artificial intelligence use case that poses a high risk for discrimination based on classes or characteristics protected by covered civil rights laws…,” a category that includes video synthesis, text generation, and conversation and information systems.

Notably, the proposed legislation requires the Director of the National Institute of Standards and Technology to “ensure the participation of any industry and nongovernmental experts and entities in the fields of artificial intelligence, machine learning, computer science, social sciences, civil rights, and civil liberties seeking to participate in such evaluations.” The Bill provides that the program “shall terminate on December 31, 2028.”

It is a positive sign that the Senate is being proactive in addressing what scholars and technologists have recognized for years: artificial intelligence systems are only as good as the data on which they are trained. The proposed legislation, if enacted, would likely have a broad impact on covered entities that wish to incorporate artificial intelligence systems into decision-making processes. As businesses rely more and more on AI in their decision-making processes, it is critical to vet any AI system used (e.g., the algorithm, including its computational weights and biases), the data on which the system was trained, and the method of training in order to understand how any baked-in bias may affect the system’s decision-making.
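
As one example of what such vetting might look like in practice, the sketch below (again our illustration, not part of the Bill) compares an AI tool’s selection rates across groups and flags any group whose rate falls below four-fifths of the most-favored group’s rate, a threshold borrowed from the EEOC’s well-known “four-fifths” rule of thumb. The function name and sample data are hypothetical.

```python
# Hypothetical audit helper: compare an AI tool's selection rates across
# groups and flag disparities under the four-fifths (80%) rule of thumb.
from collections import defaultdict

def disparate_impact(decisions):
    """decisions: iterable of (group_label, selected: bool) pairs.
    Returns each group's selection rate and its ratio to the highest rate."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        selected[group] += int(ok)
    rates = {g: selected[g] / totals[g] for g in totals}
    highest = max(rates.values())
    ratios = {g: rate / highest for g, rate in rates.items()}
    return rates, ratios

# Hypothetical sample of an AI tool's outputs.
sample = (
    [("A", True)] * 60 + [("A", False)] * 40
    + [("B", True)] * 30 + [("B", False)] * 70
)
rates, ratios = disparate_impact(sample)
for g in sorted(rates):
    flag = "REVIEW" if ratios[g] < 0.8 else "ok"
    print(f"group {g}: rate {rates[g]:.2f}, ratio {ratios[g]:.2f} [{flag}]")
```

A statistical check like this is a starting point, not a safe harbor; covered entities would still need to examine the training data, training method, and model design for the deeper sources of bias discussed above.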


Copyright © 2024 by Ballard Spahr LLP.
www.ballardspahr.com
(No claim to original U.S. government material.)

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, including electronic, mechanical, photocopying, recording, or otherwise, without prior written permission of the author and publisher.

This alert is a periodic publication of Ballard Spahr LLP and is intended to notify recipients of new developments in the law. It should not be construed as legal advice or legal opinion on any specific facts or circumstances. The contents are intended for general informational purposes only, and you are urged to consult your own attorney concerning your situation and specific legal questions you have.