FTC Outlines Approach to Discrimination in AI and Foreshadows Potential Enforcement
A recent blog post from the Federal Trade Commission (FTC) provides direct guidance on the FTC’s approach to Artificial Intelligence (AI) bias and signals potential enforcement in this area. The post, entitled Aiming for truth, fairness, and equity in your company’s use of AI, makes clear that the FTC will use its FTC Act authority to pursue the sale and use of biased algorithms, and lays out a roadmap of its compliance expectations. The post advances one of Acting FTC Chairwoman Slaughter’s key policy priorities, promoting racial equity through consumer protection; she has spoken often of the potential harms of discriminatory algorithms.
In the post, the FTC lays out the legal framework for evaluating AI bias, which includes the Fair Credit Reporting Act (FCRA), the Equal Credit Opportunity Act (ECOA), and Section 5 of the FTC Act. Both the FCRA and the ECOA apply when AI is used for certain purposes, such as credit decisions. The FTC Act has a broader sweep, and the post’s explanation of Section 5 is particularly notable: “The FTC Act prohibits unfair or deceptive practices. That would include the sale or use of – for example – racially biased algorithms.” That reach would cover the use of AI in a wide range of contexts, including advertising decisions.
Indeed, the FTC suggests that enforcement could be on the horizon if the use of AI results in discrimination, noting that companies should “keep in mind that if you don’t hold yourself accountable, the FTC may do it.” As one example, the FTC states that an algorithmic practice could be “unfair” under the FTC Act if the model “pinpoints [] consumers by considering race, color, religion, and sex – and the result is digital redlining.”
The rest of the post details the FTC’s compliance expectations for companies that use AI and related algorithms. Among other things, companies are expected to:
- Rely on inclusive data sets: “[f]rom the start, [companies should] think about ways to improve [their] data set, design [their] model to account for data gaps, and – in light of any shortcomings – limit where or how [they] use the model” (a simple representation audit is sketched after this list).
- Test their algorithms “both before [companies] use it and periodically after that—to make sure that it doesn’t discriminate on the basis of race, gender, or other protected class” (see the disparate-impact sketch after this list).
- Be truthful with customers about how their data is used, and not exaggerate what an algorithm can deliver.
- Be transparent and independent, “for example, by using transparency frameworks and independent standards.”
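On the first point, the excerpt below is a minimal sketch of what an inclusive-data-set audit might look like in practice. It assumes a training set carrying a protected-class column and a set of reference population shares; the column names, groups, and shares are hypothetical, and this is one illustrative approach, not anything the FTC prescribes.

```python
# Illustrative sketch only: audit how well each protected-class group is
# represented in a training set, relative to reference population shares.
# The "group" column and reference shares below are hypothetical.
from collections import Counter

def representation_gaps(rows, group_key, reference_shares):
    """Compare each group's share of the data set against a reference
    population share; a negative gap means the group is under-represented."""
    counts = Counter(row[group_key] for row in rows)
    total = sum(counts.values())
    gaps = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total if total else 0.0
        gaps[group] = observed - expected
    return gaps

# Hypothetical training rows and census-style reference shares.
training_rows = [
    {"zip": "60601", "group": "A"},
    {"zip": "60602", "group": "A"},
    {"zip": "60603", "group": "A"},
    {"zip": "60604", "group": "B"},
]
print(representation_gaps(training_rows, "group", {"A": 0.5, "B": 0.5}))
# {'A': 0.25, 'B': -0.25} -> group B is under-represented relative to 50%.
```

A gap like the one flagged above is the sort of “data gap” the FTC post says companies should design their models to account for, or use as a reason to limit where the model is deployed.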
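On testing, the sketch below illustrates one common screen for disparate impact: comparing approval rates across groups against the “four-fifths” rule of thumb borrowed from U.S. employment guidelines. The threshold, data, and field names are assumptions chosen for illustration; the FTC post does not prescribe any particular test, and this check would need to be run both before deployment and periodically afterward, as the post advises.

```python
# Illustrative sketch only: a periodic disparate-impact check on a binary
# classifier's decisions, using the four-fifths rule as an example threshold.
# The rule of thumb, groups, and data are assumptions, not an FTC-mandated test.

def selection_rates(outcomes):
    """outcomes: list of (group, approved) pairs; returns approval rate per group."""
    totals, approved = {}, {}
    for group, ok in outcomes:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def four_fifths_check(outcomes, threshold=0.8):
    """Flag disparate impact when any group's approval rate falls below
    `threshold` times the highest group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: (r / best >= threshold) for g, r in rates.items()}, rates

# Hypothetical model decisions: (protected-class group, approved?)
decisions = [("A", True)] * 8 + [("A", False)] * 2 + \
            [("B", True)] * 5 + [("B", False)] * 5
passed, rates = four_fifths_check(decisions)
print(rates)   # {'A': 0.8, 'B': 0.5}
print(passed)  # {'A': True, 'B': False} -> group B trips the 4/5 check
```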
The FTC’s statements are a starting point for companies seeking to prevent AI bias in practice, and companies that develop and use AI should be aware of the many resources in this space. For example, in the area of transparency, the National Institute of Standards and Technology (NIST) is developing standards for explainable AI that companies may find helpful to consult, though that work remains ongoing. Similarly, NIST is considering ways to measure and test for bias in AI. Laws like the ECOA have detailed implementing regulations and examination guidance, which may provide some guideposts; while much of that guidance is credit-specific, it can be applied by analogy in other contexts. In short, while the FTC has begun to outline the principles for companies to follow, companies will need to be creative and forward-thinking as they evaluate and address potential AI bias risks.