- Title
Algorithmic Fairness in Consumer Credit Underwriting: Towards a Harm-Based Framework for AI Fair Lending.
- Authors
Wu, Jason Jia-Xi
- Abstract
Credit discrimination undermines consumer financial autonomy and distorts the market pricing of lending risks. To ensure equal access to credit, existing federal fair lending laws—e.g., the Equal Credit Opportunity Act and the Fair Housing Act—prohibit lenders from considering race, sex, age, or national origin in their lending decisions. For decades, the fair lending laws have largely held the banking industry in check. However, as lenders increasingly delegate lending decisions to artificial intelligence (AI) through the services of fintech and data intermediaries, it is questionable whether existing laws can still adequately safeguard equal credit access. This Article argues that the current fair lending regime can no longer protect consumers in the age of AI because it does not account for harms traceable to automatic, unsupervised algorithmic processes. Unlike human actors, algorithms cannot desire to cause harm or intend to use suspect factors. Yet courts and litigants are constrained by the language of the fair lending laws to hold AI accountable under an antiquated legal theory—one that treats discrimination as analogous to common law torts. Under this regime, victims of AI discrimination carry the burden of showing lender animus and of providing causal explanations linking their injuries to the lender's specific acts or policies. Consequently, such victims are often barred from recovery by insurmountable pleading and evidentiary hurdles. Any attempt to combat AI discrimination must therefore account for two unique features of algorithmic harm. First, an algorithm's discriminatory decision may have no explicable connection—let alone causal relation—to the acts or policies of the lender, owing to the algorithm's self-learning capabilities. Second, whether an algorithm discriminates depends on a host of variables typically outside the lender's control.
The unpredictable nature of AI calls into question the effectiveness of regulating AI bias under the fair lending laws—a conduct-based liability regime that emphasizes causation, reasonable foreseeability, and ex ante risk mitigation. As a blueprint for reform, this Article proposes an alternative harm-based framework that addresses the root cause of AI discrimination: data opaqueness. To implement this framework, this Article recommends that the CFPB adopt a new rule prohibiting the use of "black box" algorithms in consumer lending, pursuant to the CFPB's authority to prohibit "unfair, deceptive, or abusive acts or practices" (UDAAPs) under the Dodd-Frank Act.
- Subjects
UNITED States. Consumer Financial Protection Bureau; LOANS; FAIR Housing Act of 1968 (U.S.); CONSUMER credit; ARTIFICIAL intelligence; CONSUMER lending; TORT theory; TORTS; CONSUMER protection
- Publication
Berkeley Business Law Journal, 2024, Vol. 21, Issue 1, p. 65
- ISSN
1548-7067
- Publication type
Article
- DOI
10.15779/Z38CF9J785