- Title
ARTIFICIAL INTELLIGENCE OPINION LIABILITY.
- Authors
Bathaee, Yavar
- Abstract
Opinions are not simply a collection of factual statements--they are something more. They are models of reality based on probabilistic judgments, experience, and a complex weighting of information. That is why most liability regimes that address opinion statements apply scienter-like heuristics to determine whether liability is appropriate, for example, holding a speaker liable only if there is evidence that the speaker did not subjectively believe in his or her own opinion. In the case of artificial intelligence, scienter is problematic. Using machine-learning algorithms, such as deep neural networks, artificial intelligence systems are capable of making intuitive and experiential judgments just as human experts do, but their capabilities come at the price of transparency. Because of the Black Box Problem, it may be impossible to determine what facts or parameters an artificial intelligence system found important in its decision making or in reaching its opinions. This means that one cannot simply examine the artificial intelligence to determine the intent of the person who created or deployed it. This decouples intent from the opinion and renders scienter-based heuristics inert, functionally insulating both artificial intelligence and artificial intelligence-assisted opinions from liability in a wide range of contexts. This Article proposes a more precise set of factual heuristics that address how much supervision and deference the artificial intelligence receives; the training, validation, and testing of the artificial intelligence; and the a priori constraints imposed on it. This Article argues that although these heuristics may indicate that the creator or user of the artificial intelligence acted with scienter (i.e., recklessness), scienter should be merely sufficient, not necessary, for liability. This Article also discusses other contexts, such as bias in training data, that should give rise to liability even if there is no scienter and none of the granular factual heuristics suggest that liability is appropriate.
- Subjects
ARTIFICIAL intelligence; DECISION making; HEURISTIC; ORGANIZATIONAL transparency; MACHINE learning
- Publication
Berkeley Technology Law Journal, 2020, Vol. 35, Issue 1, p. 113
- ISSN
1086-3818
- Publication type
Article
- DOI
10.15779/Z38P55DH32