- Title
"EQUALITY AND PRIVACY BY DESIGN": A NEW MODEL OF ARTIFICIAL INTELLIGENCE DATA TRANSPARENCY VIA AUDITING, CERTIFICATION, AND SAFE HARBOR REGIMES.
- Authors
Yanisky-Ravid, Shlomit; Hallisey, Sean K.
- Abstract
Artificial Intelligence and Machine Learning (AI) are often described as technological breakthroughs that will completely transform our society and economy. AI systems have been implemented everywhere, from medicine, transportation, finance, and art to legal and social spheres, and even in weapons development. In many sectors, AI systems have already started making decisions previously made by humans. Promising as AI systems may be, they also pose urgent challenges to our everyday lives. While much attention has focused on AI's legal implications, the literature suffers from a lack of solutions that account for both legal and engineering practices and constraints. This leaves technology firms without guidelines and increases the risk of societal harm. It also means that policymakers and judges operate without a regulatory regime to turn to when addressing these novel and unpredictable outcomes. This Article tries to fill the void by focusing on data rather than on the software and programmers. It suggests a new model that stems from a recognition of the significant role that data plays in the development and functioning of AI systems. Data is the most important aspect of teaching AI systems to operate. AI algorithms begin with a massive preexisting dataset, which data providers use to train the system. But the data that AI systems "swallow" can be illegal, discriminatory, altered, unreliable, or simply incomplete. Thus, the more data fed to AI systems, the higher the likelihood that they could produce biased, discriminatory decisions and violate privacy rights. The Article discusses how discrimination can arise, even inadvertently, from the operation of "trusted" and "objective" AI systems. To address this problem, this Article proposes a new AI Data Transparency Model that focuses on disclosure of data rather than, as some scholars argue, on the initial software program and programmers. The Model includes an auditing regime and a certification program, run either by a governmental body or, in the absence of such an entity, by private institutions. This Model will encourage the industry to take proactive steps to ensure and publicize that datasets are trustworthy. The suggested Model includes a safe harbor, which incentivizes firms to implement transparency recommendations even without massive regulatory oversight. From an engineering point of view, the Model recognizes data providers and big data as the most important components in the process of creating, training, and operating AI systems. Even more importantly, the Model is technologically feasible because data can be easily absorbed and kept by a technological tool. Further, this Model is also practically feasible because it follows already existing legal frameworks of data transparency, such as the ones implemented by the FDA and the SEC. Improving transparency in data systems would result in less harmful AI systems, better protect societal rights and norms, and produce improved outcomes in this emerging field, especially for minority communities that often lack the resources or representation to challenge AI systems. Increased transparency of the data used while developing, training, or operating AI systems would mitigate these harms.
Additionally, to better identify the risks of faulty data, industry players must conduct critical evaluations and audits of the data used to train AI systems; one way to incentivize this is a certification system that publicizes good-faith efforts to reduce the possibility of discriminatory outcomes and privacy violations in AI systems. This Article strives to incentivize the creation of new standards, which the industry could implement from the genesis of AI systems to mitigate the possibility of harm, rather than relying on post hoc assignments of liability.
- Subjects
UNITED States; ARTIFICIAL intelligence; MACHINE learning; UNITED States. Food & Drug Administration; ORGANIZATIONAL transparency; ECONOMICS
- Publication
Fordham Urban Law Journal, 2019, Vol. 46, Issue 2, p. 428
- ISSN
0199-4646
- Publication type
Article