- Title
Levels of Trust in the Context of Machine Ethics.
- Authors
Tavani, Herman
- Abstract
Are trust relationships involving humans and artificial agents (AAs) possible? This controversial question has become a hotly debated topic in the emerging field of machine ethics. Employing a model of trust advanced by Buechner and Tavani (Ethics and Information Technology 13(1):39-51, 2011), I argue that the 'short answer' to this question is yes. However, I also argue that a more complete and nuanced answer will require us to articulate the various levels of trust that are also possible in environments comprising both human agents (HAs) and AAs. In defending this view, I show how James Moor's model for distinguishing four levels of ethical agents in the context of machine ethics (Moor, IEEE Intelligent Systems 21(4):18-21, 2006) can help us to develop a framework that differentiates four (loosely corresponding) levels of trust. Via a series of hypothetical scenarios, I illustrate each level of trust involved in HA-AA relationships. Finally, I argue that these levels of trust reflect three key factors or variables: (i) the level of autonomy of the individual AAs involved, (ii) the degree of risk/vulnerability on the part of the HAs who place their trust in the AAs, and (iii) the kind of interactions (direct vs. indirect) that occur between the HAs and AAs in the trust environments.
- Subjects
ARTIFICIAL intelligence; ETHICS; MACHINE theory; PHILOSOPHY; COMPUTER software
- Publication
Philosophy & Technology, 2015, Vol 28, Issue 1, p75
- ISSN
2210-5433
- Publication type
Article
- DOI
10.1007/s13347-014-0165-8