- Title
Perspectives of Oncologists on the Ethical Implications of Using Artificial Intelligence for Cancer Care.
- Authors
Hantel, Andrew; Walsh, Thomas P.; Marron, Jonathan M.; Kehl, Kenneth L.; Sharp, Richard; Van Allen, Eliezer; Abel, Gregory A.
- Abstract
Key Points: Question: What are oncologists' views on ethical issues associated with the implementation of artificial intelligence (AI) in cancer care? Findings: In this cross-sectional survey study, 84.8% of US oncologists reported that AI needs to be explainable by oncologists but not necessarily by patients, and 81.4% agreed that patients should consent to AI use for cancer treatment decisions. Fewer than half (47.1%) of oncologists viewed medico-legal problems from AI use as physicians' responsibility, and although most (76.5%) reported feeling responsible for protecting patients from biased AI, few (27.9%) reported feeling confident in their ability to do so. Meaning: This study suggests that concerns about ethical issues, including explainability, patient consent, and responsibility, may impede optimal adoption of AI into cancer care.

Importance: Artificial intelligence (AI) tools are rapidly integrating into cancer care. Understanding stakeholder views on ethical issues associated with the implementation of AI in oncology is critical to optimal deployment.

Objective: To evaluate oncologists' views on the ethical domains of the use of AI in clinical care, including familiarity, predictions, explainability (the ability to explain how a result was determined), bias, deference, and responsibilities.

Design, Setting, and Participants: This cross-sectional, population-based survey study was conducted from November 15, 2022, to July 31, 2023, among 204 US-based oncologists identified using the National Plan & Provider Enumeration System.

Main Outcomes and Measures: The primary outcome was response to a question asking whether participants agreed or disagreed that patients need to provide informed consent for AI model use during cancer treatment decisions.

Results: Of 387 surveys, 204 were completed (response rate, 52.7%). Participants represented 37 states; 120 (63.7%) identified as male, 128 (62.7%) as non-Hispanic White, and 60 (29.4%) were from academic practices. Ninety-five participants (46.6%) had received some education on AI use in health care, and 45.3% (92 of 203) reported familiarity with clinical decision models. Most participants (84.8% [173 of 204]) reported that AI-based clinical decision models needed to be explainable by oncologists to be used in the clinic; 23.0% (47 of 204) stated they also needed to be explainable by patients. Patient consent for AI model use during treatment decisions was supported by 81.4% of participants (166 of 204). When presented with a scenario in which an AI decision model selected a different treatment regimen than the oncologist planned to recommend, the most common response was to present both options and let the patient decide (36.8% [75 of 204]); respondents from academic settings were more likely than those from other settings to let the patient decide (OR, 2.56; 95% CI, 1.19-5.51). Most respondents (90.7% [185 of 204]) reported that AI developers were responsible for the medico-legal problems associated with AI use; some agreed that this responsibility was shared by physicians (47.1% [96 of 204]) or hospitals (43.1% [88 of 204]). Finally, most respondents (76.5% [156 of 204]) agreed that oncologists should protect patients from biased AI tools, but only 27.9% (57 of 204) were confident in their ability to identify poorly representative AI models.

Conclusions and Relevance: In this cross-sectional survey study, few oncologists reported that patients needed to understand AI models, but most agreed that patients should consent to their use, and many tasked patients with choosing between physician- and AI-recommended treatment regimens. These findings suggest that the implementation of AI in oncology must include rigorous assessments of its effect on care decisions as well as decisional responsibility when problems related to AI use arise.
This cross-sectional survey study evaluates oncologists' views on the ethical domains of the use of artificial intelligence (AI) in clinical care, including familiarity, predictions, explainability, bias, deference, and responsibilities.
- Subjects
UNITED States; CROSS-sectional method; RESEARCH funding; CANCER patient medical care; ARTIFICIAL intelligence; RESPONSIBILITY; HUMAN beings; FISHER exact test; MULTIPLE regression analysis; DESCRIPTIVE statistics; CHI-squared test; DECISION making; SURVEYS; ODDS ratio; INFORMED consent (Medical law); ONCOLOGISTS; CONFIDENCE intervals; SOCIODEMOGRAPHIC factors; DATA analysis software
- Publication
JAMA Network Open, 2024, Vol 7, Issue 3, p. e244077
- ISSN
2574-3805
- Publication type
Article
- DOI
10.1001/jamanetworkopen.2024.4077