by Lori Solomon
There may be ethical barriers to the adoption of artificial intelligence (AI) in cancer care, according to a study published online March 28 in JAMA Network Open.
Andrew Hantel, M.D., from the Dana-Farber Cancer Institute in Boston, and colleagues evaluated oncologists' views on the ethical domains of AI use in clinical care. The analysis included 204 survey responses from 37 states.
The researchers found that most participants (84.8%) reported that AI-based clinical decision models needed to be explainable by oncologists to be used in the clinic, while 23.0% stated they also needed to be explainable by patients. Eight in 10 (81.4%) supported patient consent for AI model use during treatment decisions.
In a scenario in which an AI decision model selected a different treatment regimen than the oncologist planned to recommend, the most common response was to present both options and let the patient decide (36.8%), with respondents from academic settings more likely than those from other settings to let the patient decide (odds ratio, 2.56). Three-quarters of respondents (76.5%) agreed that oncologists should protect patients from biased AI tools, but only 27.9% were confident in their ability to identify poorly representative AI models.
"These findings suggest that the implementation of AI in oncology must include rigorous assessments of its effect on care decisions as well as decisional responsibility when problems related to AI use arise," the authors write.
Several authors disclosed ties to the pharmaceutical and biotechnology industries.
More information: Andrew Hantel et al, Perspectives of Oncologists on the Ethical Implications of Using Artificial Intelligence for Cancer Care, JAMA Network Open (2024). DOI: 10.1001/jamanetworkopen.2024.4077
© 2024 HealthDay. All rights reserved.