- Title
APPRAISE-AI Tool for Quantitative Evaluation of AI Studies for Clinical Decision Support.
- Authors
Kwong, Jethro C. C.; Khondker, Adree; Lajkosz, Katherine; McDermott, Matthew B. A.; Frigola, Xavier Borrat; McCradden, Melissa D.; Mamdani, Muhammad; Kulkarni, Girish S.; Johnson, Alistair E. W.
- Abstract
Key Points:
Question: Can quantitative methods be used to evaluate the robustness of artificial intelligence (AI) prediction models and their suitability for clinical decision support?
Findings: In this quality improvement study, the APPRAISE-AI tool was developed to evaluate the methodological and reporting quality of 28 clinical AI studies using a quantitative approach. APPRAISE-AI demonstrated strong interrater and intrarater reliability and correlated well with other validated measures of study quality across a variety of AI studies.
Meaning: These findings suggest that APPRAISE-AI fills a critical gap in the current landscape of AI reporting guidelines and provides a standardized, quantitative tool for evaluating the methodological rigor and clinical utility of AI models.

This quality improvement study evaluates the methodological and reporting quality of artificial intelligence (AI) models for clinical decision support.

Importance: Artificial intelligence (AI) has gained considerable attention in health care, yet concerns have been raised about appropriate methods and fairness. Current AI reporting guidelines do not provide a means of quantifying the overall quality of AI research, limiting their ability to compare models addressing the same clinical question.

Objective: To develop a tool (APPRAISE-AI) to evaluate the methodological and reporting quality of AI prediction models for clinical decision support.

Design, Setting, and Participants: This quality improvement study evaluated AI studies in the model development, silent, and clinical trial phases using the APPRAISE-AI tool, a quantitative method for evaluating the quality of AI studies across 6 domains: clinical relevance, data quality, methodological conduct, robustness of results, reporting quality, and reproducibility. These domains comprised 24 items with a maximum overall score of 100 points; more points indicate stronger methodological or reporting quality. The tool was applied to studies identified in a systematic review on machine learning to predict sepsis, which included articles published up to September 13, 2019. Data analysis was performed from September to December 2022.

Main Outcomes and Measures: The primary outcomes were interrater and intrarater reliability and the correlation between APPRAISE-AI scores and expert scores, 3-year citation rate, number of Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) low risk-of-bias domains, and overall adherence to the Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis (TRIPOD) statement.

Results: A total of 28 studies were included. Overall APPRAISE-AI scores ranged from 33 (low quality) to 67 (high quality), and most studies were of moderate quality. The 5 lowest scoring items were source of data, sample size calculation, bias assessment, error analysis, and transparency. Overall APPRAISE-AI scores were associated with expert scores (Spearman ρ, 0.82; 95% CI, 0.64-0.91; P < .001), 3-year citation rate (Spearman ρ, 0.69; 95% CI, 0.43-0.85; P < .001), number of QUADAS-2 low risk-of-bias domains (Spearman ρ, 0.56; 95% CI, 0.24-0.77; P = .002), and adherence to the TRIPOD statement (Spearman ρ, 0.87; 95% CI, 0.73-0.94; P < .001). Intraclass correlation coefficient ranges for interrater and intrarater reliability were 0.74 to 1.00 for individual items, 0.81 to 0.99 for individual domains, and 0.91 to 0.98 for overall scores.
Conclusions and Relevance: In this quality improvement study, APPRAISE-AI demonstrated strong interrater and intrarater reliability and correlated well with several study quality measures. This tool may provide a quantitative approach for investigators, reviewers, editors, and funding organizations to compare research quality across AI studies for clinical decision support.
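The abstract describes a two-part analysis: item points are summed across 6 domains into a 100-point overall score, and that score is then validated against external quality measures with Spearman correlations. Below is a minimal Python sketch of that workflow, assuming hypothetical per-domain point allocations and per-study values; the record does not give the tool's item-level scoring rules, and this is not the authors' implementation.

```python
# Sketch of the APPRAISE-AI workflow summarized in the abstract.
# Domain names come from the abstract; all numeric values below are
# hypothetical, invented for illustration only.
from scipy.stats import spearmanr

DOMAINS = (
    "clinical relevance",
    "data quality",
    "methodological conduct",
    "robustness of results",
    "reporting quality",
    "reproducibility",
)

def overall_score(domain_points: dict) -> int:
    """Sum the points awarded in each of the 6 domains (maximum 100 overall)."""
    return sum(domain_points[d] for d in DOMAINS)

# Hypothetical domain-level points for one study (not from the paper).
study_a = {
    "clinical relevance": 12, "data quality": 10, "methodological conduct": 14,
    "robustness of results": 9, "reporting quality": 11, "reproducibility": 5,
}
print(overall_score(study_a))  # 61 on the 100-point scale

# Hypothetical overall scores and TRIPOD adherence fractions for 6 studies,
# correlated as in the validation analysis the abstract reports.
appraise_scores = [33, 41, 48, 55, 61, 67]
tripod_adherence = [0.42, 0.58, 0.55, 0.66, 0.74, 0.81]

rho, p_value = spearmanr(appraise_scores, tripod_adherence)
print(f"Spearman rho = {rho:.2f}, P = {p_value:.3f}")
```

scipy.stats.spearmanr returns the rank correlation and a two-sided P value; the paper's reported 95% CIs and intraclass correlation coefficients would require additional bootstrap and reliability analyses not sketched here.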
- Subjects
ARTIFICIAL intelligence tests; EXPERIMENTAL design; STATISTICS; CLINICAL decision support systems; RESEARCH evaluation; CONFIDENCE intervals; RESEARCH methodology; COMPARATIVE studies; QUALITY assurance; DESCRIPTIVE statistics; INTRACLASS correlation; RESEARCH funding; PREDICTION models; DATA analysis; DATA analysis software; ODDS ratio
- Publication
JAMA Network Open, 2023, Vol. 6, Issue 9, e2335377
- ISSN
2574-3805
- Publication type
Article
- DOI
10.1001/jamanetworkopen.2023.35377