- Title
Multiple Explainable Approaches to Predict the Risk of Stroke Using Artificial Intelligence.
- Authors
S, Susmita; Chadaga, Krishnaraj; Sampathila, Niranjana; Prabhu, Srikanth; Chadaga, Rajagopala; S, Swathi Katta
- Abstract
Stroke occurs when a blood vessel in the brain ruptures or the brain's blood supply is interrupted. Due to rupture or obstruction, the brain's tissues cannot receive enough blood and oxygen. Stroke is a common cause of mortality among older people; hence, loss of life and severe brain damage can be avoided if stroke is recognized and diagnosed early. Healthcare professionals can discover solutions more quickly and accurately using artificial intelligence (AI) and machine learning (ML). Accordingly, we show how to predict stroke in patients using heterogeneous classifiers and explainable artificial intelligence (XAI). The multistack of ML models surpassed all other classifiers, with accuracy, recall, and precision of 96%, 96%, and 96%, respectively. Explainable artificial intelligence is a collection of frameworks and tools that aid in understanding and interpreting predictions provided by machine learning algorithms. Five diverse XAI methods, namely SHapley Additive exPlanations (SHAP), ELI5, QLattice, Local Interpretable Model-agnostic Explanations (LIME) and Anchor, have been used to decipher the model predictions. This research aims to enable healthcare professionals to provide patients with more personalized and efficient care, while also providing a screening architecture with automated tools that can be used to revolutionize stroke prevention and treatment.
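The pipeline the abstract describes, a stack of heterogeneous base classifiers whose predictions are then explained with model-agnostic tools, can be sketched as follows. This is not the paper's code: the features are synthetic stand-ins for the clinical variables, the base learners are arbitrary choices, and permutation importance is used here as a simple model-agnostic explanation in place of the SHAP/LIME/Anchor libraries.

```python
# Minimal sketch (assumptions: synthetic data, illustrative base learners)
# of a heterogeneous stacking classifier plus a model-agnostic explanation.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for tabular stroke-risk features (e.g. age, glucose, BMI).
X, y = make_classification(n_samples=500, n_features=6, n_informative=4,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Heterogeneous base learners combined by a logistic-regression meta-learner.
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("knn", KNeighborsClassifier())],
    final_estimator=LogisticRegression(),
)
stack.fit(X_train, y_train)
print(f"test accuracy: {stack.score(X_test, y_test):.2f}")

# Model-agnostic explanation: how much does shuffling each feature
# degrade the ensemble's test score?
imp = permutation_importance(stack, X_test, y_test, n_repeats=5, random_state=0)
for i, mean in enumerate(imp.importances_mean):
    print(f"feature {i}: importance {mean:+.3f}")
```

SHAP or LIME would replace the last step in practice, attributing each individual prediction to feature contributions rather than scoring features globally.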
- Subjects
STROKE; ARTIFICIAL intelligence; MACHINE learning; MEDICAL personnel; OXYGEN in the blood
- Publication
Information (2078-2489), 2023, Vol 14, Issue 8, p435
- ISSN
2078-2489
- Publication type
Article
- DOI
10.3390/info14080435