Publication

An Explainable Tool to Support Age-related Macular Degeneration Diagnosis

2022 , Martinez-Villaseñor, Lourdes , Miralles-Pechuán, Luis , Ponce, Hiram , Martínez Velasco, Antonieta Teodora

Artificial intelligence and deep learning, in particular, have gained considerable attention in the ophthalmology community due to the possibility of processing large amounts of data and digitized ocular images. Intelligent systems have been developed to support the diagnosis and treatment of a number of ophthalmic diseases such as age-related macular degeneration (AMD), glaucoma and retinopathy of prematurity. Explainability is therefore necessary to gain clinicians' trust and, in turn, the adoption of these critical decision support systems. Visual explanations have been proposed for AMD diagnosis only when optical coherence tomography (OCT) images are used, but interpretability using other inputs (i.e., data point-based features) for AMD diagnosis is rather limited. In this paper, we propose a practical tool to support AMD diagnosis based on Artificial Hydrocarbon Networks (AHN) with different kinds of input data, such as demographic characteristics, features known as risk factors for AMD, and genetic variants obtained from DNA genotyping. The proposed explainer, named eXplainable Artificial Hydrocarbon Networks (XAHN), is able to produce global and local interpretations of the AHN model. An explainability assessment of the XAHN explainer was conducted with clinicians to obtain feedback on the tool. We believe the XAHN explainer tool will be beneficial in supporting expert clinicians in AMD diagnosis, especially where input data are not visual. © 2022 IEEE.
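The kind of global and local interpretation the abstract describes for tabular (non-visual) inputs can be sketched in a hedged way. The AHN/XAHN implementation is not publicly packaged, so the snippet below uses a RandomForest as a stand-in model, a synthetic dataset, and purely illustrative feature names (`age`, `smoking`, etc.); none of these come from the paper itself.

```python
# Hypothetical sketch only: a RandomForest stands in for the trained AHN
# model, the data are synthetic, and the feature names are illustrative
# placeholders for demographic/risk-factor/genotype inputs.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

feature_names = ["age", "smoking", "bmi", "snp_CFH", "snp_ARMS2"]
X, y = make_classification(n_samples=300, n_features=5, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Global interpretation: rank features by the accuracy drop observed
# when each feature column is shuffled.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
global_ranking = sorted(zip(feature_names, result.importances_mean),
                        key=lambda pair: -pair[1])

# Local interpretation: class probabilities for one patient record.
local_proba = model.predict_proba(X[:1])[0]
print(global_ranking, local_proba)
```

In this sketch, permutation importance plays the role of a global explanation and the per-record probability the role of a local one; the paper's actual XAHN procedure is specific to AHN models and is not reproduced here.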

Publication

Explainable artificial hydrocarbon networks classifier applied to preeclampsia

2024 , Ponce, Hiram , Martinez-Villaseñor, Lourdes , Martínez Velasco, Antonieta Teodora

Explainability is crucial in domains where system decisions have significant implications for human trust in black-box models. A lack of understanding of how these decisions are made hinders the adoption of so-called clinical decision support systems. While neural networks and deep learning methods exhibit impressive performance, they remain less explainable than white-box approaches. Artificial Hydrocarbon Networks (AHN) is an effective black-box model that can be used to support critical clinical decisions if accompanied by explainability mechanisms that instill confidence among clinicians. In this paper, we present a use case involving global and local explanations for AHN models, generated by an automatic procedure called eXplainable Artificial Hydrocarbon Networks (XAHN). We apply XAHN to preeclampsia prognosis, enabling interpretability within an accurate black-box model. Our approach involves training a suitable AHN model using cross-validation with ten repetitions, followed by a comparative analysis against four well-known machine learning techniques. Notably, the AHN model outperformed the others, achieving an F1-score of 74.91%. Additionally, we assess the efficacy of our XAHN explainer through a survey administered to clinicians, evaluating the goodness of and satisfaction with the provided explanations. To the best of our knowledge, this work represents one of the earliest attempts to address the explainability challenge in preeclampsia prediction. © 2024 The Author(s). Published by Elsevier Inc.
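The evaluation protocol the abstract outlines, training with cross-validation repeated ten times and comparing models by F1-score, can be sketched as follows. The AHN implementation is not publicly available, so two common scikit-learn classifiers stand in for the compared models, and the dataset is synthetic rather than the preeclampsia cohort; the model names are placeholders.

```python
# Hypothetical sketch only: stand-in classifiers and synthetic data,
# not the AHN model or the preeclampsia cohort from the paper.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# Stratified cross-validation with ten repetitions, as in the abstract.
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=10, random_state=0)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(random_state=0),
}

# Mean F1-score across all folds and repetitions, per model.
scores = {name: cross_val_score(m, X, y, cv=cv, scoring="f1").mean()
          for name, m in models.items()}
best = max(scores, key=scores.get)
print(best, round(scores[best], 4))
```

Repeating the cross-validation reduces the variance of the estimate, which matters when ranking models whose mean F1-scores are close.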