Quantifying decision support level of explainable automatic classification of diagnoses in Spanish medical records
Date: 2024-11
Computers in Biology and Medicine 182 (2024), Article ID 109127
Abstract
Background and Objective:
In automatic classification of Electronic Health Records (EHRs) according to the International Classification of Diseases (ICD), there is a notable shortage of non-black-box approaches, especially for Spanish, a language frequently overlooked in clinical language classification. A further gap concerns explainability itself: there are no standardized metrics for evaluating the degree of explainability offered by different techniques.
Methods:
We address the classification of Spanish electronic health records, using methods that explain the predictions and improve the level of decision support. We also propose Leberage, a novel metric to quantify the decision support level of explainable predictions.
We assess the explanatory ability of three model-independent methods grounded in different theoretical frameworks: SHapley Additive exPlanations (SHAP), Local Interpretable Model-agnostic Explanations (LIME), and Integrated Gradients (IG). We develop a system based on Longformers that can process long documents, and then apply the explainability methods to extract the segments of text in the EHR that motivated each assigned ICD code. Finally, we compare the different explainability methods using the proposed metric.
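The abstract does not detail the implementation, but the pipeline it describes can be sketched as follows: a Longformer classifier scores ICD codes for a long clinical note, and a post-hoc explainer extracts the text segments that drove each predicted code. The sketch below uses LIME as the example explainer and assumes the Hugging Face transformers and lime libraries; the public allenai/longformer-base-4096 checkpoint, the illustrative ICD code list, and the 0.5 decision threshold are placeholder assumptions, not the authors' setup (in practice a Spanish clinical checkpoint fine-tuned for ICD coding would be used).

```python
# Minimal sketch (not the authors' code): Longformer ICD coding + LIME explanations.
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from lime.lime_text import LimeTextExplainer

MODEL = "allenai/longformer-base-4096"   # placeholder; a fine-tuned clinical model in practice
ICD_CODES = ["I10", "E11.9", "J44.9"]    # illustrative ICD-10 codes, not the paper's label set

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL,
    num_labels=len(ICD_CODES),
    problem_type="multi_label_classification",  # one sigmoid per ICD code
)
model.eval()

def predict_proba(texts):
    """LIME-compatible classifier: list of strings -> (n, n_labels) probability array."""
    enc = tokenizer(list(texts), truncation=True, max_length=4096,
                    padding=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits
    return torch.sigmoid(logits).numpy()

explainer = LimeTextExplainer(class_names=ICD_CODES)

ehr_text = "Paciente con hipertensión arterial en tratamiento con enalapril ..."
probs = predict_proba([ehr_text])[0]
assigned = [i for i, p in enumerate(probs) if p > 0.5]  # assumed decision threshold

# For each predicted ICD code, surface the words that motivated the prediction.
for label in assigned:
    exp = explainer.explain_instance(ehr_text, predict_proba,
                                     labels=(label,), num_features=10,
                                     num_samples=500)
    print(ICD_CODES[label], exp.as_list(label=label))
```

SHAP and IG would plug into the same predict function analogously (e.g., via the shap library or Captum's IntegratedGradients over the model's embeddings), yielding per-token attributions that can be compared across the three methods.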
Results:
Our system outperforms previous approaches to the same task by 7%. In terms of degree of explainability, LIME emerges as a stronger technique than IG and SHAP.
Discussion:
Our research shows that the explored techniques are useful for explaining the output of black-box models such as the Longformer. In addition, the proposed metric proves to be a good choice for quantifying the contribution of explainability techniques.