This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Explainable AI and Interpretable Model for Insurance Premium Prediction
Citations: 3
Authors: 2
Year: 2022
Abstract
Traditional machine learning metrics, such as precision, recall, accuracy, Mean Squared Error (MSE), and Root Mean Square Error (RMSE), do not give practitioners sufficient confidence in the performance and dependability of their models. There is therefore a need to explain the model to machine-learning professionals, to establish trust in its predictions, and to provide a human-understandable explanation to domain specialists. This was achieved by developing a model-independent and locally accurate set of explanations. This set makes the conclusions of the primary models understandable to anyone in the insurance industry, experts and non-experts alike. The interpretability of such a model is vital for effective human interaction with machine learning systems. It is also important to explain individual predictions, so that trust in each prediction can be gauged, complementing aggregate validation in model selection. This study therefore proposes the use of the LIME and SHAP approaches to understand and explain a random forest regression model developed to predict insurance premiums. The drawback of the SHAP algorithms, as observed in these experiments, is the lengthy computation time required to evaluate every possible feature combination. The experiments focused on the model's interpretability and explainability using LIME and SHAP, not on insurance premium prediction itself. Two experiments were conducted: the first interpreted the random forest regression model using LIME, while the second interpreted the model using SHAP.
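As a rough illustration of the workflow the abstract describes, the sketch below applies LIME and SHAP to a random forest regressor. The feature names and synthetic data are illustrative assumptions, not the paper's dataset, and the calls shown are the standard `lime` and `shap` library APIs rather than the authors' exact pipeline.

```python
# Minimal sketch: explaining a random forest premium-prediction model
# with LIME (local surrogate) and SHAP (Shapley value attribution).
# Feature names and data below are assumptions for demonstration only.
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
feature_names = ["age", "bmi", "children", "smoker"]  # assumed features
X = rng.random((500, 4))
# Toy premium target: smoking and age dominate, plus noise.
y = 1000 + 5000 * X[:, 3] + 200 * X[:, 0] + rng.normal(0, 50, 500)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Experiment 1 (LIME): fit a locally faithful linear surrogate around
# one instance and report per-feature contributions to that prediction.
lime_explainer = LimeTabularExplainer(
    X, feature_names=feature_names, mode="regression")
lime_exp = lime_explainer.explain_instance(X[0], model.predict, num_features=4)
print(lime_exp.as_list())

# Experiment 2 (SHAP): attribute the same prediction to features via
# Shapley values; TreeExplainer exploits the tree structure to avoid
# enumerating every feature coalition.
shap_explainer = shap.TreeExplainer(model)
shap_values = shap_explainer.shap_values(X[:1])
print(dict(zip(feature_names, shap_values[0])))
```

The choice of `TreeExplainer` matters for the runtime drawback the abstract notes: exact model-agnostic SHAP scales exponentially with the number of features, whereas the tree-specific algorithm computes exact Shapley values in polynomial time.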
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,373 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,244 citations
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
2016 · 14,259 citations
On a Method to Measure Supervised Multiclass Model's Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,125 citations