OpenAlex · Updated hourly · Last updated: 23 Mar 2026, 05:48

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Explainable AI and Interpretable Model for Insurance Premium Prediction

2022 · 3 citations · Open Access
Open full text at publisher

3 citations · 2 authors · Year: 2022

Abstract

Traditional machine learning metrics, such as precision, recall, accuracy, Mean Squared Error (MSE) and Root Mean Square Error (RMSE), do not provide practitioners with sufficient confidence in the performance and dependability of their models. There is therefore a need to explain a model both to machine-learning professionals, to establish trust in its predictions, and to domain specialists, in a human-understandable form. This was achieved by developing a model-independent, locally accurate explanation set that makes the conclusions of the primary models understandable to anyone in the insurance industry, expert or non-expert. The interpretability of such a model is vital for effective human interaction with machine learning systems. It is also important to provide individually explained predictions that help gauge trust, in addition to complementing and supporting validation during model selection. This study therefore proposes the use of the LIME and SHAP approaches to understand and explain a random forest regression model developed to predict insurance premiums. The drawback of the SHAP algorithm, as indicated in these experiments, is the lengthy computing time required to evaluate every possible feature combination needed to produce the results. The experiments focused on the model's interpretability and explainability using LIME and SHAP, not on insurance premium prediction itself. Two experiments were conducted: the first interpreted the random forest regression model using the LIME technique, while the second used the SHAP technique to interpret the model.

Related works

Authors

Institutions

Topics

Explainable Artificial Intelligence (XAI) · Machine Learning in Healthcare · Artificial Intelligence in Healthcare and Education