Recently, I was asked to explore a car insurance premium dataset and predict a "fair" premium for each policyholder. The task included exploratory data analysis, feature engineering, model selection and comparison, and suggestions for model improvement. It was an interesting and insightful task, as our expectations often fail and we can't tackle the problem in a routine and straightforward way. Here, I touch on the general process and main points. The longer version of the analyses can be found in this GitHub repository.
The dataset belongs to a French car insurer and comes in two parts, which can be downloaded at this and this links. One table contains information about policyholders, and the other table consists of the claims and the claim amount for each policyholder, if any. Although the task is the prediction of a "fair" premium for each policyholder, in essence, insurance companies aim not to lose money in the long run while also increasing the number of their customers.
Let's take a look at the data. The two tables are in arff format, so I first load them and convert them to dataframe format.
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from scipy import stats
import arff
data_freq = arff.load("data/freMTPL2freq.arff")
df_freq = pd.DataFrame(
data_freq,
columns=[
"IDpol",
"ClaimNb",
"Exposure",
"Area",
"VehPower",
"VehAge",
"DrivAge",
"BonusMalus",
"VehBrand",
"VehGas",
"Density",
"Region",
],
)
data_sev = arff.load("data/freMTPL2sev.arff")
df_sev = pd.DataFrame(data_sev, columns=["IDpol", "ClaimAmount"])
As mentioned above, the first table contains information about policyholders. In total, there are more than 678K records in this dataset.
print(df_freq.shape)
df_freq.head()
(678013, 12)
The number of claims recorded in the second table is 26,639.
print(df_sev.shape)
df_sev.head()
(26639, 2)
As some policyholders have had more than one claim, we should aggregate the total claim amount for each policyholder. As shown below, the number of policyholders who have filed a claim at least once is 24,950.
# sum of claims per policyholder
df_sev_agg = (
df_sev.groupby("IDpol")["ClaimAmount"]
.sum()
.to_frame()
.rename(columns={"ClaimAmount": "SumClaimAmount"})
.reset_index()
)
df_sev_agg.shape
(24950, 2)
Now we can merge the two tables as shown below. Comparing the length of df_sev_agg with the non-null values in the merged dataframe, df_merged, reveals that for some claims we don't have information about their policyholders, and their data is missing. The left join has already removed these records from our dataset.
df_merged = pd.merge(df_freq, df_sev_agg, on="IDpol", how="left")
df_merged["SumClaimAmount"].notna().value_counts()
SumClaimAmount
False 653069
True 24944
Name: count, dtype: int64
The dependent variable is the sum of claim amounts divided by the exposure, which is the duration of an insurance contract in years. We compute the target variable and add it to the dataset as shown below.
Out of 678,013 contracts, only 24,944 have had a claim, which is about 3.7%. This indicates that the dataset is highly unbalanced. To analyze the variables, we'll focus on contracts with claim costs. When modeling, we can either analyze only contracts with claim costs or consider all contracts while being aware of the unbalanced classes.
# Calculate the target variable
df_merged["target"] = df_merged["SumClaimAmount"] / df_merged["Exposure"]

# Define numerical and categorical columns
num_cols = [
"Exposure",
"SumClaimAmount",
"VehPower",
"VehAge",
"DrivAge",
"BonusMalus",
"Density",
"target",
]
cat_cols = ["Area", "VehBrand", "VehGas", "Region"]
# Create a combined DataFrame with numerical columns and categorical columns converted to category type
df_all = pd.concat(
    [df_merged[num_cols]] + [df_merged[c].astype("category") for c in cat_cols], axis=1
)
# Dataframe of contracts with a claim cost
df = df_all[df_all["SumClaimAmount"].notna()]
# Add 'HasClaim' column to df_all indicating whether there was a claim
df_all["HasClaim"] = df_all["target"].notna()
# Fill missing 'target' values with 0.0
df_all["target"] = df_all["target"].fillna(0.0)
# df = df_all
First, we familiarize ourselves with the target variable.
fig, axs = plt.subplots(1, 3, figsize=(12, 4))
_ = axs[0].hist(df["target"], bins=100)
axs[0].set_title("Hist. of Target")
_ = axs[1].hist(df["target"].map(np.log10), bins=100)
axs[1].set_title("Hist. of log(Target)")
_ = axs[2].boxplot(df["target"].map(np.log10), labels=["log(Target)"])
_ = axs[2].set_title("Boxplot of Target")
The plots indicate that the dependent variable, target, exhibits a very wide range and is highly skewed. Moreover, there are some data points with exceptionally high or low values, which may be considered outliers. However, we should be careful about labeling a data point as an "outlier", since we're dealing with an insurance problem.
Can we investigate where these outliers come from? Since target is calculated as SumClaimAmount divided by Exposure, it's worth examining each of these variables.
fig, axs = plt.subplots(1, 3, figsize=(12, 3))
_ = axs[0].hist(df["Exposure"], bins=50)
axs[0].set_title("Exposure")
_ = axs[1].hist(df["SumClaimAmount"], bins=50)
axs[1].set_title("SumClaimAmount")
_ = axs[2].hist(df["SumClaimAmount"].map(np.log10), bins=50)
_ = axs[2].set_title("log(SumClaimAmount)")
plt.tight_layout()
The plots reveal instances in the dataset where Exposure values are very low, resulting in disproportionately high values for the dependent variable. However, considering the low likelihood of filing a claim, defining the dependent variable as SumClaimAmount divided by Exposure as a measure of the expected claim cost for each policyholder appears to overestimate the cost. This, in turn, leads to numerous extreme values, making the distribution of the dependent variable highly skewed.
On the other hand, there are cases with a very high SumClaimAmount. Thus, removing instances that appear to be outliers requires careful consideration.
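As a quick check, we can count how many claim records combine a very short exposure with a large claim amount and inspect the largest target values; the thresholds below (0.1 years and 100,000) are arbitrary choices made only for illustration.
# Hypothetical thresholds, chosen only for illustration
low_exposure = df["Exposure"] < 0.1           # contracts active for only a few weeks
high_claim = df["SumClaimAmount"] > 100_000   # unusually large total claim amounts
print(f"Claims with very low Exposure: {low_exposure.sum()}")
print(f"Claims with very high SumClaimAmount: {high_claim.sum()}")
# Inspect the contracts with the largest target values
print(df.loc[df["target"].nlargest(10).index, ["Exposure", "SumClaimAmount", "target"]])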
As target has an enormous range and is skewed, we may also consider analyzing and using its log-transformed values.
df["targetLog"] = df["target"].map(np.log10)
_ = sns.histplot(data=df, x="targetLog")
If we decide to remove outliers, we can use the IQR or Z-score method. Here, the latter is applied to the log values of the response variable.
# Z-score
z_scores = stats.zscore(df["targetLog"])
threshold = 3
outliers = (z_scores < -threshold) | (z_scores > threshold)
df["IsOutlier"] = outliers
print(f"Number of outliers: {outliers.value_counts()[True]}")
Number of outliers: 299
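For comparison, an IQR-based filter on the same log-transformed target could look like the following sketch; the 1.5 × IQR multiplier is just the usual convention, not something prescribed by the task.
# IQR-based outlier flag on the log-transformed target (1.5 * IQR is a common convention)
q1, q3 = df["targetLog"].quantile([0.25, 0.75])
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
iqr_outliers = (df["targetLog"] < lower) | (df["targetLog"] > upper)
print(f"Number of IQR outliers: {iqr_outliers.sum()}")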
Focusing on policyholders with claim costs, we analyze the numerical variables. In the Jupyter notebook on GitHub, you can see the plots for each variable. Here, I briefly mention them:
- VehPower: Vehicle power.
- VehAge: Vehicle age ranges from 0 to 100. Only 20 instances have VehAge > 30. These instances could be removed during modeling and treated differently later if necessary.
- DrivAge: Only 14 instances have DrivAge > 90. These instances could be removed during modeling and treated differently later if necessary.
- BonusMalus: Only 40 instances have BonusMalus > 150. These instances could be removed during modeling and treated differently later if necessary.
- Density: As shown in the plots below, density has a very wide range, and its distribution seems to follow a power law. So, we can also convert it to a log scale so that it resembles a normal distribution.
fig, axs = plt.subplots(2, 2, figsize=(12, 8), width_ratios=(1, 1.5))
axs = axs.flatten()
_ = sns.histplot(data=df, x="Density", ax=axs[0])
bins = pd.cut(df["Density"], right=False, bins=range(0, 30000, 500))
df_tmp = df.groupby(bins, observed=True)["target"].mean().to_frame().reset_index()
_ = sns.barplot(df_tmp, x="Density", y="target", errorbar=None, ax=axs[1])
axs[1].set_ylabel("Mean Target")
axs[1].tick_params(axis="x", rotation=90)
df["DensityLog"] = df["Density"].map(np.log10)
df_all["DensityLog"] = df_all["Density"].map(np.log10)
_ = sns.histplot(df["DensityLog"], ax=axs[2])
bins = pd.cut(df["DensityLog"], right=False, bins=np.linspace(0, 5, 50))
df_tmp = df.groupby(bins, observed=True)["target"].mean().to_frame().reset_index()
_ = sns.barplot(df_tmp, x="DensityLog", y="target", errorbar=None, ax=axs[3])
axs[3].tick_params(axis="x", rotation=90)
plt.tight_layout()
As shown in the plot below, the numerical variables exhibit very low correlation with both target and targetLog. Therefore, these features may not be very effective in predicting the dependent variables.
plt.figure(figsize=(8, 8))
cols = [
"target",
"targetLog",
"SumClaimAmount",
"Exposure",
"VehPower",
"VehAge",
"DrivAge",
"BonusMalus",
"Density",
"DensityLog",
]
_ = sns.heatmap(df[cols].corr(), annot=True, linewidths=1, fmt=".3f")
You can see the plots for the categorical variables in the Jupyter notebook on GitHub. Here, we conduct statistical analyses of these variables. An ANOVA test shows that the distribution of the target variable is not significantly different across any of the categorical variables. However, some categorical variables exhibit a significant difference for targetLog. Consequently, modeling targetLog may yield better results.
def ANOVA_test(target_col):
    for c in cat_cols:
        df_tmp = [g[1] for g in df.groupby([c], observed=True)[target_col]]
        _, pvalue = stats.f_oneway(*df_tmp)
        print(f"Variable: {c}\tp-value: {pvalue}")


print("Dependent Variable: Target")
ANOVA_test("target")
print("\n\nDependent Variable: log(Target)")
ANOVA_test("targetLog")
Dependent Variable: Target
Variable: Area      p-value: 0.5621655789253363
Variable: VehBrand  p-value: 0.6401726096346133
Variable: VehGas    p-value: 0.22047970212461002
Variable: Region    p-value: 0.9735818647330804

Dependent Variable: log(Target)
Variable: Area      p-value: 2.864841485854489e-08
Variable: VehBrand  p-value: 3.6498732677036494e-60
Variable: VehGas    p-value: 0.8569818310183478
Variable: Region    p-value: 6.55718358261526e-36
The statistical analysis of the features revealed that we don't have any standout features that can be used to predict the dependent variable. First, we attempt to predict the expected claim amount for a policyholder given that he or she has a claim.
We try three models:
- SVR (Support Vector Regression)
- XGBoost
- Random Forest
import xgboost as xgb
from sklearn import preprocessing
from sklearn.compose import ColumnTransformer
from sklearn.metrics import mean_squared_error, classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR


def get_SVR_model():
    preprocess = ColumnTransformer(
        transformers=[
            (
                "Cat",
                Pipeline(
                    [
                        ("OneHot", preprocessing.OneHotEncoder()),
                        ("Normalizer", preprocessing.StandardScaler(with_mean=False)),
                    ]
                ),
                cat_cols,
            ),
            ("Num-Normalizer", preprocessing.StandardScaler(), num_cols),
        ]
    )
    model = SVR()
    pipeline = Pipeline([("preprocess", preprocess), ("svr_model", model)])
    return pipeline
def get_XGB_model():
    preprocess = ColumnTransformer(
        [
            ("OrdinalEncoder", preprocessing.OrdinalEncoder(), cat_cols),
            ("Normalizer", preprocessing.StandardScaler(), num_cols),
        ]
    )
    model = xgb.XGBRegressor()
    pipeline = Pipeline([("preprocess", preprocess), ("xgb_model", model)])
    return pipeline


def get_RF_model():
    preprocess = ColumnTransformer(
        [
            ("OrdinalEncoder", preprocessing.OrdinalEncoder(), cat_cols),
            ("Normalizer", preprocessing.StandardScaler(), num_cols),
        ]
    )
    model = RandomForestRegressor()
    pipeline = Pipeline([("preprocess", preprocess), ("rf_model", model)])
    return pipeline
def get_test_train(df, cols, target):
    X = df[cols]
    y = df[target]
    return train_test_split(X, y, test_size=0.20, random_state=37)


def train(model):
    model.fit(train_x, train_y)
    pred_y = model.predict(test_x)
    mse = mean_squared_error(test_y, pred_y)
    return mse


def filter_dataset(df):
    df_tmp = df.copy()
    df_tmp = df_tmp[df_tmp["VehAge"] <= 30]
    df_tmp = df_tmp[df_tmp["DrivAge"] <= 90]
    df_tmp = df_tmp[df_tmp["BonusMalus"] <= 150]
    df_tmp = df_tmp[~df_tmp["IsOutlier"]]
    return df_tmp
We use mean squared error to evaluate the results and compare them with a baseline that simply predicts the mean of the target. We train and test models for both target and targetLog.
num_cols = ["VehPower", "VehAge", "DrivAge", "BonusMalus", "DensityLog"]
cat_cols = ["Area", "VehBrand", "VehGas", "Region"]
cols = cat_cols + num_cols

df_tmp = filter_dataset(df)
target = "target"
train_x, test_x, train_y, test_y = get_test_train(df_tmp, cols, target)
mse = mean_squared_error(test_y, [train_y.mean()] * len(test_y))
print(f"Baseline - Root mean squared error (RMSE): {np.sqrt(mse):.2f}")

rf_model = get_RF_model()
mse = train(rf_model)
print(f"\nRandomForest - Root mean squared error (RMSE): {np.sqrt(mse):.2f}")

xgb_model = get_XGB_model()
mse = train(xgb_model)
print(f"\nXGB - Root mean squared error (RMSE): {np.sqrt(mse):.2f}")

svr_model = get_SVR_model()
mse = train(svr_model)
print(f"\nSVR - Root mean squared error (RMSE): {np.sqrt(mse):.2f}")
Baseline - Root mean squared error (RMSE): 9983.06
RandomForest - Root mean squared error (RMSE): 10464.24
XGB - Root mean squared error (RMSE): 10568.72
SVR - Root mean squared error (RMSE): 10333.84
target = "targetLog"
train_x, test_x, train_y, test_y = get_test_train(df_tmp, cols, target)
mse = mean_squared_error(test_y, [train_y.mean()] * len(test_y))
print(f"Baseline - Root mean squared error (RMSE): {np.power(10, np.sqrt(mse)):.2f}")

rf_model = get_RF_model()
mse = train(rf_model)
print(
    f"\nRandomForest - Root mean squared error (RMSE): {np.power(10, np.sqrt(mse)):.2f}"
)

xgb_model = get_XGB_model()
mse = train(xgb_model)
print(f"\nXGB - Root mean squared error (RMSE): {np.power(10, np.sqrt(mse)):.2f}")

svr_model = get_SVR_model()
mse = train(svr_model)
print(f"\nSVR - Root mean squared error (RMSE): {np.power(10, np.sqrt(mse)):.2f}")
Baseline - Root mean squared error (RMSE): 3.69
RandomForest - Root mean squared error (RMSE): 3.73
XGB - Root mean squared error (RMSE): 3.74
SVR - Root mean squared error (RMSE): 3.64
The above models didn't perform satisfactorily. In fact, they performed worse than the baseline, which was simply the mean of the target variable. However, these results aren't surprising given the results of the statistical tests we saw earlier.
We could try parameter optimization here. In this post and this GitHub repository, I've explained how to optimize hyperparameters for XGBoost, and the method can easily be generalized to other models as well. However, given the poor performance of the models with the default settings, I don't think parameter optimization will change anything here.
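For illustration only, a search over the XGBoost pipeline defined above could look roughly like the sketch below; the parameter grid and the choice of RandomizedSearchCV with MSE scoring are arbitrary examples, not results from this analysis.
from sklearn.model_selection import RandomizedSearchCV

# Hypothetical search space; names are prefixed with the "xgb_model" pipeline step
param_distributions = {
    "xgb_model__n_estimators": [100, 300, 500],
    "xgb_model__max_depth": [3, 5, 7],
    "xgb_model__learning_rate": [0.01, 0.05, 0.1],
    "xgb_model__subsample": [0.7, 0.9, 1.0],
}
search = RandomizedSearchCV(
    get_XGB_model(),
    param_distributions=param_distributions,
    n_iter=20,
    scoring="neg_mean_squared_error",
    cv=3,
    random_state=37,
)
search.fit(train_x, train_y)
print(search.best_params_)
print(f"Best CV RMSE: {np.sqrt(-search.best_score_):.2f}")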
We didn't get satisfactory results with regression. Now, I'm interested in investigating whether we can predict if a policyholder files a claim using the features. Thus, we convert the problem into a binary classification problem.
We already know that in the dataset, only 3.7% of policyholders have filed a claim, while 96.3% haven't. Such a highly imbalanced dataset requires careful handling of the imbalanced classes. Detecting policyholders who file a claim, i.e., recall, is more important than the accuracy or precision of the model. By using class weights and weighting the positive class (the instances with HasClaim=True), we can guide the model towards our desired outcome.
For this classification task, I use logistic regression.
import torch
from torch.utils.data import DataLoader, TensorDataset

num_cols = ["VehPower", "VehAge", "DrivAge", "BonusMalus", "DensityLog"]
cat_cols = ["Area", "VehBrand", "VehGas", "Region"]
# Preprocess the data
preprocess = ColumnTransformer(
transformers=[
(
"cat",
Pipeline(
[
("OneHot", preprocessing.OneHotEncoder()),
(
"Normalizer",
preprocessing.StandardScaler(with_mean=False),
),
]
),
cat_cols,
),
("num", preprocessing.StandardScaler(), num_cols),
]
)
target = "HasClaim"
# Split the data into training and test sets
X = df_all[num_cols + cat_cols]
y = df_all[target]
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=37
)
X_train = preprocess.fit_transform(X_train).toarray()
# Use the transformer fitted on the training data to transform the test set
X_test = preprocess.transform(X_test).toarray()
X_train_torch = torch.tensor(X_train, dtype=torch.float32)
y_train_torch = torch.tensor(list(y_train), dtype=torch.float32)
X_test_torch = torch.tensor(X_test, dtype=torch.float32)
y_test_torch = torch.tensor(list(y_test), dtype=torch.float32)
# Create datasets
train_dataset = TensorDataset(X_train_torch, y_train_torch)
test_dataset = TensorDataset(X_test_torch, y_test_torch)
# Create information loaders
train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True)
test_loader = DataLoader(test_dataset, batch_size=32, shuffle=False)
import torch.nn as nn
from sklearn.utils.class_weight import compute_class_weight
from tqdm import tqdm

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
class LogisticRegression(nn.Module):
    def __init__(self, input_dim):
        super(LogisticRegression, self).__init__()
        self.linear = nn.Linear(input_dim, 1)

    def forward(self, x):
        # Return raw logits; BCEWithLogitsLoss applies the sigmoid internally
        return self.linear(x)
def train_model(num_epochs, positive_class_weight=None):
    if positive_class_weight is not None:
        class_weights = compute_class_weight(
            "balanced", classes=np.unique(y_train), y=y_train
        )
        class_weight_tensor = torch.tensor(
            [positive_class_weight * class_weights[1]], dtype=torch.float32
        )
        # Define the loss function with a weight on the positive class
        criterion = nn.BCEWithLogitsLoss(pos_weight=class_weight_tensor)
    else:
        criterion = nn.BCEWithLogitsLoss()

    # Move the model and criterion to GPU if available
    model = LogisticRegression(X_train.shape[1]).to(device)
    criterion = criterion.to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

    print("Training ...")
    for epoch in range(num_epochs):
        model.train()
        for data, target in train_loader:
            data, target = data.to(device), target.to(device)
            optimizer.zero_grad()
            output = model(data)
            loss = criterion(output, target.unsqueeze(1))
            loss.backward()
            optimizer.step()
        if (epoch + 1) % 5 == 0:
            print(f"Epoch {epoch+1}, Loss: {loss.item()}")
    return model
def evaluate(model):
    model.eval()
    y_pred = []
    with torch.no_grad():
        for data, target in test_loader:
            data, target = data.to(device), target.to(device)
            outputs = model(data)
            # Convert logits to probabilities before thresholding at 0.5
            predicted = (torch.sigmoid(outputs.data) > 0.5).float()
            y_pred += predicted.tolist()
    print(classification_report(y_test, y_pred, zero_division=0))
We can try different values for the weight of the positive class. If we set it to a higher value, we can achieve a recall of 73%, at the expense of lower precision and accuracy.
model = train_model(50, positive_class_weight=4)
evaluate(model)
Training ...
Epoch 5, Loss: 1.093119740486145
Epoch 10, Loss: 1.0352026224136353
Epoch 15, Loss: 1.1928868293762207
Epoch 20, Loss: 2.6955227851867676
Epoch 25, Loss: 1.1254448890686035
Epoch 30, Loss: 0.9496679306030273
Epoch 35, Loss: 0.9839045405387878
Epoch 40, Loss: 1.033744215965271
Epoch 45, Loss: 0.9856802225112915
Epoch 50, Loss: 1.0736278295516968
              precision    recall  f1-score   support

       False       0.98      0.42      0.58    130699
        True       0.05      0.73      0.08      4904

    accuracy                           0.43    135603
   macro avg       0.51      0.57      0.33    135603
weighted avg       0.94      0.43      0.56    135603
Hypothetically, the insurance premium for each policyholder can initially be set based on the probability of filing a claim, with a safety margin included. Then, we use the classification model to adjust the premium: if the policyholder is classified as positive, we increase their premium proportionally based on the statistics of previously filed ClaimAmounts.
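As a rough sketch of that idea, and nothing more, the rule could look like the snippet below; the safety margin, the loading factor, and the helper premium_for are all hypothetical choices made for this example, with predicted_positive standing in for the classifier's output.
# Hypothetical pricing sketch: every constant here is an illustrative assumption
expected_claim = df_all["target"].mean()  # rough average of the annualized claim cost over all contracts
safety_margin = 1.10                      # e.g., a 10% margin on top of the expected cost
base_premium = expected_claim * safety_margin

# Typical claim amount among policyholders who actually filed a claim
typical_claim = df_sev_agg["SumClaimAmount"].median()

def premium_for(predicted_positive: bool) -> float:
    """Increase the premium for contracts the classifier flags as likely to claim."""
    if predicted_positive:
        # Loading proportional to the typical observed claim (the 5% factor is arbitrary)
        return base_premium + 0.05 * typical_claim
    return base_premium

print(f"Base premium: {base_premium:.2f}, adjusted premium: {premium_for(True):.2f}")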
This was a straightforward analysis of insurance premium modeling, intended to illustrate that machine learning models may not always meet our initial expectations. Nevertheless, through a shift in perspective and a thorough analysis of the problem, we can still utilize their potential effectively.