Accountability Metrics Page

This page presents several accountability metrics for a model.

The metrics table lists model performance metrics (such as precision, recall, and F1 score) for each class.
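As a sketch of what such a per-class table contains, the snippet below computes precision, recall, and F1 for each class from labels and predictions. The function name and the example data are illustrative, not taken from the page.

```python
def per_class_metrics(y_true, y_pred):
    """Compute precision, recall, and F1 for each class label.

    Illustrative helper; the page's actual table may include
    additional metrics.
    """
    metrics = {}
    for cls in sorted(set(y_true) | set(y_pred)):
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p == cls)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != cls and p == cls)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p != cls)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        metrics[cls] = {"precision": precision, "recall": recall, "f1": f1}
    return metrics
```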

The confusion matrix shows the number of True Negatives (predicted negative, observed negative), True Positives (predicted positive, observed positive), False Negatives (predicted negative but observed positive), and False Positives (predicted positive but observed negative). Because predictions are thresholded, different cutoff values yield different numbers of False Positives and False Negatives; this plot helps you find the optimal cutoff.
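The cutoff trade-off can be sketched as below: counting the four confusion-matrix cells at a given cutoff shows how raising the cutoff trades False Positives for False Negatives. The data and function name are hypothetical.

```python
def confusion_counts(y_true, y_score, cutoff):
    """Count TP, FP, TN, FN when scores are thresholded at `cutoff`."""
    counts = {"TP": 0, "FP": 0, "TN": 0, "FN": 0}
    for t, s in zip(y_true, y_score):
        pred = 1 if s >= cutoff else 0
        if pred == 1 and t == 1:
            counts["TP"] += 1
        elif pred == 1 and t == 0:
            counts["FP"] += 1
        elif pred == 0 and t == 0:
            counts["TN"] += 1
        else:
            counts["FN"] += 1
    return counts

# Raising the cutoff turns a False Positive into a True Negative here,
# at the risk of converting True Positives into False Negatives.
```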

This classification plot shows the fraction of each class above and below the cutoff.
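The quantity behind this plot can be computed as follows: for each class, the fraction of its samples whose predicted score falls above the cutoff (the remainder fall below). This is an illustrative reconstruction, not the page's own code.

```python
def class_fractions_above(y_true, y_score, cutoff):
    """For each class, the fraction of its samples scored at or above
    the cutoff (1 minus this value is the fraction below)."""
    fractions = {}
    for cls in sorted(set(y_true)):
        scores = [s for t, s in zip(y_true, y_score) if t == cls]
        above = sum(1 for s in scores if s >= cutoff)
        fractions[cls] = above / len(scores)
    return fractions
```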

The precision plot shows precision values binned into equal-width prediction-probability intervals. It provides an overview of how precision changes as the predicted probability increases.
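One common way to read "precision binned by prediction probability" is the fraction of observed positives within each equal-width probability bin; the sketch below assumes that reading, and the bin count is arbitrary.

```python
def binned_precision(y_true, y_score, n_bins=5):
    """Fraction of observed positives within each equal-width
    prediction-probability bin (None for empty bins).

    Assumed interpretation of the page's precision plot.
    """
    bins = [[] for _ in range(n_bins)]
    for t, s in zip(y_true, y_score):
        idx = min(int(s * n_bins), n_bins - 1)  # clamp s == 1.0 into last bin
        bins[idx].append(t)
    return [sum(b) / len(b) if b else None for b in bins]
```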

The currentness metric compares the execution time of an XAI method with the execution time of the underlying AI model. SHAP's currentness score is typically much higher than LIME's, because computing SHAP explanations is usually much slower.
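The page does not give an exact formula, but one plausible reading is the ratio of explanation time to prediction time. The sketch below measures both with `time.perf_counter`; the function names and the ratio definition are assumptions for illustration.

```python
import time

def currentness(explain_fn, predict_fn, *args):
    """Ratio of XAI explanation time to model prediction time.

    Assumed definition: a score well above 1 means the explainer is
    much slower than the model it explains (as with SHAP vs. LIME).
    """
    start = time.perf_counter()
    predict_fn(*args)
    t_model = time.perf_counter() - start

    start = time.perf_counter()
    explain_fn(*args)
    t_xai = time.perf_counter() - start

    return t_xai / t_model
```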
