Uncertainty metrics

alpaca.utils.ue_metrics.uq_accuracy(uq, errors, percentile=0.1)[source]

Measures the overlap between the worst points ranked by error and the worst points ranked by uq, within the given top percentile.
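
A minimal usage sketch based only on the signature above. The synthetic arrays and the reading of the result as an overlap between the two top-percentile sets are illustrative assumptions, not guaranteed by the library.

    import numpy as np

    from alpaca.utils.ue_metrics import uq_accuracy

    # Per-sample errors of some model and the uncertainty scores to evaluate
    # (synthetic data, imperfectly correlated on purpose).
    errors = np.abs(np.random.randn(1000))
    uq = errors + 0.1 * np.random.randn(1000)

    # Overlap between the top-10% highest-error points and the
    # top-10% highest-uncertainty points.
    score = uq_accuracy(uq, errors, percentile=0.1)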

alpaca.utils.ue_metrics.dcg(relevances, scores, k)[source]

Discounted cumulative gain (DCG), a metric of ranking quality. For UQ, relevance corresponds to the error and scores to the uq values.
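
A usage sketch plus the standard DCG@k formula for reference. The exact discount and gain used inside alpaca may differ, so the reference computation below is an assumption.

    import numpy as np

    from alpaca.utils.ue_metrics import dcg

    relevances = np.array([0.9, 0.1, 0.5, 0.7])   # e.g. per-sample errors
    scores = np.array([0.8, 0.2, 0.3, 0.9])       # e.g. uncertainty estimates

    value = dcg(relevances, scores, k=3)

    # Standard DCG@k for comparison: rank items by score (descending) and
    # accumulate relevance_i / log2(i + 1) over the top k positions.
    k = 3
    order = np.argsort(scores)[::-1][:k]
    reference = np.sum(relevances[order] / np.log2(np.arange(2, k + 2)))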

alpaca.utils.ue_metrics.ndcg(relevances, scores)[source]

Normalized DCG (NDCG). The actual DCG is normalized by the ideal DCG score. Expects relevances and scores to be numpy ndarrays.
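
A usage sketch with synthetic arrays. Since NDCG divides the achieved DCG by the ideal DCG, a value of 1.0 would mean the scores rank the items exactly as the relevances do (up to ties).

    import numpy as np

    from alpaca.utils.ue_metrics import ndcg

    relevances = np.array([3.0, 2.0, 0.0, 1.0])
    scores = np.array([0.7, 0.9, 0.1, 0.4])

    # DCG of the ranking induced by `scores`, divided by the ideal DCG
    # obtained when ranking by `relevances` themselves.
    value = ndcg(relevances, scores)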

alpaca.utils.ue_metrics.uq_ndcg(errors, uq, bins=None)[source]

In UQ we care most about the top errors, so the errors are restructured to give the top errors higher relevance.
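
A usage sketch with synthetic data. How the errors are restructured into relevance levels, and the expected format of the optional bins argument, are internal to the library, so only the default is shown.

    import numpy as np

    from alpaca.utils.ue_metrics import uq_ndcg

    errors = np.abs(np.random.randn(500))        # per-sample errors of some model
    uq = errors + 0.2 * np.random.randn(500)     # uncertainty estimates to evaluate

    # Errors are mapped to relevance levels (largest errors -> highest
    # relevance) and scored with NDCG against the ranking induced by `uq`.
    value = uq_ndcg(errors, uq)                  # bins=None uses the default binning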

alpaca.utils.ue_metrics.classification_metric(uncertainties, correct_predictions)[source]

Classification metric: relates the uncertainty estimates to the correctness of the predictions.
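
Only the signature is documented here; the 0/1 encoding of correct_predictions in the sketch below is an assumption.

    import numpy as np

    from alpaca.utils.ue_metrics import classification_metric

    uncertainties = np.array([0.9, 0.1, 0.4, 0.8])
    # Assumed encoding: 1 where the classifier was correct, 0 where it was wrong.
    correct_predictions = np.array([0, 1, 1, 0])

    value = classification_metric(uncertainties, correct_predictions)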