Uncertainty metrics
- alpaca.utils.ue_metrics.uq_accuracy(uq, errors, percentile=0.1)[source]
  Measures the intersection of the worst `percentile` fraction of points ranked by error with the worst fraction ranked by uncertainty (uq).
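A minimal sketch of what `uq_accuracy` might compute, under the assumption (not confirmed by the docstring) that it returns the overlap fraction between the top-`percentile` points by error and the top-`percentile` points by uncertainty; the function name `uq_accuracy_sketch` and the exact normalization are hypothetical:

```python
import numpy as np

def uq_accuracy_sketch(uq, errors, percentile=0.1):
    """Hypothetical sketch: fraction of the worst-`percentile` points by
    error that also fall in the worst-`percentile` points by uncertainty."""
    n = len(errors)
    k = max(1, int(n * percentile))
    worst_by_error = set(np.argsort(errors)[-k:])  # indices of largest errors
    worst_by_uq = set(np.argsort(uq)[-k:])         # indices of largest uncertainties
    return len(worst_by_error & worst_by_uq) / k

# Toy example: uncertainty roughly tracks error, so overlap is high.
errors = np.array([0.1, 0.9, 0.2, 0.8, 0.3])
uq = np.array([0.2, 0.7, 0.1, 0.9, 0.3])
print(uq_accuracy_sketch(uq, errors, percentile=0.4))  # → 1.0
```

A value near 1 means the uncertainty estimate successfully flags the highest-error points; a value near `percentile` is what random uncertainty scores would give.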
- alpaca.utils.ue_metrics.dcg(relevances, scores, k)[source]
  Discounted cumulative gain, a metric of ranking quality. For UQ, the relevance corresponds to the error and the scores are the uncertainty estimates.
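The standard DCG@k formula ranks items by predicted score and discounts each item's true relevance logarithmically by its rank. A self-contained sketch, assuming the conventional `rel_i / log2(i + 1)` discount (the library's exact variant is not shown here):

```python
import numpy as np

def dcg_sketch(relevances, scores, k):
    """Hypothetical DCG@k: rank items by score (descending), then sum the
    true relevances discounted by log2(rank + 1)."""
    order = np.argsort(scores)[::-1][:k]        # top-k items by predicted score
    ranked_rel = np.asarray(relevances)[order]  # their true relevances
    discounts = np.log2(np.arange(2, k + 2))    # log2(2), log2(3), ...
    return float(np.sum(ranked_rel / discounts))

rel = np.array([3.0, 1.0, 2.0])
sc = np.array([0.9, 0.8, 0.1])
print(dcg_sketch(rel, sc, k=2))  # 3/log2(2) + 1/log2(3)
```

In the UQ setting, passing errors as `relevances` and uncertainties as `scores` rewards uncertainty estimates that rank the largest errors first.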
- alpaca.utils.ue_metrics.ndcg(relevances, scores)[source]
  Normalized DCG: the actual DCG is normalized by the ideal DCG score. Expects relevances and scores to be numpy ndarrays.
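Normalization divides the actual DCG by the ideal DCG, i.e. the DCG obtained by ranking on the true relevances themselves, so a perfect ranking scores 1.0. A sketch under the same assumed discount as above (helper names are hypothetical):

```python
import numpy as np

def dcg(relevances, scores, k):
    # DCG@k with the conventional log2(rank + 1) discount.
    order = np.argsort(scores)[::-1][:k]
    discounts = np.log2(np.arange(2, k + 2))
    return float(np.sum(np.asarray(relevances)[order] / discounts))

def ndcg_sketch(relevances, scores):
    """Hypothetical NDCG: actual DCG divided by the ideal DCG, where the
    ideal ranking orders items by their true relevance."""
    k = len(relevances)
    ideal = dcg(relevances, relevances, k)  # perfect ordering
    return dcg(relevances, scores, k) / ideal if ideal > 0 else 0.0

rel = np.array([3.0, 1.0, 2.0])
print(ndcg_sketch(rel, rel))  # ranking by relevance itself → 1.0
```

Because NDCG is bounded in [0, 1], it allows comparing uncertainty estimators across datasets with different error scales.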