dgs.models.metric.metric.compute_near_k_accuracy

dgs.models.metric.metric.compute_near_k_accuracy(a_pred: torch.Tensor, a_targ: torch.Tensor, ks: list[int]) → dict[int, float]

Compute the accuracy of the predictions within a margin of k percent, for every k in ks.

Test whether the predicted alpha probability (\(\alpha_{\mathrm{pred}}\)) matches the given ground-truth probability (\(\alpha_{\mathrm{correct}}\)). With \(\alpha_{\mathrm{pred}} = \frac{\mathrm{nof\ correct}}{\mathrm{nof\ total}}\), \(\alpha_{\mathrm{pred}}\) is counted as correct if \(\alpha_{\mathrm{pred}} - k \leq \alpha_{\mathrm{correct}} \leq \alpha_{\mathrm{pred}} + k\).

Parameters:
  • a_pred – The predicted alpha probabilities as a tensor of shape [N (x 1)].

  • a_targ – The correct / target alpha probabilities as a tensor of shape [N (x 1)].

  • ks – A list of length K containing percentage values. For each k, the predictions are checked against a margin of k percent.

Returns:

A dict mapping each integer k to the respective accuracy.
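
A minimal sketch of this metric, assuming the values in ks are whole-number percentages and both alpha tensors hold probabilities in [0, 1]; the actual implementation in dgs may differ in details such as validation or device handling.

    import torch

    def compute_near_k_accuracy(
        a_pred: torch.Tensor, a_targ: torch.Tensor, ks: list[int]
    ) -> dict[int, float]:
        # Flatten possible [N x 1] inputs to [N].
        a_pred = a_pred.flatten().float()
        a_targ = a_targ.flatten().float()
        accuracies: dict[int, float] = {}
        for k in ks:
            # Assumption: k is a whole percentage, so the margin is k / 100.
            margin = k / 100.0
            within = (a_pred - margin <= a_targ) & (a_targ <= a_pred + margin)
            # Accuracy for this k: fraction of predictions within the margin.
            accuracies[k] = within.float().mean().item()
        return accuracies

For example, with a_pred = [0.50, 0.80, 0.10], a_targ = [0.52, 0.70, 0.10], and ks = [1, 5, 10], this sketch would return roughly {1: 0.33, 5: 0.67, 10: 1.0}.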