lenskit.metrics.MAE#
- class lenskit.metrics.MAE(missing_scores='error', missing_truth='error')#
Bases: PredictMetric, ListMetric, DecomposedMetric
Compute MAE (mean absolute error). This is computed as:

\[\frac{1}{|R|} \sum_{r_{ui} \in R} \left|r_{ui} - s(i|u)\right|\]

This metric does not do any fallbacks; if you want to compute MAE with fallback predictions (e.g. using a bias model when a collaborative filter cannot predict), generate predictions with FallbackScorer.

- __init__(missing_scores='error', missing_truth='error')#
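As an illustration of the formula above (a plain-Python sketch of the computation, not the class's internal implementation), MAE over aligned score and rating arrays can be computed as:

```python
import numpy as np

def mean_absolute_error(scores, truth):
    """Mean absolute error over aligned score/rating arrays."""
    scores = np.asarray(scores, dtype=float)
    truth = np.asarray(truth, dtype=float)
    # |r_ui - s(i|u)|, averaged over all rated items in R
    return float(np.mean(np.abs(truth - scores)))

mean_absolute_error([3.5, 4.0, 2.0], [4.0, 4.0, 1.0])  # 0.5
```

In the class itself, alignment of scores with truth (and handling of missing values) is governed by the missing_scores and missing_truth dispositions passed to the constructor.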
Methods

- __init__([missing_scores, missing_truth])
- align_scores(predictions[, truth]): Align prediction scores and rating values, applying the configured missing dispositions.
- compute_list_data(output, test): Compute measurements for a single list.
- extract_list_metric(metric): Extract a single-list metric from the per-list measurement result (if applicable).
- global_aggregate(values): Aggregate list metrics to compute a global value.
- measure_list(predictions[, test]): Compute the metric value for a single result list.
Attributes

- default: The default value to infer when computing statistics over missing values.
- label: The metric's default label in output.
- missing_scores
- missing_truth
- measure_list(predictions, test=None, /)#
Compute the metric value for a single result list.
Individual metric classes need to implement this method.
- compute_list_data(output, test)#
Compute measurements for a single list.
- extract_list_metric(metric)#
Extract a single-list metric from the per-list measurement result (if applicable).
- Returns:
  The per-list metric, or None if this metric does not compute per-list metrics.
- global_aggregate(values)#
Aggregate list metrics to compute a global value.
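To show why a decomposed metric aggregates intermediate per-list data rather than averaging per-list values, here is a hedged sketch. The intermediate representation (a sum of absolute errors plus an item count per list) is a hypothetical choice for illustration, not necessarily what the class stores internally:

```python
import numpy as np

def list_data(scores, truth):
    # Hypothetical per-list intermediate: (sum of absolute errors, item count).
    err = np.abs(np.asarray(truth, dtype=float) - np.asarray(scores, dtype=float))
    return err.sum(), err.size

def aggregate(values):
    # Pooling error sums and counts yields the global MAE over all
    # predictions, which differs from the unweighted mean of per-list
    # MAEs whenever list lengths vary.
    total_err = sum(s for s, _ in values)
    total_n = sum(n for _, n in values)
    return total_err / total_n

data = [list_data([3.0, 4.0], [4.0, 4.0]),  # per-list MAE 0.5 over 2 items
        list_data([2.0], [1.0])]            # per-list MAE 1.0 over 1 item
aggregate(data)  # global MAE: (1.0 + 1.0) / 3 ≈ 0.667, not (0.5 + 1.0) / 2
```

This is the motivation for the compute_list_data / global_aggregate split: the global value weights every prediction equally across lists.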