Top-N Accuracy Metrics

The lenskit.metrics.topn module contains metrics for evaluating top-N recommendation lists.

Classification Metrics

These metrics treat the recommendation list as a classification of relevant items.

lenskit.metrics.topn.precision(recs, relevant)

Compute the precision of a set of recommendations.

Parameters:
  • recs (array-like) – a sequence of recommended items
  • relevant (set-like) – the set of relevant items
Returns:
  the fraction of recommended items that are relevant
Return type:
  double
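
A minimal usage sketch, assuming a hand-built recommendation array and relevant-item set (the item IDs and values are illustrative):

    import numpy as np
    from lenskit.metrics import topn

    recs = np.array([10, 20, 30, 40, 50])   # recommended items, in order
    relevant = {20, 50, 70}                  # items the user actually liked

    # 2 of the 5 recommended items are relevant, so precision is 0.4
    prec = topn.precision(recs, relevant)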

lenskit.metrics.topn.recall(recs, relevant)

Compute the recall of a set of recommendations.

Parameters:
  • recs (array-like) – a sequence of recommended items
  • relevant (set-like) – the set of relevant items
Returns:
  the fraction of relevant items that were recommended
Return type:
  double
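
A companion sketch for recall, using the same assumed data as the precision example above:

    import numpy as np
    from lenskit.metrics import topn

    recs = np.array([10, 20, 30, 40, 50])
    relevant = {20, 50, 70}

    # 2 of the 3 relevant items were recommended, so recall is about 0.667
    rec = topn.recall(recs, relevant)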

Ranked List Metrics

These metrics treat the recommendation list as a ranked list of items that may or may not be relevant.

lenskit.metrics.topn.recip_rank(recs, relevant)

Compute the reciprocal rank of the first relevant item in a recommendation list. This is used to compute MRR.

Parameters:
  • recs (array-like) – a sequence of recommended items
  • relevant (set-like) – the set of relevant items
Returns:
  the reciprocal rank of the first relevant item
Return type:
  double
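
A minimal sketch for a single list; averaging recip_rank over many users' lists yields MRR (the data here is illustrative):

    import numpy as np
    from lenskit.metrics import topn

    recs = np.array([10, 20, 30])
    relevant = {30, 99}

    # the first relevant item appears at rank 3, so the reciprocal rank is 1/3
    rr = topn.recip_rank(recs, relevant)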

Utility Metrics

The DCG function estimates a utility score for a ranked list of recommendations. The results can be combined with ideal DCGs to compute nDCG.

lenskit.metrics.topn.dcg(scores, discount=<ufunc 'log2'>)

Compute the Discounted Cumulative Gain of a series of recommended items. The scores should be relevance scores; they can be \(\{0,1\}\) for binary relevance data.

Discounted cumulative gain is computed as:

\[\begin{align*} \mathrm{DCG}(L,u) & = \sum_{i=1}^{|L|} \frac{r_{ui}}{d(i)} \end{align*}\]

You will usually want normalized discounted cumulative gain; this is

\[\begin{align*} \mathrm{nDCG}(L, u) & = \frac{\mathrm{DCG}(L,u)}{\mathrm{DCG}(L_{\mathrm{ideal}}, u)} \end{align*}\]

Compute it by taking the DCG of the recommendations and the DCG of the test data, then merging the results and dividing. The compute_ideal_dcgs() function is helpful for preparing the ideal DCGs.

Parameters:
  • scores (array-like) – The utility scores of a list of recommendations, in recommendation order.
  • discount (ufunc) – the rank discount function. Each item’s score will be divided by the discount of its rank, if the discount is greater than 1.
Returns:
  the DCG of the scored items
Return type:
  double
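
A brief sketch of calling dcg with graded and with binary relevance scores (the score values are illustrative):

    import numpy as np
    from lenskit.metrics import topn

    # relevance scores of the recommended items, in recommendation order
    scores = np.array([3.0, 0.0, 5.0, 2.0])
    graded = topn.dcg(scores)                 # default log2 rank discount

    # binary relevance data works the same way
    binary = topn.dcg(np.array([1, 0, 1, 1]))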

lenskit.metrics.topn.compute_ideal_dcgs(ratings, discount=<ufunc 'log2'>)

Compute the ideal DCG for rating data. This groups the rating data by everything except its item and rating columns, sorts each group by rating, and computes the DCG.

Parameters:
  • ratings (pandas.DataFrame) – a rating data frame with item, rating, and other columns
Returns:
  the data frame of DCG values; the item and rating columns in ratings are replaced by an ideal_dcg column
Return type:
  pandas.DataFrame
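
A sketch of the merge-and-divide workflow described above, assuming small hand-built data frames keyed by a user column (the frames, their values, and the column names other than item and rating are illustrative):

    import pandas as pd
    from lenskit.metrics import topn

    # test ratings: user, item, and rating columns
    ratings = pd.DataFrame({
        'user':   [1, 1, 1, 2, 2],
        'item':   [10, 20, 30, 10, 40],
        'rating': [4.0, 3.0, 5.0, 2.0, 4.0],
    })

    # ideal DCG per user; item and rating are replaced by ideal_dcg
    ideal = topn.compute_ideal_dcgs(ratings)

    # scored recommendations: rating holds each recommended item's
    # relevance score, in recommendation order
    recs = pd.DataFrame({
        'user':   [1, 1, 1, 2, 2],
        'item':   [30, 99, 10, 40, 77],
        'rating': [5.0, 0.0, 4.0, 4.0, 0.0],
    })
    rec_dcg = recs.groupby('user')['rating'].apply(topn.dcg)
    rec_dcg = rec_dcg.reset_index(name='dcg')

    # merge the per-user DCGs and divide to get nDCG
    ndcg = rec_dcg.merge(ideal, on='user')
    ndcg['ndcg'] = ndcg['dcg'] / ndcg['ideal_dcg']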