lenskit.graphs#

Graph-based models, especially GNNs with torch_geometric.

class lenskit.graphs.LightGCNConfig(embedding_size=64, layer_count=2, layer_blend=None, batch_size=8192, learning_rate=0.01, epochs=10, regularization=0.01, loss='pairwise')#

Bases: object

Configuration for LightGCNScorer.

Stability:

Experimental

batch_size: Annotated[int, Gt(gt=0)] = 8192#

The training batch size.

embedding_size: Annotated[int, Gt(gt=0)] = 64#

The dimension of the embedding space (number of latent features). Seems to work best as a power of 2.

epochs: Annotated[int, Gt(gt=0)] = 10#

The number of training epochs.

layer_blend: Annotated[float, Gt(gt=0)] | list[Annotated[float, Gt(gt=0)]] | None = None#

The blending coefficient(s) used to combine the embeddings from the different propagation layers. This is equivalent to alpha in LightGCN.

layer_count: Annotated[int, Gt(gt=0)] = 2#

The number of layers to use.

learning_rate: Annotated[float, Gt(gt=0)] = 0.01#

The learning rate for training.

loss: Literal['logistic', 'pairwise'] = 'pairwise'#

The loss to use for model training.

pairwise

BPR pairwise ranking loss, using LightGCN.recommend_loss().

logistic

Logistic link prediction loss, using LightGCN.link_pred_loss().

regularization: Annotated[float, Gt(gt=0)] | None = 0.01#

The regularization strength.
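These fields map directly onto LightGCN's hyperparameters: layer_count is the number of propagation layers K, and layer_blend supplies the alpha coefficients LightGCN uses to combine the per-layer embeddings into the final one (e_u = Σ_{k=0..K} α_k · e_u^(k)). A minimal sketch of building a configuration using only the fields documented above; the values are illustrative, and anything left unspecified keeps the defaults shown in the signature:

    from lenskit.graphs import LightGCNConfig

    # Illustrative hyperparameter choices, not recommended settings.
    config = LightGCNConfig(
        embedding_size=128,     # latent dimension; powers of 2 tend to work well
        layer_count=3,          # number of propagation layers
        batch_size=4096,
        learning_rate=0.005,
        epochs=20,
        regularization=0.01,
        loss="logistic",        # or "pairwise" (the default) for BPR ranking loss
    )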

class lenskit.graphs.LightGCNScorer(config=None, **kwargs)#

Bases: UsesTrainer, Component[ItemList, …]

Scorer using LightGCN [He et al., 2020].

Stability:

Experimental

create_trainer(data, options)#

Create a model trainer to train this model.

to(device)#

Move the model to a different device.
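A usage sketch: because LightGCNScorer is a standard LensKit component, it can be placed in a top-N pipeline and trained on a prepared Dataset. The topn_pipeline helper, from_interactions_df, and Pipeline.train calls below follow the usual LensKit pipeline API, but treat the exact signatures as assumptions and consult the pipeline documentation; ratings_df is a hypothetical pandas DataFrame of interactions.

    from lenskit.data import from_interactions_df
    from lenskit.graphs import LightGCNConfig, LightGCNScorer
    from lenskit.pipeline import topn_pipeline

    # ratings_df: hypothetical DataFrame with user_id and item_id columns.
    data = from_interactions_df(ratings_df)

    scorer = LightGCNScorer(LightGCNConfig(embedding_size=64, loss="pairwise"))
    pipe = topn_pipeline(scorer)  # candidate selection + LightGCN scoring + top-N ranking
    pipe.train(data)              # trains the scorer via the trainer from create_trainer()

    # After training, the underlying model can be moved to another device.
    scorer.to("cuda")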

Modules

lightgcn

LightGCN recommendation.