lenskit.graphs#
Graph-based models, especially GNNs with torch_geometric.
- class lenskit.graphs.LightGCNConfig(*, embedding_size=64, layer_count=2, layer_blend=None, batch_size=8192, learning_rate=0.01, epochs=10, regularization=0.01, loss='pairwise')#
Bases: EmbeddingSizeMixin, BaseModel

Configuration for LightGCNScorer.

- Stability: Experimental
- Parameters:
- model_config: ClassVar[ConfigDict] = {}#
Configuration for the model; should be a dictionary conforming to pydantic's ConfigDict.
- embedding_size: PositiveInt#
The dimension of the embedding space (number of latent features). Seems to work best as a power of 2.
- layer_count: PositiveInt#
The number of layers to use.
- layer_blend: PositiveFloat | list[PositiveFloat] | None#
The blending coefficient(s) used to combine the per-layer embeddings; equivalent to alpha in LightGCN (see the note following this attribute list).
- batch_size: PositiveInt#
The training batch size.
- learning_rate: PositiveFloat#
The learning rate for training.
- epochs: PositiveInt#
The number of training epochs.
- loss: Literal['logistic', 'pairwise']#
The loss to use for model training.

  pairwise
    BPR pairwise ranking loss, using LightGCN.recommend_loss().
  logistic
    Logistic link prediction loss, using LightGCN.link_pred_loss().
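For orientation, the layer_blend coefficients are the α_k weights that LightGCN uses to combine the embeddings produced at each propagation layer (this is from the LightGCN paper, not stated on this page):

$$\mathbf{e}_u = \sum_{k=0}^{K} \alpha_k \, \mathbf{e}_u^{(k)}$$

where K is layer_count. The paper uses the uniform weighting α_k = 1/(K+1), which is also what torch_geometric's LightGCN applies when no alpha is given, so leaving layer_blend as None should yield that default.

As a construction sketch, the configuration can be built directly. The field names and defaults come from the class signature above; the specific values are purely illustrative, not tuned recommendations:

```python
from lenskit.graphs import LightGCNConfig

# Illustrative values only; field names and defaults are taken from the
# class signature above.
config = LightGCNConfig(
    embedding_size=64,    # latent dimension; powers of 2 reportedly work well
    layer_count=3,        # number of propagation layers (K)
    layer_blend=None,     # fall back to the uniform 1/(K+1) layer weighting
    batch_size=8192,
    learning_rate=0.01,
    epochs=20,
    regularization=0.01,
    loss="pairwise",      # BPR ranking loss; "logistic" selects link-prediction loss
)
```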
- class lenskit.graphs.LightGCNScorer(config=None, **kwargs)#
Bases: UsesTrainer, Component[ItemList, …]

Scorer using LightGCN [He et al., 2020].

- Stability: Experimental

- Parameters:
  - config (LightGCNConfig)
  - kwargs (Any)
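As a usage sketch, the scorer plugs into a standard LensKit pipeline. The helpers used below (from_interactions_df and topn_pipeline) and the tiny interaction frame are assumptions about the surrounding LensKit API, not part of this page:

```python
import pandas as pd

from lenskit.data import from_interactions_df  # assumed Dataset constructor
from lenskit.graphs import LightGCNConfig, LightGCNScorer
from lenskit.pipeline import topn_pipeline     # assumed top-N pipeline helper

# A tiny interaction frame, purely for illustration.
ratings = pd.DataFrame({
    "user_id": [1, 1, 2, 3, 3],
    "item_id": [10, 20, 20, 10, 30],
    "rating": [4.0, 3.5, 5.0, 2.0, 4.5],
})
data = from_interactions_df(ratings)

# Configure the scorer and wrap it in a standard top-N pipeline.
scorer = LightGCNScorer(LightGCNConfig(embedding_size=32, layer_count=2, epochs=2))
pipe = topn_pipeline(scorer)
pipe.train(data)  # fits the LightGCN model through the scorer's trainer
```

Once trained, the scorer maps a query and candidate items to scores, matching the Component[ItemList, …] base shown above; recommendations are then produced through LensKit's usual pipeline operations.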
- create_trainer(data, options)#
Create a model trainer to train this model.
- to(device)#
Move the model to a different device.
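Continuing the sketch above with a trained scorer, and assuming to() accepts a torch device or device string (a common PyTorch convention, not stated on this page):

```python
import torch

# Move the trained LightGCN model to a GPU when one is available; otherwise
# keep it on the CPU. The device string is an assumption based on torch usage.
device = "cuda" if torch.cuda.is_available() else "cpu"
scorer.to(device)
```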
Modules

LightGCN recommendation.