lenskit.flexmf#
Flexible PyTorch matrix factorization models for LensKit.
The components in this package implement several matrix factorization models for LensKit, and also serve as an example for practical PyTorch recommender training.
Stability: Internal
This API is at the internal or experimental stability level: it may change at any time, and breaking changes will not necessarily be described in the release notes. See Stability Levels for details. FlexMF is provided as a preview release, and may change in the coming months as we gain more experience with it.
- class lenskit.flexmf.FlexMFConfigBase(embedding_size=50, batch_size=8192, learning_rate=0.01, epochs=10, regularization=0.1, reg_method='L2')#
Bases: object
Common configuration for all FlexMF scoring components.
- Stability:
Experimental
- Parameters:
- reg_method: Literal['AdamW', 'L2'] | None = 'L2'#
The regularization method to use.
With the default L2 regularization, training will use sparse gradients and the torch.optim.SparseAdam optimizer.
None
Use no regularization.
"L2"
Use L2 regularization on the parameters used in each training batch. The strength is applied to the _mean_ norms in a batch, so that the regularization term scale is not dependent on the batch size.
"AdamW"
Use torch.optim.AdamW with the specified regularization strength. This configuration does not use sparse gradients and may train more slowly.
Note
Regularization values do not necessarily have the same range or meaning for the different regularization methods.
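For illustration, a minimal sketch of selecting the regularization method through the config; the keyword arguments here are assumed to map directly onto the config fields documented above:

    from lenskit.flexmf import FlexMFConfigBase

    # Default: L2 regularization with sparse gradients (torch.optim.SparseAdam).
    l2_config = FlexMFConfigBase(regularization=0.1, reg_method="L2")

    # AdamW weight decay: dense gradients, and possibly slower training.
    adamw_config = FlexMFConfigBase(regularization=0.01, reg_method="AdamW")

    # Disable regularization entirely.
    plain_config = FlexMFConfigBase(reg_method=None)

Per the note above, the regularization strengths for "L2" and "AdamW" are not directly comparable.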
- class lenskit.flexmf.FlexMFScorerBase(config=None, **kwargs)#
Bases: UsesTrainer, Component
Base class for the FlexMF scorers, providing common Torch support.
- Stability:
Experimental
- Parameters:
config (FlexMFConfigBase)
kwargs (Any)
- score_items(users, items)#
Score for users and items, after resolving them and limiting to known users and items.
- to(device)#
Move the model to a different device.
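A minimal sketch of device placement, assuming scorer is a trained instance of a FlexMFScorerBase subclass and that to() accepts the usual PyTorch device designators:

    import torch

    # Move the model's parameters to the GPU before scoring, if one is available.
    if torch.cuda.is_available():
        scorer.to("cuda")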
- class lenskit.flexmf.FlexMFExplicitConfig(embedding_size=50, batch_size=8192, learning_rate=0.01, epochs=10, regularization=0.1, reg_method='L2')#
Bases: FlexMFConfigBase
Configuration for FlexMFExplicitScorer.
- class lenskit.flexmf.FlexMFExplicitScorer(config=None, **kwargs)#
Bases: FlexMFScorerBase
Explicit-feedback rating prediction with FlexMF. This realizes a biased matrix factorization model (similar to lenskit.als.BiasedMF) trained with PyTorch.
- Stability:
Experimental
- Parameters:
config (FlexMFConfigBase)
kwargs (Any)
- create_trainer(data, options)#
Create a model trainer to train this model.
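A minimal sketch of configuring and constructing the explicit-feedback scorer; the config fields follow the signatures documented above:

    from lenskit.flexmf import FlexMFExplicitConfig, FlexMFExplicitScorer

    # A biased MF model with 64-dimensional embeddings, trained for 20 epochs.
    config = FlexMFExplicitConfig(embedding_size=64, epochs=20)
    scorer = FlexMFExplicitScorer(config)

The scorer can then be trained and used like any other LensKit scoring component.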
- class lenskit.flexmf.FlexMFImplicitConfig(embedding_size=50, batch_size=8192, learning_rate=0.01, epochs=10, regularization=0.1, reg_method='L2', loss='logistic', negative_strategy=None, negative_count=1, positive_weight=1.0, user_bias=None, item_bias=True)#
Bases: FlexMFConfigBase
Configuration for FlexMFImplicitScorer. It inherits base model options from FlexMFConfigBase.
- Stability:
Experimental
- Parameters:
- negative_count: Annotated[int, Gt(gt=0)] = 1#
The number of negative items to sample for each positive item in the training data. With BPR loss, the positive item is compared to each negative item; with logistic loss, the positive item is treated once per learning round, so this setting effectively makes the model learn on _n_ negatives per positive, rather than giving positive and negative examples equal weight.
- negative_strategy: Literal['uniform', 'popular', 'misranked'] | None = None#
The negative sampling strategy. The default is "misranked" for WARP loss and "uniform" for other losses.
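A minimal sketch of combining the negative-sampling options, using only the fields documented above:

    from lenskit.flexmf import FlexMFImplicitConfig

    # Train on 5 uniformly-sampled negatives per positive with logistic loss.
    config = FlexMFImplicitConfig(
        loss="logistic",
        negative_strategy="uniform",
        negative_count=5,
    )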
- class lenskit.flexmf.FlexMFImplicitScorer(config=None, **kwargs)#
Bases: FlexMFScorerBase
Implicit-feedback rating prediction with FlexMF. This is capable of realizing multiple models, including:
- BPR-MF (Bayesian personalized ranking) [RFGSchmidtThieme09] (with "pairwise" loss)
- Logistic matrix factorization [Joh14] (with "logistic" loss)
All use configurable negative sampling, including the sampling approach from WARP.
- Stability:
Experimental
- Parameters:
config (FlexMFImplicitConfig)
kwargs (Any)
- create_trainer(data, options)#
Create a model trainer to train this model.
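A minimal sketch of realizing the two models listed above with this one class; the loss values follow the class description, and the config is assumed to be passed to the constructor as documented:

    from lenskit.flexmf import FlexMFImplicitConfig, FlexMFImplicitScorer

    # BPR-MF: pairwise (BPR) loss.
    bpr = FlexMFImplicitScorer(FlexMFImplicitConfig(loss="pairwise"))

    # Logistic matrix factorization.
    logmf = FlexMFImplicitScorer(FlexMFImplicitConfig(loss="logistic"))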