QMF - a matrix factorization library

QMF is a fast and scalable C++ library for implicit-feedback matrix factorization models. The current implementation supports two main algorithms:

  • Weighted ALS [1]. This model optimizes a weighted squared loss, and thus allows you to specify different weights on each positive example. The algorithm is based on alternating minimization on user and item factors matrices. QMF uses efficient parallelization to perform these minimizations.
  • BPR [2]. This model (approximately) optimizes average per-user AUC using stochastic gradient descent (SGD) on randomly sampled (user, positive item, negative item) triplets. Asynchronous, parallel Hogwild! [3] updates are supported in QMF to achieve near-linear speedup in the number of processors (when the dataset is sparse enough).
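For intuition, a single BPR SGD update on one sampled (user, positive item, negative item) triplet can be sketched as follows. This is illustrative Python with assumed variable names, not QMF code (QMF is C++); it ascends the log-sigmoid of the score difference with L2 regularization on the touched factors:

```python
import numpy as np

def bpr_sgd_step(U, V, u, i, j, lr=0.05, lam=0.01):
    """One SGD step on a (user u, positive item i, negative item j) triplet.

    Ascends log sigmoid(x_uij), where x_uij = <U[u], V[i] - V[j]>,
    with L2 regularization (lam) on the factors involved.
    """
    x_uij = U[u] @ (V[i] - V[j])
    g = 1.0 / (1.0 + np.exp(x_uij))  # d/dx log sigmoid(x) = sigmoid(-x)
    du = g * (V[i] - V[j]) - lam * U[u]
    di = g * U[u] - lam * V[i]
    dj = -g * U[u] - lam * V[j]
    U[u] += lr * du
    V[i] += lr * di
    V[j] += lr * dj

rng = np.random.default_rng(0)
U = 0.01 * rng.standard_normal((4, 8))  # user factors
V = 0.01 * rng.standard_normal((6, 8))  # item factors
for _ in range(200):
    bpr_sgd_step(U, V, u=0, i=1, j=2)
# after repeated updates, the positive item outscores the negative one
assert U[0] @ V[1] > U[0] @ V[2]
```

Hogwild! simply runs many such updates in parallel threads without locking; on sparse data, concurrent updates rarely touch the same rows, which is why the speedup is near-linear.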

For evaluation, QMF supports various ranking-based metrics that are computed per-user on test data, in addition to training or test objective values.

For more information, see our blog post about QMF.

Building QMF

QMF requires gcc 5.0+ (it uses the C++14 standard) and CMake 2.8+. It also depends on the glog, gflags, and LAPACK libraries.


To install the library dependencies:

```
sudo apt-get install libgoogle-glog-dev libgflags-dev liblapack-dev
```

To build the binaries:

```
cmake .
make
```

To run tests:

```
make test
```

Output binaries will be placed under the build output directory.

Here's a basic example of usage:

```
# to train a WALS model
./wals \
    --train_dataset=<train_dataset> \
    --test_dataset=<test_dataset> \
    --user_factors=<user_factors_file> \
    --item_factors=<item_factors_file> \
    --regularization_lambda=0.05 \
    --confidence_weight=40 \
    --nepochs=10 \
    --nfactors=30 \
    --nthreads=4

# to train a BPR model
./bpr \
    --train_dataset=<train_dataset> \
    --test_dataset=<test_dataset> \
    --user_factors=<user_factors_file> \
    --item_factors=<item_factors_file> \
    --nepochs=10 \
    --nfactors=30 \
    --num_hogwild_threads=4 \
    --nthreads=4
```

The input dataset files should adhere to the following format:

```
<user_id> <item_id> <weight>
...
```

where `<weight>` is always 1 in BPR, but can be any integer in WALS (`r_ui` in the paper [1]).
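As an illustration of this format, the following Python sketch (not part of QMF, which reads these files in C++) parses such a file into (user, item, weight) triples:

```python
def parse_dataset(lines):
    """Parse whitespace-separated '<user_id> <item_id> <weight>' lines."""
    triples = []
    for line in lines:
        if not line.strip():
            continue  # skip blank lines
        user, item, weight = line.split()
        triples.append((int(user), int(item), float(weight)))
    return triples

sample = ["1 17 40", "1 23 5", "2 17 1"]
print(parse_dataset(sample))  # [(1, 17, 40.0), (1, 23, 5.0), (2, 17, 1.0)]
```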

The output files will be in the following format:

```
<user_id/item_id> [<bias>] <factor_0> <factor_1> ... <factor_{nfactors-1}>
```

where the bias term will only be present for BPR item factors when the `use_biases` option is specified.
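To use the learned factors downstream, a predicted score is just the dot product between a user's and an item's factors, plus the item bias when present. A minimal Python sketch, assuming the whitespace-separated layout above (the helper names are illustrative, not QMF API):

```python
import numpy as np

def load_factors(lines, with_bias=False):
    """Parse '<id> [<bias>] <factor_0> ...' lines into id -> (bias, factors)."""
    out = {}
    for line in lines:
        parts = line.split()
        id_, rest = int(parts[0]), [float(x) for x in parts[1:]]
        bias = rest[0] if with_bias else 0.0
        factors = np.array(rest[1:] if with_bias else rest)
        out[id_] = (bias, factors)
    return out

users = load_factors(["7 0.1 0.2 0.3"])
items = load_factors(["42 0.5 0.4 0.3 0.2"], with_bias=True)  # first value is the bias
_, u = users[7]
b, v = items[42]
score = b + u @ v  # 0.5 + (0.1*0.4 + 0.2*0.3 + 0.3*0.2)
print(round(score, 2))  # 0.66
```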

In order to compute test ranking metrics (averaged per-user), you can add the following parameters to either binary:

* `test_avg_metrics` specifies the metrics, which include `auc` (area under the ROC curve), `ap` (average precision), `p@k` (precision at k, e.g. `p@10` for precision at 10) and `r@k` (recall at k)
* `num_test_users` specifies the number of users to consider when computing test metrics (by default 0 = all users). Computing these metrics requires computing predicted scores for all items and test users, which can be slow as the number of users gets large. The users are picked uniformly at random with a fixed seed (which can be specified with `eval_seed`)
* `test_always` will compute these metrics after each epoch (by default they're computed only after the last epoch)
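As a concrete reference, per-user AUC is the fraction of (held-out positive, negative) item pairs that the model ranks correctly for that user. A minimal sketch (illustrative Python, not QMF's implementation):

```python
def user_auc(scores, positives):
    """AUC for one user: fraction of (pos, neg) pairs where pos outscores neg.

    scores: dict item -> predicted score; positives: set of held-out test items.
    Ties count as half a correct pair.
    """
    pos = [s for i, s in scores.items() if i in positives]
    neg = [s for i, s in scores.items() if i not in positives]
    if not pos or not neg:
        return float("nan")  # AUC undefined without both classes
    wins = sum(p > n for p in pos for n in neg)
    ties = sum(p == n for p in pos for n in neg)
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

scores = {1: 0.9, 2: 0.4, 3: 0.7, 4: 0.1}
print(user_auc(scores, positives={1, 3}))  # 1.0: both positives outscore both negatives
```

This per-user quantity is what BPR (approximately) optimizes in expectation; QMF reports the average over test users.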

In the case of BPR, a set of (user, positive item, negative item) triplets is sampled during initialization for both training and test sets (with a fixed seed, which can also be specified with `eval_seed`), and is used to compute an estimate of the loss after each epoch. This has no effect on training or on the computation of ranking metrics.

Options for WALS:

* `nepochs` (default 10): number of iterations of alternating least squares
* `nfactors` (default 30): dimensionality of the learned user and item factors
* `regularization_lambda`: regularization coefficient
* `confidence_weight`: weight multiplier for positive items (alpha in the paper [1])
* `init_distribution_bound` (default 0.01): bound (in absolute value) on weight initialization (with the default, weights are initialized uniformly between -0.01 and 0.01)
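Concretely, each half-iteration of weighted ALS solves an independent ridge-regression problem per user (and symmetrically per item), following the formulation of [1]: positives have confidence 1 + confidence_weight and target 1, all other items have confidence 1 and target 0. A sketch of a single user update (illustrative Python; QMF's implementation is parallel C++):

```python
import numpy as np

def wals_user_update(Y, pos_items, weights, lam):
    """Solve for one user's factors x_u in weighted ALS [1].

    Minimizes sum_i c_ui * (p_ui - x_u . y_i)^2 + lam * ||x_u||^2,
    with c_ui = 1 + w for positives (c_ui = 1 elsewhere),
    p_ui = 1 for positives (0 elsewhere).
    """
    k = Y.shape[1]
    A = Y.T @ Y + lam * np.eye(k)      # contribution of c_ui = 1 over all items
    b = np.zeros(k)
    for i, w in zip(pos_items, weights):
        A += w * np.outer(Y[i], Y[i])  # extra (c_ui - 1) = w on positives
        b += (1.0 + w) * Y[i]          # c_ui * p_ui with p_ui = 1
    return np.linalg.solve(A, b)

rng = np.random.default_rng(0)
Y = rng.standard_normal((5, 3))        # item factors (5 items, 3 dims)
x = wals_user_update(Y, pos_items=[0, 2], weights=[40.0, 40.0], lam=0.05)
scores = Y @ x
# with heavy confidence weight, positives (items 0 and 2) score near 1
```

Because every user's solve is independent (and likewise for items), QMF parallelizes these updates across threads within each half-iteration.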

Options for BPR:

* `nepochs` (default 10): number of iterations of SGD
* `nfactors` (default 30): dimensionality of the learned user and item factors
* `use_biases` (default false): whether to use additive item biases
* `user_lambda`: regularization coefficient on user factors
* `item_lambda`: regularization coefficient on item factors
* `bias_lambda`: regularization coefficient on biases
* `init_learning_rate`: initial learning rate
* `decay_rate` (default 0.9): multiplicative decay applied to the learning rate after each epoch
* `init_distribution_bound` (default 0.01): bound (in absolute value) on weight initialization (with the default, weights are initialized uniformly between -0.01 and 0.01)
* `num_negative_samples` (default 3): number of random negatives sampled for each positive item
* `num_hogwild_threads` (default 1): number of parallel hogwild threads to use for SGD (in contrast, `nthreads` determines parallelism for deterministic operations, e.g. for evaluation)
* `eval_num_neg` (default 3): number of random negatives per positive used to generate the fixed evaluation sets mentioned above (used for computing train/test loss; does not affect training or ranking metrics)

For more details on the command-line options, see the flag definitions in the source file for each binary.


Credits

This library was built at Quora by Denis Yarats and Alberto Bietti.


License

QMF is released under the Apache 2.0 License.


References

[1] Hu, Koren and Volinsky. Collaborative Filtering for Implicit Feedback Datasets. In ICDM 2008.

[2] Rendle, Freudenthaler, Gantner and Schmidt-Thieme. BPR: Bayesian Personalized Ranking from Implicit Feedback. In UAI 2009.

[3] Niu, Recht, Ré and Wright. Hogwild!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent. In NIPS 2011.
