MLflow: A Machine Learning Lifecycle Platform
=============================================

MLflow is a platform to streamline machine learning development, including tracking experiments, packaging code into reproducible runs, and sharing and deploying models. MLflow offers a set of lightweight APIs that can be used with any existing machine learning application or library (TensorFlow, PyTorch, XGBoost, etc.), wherever you currently run ML code (e.g. in notebooks, standalone applications, or the cloud). MLflow's current components are:

* MLflow Tracking: An API to log parameters, code, and results in machine learning experiments and compare them using an interactive UI.
* MLflow Projects: A code packaging format for reproducible runs using Conda and Docker, so you can share your ML code with others.
* MLflow Models: A model packaging format and tools that let you easily deploy the same model (from any ML library) to batch and real-time scoring on platforms such as Docker, Apache Spark, Azure ML and AWS SageMaker.
* MLflow Model Registry: A centralized model store, set of APIs, and UI to collaboratively manage the full lifecycle of MLflow Models.



Installing
----------

Install MLflow from PyPI via::

    pip install mlflow

MLflow requires ``conda`` to be on the ``PATH`` for the projects feature.

Nightly snapshots of MLflow master are also available.


Install a lower-dependency subset of MLflow from PyPI via::

    pip install mlflow-skinny

Extra dependencies can be added per desired scenario. For example, ``pip install mlflow-skinny pandas numpy`` allows for ``mlflow.pyfunc.log_model`` support.


Documentation
-------------

Official documentation for MLflow can be found at https://mlflow.org/docs/latest/index.html.

Roadmap
-------

The current MLflow Roadmap is available on GitHub. We are seeking contributions to all of our roadmap items with the ``help wanted`` label. Please see the Contributing section for more information.


Community
---------

For help or questions about MLflow usage (e.g. "how do I do X?"), see the docs or Stack Overflow.

To report a bug, file a documentation issue, or submit a feature request, please open a GitHub issue.

For release announcements and other discussions, please subscribe to our mailing list ([email protected]) or join us on Slack.

Running a Sample App With the Tracking API
------------------------------------------

The programs in ``examples`` use the MLflow Tracking API. For instance, run::

    python examples/quickstart/

This program uses the MLflow Tracking API, which logs tracking data in ``./mlruns``. This can then be viewed with the Tracking UI.

Launching the Tracking UI
-------------------------

The MLflow Tracking UI will show runs logged in ``./mlruns``. Start it with::

    mlflow ui

Note: Running ``mlflow ui`` from within a clone of MLflow is not recommended - doing so will run the dev UI from source. We recommend running the UI from a different working directory, specifying a backend store via the ``--backend-store-uri`` option. Alternatively, see instructions for running the dev UI in the contributor guide.

Running a Project from a URI
----------------------------

The ``mlflow run`` command lets you run a project packaged with an MLproject file from a local path or a Git URI::

    mlflow run examples/sklearn_elasticnet_wine -P alpha=0.4

    mlflow run -P alpha=0.4

See ``examples/sklearn_elasticnet_wine`` for a sample project with an MLproject file.
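For illustration, an MLproject file is a small YAML file at the project root that names the project, its environment, and its entry points. A minimal sketch follows; the entry-point parameter mirrors the ``alpha`` flag used above, but the file names and command are assumptions, not the actual contents of the wine example.

```yaml
name: sklearn_elasticnet_wine

# Conda environment used for reproducible runs (assumed filename).
conda_env: conda.yaml

entry_points:
  main:
    parameters:
      alpha: {type: float, default: 0.5}
    # {alpha} is substituted from -P alpha=... at run time.
    command: "python train.py --alpha {alpha}"
```

``mlflow run`` resolves ``-P`` flags against the ``parameters`` block and executes the entry point's ``command`` inside the declared environment.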

Saving and Serving Models
-------------------------

To illustrate managing models, the ``mlflow.sklearn`` package can log scikit-learn models as MLflow artifacts and then load them again for serving. There is an example training application in ``examples/sklearn_logistic_regression`` that you can run as follows::

    $ python examples/sklearn_logistic_regression/
    Score: 0.666
    Model saved in run <run-id>

    $ mlflow models serve --model-uri runs:/<run-id>/model

    $ curl -d '{"columns":[0],"index":[0,1],"data":[[1],[-1]]}' -H 'Content-Type: application/json' localhost:5000/invocations
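The same scoring request can be made from Python. The sketch below builds the identical JSON payload with only the standard library; it assumes a model server is already listening on ``localhost:5000``, so the actual POST is left commented out.

```python
import json
import urllib.request

# Pandas-split-oriented payload matching the curl example above:
# two single-column rows, with values 1 and -1.
payload = {"columns": [0], "index": [0, 1], "data": [[1], [-1]]}
body = json.dumps(payload).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:5000/invocations",
    data=body,
    headers={"Content-Type": "application/json"},
)

# With a server running (mlflow models serve ...), send the request:
# print(urllib.request.urlopen(req).read())
```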


Contributing
------------

We happily welcome contributions to MLflow. We are also seeking contributions to items on the MLflow Roadmap. Please see our contribution guide to learn more about contributing to MLflow.
