320 Stars 37 Forks MIT License 31 Commits 0 Opened issues



 Made With ML

Applied ML Β· MLOps Β· Production
Join 20K+ developers in learning how to responsibly deliver value with ML.


If you need to refresh yourself on ML algorithms, check out our ML Foundations repository (🔥  among the top ML repositories on GitHub).

πŸ“¦  Product πŸ”’  Data πŸ“ˆ  Modeling
Objective Annotation Baselines
Solution Exploratory data analysis Experiment tracking
Evaluation Splitting Optimization
Iteration Preprocessing
πŸ“  Scripting (cont.) πŸ“¦  Application βœ…  Testing
Organization Styling CLI Code
Packaging Makefile API Data
Documentation Logging Models
♻️  Reproducability πŸš€  Production (cont.)
Git Dashboard Feature stores
Pre-commit CI/CD Workflows
Versioning Monitoring

πŸ“†  New lessons every month!
Subscribe for our monthly updates on new content.

Directory structure

β”œβ”€β”€        - FastAPI app
└──        - CLI app
β”œβ”€β”€    - API model schemas
β”œβ”€β”€     - configuration setup
β”œβ”€β”€       - data processing components
β”œβ”€β”€       - evaluation components
β”œβ”€β”€       - training/optimization pipelines
β”œβ”€β”€     - model architectures
β”œβ”€β”€    - inference components
β”œβ”€β”€      - training components
└──      - supplementary utilities

Documentation for this application can be found here.


Use existing model

  1. Set up environment.

    export venv_name="venv"
    make venv name=${venv_name} env="dev"
    source ${venv_name}/bin/activate
  2. Pull latest model.

    dvc pull
  3. Run the application.

    make app env="dev"

    You can interact with the API directly or explore it via the generated documentation.
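Once the API is up, a request can be assembled in plain Python. This is a minimal sketch: the `/predict` endpoint path, port 5000, and the `{"texts": [{"text": ...}]}` payload shape are assumptions for illustration, not confirmed by this README.

```python
import json
import urllib.request

def build_predict_request(texts, url="http://localhost:5000/predict"):
    """Build a POST request for the (assumed) prediction endpoint."""
    payload = {"texts": [{"text": t} for t in texts]}  # hypothetical schema
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_predict_request(["Transfer learning with BERT"])
# urllib.request.urlopen(req)  # uncomment with the API running (`make app env="dev"`)
```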

Update model (CI/CD)

Coming soon, after the CI/CD lesson: the entire application will be retrained and deployed when we push new data (or manually trigger reoptimization/training). The deployed model, with performance comparisons to previously deployed versions, will be ready on a PR to push to the main branch.

Update model (manual)

  1. Set up the development environment.

    export venv_name="venv"
    make venv name=${venv_name} env="dev"
    source ${venv_name}/bin/activate
  2. Pull versioned data and model artifacts.

    dvc pull
  3. Optimize using the specified parameter distributions. This also writes the best model's params to config/params.json.

    tagifai optimize \
    --params-fp config/params.json \
    --study-name optimization \
    --num-trials 100

    We'll cover how to train using compute instances on the cloud from Amazon Web Services (AWS) or Google Cloud Platform (GCP) in later lessons. In the meantime, if you don't have access to GPUs, check out the optimize.ipynb notebook for how to train on Colab and transfer the results to your local machine. We essentially run optimization, then train the best model and download its artifacts.

  4. Train a model (and save all its artifacts) using params from config/params.json and publish metrics to model/performance.json. You can view the entire run's details locally or via the API.

    tagifai train-model \
    --params-fp config/params.json \
    --model-dir model \
    --experiment-name best \
    --run-name model
  5. Predict tags for an input sentence. It'll use the best saved model by default, but you can also specify a run to choose a specific model.

    tagifai predict-tags --text "Transfer learning with BERT"  # test with CLI app
    make app env="dev"  # run API and test as well
  6. View improvements. Once you're done training the best model with the current data version, best hyperparameters, etc., we can view the performance differences.

    tagifai diff --commit-a workspace --commit-b HEAD
  7. Push versioned data and model artifacts.

    make dvc
  8. Commit to git. This will clean and update versioned assets (data, experiments), run tests, apply styling, etc.

    git add .
    git commit -m ""
    git push origin main
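The diff in step 6 boils down to comparing metric files between two versions. A self-contained sketch of that idea, assuming each run stores metrics as a flat JSON-style dict like model/performance.json (the metric names and numbers below are made up for illustration):

```python
import json

def diff_performance(perf_a, perf_b):
    """Return metric deltas (b - a) for metrics present in both runs."""
    return {k: round(perf_b[k] - perf_a[k], 4) for k in perf_a if k in perf_b}

# Hypothetical metrics for two runs (not real project numbers).
commit_a = {"precision": 0.82, "recall": 0.75, "f1": 0.78}
commit_b = {"precision": 0.85, "recall": 0.79, "f1": 0.82}

improvements = diff_performance(commit_a, commit_b)
print(json.dumps(improvements, indent=2))
```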



Docker

make docker  # docker build -t tagifai:latest -f Dockerfile .
             # docker run -p 5000:5000 --name tagifai tagifai:latest


API

make app  # uvicorn app.api:app --host --port 5000 --reload --reload-dir tagifai --reload-dir app
make app-prod  # gunicorn -c config/ -k uvicorn.workers.UvicornWorker app.api:app
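make app-prod points gunicorn at a settings module under config/. A sketch of what such a module could contain; the values here are illustrative assumptions, not the project's actual configuration:

```python
# Gunicorn settings module, loaded via `gunicorn -c <path> ...` (assumed contents).
import multiprocessing

bind = "0.0.0.0:5000"  # host:port to serve on
workers = multiprocessing.cpu_count() // 2 + 1  # modest worker count
worker_class = "uvicorn.workers.UvicornWorker"  # matches the -k flag above
timeout = 60  # seconds before an unresponsive worker is restarted
```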

Streamlit dashboard

make streamlit  # streamlit run streamlit/


MLflow dashboard

make mlflow  # mlflow server -h -p 5000 --backend-store-uri stores/model/


Documentation

make docs  # python -m mkdocs serve


Tests

make great-expectations  # great_expectations checkpoint run [projects, tags]
make test  # pytest --cov tagifai --cov app --cov-report html
make test-non-training  # pytest -m "not training"
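The -m "not training" selection works through pytest markers. A sketch of how a slow training test might be tagged so the fast suite can skip it (the marker name comes from the command above; both test bodies are hypothetical):

```python
import pytest

@pytest.mark.training  # deselected by `pytest -m "not training"`
def test_model_overfits_single_batch():
    # Hypothetical check: loss should approach zero on a single batch.
    losses = [2.3, 1.1, 0.4, 0.05]  # stand-in for a real training loop
    assert losses[-1] < 0.1

def test_tokenizer_roundtrip():
    # No `training` marker, so it runs in the fast suite.
    assert "bert".upper() == "BERT"
```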

Start Jupyterlab

python -m ipykernel install --user --name=tagifai
jupyter labextension install @jupyter-widgets/jupyterlab-manager
jupyter labextension install @jupyterlab/toc
jupyter lab

You can also run all notebooks on Google Colab.


Why is this free?

While this content is for everyone, it's especially targeted towards people who don't have as much opportunity to learn. I firmly believe that creativity and intelligence are randomly distributed but opportunity is siloed. I want to enable more people to create and contribute to innovation.

Who is the author?

  • I've deployed large-scale ML systems at Apple, as well as smaller systems with constraints at startups, and I want to share the common principles I've learned along the way.
  • I created Made With ML so that the community can explore, learn, and build ML, and along the way I learned how to build it into an end-to-end product that's currently used by over 20K monthly active users.
  • Connect with me on Twitter and LinkedIn.

To cite this course, please use:

    @misc{madewithml,
        title  = "Applied ML - Made With ML",
        author = "Goku Mohandas",
        url    = "",
        year   = "2021",
    }
