MLogger: a Machine Learning logger

MLogger is currently in alpha; the API might still undergo minor changes.


To install the package, run:

pip install mlogger

Why Use MLogger?

These are the strengths of mlogger that make it a useful tool for logging machine learning experiments.
  • Readable code that is easy to add to existing projects:
    acc = mlogger.metric.Average()
    acc.update(96)
    print(acc.value)  # 96.0
    acc.log()  # internally stores value of 96.0 with an automatic time-stamp
    acc.reset()  # reset average value
  • Flexible use of metrics with containers, easy to save and re-load:
    ```python
    xp = mlogger.Container()
    xp.train = mlogger.Container()
    xp.train.accuracy = mlogger.metric.Average()
    xp.total_timer = mlogger.metric.Timer()

    xp.total_timer.reset()  # start timer
    xp.train.accuracy.update(97)
    xp.total_timer.update()  # say 0.0001 second has elapsed since the timer started, current value is 0.0001
    xp.save_to('saved_state.json')

    new_xp = mlogger.load_container('saved_state.json')
    print(new_xp.train.accuracy.value)  # 97.0
    print(new_xp.total_timer.value)  # 0.0001
    ```

  • Improve your user experience with visdom:

    • Ease of use:
      plotter = mlogger.VisdomPlotter({'env': 'my_experiment', 'server': 'http://localhost', 'port': 8097})
      acc = mlogger.metric.Average(plotter=plotter, plot_title="Accuracy")
      acc.update(96)
      print(acc.value)  # 96.0
      acc.log()  # automatically sends 96.0 to the visdom server, on the window titled 'Accuracy'
    • Robustness: if mlogger fails to send data to the visdom server (due to network instability, for instance), it automatically caches the data and tries to send it together with the next request.
    • Performance: you can manually choose when to update the plots. This makes it possible to batch the data being sent, and yields considerable speedups when logging thousands of points per second or more.
  • Save all output printed in the console to a text file:
    ```python
    with mlogger.stdout_to('printed_stuff.txt'):
        ...  # code printing stuff here
    ```

  • Automatically save information about the date, time, current directory, machine name, and version control status of the code:

    cfg = mlogger.Config(get_general_info=True, get_git_info=True)
    print(cfg.date_and_time, cfg.cwd, cfg.git_hash, cfg.git_diff)


The following example shows some functionalities of the package (the full example code is available in the repository).

```python
import mlogger
import numpy as np

# ... code to generate fake data ...

# some hyper-parameters of the experiment
use_visdom = True
lr = 0.01
n_epochs = 10
```

Prepare logging

```python
# log the hyper-parameters of the experiment
if use_visdom:
    plotter = mlogger.VisdomPlotter({'env': 'my_experiment', 'server': 'http://localhost', 'port': 8097},
                                    manual_update=True)
else:
    plotter = None

xp = mlogger.Container()

xp.config = mlogger.Config(plotter=plotter)
xp.config.update(lr=lr, n_epochs=n_epochs)

xp.epoch = mlogger.metric.Simple()

xp.train = mlogger.Container()
xp.train.acc1 = mlogger.metric.Average(plotter=plotter, plot_title="Accuracy@1", plot_legend="training")
xp.train.acck = mlogger.metric.Average(plotter=plotter, plot_title="Accuracy@k", plot_legend="training")
xp.train.loss = mlogger.metric.Average(plotter=plotter, plot_title="Objective")
xp.train.timer = mlogger.metric.Timer(plotter=plotter, plot_title="Time", plot_legend="training")

xp.val = mlogger.Container()
xp.val.acc1 = mlogger.metric.Average(plotter=plotter, plot_title="Accuracy@1", plot_legend="validation")
xp.val.acck = mlogger.metric.Average(plotter=plotter, plot_title="Accuracy@k", plot_legend="validation")
xp.val.timer = mlogger.metric.Timer(plotter=plotter, plot_title="Time", plot_legend="validation")

xp.val_best = mlogger.Container()
xp.val_best.acc1 = mlogger.metric.Maximum(plotter=plotter, plot_title="Accuracy@1", plot_legend="validation-best")
xp.val_best.acck = mlogger.metric.Maximum(plotter=plotter, plot_title="Accuracy@k", plot_legend="validation-best")
```

```python
for epoch in range(n_epochs):
    # train model
    for metric in xp.train.metrics():
        metric.reset()
    for (x, y) in training_data():
        loss, acc1, acck = oracle(x, y)
        # accumulate metrics (average over mini-batches)
        batch_size = len(x)
        xp.train.loss.update(loss, weighting=batch_size)
        xp.train.acc1.update(acc1, weighting=batch_size)
        xp.train.acck.update(acck, weighting=batch_size)
    xp.train.timer.update()
    for metric in xp.train.metrics():
        metric.log()

    # reset metrics in container xp.val
    # (does not include xp.val_best.acc1 and xp.val_best.acck, which we do not want to reset)
    for metric in xp.val.metrics():
        metric.reset()

    # update values on validation set
    for (x, y) in validation_data():
        _, acc1, acck = oracle(x, y)
        batch_size = len(x)
        xp.val.acc1.update(acc1, weighting=batch_size)
        xp.val.acck.update(acck, weighting=batch_size)
    xp.val.timer.update()
    # log values on validation set
    for metric in xp.val.metrics():
        metric.log()

    # update best values on validation set
    xp.val_best.acc1.update(xp.val.acc1.value)
    xp.val_best.acck.update(xp.val.acck.value)
    # log best values on validation set
    for metric in xp.val_best.metrics():
        metric.log()

print("=" * 50)
print("Best Performance On Validation Data:")
print("-" * 50)
print("Accuracy@1: \t {0:.2f}%".format(xp.val_best.acc1.value))
print("Accuracy@k: \t {0:.2f}%".format(xp.val_best.acck.value))
```



Save & load experiment

```python
import os

xp.train.loss.reset()
xp.train.loss.update(1)
print('Train loss value before saving state: {}'.format(xp.train.loss.value))

xp.save_to('state.json')

new_plotter = mlogger.VisdomPlotter(visdom_opts={'env': 'my_experiment', 'server': 'http://localhost', 'port': 8097},
                                    manual_update=True)

new_xp = mlogger.load_container('state.json')
new_xp.plot_on(new_plotter)
new_plotter.update_plots()

print('Current train loss value: {}'.format(new_xp.train.loss.value))
new_xp.train.loss.update(2)
print('Updated train loss value: {}'.format(new_xp.train.loss.value))

# remove the file
os.remove('state.json')
```

This generates (twice) the following plots on the visdom server:


Full credit to the authors of tnt for the structure of the metrics.
