Importance Sampling
===================

This python package provides a library that accelerates the training of
arbitrary neural networks created with `Keras <https://keras.io>`__ using
importance sampling.

.. code:: python

    # Keras imports
    from importance_sampling.training import ImportanceTraining

    # load_data() and create_keras_model() are placeholders for your own
    # data loading and model building code
    x_train, y_train, x_val, y_val = load_data()
    model = create_keras_model()
    model.compile(
        optimizer="adam",
        loss="categorical_crossentropy",
        metrics=["accuracy"]
    )

    # Wrap the model; fit() keeps the familiar Keras signature
    ImportanceTraining(model).fit(
        x_train, y_train,
        batch_size=32, epochs=10,
        verbose=1,
        validation_data=(x_val, y_val)
    )

    model.evaluate(x_val, y_val)

Importance sampling for Deep Learning is an active research field and this library is under continuous development, so your mileage may vary.
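
For intuition: importance sampling draws each training example with probability proportional to an importance score (such as its current loss or gradient norm) and multiplies its gradient by the inverse probability 1/(N·p_i), so that the overall gradient estimate stays unbiased. The numpy sketch below illustrates just this resampling step; it is illustrative only and is not the library's internal code.

.. code:: python

    import numpy as np

    def importance_sample(scores, batch_size):
        """Draw a batch proportionally to per-sample scores and return
        the inverse-probability weights that keep SGD unbiased."""
        n = len(scores)
        p = scores / scores.sum()      # sampling distribution p_i
        idx = np.random.choice(n, batch_size, p=p)
        weights = 1.0 / (n * p[idx])   # importance weights 1 / (N p_i)
        return idx, weights

    # Toy example: current per-sample losses used as importance scores
    losses = np.array([0.1, 0.2, 0.05, 1.5, 0.3, 0.02, 0.9, 0.4])
    idx, weights = importance_sample(losses, batch_size=4)
    print(idx, weights)   # hard examples (large loss) are drawn more often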

Relevant Research
-----------------

**By us**

* Not All Samples Are Created Equal: Deep Learning with Importance Sampling
* Biased Importance Sampling for Deep Neural Network Training

**By others**

* Stochastic optimization with importance sampling for regularized loss minimization
* Variance reduction in SGD by distributed importance sampling
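
The central result these papers build on can be stated compactly. With sampling probabilities :math:`p_i` over the :math:`N` training points, the reweighted mini-batch gradient is unbiased for any valid :math:`p`, and its variance is minimized by sampling proportionally to the per-sample gradient norm:

.. math::

   \hat{g} = \frac{1}{B} \sum_{b=1}^{B} \frac{1}{N p_{i_b}}
             \nabla_\theta \mathcal{L}(x_{i_b}, y_{i_b}),
   \qquad
   p_i^{\ast} \propto \left\lVert \nabla_\theta \mathcal{L}(x_i, y_i) \right\rVert_2 .

Exact per-sample gradient norms are too expensive to compute at every step, which is why the papers above study cheaper surrogates such as the loss value or an upper bound on the gradient norm.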

Dependencies & Installation
---------------------------

Normally, if you already have a functional Keras installation, you just need
to ``pip install keras-importance-sampling``.

* ``Keras`` > 2
* A Keras backend among Tensorflow, Theano and CNTK
* ``blinker``
* ``numpy``
* ``matplotlib`` (optional, used by the plot scripts)
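
As a quick smoke test that the installation worked (this is just the import used throughout this README):

.. code:: python

    # Should print the class without raising ImportError
    from importance_sampling.training import ImportanceTraining
    print(ImportanceTraining)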


The module has a dedicated documentation site, but you can also read the
source code and the examples to get an idea of how the library should be
used and extended.


In the ``examples`` folder you can find some Keras examples that have been
edited to use importance sampling.
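
The edit in those examples is typically a single line: the training call moves from the bare model to the wrapper. Schematically (``x_train`` and ``y_train`` are whatever the original example already defines):

.. code:: python

    from importance_sampling.training import ImportanceTraining

    # Before (plain Keras):
    #   model.fit(x_train, y_train, batch_size=128, epochs=10)

    # After (importance sampling):
    ImportanceTraining(model).fit(x_train, y_train, batch_size=128, epochs=10)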

Code examples
-------------

In this section we will showcase part of the API that can be used to train neural networks with importance sampling.

.. code:: python

    # Import what is needed to build the Keras model
    from keras import backend as K
    from keras.layers import Dense, Activation, Flatten
    from keras.models import Sequential

    # Import a toy dataset and the importance training
    from importance_sampling.datasets import MNIST
    from importance_sampling.training import ImportanceTraining


    def create_nn():
        """Build a simple fully connected NN"""
        model = Sequential([
            Flatten(input_shape=(28, 28, 1)),
            Dense(40, activation="tanh"),
            Dense(40, activation="tanh"),
            Dense(10),
            Activation("softmax")  # Needs to be separate to automatically
                                   # get the preactivation outputs
        ])

        # Compile the model (any optimizer with a learning rate works;
        # plain SGD here, the learning rate is set explicitly below)
        model.compile(
            optimizer="sgd",
            loss="categorical_crossentropy",
            metrics=["accuracy"]
        )

        return model


    if __name__ == "__main__":
        # Load the data
        dataset = MNIST()
        x_train, y_train = dataset.train_data[:]
        x_test, y_test = dataset.test_data[:]

        # Create the NN and keep the initial weights
        model = create_nn()
        weights = model.get_weights()

        # Train with uniform sampling
        K.set_value(model.optimizer.lr, 0.01)
        model.fit(
            x_train, y_train,
            batch_size=64, epochs=10,
            validation_data=(x_test, y_test)
        )

        # Train with importance sampling, starting from the same
        # initial weights
        model.set_weights(weights)
        K.set_value(model.optimizer.lr, 0.01)
        ImportanceTraining(model).fit(
            x_train, y_train,
            batch_size=64, epochs=2,
            validation_data=(x_test, y_test)
        )
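
To make the mechanics concrete, here is a hand-rolled version of a single importance-sampled update built only from stock Keras calls: score every sample by its current loss, resample a batch, and pass the inverse-probability weights through ``sample_weight``. It assumes one-hot labels with a categorical cross-entropy loss, and it is deliberately naive; in particular, it is not how ``ImportanceTraining`` is implemented.

.. code:: python

    import numpy as np

    def manual_importance_step(model, x, y, batch_size=64):
        """One importance-sampled update with plain Keras primitives.

        Assumes one-hot labels and a categorical cross-entropy loss.
        """
        # Score every sample by its current cross-entropy loss
        probs = model.predict(x, batch_size=1024)
        losses = -np.log(np.clip((probs * y).sum(axis=1), 1e-8, None))

        # Sample proportionally to the losses and compute the
        # inverse-probability weights 1 / (N p_i)
        p = losses / losses.sum()
        idx = np.random.choice(len(x), batch_size, p=p)
        weights = 1.0 / (len(x) * p[idx])

        # The weights keep the gradient estimate unbiased
        model.train_on_batch(x[idx], y[idx], sample_weight=weights)

Rescoring the entire training set before every update would erase any speedup, which is why the papers above derive cheap importance estimates, e.g. an upper bound on the gradient norm computed from the last layer's preactivations (the reason the ``Activation`` is kept separate in ``create_nn`` above).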

Using the script
----------------

The following terminal commands train a small VGG-like network to ~0.65% error on MNIST (the numbers are from a CPU).

.. code::

    $ # Train a small cnn with mnist for 500 mini-batches using importance
    $ # sampling with bias to achieve ~ 0.65% error (on the CPU).
    $ time ./ \
    >   small_cnn \
    >   oracle-gnorm \
    >   model \
    >   predicted \
    >   mnist \
    >   /tmp/is \
    >   --hyperparams 'batch_size=i128;lr=f0.003;lr_reductions=I10000' \
    >   --train_for 500 --validate_every 500
    real    1m41.985s
    user    8m14.400s
    sys     0m35.900s

    $ # And with uniform sampling to achieve ~ 0.9% error.
    $ time ./ \
    >   small_cnn \
    >   oracle-loss \
    >   uniform \
    >   unweighted \
    >   mnist \
    >   /tmp/uniform \
    >   --hyperparams 'batch_size=i128;lr=f0.003;lr_reductions=I10000' \
    >   --train_for 3000 --validate_every 3000
    real    9m23.971s
    user    47m32.600s
    sys     3m4.188s
