Training Quantized Neural Networks

Introduction

Train your own Quantized Neural Networks (QNNs) - networks trained with quantized weights and activations - in Keras / Tensorflow. If you use this code, please cite: B. Moons et al., "Minimum Energy Quantized Neural Networks", Asilomar Conference on Signals, Systems and Computers, 2017. Take a look at our presentation or at the paper on arXiv.

This code is based on the Lasagne/Theano and the Keras/Tensorflow versions of BinaryNet.
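For intuition, the following is a minimal, hedged NumPy sketch of uniform fixed-point quantization to a given number of bits. It is not this repo's code; the actual rounding, clipping and gradient handling (e.g. a straight-through estimator during backpropagation) may differ in detail.

```python
import numpy as np

def quantize(x, bits):
    # Illustrative uniform quantization of values in [-1, 1] to a 'bits'-bit grid.
    # Assumption-laden sketch, not the quantization function used by this repo.
    x = np.clip(x, -1.0, 1.0)        # keep values in the representable range
    scale = 2.0 ** (bits - 1)        # number of quantization steps between 0 and 1
    return np.round(x * scale) / scale

# Example: quantize a random weight matrix to 4 bits (cf. wbits=4)
w = np.random.uniform(-1.0, 1.0, size=(3, 3)).astype(np.float32)
print(quantize(w, bits=4))
```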

Preliminaries

Running this code requires:
1. Tensorflow
2. Keras 2.0
3. pylearn2 + the correct PYLEARN2_DATA_PATH in ./personal_config/shell_source.sh
4. A GPU with recent versions of CUDA and CUDNN
5. Correct paths in ./personal_config/shell_source.sh

Make sure backend='tensorflow' and image_data_format='channels_last' are set in your ~/.keras/keras.json file.
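For reference, a ~/.keras/keras.json that satisfies this could look roughly as follows (floatx and epsilon shown here are the usual Keras defaults):

```json
{
    "backend": "tensorflow",
    "image_data_format": "channels_last",
    "floatx": "float32",
    "epsilon": 1e-07
}
```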

Training your own QNN

This repo includes toy examples for CIFAR-10 and MNIST. Training can be done by running the following:

./train.sh <config_file> -o <override parameters>

The -o flag overrides parameters in the specified config file.

The following parameters are crucial:

  • network_type: 'float', 'qnn', 'full-qnn', 'bnn' or 'full-bnn'
  • wbits, abits: the number of bits used for weights and activations
  • lr: the learning rate; 0.01 is typically a good starting point
  • dataset, dim, channels: variables that depend on the dataset used
  • nla, nlb, nlc: the number of layers in blocks A, B, C
  • nfa, nfb, nfc: the number of filters per layer in blocks A, B, C

Examples

  • This is how to train a 4-bit full qnn on CIFAR-10:

./train.sh config_CIFAR-10 -o lr=0.01 wbits=4 abits=4 network_type='full-qnn'

  • This is how to train a qnn with 4-bit weights and floating point activations on CIFAR-10:

./train.sh config_CIFAR-10 -o lr=0.01 wbits=4 network_type='qnn'

  • This is how to train a BinaryNet on CIFAR-10:

./train.sh config_CIFAR-10 -o lr=0.01 network_type='full-bnn'


The included networks have parametrized sizes and are split into three blocks (A, B, C), each with a configurable number of layers (nla, nlb, nlc) and a configurable number of filters per layer (nfa, nfb, nfc); see the sketch after the examples below.

  • This is how to train a small 2-bit network on MNIST:

./train.sh config_MNIST -o nla=1 nfa=64 nlb=1 nfb=64 nlc=1 nfc=64 wbits=2 abits=2 network_type='full-qnn'

  • This is how to train a large 8-bit network on CIFAR-10:

./train.sh config_CIFAR-10 -o nla=3 nfa=256 nlb=3 nfb=256 nlc=3 nfc=256 wbits=8 abits=8 network_type='full-qnn'
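As a rough illustration of what the nl/nf parameters control, here is a hedged, float-precision Keras sketch of the A-B-C block layout. The repo's actual models use quantized convolutions, batch normalization and quantized activations rather than plain Conv2D/ReLU, and the pooling and classifier details may differ, so treat this only as a sketch of the parametrization.

```python
from keras.models import Model
from keras.layers import Input, Conv2D, MaxPooling2D, Flatten, Dense

def build_blocks(nla=1, nfa=64, nlb=1, nfb=64, nlc=1, nfc=64,
                 input_shape=(32, 32, 3), num_classes=10):
    # Three blocks A, B, C: nl* conv layers of nf* filters each, with spatial
    # downsampling between blocks. Float stand-in for the quantized layers
    # used in the actual repo.
    inp = Input(shape=input_shape)
    x = inp
    for nl, nf in [(nla, nfa), (nlb, nfb), (nlc, nfc)]:
        for _ in range(nl):
            x = Conv2D(nf, (3, 3), padding='same', activation='relu')(x)
        x = MaxPooling2D(pool_size=(2, 2))(x)
    x = Flatten()(x)
    out = Dense(num_classes, activation='softmax')(x)
    return Model(inp, out)

# E.g. the "large" CIFAR-10 configuration from the example above:
model = build_blocks(nla=3, nfa=256, nlb=3, nfb=256, nlc=3, nfc=256)
model.summary()
```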
