Larq: An Open-Source Library for Training Binarized Neural Networks
Larq is an open-source deep learning library for training neural networks with extremely low precision weights and activations, such as Binarized Neural Networks (BNNs).
Existing deep neural networks typically use 32, 16, or 8 bits to encode each weight and activation, making them large, slow, and power-hungry. This prohibits many applications in resource-constrained environments. Larq is the first step towards solving this. It is designed to provide an easy-to-use, composable way to train BNNs (1 bit) and other types of Quantized Neural Networks (QNNs), and is based on the `tf.keras` interface. Note that efficient inference using a trained BNN requires the use of an optimized inference engine; we provide these for several platforms in Larq Compute Engine.
To build a QNN, Larq introduces the concepts of quantized layers and quantizers. A quantizer defines the way of transforming a full-precision input to a quantized output, as well as the pseudo-gradient method used for the backward pass. Each quantized layer requires an `input_quantizer` and a `kernel_quantizer` that describe the way of quantizing the incoming activations and the weights of the layer, respectively. If both `input_quantizer` and `kernel_quantizer` are `None`, the layer is equivalent to a full-precision layer.
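To make the quantizer concept concrete, here is a minimal sketch of a binarization quantizer with a Straight-Through Estimator pseudo-gradient, written in plain TensorFlow. The name `binary_sign` and the exact clipping behavior are illustrative assumptions, not Larq's internal implementation:

```python
import tensorflow as tf


# Illustrative sketch of a sign quantizer with a Straight-Through
# Estimator pseudo-gradient; not Larq's actual implementation.
@tf.custom_gradient
def binary_sign(x):
    def grad(dy):
        # Pseudo-gradient for the backward pass: act as the identity
        # where |x| <= 1 and block the gradient elsewhere.
        return dy * tf.cast(tf.abs(x) <= 1, dy.dtype)

    # Forward pass: map every value to -1 or +1 (zero maps to +1).
    return tf.where(x >= 0, tf.ones_like(x), -tf.ones_like(x)), grad
```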
You can define a simple binarized fully-connected Keras model using the Straight-Through Estimator as follows:
```python
import tensorflow as tf
import larq

model = tf.keras.models.Sequential(
    [
        tf.keras.layers.Flatten(),
        larq.layers.QuantDense(
            512, kernel_quantizer="ste_sign", kernel_constraint="weight_clip"
        ),
        larq.layers.QuantDense(
            10,
            input_quantizer="ste_sign",
            kernel_quantizer="ste_sign",
            kernel_constraint="weight_clip",
            activation="softmax",
        ),
    ]
)
```
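Because quantized layers plug into `tf.keras`, the model above can be trained with the standard Keras API. As a rough sketch (the choice of MNIST, optimizer, and epoch count here are illustrative assumptions, not part of the library):

```python
# Illustrative training setup; any classification dataset would do.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train / 255.0  # scale pixel values to [0, 1]
x_test = x_test / 255.0

model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)
```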
Check out our examples to see how to train a Binarized Neural Network in just a few lines of code.
Before installing Larq, please install TensorFlow:

```shell
pip install tensorflow  # or tensorflow-gpu
```
You can install Larq with Python's pip package manager:
```shell
pip install larq
```
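To check that the installation succeeded, you can import the package from the command line (this assumes the package exposes a `__version__` attribute, as most pip packages do):

```shell
python -c "import larq; print(larq.__version__)"
```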
Larq is being developed by a team of deep learning researchers and engineers at Plumerai to help accelerate both our own research and the general adoption of Binarized Neural Networks.