


caffe-googlenet-bn: a re-implementation of GoogLeNet with batch normalization



This model is a re-implementation of the Batch Normalization paper. The model was trained with a customized Caffe, but the modifications are minor, so you can also run it with the current official Caffe release, including cuDNN v4 and multi-GPU support.

The network definition and solver prototxt files are modified from

Notes:

  - Trained with random crop; no data augmentation is used other than random crop.
  - Weights are initialized with "xavier".
  - Trained with real-time shuffling, using a modified data_reader.cpp.
  - ~~The batch normalization layer is a modified version of; the modified BN layer supports batch normalization at inference time (see neuron_layers.hpp and bn_layer.cpp).~~
  - The official batch normalization layer is used, and its usage is adopted from
  - Use test_bn.cpp and predict_bn.cpp for inference.
  - A mini-batch of 64 is used on 2 GPUs, i.e. 32 per GPU.
  - Uses ILSVRC2015 labels instead of the '12 labels.
  - Images are resized to 256 x 256 with convert_imageset.cpp.
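The resize-then-random-crop step described in the notes can be sketched in numpy. The 224 x 224 crop size is an assumption here (GoogLeNet's customary input size; check the train prototxt for the actual value):

```python
import numpy as np

CROP = 224  # assumed GoogLeNet input size; verify against the train prototxt

def random_crop(img, crop=CROP, rng=None):
    """Randomly crop a crop x crop patch from an HWC image (e.g. 256 x 256 x 3)."""
    rng = rng or np.random.default_rng()
    h, w = img.shape[:2]
    top = rng.integers(0, h - crop + 1)
    left = rng.integers(0, w - crop + 1)
    return img[top:top + crop, left:left + crop]

# a stand-in for an image already resized to 256 x 256 by convert_imageset
img = np.zeros((256, 256, 3), dtype=np.uint8)
patch = random_crop(img)
print(patch.shape)  # (224, 224, 3)
```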

The uploaded caffemodel is the snapshot at iteration 1,200,000 (30 epochs), trained using solver_stepsize_6400.prototxt.

The uploaded model achieves a top-1 accuracy of 72.05% (27.95% error) and a top-5 accuracy of 90.87% (9.13% error) on the validation set, using a single center crop.
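Top-1/top-5 numbers like these count a sample as correct when the true label appears among the k highest-scoring classes. A minimal numpy sketch of that metric (not the repo's test_bn.cpp, which does this in Caffe):

```python
import numpy as np

def topk_accuracy(scores, labels, k=5):
    """Fraction of samples whose true label is among the k highest scores."""
    topk = np.argsort(scores, axis=1)[:, -k:]     # indices of the k largest scores
    hits = (topk == labels[:, None]).any(axis=1)  # per-sample hit/miss
    return float(hits.mean())

# toy example: two samples, three classes
scores = np.array([[0.1, 0.7, 0.2],
                   [0.5, 0.3, 0.2]])
labels = np.array([1, 2])
print(topk_accuracy(scores, labels, k=1))  # 0.5
```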

Thanks to John Lee for helping me train this model.

Tips for performance

  1. Real-time data shuffling is important.
  2. Data augmentation during training should improve accuracy.
  3. Changing OpenCV's interpolation method from the default (bilinear) to bicubic when converting images gives a minor improvement.
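The real-time shuffling in tip 1 is done in the repo by a modified data_reader.cpp; the idea can be sketched in Python as a generator that reshuffles the whole dataset at every epoch boundary, so batch composition differs between epochs:

```python
import numpy as np

def shuffled_batches(n_samples, batch_size, rng=None):
    """Yield batches of sample indices, reshuffling the dataset every epoch."""
    rng = rng or np.random.default_rng(0)
    while True:                                   # loop over epochs forever
        order = rng.permutation(n_samples)        # fresh shuffle per epoch
        for i in range(0, n_samples - batch_size + 1, batch_size):
            yield order[i:i + batch_size]

gen = shuffled_batches(n_samples=10, batch_size=4)
b = next(gen)
print(b.shape)  # (4,)
```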


To do:

  1. Data augmentation





This model is released for unrestricted use.
