A Complete and Simple Implementation of MobileNet-V2 in PyTorch
An implementation of Google's MobileNet-V2 in PyTorch. According to the authors, MobileNet-V2 improves the state-of-the-art performance of mobile models on multiple tasks and benchmarks. Its architecture is based on an inverted residual structure, where the input and output of the residual block are thin bottleneck layers, in contrast to traditional residual models, which use expanded representations in the input. MobileNetV2 uses lightweight depthwise convolutions to filter features in the intermediate expansion layer.
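To make the structure above concrete, here is a minimal sketch of an inverted residual block (1x1 expansion, 3x3 depthwise convolution, linear 1x1 projection). It uses current PyTorch APIs rather than the 0.3-era API this project targets, and class/parameter names are my own, not necessarily those used in this repository:

```python
import torch
import torch.nn as nn

class InvertedResidual(nn.Module):
    """Sketch of an inverted residual block:
    1x1 expand -> 3x3 depthwise -> 1x1 linear projection."""

    def __init__(self, in_ch, out_ch, stride, expand_ratio):
        super().__init__()
        hidden = in_ch * expand_ratio
        # the residual shortcut only applies when shapes match
        self.use_res = stride == 1 and in_ch == out_ch
        self.block = nn.Sequential(
            # pointwise expansion to a wider representation
            nn.Conv2d(in_ch, hidden, 1, bias=False),
            nn.BatchNorm2d(hidden),
            nn.ReLU6(inplace=True),
            # depthwise 3x3 convolution filters each channel independently
            nn.Conv2d(hidden, hidden, 3, stride, 1, groups=hidden, bias=False),
            nn.BatchNorm2d(hidden),
            nn.ReLU6(inplace=True),
            # linear bottleneck projection (no activation, hence "linear")
            nn.Conv2d(hidden, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch),
        )

    def forward(self, x):
        out = self.block(x)
        return x + out if self.use_res else out

x = torch.randn(1, 16, 32, 32)
block = InvertedResidual(16, 16, stride=1, expand_ratio=6)
print(block(x).shape)  # torch.Size([1, 16, 32, 32])
```

Note that the projection layer has no nonlinearity: the paper argues that applying ReLU in the thin bottleneck destroys information, hence the "linear bottleneck".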
Link to the original paper: Inverted Residuals and Linear Bottlenecks: Mobile Networks for Classification, Detection and Segmentation
This implementation was made to be an example of a common deep learning software architecture. It is simple and designed to be very modular, and all of the components needed for training and visualization are included.
This project uses Python 3.5.3 and PyTorch 0.3.
```
pytorch 0.3
numpy 1.13.1
tqdm 4.15.0
easydict 1.7
matplotlib 2.0.2
tensorboardX 1.0
```
```bash
pip install -r requirements.txt
```
```bash
python main.py config/.json
```
Due to limited computational power, I trained on the CIFAR-10 dataset as an example to verify correctness, and was able to achieve a top-1 test accuracy of 90.9%.
TensorBoard is integrated with the project using the tensorboardX library, which proved to be very useful as there is no official visualization library in PyTorch.
You can start it using:
```bash
tensorboard --logdir experiments//summaries
```
These are the learning curves for the CIFAR-10 experiment.
Measuring FLOPS on this architecture to compare it with other real-time architectures: PyTorch does not ship a profiler like TensorFlow's, so I will be working on measuring FLOPS on my own.
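One common way to do this is to register forward hooks on the convolution and linear layers and count multiply-adds from their shapes. This is only a rough sketch under that approach (it ignores BatchNorm, activations, and elementwise ops, and the function name is my own):

```python
import torch
import torch.nn as nn

def count_flops(model, input_size=(1, 3, 224, 224)):
    """Rough multiply-add count for Conv2d/Linear layers via forward hooks."""
    flops = []

    def conv_hook(module, inp, out):
        # each output element costs in_channels * kh * kw mult-adds (per group)
        kh, kw = module.kernel_size
        flops.append(out.numel() * module.in_channels * kh * kw // module.groups)

    def linear_hook(module, inp, out):
        flops.append(out.numel() * module.in_features)

    handles = []
    for m in model.modules():
        if isinstance(m, nn.Conv2d):
            handles.append(m.register_forward_hook(conv_hook))
        elif isinstance(m, nn.Linear):
            handles.append(m.register_forward_hook(linear_hook))

    with torch.no_grad():
        model(torch.randn(*input_size))
    for h in handles:
        h.remove()  # clean up so the hooks don't fire on later forward passes
    return sum(flops)
```

For example, a single 3x3 convolution from 3 to 8 channels on an 8x8 input with padding 1 counts 8*8*8 output elements times 3*3*3 mult-adds each.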
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.