MobileNetV3-Pytorch

by leaderj1001

Implementing Searching for MobileNetV3 paper using Pytorch


  • The current model is a very early version. I will modify it into a more general model as soon as possible.

Paper

  • Searching for MobileNetV3 paper
  • Authors: Andrew Howard(Google Research), Mark Sandler(Google Research), Grace Chu(Google Research), Liang-Chieh Chen(Google Research), Bo Chen(Google Research), Mingxing Tan(Google Brain), Weijun Wang(Google Research), Yukun Zhu(Google Research), Ruoming Pang(Google Brain), Vijay Vasudevan(Google Brain), Quoc V. Le(Google Brain), Hartwig Adam(Google Research)

Todo

  • Experiments on the ImageNet dataset still need to be run.
  • Code refactoring

MobileNetV3 Block

(Figure: MobileNetV3 block diagram)
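
The block combines an inverted residual (1x1 expansion, depthwise convolution, 1x1 projection) with an optional squeeze-and-excite stage and the h-swish activation, as described in the paper. The following is a minimal PyTorch sketch for illustration only; the class names, the SE reduction ratio of 4, and the example channel sizes are my assumptions and not this repository's exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class HSwish(nn.Module):
    # h-swish(x) = x * ReLU6(x + 3) / 6, the activation used by MobileNetV3.
    def forward(self, x):
        return x * F.relu6(x + 3.0) / 6.0


class SqueezeExcite(nn.Module):
    # Squeeze-and-excite: global average pool, two 1x1 convs, h-sigmoid gate.
    def __init__(self, channels, reduction=4):
        super(SqueezeExcite, self).__init__()
        self.fc1 = nn.Conv2d(channels, channels // reduction, kernel_size=1)
        self.fc2 = nn.Conv2d(channels // reduction, channels, kernel_size=1)

    def forward(self, x):
        s = F.adaptive_avg_pool2d(x, 1)
        s = F.relu(self.fc1(s))
        s = F.relu6(self.fc2(s) + 3.0) / 6.0  # h-sigmoid
        return x * s


class MobileNetV3Block(nn.Module):
    # Inverted residual: 1x1 expand -> depthwise conv -> (SE) -> 1x1 project,
    # with a skip connection when stride == 1 and in/out channels match.
    def __init__(self, in_ch, exp_ch, out_ch, kernel=3, stride=1, use_se=True, use_hs=True):
        super(MobileNetV3Block, self).__init__()
        self.use_residual = (stride == 1 and in_ch == out_ch)
        act1 = HSwish() if use_hs else nn.ReLU(inplace=True)
        act2 = HSwish() if use_hs else nn.ReLU(inplace=True)
        self.expand = nn.Sequential(
            nn.Conv2d(in_ch, exp_ch, kernel_size=1, bias=False),
            nn.BatchNorm2d(exp_ch),
            act1,
        )
        self.depthwise = nn.Sequential(
            nn.Conv2d(exp_ch, exp_ch, kernel, stride, padding=kernel // 2,
                      groups=exp_ch, bias=False),
            nn.BatchNorm2d(exp_ch),
            act2,
        )
        self.se = SqueezeExcite(exp_ch) if use_se else None
        self.project = nn.Sequential(
            nn.Conv2d(exp_ch, out_ch, kernel_size=1, bias=False),
            nn.BatchNorm2d(out_ch),
        )

    def forward(self, x):
        out = self.depthwise(self.expand(x))
        if self.se is not None:
            out = self.se(out)
        out = self.project(out)
        return x + out if self.use_residual else out


# Example: a stride-1 block with matching input/output channels (illustrative sizes).
block = MobileNetV3Block(in_ch=16, exp_ch=64, out_ch=16, kernel=3, stride=1)
print(block(torch.randn(1, 16, 56, 56)).shape)  # torch.Size([1, 16, 56, 56])
```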

Experiments

  • For CIFAR-100, images were resized to 224×224 before training.

| Datasets | Model | acc1 | acc5 | Epoch | Parameters |
| :---: | :---: | :---: | :---: | :---: | :---: |
| CIFAR-100 | MobileNetV3(LARGE) | 70.44% | 91.34% | 80 | 3.99M |
| CIFAR-100 | MobileNetV3(SMALL) | 67.04% | 89.41% | 55 | 1.7M |
| IMAGENET | MobileNetV3(LARGE) WORK IN PROGRESS | | | | 5.15M |
| IMAGENET | MobileNetV3(SMALL) WORK IN PROGRESS | | | | 2.94M |
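
The CIFAR-100 rows above use inputs resized to 224×224. A torchvision input pipeline along these lines reproduces that preprocessing; this is a sketch, and the normalization statistics are the commonly used CIFAR-100 values rather than values taken from this repository.

```python
import torch
from torchvision import datasets, transforms

# Resize 32x32 CIFAR-100 images to 224x224 to match the experiment setting above.
# The normalization constants are standard CIFAR-100 statistics (assumption).
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize((0.5071, 0.4865, 0.4409), (0.2673, 0.2564, 0.2762)),
])

train_set = datasets.CIFAR100(root="./data", train=True, download=True, transform=transform)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=128, shuffle=True, num_workers=2)
```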

Usage

Train

```
python main.py
```
  • If you want to change hyper-parameters, you can check "python main.py --help"

Options:

  • --dataset-mode (str) - which dataset you use (example: CIFAR10, CIFAR100), (default: CIFAR100).
  • --epochs (int) - number of epochs (default: 100).
  • --batch-size (int) - batch size (default: 128).
  • --learning-rate (float) - learning rate (default: 1e-1).
  • --dropout (float) - dropout rate (default: 0.3).
  • --model-mode (str) - which network you use (example: LARGE, SMALL), (default: LARGE).
  • --load-pretrained (bool) - (default: False).
  • --evaluate (bool) - used when testing (default: False).
  • --multiplier (float) - (default: 1.0).
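
For example, a CIFAR-100 run of the LARGE model with the defaults written out explicitly (every flag below is listed in the options above) would be:

```
python main.py --dataset-mode CIFAR100 --model-mode LARGE --epochs 100 --batch-size 128 --learning-rate 1e-1
```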

Test

```
python main.py --evaluate True
```
  • Put the saved model file in the checkpoint folder and the saved graph file in the saved_graph folder, then run "python main.py --evaluate True".
  • If you want to change hyper-parameters, you can check "python test.py --help"

Options:

  • --dataset-mode (str) - which dataset you use (example: CIFAR10, CIFAR100), (default: CIFAR100).
  • --epochs (int) - number of epochs (default: 100).
  • --batch-size (int) - batch size (default: 128).
  • --learning-rate (float) - learning rate (default: 1e-1).
  • --dropout (float) - dropout rate (default: 0.3).
  • --model-mode (str) - which network you use (example: LARGE, SMALL), (default: LARGE).
  • --load-pretrained (bool) - (default: False).
  • --evaluate (bool) - used when testing (default: False).
  • --multiplier (float) - (default: 1.0).

Number of Parameters

```python
import torch

from model import MobileNetV3


def get_model_parameters(model):
    # Multiply out the dimensions of every parameter tensor and sum the results.
    total_parameters = 0
    for layer in list(model.parameters()):
        layer_parameter = 1
        for l in list(layer.size()):
            layer_parameter *= l
        total_parameters += layer_parameter
    return total_parameters


tmp = torch.randn((128, 3, 224, 224))
model = MobileNetV3(model_mode="LARGE", multiplier=1.0)
print("Number of model parameters: ", get_model_parameters(model))
```
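
As a sanity check, the same count can be obtained with PyTorch's Tensor.numel(), which also makes it easy to compare the LARGE and SMALL variants. This is a convenience sketch, not code from the repository.

```python
import torch
from model import MobileNetV3


def count_parameters(model):
    # Equivalent count using numel(): total number of elements per parameter tensor.
    return sum(p.numel() for p in model.parameters())


for mode in ("LARGE", "SMALL"):
    model = MobileNetV3(model_mode=mode, multiplier=1.0)
    print(mode, count_parameters(model))
```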

Requirements

  • torch==1.0.1
