MNN is a blazing fast, lightweight deep learning framework, battle-tested by business-critical use cases in Alibaba
MNN is a highly efficient and lightweight deep learning framework. It supports inference and training of deep learning models and has industry-leading performance for on-device inference and training. At present, MNN is integrated into more than 20 apps of Alibaba Inc., such as Taobao, Tmall, Youku, DingTalk, and Xianyu, covering more than 70 usage scenarios such as live broadcast, short video capture, search recommendation, searching products by image, interactive marketing, equity distribution, and security risk control. In addition, MNN is also used on embedded devices, such as IoT devices.
The design principles and performance data of MNN have been published in an MLSys 2020 paper here. Please cite MNN in your publications if it helps your research:
```
@inproceedings{alibaba2020mnn,
  author    = {Jiang, Xiaotang and Wang, Huan and Chen, Yiliu and Wu, Ziqi and Wang, Lichuan and Zou, Bin and Yang, Yafeng and Cui, Zongyang and Cai, Yu and Yu, Tianhang and Lv, Chengfei and Wu, Zhihua},
  title     = {MNN: A Universal and Efficient Inference Engine},
  booktitle = {MLSys},
  year      = {2020}
}
```
MNN's documentation is placed in Yuque docs here.
MNN Workbench can be downloaded from MNN's homepage; it provides pretrained models, visualized training tools, and one-click deployment of models to devices.
OpenCL, Vulkan, and OpenGL backends are available and deeply tuned for mainstream GPUs (Adreno and Mali); see the backend-selection sketch after this feature list.
Supports Tensorflow, Caffe, and ONNX model formats, and supports common neural networks such as CNN, RNN, and GAN.
Tensorflow OPs, 58 TFLite OPs, 47 Caffe OPs, and 74 ONNX OPs; number of OPs by different MNN hardware backends: 111 for CPU, 6 for ARM V8.2, 55 for Metal, 43 for OpenCL, and 32 for Vulkan.
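As a rough illustration of choosing among the GPU backends listed above, the following minimal sketch requests the OpenCL backend through MNN's ScheduleConfig, assuming the standard MNN C++ API; the fallback backend is set via backupType:

```cpp
#include <MNN/Interpreter.hpp>
#include <MNN/MNNForwardType.h>

int main() {
    // Request scheduling on the OpenCL backend; ops that the GPU backend
    // does not implement fall back to the backup type (CPU by default).
    MNN::ScheduleConfig config;
    config.type       = MNN_FORWARD_OPENCL; // MNN_FORWARD_VULKAN / MNN_FORWARD_OPENGL also exist
    config.backupType = MNN_FORWARD_CPU;
    // The config is then passed to Interpreter::createSession(config).
    return 0;
}
```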
MNN can be divided into two parts: Converter and Interpreter.
Converter consists of Frontends and Graph Optimize. The former is responsible for supporting different training frameworks; MNN currently supports Tensorflow, Tensorflow Lite, Caffe, and ONNX (PyTorch and MXNet models can be converted via ONNX export). The latter optimizes graphs by operator fusion, operator substitution, and layout adjustment.
Interpreter consists of Engine and Backends. The former is responsible for loading the model and scheduling the computation graph; the latter includes memory allocation and the Op implementations for each computing device. In Engine and Backends, MNN applies a variety of optimizations, including the Winograd algorithm for convolution and deconvolution, the Strassen algorithm for matrix multiplication, low-precision computation, Neon optimization, hand-written assembly, multi-threading, memory reuse, and heterogeneous computing.
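To make the Engine/Backends split concrete, here is a minimal end-to-end sketch of the Interpreter flow, assuming the standard MNN C++ session API; the model path and data handling are placeholders:

```cpp
#include <memory>
#include <MNN/Interpreter.hpp>
#include <MNN/MNNForwardType.h>
#include <MNN/Tensor.hpp>

int main() {
    // Engine side: load a model produced by the Converter ("model.mnn" is a placeholder).
    std::shared_ptr<MNN::Interpreter> net(MNN::Interpreter::createFromFile("model.mnn"));

    MNN::ScheduleConfig config;
    config.type      = MNN_FORWARD_CPU; // backend to schedule ops on
    config.numThread = 4;               // multi-thread optimization on CPU

    // Backend side: allow low-precision computation where the backend supports it.
    MNN::BackendConfig backendConfig;
    backendConfig.precision = MNN::BackendConfig::Precision_Low;
    config.backendConfig = &backendConfig;

    auto session = net->createSession(config);

    // Copy input data in via a host tensor, run the graph, and copy the result out.
    MNN::Tensor* input = net->getSessionInput(session, nullptr);
    std::shared_ptr<MNN::Tensor> inputHost(
        MNN::Tensor::createHostTensorFromDevice(input, false));
    // ... fill inputHost->host<float>() with preprocessed data ...
    input->copyFromHostTensor(inputHost.get());

    net->runSession(session);

    MNN::Tensor* output = net->getSessionOutput(session, nullptr);
    std::shared_ptr<MNN::Tensor> outputHost(
        new MNN::Tensor(output, output->getDimensionType()));
    output->copyToHostTensor(outputHost.get());
    // ... read results from outputHost->host<float>() ...
    return 0;
}
```

Memory for the session's tensors is allocated by the chosen backend at createSession time, which is why input and output data move through explicit host-tensor copies.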
Scan the following QR codes to join the DingTalk discussion groups. The group discussions are predominantly in Chinese, but English speakers are welcome and will be helped.
Group #1 (Full):
Group #2 (Full):
Group #3:
Apache 2.0
MNN participants: Taobao Technology Department, Search Engineering Team, DAMO Team, Youku, and other Alibaba Group employees.
MNN refers to the following projects:
- Caffe
- flatbuffer
- gemmlowp
- Google Vulkan demo
- Halide
- Mace
- ONNX
- protobuffer
- skia
- Tensorflow
- ncnn
- paddle-mobile
- stb
- rapidjson
- pybind11
- pytorch
- bolt
- libyuv