Paddle2ONNX

简体中文 | English

Introduction

Paddle2ONNX enables users to convert models from PaddlePaddle to ONNX.

  • Supported model formats. Paddle2ONNX supports both the dynamic and the static computational graph of PaddlePaddle. For the static computational graph, Paddle2ONNX converts PaddlePaddle models saved with the API `save_inference_model`; see the IPython example. Support for the dynamic computational graph is experimental, and more details will be released after the release of PaddlePaddle 2.0.
  • Supported operators. Paddle2ONNX can stably export models to ONNX Opset 9~11 and partially supports lower opset versions. For more details, please refer to the Operator list.
  • Supported models. You can find models officially verified by Paddle2ONNX in the model zoo.

AIStudio Tutorials

What we can do with Paddle2ONNX

Environment Dependencies

Configuration

| Dependency | Version |
|------------|---------|
| python | >= 2.7 |
| paddlepaddle (static computational graph) | >= 1.8.0 |
| paddlepaddle (dynamic computational graph) | >= 2.0.0 |
| onnx (optional) | == 1.7.0 |
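The constraints above can be sketched as a small version check. This is purely illustrative: the function name and the `(major, minor, patch)` tuple convention are made up for this sketch and are not part of the paddle2onnx API.

```python
import sys

def meets_requirements(py_version, paddle_version, dygraph=False):
    """Illustrative check of the dependency table above.

    Versions are (major, minor, patch) tuples; dynamic computational
    graph export needs a newer paddlepaddle than static export.
    """
    min_paddle = (2, 0, 0) if dygraph else (1, 8, 0)
    return py_version >= (2, 7) and paddle_version >= min_paddle

# e.g. against the current interpreter:
print(meets_requirements(sys.version_info[:3], (1, 8, 0)))
```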

Installation

Via Pip

 pip install paddle2onnx

From Source

 git clone https://github.com/PaddlePaddle/Paddle2ONNX.git
 cd Paddle2ONNX
 python setup.py install

Usage

Static Computational Graph

Via Command Line Tool

Uncombined PaddlePaddle model (parameters saved in separate files)

paddle2onnx --model_dir paddle_model  --save_file onnx_file --opset_version 10 --enable_onnx_checker True

Combined PaddlePaddle model (parameters saved in one binary file)

paddle2onnx --model_dir paddle_model  --model_filename model_filename --params_filename params_filename --save_file onnx_file --opset_version 10 --enable_onnx_checker True
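The two invocations above differ only in whether the file-name flags are present. As a sketch, the choice can be captured in a small helper; the function name is made up here and is not part of paddle2onnx, but the flags match the command lines above.

```python
def build_export_cmd(model_dir, save_file, model_filename=None,
                     params_filename=None, opset_version=10):
    """Assemble a paddle2onnx command line.

    Pass model_filename/params_filename for a combined model (parameters
    in one binary file); omit them for an uncombined model.
    """
    cmd = ['paddle2onnx',
           '--model_dir', model_dir,
           '--save_file', save_file,
           '--opset_version', str(opset_version),
           '--enable_onnx_checker', 'True']
    if model_filename and params_filename:
        cmd += ['--model_filename', model_filename,
                '--params_filename', params_filename]
    return cmd
```

The resulting list could then be run with `subprocess.run(build_export_cmd(...), check=True)`.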

Parameters

| Parameters | Description |
|------------|-------------|
| --model_dir | The directory path of the PaddlePaddle model saved by `paddle.fluid.io.save_inference_model` |
| --model_filename | [Optional] The model file name under the directory designated by `--model_dir`. Only needed when all the model parameters are saved in one binary file. Default: None |
| --params_filename | [Optional] The parameter file name under the directory designated by `--model_dir`. Only needed when all the model parameters are saved in one binary file. Default: None |
| --save_file | The file path of the exported ONNX model |
| --opset_version | [Optional] The ONNX opset version to export. Opset 9~11 are stably supported. Default: 9 |
| --enable_onnx_checker | [Optional] Whether to check the validity of the exported ONNX model; turning it on is recommended. Requires onnx >= 1.7.0 when set to True. Default: False |
| --enable_paddle_fallback | [Optional] Whether custom ops are exported in paddle_fallback mode. Default: False |
| --version | [Optional] Print the paddle2onnx version |
  • Two types of PaddlePaddle models
    • Combined model, with parameters saved in one binary file. `--model_filename` and `--params_filename` specify the model file name and the parameter file name under the directory designated by `--model_dir`; both are valid only together with `--model_dir`.
    • Uncombined model, with parameters saved in separate files. Only `--model_dir` is needed; it contains the `__model__` file and the separate parameter files.
  • Use onnxruntime to verify the converted model.
  • If a prompt that an OP is not supported appears during model conversion, users are welcome to add support themselves; please refer to the document OP Development Guide.
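To act on the onnxruntime suggestion above, one approach is to compare Paddle outputs with ONNX Runtime outputs numerically. The helper below is a sketch: the function name and tolerance are assumptions, not paddle2onnx API, and the commented lines show where onnxruntime would plug in, assuming an exported model file exists.

```python
import numpy as np

def outputs_match(paddle_out, onnx_out, atol=1e-5):
    """Return True when the two output arrays agree within tolerance."""
    return bool(np.allclose(np.asarray(paddle_out), np.asarray(onnx_out),
                            atol=atol))

# Hypothetical verification flow (requires onnxruntime and an exported model):
# import onnxruntime as ort
# sess = ort.InferenceSession('onnx_file')
# feed = {sess.get_inputs()[0].name: dummy_input}
# onnx_out = sess.run(None, feed)[0]
# assert outputs_match(paddle_out, onnx_out)
```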

IPython tutorials

Dynamic Computational Graph

    import paddle
    from paddle import nn
    from paddle.static import InputSpec
    import paddle2onnx as p2o

    class LinearNet(nn.Layer):
        def __init__(self):
            super(LinearNet, self).__init__()
            self._linear = nn.Linear(784, 10)

        def forward(self, x):
            return self._linear(x)

    layer = LinearNet()

    # configure model inputs
    x_spec = InputSpec([None, 784], 'float32', 'x')

    # convert model to inference mode
    layer.eval()

    save_path = 'onnx.save/linear_net'
    p2o.dygraph2onnx(layer, save_path + '.onnx', input_spec=[x_spec])

When paddlepaddle > 2.0.0, you can try:

    paddle.onnx.export(layer, save_path, input_spec=[x_spec])

IPython tutorials

Documents

License

Apache-2.0 license.
