# ONNX-TensorRT: TensorRT backend for ONNX
Parses ONNX models for execution with TensorRT.
See also the TensorRT documentation.
For the list of recent changes, see the changelog.
For a list of commonly seen issues and questions, see the FAQ.
For business inquiries, please contact [email protected]
For press and other inquiries, please contact Hector Marinez at [email protected]
Development on the master branch is for the latest version of TensorRT, 7.2.2, with full-dimensions and dynamic shape support.
For previous versions of TensorRT, refer to their respective branches.
Building INetwork objects in full dimensions mode with dynamic shape support requires calling the following API:
C++:

```cpp
const auto explicitBatch = 1U << static_cast<uint32_t>(nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
builder->createNetworkV2(explicitBatch);
```

Python:

```python
import tensorrt
explicit_batch = 1 << int(tensorrt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
builder.create_network(explicit_batch)
```
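The flag mechanics above can be sketched in plain Python with a stand-in enum (the real enum lives in the `tensorrt` module; the value 0 for `EXPLICIT_BATCH`, and the helper `network_flags`, are illustrative assumptions, not part of the TensorRT API):

```python
from enum import IntEnum

# Stand-in for tensorrt.NetworkDefinitionCreationFlag, shown only to
# illustrate how the bitmask passed to create_network() is composed.
class NetworkDefinitionCreationFlag(IntEnum):
    EXPLICIT_BATCH = 0  # assumed value for illustration

def network_flags(*flags):
    """OR together one bit per creation flag, as createNetworkV2 expects."""
    mask = 0
    for flag in flags:
        mask |= 1 << int(flag)
    return mask

explicit_batch = network_flags(NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
print(explicit_batch)  # the bitmask handed to builder.create_network(...)
```

Because the argument is a bitmask, multiple creation flags could in principle be combined with bitwise OR before being passed to `create_network`.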
Current supported ONNX operators are found in the operator support matrix.
For building within Docker, we recommend setting up the Docker containers as instructed in the main [TensorRT repository](https://github.com/NVIDIA/TensorRT#setting-up-the-build-environment) to build the onnx-tensorrt library.
Once you have cloned the repository, you can build the parser libraries and executables by running:
```bash
cd onnx-tensorrt
mkdir build && cd build
cmake .. -DTENSORRT_ROOT=<path_to_trt> && make -j
# Ensure that you update your LD_LIBRARY_PATH to pick up the location of the newly built library:
export LD_LIBRARY_PATH=$PWD:$LD_LIBRARY_PATH
```
For building only the libraries, append `-DBUILD_LIBRARY_ONLY=1` to the CMake build command.
ONNX models can be converted to serialized TensorRT engines using the `onnx2trt` executable:

```bash
onnx2trt my_model.onnx -o my_engine.trt
```
ONNX models can also be converted to human-readable text:

```bash
onnx2trt my_model.onnx -t my_model.onnx.txt
```
ONNX models can also be optimized by ONNX's optimization libraries (added by dsandler). To optimize an ONNX model and output a new one, use `-m` to specify the output model name and `-O` to specify a semicolon-separated list of optimization passes to apply:

```bash
onnx2trt my_model.onnx -O "pass_1;pass_2;pass_3" -m my_model_optimized.onnx
```
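The `-O` argument above is a single semicolon-separated string. A minimal sketch of parsing such a pass list (illustrative only; this is not onnx2trt's actual implementation):

```python
def parse_passes(arg):
    """Split a semicolon-separated pass list, dropping empty entries."""
    return [p for p in arg.split(";") if p]

print(parse_passes("pass_1;pass_2;pass_3"))  # ['pass_1', 'pass_2', 'pass_3']
```

Dropping empty entries keeps trailing or doubled semicolons (e.g. `"pass_1;;pass_2;"`) from producing empty pass names.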
List all available optimization passes by running:

```bash
onnx2trt -p
```

See more usage information by running:

```bash
onnx2trt -h
```
Python bindings for the ONNX-TensorRT parser are packaged in the shipped `.whl` files. Install them with:

```bash
python3 -m pip install <tensorrt_install_dir>/python/tensorrt-7.x.x.x-cp<python_ver>-none-linux_x86_64.whl
```
TensorRT 7.2.2 supports ONNX release 1.6.0. Install it with:

```bash
python3 -m pip install onnx==1.6.0
```
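Since the ONNX version is pinned (1.6.0 for TensorRT 7.2.2, per the text above), a small runtime guard can catch mismatches early. The helper names below are hypothetical, not part of onnx or TensorRT:

```python
# Version pin taken from the README text above (an exact-match check).
REQUIRED_ONNX = (1, 6, 0)

def parse_version(version_string):
    """'1.6.0' -> (1, 6, 0); pre-release suffixes are not handled."""
    return tuple(int(part) for part in version_string.split("."))

def check_onnx_version(installed):
    """Return True when the installed version matches the pinned one."""
    return parse_version(installed) == REQUIRED_ONNX

print(check_onnx_version("1.6.0"))  # True
print(check_onnx_version("1.7.0"))  # False
```

In practice the installed version string would come from `onnx.__version__`.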
The ONNX-TensorRT backend can be installed by running:

```bash
python3 setup.py install
```
The TensorRT backend for ONNX can be used in Python as follows:
```python
import onnx
import onnx_tensorrt.backend as backend
import numpy as np

model = onnx.load("/path/to/model.onnx")
engine = backend.prepare(model, device='CUDA:1')
input_data = np.random.random(size=(32, 3, 224, 224)).astype(np.float32)
output_data = engine.run(input_data)[0]
print(output_data)
print(output_data.shape)
```
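The engine in the example above consumes NCHW float32 input of shape (32, 3, 224, 224). A hypothetical preprocessing helper that converts a batch of HWC uint8 images into that layout (the helper name and the [0, 1] scaling are assumptions, not part of the backend API):

```python
import numpy as np

def to_nchw_batch(images_hwc_uint8):
    """(N, 224, 224, 3) uint8 -> (N, 3, 224, 224) float32 scaled to [0, 1]."""
    x = images_hwc_uint8.astype(np.float32) / 255.0
    # Move the channel axis from last (HWC) to second (CHW) position.
    return np.transpose(x, (0, 3, 1, 2))

batch = to_nchw_batch(np.zeros((32, 224, 224, 3), dtype=np.uint8))
print(batch.shape)  # (32, 3, 224, 224)
```

The resulting array could then be passed to `engine.run(...)` as `input_data`.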
The model parser library, libnvonnxparser.so, has its C++ API declared in the NvOnnxParser.h header.
After installation (or inside the Docker container), ONNX backend tests can be run as follows:
Real model tests only:

```bash
python onnx_backend_test.py OnnxBackendRealModelTest
```
You can use the `-v` flag to make the output more verbose.
Pre-trained models in ONNX format can be found at the ONNX Model Zoo.