On Model Conversion During Deployment: Paddle, ONNX, TRT (Part 2)

This post walks through converting a Paddle 2.0 model to ONNX and then to a TRT engine. First, the paddle2onnx.py script exports the model to ONNX; then, on the NVIDIA device, the onnx2trt.py script builds a TRT engine from the ONNX model. Throughout the conversion, pay attention to the model's inputs and outputs and make sure they are specified correctly; if errors occur, you may need to adjust how the model's outputs are marked.

Preface

In the previous post we covered converting a Paddle 1.0 model to ONNX and TRT formats. Many readers have asked how to do the same for 2.0 models, so this post focuses on the Paddle 2.0 conversion process.

1. paddle2onnx

First, prepare the trained model and its corresponding yml config file and put them in the appropriate directory. Then edit line 36 of the script (the paddle.onnx.export call) to set the ONNX output name. The script is as follows:
# paddle2onnx.py
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import os
import sys
 
__dir__ = os.path.dirname(os.path.abspath(__file__))
sys.path.append(__dir__)
sys.path.append(os.path.abspath(os.path.join(__dir__, '..')))
 
os.environ["FLAGS_allocator_strategy"] = 'auto_growth'

import paddle
 
from ppocr.modeling.architectures import build_model
from ppocr.utils.save_load import init_model
import tools.program as program
 
 
def main():
    global_config = config['Global']
 
    # build model
    model = build_model(config['Architecture'])
 
    init_model(config, model, logger)
 
    model.eval()

    # define the input spec: a fixed shape of [batch=1, 3 channels, 640x640]
    input_spec = paddle.static.InputSpec(shape=[1, 3, 640, 640], dtype='float32', name='data')

    # export to ONNX; "****" is the save-path prefix (".onnx" is appended automatically)
    paddle.onnx.export(model, "****", input_spec=[input_spec], opset_version=10)
 
 
if __name__ == '__main__':
    config, device, logger, vdl_writer = program.preprocess()
    main()
Once edited, run the following command in the console:
python <your_dir_name>/paddle2onnx.py -c <your_yml_path> \
    -o Global.checkpoints='<your_train_model_path + model_name>'

For example:

python my_programe/paddle2onnx.py -c output/yanzhou_ID_detect/config.yml \
    -o Global.checkpoints='./output/yanzhou_ID_detect/best_accuracy'
Press Enter; if the corresponding .onnx file appears, this step succeeded.
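Before moving on, it is worth sanity-checking the exported file with the onnx package and printing the graph's input/output names, since the TRT step below depends on them being marked correctly. A minimal sketch (the file name yanzhou_det.onnx is a placeholder; use whatever name you set on line 36 of paddle2onnx.py):

# check_onnx.py -- quick sanity check of the exported model
import onnx

model = onnx.load("yanzhou_det.onnx")  # placeholder file name
onnx.checker.check_model(model)        # raises if the model is malformed

# print the graph's declared inputs and outputs
for inp in model.graph.input:
    print("input :", inp.name)
for out in model.graph.output:
    print("output:", out.name)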

2. onnx2trt

If the Paddle 1.0 conversion drove you up the wall, the Paddle 2.0 conversion will be a pleasant contrast. Copy the exported ONNX model to your NVIDIA device and edit the following script:
# onnx2trt.py
import tensorrt as trt

def ONNX_build_engine(trt_model_name, onnx_model_name):
    G_LOGGER = trt.Logger()
    # TRT 7: ONNX models must be parsed into an explicit-batch network
    explicit_batch = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    with trt.Builder(G_LOGGER) as builder, builder.create_network(explicit_batch) as network, trt.OnnxParser(network, G_LOGGER) as parser:
        builder.max_batch_size = 1
        builder.max_workspace_size = 1 << 30  # 1 GiB workspace
        print('Loading ONNX file from path {}...'.format(onnx_model_name))
        with open(onnx_model_name, 'rb') as model:
            print('Beginning ONNX file parsing')
            if not parser.parse(model.read()):
                # dump every parser error, then bail out instead of building a broken engine
                for error in range(parser.num_errors):
                    print(parser.get_error(error))
                print("Model was not parsed successfully")
                exit(0)

        print('Completed parsing of ONNX file')
        print('Building an engine from file {}; this may take a while...'.format(onnx_model_name))

        # builder.int8_mode = True
        # builder.int8_calibrator = calib
        builder.fp16_mode = True

        print("num layers:", network.num_layers)
        last_layer = network.get_layer(network.num_layers - 1)
        # if not last_layer.get_output(0):
        #     # some models need this; others already mark the output during ONNX export
        #     network.mark_output(last_layer.get_output(0))
        network.get_input(0).shape = [1, 3, 640, 640]  # TRT 7: pin the input shape
        engine = builder.build_cuda_engine(network)
        print("engine:", engine)
        print("Completed creating Engine")
        with open(trt_model_name, "wb") as f:
            f.write(engine.serialize())
        return engine


ONNX_build_engine('<engine_path>', '<onnx_path>')
Run the script:
python3 onnx2trt.py
[TensorRT] ERROR: Network must have at least one output
[TensorRT] ERROR: Network validation failed.
engine: None
Completed creating Engine
Traceback (most recent call last):
  File "onnx2trt.py", line 49, in <module>
    ONNX_build_engine('engine','onnx')
  File "onnx2trt.py", line 33, in ONNX_build_engine
    f.write(engine.serialize())
AttributeError: 'NoneType' object has no attribute 'serialize'
If you hit the errors above, the network's outputs were not located correctly. Uncomment the `if not last_layer.get_output(0):` check and the `network.mark_output(...)` line below it, and the problem is resolved.
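For reference, the uncommented block looks like this (same variables as in onnx2trt.py above); marking the last layer's tensor gives TensorRT the output it was missing:

last_layer = network.get_layer(network.num_layers - 1)
if not last_layer.get_output(0):
    # explicitly mark the last layer's tensor as the network output
    network.mark_output(last_layer.get_output(0))

Rerunning the script should then print: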
Loading ONNX file from path **.onnx...
Beginning ONNX file parsing
[TensorRT] WARNING: onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
Completed parsing of ONNX file
Building an engine from file **.onnx; this may take a while...
num layers: 368
engine: <tensorrt.tensorrt.ICudaEngine object at 0x7f65cef618>
Completed creating Engine
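
Once the engine file is written, a quick way to confirm it is usable is to deserialize it and list its I/O bindings. A minimal sketch, assuming the same TRT 7 API as above and a placeholder engine path:

# check_engine.py -- deserialize the engine and list its bindings
import tensorrt as trt

logger = trt.Logger()
with open('<engine_path>', 'rb') as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

for i in range(engine.num_bindings):
    kind = 'input ' if engine.binding_is_input(i) else 'output'
    print(kind, engine.get_binding_name(i), engine.get_binding_shape(i))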

Summary

All in all, the most important part of the conversion process is paying close attention to the model's structure, in particular its inputs and outputs.