1. A TensorRT engine must be generated on the target machine, because engines are tied to the local GPU and TensorRT version. This is usually done with the trtexec tool.
For example: trtexec.exe --onnx=model.onnx --saveEngine=model.engine --noTF32
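As a sketch, the same trtexec invocation can be driven from Python with subprocess. The flags mirror the command above; that trtexec is on PATH is an assumption, so the snippet checks first:

```python
import shutil
import subprocess

# Mirror the trtexec command above; --noTF32 disables TF32 math so the
# engine runs in full FP32, which makes later accuracy comparisons cleaner.
cmd = [
    "trtexec",
    "--onnx=model.onnx",
    "--saveEngine=model.engine",
    "--noTF32",
]

if shutil.which("trtexec") is not None:
    subprocess.run(cmd, check=True)
else:
    print("trtexec not found on PATH; run manually:", " ".join(cmd))
```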
2. The accuracy of the converted model can be analyzed with Polygraphy, which compares TensorRT outputs against ONNX Runtime:
polygraphy run model.onnx --onnxrt --trt --save-engine=model.plan --fp16 --input-shapes x:[1,3,1024,1024] --atol 1e-3 --rtol 1e-3 --verbose
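The --atol/--rtol flags set the elementwise tolerances for the comparison. Conceptually the check is similar to NumPy's allclose rule — a sketch of the idea, not Polygraphy's exact implementation:

```python
import numpy as np

def outputs_match(onnxrt_out, trt_out, atol=1e-3, rtol=1e-3):
    # Elementwise check: |a - b| <= atol + rtol * |b|, the same rule
    # numpy.allclose uses; Polygraphy's default comparison is similar in spirit.
    return np.allclose(onnxrt_out, trt_out, atol=atol, rtol=rtol)

# Simulated reference output vs. a slightly perturbed copy
# (e.g. FP16 rounding noise well inside the 1e-3 tolerance)
ref = np.random.rand(1, 3, 8, 8).astype(np.float32)
noisy = ref + np.random.uniform(-5e-4, 5e-4, ref.shape).astype(np.float32)

print(outputs_match(ref, noisy))      # small noise: within tolerance
print(outputs_match(ref, ref + 0.1))  # large offset: out of tolerance
```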
To run the comparison on specific image data instead of random inputs, load and preprocess the image, then save it to JSON (here via the JSON-save helpers that ship with TensorRT's Polygraphy).
import base64
import json

import cv2
import numpy as np

def encode_numpy_array(obj):
    # Fallback serializer for json.dumps: convert NumPy arrays to nested lists
    if isinstance(obj, np.ndarray):
        return obj.tolist()
    raise TypeError(f"Object of type {type(obj)} is not JSON serializable")

x = cv2.imread('10-01.jpg')
x = cv2.cvtColor(x, cv2.COLOR_BGR2RGB)
x = cv2.resize(x, (1500, 2000))  # OpenCV takes (width, height)
x = x.astype('float32')
x = x / (np.array([1, 1, 1]) * 255)  # per-channel scaling to [0, 1]
print(x.max())
# Reorder dimensions from (H, W, C) to (C, H, W)
x = np.transpose(x, (2, 0, 1))
# Add a batch dimension: (1, C, H, W)
x = np.expand_dims(x, axis=0)
# x = np.tile(x, (2, 1, 1, 1))  # optionally repeat to batch size 2
print('x', x.shape)
# Make sure the input dtype is float32
x = x.astype('float32')
image_flattened = x.flatten()  # flattened copy (not used below)
# Serialize the array as JSON text, then base64-encode it
json_data = json.dumps(x, default=encode_numpy_array)
json_bytes = json_data.encode('utf-8')
base64_bytes = base64.b64encode(json_bytes)
base64_string = base64_bytes.decode('utf-8')
# Save the base64 string to a file
with open('normalized_rgb_image.json', 'w') as json_file:
    json_file.write(base64_string)
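The file written this way can be restored to an identical array by reversing each step (base64-decode, JSON-parse, rebuild the ndarray). A round-trip sketch using the same stdlib modules; the demo filename and the float32 dtype are assumptions matching the save path above:

```python
import base64
import json

import numpy as np

def decode_image_json(path):
    # Reverse of the save path: base64 -> JSON text -> nested lists -> ndarray
    with open(path, 'r') as f:
        base64_string = f.read()
    json_bytes = base64.b64decode(base64_string)
    nested = json.loads(json_bytes.decode('utf-8'))
    return np.asarray(nested, dtype=np.float32)

# Round-trip demo with a small stand-in for the (1, C, H, W) image tensor
x = np.random.rand(1, 3, 4, 4).astype(np.float32)
payload = base64.b64encode(json.dumps(x.tolist()).encode('utf-8')).decode('utf-8')
with open('normalized_rgb_image_demo.json', 'w') as f:
    f.write(payload)

restored = decode_image_json('normalized_rgb_image_demo.json')
print(np.array_equal(x, restored))
```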
# Option 1: Define a function that will yield feed_dicts (i.e. Dict[str, np.ndarray])
def load_data():
    for _ in range(1):
        yield {"input1": x}  # Still totally real data

# Option 2: Create a JSON file containing the input data using the `save_json()` helper.
# The input to `save_json()` should have type: List[Dict[str, np.ndarray]].
# For convenience, we'll reuse our `load_data()` implementation to generate the list.
from polygraphy.json import save_json

input_data = list(load_data())
save_json(input_data, "custom_inputs-2.json", description="custom input data")
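save_json() expects a List[Dict[str, np.ndarray]], one feed_dict per inference. A minimal structural check of that shape in pure NumPy; the input name "input1" follows the snippet above and must match your model's actual input name:

```python
import numpy as np

def make_feed_dicts(batch, n=1):
    # One feed_dict per iteration, matching the List[Dict[str, np.ndarray]]
    # structure that save_json() expects.
    return [{"input1": batch} for _ in range(n)]

batch = np.zeros((1, 3, 4, 4), dtype=np.float32)
input_data = make_feed_dicts(batch)

# Sanity-check the structure before handing it to save_json()
assert isinstance(input_data, list)
assert all(isinstance(d, dict) for d in input_data)
assert all(isinstance(v, np.ndarray) for d in input_data for v in d.values())
print(len(input_data), input_data[0]["input1"].shape)
```

The saved JSON can then be fed back to the polygraphy CLI in place of random data (e.g. via its --load-inputs option; check `polygraphy run -h` for the exact flag name in your version).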