Converting the YOLOv5s .pt file to RKNN (running in a Python environment on the RK3399Pro)

This article describes how to convert a specific version of the YOLOv5 model (commit dd7f0b7e05e7658e9cd4fc8f02de5b7df060785d) to ONNX format and deploy it on the RK3399Pro using the RKNN environment. The conversion involves running the export.py script, inspecting the ONNX model structure with Netron, and adjusting the model's output node parameters. Environment setup and testing the converted model against the RKNN examples are also covered.


1 Download the YOLOv5 version specified by RKNN. The commit I found referenced was c5360f6e7009eb4d05f14d1cc9dae0963e949213, but that one did not convert successfully; commit dd7f0b7e05e7658e9cd4fc8f02de5b7df060785d did. Download from: https://github.com/ultralytics/yolov5/tree/dd7f0b7e05e7658e9cd4fc8f02de5b7df060785d. Note that the yolov5s.pt weights must be the ones from the YOLOv5 v5.0 release.
2 In the yolov5 directory, run the export script: python export.py --weights yolov5s.pt --img 640 --batch 1 --include onnx. This command converts the model to ONNX format.
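If you want a quick sanity check before moving on, the exported model can be run once with onnxruntime. This is only an optional sketch and not part of the original workflow; the input name and the 1x3x640x640 shape are assumptions based on the default YOLOv5 export.

```python
import numpy as np
import onnxruntime as ort

# Load the exported model and query its input metadata
sess = ort.InferenceSession("yolov5s.onnx")
inp = sess.get_inputs()[0]  # usually named "images" with shape (1, 3, 640, 640)

# Run a dummy forward pass to confirm the export is valid
dummy = np.zeros((1, 3, 640, 640), dtype=np.float32)
outputs = sess.run(None, {inp.name: dummy})
for meta, out in zip(sess.get_outputs(), outputs):
    print(meta.name, out.shape)
```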
3 Download Netron, open the ONNX file to view the network structure, and click view->properties.

[Screenshot: Netron model properties showing the output nodes]
Note the output nodes 751, 812, and 873; these are the parameters that need to be modified during model conversion.
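As an alternative to Netron, the graph's output node names can also be listed programmatically with the onnx Python package. This is only an illustrative sketch; the node numbers 751/812/873 will differ between exports.

```python
import onnx

# Load the exported graph and print its output tensor names;
# for this export they should correspond to the 751 / 812 / 873 nodes seen in Netron.
model = onnx.load("yolov5s.onnx")
for out in model.graph.output:
    print(out.name)
```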

4 First set up the RKNN environment on the RK3399Pro; for the setup you can refer to my other article. My RKNN driver version is 1.7.1 and the Python version is 3.7.

5 Modify the model conversion parameters, changing the output node parameters to the following:
[Screenshot: modified output node parameters in the conversion script]
Following the RKNN example, I subtracted 19 from each output node number (I have not looked into why this works).
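For reference, a minimal conversion script along the lines of the official rknn-toolkit ONNX examples might look like the sketch below. It assumes the RKNN-Toolkit 1.x Python API (config / load_onnx / build / export_rknn); the output node names '732', '793', '854' are simply 751/812/873 minus 19 as described above and will differ for your own export, and dataset.txt is a quantization image list you provide.

```python
from rknn.api import RKNN

rknn = RKNN()

# Preprocessing config for an RGB 0-255 input; adjust to match your pipeline
rknn.config(mean_values=[[0, 0, 0]],
            std_values=[[255, 255, 255]],
            reorder_channel='0 1 2',
            target_platform=['rk3399pro'])

# Load the ONNX model and pin the three detection heads as outputs
# (751/812/873 minus 19, per the adjustment above; these names are illustrative)
rknn.load_onnx(model='yolov5s.onnx', outputs=['732', '793', '854'])

# Quantize against a list of sample images and export the RKNN model
rknn.build(do_quantization=True, dataset='./dataset.txt')
rknn.export_rknn('./yolov5s.rknn')
rknn.release()
```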
6 Following the ONNX YOLOv5 example provided by RKNN, run the test code: python3 test.py. The results are as follows:
[Screenshot: detection results from running test.py]
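The example's test.py handles loading and post-processing; the on-board inference portion boils down to something like the sketch below. This again assumes the RKNN-Toolkit 1.x API running locally on the RK3399Pro; the 640x640 RGB input and file names are assumptions, and the YOLO decoding of the three output tensors is omitted.

```python
import cv2
from rknn.api import RKNN

rknn = RKNN()
rknn.load_rknn('./yolov5s.rknn')

# On the RK3399Pro itself the NPU is local, so no target device string is needed
rknn.init_runtime()

# Read and preprocess a test image to the 640x640 RGB input the model expects
img = cv2.imread('bus.jpg')
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img = cv2.resize(img, (640, 640))

# Run inference; the three returned arrays are the raw YOLO heads,
# which test.py then decodes into boxes, scores and classes
outputs = rknn.inference(inputs=[img])
print([o.shape for o in outputs])

rknn.release()
```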

### YOLOv5s Classification Model Deployment on RK3588

Deploying a YOLOv5s classification model onto an RK3588 device involves several key steps: preparing the environment, optimizing and converting the model to be compatible with the target hardware, configuring inputs and outputs (as shown in code snippets similar to those used for other models[^2]), and ensuring efficient execution.

#### Preparing Environment

To deploy a deep learning model such as YOLOv5s on the RK3588, first set up the development environment properly. This includes installing the necessary libraries such as OpenCV and PyTorch (if using Python), or TensorFlow Lite if you opt to convert into that format, which may offer performance optimizations tailored to mobile/embedded systems.

#### Optimizing and Converting Models

To achieve good inference speed while maintaining the accuracy required by applications running on edge devices like the RK3588, consider quantization techniques that reduce precision from floating point to integer without significantly degrading result quality. Tools provided within the frameworks can assist here; TensorRT, for instance, offers powerful capabilities alongside NVIDIA GPUs and may also support ARM-based SoCs through specific configurations. Once optimized, convert the original framework-specific files (.pt/.pth files common among PyTorch users) into ONNX, an open standard exchangeable across platforms, before transforming them further depending on which runtime engine will manage execution.

#### Configuring Inputs and Outputs

After loading the converted model into memory, configure its input/output parameters appropriately. For example:

```python
input_blob = next(iter(net.inputs))
out_blob = next(iter(net.outputs))
n, c, h, w = net.inputs[input_blob].shape
```

This configuration ensures the data being fed matches the expected dimensions exactly, so processing occurs seamlessly at run time and avoids mismatches that lead to errors or incorrect predictions.

#### Ensuring Efficient Execution

Finally, fine-tune how computations occur on the chip itself, which is crucial given the power-consumption versus computational-throughput trade-offs inherent in embedded environments. Leveraging asynchronous I/O operations where applicable can improve overall system responsiveness under the heavy workloads typical of real-world scenarios, such as continuous analysis of video streams with object-detection networks that were originally designed for desktop/server-grade processors rather than the resource-constrained hardware found in smart cameras and IoT devices.[^1]
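For the RK3588 specifically, on-device execution of a converted .rknn model typically goes through the rknn_toolkit_lite2 runtime rather than the full toolkit. The following is only a rough sketch under that assumption; the model file name, the 224x224 classifier input size, and the test image are all illustrative, not a drop-in deployment script.

```python
import cv2
import numpy as np
from rknnlite.api import RKNNLite

# Load a model previously converted with rknn-toolkit2 on a host PC
rknn_lite = RKNNLite()
rknn_lite.load_rknn('./yolov5s_cls.rknn')
rknn_lite.init_runtime()

# Prepare one RGB frame at the model's expected resolution (assumed 224x224 for a classifier)
img = cv2.cvtColor(cv2.imread('test.jpg'), cv2.COLOR_BGR2RGB)
img = cv2.resize(img, (224, 224))

# Run inference and take the top-1 class from the single output tensor
outputs = rknn_lite.inference(inputs=[img])
print('top-1 class id:', int(np.argmax(outputs[0])))

rknn_lite.release()
```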