Solution for PaddleDetection 3.0.0 requiring an inference.json/model.json file


When running attribute recognition with PaddleDetection 3.0.0, an error is raised saying that inference.json or model.json cannot be found, but the model folder does not contain such a file.

Error message:

  File "D:\projects\python\PaddleDetection\deploy\python\infer.py", line 1011, in load_predictor
    config = Config(model_path, model_prefix)
             │      │           └ 'inference'
             │      └ '../weights\\pp\\PPLCNet_x1_0_person_attribute_945_infer'
             └ <class 'paddle.base.libpaddle.AnalysisConfig'>

RuntimeError: (NotFound) Cannot open file ../weights\pp\PPLCNet_x1_0_person_attribute_945_infer/inference.json, please confirm whether the file is normal.
  [Hint: Expected paddle::inference::IsFileExists(prog_file_) == true, but received paddle::inference::IsFileExists(prog_file_):0 != true:1.] (at ..\paddle\fluid\inference\api\analysis_config.cc:117)
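
For context, Paddle 3.x can load either the newer PIR/JSON program format (inference.json) or the legacy format (inference.pdmodel plus inference.pdiparams); in this build, the prefix-based call Config(model_path, model_prefix) resolves the prefix to inference.json, so the lookup fails when the export folder only holds the legacy files. Below is a minimal, hypothetical check (assuming a standard PaddleDetection export folder) to see which format is actually present:

    import os

    def describe_model_dir(model_dir, prefix='inference'):
        # Hypothetical helper: report which exported program format exists.
        json_file = os.path.join(model_dir, prefix + '.json')        # Paddle 3.x PIR/JSON program
        pdmodel   = os.path.join(model_dir, prefix + '.pdmodel')     # legacy program file
        pdiparams = os.path.join(model_dir, prefix + '.pdiparams')   # weights (used by both formats)
        print('PIR/JSON program  :', os.path.exists(json_file))
        print('legacy .pdmodel   :', os.path.exists(pdmodel))
        print('weights .pdiparams:', os.path.exists(pdiparams))

    describe_model_dir(r'../weights/pp/PPLCNet_x1_0_person_attribute_945_infer')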

Model folder:

Referring to the link PaddleDetection目标检测自定义训练-EW帮帮网, I modified the relevant part of infer.py and removed the Paddle version check, after which the model could be loaded and run for detection normally. A sketch of the change is shown below.
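
The following is only a sketch of that change, assuming the export folder contains the legacy inference.pdmodel / inference.pdiparams pair (the exact code in load_predictor differs between PaddleDetection releases): instead of the prefix-based Config(model_path, model_prefix), which looks for inference.json, build the config from explicit file paths.

    import os
    from paddle.inference import Config

    def build_legacy_config(model_dir, model_prefix='inference'):
        # Sketch only: bypass the Paddle-version branch and always use the
        # legacy constructor that takes the program file and params file directly.
        prog_file   = os.path.join(model_dir, model_prefix + '.pdmodel')
        params_file = os.path.join(model_dir, model_prefix + '.pdiparams')
        return Config(prog_file, params_file)

    # In load_predictor(), replace
    #     config = Config(model_path, model_prefix)
    # with
    #     config = build_legacy_config(model_path, model_prefix)

Alternatively, re-exporting the model under Paddle 3.x should produce the inference.json file that the unmodified code path expects.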

