Error when loading the SDK: Solution

This article describes a common Eclipse startup error and its fix: the error stems from a parse failure on certain devices.xml files in the Android SDK, and replacing those files resolves the problem.


The error:

When Eclipse starts, a dialog appears with the following content:

Error when loading the SDK:

Error: Error parsing \Android\adt-bundle-windows-x86_64-20140702\sdk\system-images\android-22\android-wear\armeabi-v7a\devices.xml
cvc-complex-type.2.4.d: Invalid content was found starting with element 'd:skin'. No child element is expected at this point.
Error: Error parsing D:\Android\adt-bundle-windows-x86_64-20140702\sdk\system-images\android-22\android-wear\x86\devices.xml
cvc-complex-type.2.4.d: Invalid content was found starting with element 'd:skin'. No child element is expected at this point. (and so on for the remaining system images)

Solution:

Replace the devices.xml files under

D:\Android\adt-bundle-windows-x86_64-20140702\sdk\system-images\android-22\android-wear\armeabi-v7a and

D:\Android\adt-bundle-windows-x86_64-20140702\sdk\system-images\android-22\android-wear\x86

with the devices.xml file from D:\Android\adt-bundle-windows-x86_64-20140702\sdk\tools\lib, then restart Eclipse.
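If you would rather script the replacement than copy the files by hand, here is a minimal Python sketch. It assumes the SDK root D:\Android\adt-bundle-windows-x86_64-20140702\sdk taken from the error messages above (adjust SDK_ROOT for your own installation) and backs up each broken devices.xml before overwriting it.

```python
import shutil
from pathlib import Path

# Assumed SDK root, taken from the error messages above; adjust for your install.
SDK_ROOT = Path(r"D:\Android\adt-bundle-windows-x86_64-20140702\sdk")

# Known-good devices.xml that ships with the SDK tools.
GOOD_DEVICES_XML = SDK_ROOT / "tools" / "lib" / "devices.xml"

# System-image directories whose devices.xml fails to parse.
BROKEN_DIRS = [
    SDK_ROOT / "system-images" / "android-22" / "android-wear" / "armeabi-v7a",
    SDK_ROOT / "system-images" / "android-22" / "android-wear" / "x86",
]

for image_dir in BROKEN_DIRS:
    target = image_dir / "devices.xml"
    if target.exists():
        # Keep a backup of the original file before overwriting it.
        shutil.copy2(target, target.with_name(target.name + ".bak"))
    shutil.copy2(GOOD_DEVICES_XML, target)
    print(f"Replaced {target}")
```

After the script runs, restart Eclipse and the parse errors should be gone.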

