nvtop build error during install: fatal error: linux/kcmp.h: No such file or directory

The error occurs when running make while building nvtop.
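
For context, a typical from-source build of nvtop follows the standard CMake out-of-tree flow; the sketch below assumes the steps from the nvtop README, and the failure shows up at the make step:

git clone https://github.com/Syllo/nvtop.git
cd nvtop && mkdir build && cd build
cmake ..    # default configuration enables AMD GPU support
make        # fails here: fatal error: linux/kcmp.h: No such file or directory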

My machine has an NVIDIA GPU and does not need AMD GPU support, so the AMD-related option can be disabled. (The AMD GPU support code is the part of nvtop that includes the kernel header linux/kcmp.h, so dropping it sidesteps the error on systems whose installed kernel headers do not ship kcmp.h.)

cmake .. -DNVIDIA_SUPPORT=ON -DAMDGPU_SUPPORT=OFF -DINTEL_SUPPORT=ON
Then run make again and the build succeeds.
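
If you also want nvtop installed system-wide, the usual CMake install step applies (the install may require root):

sudo make install
nvtop    # launch to verify your NVIDIA GPU is detected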
