CUDNN_STATUS_INTERNAL_ERROR / ResourceExhaustedError

When running a TensorFlow program from PyCharm, you may run into CUDA- and cuDNN-related errors such as a resource-exhausted (OOM) error. This post shows what these errors look like and gives one fix: check GPU memory usage and free it.


When running the program from PyCharm, the following error appears:
tensorflow/stream_executor/cuda/cuda_dnn.cc:503] could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR
2018-08-06 12:38:55.926801: F tensorflow/core/kernels/conv_ops.cc:713] Check failed: stream->parent()->GetConvolveAlgorithms( conv_parameters.ShouldIncludeWinogradNonfusedAlgo(), &algorithms)
or:
ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[36,64,41,41] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[Node: Conv2D = Conv2D[T=DT_FLOAT, data_format="NCHW", dilations=[1, 1, 1, 1], padding="SAME", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](Conv2D-0-TransposeNHWCToNCHW-LayoutOptimizer, w_start/read)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.

 [[Node: add_19/_7 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_246_add_19", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]

Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
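
The hint in the log can be acted on directly. Below is a minimal sketch, assuming TensorFlow 1.x (matching the 2018-era log above); the graph, the variable name w_start, and the tensor shapes are stand-ins echoing the log, not the original code. Passing a tf.RunOptions with report_tensor_allocations_upon_oom=True to Session.run makes TensorFlow list the live tensor allocations when an OOM occurs:

import numpy as np
import tensorflow as tf  # TensorFlow 1.x

# Stand-in graph; any model would do. Shapes echo the log above.
x = tf.placeholder(tf.float32, shape=[None, 41, 41, 64])
w = tf.get_variable("w_start", shape=[3, 3, 64, 64])
y = tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding="SAME")

# Ask TensorFlow to dump the currently allocated tensors if this run OOMs.
run_options = tf.RunOptions(report_tensor_allocations_upon_oom=True)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(y, options=run_options,
             feed_dict={x: np.zeros([36, 41, 41, 64], np.float32)})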
Solution: open the NVIDIA X Server Settings window and check Used Dedicated Memory under GPU 0 - (GeForce GTX 1080 Ti); the same figure is reported by nvidia-smi on the command line. If it shows around 96% in use, the GPU's memory is nearly exhausted; rebooting the machine frees it and clears the error.
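
If a reboot is inconvenient, a commonly used mitigation (an addition here, not part of the original fix) is to stop TensorFlow 1.x from reserving nearly all GPU memory at startup, which by itself can trigger CUDNN_STATUS_INTERNAL_ERROR when another process already holds most of the card. A hedged sketch:

import tensorflow as tf  # TensorFlow 1.x

config = tf.ConfigProto()
# Allocate GPU memory on demand instead of claiming almost all of it up front.
config.gpu_options.allow_growth = True
# Or cap this process at a fixed share of the GPU's memory, e.g. 50%:
# config.gpu_options.per_process_gpu_memory_fraction = 0.5

sess = tf.Session(config=config)

Note that allow_growth only changes how memory is allocated; if the model genuinely needs more memory than the GPU has, the OOM above will still occur and the batch size or model must shrink.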
