Deep Learning Troubleshooting: Check failed: stream->parent()->GetConvolveAlgorithms( conv_parameters.ShouldIncludeWinogradNonfusedAlgo(), &algorithms)

Preface

Deep learning is not only hard in theory; in practice it is also full of pitfalls of every kind.

The Problem

2018-07-05 14:39:19.028426: E T:\src\github\tensorflow\tensorflow\stream_executor\cuda\cuda_dnn.cc:455] could not create cudnn handle: CUDNN_STATUS_ALLOC_FAILED
2018-07-05 14:39:19.033177: F T:\src\github\tensorflow\tensorflow\core\kernels\conv_ops.cc:713] Check failed: stream->parent()->GetConvolveAlgorithms( conv_parameters.ShouldIncludeWinogradNonfusedAlgo(), &algorithms)

Solution

Several possible fixes are suggested online, such as upgrading the cuDNN version, among other things.
The simplest fix I have found so far is to reboot the machine: you may have too many other programs open, leaving no free GPU memory. The first log line above (`CUDNN_STATUS_ALLOC_FAILED`) says exactly that — cuDNN could not allocate the memory it needs to create its handle.
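If rebooting is inconvenient, a workaround that is commonly suggested for this error is to stop TensorFlow from pre-allocating nearly all GPU memory at session creation. The sketch below uses the TensorFlow 1.x API, matching the 2018-era log above; it is a minimal example, not taken from the original post.

```python
import tensorflow as tf

# Let TensorFlow allocate GPU memory on demand instead of grabbing
# almost all of it up front, so cuDNN can still create its handle.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True

# Alternatively, cap the fraction of GPU memory this process may claim:
# config.gpu_options.per_process_gpu_memory_fraction = 0.7

sess = tf.Session(config=config)
```

Before rebooting, you can also run `nvidia-smi` in a terminal to see which processes are currently holding GPU memory and kill the ones you no longer need.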
