Debugging Anomalies in Neural Networks

NaN Value Anomalies

  1. When NaN values appear during training, there are two likely causes: exploding gradients and exploding weight parameters. Parameter values can grow large enough to overflow because of:

    • A learning rate that is too large.
    • A poorly designed model architecture, e.g. normalization missing along the data path, so that parameter values keep growing. This can be mitigated by adding proper input normalization and weight decay (a minimal sketch follows below).
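      Both mitigations are straightforward in PyTorch. The sketch below assumes an arbitrary toy model, feature scale, and hyperparameters (none of them come from the original setup); gradient clipping is included as a further common guard against the exploding-gradient case:

      import torch
      import torch.nn as nn

      # Toy model and optimizer; weight_decay pulls parameters back towards zero.
      model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 1))
      optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-4)

      x = torch.randn(32, 64) * 50 + 10                # raw features on a large scale
      x = (x - x.mean(dim=0)) / (x.std(dim=0) + 1e-6)  # standardize inputs before the network

      loss = model(x).pow(2).mean()
      loss.backward()
      torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)  # limit the gradient norm
      optimizer.step()
      optimizer.zero_grad()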
  2. Debugging approach

    • Save the model weights once the anomaly appears. In PyTorch, for example, use torch.save(model.state_dict(), path); a training-loop sketch is shown below.
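      A guard inside the training loop can dump a checkpoint the moment the loss becomes non-finite. This is only a sketch: the tiny model, batch, and output path are placeholders, not names from the original code.

      import torch
      import torch.nn as nn

      model = nn.Linear(8, 1)                        # placeholder model
      optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
      batch_x, batch_y = torch.randn(4, 8), torch.randn(4, 1)

      loss = nn.functional.mse_loss(model(batch_x), batch_y)
      if not torch.isfinite(loss):
          # Dump the current (already broken) weights for offline inspection.
          torch.save(model.state_dict(), 'runs/debug/nan_model.pth')
          raise RuntimeError('non-finite loss, checkpoint saved for inspection')
      loss.backward()
      optimizer.step()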
    • Print basic statistics (mean, std, max, min) for every parameter tensor in the model:
      import torch

      model = ModelTransformer()  # the model class used during training
      model.load_state_dict(torch.load('runs/trans-col-64-32/models/model_380.pth')['actor'])

      # Print mean / std / max / min for every parameter and buffer in the state dict.
      for k, v in model.state_dict().items():
          try:
              print(k, v.mean(), v.std(), v.max(), v.min())
          except RuntimeError:
              # Integer buffers such as num_batches_tracked have no mean/std.
              print(k, v)
      
    • This produces output like the following, from which it is immediately obvious which layers have overly large or NaN parameter values (note, for example, the very large BatchNorm running_mean/running_var values in several layers). Ideally a network's parameters should have zero mean and small variance; such a network is stable and robust.
      pos_embedding tensor(0.0371) tensor(1.0174) tensor(4.3571) tensor(-3.5060)
      cnn_in.weight tensor(-0.0011) tensor(0.3637) tensor(0.9462) tensor(-0.9275)
      cnn_in.bias tensor(-0.0642) tensor(0.3133) tensor(0.7116) tensor(-0.6548)
      backbone.1.qurey.weight tensor(0.0087) tensor(0.2556) tensor(0.7741) tensor(-0.9078)
      backbone.1.qurey.bias tensor(0.0074) tensor(0.3354) tensor(0.6190) tensor(-0.5703)
      backbone.1.key.weight tensor(-0.0115) tensor(0.2779) tensor(0.7354) tensor(-0.8145)
      backbone.1.key.bias tensor(0.0117) tensor(0.1524) tensor(0.3371) tensor(-0.3488)
      backbone.1.value.weight tensor(-0.0059) tensor(0.2957) tensor(0.9224) tensor(-1.1774)
      backbone.1.value.bias tensor(-0.0639) tensor(0.4249) tensor(0.7377) tensor(-0.7813)
      backbone.1.attention_output.0.weight tensor(0.0218) tensor(0.4867) tensor(2.1043) tensor(-1.9912)
      backbone.1.attention_output.0.bias tensor(0.1661) tensor(0.3701) tensor(1.5011) tensor(-0.3917)
      backbone.1.attention_bn.weight tensor(0.9496) tensor(0.2250) tensor(1.2941) tensor(0.4144)
      backbone.1.attention_bn.bias tensor(-0.2649) tensor(0.4842) tensor(0.6520) tensor(-0.7989)
      backbone.1.attention_bn.running_mean tensor(6.6721) tensor(14.0992) tensor(42.8759) tensor(-5.5171)
      backbone.1.attention_bn.running_var tensor(113.9528) tensor(136.3889) tensor(412.0240) tensor(16.5173)
      backbone.1.attention_bn.num_batches_tracked tensor(129409)
      backbone.1.intermediate.0.weight tensor(0.0136) tensor(0.2670) tensor(0.8646) tensor(-0.9685)
      backbone.1.intermediate.0.bias tensor(-0.1946) tensor(0.3054) tensor(0.4383) tensor(-0.7515)
      backbone.1.intermediate.2.weight tensor(0.0348) tensor(0.2384) tensor(1.4473) tensor(-0.7512)
      backbone.1.intermediate.2.bias tensor(0.0775) tensor(0.5756) tensor(1.1056) tensor(-0.6796)
      backbone.1.intermediate_bn.weight tensor(0.9468) tensor(0.1547) tensor(1.2045) tensor(0.6034)
      backbone.1.intermediate_bn.bias tensor(-0.4938) tensor(0.5198) tensor(0.5886) tensor(-1.5009)
      backbone.1.intermediate_bn.running_mean tensor(5.1185) tensor(8.9794) tensor(24.0950) tensor(-9.7203)
      backbone.1.intermediate_bn.running_var tensor(1011.0563) tensor(470.2828) tensor(2713.0378) tensor(358.0752)
      backbone.1.intermediate_bn.num_batches_tracked tensor(129409)
      backbone.2.qurey.weight tensor(-0.0056) tensor(0.2470) tensor(0.8004) tensor(-0.7559)
      backbone.2.qurey.bias tensor(-0.0096) tensor(0.3295) tensor(0.5715) tensor(-0.6327)
      backbone.2.key.weight tensor(0.0006) tensor(0.2258) tensor(0.6901) tensor(-0.6097)
      backbone.2.key.bias tensor(-0.0332) tensor(0.0848) tensor(0.1300) tensor(-0.1770)
      backbone.2.value.weight tensor(-0.0007) tensor(0.2527) tensor(0.7201) tensor(-0.6011)
      backbone.2.value.bias tensor(-0.1045) tensor(0.6748) tensor(1.2054) tensor(-1.1653)
      backbone.2.attention_output.0.weight tensor(-0.0341) tensor(0.4446) tensor(1.7916) tensor(-1.6967)
      backbone.2.attention_output.0.bias tensor(0.3634) tensor(0.4208) tensor(1.2611) tensor(-0.2389)
      backbone.2.attention_bn.weight tensor(0.9402) tensor(0.2437) tensor(1.2320) tensor(0.3045)
      backbone.2.attention_bn.bias tensor(-0.5632) tensor(0.3505) tensor(0.2333) tensor(-1.1272)
      backbone.2.attention_bn.running_mean tensor(12.4778) tensor(18.1782) tensor(54.2862) tensor(-11.5848)
      backbone.2.attention_bn.running_var tensor(10674.0029) tensor(16083.5127) tensor(66667.5625) tensor(103.4917)
      backbone.2.attention_bn.num_batches_tracked tensor(129409)
      backbone.2.intermediate.0.weight tensor(-0.0403) tensor(0.2965) tensor(1.0515) tensor(-1.2477)
      backbone.2.intermediate.0.bias tensor(0.1615) tensor(0.3484) tensor(1.0486) tensor(-0.5243)
      backbone.2.intermediate.2.weight tensor(0.1763) tensor(0.5655) tensor(3.5050) tensor(-0.6989)
      backbone.2.intermediate.2.bias tensor(0.1358) tensor(0.3553) tensor(1.0326) tensor(-0.5061)
      backbone.2.intermediate_bn.weight tensor(0.9396) tensor(0.1858) tensor(1.2129) tensor(0.5276)
      backbone.2.intermediate_bn.bias tensor(-0.9594) tensor(0.8759) tensor(0.4090) tensor(-2.3180)
      backbone.2.intermediate_bn.running_mean tensor(132.7210) tensor(271.3183) tensor(909.9211) tensor(-134.1343)
      backbone.2.intermediate_bn.running_var tensor(50689.2031) tensor(101693.7891) tensor(385731.1875) tensor(2360.2285)
      backbone.2.intermediate_bn.num_batches_tracked tensor(129409)
      backbone.3.qurey.weight tensor(-0.0111) tensor(0.1615) tensor(0.4648) tensor(-0.5382)
      backbone.3.qurey.bias tensor(0.0311) tensor(0.2287) tensor(0.5492) tensor(-0.3744)
      backbone.3.key.weight tensor(-0.0083) tensor(0.1695) tensor(0.5950) tensor(-0.5998)
      backbone.3.key.bias tensor(-0.0002) tensor(0.0939) tensor(0.1795) tensor(-0.1377)
      backbone.3.value.weight tensor(-0.0010) tensor(0.2756) tensor(0.7965) tensor(-0.7491)
      backbone.3.value.bias tensor(0.0012) tensor(0.5639) tensor(1.2466) tensor(-0.9816)
      backbone.3.attention_output.0.weight tensor(-0.0028) tensor(0.3888) tensor(1.6879) tensor(-1.6022)
      backbone.3.attention_output.0.bias tensor(0.2296) tensor(0.2128) tensor(0.6484) tensor(-0.3985)
      backbone.3.attention_bn.weight tensor(1.0290) tensor(0.1246) tensor(1.2858) tensor(0.7496)
      backbone.3.attention_bn.bias tensor(-0.1568) tensor(0.2669) tensor(0.3389) tensor(-0.7376)
      backbone.3.attention_bn.running_mean tensor(37.2192) tensor(40.5271) tensor(116.9668) tensor(-28.2566)
      backbone.3.attention_bn.running_var tensor(4652.9199) tensor(4654.4375) tensor(21961.5840) tensor(7.8488)
      backbone.3.attention_bn.num_batches_tracked tensor(129409)
      backbone.3.intermediate.0.weight tensor(-0.0591) tensor(0.2510) tensor(0.8361) tensor(-0.8860)
      backbone.3.intermediate.0.bias tensor(-0.1252) tensor(0.3657) tensor(0.9149) tensor(-0.8382)
      backbone.3.intermediate.2.weight tensor(0.0773) tensor(0.2543) tensor(1.0418) tensor(-0.9378)
      backbone.3.intermediate.2.bias tensor(-0.0811) tensor(0.4537) tensor(0.8454) tensor(-0.6803)
      backbone.3.intermediate_bn.weight tensor(0.0379) tensor(0.0266) tensor(0.0790) tensor(-0.0377)
      backbone.3.intermediate_bn.bias tensor(0.0056) tensor(0.0405) tensor(0.0874) tensor(-0.1089)
      backbone.3.intermediate_bn.running_mean tensor(26.1826) tensor(41.1835) tensor(83.8591) tensor(-45.0432)
      backbone.3.intermediate_bn.running_var tensor(8808.6387) tensor(7466.9263) tensor(24591.0547) tensor(128.9831)
      backbone.3.intermediate_bn.num_batches_tracked tensor(129409)
      ffn_back.weight tensor(-0.0009) tensor(0.0637) tensor(0.2218) tensor(-0.2111)
      ffn_back.bias tensor(0.0194) tensor(0.1394) tensor(0.2504) tensor(-0.2795)
      ffn_dense.weight tensor(-0.0477) tensor(1.1556) tensor(2.5575) tensor(-2.5009)
      ffn_dense.bias tensor(0.1052) tensor(0.2800) tensor(0.6631) tensor(-0.2994)
      header.0.weight tensor(0.0568) tensor(0.2091) tensor(3.5681) tensor(-0.2342)
      header.0.bias tensor(0.0098) tensor(0.1676) tensor(0.5938) tensor(-0.4586)
      header.1.weight tensor(0.0013) tensor(0.1546) tensor(1.6465) tensor(-1.7266)
      header.1.bias tensor(-0.0439) tensor(0.1949) tensor(0.4953) tensor(-0.5576)
      header.4.weight tensor(0.8643) tensor(0.2663) tensor(1.7314) tensor(0.0686)
      header.4.bias tensor(0.3132) tensor(0.2266) tensor(0.8712) tensor(-0.4215)
      header.4.running_mean tensor(49.2757) tensor(43.4837) tensor(147.0984) tensor(-0.9270)
      header.4.running_var tensor(6590.4995) tensor(7323.5889) tensor(27443.1035) tensor(0.0560)
      header.4.num_batches_tracked tensor(129409)
      header.5.weight tensor(-0.1006) tensor(0.2805) tensor(1.1018) tensor(-1.4342)
      header.5.bias tensor(-0.0934) tensor(0.2386) tensor(0.5517) tensor(-0.6286)
      header.8.weight tensor(0.0904) tensor(0.2442) tensor(0.7484) tensor(-0.5556)
      header.8.bias tensor(-0.0026) tensor(0.0796) tensor(0.2316) tensor(-0.3314)
      header.8.running_mean tensor(2.3733) tensor(2.3534) tensor(8.0551) tensor(-0.1808)
      header.8.running_var tensor(35.5077) tensor(40.9508) tensor(189.1192) tensor(0.0112)
      header.8.num_batches_tracked tensor(129405)
      out.weight tensor(-0.0006) tensor(0.1657) tensor(0.5594) tensor(-0.5976)
      out.bias tensor(-0.0328) tensor(0.2153) tensor(0.2697) tensor(-0.3124)
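    • Rather than eyeballing roughly a hundred rows, the same loop can flag suspicious tensors automatically. A minimal sketch; the helper name and thresholds are arbitrary assumptions, not part of the original post:

      import torch

      def flag_suspicious(state_dict, std_threshold=10.0, abs_threshold=100.0):
          """Print parameters/buffers that contain NaN/Inf or look 'exploded'."""
          for name, t in state_dict.items():
              if not torch.is_floating_point(t):
                  continue  # skip integer buffers such as num_batches_tracked
              if torch.isnan(t).any() or torch.isinf(t).any():
                  print(name, 'contains NaN/Inf')
              elif t.numel() > 1 and t.std() > std_threshold:
                  print(name, 'std looks too large:', t.std().item())
              elif t.abs().max() > abs_threshold:
                  print(name, 'max |value| looks too large:', t.abs().max().item())

      flag_suspicious(model.state_dict())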

Dimension Anomalies

These errors typically show up in TensorFlow's static-graph framework. How to track them down:

  • Look at the op information in the error message, e.g. [[node Gradients//gradients_summary/sub_4_grad/BroadcastGradientArgs (defined at opt/tiger/tmp/forge/compiler/sail/reckon/scm/aml_lagrange_sail-2.2.0.23-1.15/venv/lib/python2.7/site-packages/tensorflow_core/python/framework/ops.py:1748) ]]. Here sub_4_grad means the gradient back-propagation of the op "sub_4" went wrong (see the sketch after this list).
  • Find the graph.readable file generated when the static graph is compiled and search for the failing op; for the error above, search for sub_4.
  • Follow the node's dependencies to determine which input has a shape that does not match expectations.
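It also helps to give key ops explicit names so that the node names in the error message (and in graph.readable) map directly back to your code. Below is a minimal TensorFlow 1.x sketch with made-up tensor shapes, not code from the original post, showing how a broadcasting mismatch only surfaces at run time under the op name you assigned:

    import numpy as np
    import tensorflow as tf  # TensorFlow 1.x static-graph API

    a = tf.placeholder(tf.float32, [None, None], name='a')
    b = tf.placeholder(tf.float32, [None, None], name='b')
    diff = tf.subtract(a, b, name='sub_4')  # this name appears in the error and in graph.readable
    loss = tf.reduce_sum(tf.square(diff))
    grads = tf.gradients(loss, [a, b])      # gradient graph contains sub_4_grad/BroadcastGradientArgs

    with tf.Session() as sess:
        # Shapes (2, 3) and (2, 4) cannot be broadcast together, so running the graph
        # fails and the error message points at the 'sub_4' node (or its gradient node),
        # which is the string to search for in graph.readable.
        sess.run(grads, feed_dict={a: np.ones((2, 3)), b: np.ones((2, 4))})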