Keras: extracting intermediate-layer features and visualizing them

The Keras code is as follows (ha, my code is a bit messy, so I won't expand some of the helper functions here):

    # Imports (Keras 2 style); the helpers _conv_bn_relu, _bn_relu and
    # _residual_block1/2/3, plus nb_filter, num_outputs, CHANNEL_AXIS,
    # CONV_DIM1-3, patience, batch_size and nb_epoch, are defined elsewhere.
    from keras.layers import Input, BatchNormalization, Activation, AveragePooling3D, Flatten, Dropout, Dense
    from keras.models import Model
    from keras.optimizers import RMSprop
    import keras.callbacks as kcallbacks

    # 15 x 15 x 200 x 1 input patch; the batch dimension is not part of shape.
    input = Input(shape=(15, 15, 200, 1))
    conv1 = _conv_bn_relu(nb_filter=64, kernel_dim1=3, kernel_dim2=3, kernel_dim3=input._keras_shape[3], subsample=(1, 1, 1))(input)
    # Three residual blocks; their names ('spa1'/'spa2'/'spa3') are used later to extract intermediate features.
    spa1_output = _residual_block1(nb_filter=nb_filter, repetitions=1, is_first_layer=True, name='spa1')(conv1)
    spa2_output = _residual_block2(nb_filter=nb_filter, repetitions=1, is_first_layer=True, name='spa2')(spa1_output)
    spa3_output = _residual_block3(nb_filter=nb_filter, repetitions=1, is_first_layer=True, name='spa3')(spa2_output)
    block = _bn_relu(spa3_output)
    block_norm = BatchNormalization(axis=CHANNEL_AXIS)(block)
    block_output = Activation("relu")(block_norm)
    # Global average pooling over the spatial/spectral dimensions, then the classifier head.
    pool = AveragePooling3D(pool_size=(block._keras_shape[CONV_DIM1], block._keras_shape[CONV_DIM2], block._keras_shape[CONV_DIM3]), strides=(1, 1, 1))(block_output)
    flatten1 = Flatten()(pool)
    drop1 = Dropout(0.5)(flatten1)
    dense = Dense(units=num_outputs, activation="softmax", kernel_initializer="he_normal")(drop1)
    model1 = Model(inputs=input, outputs=dense)
    model1.compile(loss='categorical_crossentropy', optimizer=RMSprop(lr=0.0003), metrics=['accuracy'])
    # Checkpoint the best weights (by validation loss) and stop early when it stops improving.
    best_weights_RES_path_ss4 = 'models/Indian_best_1.hdf5'
    earlyStopping6 = kcallbacks.EarlyStopping(monitor='val_loss', patience=patience, verbose=1, mode='auto')
    saveBestModel6 = kcallbacks.ModelCheckpoint(best_weights_RES_path_ss4, monitor='val_loss', verbose=1, save_best_only=True, mode='auto')
    # Single-input model, so validation_data also gets a single reshaped array.
    model1.fit(x=x_train2.reshape(x_train2.shape[0], x_train2.shape[1], x_train2.shape[2], x_train2.shape[3], 1),
               y=y_train,
               validation_data=(x_val2.reshape(x_val2.shape[0], x_val2.shape[1], x_val2.shape[2], x_val2.shape[3], 1), y_val),
               batch_size=batch_size, nb_epoch=nb_epoch, shuffle=True,
               callbacks=[earlyStopping6, saveBestModel6])
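
The _bn_relu, _conv_bn_relu and _residual_block helpers are the functions I said I wouldn't expand. For readers who just want the general pattern, here is a minimal sketch of a Conv3D -> BatchNorm -> ReLU factory in Keras 2 syntax; it is an illustration only, not my exact implementation, and the padding, initializer and residual-block internals are assumptions:

    # A minimal sketch of the usual Conv3D -> BatchNorm -> ReLU factory pattern.
    # Not the exact helpers used above; padding/initializer choices are assumptions.
    from keras.layers import Conv3D, BatchNormalization, Activation

    def _bn_relu_sketch(x, channel_axis=-1):
        """BatchNorm followed by ReLU (channels-last assumed)."""
        x = BatchNormalization(axis=channel_axis)(x)
        return Activation("relu")(x)

    def _conv_bn_relu_sketch(nb_filter, kernel_dim1, kernel_dim2, kernel_dim3, subsample=(1, 1, 1)):
        """Returns a closure that applies Conv3D, then BatchNorm + ReLU."""
        def f(x):
            x = Conv3D(filters=nb_filter,
                       kernel_size=(kernel_dim1, kernel_dim2, kernel_dim3),
                       strides=subsample,
                       padding="same",
                       kernel_initializer="he_normal")(x)
            return _bn_relu_sketch(x)
        return f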

Load the trained model and extract the features of the three intermediate layers:

    # Reload the best checkpoint and build three sub-models, one per residual block,
    # whose outputs are the intermediate feature maps.
    from keras.models import load_model

    model1_out = load_model('models/Indian_best_1.hdf5')
    spa1_model = Model(inputs=model1_out.inputs, outputs=model1_out.get_layer('spa1').output)
    spa2_model = Model(inputs=model1_out.inputs, outputs=model1_out.get_layer('spa2').output)
    spa3_model = Model(inputs=model1_out.inputs, outputs=model1_out.get_layer('spa3').output)
    # Run a single test sample through each sub-model (add batch and channel dimensions).
    sample = x_test2[1].reshape(1, x_test2[1].shape[0], x_test2[1].shape[1], x_test2[1].shape[2], 1)
    spa1_output = spa1_model.predict(sample)
    spa2_output = spa2_model.predict(sample)
    spa3_output = spa3_model.predict(sample)
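
get_layer('spa1') only works if the residual block really registered a layer under that name. If you are unsure what the blocks ended up being called, a quick sanity check is to print the loaded model's layer names before building the sub-models:

    # List layer names and output shapes of the loaded model before calling get_layer().
    from keras.models import load_model

    m = load_model('models/Indian_best_1.hdf5')
    for layer in m.layers:
        print(layer.name, layer.output_shape)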

Visualize the three intermediate layers, showing three feature maps per layer:

    import numpy as np
    import matplotlib.pyplot as plt

    plt.figure(figsize=(3, 3))
    for i in range(3):
        # i-th channel of each block's output, min-max scaled to [0, 1] for display
        plt.subplot(3, 6, 6 * i + 1)
        spa1_img = spa1_output[0, :, :, 0, i]
        spa1_img = (spa1_img - np.min(spa1_img)) / (np.max(spa1_img) - np.min(spa1_img))
        plt.title("spa1")
        plt.imshow(spa1_img)

        plt.subplot(3, 6, 6 * i + 2)
        spa2_img = spa2_output[0, :, :, 0, i]
        spa2_img = (spa2_img - np.min(spa2_img)) / (np.max(spa2_img) - np.min(spa2_img))
        plt.title("spa2")
        plt.imshow(spa2_img)

        plt.subplot(3, 6, 6 * i + 3)
        spa3_img = spa3_output[0, :, :, 0, i]
        spa3_img = (spa3_img - np.min(spa3_img)) / (np.max(spa3_img) - np.min(spa3_img))
        plt.title("spa3")
        plt.imshow(spa3_img)

    plt.show()
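
One small caveat with the min-max scaling above: if a feature map is constant (e.g. all zeros after ReLU), the denominator is zero and the image comes out as NaN. A guarded version of the normalization looks like this (the helper name and epsilon value are my own choices):

    import numpy as np

    def normalize_map(fmap, eps=1e-8):
        """Min-max scale a 2-D feature map to [0, 1], guarding against constant maps."""
        fmin, fmax = np.min(fmap), np.max(fmap)
        if fmax - fmin < eps:
            return np.zeros_like(fmap)  # constant map: display as all zeros instead of NaN
        return (fmap - fmin) / (fmax - fmin)

    # e.g. spa1_img = normalize_map(spa1_output[0, :, :, 0, i])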