Visualization

https://blog.youkuaiyun.com/kaixinjiuxing666/article/details/81004010

https://www.aiuai.cn/aifarm646.html#2.TensorBoardX-Graph%E5%8F%AF%E8%A7%86%E5%8C%96

https://blog.youkuaiyun.com/bigbennyguo/article/details/87956434#_add__50 

 

https://github.com/DetectionTeamUCAS/NAS_FPN_Tensorflow/blob/168476e309d6db23f9d45fbbb052e037b9ebfd2f/data/lib_coco/PythonAPI/pycocoDemo.ipynb

For browsing COCO annotation files.
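A minimal hedged sketch of what that notebook does with pycocotools (the annotation path and image source below are placeholders):

# Browse a COCO-style annotation file and overlay its annotations on an image.
import matplotlib.pyplot as plt
import skimage.io as io
from pycocotools.coco import COCO

coco = COCO('annotations/instances_val2017.json')    # placeholder annotation file
img_id = coco.getImgIds()[0]                         # pick an arbitrary image id
img_info = coco.loadImgs(img_id)[0]
img = io.imread(img_info['coco_url'])                # or read file_name from a local image dir

anns = coco.loadAnns(coco.getAnnIds(imgIds=img_id))
plt.imshow(img)
coco.showAnns(anns)                                  # draw segmentation/keypoint overlays
plt.show()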

2. Plotting activations

Used to inspect the distribution of layer activations and weights in a deep network, e.g. to catch vanishing gradients.

for name, param in model.named_parameters():
    # one histogram per parameter tensor, logged each epoch
    writer.add_histogram(
        name, param.clone().cpu().data.numpy(), epoch_index)
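A minimal end-to-end sketch, assuming tensorboardX and a stand-in model (the log directory and model here are placeholders):

# Hedged sketch: log weight (and gradient) histograms per epoch with tensorboardX.
import torch.nn as nn
from tensorboardX import SummaryWriter

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))  # stand-in model
writer = SummaryWriter('./runs/histograms')                            # placeholder log dir

for epoch_index in range(5):
    # ... one epoch of training would go here ...
    for name, param in model.named_parameters():
        writer.add_histogram(name, param.clone().cpu().data.numpy(), epoch_index)
        if param.grad is not None:   # gradients exist only after backward()
            writer.add_histogram(name + '/grad', param.grad.clone().cpu().data.numpy(), epoch_index)
writer.close()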

Some comments on Speed

  • Matplotlib operations can be very slow because Matplotlib runs in Python rather than native code, so watch out for runtime speed. There is still room for improvement, which will be addressed in the near future.

  • Moreover, it might also be a good idea to draw plots from the main code (rather than having a TF op) and add them as image summaries. Please use this library at your own discretion.

Thread-safety issue

Please use object-oriented matplotlib APIs (e.g. Figure, AxesSubplot) instead of pyplot APIs (i.e. matplotlib.pyplot or plt.XXX()) when creating and drawing plots. This is because pyplot APIs are not thread-safe, while the TensorFlow plot operations are usually executed in a multi-threaded manner.

For example, avoid any use of pyplot (or plt):

# DON'T DO LIKE THIS !!!
def figure_heatmap(heatmap):
    fig = plt.figure()                 # <--- NO!
    plt.imshow(heatmap)
    return fig

and do it like:

def figure_heatmap(heatmap):
    fig = matplotlib.figure.Figure()   # or just `fig = tfplot.Figure()`
    ax = fig.add_subplot(1, 1, 1)      # ax: AxesSubplot
    # or, just `fig, ax = tfplot.subplots()`
    ax.imshow(heatmap)
    return fig                         # fig: Figure

For example, tfplot.subplots() is a good replacement for plt.subplots() to use inside plot functions. Alternatively, you can just take advantage of automatic injection of fig and/or ax.
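For instance, a small sketch of the fig/ax injection, assuming the tfplot.autowrap decorator available in recent tensorflow-plot releases (the input tensor below is a placeholder):

# Sketch: tfplot.autowrap creates a Figure/AxesSubplot pair and injects them
# into the wrapped function, so no global pyplot state is touched.
import tensorflow as tf
import tfplot

@tfplot.autowrap(figsize=(3, 3))
def plot_heatmap(data, *, fig, ax):          # fig and ax are injected by tfplot
    im = ax.imshow(data, cmap='jet')
    fig.colorbar(im)

heatmap_tensor = tf.random_normal([16, 16])  # placeholder input
plot_op = plot_heatmap(heatmap_tensor)       # an image tensor of shape [H, W, 4]
tf.summary.image('heatmap', tf.expand_dims(plot_op, 0))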

def add_heatmap(feature_maps, name):
    '''
    Sum a feature map over channels and log it as a heatmap summary.
    :param feature_maps: [B, H, W, C] tensor
    :param name: summary name
    '''
    # requires: import tensorflow as tf; import tfplot as tfp

    def figure_attention(activation):
        fig, ax = tfp.subplots()
        im = ax.imshow(activation, cmap='jet')
        fig.colorbar(im)
        return fig

    heatmap = tf.reduce_sum(feature_maps, axis=-1)
    heatmap = tf.squeeze(heatmap, axis=0)
    tfp.summary.plot(name, figure_attention, [heatmap])
def make_r_gt_mask(fet_h, fet_w, img_h, img_w, gtboxes):
    '''
    Rasterize rotated ground-truth boxes into an integer label mask at
    feature-map resolution (fet_h, fet_w), given the image size (img_h, img_w).
    back_forward_convert is a project helper that converts 8-point boxes to
    [x, y, w, h, theta, label]; numpy (np) and cv2 are required.
    '''
    assert gtboxes.shape[1] == 8, "gtboxes shape should be (-1, 8)"
    gtboxes = back_forward_convert(gtboxes)
    gtboxes = np.reshape(gtboxes, [-1, 6])  # [x, y, w, h, theta, label]

    areas = gtboxes[:, 2] * gtboxes[:, 3]
    arg_areas = np.argsort(-1 * areas)  # sort from large to small so small boxes are drawn last
    gtboxes = gtboxes[arg_areas]

    fet_h, fet_w = int(fet_h), int(fet_w)
    mask = np.zeros(shape=[fet_h, fet_w], dtype=np.int32)
    for a_box in gtboxes:
        # recover the 4 corner points of the rotated box
        box = cv2.boxPoints(((a_box[0], a_box[1]), (a_box[2], a_box[3]), a_box[4]))
        box = np.reshape(box, [-1, ])
        label = a_box[-1]
        new_box = []
        # rescale corner coordinates from image space to feature-map space
        for i in range(8):
            if i % 2 == 0:
                x = box[i]
                new_box.append(int(x * fet_w / float(img_w)))
            else:
                y = box[i]
                new_box.append(int(y * fet_h / float(img_h)))

        new_box = np.int0(new_box).reshape([4, 2])
        color = int(label)
        # fill the polygon with the class label as the "color"
        cv2.fillConvexPoly(mask, new_box, color=color)
    return mask

def vis_mask_tfsmry(mask, name):
    '''
    Log an [H, W] mask tensor (not a numpy array) as a heatmap summary.
    '''

    def figure_attention(activation):
        fig, ax = tfp.subplots()
        im = ax.imshow(activation, cmap='jet')
        fig.colorbar(im)
        return fig

    heatmap = mask * 10  # scale labels so classes are visually distinguishable

    tfp.summary.plot(name, figure_attention, [heatmap])
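A hedged sketch of how the two helpers can be glued together in TF 1.x, wrapping the numpy mask builder with tf.py_func (the shapes and tensors below are placeholders for illustration):

# Sketch: build the rotated-gt mask with numpy/cv2 via tf.py_func,
# then log it with the tfplot-based helper above.
import tensorflow as tf

fet_h, fet_w = tf.constant(64), tf.constant(64)      # placeholder feature-map size
img_h, img_w = tf.constant(512), tf.constant(512)    # placeholder image size
gtboxes = tf.placeholder(tf.float32, [None, 8])      # placeholder 8-point rotated boxes

r_mask = tf.py_func(make_r_gt_mask,
                    inp=[fet_h, fet_w, img_h, img_w, gtboxes],
                    Tout=tf.int32)
r_mask.set_shape([None, None])                       # [fet_h, fet_w]
vis_mask_tfsmry(r_mask, name='Compare/r_gt_mask')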

 https://github.com/DetectionTeamUCAS/NAS_FPN_Tensorflow/blob/168476e309d6db23f9d45fbbb052e037b9ebfd2f/data/lib_coco/PythonAPI/pycocotools/coco.py

cv2.imshow("test", img)

 

cv2.waitKey(0)

 

def test_plt():
    import numpy as np
    import matplotlib.pyplot as plt
    a = np.random.rand(20, 30)
    print(a.shape)
    plt.imshow(a)
    plt.show()
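The same kind of figure can also go to TensorBoard instead of a window, for example via tensorboardX's add_figure (a hedged sketch; the log directory and tag are placeholders):

# Sketch: log a matplotlib Figure to TensorBoard with tensorboardX.
import numpy as np
from matplotlib.figure import Figure
from tensorboardX import SummaryWriter

fig = Figure()                      # object-oriented API, no pyplot state
ax = fig.add_subplot(1, 1, 1)
ax.imshow(np.random.rand(20, 30), cmap='jet')

writer = SummaryWriter('./runs/figures')
writer.add_figure('random_heatmap', fig, global_step=0)
writer.close()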

# Snippets from the training code: logging drawn-box images to TensorBoard.
tf.summary.image('positive_anchor', pos_in_img)
tf.summary.image('negative_anchors', neg_in_img)

tf.summary.image('Compare/gtboxes_gpu:%d' % i, gtboxes_in_img)

if cfgs.ADD_BOX_IN_TENSORBOARD:
    detections_in_img = show_box_in_tensor.draw_boxes_with_categories_and_scores(
        img_batch=img,
        boxes=outputs[0],
        scores=outputs[1],
        labels=outputs[2])
    tf.summary.image('Compare/final_detection_gpu:%d' % i, detections_in_img)

7. tf.summary.merge_all

merge_all collects all summaries so they can be written to disk and displayed by TensorBoard. Unless you have special requirements, this one call is enough to show the usual training information. Signature: tf.summary.merge_all(key=tf.GraphKeys.SUMMARIES)

8. tf.summary.FileWriter

Specifies a file used to save the graph. Signature: tf.summary.FileWriter(path, sess.graph)

Its add_summary() method writes the training data into the file specified by the FileWriter.

Example of TensorFlow Summary usage:

tf.summary.scalar('accuracy', acc)                      # scalar summary for the accuracy
merge_summary = tf.summary.merge_all()
train_writer = tf.summary.FileWriter(dir, sess.graph)   # dir is the directory the summaries are written to
# ...... (define cross-entropy, optimizer, etc.)
for step in xrange(training_step):                      # training loop
    train_summary = sess.run(merge_summary, feed_dict={...})  # run the graph to produce one step of summary data
    train_writer.add_summary(train_summary, step)       # write the summary data together with the step number

Then launch TensorBoard:

tensorboard --logdir=/summary_dir

and the accuracy curve becomes visible.

If you don't want to save every summary you defined, you can instead use tf.summary.merge to save information selectively:

9. tf.summary.merge

Signature: tf.summary.merge(inputs, collections=None, name=None)

Selecting which summaries to keep usually also involves tf.get_collection().

Example:

tf.summary.scalar('accuracy', acc)                      # scalar summary for the accuracy
merge_summary = tf.summary.merge(
    [tf.get_collection(tf.GraphKeys.SUMMARIES, 'accuracy'), ...])  # ... = other summaries to display
train_writer = tf.summary.FileWriter(dir, sess.graph)   # dir is the directory the summaries are written to
# ...... (define cross-entropy, optimizer, etc.)
for step in xrange(training_step):                      # training loop
    train_summary = sess.run(merge_summary, feed_dict={...})  # run the graph to produce one step of summary data
    train_writer.add_summary(train_summary, step)       # write the summary data together with the step number

Here tf.get_collection filters the accuracy summaries out of the graph's summary collection; tf.GraphKeys.SUMMARIES is the collection key under which summaries are registered.

Of course, you can also write it directly as:

acc_summary = tf.summary.scalar('accuracy', acc)        # scalar summary for the accuracy
merge_summary = tf.summary.merge([acc_summary, ...])    # the list brackets are required; ... = other summaries to display
