[Chatting About TensorLayer, Part 1] A Neural Network with Dropout Layers on the MNIST Dataset in TensorLayer, with TensorBoard Visualization

This post walks through a hands-on example of TensorLayer, a high-level library built on TensorFlow, compares its two training styles, and shows in detail how to visualize training with TensorBoard.



Hi everyone. After a year of scrambling through my first year of grad school (as the first deep learning student in my lab, I spent a whole week just setting up a TensorFlow-GPU environment, with CUDA black screens and cuDNN version mismatches), I can finally say I've made it past the entry level. So, as a fellow beginner, let me recommend TensorLayer, a high-level library built on top of TensorFlow. It is genuinely powerful; installation and other housekeeping are easy to look up, so I won't cover them here. Some of the code and material below comes from the book 《深度学习:一起玩转TensorLayer》 (Deep Learning: Playing with TensorLayer), written by the library's creator. I found it pretty good overall, certainly better than the mindless books that pad their page counts with full code listings, though one flaw is that nearly half of it covers introductory deep learning material, which I don't think a reference for a high-level library needs. But I digress; let's look at the code (it's the author's source from GitHub, to which I added the TensorBoard visualization code and some comments of my own; experts can skip ahead):

import tensorflow as tf
import tensorlayer as tl
import time
from tensorlayer.layers import *
from tensorlayer import tl_logging as logging

X_train, y_train, X_val, y_val, X_test, y_test = tl.files.load_mnist_dataset(shape=(-1, 784))
sess = tf.InteractiveSession()

x = tf.placeholder(tf.float32, shape=[None, 784], name='x')
y_ = tf.placeholder(tf.int64, shape=[None, ], name='y_')

network = InputLayer(x, name='input')
network = DropoutLayer(network, keep=0.8, name='drop1')
network = DenseLayer(network, 800, tf.nn.relu, name='Dense1')
network = DropoutLayer(network, keep=0.5, name='drop2')
network = DenseLayer(network, 800, tf.nn.relu, name='Dense2')
network = DropoutLayer(network, keep=0.5, name='drop3')
network = DenseLayer(network, 10, tf.identity, name='output')
# Why tf.identity rather than softmax in the output layer: according to 《深度学习:一起玩转TensorLayer》, tl.cost.cross_entropy implements softmax internally
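# (An aside beyond the book's explanation: judging from the TensorLayer 1.x source,
#  tl.cost.cross_entropy is essentially a thin wrapper, roughly
#      tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y_, logits=y))
#  so adding a softmax on the output layer would apply softmax twice.)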

# Define the loss function and accuracy
y = network.outputs
y_op = tf.argmax(tf.nn.softmax(y), 1)
cost = tl.cost.cross_entropy(y, y_, name='entropy')
correct_prediction = tf.equal(tf.argmax(y, 1), y_)
acc = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

# Define the optimizer
train_params = network.all_params
train_op = tf.train.AdamOptimizer(learning_rate=0.001).minimize(cost, var_list=train_params)

# TensorBoard: record a histogram of every network parameter
if hasattr(tf, 'summary') and hasattr(tf.summary, 'histogram'):
    for param in network.all_params:
        logging.info('Param name %s' % param.name)
        tf.summary.histogram(param.name, param)
acc_sum = tf.summary.scalar('acc', acc)
cost_sum = tf.summary.scalar('cost', cost)
merged = tf.summary.merge_all()
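# merge_all bundles every summary op defined above into a single op, so one
# sess.run(merged, ...) below evaluates all the histograms and scalars at once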
train_writer = tf.summary.FileWriter('logs/train', sess.graph)
# tensorboard --logdir=C:\Users\Administrator\Desktop\TensorLayer_learn\logs\train
val_writer = tf.summary.FileWriter('logs/validation', sess.graph)
train_writer.add_graph(sess.graph)
val_writer.add_graph(sess.graph)

# Initialize the model parameters
tl.layers.initialize_global_variables(sess)

# 1. Using the data-iteration toolbox instead of the simplified fit/test API gives you real control over deep learning training
# Training hyperparameters
batch_size = 500
n_epoch = 100
print_freq = 5
tensor_board_train_index = 0
tensor_board_val_index = 0
for epoch in range(n_epoch):
    start_time = time.time()

    # Loop over minibatches for one epoch
    for X_train_a, y_train_a in tl.iterate.minibatches(X_train, y_train, batch_size, shuffle=True):
        feed_dict = {x: X_train_a, y_: y_train_a}
        # Enable dropout: feed each DropoutLayer's configured keep probability
        feed_dict.update(network.all_drop)
        sess.run(train_op, feed_dict=feed_dict)

    # Every print_freq epochs, evaluate on the training and validation sets
    if epoch + 1 == 1 or (epoch + 1) % print_freq == 0:
        # Print how long this epoch took
        print("Epoch %d of %d took %fs" % (epoch+1, n_epoch, time.time()-start_time))

        # Evaluate on the training set
        train_loss, train_acc, n_batch = 0, 0, 0
        for X_train_a, y_train_a in tl.iterate.minibatches(X_train, y_train, batch_size, shuffle=False):
            # Disable dropout: set every keep probability to 1
            dp_dict = tl.utils.dict_to_one(network.all_drop)
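            # (network.all_drop maps each DropoutLayer's keep-probability
            #  placeholder to its configured keep value; dict_to_one returns the
            #  same dict with every value set to 1.0, so nothing is dropped here)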
            feed_dict = {x: X_train_a, y_: y_train_a}
            feed_dict.update(dp_dict)
            err, ac = sess.run([cost, acc], feed_dict=feed_dict)
            train_loss += err
            train_acc += ac
            n_batch += 1
            ##################################
            result = sess.run(merged, feed_dict=feed_dict)
            train_writer.add_summary(result, tensor_board_train_index)
            tensor_board_train_index += 1
            ##################################
        print(" train loss: %f" % (train_loss / n_epoch))
        print(" train acc: %f" % (train_acc / n_epoch))
        # 用验证集做测试
        val_loss, val_acc, n_batch = 0, 0, 0
        for X_val_a, y_val_a in tl.iterate.minibatches(X_val, y_val, batch_size, shuffle=False):
            # Disable dropout
            dp_dict = tl.utils.dict_to_one(network.all_drop)
            feed_dict = {x: X_val_a, y_: y_val_a}
            feed_dict.update(dp_dict)
            err, ac = sess.run([cost, acc], feed_dict=feed_dict)
            val_loss += err
            val_acc += ac
            n_batch += 1
            result = sess.run(merged, feed_dict=feed_dict)
            val_writer.add_summary(result, tensor_board_val_index)
            tensor_board_val_index += 1
        print(" val loss: %f" % (val_loss / n_epoch))
        print(" val acc: %f" % (val_acc / n_epoch))

# After training, evaluate on the test set
test_loss, test_acc, n_batch = 0, 0, 0
for X_test_a, y_test_a in tl.iterate.minibatches(X_test, y_test, batch_size, shuffle=False):
    # Disable dropout
    dp_dict = tl.utils.dict_to_one(network.all_drop)
    feed_dict = {x: X_test_a, y_: y_test_a}
    feed_dict.update(dp_dict)
    err, ac = sess.run([cost, acc], feed_dict=feed_dict)
    test_loss += err
    test_acc += ac
    n_batch += 1
print(" test loss: %f" % (test_loss / n_epoch))
print(" test acc: %f" % (test_acc / n_epoch))

'''# 2. The simplified fit/test API
# Print the model architecture and parameters
network.print_layers()
network.print_params()

tl.utils.fit(sess, network, train_op, cost, X_train, y_train, x, y_,
             acc=acc, batch_size=500, n_epoch=100, print_freq=5,
             X_val=X_val, y_val=y_val, eval_train=False, tensorboard=True)
tl.utils.test(sess, network, acc, X_test, y_test, x, y_, batch_size=None, cost=cost)
tl.files.save_npz(network.all_params, name='Mnist_nn.npz')
sess.close()'''
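
One note on the commented-out block above: after tl.files.save_npz writes the weights to Mnist_nn.npz, they can be restored in a later session. A minimal sketch, assuming TensorLayer 1.x and that the same network graph has been rebuilt beforehand:

# Rebuild the same network and create a session first, then:
tl.layers.initialize_global_variables(sess)
tl.files.load_and_assign_npz(sess=sess, name='Mnist_nn.npz', network=network)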

I'll close with two points. First, TensorLayer supports two training styles, which I've numbered in the code above. Approach 1 stays close to raw TensorFlow: you write the training loop yourself, which lets you do things like adjust the learning rate as the iterations progress (see the sketch after this paragraph), so it suits researchers. Approach 2 trains with fit and test functions, much like the tflearn library, which suits beginners and developers who just treat neural networks as a tool. Second, on TensorBoard visualization: for approach 1 I have already filled in the complete visualization code above; for approach 2 you only need to set fit's tensorboard argument to True. I'll keep sharing the code I write day to day, and I hope to trade notes with the experts here and learn from each other.
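
As a concrete illustration of that kind of control, here is a minimal sketch, my own addition rather than anything from the book or the original repo, of how the hand-written loop in approach 1 could decay the learning rate over time: swap the fixed learning_rate for a placeholder and feed a fresh value each epoch. The halve-every-20-epochs schedule is an arbitrary example:

# Learning rate as a placeholder so it can be changed between steps
lr = tf.placeholder(tf.float32, shape=[], name='lr')
train_op = tf.train.AdamOptimizer(learning_rate=lr).minimize(cost, var_list=train_params)

base_lr = 0.001
for epoch in range(n_epoch):
    # Hypothetical schedule: halve the learning rate every 20 epochs
    current_lr = base_lr * 0.5 ** (epoch // 20)
    for X_train_a, y_train_a in tl.iterate.minibatches(X_train, y_train, batch_size, shuffle=True):
        feed_dict = {x: X_train_a, y_: y_train_a, lr: current_lr}
        feed_dict.update(network.all_drop)  # enable dropout during training
        sess.run(train_op, feed_dict=feed_dict)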
