References: 《Tensorflow实战》 and 《Tensorflow 实战Google深度学习框架》
https://blog.youkuaiyun.com/sinat_16823063/article/details/53946549
https://blog.youkuaiyun.com/yaoqi_isee/article/details/77526497
https://blog.youkuaiyun.com/u012759136/article/details/52232266
Learning TensorFlow with VGG16 as a hands-on exercise: no fancy tricks, just dataset preparation, training, and testing.
Training data (17flowers), Baidu netdisk link: https://pan.baidu.com/s/1CXcCgC8Ch5Hdmkgde9yAww password: 3nc4
VGG16.npy, Baidu netdisk link: https://pan.baidu.com/s/1eUlM3ia password: 4wvq
1. Network structure, VGG16.py
#coding=utf-8
import tensorflow as tf
import numpy as np
# Load the pretrained model; allow_pickle is required on NumPy >= 1.16.5
# because the .npy file stores a pickled Python dict
data_dict = np.load('./vgg16.npy', encoding='latin1', allow_pickle=True).item()
# Print a tensor's name and shape (used to trace each layer's output)
def print_layer(t):
    print(t.op.name, ' ', t.get_shape().as_list(), '\n')
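As an aside, it helps to know what the loaded data_dict looks like before reading the conv layer below. Each key is a layer name mapping to a [weights, biases] pair. The toy stand-in here is built by hand rather than loaded from the real file, so the (3, 3, 3, 64) kernel shape is an assumption based on the standard VGG16 release (3x3 window, 3 input channels, 64 output channels):

```python
import numpy as np

# Toy stand-in for the layout of vgg16.npy: layer name -> [weights, biases].
# Shapes mirror the first VGG16 conv layer; this dict is illustrative only.
data_dict = {
    'conv1_1': [np.zeros((3, 3, 3, 64), dtype=np.float32),
                np.zeros(64, dtype=np.float32)],
}

for name in sorted(data_dict):
    weights, biases = data_dict[name]
    print(name, 'weights:', weights.shape, 'biases:', biases.shape)
```

With the real file, np.load('./vgg16.npy', encoding='latin1', allow_pickle=True).item() returns a dict of exactly this shape, keyed by layer names such as 'conv1_1'.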
# Define a convolution layer
"""
Three ways of initializing the weights are provided here:
1. parameters from the pretrained model
2. truncated normal, the approach used in the reference books
3. Xavier, the approach used in some blog posts
The fineturn and xavier flags select which one is used; try all three if you are interested.
"""
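For reference, the Xavier option mentioned above can be sketched in plain NumPy. The fan-in/fan-out formula below is the standard Glorot-uniform rule for 2-D convolutions (limit = sqrt(6 / (fan_in + fan_out))), stated here as background rather than taken from the post itself:

```python
import numpy as np

# Glorot/Xavier uniform limit for a k x k conv kernel:
# fan_in = k*k*d_in input connections, fan_out = k*k*d_out output connections.
def xavier_conv_limit(k, d_in, d_out):
    fan_in = k * k * d_in
    fan_out = k * k * d_out
    return np.sqrt(6.0 / (fan_in + fan_out))

# Example: a 3x3 kernel taking 64 channels to 128 channels.
limit = xavier_conv_limit(3, 64, 128)
w = np.random.uniform(-limit, limit, size=(3, 3, 64, 128))
```

Keeping the weight variance tied to fan-in and fan-out this way keeps activation magnitudes roughly constant across layers, which is why it is a popular default when not fine-tuning.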
def conv(x, d_out, name, fineturn=False, xavier=False):
    d_in = x.get_shape()[-1].value
    with tf.name_scope(name) as scope:
        # Fine-tuning: take kernel and bias from the pretrained model,
        # frozen as constants
        if fineturn:
            kernel = tf.constant(data_dict[name][0], name="weights")
            bias = tf.constant(data_dict[name][1], name="bias")
            print("fineturn")
        elif not xavier:
            kernel = tf.Variable(tf.truncated_normal([3, 3, d_in, d_out], stddev=0.1), name='weights')
            bias = tf.Variable(tf.constant(0.0, dtype=tf.float32, shape=[d_out]),
                               trainable=True,
                               name='bias')
            print("truncated_normal")
        else:
            kernel = tf.get_variable(scope+'weights', shape=[3, 3, d_in, d_out],
                                     dtype=tf.float32,
                                     initializer=tf.contrib.layers.xavier_initializer_conv2d())
            bias = tf.Variable(tf.constant(0.0, dtype=tf.float32, shape=[d_out]),
                               trainable=True,
                               name='bias')
            print("xavier")