TF-slim

import tensorflow.contrib.slim as slim

slim is a library that makes building, training, and evaluating neural networks simple. It removes much of the repetitive boilerplate of raw TensorFlow, making code more compact and readable. In addition, slim ships with many well-known computer-vision models (VGG, AlexNet, etc.) that can be used directly or extended in various ways.
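
For example, the bundled VGG-16 definition in slim.nets can be applied to a batch of images with a single call. A minimal sketch (the placeholder shape and num_classes value here are only illustrative):

import tensorflow as tf
import tensorflow.contrib.slim as slim
from tensorflow.contrib.slim.nets import vgg

images = tf.placeholder(tf.float32, [None, 224, 224, 3])
# vgg_16 builds the whole network and returns the logits plus a dict of
# intermediate activations (end_points).
logits, end_points = vgg.vgg_16(images, num_classes=1000, is_training=False)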

Sub-modules:
(1) arg_scope: in addition to the basic name_scope and variable_scope, slim adds arg_scope, which controls the default hyper-parameters of each layer within its scope.

(2) data, layers, losses, metrics (evaluation metrics), nets (classic network definitions), queues (queue management), regularizers, variables (variable management)
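
As a taste of the losses sub-module, the sketch below builds a tiny network and collects its total loss; the layer sizes are arbitrary and only meant to illustrate the API:

import tensorflow as tf
import tensorflow.contrib.slim as slim

images = tf.placeholder(tf.float32, [None, 32, 32, 3])
labels = tf.placeholder(tf.float32, [None, 10])   # one-hot labels

net = slim.conv2d(images, 16, [3, 3], scope='conv1')
net = slim.flatten(net)
logits = slim.fully_connected(net, 10, activation_fn=None, scope='logits')

# slim.losses registers each loss in a collection, so the total loss
# (including any regularization losses added by the layers) can be
# fetched in one call.
cross_entropy = slim.losses.softmax_cross_entropy(logits, labels)
total_loss = slim.losses.get_total_loss(add_regularization_losses=True)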

Defining models with slim

(1) Defining a variable:
Variables fall into two classes: model variables and regular (non-model) variables.

# Model variable: tracked in the model-variable collection and saved/restored with the model
weights = slim.model_variable('weights',
                              shape=[10, 10, 3, 3],
                              initializer=tf.truncated_normal_initializer(stddev=0.1),
                              regularizer=slim.l2_regularizer(0.05),
                              device='/cpu:0')
model_variables = slim.get_model_variables()
# Regular variable: created with slim.variable, not added to the model-variable collection
my_var = slim.variable('my_var', shape=[20, 1], initializer=tf.zeros_initializer())
all_variables = slim.get_variables()  # get_variables() returns every variable (regular + model)
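
Continuing from the snippet above, the two getters return different sets, and a regular variable can be promoted into the model-variable collection with slim.add_model_variable (a small illustrative sketch):

print(slim.get_model_variables())   # -> [weights]
print(slim.get_variables())         # -> [weights, my_var]

# Promote the regular variable so it is also treated as a model variable:
slim.add_model_variable(my_var)
print(slim.get_model_variables())   # -> now contains my_var as well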

(2) Implementing a layer
First, the same layer written in raw TensorFlow:

input = ...
with tf.name_scope('conv1_1') as scope:
  kernel = tf.Variable(tf.truncated_normal([3, 3, 64, 128], dtype=tf.float32, stddev=1e-1), name='weights')
  conv = tf.nn.conv2d(input, kernel, [1, 1, 1, 1], padding='SAME')
  biases = tf.Variable(tf.constant(0.0, shape=[128], dtype=tf.float32),trainable=True, name='biases')
  bias = tf.nn.bias_add(conv, biases)
  conv1 = tf.nn.relu(bias, name=scope)

The same layer in slim:

input = ...
# arguments: input tensor, number of output channels, kernel size [kh, kw], stride, scope
net = slim.conv2d(input, 128, [3, 3], stride=1, scope='conv1_1')
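
That single call creates the weights and biases variables, runs the convolution, adds the bias, and applies the default ReLU activation. A quick way to check this (a sketch, assuming the 64-channel input from the raw-TensorFlow version above):

import tensorflow as tf
import tensorflow.contrib.slim as slim

input = tf.placeholder(tf.float32, [None, 224, 224, 64])
net = slim.conv2d(input, 128, [3, 3], stride=1, scope='conv1_1')

# The layer created its own variables under the given scope:
print(slim.get_variables(scope='conv1_1'))
# -> conv1_1/weights:0 (shape [3, 3, 64, 128]) and conv1_1/biases:0 (shape [128])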

(3) slim.repeat and slim.stack

To stack three identical convolutional layers in a row, use slim.repeat:

net = slim.repeat(net, 3, slim.conv2d, 256, [3, 3], scope='conv3')
net = slim.max_pool2d(net, [2, 2], scope='pool2')
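
slim.repeat is roughly equivalent to writing the layer three times by hand, and it also numbers the sub-scopes automatically, so the three convolutions end up as conv3/conv3_1, conv3/conv3_2 and conv3/conv3_3. Expanded sketch for comparison:

net = slim.conv2d(net, 256, [3, 3], scope='conv3/conv3_1')
net = slim.conv2d(net, 256, [3, 3], scope='conv3/conv3_2')
net = slim.conv2d(net, 256, [3, 3], scope='conv3/conv3_3')
net = slim.max_pool2d(net, [2, 2], scope='pool2')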

slim.stack handles the case where the layers' arguments (e.g. kernel sizes or output sizes) differ from call to call:

# Three fully connected layers, written out by hand:
x = slim.fully_connected(x, 32, scope='fc/fc_1')
x = slim.fully_connected(x, 64, scope='fc/fc_2')
x = slim.fully_connected(x, 128, scope='fc/fc_3')

The same three layers with slim.stack:

slim.stack(x, slim.fully_connected, [32, 64, 128], scope='fc')
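
slim.stack also works when the positional arguments differ per call, for example convolutions with different kernel sizes and output depths; each tuple supplies the arguments for one layer (a sketch following the pattern in the TF-Slim README):

# Written out by hand:
x = slim.conv2d(x, 32, [3, 3], scope='core/core_1')
x = slim.conv2d(x, 32, [1, 1], scope='core/core_2')
x = slim.conv2d(x, 64, [3, 3], scope='core/core_3')

# With slim.stack:
x = slim.stack(x, slim.conv2d, [(32, [3, 3]), (32, [1, 1]), (64, [3, 3])], scope='core')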

(4) slim.arg_scope

When many layers share the same arguments, arg_scope lets you define default argument values for the listed layer types inside its scope; to use different values for a particular layer, simply pass the argument explicitly, which overrides the default. Written without arg_scope, the following three layers repeat the same initializer and regularizer:

net = slim.conv2d(inputs, 64, [11, 11], 4, padding='SAME',
                  weights_initializer=tf.truncated_normal_initializer(stddev=0.01),
                  weights_regularizer=slim.l2_regularizer(0.0005), scope='conv1')
net = slim.conv2d(net, 128, [11, 11], padding='VALID',
                  weights_initializer=tf.truncated_normal_initializer(stddev=0.01),
                  weights_regularizer=slim.l2_regularizer(0.0005), scope='conv2')
net = slim.conv2d(net, 256, [11, 11], padding='SAME',
                  weights_initializer=tf.truncated_normal_initializer(stddev=0.01),
                  weights_regularizer=slim.l2_regularizer(0.0005), scope='conv3')

With arg_scope:

with slim.arg_scope([slim.conv2d],
                    padding='SAME',
                    weights_initializer=tf.truncated_normal_initializer(stddev=0.01),
                    weights_regularizer=slim.l2_regularizer(0.0005)):
    net = slim.conv2d(inputs, 64, [11, 11], scope='conv1')
    net = slim.conv2d(net, 128, [11, 11], padding='VALID', scope='conv2')  # override padding here
    net = slim.conv2d(net, 256, [11, 11], scope='conv3')
Note:

slim.arg_scope assigns default values to function arguments automatically; with it, you no longer need to repeat the same arguments on every call and only set them where they need to change.

What if the network contains layers other than convolutions? Just nest two arg_scopes:

with slim.arg_scope([slim.conv2d, slim.fully_connected],
                    activation_fn=tf.nn.relu,
                    weights_initializer=tf.truncated_normal_initializer(stddev=0.01),
                    weights_regularizer=slim.l2_regularizer(0.0005)):  # defaults for both conv2d and fully_connected

  with slim.arg_scope([slim.conv2d], stride=1, padding='SAME'):  # extra defaults for conv2d only

    net = slim.conv2d(inputs, 64, [11, 11], 4, padding='VALID', scope='conv1')
    net = slim.conv2d(net, 256, [5, 5],
                      weights_initializer=tf.truncated_normal_initializer(stddev=0.03),
                      scope='conv2')
    net = slim.fully_connected(net, 1000, activation_fn=None, scope='fc')
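
A common follow-up pattern (used by the bundled model definitions, e.g. vgg_arg_scope) is to wrap the arg_scope in a function so the same defaults can be reused wherever a graph is built; the function name and weight_decay value below are just illustrative:

def my_conv_arg_scope(weight_decay=0.0005):
  with slim.arg_scope([slim.conv2d, slim.fully_connected],
                      activation_fn=tf.nn.relu,
                      weights_regularizer=slim.l2_regularizer(weight_decay)) as sc:
    return sc

# The captured scope can be re-entered later:
with slim.arg_scope(my_conv_arg_scope()):
  net = slim.conv2d(inputs, 64, [3, 3], scope='conv1')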