I. Overview
1. Modules
- experimental module
2. Classes
- class AveragePooling1D: Average pooling layer for 1D inputs.
- class AveragePooling2D: Average pooling layer for 2D inputs.
- class AveragePooling3D: Average pooling layer for 3D inputs.
- class BatchNormalization: Batch normalization layer.
- class Conv1D: 1D convolution layer.
- class Conv2D: 2D convolution layer.
- class Conv2DTranspose: Transposed 2D convolution layer.
- class Conv3D: 3D convolution layer.
- class Conv3DTranspose: Transposed 3D convolution layer.
- class Dense: Densely-connected layer.
- class Dropout: Applies Dropout to the input.
- class Flatten: Flattens an input tensor while preserving the batch axis.
- class InputSpec: Specifies the ndim, dtype, and shape of every input to a layer.
- class Layer: Base layer class.
- class MaxPooling1D: Max pooling layer for 1D inputs.
- class MaxPooling2D: Max pooling layer for 2D inputs.
- class MaxPooling3D: Max pooling layer for 3D inputs (e.g. volumes).
- class SeparableConv1D: Depthwise separable 1D convolution.
- class SeparableConv2D: Depthwise separable 2D convolution.
3. Functions (each class above also has a functional counterpart; see the sketch after this list)
- average_pooling1d(...): Average pooling layer for 1D inputs.
- average_pooling2d(...): Average pooling layer for 2D inputs.
- average_pooling3d(...): Average pooling layer for 3D inputs.
- batch_normalization(...): Functional interface for the batch normalization layer.
- conv1d(...): Functional interface for the 1D convolution layer.
- conv2d(...): Functional interface for the 2D convolution layer.
- conv2d_transpose(...): Functional interface for the transposed 2D convolution layer.
- conv3d(...): Functional interface for the 3D convolution layer.
- conv3d_transpose(...): Functional interface for the transposed 3D convolution layer.
- dense(...): Functional interface for the densely-connected layer.
- dropout(...): Applies Dropout to the input.
- flatten(...): Flattens an input tensor while preserving the batch axis (axis 0).
- max_pooling1d(...): Max pooling layer for 1D inputs.
- max_pooling2d(...): Max pooling layer for 2D inputs (e.g. images).
- max_pooling3d(...): Max pooling layer for 3D inputs.
- separable_conv1d(...): Functional interface for the depthwise separable 1D convolution layer.
- separable_conv2d(...): Functional interface for the depthwise separable 2D convolution layer.
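To make the relationship between the two lists concrete, here is a minimal sketch (assuming TensorFlow 1.x, where tf.layers is available) contrasting the class interface with its functional counterpart: the class is instantiated once and then called, while the function builds the layer and returns the output tensor in a single call.

import tensorflow as tf  # assumes TensorFlow 1.x

x = tf.placeholder(tf.float32, shape=[None, 28, 28, 1])
flat = tf.layers.flatten(x)  # keeps the batch axis; shape becomes [None, 784]

# Object-oriented interface: build the layer object, then apply it.
dense_layer = tf.layers.Dense(units=10, activation=tf.nn.relu)
y_class = dense_layer(flat)

# Functional interface: one call creates the variables and returns the output tensor.
y_func = tf.layers.dense(flat, units=10, activation=tf.nn.relu)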
II. Important APIs
1. tf.contrib.layers.l2_regularizer
Returns a function that can be used to apply L2 regularization to weights.
tf.contrib.layers.l2_regularizer(
    scale,
    scope=None
)
Small values of L2 regularization can help prevent overfitting the training data.
Arguments:
- scale: A scalar multiplier Tensor. 0.0 disables the regularizer.
- scope: An optional scope name.
Returns:
- A function with signature l2(weights) that applies L2 regularization.
Raises:
- ValueError: If scale is negative or if scale is not a float.
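The snippet below is a minimal sketch (assuming TensorFlow 1.x, where tf.contrib is available) of how the returned function is typically used: either passed as the regularizer argument of tf.get_variable so the penalty is collected automatically, or called directly on a weight tensor.

import tensorflow as tf  # assumes TensorFlow 1.x

regularizer = tf.contrib.layers.l2_regularizer(scale=0.01)

# Attach the regularizer to a variable; its penalty is added to
# tf.GraphKeys.REGULARIZATION_LOSSES automatically.
weights = tf.get_variable(
    "weights", shape=[784, 10],
    initializer=tf.truncated_normal_initializer(stddev=0.1),
    regularizer=regularizer)

# The returned function can also be applied directly:
# l2(weights) = scale * tf.nn.l2_loss(weights).
penalty = regularizer(weights)

# Sum the collected penalties and add the result to the training loss.
reg_loss = tf.add_n(tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES))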
2. tf.layers.conv2d
Functional interface for the 2D convolution layer. (deprecated)
Aliases: tf.compat.v1.layers.conv2d
tf.layers.conv2d(
    inputs,
    filters,
    kernel_size,
    strides=(1, 1),
    padding='valid',
    data_format='channels_last',
    dilation_rate=(1, 1),
    activation=None,
    use_bias=True,
    kernel_initializer=None,
    bias_initializer=tf.zeros_initializer(),
    kernel_regularizer=None,
    bias_regularizer=None,
    activity_regularizer=None,
    kernel_constraint=None,
    bias_constraint=None,
    trainable=True,
    name=None,
    reuse=None
)
Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.keras.layers.Conv2D instead.
This layer creates a convolution kernel that is convolved (actually cross-correlated) with the layer input to produce a tensor of outputs. If use_bias is True (and a bias_initializer is provided), a bias vector is created and added to the outputs. Finally, if activation is not None, it is applied to the outputs as well.
Arguments:
- inputs: Tensor input.
- filters: Integer, the dimensionality of the output space (i.e. the number of filters in the convolution).
- kernel_size: An integer or tuple/list of 2 integers, specifying the height and width of the 2D convolution window. Can be a single integer to specify the same value for all spatial dimensions.
- strides: An integer or tuple/list of 2 integers, specifying the strides of the convolution along the height and width. Can be a single integer to specify the same value for all spatial dimensions. Specifying any stride value != 1 is incompatible with specifying any dilation_rate value != 1.
- padding: One of "valid" or "same" (case-insensitive).
- data_format: A string, one of channels_last (default) or channels_first. The ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, height, width, channels) while channels_first corresponds to inputs with shape (batch, channels, height, width).
- dilation_rate: An integer or tuple/list of 2 integers, specifying the dilation rate to use for dilated convolution. Can be a single integer to specify the same value for all spatial dimensions. Currently, specifying any dilation_rate value != 1 is incompatible with specifying any stride value != 1.
- activation: Activation function. Set it to None to maintain a linear activation.
- use_bias: Boolean, whether the layer uses a bias.
- kernel_initializer: An initializer for the convolution kernel.
- bias_initializer: An initializer for the bias vector. If None, the default initializer will be used.
- kernel_regularizer: Optional regularizer for the convolution kernel.
- bias_regularizer: Optional regularizer for the bias vector.
- activity_regularizer: Optional regularizer function for the output.
- kernel_constraint: Optional projection function to be applied to the kernel after being updated by an Optimizer (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
- bias_constraint: Optional projection function to be applied to the bias after being updated by an Optimizer.
- trainable: Boolean, if True also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
- name: A string, the name of the layer.
- reuse: Boolean, whether to reuse the weights of a previous layer by the same name.
Returns:
Output tensor.
Raises:
- ValueError: if eager execution is enabled.
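Below is a minimal end-to-end sketch (assuming TensorFlow 1.x in graph mode, since conv2d raises ValueError under eager execution) that combines the two APIs covered above: a conv2d layer whose kernel is regularized with tf.contrib.layers.l2_regularizer, followed by max pooling.

import numpy as np
import tensorflow as tf  # assumes TensorFlow 1.x, graph mode

inputs = tf.placeholder(tf.float32, shape=[None, 28, 28, 1])

# 3x3 convolution, 32 filters; 'same' padding keeps the 28x28 spatial size.
conv = tf.layers.conv2d(
    inputs,
    filters=32,
    kernel_size=(3, 3),
    padding='same',
    activation=tf.nn.relu,
    kernel_regularizer=tf.contrib.layers.l2_regularizer(scale=1e-4),
    name='conv1')

# 2x2 max pooling with stride 2 halves the spatial dimensions.
pool = tf.layers.max_pooling2d(conv, pool_size=(2, 2), strides=(2, 2))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    out = sess.run(pool, feed_dict={inputs: np.zeros([4, 28, 28, 1], np.float32)})
    print(out.shape)  # (4, 14, 14, 32)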