TensorFlow Notes [9]: Deep Learning - Basic Structures of Several Classic Networks


Preface

We implement image recognition with a basic CNN, LeNet, AlexNet, VGGNet, InceptionNet, and ResNet.


Note: what follows is the main body of this article; the examples below are for reference.

I. Convolutional Neural Networks

Convolution is regarded as an effective way to extract image features. Typically, a square convolution kernel slides across the input feature map with a specified stride, visiting every pixel of the input feature map.
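To make the sliding-window arithmetic concrete, here is a small sketch (the standard output-size formulas, not code from the original post):

```python
import math

# Output side length of a square convolution/pooling window (standard formulas).
def conv_out_size(in_size, kernel, stride, padding):
    if padding == 'valid':
        return (in_size - kernel) // stride + 1  # no zero padding
    if padding == 'same':
        return math.ceil(in_size / stride)       # zero-padded to preserve size/stride

print(conv_out_size(32, 5, 1, 'valid'))  # 28: a 5x5 kernel shrinks a 32x32 map
print(conv_out_size(32, 5, 1, 'same'))   # 32: 'same' padding keeps the size
```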

The five-layer recipe used throughout this post is abbreviated CBAPD:
C - Convolution layer

Conv2D
```python
keras.layers.Conv2D(filters, kernel_size, strides=(1, 1), padding='valid', data_format=None, dilation_rate=(1, 1), activation=None, use_bias=True, kernel_initializer='glorot_uniform', bias_initializer='zeros', kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, bias_constraint=None)
```
A 2D convolution layer (e.g. spatial convolution over images).
This layer creates a convolution kernel that is convolved with the layer input to produce a tensor of outputs. If use_bias is True, a bias vector is created and added to the outputs. Finally, if activation is not None, it is applied to the outputs as well.

When using this layer as the first layer in a model, provide the input_shape argument (a tuple of integers, not including the sample axis), e.g. input_shape=(128, 128, 3) for 128x128 RGB images when data_format="channels_last".

**Parameters**

filters: Integer, the dimensionality of the output space (i.e. the number of output filters in the convolution).
kernel_size: An integer, or a tuple/list of 2 integers, specifying the width and height of the 2D convolution window. Can be a single integer to specify the same value for all spatial dimensions.
strides: An integer, or a tuple/list of 2 integers, specifying the strides of the convolution along the width and height. Can be a single integer to specify the same value for all spatial dimensions. Specifying any stride value != 1 is incompatible with specifying any dilation_rate value != 1.
padding: one of "valid" or "same" (case-sensitive).
data_format: A string, one of channels_last (default) or channels_first, the ordering of the dimensions in the inputs. channels_last corresponds to inputs with shape (batch, height, width, channels) while channels_first corresponds to inputs with shape (batch, channels, height, width). It defaults to the image_data_format value found in the Keras config file at ~/.keras/keras.json. If you never set it, channels_last is used.
dilation_rate: An integer, or a tuple/list of 2 integers, specifying the dilation rate for dilated convolution. Can be a single integer to specify the same value for all spatial dimensions. Currently, specifying any dilation_rate value != 1 is incompatible with specifying any stride value != 1.
activation: Activation function to use (see activations). If you don't specify anything, no activation is applied (i.e. linear activation: a(x) = x).
use_bias: Boolean, whether the layer uses a bias vector.
kernel_initializer: Initializer for the kernel weights matrix (see initializers).
bias_initializer: Initializer for the bias vector (see initializers).
kernel_regularizer: Regularizer function applied to the kernel weights matrix (see regularizer).
bias_regularizer: Regularizer function applied to the bias vector (see regularizer).
activity_regularizer: Regularizer function applied to the output of the layer (its activation) (see regularizer).
kernel_constraint: Constraint function applied to the kernel weights matrix (see constraints).
bias_constraint: Constraint function applied to the bias vector (see constraints).
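A minimal usage sketch (the input shape and filter count here are illustrative assumptions):

```python
import tensorflow as tf

# One Conv2D applied to a dummy CIFAR-sized batch: (batch, height, width, channels).
x = tf.random.normal((1, 32, 32, 3))
conv = tf.keras.layers.Conv2D(filters=6, kernel_size=(5, 5), strides=1, padding='same')
print(conv(x).shape)  # (1, 32, 32, 6): 'same' padding keeps 32x32; 6 output channels
```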

B - Batch normalization layer

BatchNormalization
```python
keras.layers.BatchNormalization(axis=-1, momentum=0.99, epsilon=0.001, center=True, scale=True, beta_initializer='zeros', gamma_initializer='ones', moving_mean_initializer='zeros', moving_variance_initializer='ones', beta_regularizer=None, gamma_regularizer=None, beta_constraint=None, gamma_constraint=None)
```
Batch normalization layer (Ioffe and Szegedy, 2015).

Normalize the activations of the previous layer at each batch, i.e. apply a transformation that keeps the mean activation close to 0 and the activation standard deviation close to 1.

Parameters

axis: Integer, the axis that should be normalized (typically the features axis). For instance, after a Conv2D layer with data_format="channels_first", set axis=1 in BatchNormalization.
momentum: Momentum for the moving mean and the moving variance.
epsilon: Small float added to the variance to avoid dividing by zero.
center: If True, add the offset beta to the normalized tensor. If False, beta is ignored.
scale: If True, multiply by gamma. If False, gamma is not used. When the next layer is linear (also e.g. nn.relu), this can be disabled since the scaling will be done by the next layer.
beta_initializer: Initializer for the beta weight.
gamma_initializer: Initializer for the gamma weight.
moving_mean_initializer: Initializer for the moving mean.
moving_variance_initializer: Initializer for the moving variance.
beta_regularizer: Optional regularizer for the beta weight.
gamma_regularizer: Optional regularizer for the gamma weight.
beta_constraint: Optional constraint for the beta weight.
gamma_constraint: Optional constraint for the gamma weight.
Input shape is arbitrary. Use the input_shape argument (a tuple of integers, not including the samples axis) when using this layer as the first layer in a model.
Output shape is the same as the input shape.


A - Activation function
P - Pooling layer (max pooling or average pooling)
D - Dropout layer: randomly deactivates a fraction of neurons during training (a sketch of the full CBAPD block follows).
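Putting the five letters together, one CBAPD block looks like the following sketch (layer sizes are illustrative, matching the Baseline model later in this post):

```python
import tensorflow as tf
from tensorflow.keras.layers import (Conv2D, BatchNormalization, Activation,
                                     MaxPool2D, Dropout)

# One CBAPD block: Convolution -> BatchNorm -> Activation -> Pooling -> Dropout.
cbapd = tf.keras.models.Sequential([
    Conv2D(filters=6, kernel_size=(5, 5), padding='same'),   # C
    BatchNormalization(),                                    # B
    Activation('relu'),                                      # A
    MaxPool2D(pool_size=(2, 2), strides=2, padding='same'),  # P
    Dropout(0.2),                                            # D
])
print(cbapd(tf.random.normal((1, 32, 32, 3))).shape)  # (1, 16, 16, 6)
```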

II. Classic Network Structures

1. A Custom CNN (Baseline)

The code is as follows:

```python
# 1. Import required modules --- import
import tensorflow as tf
from tensorflow.keras.layers import Conv2D, BatchNormalization, Activation, MaxPool2D, Dropout,Flatten,Dense
from tensorflow.keras import Model

import os
import numpy as np
import matplotlib.pyplot as plt

np.set_printoptions(threshold=np.inf)

# 2. Load the dataset's training and test splits --- (x_train, y_train), (x_test, y_test)
cifar10 = tf.keras.datasets.cifar10
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
x_train, x_test = x_train/255, x_test/255

# 3. Build the network model --- class
class Baseline(Model):
    def __init__(self):
        super(Baseline, self).__init__()
        self.c1 = Conv2D(filters=6, kernel_size=(5, 5), padding='same')
        self.b1 = BatchNormalization()
        self.a1 = Activation('relu')
        self.p1 = MaxPool2D(pool_size=(2, 2), strides=2, padding='same')
        self.d1 = Dropout(0.2)
        
        self.flatten = Flatten()
        self.f1 = Dense(128, activation='relu')
        self.d2 = Dropout(0.2)
        self.f2 = Dense(10, activation='softmax')
        
    def call(self, x):
        x = self.c1(x)
        x = self.b1(x)
        x = self.a1(x)
        x = self.p1(x)
        x = self.d1(x)
        
        x = self.flatten(x)
        x = self.f1(x)
        x = self.d2(x)
        y = self.f2(x)
        return y
    
model = Baseline()

# 4. Configure the training settings for the model --- compile
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
              metrics=['sparse_categorical_accuracy'])

# ----------- Resume training from a checkpoint and keep the best model -----------------
checkpoint_save_path = './checkpoint/Baseline.ckpt'
if os.path.exists(checkpoint_save_path + '.index'):
    print('------------load the model----------------')
    model.load_weights(checkpoint_save_path)

cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_save_path,
                                                 save_weights_only=True,
                                                 save_best_only=True)
# 5. Feed the dataset to the model and train --- fit
history = model.fit(x_train, y_train, batch_size=32, epochs=50,
                    validation_data=(x_test, y_test),
                    validation_freq=1,
                    callbacks=[cp_callback])

# 6. Print the model architecture and parameter counts --- summary
model.summary()

# ------------------------- Save the trainable parameters ------------------------
f = open('./weights.txt', 'w')
for v in model.trainable_variables:
    f.write(str(v.name) + '\n')
    f.write(str(v.shape) + '\n')
    f.write(str(v.numpy()) + '\n')
f.close()

# ------------------- Visualize acc/loss ----------------------

acc = history.history['sparse_categorical_accuracy']
val_acc = history.history['val_sparse_categorical_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']

plt.subplot(1, 2, 1)
plt.plot(acc, label='Training Accuracy')
plt.plot(val_acc, label='Validation Accuracy')
plt.title('Training and Validation Accuracy')
plt.legend()

plt.subplot(1, 2, 2)
plt.plot(loss, label='Training loss')
plt.plot(val_loss, label='Validation loss')
plt.title('Training and Validation Loss')
plt.legend()

plt.show()
```
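After training, the best weights (ModelCheckpoint with save_best_only=True monitors validation loss by default) can be restored for evaluation. A minimal sketch reusing the objects defined above:

```python
# A sketch: rebuild the model and restore the best checkpoint for evaluation.
eval_model = Baseline()
eval_model.compile(optimizer='adam',
                   loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
                   metrics=['sparse_categorical_accuracy'])
eval_model.load_weights(checkpoint_save_path)  # restore is deferred until first use
eval_model.evaluate(x_test, y_test, verbose=2)
```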

2. LeNet

```python
# 1. Import required modules ---- import
import tensorflow as tf
from tensorflow.keras.layers import Conv2D, BatchNormalization, Activation, MaxPool2D, Dropout, Flatten, Dense
from tensorflow.keras import Model

# 2. LeNet5 --- 2 convolution layers and 3 fully connected layers
class LeNet5(Model):
    def __init__(self):
        super(LeNet5, self).__init__()
        self.c1 = Conv2D(filters=6, kernel_size=(5, 5), activation='sigmoid')
        self.p1 = MaxPool2D(pool_size=(2, 2), strides=2)
        
        self.c2 = Conv2D(filters=16, kernel_size=(5, 5), activation='sigmoid')
        self.p2 = MaxPool2D(pool_size=(2, 2), strides=2)
        
        self.flatten = Flatten()
        self.f1 = Dense(120, activation='sigmoid')
        self.f2 = Dense(84, activation='sigmoid')
        self.f3 = Dense(10, activation='softmax')
        
    def call(self, x):
        x = self.c1(x)
        x = self.p1(x)
        x = self.c2(x)
        x = self.p2(x)
        
        x = self.flatten(x)
        x = self.f1(x)
        x = self.f2(x)
        y = self.f3(x)
        return y
    
model = LeNet5()
```
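As a quick sanity check (a sketch; the 32x32x3 input shape is an assumption matching the CIFAR-10 pipeline above):

```python
# Build LeNet5 on CIFAR-sized inputs and inspect the parameter summary.
lenet = LeNet5()
lenet.build(input_shape=(None, 32, 32, 3))
lenet.summary()  # 'valid' 5x5 convs: 32->28->14 after c1/p1, then 14->10->5 after c2/p2
```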

3. AlexNet

```python
# 1. Import required modules ---- import
import tensorflow as tf
from tensorflow.keras.layers import Conv2D, BatchNormalization, Activation, MaxPool2D, Dropout, Flatten, Dense
from tensorflow.keras import Model

# 2. AlexNet8 --- 5 convolution layers and 3 fully connected layers
class AlexNet8(Model):
    def __init__(self):
        super(AlexNet8, self).__init__()
        self.c1 = Conv2D(filters=96, kernel_size=(3, 3))
        self.b1 = BatchNormalization()
        self.a1 = Activation('relu')
        self.p1 = MaxPool2D(pool_size=(3, 3), strides=2)
        
        self.c2 = Conv2D(filters=256, kernel_size=(3, 3))
        self.b2 = BatchNormalization()
        self.a2 = Activation('relu')
        self.p2 = MaxPool2D(pool_size=(3, 3), strides=2)
        
        self.c3 = Conv2D(filters=384, kernel_size=(3, 3), padding='same', activation='relu')
        
        self.c4 = Conv2D(filters=384, kernel_size=(3, 3), padding='same', activation='relu')
        
        self.c5 = Conv2D(filters=256, kernel_size=(3, 3), padding='same', activation='relu')
        self.p3 = MaxPool2D(pool_size=(3, 3), strides=2)
        
        self.flatten = Flatten()
        self.f1 = Dense(2048, activation='relu')
        self.d1 = Dropout(0.5)
        self.f2 = Dense(2048, activation='relu')
        self.d2 = Dropout(0.5)  # fixed: this is a dropout layer, not Dense(0.5)
        self.f3 = Dense(10, activation='softmax')
        
        
    def call(self, x):
        x = self.c1(x)
        x = self.b1(x)
        x = self.a1(x)
        x = self.p1(x)

        x = self.c2(x)
        x = self.b2(x)
        x = self.a2(x)
        x = self.p2(x)

        x = self.c3(x)

        x = self.c4(x)

        x = self.c5(x)
        x = self.p3(x)

        x = self.flatten(x)
        x = self.f1(x)
        x = self.d1(x)

        x = self.f2(x)
        x = self.d2(x)

        y = self.f3(x)
        return y

model = AlexNet8()
```
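Note that this AlexNet8 is a scaled-down adaptation for CIFAR-10: it keeps the original 5-conv/3-dense layout but uses 3x3 kernels and 2048-unit dense layers, whereas the original AlexNet used 11x11 and 5x5 kernels, 4096-unit dense layers, and roughly 227x227 inputs.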

4. VGGNet

```python
# 1. Import required modules ---- import
import tensorflow as tf
from tensorflow.keras.layers import Conv2D, BatchNormalization, Activation, MaxPool2D, Dropout, Flatten, Dense
from tensorflow.keras import Model

# 2. VGG16 --- 13 convolution layers and 3 fully connected layers
class VGG16(Model):
    def __init__(self):
        super(VGG16, self).__init__()
        self.c1 = Conv2D(filters=64, kernel_size=(3, 3), strides=1, padding='same', input_shape=(32, 32, 3))
        self.b1 = BatchNormalization()
        self.a1 = Activation('relu')
        self.c2 = Conv2D(filters=64, kernel_size=(3, 3), strides=1, padding='same')
        self.b2 = BatchNormalization()
        self.a2 = Activation('relu')
        self.p1 = MaxPool2D(pool_size=(2, 2), strides=2, padding='same')
        self.d1 = Dropout(0.2)

        self.c3 = Conv2D(filters=128, kernel_size=(3, 3), strides=1, padding='same')
        self.b3 = BatchNormalization()
        self.a3 = Activation('relu')
        self.c4 = Conv2D(filters=128, kernel_size=(3, 3), strides=1, padding='same')
        self.b4 = BatchNormalization()
        self.a4 = Activation('relu')
        self.p2 = MaxPool2D(pool_size=(2, 2), strides=2, padding='same')
        self.d2 = Dropout(0.2)

        self.c5 = Conv2D(filters=256, kernel_size=(3, 3), strides=1, padding='same')
        self.b5 = BatchNormalization()
        self.a5 = Activation('relu')
        self.c6 = Conv2D(filters=256, kernel_size=(3, 3), strides=1, padding='same')
        self.b6 = BatchNormalization()
        self.a6 = Activation('relu')
        self.c7 = Conv2D(filters=256, kernel_size=(3, 3), strides=1, padding='same')
        self.b7 = BatchNormalization()
        self.a7 = Activation('relu')
        self.p3 = MaxPool2D(pool_size=(2, 2), strides=2, padding='same')
        self.d3 = Dropout(0.2)

        self.c8 = Conv2D(filters=512, kernel_size=(3, 3), strides=1, padding='same')
        self.b8 = BatchNormalization()
        self.a8 = Activation('relu')
        self.c9 = Conv2D(filters=512, kernel_size=(3, 3), strides=1, padding='same')
        self.b9 = BatchNormalization()
        self.a9 = Activation('relu')
        self.c10 = Conv2D(filters=512, kernel_size=(3, 3), strides=1, padding='same')
        self.b10 = BatchNormalization()
        self.a10 = Activation('relu')
        self.p4 = MaxPool2D(pool_size=(2, 2), strides=2, padding='same')
        self.d4 = Dropout(0.2)

        self.c11 = Conv2D(filters=512, kernel_size=(3, 3), strides=1, padding='same')
        self.b11 = BatchNormalization()
        self.a11 = Activation('relu')
        self.c12 = Conv2D(filters=512, kernel_size=(3, 3), strides=1, padding='same')
        self.b12 = BatchNormalization()
        self.a12 = Activation('relu')
        self.c13 = Conv2D(filters=512, kernel_size=(3, 3), strides=1, padding='same')
        self.b13 = BatchNormalization()
        self.a13 = Activation('relu')
        self.p5 = MaxPool2D(pool_size=(2, 2), strides=2, padding='same')
        self.d5 = Dropout(0.2)
        
        self.flatten = Flatten()
        self.f1 = Dense(512, activation='relu')
        self.d6 = Dropout(0.2)
        self.f2 = Dense(512, activation='relu')
        self.d7 = Dropout(0.2)
        self.f3 = Dense(10, activation='softmax')
        
    def call(self, x):
        x = self.c1(x)
        x = self.b1(x)
        x = self.a1(x)
        x = self.c2(x)
        x = self.b2(x)
        x = self.a2(x)
        x = self.p1(x)
        x = self.d1(x)

        x = self.c3(x)
        x = self.b3(x)
        x = self.a3(x)
        x = self.c4(x)
        x = self.b4(x)
        x = self.a4(x)
        x = self.p2(x)
        x = self.d2(x)

        x = self.c5(x)
        x = self.b5(x)
        x = self.a5(x)
        x = self.c6(x)
        x = self.b6(x)
        x = self.a6(x)
        x = self.c7(x)
        x = self.b7(x)
        x = self.a7(x)
        x = self.p3(x)
        x = self.d3(x)

        x = self.c8(x)
        x = self.b8(x)
        x = self.a8(x)
        x = self.c9(x)
        x = self.b9(x)
        x = self.a9(x)
        x = self.c10(x)
        x = self.b10(x)
        x = self.a10(x)
        x = self.p4(x)
        x = self.d4(x)

        x = self.c11(x)
        x = self.b11(x)
        x = self.a11(x)
        x = self.c12(x)
        x = self.b12(x)
        x = self.a12(x)
        x = self.c13(x)
        x = self.b13(x)
        x = self.a13(x)
        x = self.p5(x)
        x = self.d5(x)

        x = self.flatten(x)
        x = self.f1(x)
        x = self.d6(x)
        x = self.f2(x)
        x = self.d7(x)
        y = self.f3(x)
        return y

model = VGG16()
```

5. InceptionNet

```python
# 1. Import required modules ---- import
import tensorflow as tf
from tensorflow.keras.layers import Conv2D, BatchNormalization, Activation, MaxPool2D, Dropout, Flatten, Dense, GlobalAveragePooling2D
from tensorflow.keras import Model

# 2. Inception10 --- four parallel branches are concatenated to form one block

# 2.1 Wrap the C-B-A structure into a reusable ConvBNRelu class
class ConvBNRelu(Model):
    def __init__(self, ch, kernelsz=3, strides=1, padding='same'):
        super(ConvBNRelu, self).__init__()
        self.model = tf.keras.models.Sequential([
            Conv2D(ch, kernelsz, strides=strides, padding=padding),
            BatchNormalization(),
            Activation('relu')
        ])
    
    def call(self, x):
        x = self.model(x)
        return x
 
 
# 2.2 Build the basic Inception unit; these units make up one block
class InceptionBlk(Model):
    def __init__(self, ch, strides=1):
        super(InceptionBlk, self).__init__()
        self.ch = ch
        self.strides = strides
        
        self.c1 = ConvBNRelu(ch, kernelsz=1, strides=strides)  # fixed: keyword is kernelsz
        
        self.c2_1 = ConvBNRelu(ch,kernelsz=1, strides=strides)
        self.c2_2 = ConvBNRelu(ch, kernelsz=3, strides=1)
        
        self.c3_1 = ConvBNRelu(ch, kernelsz=1, strides=strides)
        self.c3_2 = ConvBNRelu(ch, kernelsz=5, strides=1)
        
        self.p4_1 = MaxPool2D(3, strides=1, padding='same')
        self.c4_2 = ConvBNRelu(ch, kernelsz=1, strides=strides)
        
    def call(self, x):
        x1 = self.c1(x)
        
        x2_1 = self.c2_1(x)
        x2_2 = self.c2_2(x2_1)
        
        x3_1 = self.c3_1(x)
        x3_2 = self.c3_2(x3_1)
        
        x4_1 = self.p4_1(x)
        x4_2 = self.c4_2(x4_1)
        
        x = tf.concat([x1, x2_2, x3_2, x4_2], axis=3)
        return x
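# Editor's note: every branch uses padding='same' and the same overall stride, so
# all four outputs share the same spatial size; tf.concat along axis=3 (the channel
# axis in channels_last format) therefore stacks them into 4*ch output channels.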
    

# 2.3 Use the block defined above to assemble the full Inception10 network (an Inception v1 style model)
class Inception10(Model):
    def __init__(self, num_blocks, num_classes, init_ch=16, **kwargs):
        super(Inception10, self).__init__(**kwargs)
        self.in_channels = init_ch
        self.out_channels = init_ch
        self.num_blocks = num_blocks
        self.init_ch = init_ch
        
        self.c1 = ConvBNRelu(init_ch)
        
        self.blocks = tf.keras.models.Sequential()
        for block_id in range(num_blocks):
            for lay_id in range(2):
                if lay_id == 0:
                    block = InceptionBlk(self.out_channels, strides=2)
                else:
                    block = InceptionBlk(self.out_channels, strides=1)
                
                self.blocks.add(block)
                self.out_channels *= 2
        
        self.p1 = GlobalAveragePooling2D()
        
        self.f1 = Dense(num_classes, activation='softmax')
        
    def call(self, x):
        x = self.c1(x)
        x = self.blocks(x)
        x = self.p1(x)
        y = self.f1(x)
        return y
        
model = Inception10(num_blocks=2, num_classes=10)
```

6. ResNet

```python
# 1. Import required modules ---- import
import tensorflow as tf
from tensorflow.keras.layers import Conv2D, BatchNormalization, Activation, MaxPool2D, Dropout, Flatten, Dense, GlobalAveragePooling2D
from tensorflow.keras import Model

# 2. ResNet18 --- an 18-layer network: 1 conv layer + 8 ResNet blocks (2 conv layers each, 16 in total) + 1 fully connected layer

# 2.1 Build the ResNet residual block, covering both cases: shortcut dimensions match, or differ
class ResnetBlock(Model):
    def __init__(self, filters, strides=1, residual_path=False):
        super(ResnetBlock, self).__init__()
        self.filters = filters
        self.strides = strides
        self.residual_path = residual_path
        
        self.c1 = Conv2D(filters, (3, 3), padding='same', use_bias=False)
        self.b1 = BatchNormalization()
        self.a1 = Activation('relu')
        
        self.c2 = Conv2D(filters, (3, 3), strides=strides, padding='same', use_bias=False)
        self.b2 = BatchNormalization()
        
        if residual_path:
            self.down_c1 = Conv2D(filters, (1, 1), strides=strides, padding='same', use_bias=False)
            self.down_b1 = BatchNormalization()
            
        self.a2 = Activation('relu')
        
    def call(self, inputs):
        residual = inputs
        x = self.c1(inputs)
        x = self.b1(x)
        x = self.a1(x)
        
        x = self.c2(x)
        y = self.b2(x)
        
        if self.residual_path:
            residual = self.down_c1(inputs)
            residual = self.down_b1(residual)
            
        out = self.a2(y+residual)
        return out
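# Editor's note: when the two paths have the same shape (residual_path=False), the
# raw input is added to the conv output (an identity shortcut); when the block
# changes width or downsamples (residual_path=True), a 1x1 convolution plus batch
# norm reshapes the shortcut so the element-wise addition y + residual lines up.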
    
# 2.2 Implementation of the ResNet18 model
class ResNet18(Model):
    def __init__(self, block_list, initial_filters=64):
        super(ResNet18, self).__init__()
        self.num_blocks = len(block_list)
        self.block_list = block_list
        self.out_filters = initial_filters
        
        self.c1 = Conv2D(self.out_filters, (3, 3), strides=1, padding='same',
                         use_bias=False,
                         kernel_initializer='he_normal')
        self.b1 = tf.keras.layers.BatchNormalization()
        self.a1 = Activation('relu')
        
        self.blocks = tf.keras.models.Sequential()
        for block_id in range(len(block_list)):
            for layer_id in range(block_list[block_id]):
                
                if block_id != 0 and layer_id == 0:
                    block = ResnetBlock(self.out_filters, strides=2, residual_path=True)
                    
                else:
                    block = ResnetBlock(self.out_filters, residual_path=False)
                    
                self.blocks.add(block)
                
            self.out_filters *= 2
        
        self.p1 = GlobalAveragePooling2D()
        self.f1 = Dense(10, activation='softmax')  # softmax to match from_logits=False pipelines
        
        
    def call(self, inputs):
        x = self.c1(inputs)
        x = self.b1(x)
        x = self.a1(x)

        x = self.blocks(x)
        x = self.p1(x)
        y = self.f1(x)
        return y

model = ResNet18([2, 2, 2, 2])
```

Summary

Angles for improving training accuracy (a sketch of the first two follows this list):
1. Data augmentation
2. Learning-rate schedules
3. The choice of batch_size
4. Model parameter initialization

In short, both the training method and the hyperparameter settings matter.
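A minimal, hedged sketch of items 1 and 2, reusing x_train/y_train and model from the Baseline script above (the augmentation parameters and decay schedule are illustrative assumptions, not tuned values):

```python
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# 1. Data augmentation: random rotations/shifts/flips of the training images.
datagen = ImageDataGenerator(rotation_range=15,
                             width_shift_range=0.1,
                             height_shift_range=0.1,
                             horizontal_flip=True)

# 2. Learning-rate schedule: halve the learning rate every 10 epochs (illustrative).
def scheduler(epoch, lr):
    return lr * 0.5 if epoch > 0 and epoch % 10 == 0 else lr

lr_callback = tf.keras.callbacks.LearningRateScheduler(scheduler)

model.fit(datagen.flow(x_train, y_train, batch_size=32),
          epochs=50,
          validation_data=(x_test, y_test),
          callbacks=[lr_callback])
```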
