TensorFlow 2.0 Beginner Tutorial 16: Transfer Learning and Fine-Tuning with Classic CNN Architectures

Keras provides predefined implementations of classic convolutional neural network architectures, such as:

Xception

VGG16

VGG19

ResNet, ResNetV2

InceptionV3

InceptionResNetV2

MobileNet

MobileNetV2

DenseNet

NASNet

Downloading pretrained models: tf.keras.applications

tf.keras.applications contains predefined classic convolutional network architectures such as VGG16, VGG19, ResNet, MobileNet, and InceptionV3. We can instantiate these networks directly (and even load pretrained weights) without defining the architecture by hand.

For example, the following code instantiates a MobileNetV2 network:

import tensorflow as tf

# load the model
model = tf.keras.applications.MobileNetV2()

When this code runs, TensorFlow automatically downloads the MobileNetV2 model from the network, so an internet connection is required on the first run. Each architecture has its own detailed settings, but some commonly used parameters are shared (a combined example follows the list below):

  • input_shape: the shape of the input tensor (excluding the batch dimension);

  • include_top: whether to include the fully connected layers at the top of the network; defaults to True;

  • weights: the pretrained weights, defaulting to 'imagenet', which loads weights pretrained on the ImageNet dataset; set it to None for random initialization;

  • classes: the number of classes, defaulting to 1000. To change it, set include_top to False and append your own Dense layer with the desired number of classes.

Other architecture-specific parameters are documented in the Keras docs: https://keras.io/applications/
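For instance, here is a minimal sketch combining these parameters; the five-class Dense head and the 100×100 input size are illustrative choices, not requirements:

import tensorflow as tf

# Load VGG16 without its 1000-class top, with ImageNet-pretrained
# weights, then attach a custom five-class softmax head.
base = tf.keras.applications.VGG16(
    input_shape=(100, 100, 3),  # input shape, batch dimension excluded
    include_top=False,          # drop the original fully connected top
    weights='imagenet')
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation='softmax')  # custom class count
])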

Example: training on the tf_flowers five-class dataset with pretrained models

import cv2
import os
import tensorflow as tf
import numpy as np
path=r'C:\Users\zhuxi\Desktop\tf2.0\tf2.0_2\flower_photos/'
w=100
h=100
c=3
def read_img(path):
    imgs=[]
    labels=[]
    # each subdirectory of path holds the images of one class
    cate=[path+x for x in os.listdir(path) if os.path.isdir(path+x)]
    for idx,i in enumerate(cate):
        for j in os.listdir(i):
            im = cv2.imread(i+'/'+j)
            img = cv2.resize(im, (w, h))/255.  # resize and scale to [0, 1]
            #print('reading the images:%s'%(i+'/'+j))
            imgs.append(img)
            labels.append(idx)
    return np.asarray(imgs,np.float32),np.asarray(labels,np.int32)
data,label=read_img(path)
num_example=data.shape[0]  # data.shape is (3029, 100, 100, 3)
arr=np.arange(num_example) # index array 0,1,...,3028
np.random.shuffle(arr)     # shuffle the order
data=data[arr]
label=label[arr]
print(label)
[1 1 1 ... 0 2 4]
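As an aside, the same directory layout (one subfolder per class) can also be read with Keras' ImageDataGenerator; this is an alternative sketch and is not used in the rest of the tutorial:

from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Yields batches of resized, rescaled images with one-hot labels,
# inferring the classes from the subfolder names under `path`.
gen = ImageDataGenerator(rescale=1/255.)
train_it = gen.flow_from_directory(path, target_size=(100, 100),
                                   batch_size=64, class_mode='categorical')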

One-hot encode the labels

def to_one_hot(label):
    return tf.one_hot(label,5)
label_oh = to_one_hot(label)
ratio=0.8
s=int(num_example*ratio)  # split index for an 80/20 train/val split
x_train=data[:s]
y_train=label_oh.numpy()[:s]
x_val=data[s:]
y_val=label_oh.numpy()[s:]
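For reference, a tiny check of what tf.one_hot produces; the label values here are made up:

import tensorflow as tf

# Integer class ids become one-hot rows of length `depth`.
print(tf.one_hot([0, 2, 4], depth=5).numpy())
# [[1. 0. 0. 0. 0.]
#  [0. 0. 1. 0. 0.]
#  [0. 0. 0. 0. 1.]]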
Create the base model from MobileNet V2, developed at Google. It is pretrained on ImageNet, a large dataset of 1.4 million images spanning 1000 classes.
base_model = tf.keras.applications.MobileNetV2(include_top=False,weights='imagenet')

I. Transfer learning: using the pretrained model as a feature extractor

Setting layer.trainable = False prevents a layer's weights from being updated during training. MobileNet V2 has many layers, so setting the whole model's trainable flag to False freezes all of them at once.

base_model.trainable = False
model1 = tf.keras.Sequential([
  base_model,
  tf.keras.layers.GlobalAveragePooling2D(),
  tf.keras.layers.Dense(5,activation="softmax")
])
model1.summary()
Model: "sequential_5"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
mobilenetv2_1.00_224 (Model) (None, None, None, 1280)  2257984   
_________________________________________________________________
global_average_pooling2d_5 ( (None, 1280)              0         
_________________________________________________________________
dense_5 (Dense)              (None, 5)                 6405      
=================================================================
Total params: 2,264,389
Trainable params: 6,405
Non-trainable params: 2,257,984
_________________________________________________________________

The keras.layers.GlobalAveragePooling2D() layer averages the extracted feature maps over the h * w spatial dimensions, turning each example's features into a vector of length 1280. Because the averaging is spatial, this works regardless of the input image size; a small shape check follows below.
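Here is a minimal sketch of that shape transformation, using a random tensor in place of real features:

import tensorflow as tf

# (batch, h, w, channels) -> (batch, channels): average over h and w.
x = tf.random.normal((2, 7, 7, 1280))  # stand-in for MobileNetV2 features
pooled = tf.keras.layers.GlobalAveragePooling2D()(x)
print(pooled.shape)  # (2, 1280)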

model1.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
              loss = tf.keras.losses.categorical_crossentropy,
              metrics=[tf.keras.metrics.categorical_accuracy])
%%time
history = model1.fit(x_train, y_train, batch_size=64, epochs=10, validation_split=0.1)
Train on 2642 samples, validate on 294 samples
Epoch 1/10
2642/2642 [==============================] - 37s 14ms/sample - loss: 1.2908 - categorical_accuracy: 0.6045 - val_loss: 4.7284 - val_categorical_accuracy: 0.3776
Epoch 2/10
2642/2642 [==============================] - 35s 13ms/sample - loss: 0.7311 - categorical_accuracy: 0.7525 - val_loss: 3.4591 - val_categorical_accuracy: 0.5068
Epoch 3/10
2642/2642 [==============================] - 35s 13ms/sample - loss: 0.6268 - categorical_accuracy: 0.7914 - val_loss: 3.0395 - val_categorical_accuracy: 0.5204
Epoch 4/10
2642/2642 [==============================] - 36s 14ms/sample - loss: 0.6175 - categorical_accuracy: 0.7933 - val_loss: 3.8630 - val_categorical_accuracy: 0.4184
Epoch 5/10
2642/2642 [==============================] - 35s 13ms/sample - loss: 0.6130 - categorical_accuracy: 0.7960 - val_loss: 2.9677 - val_categorical_accuracy: 0.5612
Epoch 6/10
2642/2642 [==============================] - 35s 13ms/sample - loss: 0.5879 - categorical_accuracy: 0.8138 - val_loss: 2.7863 - val_categorical_accuracy: 0.5238
Epoch 7/10
2642/2642 [==============================] - 34s 13ms/sample - loss: 0.6200 - categorical_accuracy: 0.8100 - val_loss: 3.6565 - val_categorical_accuracy: 0.5510
Epoch 8/10
2642/2642 [==============================] - 34s 13ms/sample - loss: 0.5799 - categorical_accuracy: 0.8255 - val_loss: 5.5935 - val_categorical_accuracy: 0.4796
Epoch 9/10
2642/2642 [==============================] - 34s 13ms/sample - loss: 0.5111 - categorical_accuracy: 0.8297 - val_loss: 4.0487 - val_categorical_accuracy: 0.5068
Epoch 10/10
2642/2642 [==============================] - 34s 13ms/sample - loss: 0.4591 - categorical_accuracy: 0.8486 - val_loss: 4.2671 - val_categorical_accuracy: 0.4898
Wall time: 5min 48s
model1.evaluate(x_val,y_val,verbose=2)
734/1 - 2s - loss: 4.8298 - categorical_accuracy: 0.4891
[4.444968402872943, 0.4891008]

II. Fine-tuning

The MobileNet V2 model above performs poorly, so below we switch to an InceptionV3 model and fine-tune it, using two approaches:

1. Use all layers of the pretrained model to extract features and train them along with the downstream task

IMG_SHAPE = (w,h,c)
inception_model1 = tf.keras.applications.InceptionV3(input_shape=IMG_SHAPE,
                                               include_top=False,weights='imagenet')
model2 = tf.keras.Sequential([
  inception_model1,
  tf.keras.layers.GlobalAveragePooling2D(),
  tf.keras.layers.Dense(5,activation="softmax")
])
model2.summary()
Model: "sequential_8"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
inception_v3 (Model)         (None, 1, 1, 2048)        21802784  
_________________________________________________________________
global_average_pooling2d_8 ( (None, 2048)              0         
_________________________________________________________________
dense_8 (Dense)              (None, 5)                 10245     
=================================================================
Total params: 21,813,029
Trainable params: 21,778,597
Non-trainable params: 34,432
_________________________________________________________________
model2.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.0001),
              loss = tf.keras.losses.categorical_crossentropy,
              metrics=[tf.keras.metrics.categorical_accuracy])
%%time
history = model2.fit(x_train, y_train, batch_size=64, epochs=10, validation_split=0.2)
Train on 2348 samples, validate on 588 samples
Epoch 1/10
2348/2348 [==============================] - 149s 63ms/sample - loss: 1.4799 - categorical_accuracy: 0.3897 - val_loss: 1.1898 - val_categorical_accuracy: 0.5391
Epoch 2/10
2348/2348 [==============================] - 136s 58ms/sample - loss: 0.9350 - categorical_accuracy: 0.6780 - val_loss: 0.9219 - val_categorical_accuracy: 0.6650
Epoch 3/10
2348/2348 [==============================] - 136s 58ms/sample - loss: 0.4966 - categorical_accuracy: 0.8437 - val_loss: 0.7618 - val_categorical_accuracy: 0.7075
Epoch 4/10
2348/2348 [==============================] - 139s 59ms/sample - loss: 0.2652 - categorical_accuracy: 0.9225 - val_loss: 0.7039 - val_categorical_accuracy: 0.7296
Epoch 5/10
2348/2348 [==============================] - 142s 60ms/sample - loss: 0.1593 - categorical_accuracy: 0.9608 - val_loss: 0.6753 - val_categorical_accuracy: 0.7568
Epoch 6/10
2348/2348 [==============================] - 140s 60ms/sample - loss: 0.0949 - categorical_accuracy: 0.9727 - val_loss: 0.6741 - val_categorical_accuracy: 0.7704
Epoch 7/10
2348/2348 [==============================] - 140s 60ms/sample - loss: 0.0679 - categorical_accuracy: 0.9838 - val_loss: 0.6919 - val_categorical_accuracy: 0.7738
Epoch 8/10
2348/2348 [==============================] - 137s 58ms/sample - loss: 0.0418 - categorical_accuracy: 0.9889 - val_loss: 0.7670 - val_categorical_accuracy: 0.7619
Epoch 9/10
2348/2348 [==============================] - 138s 59ms/sample - loss: 0.0383 - categorical_accuracy: 0.9911 - val_loss: 0.7812 - val_categorical_accuracy: 0.7568
Epoch 10/10
2348/2348 [==============================] - 137s 58ms/sample - loss: 0.0219 - categorical_accuracy: 0.9953 - val_loss: 0.8412 - val_categorical_accuracy: 0.7534
Wall time: 23min 12s
%%time
model2.evaluate(x_val,y_val,verbose=2)
734/1 - 11s - loss: 0.9320 - categorical_accuracy: 0.7561
Wall time: 11.1 s
[0.9241332522205176, 0.7561308]

2. Freeze the pretrained model's base layers (the image features those layers extract apply to most images, so we do not fine-tune them), and unfreeze and train only the later, more task-specific layers.

IMG_SHAPE = (w,h,c)
inception_model2 = tf.keras.applications.InceptionV3(input_shape=IMG_SHAPE,
                                               include_top=False,weights='imagenet')
print(len(inception_model2.layers))
311

Make the first 100 layers non-trainable and leave the later layers trainable:

inception_model2.trainable = True

# Fine-tune from this layer onwards
fine_tune_at = 100

# Freeze all the layers before the `fine_tune_at` layer
for layer in inception_model2.layers[:fine_tune_at]:
    layer.trainable =  False
model3 = tf.keras.Sequential([
  inception_model2,
  tf.keras.layers.GlobalAveragePooling2D(),
  tf.keras.layers.Dense(5,activation="softmax")
])
model3.summary()
Model: "sequential_9"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
inception_v3 (Model)         (None, 1, 1, 2048)        21802784  
_________________________________________________________________
global_average_pooling2d_9 ( (None, 2048)              0         
_________________________________________________________________
dense_9 (Dense)              (None, 5)                 10245     
=================================================================
Total params: 21,813,029
Trainable params: 19,636,613
Non-trainable params: 2,176,416
_________________________________________________________________
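As an optional sanity check (a sketch, not part of the original run), you can count how many layers ended up frozen:

# With fine_tune_at = 100, the first 100 layers should be frozen.
frozen = sum(not layer.trainable for layer in inception_model2.layers)
print(frozen)  # expect 100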
model3.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.0001),
              loss = tf.keras.losses.categorical_crossentropy,
              metrics=[tf.keras.metrics.categorical_accuracy])
%%time
history = model3.fit(x_train, y_train, batch_size=64, epochs=10, validation_split=0.2)
Train on 2348 samples, validate on 588 samples
Epoch 1/10
2348/2348 [==============================] - 130s 55ms/sample - loss: 1.4864 - categorical_accuracy: 0.3859 - val_loss: 1.3920 - val_categorical_accuracy: 0.4847
Epoch 2/10
2348/2348 [==============================] - 123s 52ms/sample - loss: 0.9260 - categorical_accuracy: 0.6819 - val_loss: 1.0338 - val_categorical_accuracy: 0.6071
Epoch 3/10
2348/2348 [==============================] - 120s 51ms/sample - loss: 0.5384 - categorical_accuracy: 0.8271 - val_loss: 0.9815 - val_categorical_accuracy: 0.6241
Epoch 4/10
2348/2348 [==============================] - 123s 53ms/sample - loss: 0.3136 - categorical_accuracy: 0.9118 - val_loss: 0.9590 - val_categorical_accuracy: 0.6463
Epoch 5/10
2348/2348 [==============================] - 121s 51ms/sample - loss: 0.1793 - categorical_accuracy: 0.9591 - val_loss: 0.8964 - val_categorical_accuracy: 0.6735
Epoch 6/10
2348/2348 [==============================] - 121s 52ms/sample - loss: 0.1124 - categorical_accuracy: 0.9749 - val_loss: 0.9307 - val_categorical_accuracy: 0.6922
Epoch 7/10
2348/2348 [==============================] - 122s 52ms/sample - loss: 0.0731 - categorical_accuracy: 0.9847 - val_loss: 0.9577 - val_categorical_accuracy: 0.7058
Epoch 8/10
2348/2348 [==============================] - 122s 52ms/sample - loss: 0.0611 - categorical_accuracy: 0.9834 - val_loss: 0.9927 - val_categorical_accuracy: 0.6837
Epoch 9/10
2348/2348 [==============================] - 122s 52ms/sample - loss: 0.0361 - categorical_accuracy: 0.9923 - val_loss: 1.1642 - val_categorical_accuracy: 0.6922
Epoch 10/10
2348/2348 [==============================] - 122s 52ms/sample - loss: 0.0379 - categorical_accuracy: 0.9928 - val_loss: 1.0999 - val_categorical_accuracy: 0.6956
Wall time: 20min 26s
%%time
model3.evaluate(x_val,y_val,verbose=2)
734/1 - 17s - loss: 0.9806 - categorical_accuracy: 0.6649
Wall time: 16.6 s
[1.2492227434137537, 0.6648501]
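Finally, a short sketch of running inference with one of the trained models; picking model2 and a single validation image here is purely illustrative:

import numpy as np

# Predict class probabilities for one image and take the argmax.
probs = model2.predict(x_val[:1])  # shape (1, 5)
print(np.argmax(probs, axis=1))    # predicted class index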