Implementing Transfer Learning with the Pretrained Xception Network

Using the cats-vs-dogs dataset as an example

Note: the runtime environment was described in my first blog post. The code targets TensorFlow 2.0 and later and was written in a Jupyter notebook. The dataset is shared on Baidu Netdisk:
Link: https://pan.baidu.com/s/1M4hvpaStFVKtFALSHFlvGg
Extraction code: sztk

import tensorflow as tf
from tensorflow import keras          # used below for keras.applications and keras.optimizers
from tensorflow.keras import layers   # used below for layers.Dense
import matplotlib.pyplot as plt
# display plots inline in a Jupyter notebook
%matplotlib inline
import numpy as np
import glob
import os

# Step 1: build the training data
# 1.1 Load the data
train_image_path = glob.glob('./dc_2000/train/*/*.jpg')  # the dataset has already been placed in the same folder as the code
train_image_label = [int(path.split('\\')[1]=='cat') for path in train_image_path]  # list comprehension: derive each image's label from its path (cat = 1, dog = 0); '\\' assumes Windows path separators
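# Aside (my addition, not in the original): path.split('\\') assumes Windows
# path separators. A portable sketch using os.path, assuming the same
# dc_2000 directory layout, derives the same labels on any OS:
portable_labels = [int(os.path.basename(os.path.dirname(p)) == 'cat')
                   for p in train_image_path]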

# 1.2 Define the data loading / preprocessing functions
def load_preprocess_image(path, label):
    image = tf.io.read_file(path)
    image = tf.image.decode_jpeg(image, channels=3)
    image = tf.image.resize(image, [256, 256])
    image = tf.cast(image, tf.float32)
    image = image / 255
    label = tf.reshape(label, [1])   # reshape each scalar label to shape [1], so a batch of labels has shape (batch, 1), e.g. [1, 1, 0] -> [[1], [1], [0]]
    return image, label

def load_preprocess_test_image(path, label):
    image = tf.io.read_file(path)
    image = tf.image.decode_jpeg(image, channels=3)   # channels=3 keeps test images consistent with training
    image = tf.image.resize(image, [256, 256])
    image = tf.cast(image, tf.float32)
    image = image / 255
    label = tf.reshape(label, [1])
    return image, label
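# Optional sanity check (my addition, not in the original): preprocess one
# file and display it to verify the decoding, resizing and scaling.
sample_img, sample_label = load_preprocess_image(train_image_path[0],
                                                 train_image_label[0])
plt.imshow(sample_img.numpy())
plt.title('label = {}'.format(sample_label.numpy()))
plt.show()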
# 1.3 Create a dataset from the paths and labels
train_image_ds = tf.data.Dataset.from_tensor_slices((train_image_path, train_image_label))

AUTOTUNE = tf.data.experimental.AUTOTUNE
train_image_ds = train_image_ds.map(load_preprocess_image, num_parallel_calls=AUTOTUNE)  # AUTOTUNE lets tf.data pick the parallelism based on the available CPU

# 1.4 Shuffle and batch the training data
BATCH_SIZE = 16
train_count = len(train_image_path)
train_image_ds = train_image_ds.shuffle(train_count).batch(BATCH_SIZE)
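# Optional (my suggestion, not in the original): prefetch overlaps data
# preprocessing with model execution for better throughput.
train_image_ds = train_image_ds.prefetch(AUTOTUNE)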

# Step 2: build the test data

test_image_path = glob.glob('./dc_2000/test/*/*.jpg')
test_image_label = [int(path.split('\\')[1] == 'cat') for path in test_image_path]

test_image_ds = tf.data.Dataset.from_tensor_slices((test_image_path, test_image_label))
test_image_ds = test_image_ds.map(load_preprocess_test_image, num_parallel_calls=AUTOTUNE)
test_image_ds = test_image_ds.repeat().batch(BATCH_SIZE)  # repeat() makes the test set loop indefinitely, so validation_steps batches can be drawn every epoch

test_count = len(test_image_path)

# Step 3: use the Xception network built into Keras
covn_base = keras.applications.xception.Xception(
    weights='imagenet',        # ImageNet-pretrained weights
    include_top=False,         # drop the original classification head
    input_shape=(256, 256, 3),
    pooling='avg'              # global average pooling over the final feature maps
)
covn_base.trainable = False    # freeze the convolutional base during the first training phase
# covn_base.summary()
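# With include_top=False and pooling='avg', the base maps each image to a
# 2048-dim feature vector; a quick check (my addition):
print(covn_base.output_shape)   # expected: (None, 2048)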
# 3.2 Build the model
model = keras.Sequential()
model.add(covn_base)
model.add(layers.Dense(256, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))   # single sigmoid unit for binary (cat/dog) classification
# 3.3 Compile the model
model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=0.0005),   # learning_rate replaces the deprecated lr argument
    loss='binary_crossentropy',
    metrics=['accuracy']
)
initial_epochs = 5
# 3.4 Train the model
history = model.fit(
    train_image_ds,
    steps_per_epoch=train_count // BATCH_SIZE,   # floor division: number of batches per epoch
    epochs=initial_epochs,
    validation_data=test_image_ds,
    validation_steps=test_count // BATCH_SIZE
)

# After only the first epoch, validation accuracy already exceeds 99%
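# Optional (my addition, not in the original): since matplotlib is already
# imported, plot the training curves from the History object; the key names
# assume metrics=['accuracy'] as compiled above.
plt.plot(history.epoch, history.history['accuracy'], label='accuracy')
plt.plot(history.epoch, history.history['val_accuracy'], label='val_accuracy')
plt.xlabel('epoch')
plt.legend()
plt.show()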

# Step 4: fine-tuning
# 4.1 Unfreeze the convolutional base
covn_base.trainable = True
# len(covn_base.layers)   # 133
# 4.2 Keep the early layers frozen and unfreeze only the last 33
fine_tune_at = -33
for layer in covn_base.layers[:fine_tune_at]:
    layer.trainable = False
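# Quick check (my addition): only the last 33 layers should now be trainable.
print(len([l for l in covn_base.layers if l.trainable]))   # expected: 33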
# 4.3 Recompile with a 10x smaller learning rate, to avoid destroying the pretrained features
model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=0.0005 / 10),
    loss='binary_crossentropy',
    metrics=['accuracy']
)

# 4.4 Set up the parameters and continue training
fine_tune_epochs = 5
total_epochs = initial_epochs + fine_tune_epochs
history = model.fit(
    train_image_ds,
    steps_per_epoch=train_count // BATCH_SIZE,
    epochs=total_epochs,
    initial_epoch=initial_epochs,   # resume the epoch counter where phase 1 stopped
    validation_data=test_image_ds,
    validation_steps=test_count // BATCH_SIZE
)
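# Optional wrap-up (my addition, not in the original): evaluate on the
# repeating test set and save the fine-tuned model; the file name is a
# hypothetical choice.
loss, acc = model.evaluate(test_image_ds, steps=test_count // BATCH_SIZE)
print('test accuracy: {:.4f}'.format(acc))
model.save('xception_cats_dogs.h5')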