[AI Project] Fine-Grained Recognition of 10 Monkey Species with Deep Learning

Task Description

In this competition, entrants must accurately identify 10 species of monkeys. The dataset contains only images; there are no bounding boxes or other annotations.

Environment

!nvidia-smi
Fri Mar 27 11:01:18 2020       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.64.00    Driver Version: 418.67       CUDA Version: 10.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla P100-PCIE...  Off  | 00000000:00:04.0 Off |                    0 |
| N/A   39C    P0    27W / 250W |      0MiB / 16280MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

Download the Dataset

!wget https://static.leiphone.com/48monkey.zip
--2020-03-27 11:01:28--  https://static.leiphone.com/48monkey.zip
Resolving static.leiphone.com (static.leiphone.com)... 47.246.19.234, 47.246.19.229, 47.246.19.231, ...
Connecting to static.leiphone.com (static.leiphone.com)|47.246.19.234|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 573224419 (547M) [application/zip]
Saving to: ‘48monkey.zip’

48monkey.zip        100%[===================>] 546.67M  31.7MB/s    in 16s     

2020-03-27 11:01:51 (33.9 MB/s) - ‘48monkey.zip’ saved [573224419/573224419]
!unzip 48monkey.zip
import os

train_set_dir = "train/"

test_set_dir = "test/"

print(len(os.listdir(train_set_dir)))

print(len(os.listdir(test_set_dir)))
1096
274

1. Explore the Data

import os

bird_dir = "./"  # data root directory
x_train_path = os.path.join(bird_dir,"train")
x_test_path = os.path.join(bird_dir,"test")


y_train_path = os.path.join(bird_dir,"train.csv")

import pandas as pd

y_train_df = pd.read_csv(y_train_path)
y_train_df.head()
  filename  label
0    0.jpg      9
1    1.jpg      3
2    2.jpg      0
3    3.jpg      1
4    4.jpg      5
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline

sns.countplot(y_train_df["label"])
plt.xlabel("Label")
plt.title("Monkey")
Text(0.5, 1.0, 'Monkey')

[figure: output_12_1.png, countplot of training label frequencies]

x_train_img_path = y_train_df["filename"]
y_train = y_train_df["label"] 


print(x_train_img_path[:5])
print(y_train[:5])


0    0.jpg
1    1.jpg
2    2.jpg
3    3.jpg
4    4.jpg
Name: filename, dtype: object
0    9
1    3
2    0
3    1
4    5
Name: label, dtype: int64
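Before loading pixels, it is worth a quick sanity check that every file referenced in train.csv actually exists under train/, since cv2.imread silently returns None for missing paths (a small sketch):

# Sanity check: every filename listed in train.csv should exist in train/
missing = [f for f in x_train_img_path
           if not os.path.exists(os.path.join(x_train_path, f))]
print(len(missing), "missing files")  # expect 0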

2. Load the Data

# Define an image-loading function
import cv2
import numpy as np

def get_img(file_path,img_rows,img_cols):

    img = cv2.imread(file_path)            # OpenCV loads 3-channel BGR by default
    img = cv2.resize(img,(img_rows,img_cols))
    if img.shape[2] == 1:
      # Defensive branch: with default imread flags this is never hit,
      # since decoded images always come back with 3 channels
      img = np.dstack([img,img,img])
    else:
      img = cv2.cvtColor(img,cv2.COLOR_BGR2RGB)
    img = img.astype(np.float32)

    return img
# Load the training set
x_train = []
for img_name in x_train_img_path:
    img = get_img(os.path.join(x_train_path,img_name),296,296)
    x_train.append(img)

x_train = np.array(x_train,np.float32)
# Load the test set
import re

x_test_img_path = os.listdir(x_test_path)
x_test_img_path = sorted(x_test_img_path,key = lambda i:int(re.match(r"(\d+)",i).group()))

print(x_test_img_path)

x_test = []
for img_name in x_test_img_path:
    img = get_img(os.path.join(x_test_path,img_name),296,296)
    x_test.append(img)

x_test = np.array(x_test,np.float32)
['0.jpg', '1.jpg', '2.jpg', '3.jpg', '4.jpg', '5.jpg', ..., '271.jpg', '272.jpg', '273.jpg']
print(x_train.shape)
print(y_train.shape)

print(x_test.shape)
(1096, 296, 296, 3)
(1096,)
(274, 296, 296, 3)
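Holding everything in memory as float32 is feasible here, but worth quantifying: 1096 × 296 × 296 × 3 × 4 bytes is about 1.07 GiB for the training set alone. A quick check:

# Approximate in-memory footprint of the float32 arrays, in GiB
print("x_train: %.2f GiB" % (x_train.nbytes / 2**30))
print("x_test:  %.2f GiB" % (x_test.nbytes / 2**30))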

3. Inspect the Data

import matplotlib.pyplot as plt
%matplotlib inline

plt.imshow(x_train[0]/255)
print(y_train[0])
9

[figure: output_20_1.png, sample training image x_train[0] (label 9)]

X_train = x_train
Y_train = y_train

print(X_train.shape)
print(Y_train.shape)


print(x_test.shape)
(1096, 296, 296, 3)
(1096,)
(274, 296, 296, 3)
classes = np.unique(y_train)   # avoid shadowing the built-in sum()
n_classes = len(classes)
# Histogram showing how the training images are distributed across classes
def plot_y_train_hist():
  fig = plt.figure(figsize=(15,5))
  ax = fig.add_subplot(1,1,1)
  hist = ax.hist(Y_train,bins=n_classes)
  ax.set_title("frequency of each monkey class")
  ax.set_xlabel("monkey class")
  ax.set_ylabel("frequency")
  plt.show()
  return hist

hist = plot_y_train_hist()

[figure: output_23_0.png, histogram of training label frequencies]

# One-hot encode the labels

from keras.utils import np_utils
#Y_train = np_utils.to_categorical(Y_train,n_classes)
y_train = np_utils.to_categorical(y_train,n_classes)

print("Shape after one-hot encoding:",y_train.shape)
Y_train = y_train
Using TensorFlow backend.

The default version of TensorFlow in Colab will switch to TensorFlow 2.x on the 27th of March, 2020.
We recommend you upgrade now or ensure your notebook will continue to use TensorFlow 1.x via the %tensorflow_version 1.x magic: more info.

Shape after one-hot encoding: (1096, 10)

# Split into training and validation sets
from sklearn.model_selection import train_test_split

x_train,x_valid,y_train,y_valid = train_test_split(X_train,Y_train,test_size=0.2,random_state=2019)



print(x_train.shape)
print(y_train.shape)

print(x_valid.shape)
print(y_valid.shape)

print(x_test.shape)
(876, 296, 296, 3)
(876, 10)
(220, 296, 296, 3)
(220, 10)
(274, 296, 296, 3)
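The split above is not stratified, so with only about 110 images per class some classes can end up under-represented in validation. A stratified variant (a sketch, not the original split) passes the integer class indices recovered from the one-hot labels:

# Stratified split sketch: keeps per-class proportions equal in train/valid.
# Y_train is one-hot here, so stratify on the integer class indices.
x_train, x_valid, y_train, y_valid = train_test_split(
    X_train, Y_train,
    test_size=0.2,
    random_state=2019,
    stratify=Y_train.argmax(axis=1))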

4. Define the Model

# Import the libraries we need
from keras import optimizers, Input
from keras.applications import  imagenet_utils

from keras.preprocessing.image import ImageDataGenerator
from keras.models import *
from keras.layers import *
from keras.optimizers import *
from keras.callbacks import *
from keras.applications import *

from sklearn.preprocessing import *
from sklearn.model_selection import *
from sklearn.metrics import *
# Plot the loss and accuracy curves over training
import matplotlib.pyplot as plt
%matplotlib inline

def history_plot(history_fit):
    plt.figure(figsize=(12,6))
    
    # summarize history for accuracy
    plt.subplot(121)
    plt.plot(history_fit.history["acc"])
    plt.plot(history_fit.history["val_acc"])
    plt.title("model accuracy")
    plt.ylabel("accuracy")
    plt.xlabel("epoch")
    plt.legend(["train", "valid"], loc="upper left")
    
    # summarize history for loss
    plt.subplot(122)
    plt.plot(history_fit.history["loss"])
    plt.plot(history_fit.history["val_loss"])
    plt.title("model loss")
    plt.ylabel("loss")
    plt.xlabel("epoch")
    plt.legend(["train", "test"], loc="upper left")
    
    plt.show()
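Note that history_plot assumes the pre-2.3 Keras metric keys acc/val_acc; Keras 2.3+ renamed them to accuracy/val_accuracy. A version-agnostic lookup (a sketch; the helper name is made up):

# Return whichever accuracy keys this Keras version recorded
def acc_keys(history_fit):
    key = "acc" if "acc" in history_fit.history else "accuracy"
    return key, "val_" + key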
# Fine-tune helper
def fine_tune_model(model, optimizer, batch_size, epochs, freeze_num):
    '''
    Description: fine-tune the given pre-trained model and save the best weights in .hdf5 format.

    model: the model to tune (VGG16, ResNet50, ...)

    optimizer: optimizer for the fine-tune-all-layers stage (the first stage could default to Adadelta)
    batch_size: mini-batch size; 32/64/128 recommended
    epochs: number of epochs for the fine-tune-all-layers stage
    freeze_num: number of layers to freeze during the first stage
    '''

    # datagen = ImageDataGenerator(
    #     rescale=1./255,
    #     # shear_range=0.2,
    #     # zoom_range=0.2,
    #     # horizontal_flip=True,
    #     # vertical_flip=True,
    #     # fill_mode="nearest"
    #   )

    # datagen.fit(X_train)


    # Stage 1: train only the fully connected head (randomly initialized weights)
    # with the convolutional layers frozen

    for layer in model.layers[:freeze_num]:
        layer.trainable = False

    model.compile(optimizer=optimizer,
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])

    # model.fit_generator(datagen.flow(x_train,y_train,batch_size=batch_size),
    #                     steps_per_epoch=len(x_train)//batch_size,
    #                     epochs=3,
    #                     shuffle=True,
    #                     verbose=1,
    #                     validation_data=datagen.flow(x_valid, y_valid))
    model.fit(x_train,
         y_train,
         batch_size=batch_size,
         epochs=10,
         shuffle=True,
         verbose=1,
         validation_data=(x_valid,y_valid)
        )
    print('Finish step_1')


    # Stage 2: fine-tune all layers
    for layer in model.layers[freeze_num:]:
        layer.trainable = True

    rc = ReduceLROnPlateau(monitor="val_acc",
                           factor=0.2,
                           patience=4,
                           verbose=1,
                           mode="max")

    model_name = model.name + ".hdf5"
    # mc = ModelCheckpoint(model_name,
    #            monitor="val_acc",
    #            save_best_only=True,
    #            verbose=1,
    #            mode='max')
    # el = EarlyStopping(monitor="val_acc",
    #           min_delta=0,
    #           patience=5,
    #           verbose=1,
    #           restore_best_weights=True)
    mc = ModelCheckpoint(model_name,
                         monitor="val_loss",
                         verbose=1,
                         save_best_only=True,
                         mode="min")
    el = EarlyStopping(monitor="val_loss",
                       patience=5,
                       verbose=1,
                       restore_best_weights=True,
                       mode="min")
    # Note: reduce_lr is defined but never passed to fit(); the callback list
    # below uses rc, which watches val_acc rather than val_loss.
    reduce_lr = ReduceLROnPlateau(monitor="val_loss",
                                  factor=0.5,
                                  patience=4,
                                  verbose=1,
                                  mode="min")
    model.compile(optimizer=optimizer,
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])

    # history_fit = model.fit_generator(datagen.flow(x_train,y_train,batch_size=32),
    #                                  steps_per_epoch=len(x_train)//32,
    #                                  epochs=epochs,
    #                                  shuffle=True,
    #                                  verbose=1,
    #                                  callbacks=[mc,rc,el],
    #                                  validation_data=datagen.flow(x_valid, y_valid))
    history_fit = model.fit(x_train,
                 y_train,
                 batch_size=batch_size,
                 epochs=epochs,
                 shuffle=True,
                 verbose=1,
                 validation_data=(x_valid,y_valid),
                 callbacks=[mc,rc,el])

    print('Finish fine-tune')
    return history_fit

5. VGG16 Model

# Define a VGG16 model
def vgg16_model(img_rows,img_cols):
  x = Input(shape=(img_rows, img_cols, 3))
  x = Lambda(imagenet_utils.preprocess_input)(x)
  base_model = VGG16(input_tensor=x,weights="imagenet",include_top=False, pooling='avg')
  x = base_model.output
  x = Dense(1024,activation="relu",name="fc1")(x)
  x = Dropout(0.5)(x)
  predictions = Dense(n_classes,activation="softmax",name="predictions")(x)

  vgg16_model = Model(inputs=base_model.input,outputs=predictions,name="vgg16")
  
  return vgg16_model
# Create the VGG16 model
img_rows, img_cols = 296, 296
vgg16_model = vgg16_model(img_rows,img_cols)
for i,layer in enumerate(vgg16_model.layers):
  print(i,layer.name)
0 input_2
1 lambda_2
2 block1_conv1
3 block1_conv2
4 block1_pool
5 block2_conv1
6 block2_conv2
7 block2_pool
8 block3_conv1
9 block3_conv2
10 block3_conv3
11 block3_pool
12 block4_conv1
13 block4_conv2
14 block4_conv3
15 block4_pool
16 block5_conv1
17 block5_conv2
18 block5_conv3
19 block5_pool
20 global_average_pooling2d_2
21 fc1
22 dropout_2
23 predictions
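From this listing, freeze_num=21 will freeze indices 0 through 20 (the input, the preprocessing Lambda, all VGG16 convolutional blocks, and the global average pooling), so stage 1 trains only fc1, the dropout and predictions. A quick way to see what a given freeze_num leaves trainable:

# List the layers that stage 1 would leave trainable for freeze_num=21
for layer in vgg16_model.layers[21:]:
    print("trainable in stage 1:", layer.name)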
optimizer = optimizers.Adam(lr=0.0001)
batch_size = 16
epochs = 30
freeze_num = 21


%time vgg16_history = fine_tune_model(vgg16_model,optimizer,batch_size,epochs,freeze_num)
Train on 876 samples, validate on 220 samples
Epoch 1/10
876/876 [==============================] - 6s 7ms/step - loss: 4.1137 - acc: 0.3094 - val_loss: 0.4585 - val_acc: 0.8273
Epoch 2/10
876/876 [==============================] - 5s 6ms/step - loss: 1.1727 - acc: 0.6826 - val_loss: 0.1822 - val_acc: 0.9455
Epoch 3/10
876/876 [==============================] - 5s 6ms/step - loss: 0.5507 - acc: 0.8333 - val_loss: 0.1218 - val_acc: 0.9591
Epoch 4/10
876/876 [==============================] - 5s 6ms/step - loss: 0.3395 - acc: 0.9007 - val_loss: 0.0884 - val_acc: 0.9727
Epoch 5/10
876/876 [==============================] - 5s 6ms/step - loss: 0.2719 - acc: 0.9144 - val_loss: 0.0710 - val_acc: 0.9818
Epoch 6/10
876/876 [==============================] - 5s 6ms/step - loss: 0.1892 - acc: 0.9372 - val_loss: 0.0703 - val_acc: 0.9636
Epoch 7/10
876/876 [==============================] - 5s 6ms/step - loss: 0.2021 - acc: 0.9326 - val_loss: 0.0604 - val_acc: 0.9864
Epoch 8/10
876/876 [==============================] - 5s 6ms/step - loss: 0.1327 - acc: 0.9566 - val_loss: 0.0595 - val_acc: 0.9818
Epoch 9/10
876/876 [==============================] - 5s 6ms/step - loss: 0.1064 - acc: 0.9635 - val_loss: 0.0528 - val_acc: 0.9864
Epoch 10/10
876/876 [==============================] - 5s 6ms/step - loss: 0.1019 - acc: 0.9658 - val_loss: 0.0577 - val_acc: 0.9773
Finish step_1
Train on 876 samples, validate on 220 samples
Epoch 1/30
876/876 [==============================] - 6s 6ms/step - loss: 0.1953 - acc: 0.9498 - val_loss: 0.0706 - val_acc: 0.9682

Epoch 00001: val_loss improved from inf to 0.07063, saving model to vgg16.hdf5
Epoch 2/30
876/876 [==============================] - 5s 6ms/step - loss: 0.1035 - acc: 0.9600 - val_loss: 0.0395 - val_acc: 0.9864

Epoch 00002: val_loss improved from 0.07063 to 0.03949, saving model to vgg16.hdf5
Epoch 3/30
876/876 [==============================] - 5s 6ms/step - loss: 0.0705 - acc: 0.9772 - val_loss: 0.0377 - val_acc: 0.9909

Epoch 00003: val_loss improved from 0.03949 to 0.03771, saving model to vgg16.hdf5
Epoch 4/30
876/876 [==============================] - 5s 6ms/step - loss: 0.0386 - acc: 0.9920 - val_loss: 0.0146 - val_acc: 0.9909

Epoch 00004: val_loss improved from 0.03771 to 0.01462, saving model to vgg16.hdf5
Epoch 5/30
876/876 [==============================] - 5s 6ms/step - loss: 0.0206 - acc: 0.9932 - val_loss: 0.0203 - val_acc: 0.9955

Epoch 00005: val_loss did not improve from 0.01462
Epoch 6/30
876/876 [==============================] - 5s 6ms/step - loss: 0.0165 - acc: 0.9954 - val_loss: 0.0195 - val_acc: 0.9955

Epoch 00006: val_loss did not improve from 0.01462
Epoch 7/30
876/876 [==============================] - 5s 6ms/step - loss: 0.0183 - acc: 0.9943 - val_loss: 0.0233 - val_acc: 0.9955

Epoch 00007: val_loss did not improve from 0.01462
Epoch 8/30
876/876 [==============================] - 5s 6ms/step - loss: 0.0119 - acc: 0.9966 - val_loss: 0.0165 - val_acc: 0.9955

Epoch 00008: val_loss did not improve from 0.01462
Epoch 9/30
876/876 [==============================] - 5s 6ms/step - loss: 0.0091 - acc: 0.9966 - val_loss: 0.0150 - val_acc: 0.9909

Epoch 00009: val_loss did not improve from 0.01462

Epoch 00009: ReduceLROnPlateau reducing learning rate to 1.9999999494757503e-05.
Restoring model weights from the end of the best epoch
Epoch 00009: early stopping
Finish fine-tune
CPU times: user 39.3 s, sys: 12.6 s, total: 51.9 s
Wall time: 1min 41s
history_plot(vgg16_history)

[figure: output_37_0.png, VGG16 training curves]
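The test set loaded earlier is never actually scored in this notebook; a minimal sketch of producing predictions from the saved best checkpoint might look like this. The filename/label column names simply mirror train.csv, and the exact submission format the competition expects is an assumption here, not taken from the rules.

# Sketch: predict on x_test with the best checkpoint and write a CSV.
# The "filename"/"label" columns mirror train.csv; the required submission
# format is an assumption.
vgg16_model.load_weights("vgg16.hdf5")
y_test_pred = vgg16_model.predict(x_test, batch_size=16, verbose=1)
submission = pd.DataFrame({"filename": x_test_img_path,
                           "label": y_test_pred.argmax(axis=1)})
submission.to_csv("submission.csv", index=False)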

6. EfficientNetB4

!pip install -U efficientnet
Requirement already up-to-date: efficientnet in /usr/local/lib/python3.6/dist-packages (1.1.0)
Requirement already satisfied, skipping upgrade: keras-applications<=1.0.8,>=1.0.7 in /usr/local/lib/python3.6/dist-packages (from efficientnet) (1.0.8)
Requirement already satisfied, skipping upgrade: scikit-image in /usr/local/lib/python3.6/dist-packages (from efficientnet) (0.16.2)



Requirement already satisfied, skipping upgrade: setuptools in /usr/local/lib/python3.6/dist-packages (from kiwisolver>=1.0.1->matplotlib!=3.0.0,>=2.0.0->scikit-image->efficientnet) (46.0.0)
# Import the EfficientNet module
from efficientnet.keras import EfficientNetB4
import keras.backend as K
# Fine-tune helper (re-defined for this section; note the stage-2 difference below)
def fine_tune_model(model, optimizer, batch_size, epochs, freeze_num):
    '''
    Description: fine-tune the given pre-trained model and save the best weights in .hdf5 format.

    model: the model to tune (VGG16, ResNet50, ...)

    optimizer: optimizer for the fine-tune-all-layers stage (the first stage could default to Adadelta)
    batch_size: mini-batch size; 32/64/128 recommended
    epochs: number of epochs for the fine-tune-all-layers stage
    freeze_num: number of layers to freeze during the first stage
    '''

    # datagen = ImageDataGenerator(
    #     rescale=1./255,
    #     # shear_range=0.2,
    #     # zoom_range=0.2,
    #     # horizontal_flip=True,
    #     # vertical_flip=True,
    #     # fill_mode="nearest"
    #   )

    # datagen.fit(X_train)


    # Stage 1: train only the fully connected head (randomly initialized weights)
    # with the convolutional layers frozen

    for layer in model.layers[:freeze_num]:
        layer.trainable = False

    model.compile(optimizer=optimizer,
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])

    # model.fit_generator(datagen.flow(x_train,y_train,batch_size=batch_size),
    #                     steps_per_epoch=len(x_train)//batch_size,
    #                     epochs=3,
    #                     shuffle=True,
    #                     verbose=1,
    #                     validation_data=datagen.flow(x_valid, y_valid))
    model.fit(x_train,
         y_train,
         batch_size=batch_size,
         epochs=10,
         shuffle=True,
         verbose=1,
         validation_data=(x_valid,y_valid)
        )
    print('Finish step_1')


    # Stage 2: fine-tune all layers
    # (unlike the version in Section 4, this one unfreezes every layer,
    # not just model.layers[freeze_num:])
    for layer in model.layers[:]:
        layer.trainable = True

    rc = ReduceLROnPlateau(monitor="val_acc",
                           factor=0.2,
                           patience=4,
                           verbose=1,
                           mode="max")

    model_name = model.name + ".hdf5"
    # mc = ModelCheckpoint(model_name,
    #            monitor="val_acc",
    #            save_best_only=True,
    #            verbose=1,
    #            mode='max')
    # el = EarlyStopping(monitor="val_acc",
    #           min_delta=0,
    #           patience=5,
    #           verbose=1,
    #           restore_best_weights=True)
    mc = ModelCheckpoint(model_name,
                         monitor="val_loss",
                         verbose=1,
                         save_best_only=True,
                         mode="min")
    el = EarlyStopping(monitor="val_loss",
                       patience=5,
                       verbose=1,
                       restore_best_weights=True,
                       mode="min")
    # Note: reduce_lr is defined but never passed to fit(); the callback list
    # below uses rc, which watches val_acc rather than val_loss.
    reduce_lr = ReduceLROnPlateau(monitor="val_loss",
                                  factor=0.5,
                                  patience=4,
                                  verbose=1,
                                  mode="min")
    model.compile(optimizer=optimizer,
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])

    # history_fit = model.fit_generator(datagen.flow(x_train,y_train,batch_size=32),
    #                                  steps_per_epoch=len(x_train)//32,
    #                                  epochs=epochs,
    #                                  shuffle=True,
    #                                  verbose=1,
    #                                  callbacks=[mc,rc,el],
    #                                  validation_data=datagen.flow(x_valid, y_valid))
    history_fit = model.fit(x_train,
                 y_train,
                 batch_size=batch_size,
                 epochs=epochs,
                 shuffle=True,
                 verbose=1,
                 validation_data=(x_valid,y_valid),
                 callbacks=[mc,rc,el])

    print('Finish fine-tune')
    return history_fit
# Define an EfficientNet model
def efficient_model(img_rows,img_cols):
  K.clear_session()
  x = Input(shape=(img_rows,img_cols,3))
  x = Lambda(imagenet_utils.preprocess_input)(x)
  
  base_model = EfficientNetB4(input_tensor=x,weights="imagenet",include_top=False,pooling="avg")
  x = base_model.output
  x = Dense(1024,activation="relu",name="fc1")(x)
  x = Dropout(0.5)(x)
  predictions = Dense(n_classes,activation="softmax",name="predictions")(x)

  eB_model = Model(inputs=base_model.input,outputs=predictions,name="eB4")

  return eB_model
# Create the EfficientNet model
img_rows,img_cols=296,296
eB_model = efficient_model(img_rows,img_cols)
Downloading data from https://github.com/Callidior/keras-applications/releases/download/efficientnet/efficientnet-b4_weights_tf_dim_ordering_tf_kernels_autoaugment_notop.h5
71892992/71892840 [==============================] - 1s 0us/step
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:4432: The name tf.random_uniform is deprecated. Please use tf.random.uniform instead.
for i,layer in enumerate(eB_model.layers):
  print(i,layer.name)
0 input_1
1 lambda_1
2 stem_conv
3 stem_bn

470 dropout_1
471 predictions
optimizer = optimizers.Adam(lr=0.0001)
batch_size = 16
epochs = 30
freeze_num = 469
eB_model_history  = fine_tune_model(eB_model,optimizer,batch_size,epochs,freeze_num)
Train on 876 samples, validate on 220 samples
Epoch 1/10
876/876 [==============================] - 21s 24ms/step - loss: 0.0795 - acc: 0.9726 - val_loss: 0.0368 - val_acc: 0.9864
Epoch 2/10
876/876 [==============================] - 7s 8ms/step - loss: 0.0553 - acc: 0.9840 - val_loss: 0.0366 - val_acc: 0.9909
Epoch 3/10
876/876 [==============================] - 7s 8ms/step - loss: 0.0533 - acc: 0.9840 - val_loss: 0.0345 - val_acc: 0.9864
Epoch 4/10
876/876 [==============================] - 7s 8ms/step - loss: 0.0541 - acc: 0.9829 - val_loss: 0.0366 - val_acc: 0.9864

Epoch 10/10
876/876 [==============================] - 7s 8ms/step - loss: 0.0341 - acc: 0.9932 - val_loss: 0.0270 - val_acc: 0.9909
Finish step_1
Train on 876 samples, validate on 220 samples
Epoch 1/30
876/876 [==============================] - 73s 83ms/step - loss: 0.1622 - acc: 0.9612 - val_loss: 0.1712 - val_acc: 0.9545

Epoch 00001: val_loss improved from inf to 0.17119, saving model to eB4.hdf5
Epoch 2/30
876/876 [==============================] - 30s 35ms/step - loss: 0.1159 - acc: 0.9658 - val_loss: 0.1020 - val_acc: 0.9682

Epoch 00002: val_loss improved from 0.17119 to 0.10196, saving model to eB4.hdf5
Epoch 3/30
876/876 [==============================] - 30s 35ms/step - loss: 0.0481 - acc: 0.9829 - val_loss: 0.1050 - val_acc: 0.9773

Epoch 00003: val_loss did not improve from 0.10196
Epoch 4/30
876/876 [==============================] - 30s 35ms/step - loss: 0.0186 - acc: 0.9943 - val_loss: 0.0807 - val_acc: 0.9818

Epoch 00004: val_loss improved from 0.10196 to 0.08069, saving model to eB4.hdf5
Epoch 5/30
876/876 [==============================] - 30s 35ms/step - loss: 0.0079 - acc: 0.9989 - val_loss: 0.0913 - val_acc: 0.9773

Epoch 00005: val_loss did not improve from 0.08069


Epoch 00010: val_loss did not improve from 0.04365
Epoch 11/30
876/876 [==============================] - 30s 35ms/step - loss: 0.0113 - acc: 0.9943 - val_loss: 0.0868 - val_acc: 0.9773

Epoch 00011: val_loss did not improve from 0.04365

Epoch 00011: ReduceLROnPlateau reducing learning rate to 1.9999999494757503e-05.
Epoch 12/30
876/876 [==============================] - 30s 35ms/step - loss: 0.0165 - acc: 0.9932 - val_loss: 0.0814 - val_acc: 0.9818

Epoch 00012: val_loss did not improve from 0.04365
Restoring model weights from the end of the best epoch
Epoch 00012: early stopping
Finish fine-tune
history_plot(eB_model_history)

[figure: output_46_0.png, EfficientNetB4 training curves]

7. EfficientNet with Attention

!pip install -U efficientnet
Collecting efficientnet
  Downloading https://files.pythonhosted.org/packages/97/82/f3ae07316f0461417dc54affab6e86ab188a5a22f33176d35271628b96e0/efficientnet-1.0.0-py3-none-any.whl
Requirement already satisfied, skipping upgrade: scikit-image in /usr/local/lib/python3.6/dist-packages (from efficientnet) (0.15.0)
Requirement already satisfied, skipping upgrade: keras-applications<=1.0.8,>=1.0.7 in /usr/local/lib/python3.6/dist-packages (from efficientnet) (1.0.8)
Requirement already satisfied, skipping upgrade: PyWavelets>=0.4.0 in /usr/local/lib/python3.6/dist-packages (from scikit-image->efficientnet) (1.1.1)
Requirement already satisfied, skipping upgrade: matplotlib!=3.0.0,>=2.0.0 in /usr/local/lib/python3.6/dist-packages (from scikit-image->efficientnet) (3.1.2)

Requirement already satisfied, skipping upgrade: decorator>=4.3.0 in /usr/local/lib/python3.6/dist-packages (from networkx>=2.0->scikit-image->efficientnet) (4.4.1)
Requirement already satisfied, skipping upgrade: six in /usr/local/lib/python3.6/dist-packages (from h5py->keras-applications<=1.0.8,>=1.0.7->efficientnet) (1.12.0)
Requirement already satisfied, skipping upgrade: setuptools in /usr/local/lib/python3.6/dist-packages (from kiwisolver>=1.0.1->matplotlib!=3.0.0,>=2.0.0->scikit-image->efficientnet) (42.0.1)
Installing collected packages: efficientnet
Successfully installed efficientnet-1.0.0
# Import modules
from efficientnet.keras import EfficientNetB4
import keras.backend as K
# Define an EfficientNet architecture with an attention module (efficientnet-with-attention)

def efficient_attention_model(img_rows,img_cols):
  K.clear_session()
  
  in_lay = Input(shape=(img_rows,img_cols,3))
  base_model = EfficientNetB4(input_shape=(img_rows,img_cols,3),weights="imagenet",include_top=False)

  pt_depth = base_model.get_output_shape_at(0)[-1]

  pt_features = base_model(in_lay)
  bn_features = BatchNormalization()(pt_features)

  # here we do an attention mechanism to turn pixels in the GAP on and off
  atten_layer = Conv2D(64,kernel_size=(1,1),padding="same",activation="relu")(Dropout(0.5)(bn_features))
  atten_layer = Conv2D(16,kernel_size=(1,1),padding="same",activation="relu")(atten_layer)
  atten_layer = Conv2D(8,kernel_size=(1,1),padding="same",activation="relu")(atten_layer)
  atten_layer = Conv2D(1,kernel_size=(1,1),padding="valid",activation="sigmoid")(atten_layer)# H,W,1
  # fan it out to all of the channels
  up_c2_w = np.ones((1,1,1,pt_depth)) #1,1,C
  up_c2 = Conv2D(pt_depth,kernel_size=(1,1),padding="same",activation="linear",use_bias=False,weights=[up_c2_w])
  up_c2.trainable = False
  atten_layer = up_c2(atten_layer)# H,W,C

  mask_features = multiply([atten_layer,bn_features])# H,W,C

  gap_features = GlobalAveragePooling2D()(mask_features)# 1,1,C
  # gap_mask = GlobalAveragePooling2D()(atten_layer)# 1,1,C

  # # to account for missing values from the attention model
  # gap = Lambda(lambda x:x[0]/x[1],name="RescaleGAP")([gap_features,gap_mask])
  gap_dr = Dropout(0.25)(gap_features)
  dr_steps = Dropout(0.25)(Dense(1000,activation="relu")(gap_dr))
  out_layer = Dense(n_classes,activation="softmax")(dr_steps)
  eb_atten_model = Model(inputs=[in_lay],outputs=[out_layer])

  return eb_atten_model
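A note on the commented-out RescaleGAP step: GlobalAveragePooling2D over the masked features computes sum(a·f)/(H·W), while dividing by the pooled mask would give the weighted average sum(a·f)/sum(a). Without the rescaling, an image whose attention map activates on only a few pixels yields systematically smaller pooled features; the BatchNormalization before the attention branch presumably compensates enough for training to work, which may be why it is left disabled here.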
img_rows,img_cols = 296,296
eB_atten_model = efficient_attention_model(img_rows,img_cols)
for i,layer in enumerate(eB_atten_model.layers):
  print(i,layer.name)
0 input_1
1 efficientnet-b4
2 batch_normalization_1
3 dropout_1
4 conv2d_1
5 conv2d_2
6 conv2d_3
7 conv2d_4
8 conv2d_5
9 multiply_1
10 global_average_pooling2d_1
11 dropout_2
12 dense_1
13 dropout_3
14 dense_2
optimizer = optimizers.Adam(lr=0.0001)
batch_size = 16
epochs = 30
freeze_num = 12
eB_atten_model_history  = fine_tune_model(eB_atten_model,optimizer,batch_size,epochs,freeze_num)
Train on 876 samples, validate on 220 samples
Epoch 1/10
876/876 [==============================] - 19s 22ms/step - loss: 0.4855 - acc: 0.9304 - val_loss: 0.3256 - val_acc: 0.9409
Epoch 2/10
876/876 [==============================] - 7s 8ms/step - loss: 0.4319 - acc: 0.9372 - val_loss: 0.2806 - val_acc: 0.9455
Epoch 3/10
876/876 [==============================] - 7s 8ms/step - loss: 0.3744 - acc: 0.9349 - val_loss: 0.2577 - val_acc: 0.9455

Epoch 8/10
876/876 [==============================] - 7s 8ms/step - loss: 0.2456 - acc: 0.9475 - val_loss: 0.1869 - val_acc: 0.9591
Epoch 9/10
876/876 [==============================] - 7s 8ms/step - loss: 0.2243 - acc: 0.9612 - val_loss: 0.1847 - val_acc: 0.9591
Epoch 10/10
876/876 [==============================] - 7s 8ms/step - loss: 0.2205 - acc: 0.9658 - val_loss: 0.1764 - val_acc: 0.9591
Finish step_1
Train on 876 samples, validate on 220 samples
Epoch 1/30
876/876 [==============================] - 74s 84ms/step - loss: 0.1646 - acc: 0.9463 - val_loss: 0.0211 - val_acc: 0.9909

Epoch 00001: val_loss improved from inf to 0.02109, saving model to model_1.hdf5
Epoch 2/30
876/876 [==============================] - 30s 35ms/step - loss: 0.0971 - acc: 0.9715 - val_loss: 0.0082 - val_acc: 0.9955

Epoch 00002: val_loss improved from 0.02109 to 0.00816, saving model to model_1.hdf5
Epoch 3/30
876/876 [==============================] - 30s 35ms/step - loss: 0.0676 - acc: 0.9829 - val_loss: 0.0254 - val_acc: 0.9864

Epoch 00003: val_loss did not improve from 0.00816
Epoch 4/30
876/876 [==============================] - 30s 35ms/step - loss: 0.0334 - acc: 0.9932 - val_loss: 0.0175 - val_acc: 0.9909

Epoch 00004: val_loss did not improve from 0.00816
Epoch 5/30
876/876 [==============================] - 30s 34ms/step - loss: 0.0242 - acc: 0.9909 - val_loss: 0.0157 - val_acc: 0.9909

Epoch 00005: val_loss did not improve from 0.00816
Epoch 6/30
876/876 [==============================] - 30s 35ms/step - loss: 0.0090 - acc: 0.9977 - val_loss: 0.0139 - val_acc: 0.9909

Epoch 00006: val_loss did not improve from 0.00816

Epoch 00006: ReduceLROnPlateau reducing learning rate to 1.9999999494757503e-05.
Epoch 7/30
876/876 [==============================] - 30s 34ms/step - loss: 0.0155 - acc: 0.9954 - val_loss: 0.0111 - val_acc: 0.9955

Epoch 00007: val_loss did not improve from 0.00816
Restoring model weights from the end of the best epoch
Epoch 00007: early stopping
Finish fine-tune
history_plot(eB_atten_model_history)

[figure: output_55_0.png, EfficientNet-with-attention training curves]

8. EfficientNetB4 with Attention v2

!pip install -U efficientnet
Collecting efficientnet
  Downloading https://files.pythonhosted.org/packages/97/82/f3ae07316f0461417dc54affab6e86ab188a5a22f33176d35271628b96e0/efficientnet-1.0.0-py3-none-any.whl

Requirement already satisfied, skipping upgrade: setuptools in /usr/local/lib/python3.6/dist-packages (from kiwisolver>=1.0.1->matplotlib!=3.0.0,>=2.0.0->scikit-image->efficientnet) (42.0.1)
Installing collected packages: efficientnet
Successfully installed efficientnet-1.0.0
# Import modules
from efficientnet.keras import EfficientNetB4
import keras.backend as K
import tensorflow as tf
from keras.layers import GlobalAveragePooling2D, GlobalMaxPooling2D, Reshape, Dense, multiply, Permute, Concatenate, Conv2D, Add, Activation, Lambda
from keras import backend as K
from keras.activations import sigmoid

def attach_attention_module(net, attention_module):
  if attention_module == 'se_block': # SE_block
    net = se_block(net)
  elif attention_module == 'cbam_block': # CBAM_block
    net = cbam_block(net)
  else:
    raise Exception("'{}' is not supported attention module!".format(attention_module))

  return net

def se_block(input_feature, ratio=8):
	"""Contains the implementation of Squeeze-and-Excitation(SE) block.
	As described in https://arxiv.org/abs/1709.01507.
	"""
	
	channel_axis = 1 if K.image_data_format() == "channels_first" else -1
	channel = input_feature._keras_shape[channel_axis]

	se_feature = GlobalAveragePooling2D()(input_feature)
	se_feature = Reshape((1, 1, channel))(se_feature)
	assert se_feature._keras_shape[1:] == (1,1,channel)
	se_feature = Dense(channel // ratio,
					   activation='relu',
					   kernel_initializer='he_normal',
					   use_bias=True,
					   bias_initializer='zeros')(se_feature)
	assert se_feature._keras_shape[1:] == (1,1,channel//ratio)
	se_feature = Dense(channel,
					   activation='sigmoid',
					   kernel_initializer='he_normal',
					   use_bias=True,
					   bias_initializer='zeros')(se_feature)
	assert se_feature._keras_shape[1:] == (1,1,channel)
	if K.image_data_format() == 'channels_first':
		se_feature = Permute((3, 1, 2))(se_feature)

	se_feature = multiply([input_feature, se_feature])
	return se_feature

def cbam_block(cbam_feature, ratio=8):
	"""Contains the implementation of Convolutional Block Attention Module(CBAM) block.
	As described in https://arxiv.org/abs/1807.06521.
	"""
	
	cbam_feature = channel_attention(cbam_feature, ratio)
	cbam_feature = spatial_attention(cbam_feature)
	return cbam_feature

def channel_attention(input_feature, ratio=8):
	
	channel_axis = 1 if K.image_data_format() == "channels_first" else -1
	channel = input_feature._keras_shape[channel_axis]
	
	shared_layer_one = Dense(channel//ratio,
							 activation='relu',
							 kernel_initializer='he_normal',
							 use_bias=True,
							 bias_initializer='zeros')
	shared_layer_two = Dense(channel,
							 kernel_initializer='he_normal',
							 use_bias=True,
							 bias_initializer='zeros')
	
	avg_pool = GlobalAveragePooling2D()(input_feature)    
	avg_pool = Reshape((1,1,channel))(avg_pool)
	assert avg_pool._keras_shape[1:] == (1,1,channel)
	avg_pool = shared_layer_one(avg_pool)
	assert avg_pool._keras_shape[1:] == (1,1,channel//ratio)
	avg_pool = shared_layer_two(avg_pool)
	assert avg_pool._keras_shape[1:] == (1,1,channel)
	
	max_pool = GlobalMaxPooling2D()(input_feature)
	max_pool = Reshape((1,1,channel))(max_pool)
	assert max_pool._keras_shape[1:] == (1,1,channel)
	max_pool = shared_layer_one(max_pool)
	assert max_pool._keras_shape[1:] == (1,1,channel//ratio)
	max_pool = shared_layer_two(max_pool)
	assert max_pool._keras_shape[1:] == (1,1,channel)
	
	cbam_feature = Add()([avg_pool,max_pool])
	cbam_feature = Activation('sigmoid')(cbam_feature)
	
	if K.image_data_format() == "channels_first":
		cbam_feature = Permute((3, 1, 2))(cbam_feature)
	
	return multiply([input_feature, cbam_feature])

def spatial_attention(input_feature):
	kernel_size = 7
	
	if K.image_data_format() == "channels_first":
		channel = input_feature._keras_shape[1]
		cbam_feature = Permute((2,3,1))(input_feature)
	else:
		channel = input_feature._keras_shape[-1]
		cbam_feature = input_feature
	
	avg_pool = Lambda(lambda x: K.mean(x, axis=3, keepdims=True))(cbam_feature)
	assert avg_pool._keras_shape[-1] == 1
	max_pool = Lambda(lambda x: K.max(x, axis=3, keepdims=True))(cbam_feature)
	assert max_pool._keras_shape[-1] == 1
	concat = Concatenate(axis=3)([avg_pool, max_pool])
	assert concat._keras_shape[-1] == 2
	cbam_feature = Conv2D(filters = 1,
					kernel_size=kernel_size,
					strides=1,
					padding='same',
					activation='sigmoid',
					kernel_initializer='he_normal',
					use_bias=False)(concat)	
	assert cbam_feature._keras_shape[-1] == 1
	
	if K.image_data_format() == "channels_first":
		cbam_feature = Permute((3, 1, 2))(cbam_feature)
		
	return multiply([input_feature, cbam_feature])
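Before wiring these blocks into a large backbone, a quick shape smoke test (a sketch for the old-style Keras used here) confirms that both attention variants return tensors shaped like their input:

# SE and CBAM blocks should preserve the input tensor shape
from keras.layers import Input
from keras.models import Model

smoke_in = Input(shape=(10, 10, 32))
print(Model(smoke_in, se_block(smoke_in)).output_shape)    # expect (None, 10, 10, 32)
print(Model(smoke_in, cbam_block(smoke_in)).output_shape)  # expect (None, 10, 10, 32)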
# Define an EfficientNet model with an SE attention block
def efficient__atten2_model(img_rows,img_cols):
  K.clear_session()
  
  in_lay = Input(shape=(img_rows,img_cols,3))
  base_model = EfficientNetB4(input_shape=(img_rows,img_cols,3),weights="imagenet",include_top=False)
  pt_features = base_model(in_lay)
  bn_features = BatchNormalization()(pt_features)

  atten_features = attach_attention_module(bn_features,"se_block")
  gap_features = GlobalAveragePooling2D()(atten_features)

  gap_dr = Dropout(0.25)(gap_features)
  dr_steps = Dropout(0.25)(Dense(1000,activation="relu")(gap_dr))
  out_layer = Dense(n_classes,activation="softmax")(dr_steps)
  eb_atten_model = Model(inputs=[in_lay],outputs=[out_layer])

  return eb_atten_model
img_rows,img_cols = 296,296
eB_atten2_model = efficient__atten2_model(img_rows,img_cols)
for i,layer in enumerate(eB_atten2_model.layers):
  print(i,layer.name)
0 input_1
1 efficientnet-b4
2 batch_normalization_1
3 global_average_pooling2d_1
4 reshape_1
5 dense_1
6 dense_2
7 multiply_1
8 global_average_pooling2d_2
9 dropout_1
10 dense_3
11 dropout_2
12 dense_4
optimizer = optimizers.Adam(lr=0.0001)
batch_size = 16
epochs = 30
freeze_num = 19
eB_atten2_model_history  = fine_tune_model(eB_atten2_model,optimizer,batch_size,epochs,freeze_num)
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:1033: The name tf.assign_add is deprecated. Please use tf.compat.v1.assign_add instead.

Train on 876 samples, validate on 220 samples
Epoch 1/10
876/876 [==============================] - 17s 19ms/step - loss: 2.3097 - acc: 0.1096 - val_loss: 2.9290 - val_acc: 0.0955

Epoch 7/10
876/876 [==============================] - 6s 7ms/step - loss: 2.3012 - acc: 0.1210 - val_loss: 2.9290 - val_acc: 0.0955
Epoch 8/10
876/876 [==============================] - 6s 7ms/step - loss: 2.3046 - acc: 0.1233 - val_loss: 2.9290 - val_acc: 0.0955
Epoch 9/10
876/876 [==============================] - 6s 7ms/step - loss: 2.3167 - acc: 0.1050 - val_loss: 2.9290 - val_acc: 0.0955
Epoch 10/10
876/876 [==============================] - 6s 7ms/step - loss: 2.3035 - acc: 0.1267 - val_loss: 2.9290 - val_acc: 0.0955
Finish step_1


Train on 876 samples, validate on 220 samples
Epoch 1/30
876/876 [==============================] - 67s 76ms/step - loss: 0.7490 - acc: 0.8242 - val_loss: 0.0197 - val_acc: 0.9955

Epoch 00001: val_loss improved from inf to 0.01974, saving model to model_1.hdf5
Epoch 2/30
876/876 [==============================] - 31s 35ms/step - loss: 0.0931 - acc: 0.9692 - val_loss: 0.0393 - val_acc: 0.9818

Epoch 00002: val_loss did not improve from 0.01974
Epoch 3/30
876/876 [==============================] - 31s 35ms/step - loss: 0.0332 - acc: 0.9920 - val_loss: 0.0197 - val_acc: 0.9909

Epoch 00004: val_loss did not improve from 0.01296
Epoch 5/30
672/876 [======================>.......] - ETA: 6s - loss: 0.0311 - acc: 0.9926
Epoch 00005: ReduceLROnPlateau reducing learning rate to 1.9999999494757503e-05.
Epoch 6/30
876/876 [==============================] - 31s 35ms/step - loss: 0.0179 - acc: 0.9977 - val_loss: 0.0092 - val_acc: 0.9955

Epoch 00006: val_loss improved from 0.00997 to 0.00916, saving model to model_1.hdf5
Epoch 7/30
876/876 [==============================] - 31s 35ms/step - loss: 0.0099 - acc: 0.9989 - val_loss: 0.0098 - val_acc: 1.0000

Epoch 00007: val_loss did not improve from 0.00916
Epoch 8/30
876/876 [==============================] - 31s 35ms/step - loss: 0.0226 - acc: 0.9920 - val_loss: 0.0091 - val_acc: 1.0000

Epoch 00008: val_loss improved from 0.00916 to 0.00912, saving model to model_1.hdf5
Epoch 9/30
876/876 [==============================] - 31s 35ms/step - loss: 0.0073 - acc: 0.9989 - val_loss: 0.0091 - val_acc: 1.0000

Epoch 00009: val_loss improved from 0.00912 to 0.00906, saving model to model_1.hdf5
Epoch 10/30
876/876 [==============================] - 31s 35ms/step - loss: 0.0187 - acc: 0.9943 - val_loss: 0.0068 - val_acc: 1.0000



Epoch 00012: val_loss improved from 0.00619 to 0.00562, saving model to model_1.hdf5
Epoch 13/30
876/876 [==============================] - 31s 35ms/step - loss: 0.0132 - acc: 0.9954 - val_loss: 0.0053 - val_acc: 1.0000

Epoch 00013: val_loss improved from 0.00562 to 0.00527, saving model to model_1.hdf5
Epoch 14/30
876/876 [==============================] - 31s 35ms/step - loss: 0.0086 - acc: 0.9989 - val_loss: 0.0053 - val_acc: 1.0000

Epoch 00014: val_loss did not improve from 0.00527
Epoch 15/30
876/876 [==============================] - 31s 35ms/step - loss: 0.0096 - acc: 0.9977 - val_loss: 0.0051 - val_acc: 1.0000

Epoch 00015: val_loss improved from 0.00527 to 0.00509, saving model to model_1.hdf5

Epoch 00015: ReduceLROnPlateau reducing learning rate to 7.999999979801942e-07.
Epoch 16/30
876/876 [==============================] - 31s 35ms/step - loss: 0.0110 - acc: 0.9966 - val_loss: 0.0052 - val_acc: 1.0000

Epoch 00016: val_loss did not improve from 0.00509
Epoch 17/30
876/876 [==============================] - 31s 35ms/step - loss: 0.0061 - acc: 0.9989 - val_loss: 0.0052 - val_acc: 1.0000

Epoch 00017: val_loss did not improve from 0.00509
Epoch 18/30
876/876 [==============================] - 31s 35ms/step - loss: 0.0129 - acc: 0.9954 - val_loss: 0.0052 - val_acc: 1.0000

Epoch 00018: val_loss did not improve from 0.00509
Epoch 19/30
876/876 [==============================] - 31s 35ms/step - loss: 0.0150 - acc: 0.9966 - val_loss: 0.0053 - val_acc: 1.0000

Epoch 00019: val_loss did not improve from 0.00509

Epoch 00019: ReduceLROnPlateau reducing learning rate to 1.600000018697756e-07.
Epoch 20/30
876/876 [==============================] - 31s 35ms/step - loss: 0.0069 - acc: 0.9989 - val_loss: 0.0054 - val_acc: 1.0000

Epoch 00020: val_loss did not improve from 0.00509
Restoring model weights from the end of the best epoch
Epoch 00020: early stopping
Finish fine-tune
history_plot(eB_atten2_model_history)

[figure: output_65_0.png, EfficientNetB4-with-attention-v2 training curves]

[figure: output_65_1.png]

9. Bilinear EfficientNet

!pip install -U efficientnet
Requirement already up-to-date: efficientnet in /usr/local/lib/python3.6/dist-packages (1.1.0)
Requirement already satisfied, skipping upgrade: scikit-image in /usr/local/lib/python3.6/dist-packages (from efficientnet) (0.16.2)
Requirement already satisfied, skipping upgrade: setuptools in /usr/local/lib/python3.6/dist-packages (from kiwisolver>=1.0.1->matplotlib!=3.0.0,>=2.0.0->scikit-image->efficientnet) (46.0.0)
# Import the libraries we need
from keras import optimizers, Input
from keras.applications import  imagenet_utils

from keras.preprocessing.image import ImageDataGenerator
from keras.models import *
from keras.layers import *
from keras.optimizers import *
from keras.callbacks import *
from keras.applications import *

from sklearn.preprocessing import *
from sklearn.model_selection import *
from sklearn.metrics import *
# Import modules
from efficientnet.keras import EfficientNetB4
import keras.backend as K
import tensorflow as tf
from keras.layers import GlobalAveragePooling2D, GlobalMaxPooling2D, Reshape, Dense, multiply, Permute, Concatenate, Conv2D, Add, Activation, Lambda
from keras import backend as K
from keras.activations import sigmoid

# Define a bilinear EfficientNet attention model
def blinear_efficient__atten_model(img_rows,img_cols):
  K.clear_session()
  
  in_lay = Input(shape=(img_rows,img_cols,3))
  base_model = EfficientNetB4(input_shape=(img_rows,img_cols,3),weights="imagenet",include_top=False)
  
  pt_depth = base_model.get_output_shape_at(0)[-1]

  cnn_features_a = base_model(in_lay)
  cnn_bn_features_a = BatchNormalization()(cnn_features_a)
  
  # attention mechanism
  # here we do an attention mechanism to turn pixels in the GAP on and off
  atten_layer = Conv2D(64,kernel_size=(1,1),padding="same",activation="relu")(Dropout(0.5)(cnn_bn_features_a))
  atten_layer = Conv2D(16,kernel_size=(1,1),padding="same",activation="relu")(atten_layer)
  atten_layer = Conv2D(8,kernel_size=(1,1),padding="same",activation="relu")(atten_layer)
  atten_layer = Conv2D(1,kernel_size=(1,1),padding="valid",activation="sigmoid")(atten_layer)# H,W,1
  # fan it out to all of the channels
  up_c2_w = np.ones((1,1,1,pt_depth)) #1,1,C
  up_c2 = Conv2D(pt_depth,kernel_size=(1,1),padding="same",activation="linear",use_bias=False,weights=[up_c2_w])
  up_c2.trainable = True
  atten_layer = up_c2(atten_layer)# H,W,C

  cnn_atten_out_a = multiply([atten_layer,cnn_bn_features_a])# H,W,C

  cnn_atten_out_b = cnn_atten_out_a

  cnn_out_dot = multiply([cnn_atten_out_a,cnn_atten_out_b])
  gap_features = GlobalAveragePooling2D()(cnn_out_dot)
  gap_dr = Dropout(0.25)(gap_features)
  dr_steps = Dropout(0.25)(Dense(1000,activation="relu")(gap_dr))
  out_layer = Dense(n_classes,activation="softmax")(dr_steps)
  
  b_eff_atten_model = Model(inputs=[in_lay],outputs=[out_layer],name="blinear_efficient_atten")

  return b_eff_atten_model
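Note that cnn_atten_out_b is simply an alias of cnn_atten_out_a, so the "bilinear" product here multiplies the attended feature map elementwise with itself (a homogeneous, squared-feature variant) rather than combining two different streams. A true two-stream bilinear model would extract features with a second backbone over the same in_lay and combine the two maps, for example with an outer product before pooling.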
# Create the bilinear EfficientNet attention model
img_rows,img_cols = 296,296
befficient_model = blinear_efficient__atten_model(img_rows,img_cols)
optimizer = optimizers.Adam(lr=0.0001)
batch_size = 16
epochs = 30
freeze_num = 19
befficient_model_history  = fine_tune_model(befficient_model,optimizer,batch_size,epochs,freeze_num)
Train on 876 samples, validate on 220 samples
Epoch 1/10
876/876 [==============================] - 14s 16ms/step - loss: 2.3208 - acc: 0.1084 - val_loss: 2.8606 - val_acc: 0.1136
Epoch 2/10
876/876 [==============================] - 7s 8ms/step - loss: 2.3244 - acc: 0.1005 - val_loss: 2.8606 - val_acc: 0.1136
Epoch 3/10
876/876 [==============================] - 7s 8ms/step - loss: 2.3251 - acc: 0.1062 - val_loss: 2.8606 - val_acc: 0.1136
Epoch 4/10
876/876 [==============================] - 7s 8ms/step - loss: 2.3316 - acc: 0.0936 - val_loss: 2.8606 - val_acc: 0.1136
Epoch 5/10
876/876 [==============================] - 7s 8ms/step - loss: 2.3185 - acc: 0.1039 - val_loss: 2.8606 - val_acc: 0.1136
Epoch 6/10
876/876 [==============================] - 7s 8ms/step - loss: 2.3270 - acc: 0.1005 - val_loss: 2.8606 - val_acc: 0.1136
Epoch 7/10
876/876 [==============================] - 7s 8ms/step - loss: 2.3250 - acc: 0.0993 - val_loss: 2.8606 - val_acc: 0.1136
Epoch 8/10
876/876 [==============================] - 7s 8ms/step - loss: 2.3289 - acc: 0.1005 - val_loss: 2.8606 - val_acc: 0.1136
Epoch 9/10
876/876 [==============================] - 7s 8ms/step - loss: 2.3284 - acc: 0.0913 - val_loss: 2.8606 - val_acc: 0.1136
Epoch 10/10
876/876 [==============================] - 7s 8ms/step - loss: 2.3308 - acc: 0.1027 - val_loss: 2.8606 - val_acc: 0.1136
Finish step_1
Train on 876 samples, validate on 220 samples

Epoch 5/30
876/876 [==============================] - 30s 35ms/step - loss: 0.0747 - acc: 0.9749 - val_loss: 0.0289 - val_acc: 0.9864



Epoch 00024: ReduceLROnPlateau reducing learning rate to 3.199999980552093e-08.
Epoch 25/30
876/876 [==============================] - 30s 35ms/step - loss: 0.0078 - acc: 0.9977 - val_loss: 0.0138 - val_acc: 0.9955

Epoch 00025: val_loss did not improve from 0.01312
Restoring model weights from the end of the best epoch
Epoch 00025: early stopping
Finish fine-tune
history_plot(befficient_model_history)

10. Bilinear VGG16 model

# fine-tune the model
def fine_tune_model(model, optimizer, batch_size, epochs, freeze_num):
    '''
    Description: fine-tune the given pre-trained model in two stages and
    save the best weights in .hdf5 format.

    model: the model to fine-tune (VGG16, ResNet50, ...)
    optimizer: optimizer used for both training stages
    batch_size: batch size; 32/64/128 recommended
    epochs: number of epochs when fine-tuning all layers (stage 2)
    freeze_num: number of layers frozen during stage 1
    '''

    # (a corrected, runnable version of this augmentation path is
    # sketched right after this function)
    # datagen = ImageDataGenerator(
    #     rescale=1./255,
    #     # shear_range=0.2,
    #     # zoom_range=0.2,
    #     # horizontal_flip=True,
    #     # vertical_flip=True,
    #     # fill_mode="nearest"
    #   )

    # datagen.fit(x_train)
    
    
    # Stage 1: train only the (randomly initialized) fully connected head,
    # keeping the first `freeze_num` layers frozen

    for layer in model.layers[:freeze_num]:
        layer.trainable = False
    
    model.compile(optimizer=optimizer, 
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])

    # model.fit_generator(datagen.flow(x_train,y_train,batch_size=batch_size),
    #                     steps_per_epoch=len(x_train)//batch_size,
    #                     epochs=3,
    #                     shuffle=True,
    #                     verbose=1,
    #                     validation_data=datagen.flow(x_valid,y_valid,batch_size=batch_size))
    model.fit(x_train,
         y_train,
         batch_size=batch_size,
         epochs=10,  # stage 1 is hard-coded to 10 epochs
         shuffle=True,
         verbose=1,
         validation_data=(x_valid,y_valid)
        )
    print('Finish step_1')
    
    
    # Stage 2: fine-tune all layers
    for layer in model.layers[freeze_num:]:
        layer.trainable = True
    
    rc = ReduceLROnPlateau(monitor="val_acc",
                factor=0.2,
                patience=4,
                verbose=1,
                mode='max')

    model_name = model.name + ".hdf5"
    # checkpoint the best model and stop early, both on validation loss
    mc = ModelCheckpoint(model_name,
                         monitor="val_loss",
                         verbose=1,
                         save_best_only=True,
                         mode="min")
    el = EarlyStopping(monitor="val_loss",
                       patience=5,
                       verbose=1,
                       restore_best_weights=True,
                       mode="min")
    model.compile(optimizer=optimizer, 
           loss='categorical_crossentropy', 
           metrics=["accuracy"])

    # history_fit = model.fit_generator(datagen.flow(x_train,y_train,batch_size=batch_size),
    #                                  steps_per_epoch=len(x_train)//batch_size,
    #                                  epochs=epochs,
    #                                  shuffle=True,
    #                                  verbose=1,
    #                                  callbacks=[mc,rc,el],
    #                                  validation_data=datagen.flow(x_valid,y_valid,batch_size=batch_size))
    history_fit = model.fit(x_train,
                 y_train,
                 batch_size=batch_size,
                 epochs=epochs,
                 shuffle=True,
                 verbose=1,
                 validation_data=(x_valid,y_valid),
                 callbacks=[mc,rc,el])
    
    print('Finish fine-tune')
    return history_fit
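
For reference, a runnable sketch of the augmentation path that the commented-out code above gestures at; the rescale factor should be 1./255 (the original 1.255 was a typo), and x_train/y_train/x_valid/y_valid are the arrays prepared earlier. It is meant as a drop-in replacement for the model.fit(...) call inside fine_tune_model, where model, batch_size, epochs and the callbacks are in scope:

from keras.preprocessing.image import ImageDataGenerator

train_gen = ImageDataGenerator(rescale=1./255, horizontal_flip=True)
valid_gen = ImageDataGenerator(rescale=1./255)   # same rescaling, no augmentation

history_fit = model.fit_generator(
    train_gen.flow(x_train, y_train, batch_size=batch_size),
    steps_per_epoch=len(x_train) // batch_size,
    epochs=epochs,
    verbose=1,
    callbacks=[mc, rc, el],
    validation_data=valid_gen.flow(x_valid, y_valid, batch_size=batch_size),
    validation_steps=len(x_valid) // batch_size)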
# Define the bilinear VGG16 model

from keras import backend as K

def batch_dot(cnn_ab):
    # contract the spatial axis: (batch, H*W, C) x (batch, H*W, C) -> (batch, C, C)
    return K.batch_dot(cnn_ab[0], cnn_ab[1], axes=[1, 1])

def sign_sqrt(x):
    # signed square root, a standard normalization for bilinear features
    return K.sign(x) * K.sqrt(K.abs(x) + 1e-10)

def l2_norm(x):
    return K.l2_normalize(x, axis=-1)
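
To see what these three helpers compute, here is a small numpy equivalent: for features of shape (batch, H*W, C), K.batch_dot(a, a, axes=[1, 1]) contracts the spatial axis into a (batch, C, C) channel co-occurrence matrix, which is then signed-square-rooted and L2-normalized:

import numpy as np

batch, hw, c = 2, 81, 4                       # toy channel count; the real model uses hw=81, c=512
a = np.random.rand(batch, hw, c).astype(np.float32)

outer = np.einsum("bic,bid->bcd", a, a)       # == K.batch_dot(a, a, axes=[1, 1])
flat = outer.reshape(batch, -1)               # (batch, c*c)
flat = np.sign(flat) * np.sqrt(np.abs(flat) + 1e-10)
flat = flat / np.linalg.norm(flat, axis=-1, keepdims=True)
print(flat.shape)                             # (2, 16)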
 
 
def bilinear_vgg16(img_rows,img_cols):
    input_tensor = Input(shape=(img_rows,img_cols,3))
    preproc_tensor = Lambda(imagenet_utils.preprocess_input)(input_tensor)

    model_vgg16 = VGG16(include_top=False, weights="imagenet",
                        input_tensor=preproc_tensor,pooling="avg")
    
    cnn_out_a = model_vgg16.layers[-2].output        # block5_pool: H x W x C
    cnn_out_shape = model_vgg16.layers[-2].output_shape
    cnn_out_a = Reshape([cnn_out_shape[1]*cnn_out_shape[2],
                         cnn_out_shape[-1]])(cnn_out_a)  # (H*W) x C

    # both streams come from the same backbone (a symmetric bilinear CNN)
    cnn_out_b = cnn_out_a

    cnn_out_dot = Lambda(batch_dot)([cnn_out_a, cnn_out_b])
    cnn_out_dot = Reshape([cnn_out_shape[-1]*cnn_out_shape[-1]])(cnn_out_dot)
 
    sign_sqrt_out = Lambda(sign_sqrt)(cnn_out_dot)
    l2_norm_out = Lambda(l2_norm)(sign_sqrt_out)
    
    fc1 = Dense(1024,activation="relu",name="fc1")(l2_norm_out)
    dropout = Dropout(0.5)(fc1)
    output = Dense(n_classes, activation="softmax",name="output")(dropout)
    bvgg16_model = Model(inputs=model_vgg16.input, outputs=output,name="bvgg16")

    return bvgg16_model
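
A quick sanity check on the sizes this head produces for a 296×296 input: VGG16 halves the spatial resolution five times, so block5_pool outputs 9×9×512; the bilinear vector is therefore 512² = 262,144-dimensional, and the fc1 weight matrix alone holds about 268M parameters, which is why this model is so large:

# Rough size arithmetic for the bilinear head (a sketch, not model.summary())
h = w = 296
for _ in range(5):            # five 2x2 max-pooling stages in VGG16
    h, w = h // 2, w // 2
c = 512                       # block5_pool channels
print((h, w, c))              # (9, 9, 512)
print(c * c)                  # 262144-dim bilinear vector
print(c * c * 1024)           # 268435456 weights in fc1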

# Build the bilinear VGG16 model
img_rows,img_cols = 296,296
bvgg16_model = bilinear_vgg16(img_rows,img_cols)
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:4267: The name tf.nn.max_pool is deprecated. Please use tf.nn.max_pool2d instead.
for i,layer in enumerate(bvgg16_model.layers):
  print(i,layer.name)
0 input_3
1 lambda_1
2 block1_conv1
3 block1_conv2
4 block1_pool
5 block2_conv1
6 block2_conv2
7 block2_pool
8 block3_conv1
9 block3_conv2
10 block3_conv3

23 lambda_3
24 lambda_4
25 fc1
26 dropout_4
27 output
optimizer = optimizers.Adam(lr=0.0001)
batch_size = 32
epochs = 100
freeze_num = 25
bvgg16_history = fine_tune_model(bvgg16_model,optimizer,batch_size,epochs,freeze_num)
Train on 876 samples, validate on 220 samples
Epoch 1/10
876/876 [==============================] - 22s 26ms/step - loss: 1.9859 - acc: 0.6233 - val_loss: 1.6014 - val_acc: 0.9636
Epoch 2/10
876/876 [==============================] - 8s 9ms/step - loss: 1.2607 - acc: 0.9680 - val_loss: 1.0053 - val_acc: 0.9864
history_plot(bvgg16_history)

[Figure: training/validation accuracy and loss curves for the bilinear VGG16 model (output_82_0.png, output_82_1.png)]

Load the best weights

bvgg16_model.load_weights("bvgg16.hdf5")

Predict

predict = bvgg16_model.predict(x_test)
predict = np.argmax(predict, axis=1)
predict.shape
(274,)
print(predict[:5])
[1 9 8 0 2]
print(x_test_img_path)
['0.jpg', '1.jpg', '2.jpg', '3.jpg', '4.jpg', '5.jpg', '6.jpg', '7.jpg', '8.jpg', '9.jpg', '10.jpg', '11.jpg', '12.jpg', '13.jpg', '14.jpg', '15.jpg', '16.jpg', '17.jpg', '18.jpg', '19.jpg', '20.jpg', '21.jpg', '22.jpg', '23.jpg', '24.jpg', '25.jpg', '26.jpg', '27.jpg', '28.jpg', '29.jpg', '30.jpg', '31.jpg', '32.jpg', '33.jpg', '34.jpg', '35.jpg', '36.jpg', '37.jpg', '38.jpg', '39.jpg', '40.jpg', '41.jpg', '42.jpg', '43.jpg', '44.jpg', '45.jpg', '46.jpg', '47.jpg', '48.jpg', '49.jpg', '50.jpg', '51.jpg', '52.jpg', '53.jpg', '54.jpg', '55.jpg', '56.jpg', '57.jpg', '58.jpg', '59.jpg', '60.jpg', '61.jpg', '62.jpg', '63.jpg', '64.jpg', '65.jpg', '66.jpg', '67.jpg', '68.jpg', '69.jpg', '70.jpg', '71.jpg', '72.jpg', '73.jpg', '74.jpg', '75.jpg', '76.jpg', '77.jpg', '78.jpg', '79.jpg', '80.jpg', '81.jpg', '82.jpg', '83.jpg', '84.jpg', '85.jpg', '86.jpg', '87.jpg', '88.jpg', '89.jpg', '90.jpg', '91.jpg', '92.jpg', '93.jpg', '94.jpg', '95.jpg', '96.jpg', '97.jpg', '98.jpg', '99.jpg', '100.jpg', '101.jpg', '102.jpg', '103.jpg', '104.jpg', '105.jpg', '106.jpg', '107.jpg', '108.jpg', '109.jpg', '110.jpg', '111.jpg', '112.jpg', '113.jpg', '114.jpg', '115.jpg', '116.jpg', '117.jpg', '118.jpg', '119.jpg', '120.jpg', '121.jpg', '122.jpg', '123.jpg', '124.jpg', '125.jpg', '126.jpg', '127.jpg', '128.jpg', '129.jpg', '130.jpg', '131.jpg', '132.jpg', '133.jpg', '134.jpg', '135.jpg', '136.jpg', '137.jpg', '138.jpg', '139.jpg', '140.jpg', '141.jpg', '142.jpg', '143.jpg', '144.jpg', '145.jpg', '146.jpg', '147.jpg', '148.jpg', '149.jpg', '150.jpg', '151.jpg', '152.jpg', '153.jpg', '154.jpg', '155.jpg', '156.jpg', '157.jpg', '158.jpg', '159.jpg', '160.jpg', '161.jpg', '162.jpg', '163.jpg', '164.jpg', '165.jpg', '166.jpg', '167.jpg', '168.jpg', '169.jpg', '170.jpg', '171.jpg', '172.jpg', '173.jpg', '174.jpg', '175.jpg', '176.jpg', '177.jpg', '178.jpg', '179.jpg', '180.jpg', '181.jpg', '182.jpg', '183.jpg', '184.jpg', '185.jpg', '186.jpg', '187.jpg', '188.jpg', '189.jpg', '190.jpg', '191.jpg', '192.jpg', '193.jpg', '194.jpg', '195.jpg', '196.jpg', '197.jpg', '198.jpg', '199.jpg', '200.jpg', '201.jpg', '202.jpg', '203.jpg', '204.jpg', '205.jpg', '206.jpg', '207.jpg', '208.jpg', '209.jpg', '210.jpg', '211.jpg', '212.jpg', '213.jpg', '214.jpg', '215.jpg', '216.jpg', '217.jpg', '218.jpg', '219.jpg', '220.jpg', '221.jpg', '222.jpg', '223.jpg', '224.jpg', '225.jpg', '226.jpg', '227.jpg', '228.jpg', '229.jpg', '230.jpg', '231.jpg', '232.jpg', '233.jpg', '234.jpg', '235.jpg', '236.jpg', '237.jpg', '238.jpg', '239.jpg', '240.jpg', '241.jpg', '242.jpg', '243.jpg', '244.jpg', '245.jpg', '246.jpg', '247.jpg', '248.jpg', '249.jpg', '250.jpg', '251.jpg', '252.jpg', '253.jpg', '254.jpg', '255.jpg', '256.jpg', '257.jpg', '258.jpg', '259.jpg', '260.jpg', '261.jpg', '262.jpg', '263.jpg', '264.jpg', '265.jpg', '266.jpg', '267.jpg', '268.jpg', '269.jpg', '270.jpg', '271.jpg', '272.jpg', '273.jpg']
ids = np.arange(0, predict.shape[0])
import pandas as pd

df = pd.DataFrame({"img_path": ids, "tags": predict})
df.to_csv("bvgg16_model.csv", index=None, header=None)
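
Given the first five predictions shown above ([1 9 8 0 2]), the submission file begins like this (index, predicted label; no header):

0,1
1,9
2,8
3,0
4,2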

Summary

On this split, the bilinear EfficientNet attention model reached about 0.9955 validation accuracy, and the bilinear VGG16 model was already at about 0.9864 by its second epoch; the bilinear VGG16 predictions above were exported as the submission file.
