TensorFlow classification: a convolutional neural network exercise on the Dogs vs. Cats problem

This post describes how to handle a TypeError that comes up when training on the Dogs vs. Cats dataset with Keras, how downgrading the Pillow package resolves it, and how adjusting the model structure improves accuracy. The tutorial covers data preprocessing, model construction and training, and visualization of the training results.


1. Dataset download
Download link: https://www.kaggle.com/datasets/biaiscience/dogs-vs-cats
2. Dataset preparation
The training set we use keeps the cat and dog images together in a single folder, so below I read all of the filenames into a DataFrame, derive the label from each filename, and feed that DataFrame to flow_from_dataframe. A quick sketch of the resulting DataFrame is shown below; the complete code is in section 4.
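As a rough illustration, the DataFrame simply pairs each image filename with a string label parsed from its "dog."/"cat." prefix; the two column names filename and category are what flow_from_dataframe is pointed at later (the filenames shown are actual names from the Kaggle archive):

import pandas as pd

# Tiny two-row illustration: '1' = dog, '0' = cat
df = pd.DataFrame({
    'filename': ['cat.0.jpg', 'dog.0.jpg'],
    'category': ['0', '1']
})
print(df)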
3. A bug encountered during training: TypeError: array() takes 1 positional argument but 2 were given
tf.keras.preprocessing.image.load_img uses Pillow internally; with Pillow 8.3.0 installed the call fails with this error. Downgrading the pillow package to 8.2.0 resolves the problem.
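If you hit this error, the downgrade is a single pip command, run in the same environment your script uses:

pip install pillow==8.2.0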
4. Complete code
I ran the model briefly and saved it; plotting code is added at the end to make the results easier to inspect.
The target_size=(128,128) in train_datagen.flow_from_dataframe and the input_shape=(128,128,3) in keras.models.Sequential must correspond. If your machine is not very powerful, reduce the image size, e.g. to (32,32); a size that is too large may freeze the computer. A smaller epochs value also shortens the training time, since we only want to see the overall process here. A small sketch of keeping the two sizes in sync follows.
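A minimal sketch, assuming the names train_datagen, x_train and train_dir defined in the full code below; IMG_SIZE is a variable I introduce here only for illustration:

IMG_SIZE = (64, 64)  # assumed smaller size for weaker hardware; the article itself uses (128, 128)

train_generator = train_datagen.flow_from_dataframe(
    x_train, train_dir,
    x_col='filename', y_col='category',
    target_size=IMG_SIZE, class_mode='categorical')

model = keras.models.Sequential([
    keras.layers.Conv2D(64, 3, activation='relu', input_shape=IMG_SIZE + (3,)),
    # ... remaining layers unchanged ...
])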

import os

import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split

# Import Keras consistently through tensorflow.keras; mixing the standalone
# keras package with tensorflow.python.keras can lead to incompatible classes.
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.preprocessing.image import ImageDataGenerator
# Set the paths to the data (the train and test1 folders of the dataset)
base_dir=r'G:\pycharm\tensorflow_learning\dogandcat\dogs-vs-cats'
test_dir=os.path.join(base_dir,'test1')
train_dir=os.path.join(base_dir,'train')

# Read all image filenames in the training set and assign a class label from each filename
filenames = os.listdir(train_dir)
categories = []
for filename in filenames:
    category = filename.split('.')[0]
    if category == 'dog':
        categories.append(str(1))
    else:
        categories.append(str(0))

df= pd.DataFrame({
    'filename': filenames,
    'category': categories
})

# Split the labeled data into a training set and a validation set
x_train, x_val = train_test_split(df, test_size=0.2, random_state=2)
x_train = x_train.reset_index(drop=True)
x_val = x_val.reset_index(drop=True)



# Data preprocessing: read the images and apply data augmentation to the existing pictures to enlarge the effective sample size
train_datagen = ImageDataGenerator(
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    fill_mode='nearest',
    rescale=1./255)


# Validation images only need rescaling; applying random augmentation here
# would distort the validation metrics.
valid_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_dataframe(
    x_train,
    train_dir,
    x_col='filename',
    y_col='category',
    target_size=(128,128),
    class_mode="categorical",
 )

valid_generator = valid_datagen.flow_from_dataframe(
    x_val,
    train_dir,
    x_col='filename',
    y_col='category',
    target_size=(128,128),
    class_mode="categorical",
 )


# Build the network model: a convolutional network for the classification
'''
# This version only reaches about 0.6 accuracy
model=keras.models.Sequential([
    layers.Conv2D(32,(3,3),activation='relu',input_shape=(128,128,3)),
    layers.MaxPool2D(2,2),

    layers.Conv2D(64,(3,3),activation='relu'),
    layers.MaxPool2D(2,2),

    layers.Conv2D(128,(3,3),activation='relu'),
    layers.MaxPool2D(2,2),
    # flatten the feature maps to prepare for the fully connected layers
    layers.Flatten(),
    layers.Dense(512,activation='relu'),
    # two-unit sigmoid output for the binary classification
    layers.Dense(2,activation='sigmoid')
                               ])
'''
# Improved model: accuracy reaches about 0.71. The previous model overfits, so Dropout layers are added
model = keras.models.Sequential([
                         keras.layers.Conv2D(filters=64, kernel_size=3, strides=(1,1), padding='valid',activation= 'relu', input_shape=(128,128,3)),
                         keras.layers.MaxPooling2D(pool_size=(2,2)),
                         keras.layers.Conv2D(filters=128, kernel_size=3, strides=(2,2), padding='same', activation='relu'),
                         keras.layers.MaxPooling2D(pool_size=(2,2)),
                         keras.layers.Conv2D(filters=64, kernel_size=3, strides=(2,2), padding='same', activation='relu'),
                         keras.layers.MaxPooling2D(pool_size=(2,2)),
                         keras.layers.Flatten(),
                         keras.layers.Dense(units=128, activation='relu'),
                         keras.layers.Dropout(0.25),
                         keras.layers.Dense(units=256, activation='relu'),
                         keras.layers.Dropout(0.5),
                         keras.layers.Dense(units=256, activation='relu'),
                         keras.layers.Dropout(0.25),
                         keras.layers.Dense(units=128, activation='relu'),
                         keras.layers.Dropout(0.10),
                         keras.layers.Dense(units=2, activation='softmax')
])
#model.summary()

model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['acc'])

history = model.fit(train_generator, epochs=30, verbose=1, validation_data=valid_generator)


# Save the trained model
model.save('cat_vs_dog.h5')


# Plot the training/validation accuracy and loss curves
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(acc) + 1)
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
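Once cat_vs_dog.h5 is saved, it can be loaded again to classify images from the unlabeled test1 folder. The snippet below is only a sketch added here, not part of the original script; it assumes the test_dir variable defined at the top and picks an arbitrary image from that folder:

import os
import numpy as np
from tensorflow import keras
from tensorflow.keras.preprocessing import image

model = keras.models.load_model('cat_vs_dog.h5')

# Take one image from the test folder, resize it to the training size and
# rescale it exactly like the training data (1/255).
img_path = os.path.join(test_dir, os.listdir(test_dir)[0])
img = image.load_img(img_path, target_size=(128, 128))
x = np.expand_dims(image.img_to_array(img) / 255.0, axis=0)

# flow_from_dataframe sorts the string labels, so index 0 = '0' (cat) and index 1 = '1' (dog)
probs = model.predict(x)[0]
print('dog' if np.argmax(probs) == 1 else 'cat', probs)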