Training on the Dogs vs. Cats dataset, but validation accuracy stays at 0.5

I'm new to neural networks. While building a cat/dog image classifier with TensorFlow I ran into a problem: the validation accuracy is stuck at exactly 0.5 for every epoch.

The code:

from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.utils import image_dataset_from_directory

def createModel():
    inputs = keras.Input(shape=(180, 180, 3))
    x = layers.Rescaling(1./255)(inputs)
    x = layers.Conv2D(filters=32, kernel_size=3, activation="relu")(x)
    x = layers.MaxPooling2D(pool_size=2)(x)
    x = layers.Conv2D(filters=64, kernel_size=3, activation="relu")(x)
    x = layers.MaxPooling2D(pool_size=2)(x)
    x = layers.Conv2D(filters=128, kernel_size=3, activation="relu")(x)
    x = layers.MaxPooling2D(pool_size=2)(x)
    x = layers.Conv2D(filters=256, kernel_size=3, activation="relu")(x)
    x = layers.MaxPooling2D(pool_size=2)(x)
    x = layers.Conv2D(filters=256, kernel_size=3, activation="relu")(x)
    x = layers.Flatten()(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)
    model = keras.Model(inputs=inputs, outputs=outputs)
    model.compile(loss="binary_crossentropy", optimizer="rmsprop", metrics=["accuracy"])
    return model

train_dataset = image_dataset_from_directory(
    new_base_Dir/"train",
    image_size=(180,180),
    batch_size=32,
    shuffle=False
)
validation_dataset = image_dataset_from_directory(
    new_base_Dir/"validation",
    image_size=(180,180),
    batch_size=32,
    shuffle=False
)
test_dataset = image_dataset_from_directory(
    new_base_Dir/"test",
    image_size=(180,180),
    batch_size=32
)


callbacks = [keras.callbacks.ModelCheckpoint(
    filepath="convet_from_scratch.keras",
    save_best_only=True,
    monitor="val_loss"
)]
history = createModel().fit(
    train_dataset,
    epochs=30,
    validation_data=validation_dataset,
    callbacks=callbacks
)

Found the problem... silly me: the shuffle argument has to be set to True so the data actually gets shuffled.
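To see why shuffle=False stalls training: image_dataset_from_directory scans the class folders in order, so with shuffling off every batch contains only cats or only dogs, and the model sees no mixed gradient signal. A minimal sketch (plain Python, no TensorFlow; the 1000-per-class counts are illustrative, not taken from the actual dataset) of the batch composition:

```python
import random

# 2000 labels sorted by class, as an unshuffled directory scan yields them
labels = [0] * 1000 + [1] * 1000
batch_size = 32

def single_class_batches(seq):
    """Count batches whose examples all share one label."""
    batches = [seq[i:i + batch_size] for i in range(0, len(seq), batch_size)]
    return sum(1 for b in batches if len(set(b)) == 1)

# Unshuffled: nearly every batch is pure cat or pure dog
print(single_class_batches(labels))

# Shuffled: batches mix both classes, giving a useful training signal
random.seed(0)
shuffled = labels[:]
random.shuffle(shuffled)
print(single_class_batches(shuffled))
```

With 63 batches, the unshuffled ordering gives 62 single-class batches (only the batch straddling the class boundary is mixed), while the shuffled ordering gives essentially none. Passing shuffle=True (the default) to image_dataset_from_directory for the training split has the same effect.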
