1. Early stopping
from keras.callbacks import EarlyStopping
early_stopping_callback = EarlyStopping(monitor='val_loss', patience=epochs_to_wait_for_improve)
This is how to stack several callbacks together:
from keras.callbacks import EarlyStopping, ModelCheckpoint

# Stop when val_loss has stopped improving, and keep only the best (lowest val_loss) weights on disk
early_stopping_callback = EarlyStopping(monitor='val_loss', patience=epochs_to_wait_for_improve)
checkpoint_callback = ModelCheckpoint(model_name + '.h5', monitor='val_loss', verbose=1,
                                      save_best_only=True, mode='min')
history = model.fit_generator(datagen.flow(X_train, y_train, batch_size=batch_size),
                              steps_per_epoch=len(X_train) // batch_size,
                              validation_data=(X_test, y_test), epochs=n_epochs,
                              callbacks=[early_stopping_callback, checkpoint_callback])
Stopping training manually: inside a custom callback (a subclass of keras.callbacks.Callback), set self.model.stop_training = True; this is usually done in on_batch_end().
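As a concrete illustration, here is a minimal sketch of that idea; the class name StopOnBatchLoss and the 0.03 threshold are made up for this example and are not from the referenced posts:

from keras.callbacks import Callback

class StopOnBatchLoss(Callback):
    # Hypothetical sketch: stop training as soon as the batch training loss drops below a threshold
    def __init__(self, loss_threshold=0.03):
        super(StopOnBatchLoss, self).__init__()
        self.loss_threshold = loss_threshold

    def on_batch_end(self, batch, logs=None):
        loss = (logs or {}).get('loss')
        if loss is not None and loss < self.loss_threshold:
            # Takes effect after the current batch in older Keras versions,
            # and at the latest at the end of the current epoch
            self.model.stop_training = True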
To stop training once a monitored metric reaches a given condition, the implementation of Keras' own EarlyStopping can be used as a reference:
# Adapted from keras.callbacks.EarlyStopping.on_epoch_end (see the callbacks.py link in the references below)
def on_epoch_end(self, epoch, logs=None):
    current = self.get_monitor_value(logs)
    if current is None:
        return

    if self.monitor_op(current - self.min_delta, self.best):
        # The monitored metric improved: remember the best value and reset the patience counter
        self.best = current
        self.wait = 0
        if self.restore_best_weights:
            self.best_weights = self.model.get_weights()
    else:
        self.wait += 1
        if self.wait >= self.patience:
            # No improvement for `patience` consecutive epochs: stop training
            self.stopped_epoch = epoch
            self.model.stop_training = True
            if self.restore_best_weights:
                if self.verbose > 0:
                    print('Restoring model weights from the end of '
                          'the best epoch')
                self.model.set_weights(self.best_weights)
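The restore_best_weights branch above corresponds to a constructor argument of the same name (available in newer Keras releases); when enabled, the model is rolled back to the weights of the best epoch once training stops early. A usage sketch, assuming a Keras version whose EarlyStopping accepts this argument:

# Requires a Keras version in which EarlyStopping supports restore_best_weights
early_stopping_callback = EarlyStopping(monitor='val_loss', patience=epochs_to_wait_for_improve,
                                        restore_best_weights=True, verbose=1)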
2. Getting the predicted values and the ground-truth values
To be filled in later. References: "Keras 在fit_generator中获取验证数据的y_true和y_pred" (CSDN blog), "keras使用中fit_generator的一些问题" (知乎).
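Until this section is written up properly, here is a rough sketch of one possible approach (my own assumption, not necessarily the method described in the linked articles): a custom callback that holds a reference to the validation data and records y_true together with the model's predictions at the end of each epoch.

from keras.callbacks import Callback

class ValPredictionRecorder(Callback):
    # Hypothetical sketch: collect (y_true, y_pred) on the validation set after every epoch
    def __init__(self, X_val, y_val):
        super(ValPredictionRecorder, self).__init__()
        self.X_val, self.y_val = X_val, y_val
        self.predictions = []

    def on_epoch_end(self, epoch, logs=None):
        y_pred = self.model.predict(self.X_val)
        self.predictions.append((self.y_val, y_pred))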
Online resources consulted while writing this post
- python - Is there a way in Keras to immediately stop training? - Stack Overflow
- python - Keras: early stopping model saving - Stack Overflow
- python - How to tell Keras stop training based on loss value? - Stack Overflow: about stopping training once the loss drops below a specified value
- Keras笔记——ModelCheckpoint - CSDN blog: about saving model checkpoints with ModelCheckpoint
- keras的fit_generator与callback函数 - 简书
- https://github.com/keras-team/keras/blob/master/keras/callbacks.py
- 回调函数 Callbacks - Keras 中文文档
