
How can I tell Keras to stop training based on the loss value?

Currently I use the following code:

callbacks = [
    EarlyStopping(monitor='val_loss', patience=2, verbose=0),
    ModelCheckpoint(kfold_weights_path, monitor='val_loss', save_best_only=True, verbose=0),
]
model.fit(X_train.astype('float32'), Y_train, batch_size=batch_size, nb_epoch=nb_epoch,
      shuffle=True, verbose=1, validation_data=(X_valid, Y_valid),
      callbacks=callbacks)

It tells Keras to stop training when the loss has not improved for 2 epochs. But I want to stop training once the loss becomes smaller than some constant "THR":

if val_loss < THR:
    break

I saw in the documentation that it is possible to write your own callback: http://keras.io/callbacks/ But I did not find how to stop the training process. I need advice.


ZFTurbo

I found the answer. I looked into the Keras sources and found the code for EarlyStopping. I made my own callback, based on it:

import warnings

from keras.callbacks import Callback


class EarlyStoppingByLossVal(Callback):
    def __init__(self, monitor='val_loss', value=0.00001, verbose=0):
        super(EarlyStoppingByLossVal, self).__init__()
        self.monitor = monitor
        self.value = value
        self.verbose = verbose

    def on_epoch_end(self, epoch, logs={}):
        current = logs.get(self.monitor)
        if current is None:
            warnings.warn("Early stopping requires %s available!" % self.monitor, RuntimeWarning)
            return

        # Stop training once the monitored value drops below the threshold.
        if current < self.value:
            if self.verbose > 0:
                print("Epoch %05d: early stopping THR" % epoch)
            self.model.stop_training = True

And usage:

callbacks = [
    EarlyStoppingByLossVal(monitor='val_loss', value=0.00001, verbose=1),
    # EarlyStopping(monitor='val_loss', patience=2, verbose=0),
    ModelCheckpoint(kfold_weights_path, monitor='val_loss', save_best_only=True, verbose=0),
]
model.fit(X_train.astype('float32'), Y_train, batch_size=batch_size, nb_epoch=nb_epoch,
      shuffle=True, verbose=1, validation_data=(X_valid, Y_valid),
      callbacks=callbacks)

Just in case it is useful to someone: in my case I used monitor='loss' and it worked well.
Looks like Keras has been updated. The EarlyStopping callback now has min_delta built in. No need to hack the source code anymore, yay! stackoverflow.com/a/41459368/3345375
After rereading the question and answers, I need to correct myself: min_delta means "stop early if there is not enough improvement per epoch (or per several epochs)." However, the OP asked how to "stop early when the loss drops below a certain level."
NameError: name 'Callback' is not defined... How do I fix it?
Elijah, try this: from keras.callbacks import Callback
devin

The keras.callbacks.EarlyStopping callback does have a min_delta argument. From the Keras documentation:

min_delta: minimum change in the monitored quantity to qualify as an improvement, i.e. an absolute change of less than min_delta will count as no improvement.


For reference, here are the docs for an earlier version of Keras (1.1.0), which did not yet include the min_delta argument: faroit.github.io/keras-docs/1.1.0/callbacks/#earlystopping
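
For illustration, here is a minimal sketch of combining min_delta with patience; the threshold and patience values below are arbitrary, and the model and data names are assumed from the question:

from keras.callbacks import EarlyStopping

# Illustrative values only: stop when val_loss has not improved by at
# least 0.001 for 3 consecutive epochs.
early_stop = EarlyStopping(monitor='val_loss', min_delta=0.001, patience=3, verbose=1)

model.fit(X_train, Y_train, validation_data=(X_valid, Y_valid), callbacks=[early_stop])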
How can I make it not stop until the lack of a min_delta improvement has persisted for several epochs?
There is another parameter of EarlyStopping called patience: the number of epochs with no improvement after which training will be stopped.
While min_delta may be useful, it does not fully solve the problem of early stopping by an absolute value. Instead, min_delta works as a difference between values of the monitored quantity.
1''

One solution is to call model.fit(nb_epoch=1, ...) inside a for loop; then you can put a break statement inside the loop and do whatever other custom control flow you want, as sketched below.
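
A minimal sketch of that loop, assuming the names (model, X_train, Y_train, X_valid, Y_valid, batch_size, nb_epoch) from the question and a threshold THR:

THR = 0.00001  # stop once the validation loss drops below this value

for epoch in range(nb_epoch):
    history = model.fit(X_train.astype('float32'), Y_train, batch_size=batch_size,
                        nb_epoch=1, shuffle=True, verbose=1,
                        validation_data=(X_valid, Y_valid))
    val_loss = history.history['val_loss'][-1]
    if val_loss < THR:
        break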


It would be nice if they made a callback that accepts a single function that can do this.
Nicolas Gervais

I solved the same problem with a custom callback.

In the following custom callback code, assign THR the value at which you want to stop training, and add the callback to your model.

from keras.callbacks import Callback

class stopAtLossValue(Callback):

    def on_batch_end(self, batch, logs={}):
        THR = 0.03  # Assign THR the value at which you want to stop training.
        if logs.get('loss') <= THR:
            self.model.stop_training = True

Suvo

While going through the TensorFlow in Practice specialization, I learned a very elegant technique. It is just a slight modification of the accepted answer.

Let's take our favorite MNIST data as an example.

import tensorflow as tf

class new_callback(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs={}):
        if logs.get('accuracy') > 0.90:  # select the accuracy
            print("\n !!! 90% accuracy, no further training !!!")
            self.model.stop_training = True

mnist = tf.keras.datasets.mnist

(x_train, y_train),(x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0 #normalize

callbacks = new_callback()

# model = tf.keras.models.Sequential([# define your model here])

model.compile(optimizer=tf.optimizers.Adam(),
          loss='sparse_categorical_crossentropy',
          metrics=['accuracy'])
model.fit(x_train, y_train, epochs=10, callbacks=[callbacks])

So here I set metrics=['accuracy'], and accordingly in the callback class the condition is set on 'accuracy' > 0.90.

You can pick any metric and monitor your training like in this example. Most importantly, you can set different conditions for different metrics and use them simultaneously, as sketched below.
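
As a hedged sketch (not from the original answer), a callback that checks two conditions at once could look like this, using the same imports as above; the 0.90 accuracy and 0.30 loss thresholds are arbitrary:

class multi_metric_callback(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs={}):
        # Stop only when both the accuracy and the loss targets are met.
        if logs.get('accuracy') > 0.90 and logs.get('loss') < 0.30:
            print("\n !!! accuracy and loss targets reached, no further training !!!")
            self.model.stop_training = True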

Hope this helps!


Note that the method name must be on_epoch_end, or Keras will not call it.
Juan Antonio Barragan

For me, the model would only stop training if I added a return statement after setting the stop_training attribute to True, because I was calling self.model.evaluate afterwards. So either make sure stop_training = True is placed at the end of the function, or add a return statement.

def on_epoch_end(self, batch, logs):
    self.epoch += 1
    self.stoppingCounter += 1
    print('\nstopping counter \n', self.stoppingCounter)

    # Stop training if there hasn't been any improvement in 'patience' epochs
    if self.stoppingCounter >= self.patience:
        self.model.stop_training = True
        return

    # Test on an additional set if there is one
    if self.testingOnAdditionalSet:
        evaluation = self.model.evaluate(self.val2X, self.val2Y, verbose=0)
        self.validationLoss2.append(evaluation[0])
        self.validationAcc2.append(evaluation[1])

Nicolas Gervais

If you are using a custom training loop, you can use a collections.deque, which is a "rolling" list that you can append to; the left-most items are popped out once the list grows longer than maxlen. Here is the key line, and how it is used in the loop:

loss_history = deque(maxlen=early_stopping + 1)

for epoch in range(epochs):
    fit(epoch)
    loss_history.append(test_loss.result().numpy())
    if len(loss_history) > early_stopping and loss_history.popleft() < min(loss_history):
        break

Here is a full example:

import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
import tensorflow_datasets as tfds
import tensorflow as tf
from tensorflow.keras.layers import Dense
from collections import deque

data, info = tfds.load('iris', split='train', as_supervised=True, with_info=True)

data = data.map(lambda x, y: (tf.cast(x, tf.int32), y))

train_dataset = data.take(120).batch(4)
test_dataset = data.skip(120).take(30).batch(4)

model = tf.keras.models.Sequential([
    Dense(8, activation='relu'),
    Dense(16, activation='relu'),
    Dense(info.features['label'].num_classes)])

loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

train_loss = tf.keras.metrics.Mean()
test_loss = tf.keras.metrics.Mean()

train_acc = tf.keras.metrics.SparseCategoricalAccuracy()
test_acc = tf.keras.metrics.SparseCategoricalAccuracy()

opt = tf.keras.optimizers.Adam(learning_rate=1e-3)


@tf.function
def train_step(inputs, labels):
    with tf.GradientTape() as tape:
        logits = model(inputs, training=True)
        loss = loss_object(labels, logits)

    gradients = tape.gradient(loss, model.trainable_variables)
    opt.apply_gradients(zip(gradients, model.trainable_variables))
    train_loss(loss)
    train_acc(labels, logits)


@tf.function
def test_step(inputs, labels):
    logits = model(inputs, training=False)
    loss = loss_object(labels, logits)
    test_loss(loss)
    test_acc(labels, logits)


def fit(epoch):
    template = 'Epoch {:>2} Train Loss {:.3f} Test Loss {:.3f} ' \
               'Train Acc {:.2f} Test Acc {:.2f}'

    train_loss.reset_states()
    test_loss.reset_states()
    train_acc.reset_states()
    test_acc.reset_states()

    for X_train, y_train in train_dataset:
        train_step(X_train, y_train)

    for X_test, y_test in test_dataset:
        test_step(X_test, y_test)

    print(template.format(
        epoch + 1,
        train_loss.result(),
        test_loss.result(),
        train_acc.result(),
        test_acc.result()
    ))


def main(epochs=50, early_stopping=10):
    loss_history = deque(maxlen=early_stopping + 1)

    for epoch in range(epochs):
        fit(epoch)
        loss_history.append(test_loss.result().numpy())
        if len(loss_history) > early_stopping and loss_history.popleft() < min(loss_history):
            print(f'\nEarly stopping. No validation loss '
                  f'improvement in {early_stopping} epochs.')
            break

if __name__ == '__main__':
    main(epochs=250, early_stopping=10)
Epoch  1 Train Loss 1.730 Test Loss 1.449 Train Acc 0.33 Test Acc 0.33
Epoch  2 Train Loss 1.405 Test Loss 1.220 Train Acc 0.33 Test Acc 0.33
Epoch  3 Train Loss 1.173 Test Loss 1.054 Train Acc 0.33 Test Acc 0.33
Epoch  4 Train Loss 1.006 Test Loss 0.935 Train Acc 0.33 Test Acc 0.33
Epoch  5 Train Loss 0.885 Test Loss 0.846 Train Acc 0.33 Test Acc 0.33
...
Epoch 89 Train Loss 0.196 Test Loss 0.240 Train Acc 0.89 Test Acc 0.87
Epoch 90 Train Loss 0.195 Test Loss 0.239 Train Acc 0.89 Test Acc 0.87
Epoch 91 Train Loss 0.195 Test Loss 0.239 Train Acc 0.89 Test Acc 0.87
Epoch 92 Train Loss 0.194 Test Loss 0.239 Train Acc 0.90 Test Acc 0.87

Early stopping. No validation loss improvement in 10 epochs.