
How to return history of validation loss in Keras

Using Anaconda Python 2.7 on Windows 10.

I am training a language model using the Keras example:

# Imports used by this snippet (Keras 1.x-era paths, matching the example):
import random
import sys
import numpy as np
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Dropout
from keras.layers.recurrent import GRU

print('Build model...')
model = Sequential()
model.add(GRU(512, return_sequences=True, input_shape=(maxlen, len(chars))))
model.add(Dropout(0.2))
model.add(GRU(512, return_sequences=False))
model.add(Dropout(0.2))
model.add(Dense(len(chars)))
model.add(Activation('softmax'))

model.compile(loss='categorical_crossentropy', optimizer='rmsprop')

def sample(a, temperature=1.0):
    # helper function to sample an index from a probability array
    a = np.log(a) / temperature
    a = np.exp(a) / np.sum(np.exp(a))
    return np.argmax(np.random.multinomial(1, a, 1))


# train the model, output generated text after each iteration
for iteration in range(1, 3):
    print()
    print('-' * 50)
    print('Iteration', iteration)
    model.fit(X, y, batch_size=128, nb_epoch=1)
    start_index = random.randint(0, len(text) - maxlen - 1)

    for diversity in [0.2, 0.5, 1.0, 1.2]:
        print()
        print('----- diversity:', diversity)

        generated = ''
        sentence = text[start_index: start_index + maxlen]
        generated += sentence
        print('----- Generating with seed: "' + sentence + '"')
        sys.stdout.write(generated)

        for i in range(400):
            x = np.zeros((1, maxlen, len(chars)))
            for t, char in enumerate(sentence):
                x[0, t, char_indices[char]] = 1.

            preds = model.predict(x, verbose=0)[0]
            next_index = sample(preds, diversity)
            next_char = indices_char[next_index]

            generated += next_char
            sentence = sentence[1:] + next_char

            sys.stdout.write(next_char)
            sys.stdout.flush()
        print()

According to the Keras documentation, the model.fit method returns a History callback, which has a history attribute containing the lists of successive losses and other metrics.

hist = model.fit(X, y, validation_split=0.2)
print(hist.history)
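With a validation split, the printed dictionary should contain one list per metric, with one entry per epoch, shaped something like this (illustrative keys only; actual values depend on the run):

{'loss': [...], 'val_loss': [...]}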

After training my model, if I run print(model.history) I get the error:

 AttributeError: 'Sequential' object has no attribute 'history'

How do I return my model history after training my model with the above code?

UPDATE

The issue was twofold.

First, the History callback had to be defined:

from keras.callbacks import History 
history = History()

Second, it had to be passed through the callbacks option:

model.fit(X_train, Y_train, nb_epoch=5, batch_size=16, callbacks=[history])

But now if I print

print(history.History)

it returns

{}

even though I ran an iteration.

Could you specify whether you run this code from the console, or run your script from the command line (or an IDE)? Do you have access to the hist variable after training?
I'm running it from Anaconda. I have found a solution that lets me access the hist variable, but it always returns an empty dictionary ({}).
Is there a way to retrieve it after the model is fit? I.e., I trained the model but did not assign the result of model.fit() to a variable. Can I obtain the loss history somehow, or do I have to repeat the whole training process?
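In more recent versions of tf.keras, the model itself appears to keep a reference to the last History object after fit() returns, which would let you recover it without re-training. A minimal sketch under that assumption (older standalone Keras versions may not expose this attribute):

# Assumes TensorFlow's Keras, where fit() stores its History object on the model.
model.fit(X, y, validation_split=0.2, epochs=5)

hist = model.history           # History object from the last fit() call
print(hist.history['loss'])   # per-epoch training loss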

S
Sahil Mittal

Just an example, starting from

history = model.fit(X, Y, validation_split=0.33, nb_epoch=150, batch_size=10, verbose=0)

You can use

print(history.history.keys())

to list all data in history.

Then, you can print the history of validation loss like this:

print(history.history['val_loss'])

When I do this, I only get 'acc' and 'loss'; I do not see 'val_loss'.
@taga You would get both a "train_loss" and a "val_loss" if you had given the model both a training and a validation set to learn from: the training set would be used to fit the model, and the validation set could be used e.g. to evaluate the model on unseen data after each epoch and stop fitting if the validation loss ceases to decrease.
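As a minimal sketch of that idea (assuming a compiled model and arrays X and y; EarlyStopping is the built-in Keras callback for stopping on a stalled metric):

from keras.callbacks import EarlyStopping

# Stop training once val_loss has not improved for 3 consecutive epochs.
early_stop = EarlyStopping(monitor='val_loss', patience=3)

history = model.fit(X, y, validation_split=0.2, epochs=50,
                    callbacks=[early_stop])
print(history.history['val_loss'])  # one value per completed epoch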
c
craymichael

It's been solved.

The losses are only saved to the History across epochs. I was running my own iterations instead of using the Keras built-in epochs option.

So instead of doing 4 iterations, I now have

model.fit(......, nb_epoch = 4)

Now it returns the loss for each epoch run:

print(hist.history)
{'loss': [1.4358016599558268, 1.399221191623641, 1.381293383180471, 1.3758836857303727]}

R
Rami Alloush

The following simple code works great for me:

    from keras_tqdm import TQDMNotebookCallback  # third-party progress-bar callback used below

    seqModel = model.fit(x_train, y_train,
                         batch_size=batch_size,
                         epochs=num_epochs,
                         validation_data=(x_test, y_test),
                         shuffle=True,
                         verbose=0,
                         callbacks=[TQDMNotebookCallback()])  # for visualization

Make sure you assign the return value of fit to a variable. Then you can access that variable very easily:

# visualizing losses and accuracy
train_loss = seqModel.history['loss']
val_loss   = seqModel.history['val_loss']
train_acc  = seqModel.history['acc']
val_acc    = seqModel.history['val_acc']
xc         = range(num_epochs)

plt.figure()
plt.plot(xc, train_loss)
plt.plot(xc, val_loss)

Hope this helps. Source: https://keras.io/getting-started/faq/#how-can-i-record-the-training-validation-loss-accuracy-at-each-epoch


M
Marcin Możejko

The dictionary with the histories of "acc", "loss", etc. is available and saved in the hist.history variable.


If I type "hist" into the console it only gives me the code I've run this session.
And how about hist.history?
Hi Marcin, I solved it. The issue was that the losses only save over epochs while I was running external iterations, so with each iteration my history was cleared.
R
Roozbeh Zabihollahi

I have also found that you can use verbose=2 to make Keras print out the losses:

history = model.fit(X, Y, validation_split=0.33, nb_epoch=150, batch_size=10, verbose=2)

And that would print nice lines like this:

Epoch 1/1
 - 5s - loss: 0.6046 - acc: 0.9999 - val_loss: 0.4403 - val_acc: 0.9999

According to their documentation:

verbose: 0, 1, or 2. Verbosity mode. 0 = silent, 1 = progress bar, 2 = one line per epoch.

h
horseshoe

For plotting the loss directly, the following works:

import matplotlib.pyplot as plt
...    
model_ = model.fit(X, Y, epochs=..., verbose=1)
plt.plot(list(model_.history.values())[0],'k-o')
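Indexing .values() depends on the dictionary's key order; a slightly more explicit variant selects the metric by name instead (assuming the default 'loss' key is present):

# Plot the training loss by key rather than by position.
plt.plot(model_.history['loss'], 'k-o')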

J
Jimmy

Another option is CSVLogger: https://keras.io/callbacks/#csvlogger. It creates a CSV file, appending the result of each epoch. Even if you interrupt training, you still get to see how it evolved.
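A minimal sketch of how that might look (the filename training.log is arbitrary; append=True keeps earlier rows if training restarts):

from keras.callbacks import CSVLogger

# Write one row of metrics per epoch to training.log.
csv_logger = CSVLogger('training.log', append=True)
model.fit(X, y, validation_split=0.2, epochs=10, callbacks=[csv_logger])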


R
Raven Cheuk

Actually, you can also do it with the iteration method, since sometimes we need to use iterations instead of the built-in epochs option to visualize the training results after each iteration.

history = []  # create an empty list to hold the losses
for iteration in range(1, 3):
    print()
    print('-' * 50)
    print('Iteration', iteration)
    result = model.fit(X, y, batch_size=128, nb_epoch=1)  # obtain the loss from each training run
    history.append(result.history['loss'])  # append this iteration's loss to the list
    start_index = random.randint(0, len(text) - maxlen - 1)
print(history)

This way allows you to get the loss you want while maintaining your iteration method.


T
Timus

Thanks to Alloush,

The following parameter must be included in model.fit():

validation_data = (x_test, y_test)

If it is not defined, val_acc and val_loss will not exist in the output.
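A minimal sketch (x_test and y_test are assumed to exist; the exact key names vary by Keras version, e.g. 'acc' vs. 'accuracy'):

hist = model.fit(x_train, y_train, epochs=5,
                 validation_data=(x_test, y_test))
print(hist.history.keys())  # e.g. dict_keys(['loss', 'acc', 'val_loss', 'val_acc'])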


Welcome to SO! When you are about to answer an old question (this one is over 4 years old) that already has an accepted answer (this is the case here), please ask yourself: do I really have a substantial improvement to offer? If not, consider refraining from answering.
Respectfully, @Timus, code changes significantly over 4 years, and previous solutions that may have worked fine back in 2016 are not guaranteed to work in 2020 on different versions of Tensorflow. So answering an old question in such a way that it works with the latest version of a framework, I would argue, actually does offer a substantial improvement.
@JohnnyUtah I didn't judge the offered solution, downvoting never crossed my mind (I don't have the knowledge)! I just wanted to point out that the answer should actually offer something new.
E
Engr Ali

For those who still got an error like me:

Convert model.fit_generator() to model.fit()
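A minimal sketch of that change (assuming TensorFlow 2.x, where fit() accepts generators directly and fit_generator() is deprecated; train_gen and val_gen are hypothetical generator names):

# Before (deprecated in TensorFlow 2.x):
# history = model.fit_generator(train_gen, epochs=5, validation_data=val_gen)

# After: fit() accepts the same generator / Sequence inputs.
history = model.fit(train_gen, epochs=5, validation_data=val_gen)
print(history.history['val_loss'])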


A
Ali karimi

You can get the loss and metrics as shown below. The returned history object holds a dictionary, so you can access the model loss (val_loss) or accuracy (val_accuracy) like this:

model_hist = model.fit(train_data, train_lbl, epochs=my_epoch,
                       batch_size=sel_batch_size, validation_data=val_data)

acc      = model_hist.history['accuracy']
val_acc  = model_hist.history['val_accuracy']
loss     = model_hist.history['loss']
val_loss = model_hist.history['val_loss']

Don't forget that to get val_loss or val_accuracy, you should specify validation data in the fit function.


How is this different from the code that the asker included? Can you explain why this should work while it didn't for the asker?
@aaossa I edited the code for more clarity: in the first part of the question, the asker accessed the history in the wrong way, and in the update part the asker did not include validation_data in the fit function, which caused val_loss to be absent. You can try the mentioned solution to check that it works.