
How to log Keras loss output to a file

When you run a Keras neural network model you might see something like this in the console:

Epoch 1/3
   6/1000 [..............................] - ETA: 7994s - loss: 5111.7661

As time goes on the loss hopefully improves. I want to log these losses to a file over time so that I can learn from them. I have tried:

logging.basicConfig(filename='example.log', filemode='w', level=logging.DEBUG)

but this doesn't work. I am not sure what level of logging I need in this situation.

I have also tried using a callback, as in:

import keras

def generate_train_batch():
    while 1:
        for i in xrange(0, dset_X.shape[0], 3):
            yield dset_X[i:i+3, :, :, :], dset_y[i:i+3, :, :]

class LossHistory(keras.callbacks.Callback):
    def on_train_begin(self, logs={}):
        self.losses = []

    def on_batch_end(self, batch, logs={}):
        self.losses.append(logs.get('loss'))

logloss = LossHistory()
# pass the callback object itself, not a string
colorize.fit_generator(generate_train_batch(), samples_per_epoch=1000,
                       nb_epoch=3, callbacks=[logloss])

but obviously this isn't writing to a file. Whatever the method (a callback, the logging module, or anything else), I would love to hear your solutions for logging the loss of a Keras neural network to a file. Thanks!

A more complex solution might be to use TensorFlow backend and output logs that can be analyzed with TensorBoard. But that's a different question :-)

Moffee

You can use the CSVLogger callback.

For example:

from keras.callbacks import CSVLogger

csv_logger = CSVLogger('log.csv', append=True, separator=';')
model.fit(X_train, Y_train, callbacks=[csv_logger])

Look at: Keras Callbacks


Thanks! I was looking for a way to checkpoint the training status that didn't rely on the training to actually finish (if something fails along the way or you run out of compute time on an HPC you never get the history object and you can't recover from that), and this is precisely it.
A little more detail (not included in the Keras docs): I get output in the following column order per line of the produced CSV file: "epoch, train_loss, learning_rate, train_metric1, train_metric2, val_loss, val_metric1, val_metric2, ...", where the loss is the one specified in model.compile() and metric1, metric2, etc. are the metrics passed to the metrics argument, e.g. model.compile(loss='mse', metrics=[metric1, metric2, metric3], ...). @jjs - to save the weights of models during training, not just logs, you could have a look at the Keras ModelCheckpoint callback. It works similarly to the CSVLogger.
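For reference, a minimal pairing of the two callbacks might look like the sketch below; the file paths, the validation arrays (X_val, Y_val), and the save settings are placeholders, not taken from the comment above:

from keras.callbacks import CSVLogger, ModelCheckpoint

csv_logger = CSVLogger('log.csv', append=True)
# save the weights after every epoch; {epoch:02d} is filled in by Keras
checkpoint = ModelCheckpoint('weights.{epoch:02d}.hdf5', monitor='val_loss',
                             save_weights_only=True)

model.fit(X_train, Y_train, validation_data=(X_val, Y_val),
          callbacks=[csv_logger, checkpoint])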
Hi! Is there a way to save logs for only N epochs, not all of them? Thanks. @Alex
@UpasanaMittal Of course, you can do this with a LambdaCallback.
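One reading of the question above is logging only every Nth epoch; the sketch below assumes that interpretation, and N, the file name, and the logged fields are all placeholders:

from keras.callbacks import LambdaCallback

N = 5  # log only every 5th epoch
log_file = open('every_n_epochs.log', 'w', buffering=1)

every_n_logger = LambdaCallback(
    on_epoch_end=lambda epoch, logs: (
        log_file.write('%d,%f\n' % (epoch, logs['loss'])) if epoch % N == 0 else None),
    on_train_end=lambda logs: log_file.close())

# model.fit(..., callbacks=[every_n_logger])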
Moffee

There is a simple solution to your problem. Every time one of the fit methods is used, it returns a special History callback. It has a history attribute, which is a dictionary of all the metrics recorded after every epoch. So to get the list of loss function values after every epoch, you can easily do:

history_callback = model.fit(params...)
loss_history = history_callback.history["loss"]

It's easy to save such a list to a file (e.g. by converting it to a NumPy array and using the savetxt method).

UPDATE:

Try:

import numpy
numpy_loss_history = numpy.array(loss_history)
numpy.savetxt("loss_history.txt", numpy_loss_history, delimiter=",")

UPDATE 2:

The solution to the problem of recording the loss after every batch is described in the Keras Callbacks documentation, in the "Create a callback" paragraph.
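As a rough illustration of that idea (the class name, file name, and output format below are my own choices, not taken from the Keras docs), a custom callback along these lines could append the loss to a text file after every batch:

import keras

class BatchLossLogger(keras.callbacks.Callback):
    """Appends the loss of every batch to a plain-text file as training runs."""

    def __init__(self, path='batch_loss.log'):
        super(BatchLossLogger, self).__init__()
        self.path = path

    def on_train_begin(self, logs=None):
        # truncate the file at the start of training; line-buffered so it updates live
        self.file = open(self.path, 'w', buffering=1)

    def on_batch_end(self, batch, logs=None):
        logs = logs or {}
        self.file.write('%d,%f\n' % (batch, logs.get('loss', float('nan'))))

    def on_train_end(self, logs=None):
        self.file.close()

# model.fit(..., callbacks=[BatchLossLogger('batch_loss.log')])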


Hmm. Can you show how to integrate this into the code in the question? I tried this and no file was generated. Perhaps this only fills the log file after training has completed? I want something that can log the loss throughout the training process, so that I can learn from it without waiting for the entire training run to finish.
OK, with np.savetxt("loss_history.txt", numpy_loss_history, delimiter=","), it works. Unfortunately it only logs the loss after each epoch. I wonder if I can get it to do so after each batch. Any ideas?
Is it really called a callback? According to your code, it is just the return value of a method (function).
ValueError: Expected 1D or 2D array, got 0D array instead
Can I have the log in image format instead, such as a .png file?
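For what it's worth, a loss curve can be rendered to a .png with matplotlib; this is only a minimal sketch, assuming the loss_history list from the answer above:

import matplotlib
matplotlib.use('Agg')  # render to a file, no display needed
import matplotlib.pyplot as plt

plt.plot(loss_history)  # the per-epoch list from history_callback.history["loss"]
plt.xlabel('epoch')
plt.ylabel('loss')
plt.savefig('loss_history.png')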
Benjamin Striner

Old question, but here goes. The Keras history output maps perfectly onto a pandas DataFrame.

If you want the entire history written to CSV in one line:

pandas.DataFrame(model.fit(...).history).to_csv("history.csv")
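A slightly expanded variant of the same idea keeps a handle on the History object instead of chaining off model.fit; the fit arguments here are placeholders:

import pandas as pd

history = model.fit(X_train, Y_train, epochs=10)
pd.DataFrame(history.history).to_csv("history.csv", index=False)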

Cheers


By mistake, a few months ago, I downvoted this answer, even though it is completely right and was very useful to me. Do you know how I can change my vote from down to up? Sorry! In the meantime, I leave this note: this answer is OK! Give it a try!
Nagabhushan Baddi

You can redirect the sys.stdout object to a file before calling model.fit and reassign it to the standard console afterwards, as follows:

import sys

oldStdout = sys.stdout
file = open('logFile', 'w')
sys.stdout = file          # console output now goes to the file
model.fit(Xtrain, Ytrain)
sys.stdout = oldStdout     # restore normal console output
file.close()
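A variation on the same idea, shown here only as a sketch: contextlib.redirect_stdout restores stdout automatically, even if model.fit raises an exception (the file name and fit arguments are placeholders):

import contextlib

with open('logFile', 'w') as f, contextlib.redirect_stdout(f):
    model.fit(Xtrain, Ytrain)  # console progress output is written to the file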

Won't this produce a lot of garbage if verbose is set to True, as it is by default?
Rishabh Jain

In TensorFlow 2.0, it is quite easy to get the loss and accuracy of each epoch, because model.fit returns a History object. Its History.history attribute is a record of training loss values and metric values at successive epochs, as well as validation loss values and validation metric values.

If you have validation data:

History = model.fit(trainX, trainY, validation_data=(testX, testY), batch_size=100, epochs=epochs, verbose=1)
train_loss = History.history['loss']
val_loss = History.history['val_loss']
acc = History.history['accuracy']
val_acc = History.history['val_accuracy']

If you don't have validation data:

History = model.fit(trainX, trainY, batch_size=100, epochs=epochs, verbose=1)
train_loss = History.history['loss']
acc = History.history['accuracy']

Then, to save the list to a text file, use the code below:

import numpy as np

train_loss = np.array(train_loss)  # the per-epoch list from History.history['loss']
np.savetxt("train_loss.txt", train_loss, delimiter=",")

Sam Ritchie

The best option is to create a LambdaCallback:

import json
from keras.callbacks import LambdaCallback

txt_log = open('loss_log.txt', mode='wt', buffering=1)

save_op_callback = LambdaCallback(
    # serialize the dict to a string; a dict cannot be concatenated with '\n'
    on_epoch_end=lambda epoch, logs: txt_log.write(
        json.dumps({'epoch': epoch, 'loss': float(logs['loss'])}) + '\n'),
    on_train_end=lambda logs: txt_log.close()
)

Now, just add it to the callbacks list in the model.fit call:

model.fit(..., callbacks=[save_op_callback])