
Keras - Plot training, validation and test set accuracy

I want to plot the output of this simple neural network:

model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
history = model.fit(x_test, y_test, nb_epoch=10, validation_split=0.2, shuffle=True)

model.test_on_batch(x_test, y_test)
model.metrics_names

I have plotted accuracy and loss of training and validation:

print(history.history.keys())
#  "Accuracy"
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()
# "Loss"
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()

Now I want to add and plot the test set's accuracy from model.test_on_batch(x_test, y_test), but from model.metrics_names I obtain the same value 'acc' used for plotting accuracy on the training data, plt.plot(history.history['acc']). How can I plot the test set's accuracy?

Probable source of the original code: Display Deep Learning Model Training History in Keras

adiga
import keras
from matplotlib import pyplot as plt
history = model1.fit(train_x, train_y, validation_split=0.1, epochs=50, batch_size=4)
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='upper left')
plt.show()

https://i.stack.imgur.com/FmcMs.png

plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='upper left')
plt.show()

https://i.stack.imgur.com/3HL0C.png


Just a small addition: in updated Keras and TensorFlow 2.0, the keywords acc and val_acc have been changed to accuracy and val_accuracy respectively. So plt.plot(history.history['acc']) and plt.plot(history.history['val_acc']) should be changed to plt.plot(history.history['accuracy']) and plt.plot(history.history['val_accuracy']). (N.B. I am using Keras version 2.2.4.)
I realized this and came back here to comment the same, and I see you have already done that. That is why SO is so great!
@EMT Whether to use 'accuracy' or 'acc' does not depend on the TensorFlow version; it depends on your own naming. tf.version.VERSION gives me '2.4.1'. I used 'accuracy' as the key and still got KeyError: 'accuracy', but 'acc' worked. If you use metrics=["acc"], you will need to call history.history['acc']. If you use metrics=["categorical_accuracy"] in the case of loss="categorical_crossentropy", you would have to call history.history['categorical_accuracy'], and so on. See history.history.keys() for all options.
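
A minimal sketch of that behaviour (the model and data names here are only placeholders), showing that the history keys simply follow the metric names passed to compile:

model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
history = model.fit(x_train, y_train, validation_split=0.2, epochs=10)
print(history.history.keys())
# dict_keys(['loss', 'accuracy', 'val_loss', 'val_accuracy'])
# With metrics=['acc'] the keys would instead be 'acc' and 'val_acc'.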
Dr. Snoopy

It is the same because you are training on the test set, not on the training set. Don't do that; just train on the training set:

history = model.fit(x_test, y_test, nb_epoch=10, validation_split=0.2, shuffle=True)

Change into:

history = model.fit(x_train, y_train, nb_epoch=10, validation_split=0.2, shuffle=True)

I'm sorry, I have always used the training set to train the NN; it was an oversight. I am new to machine learning, and I am a little bit confused about the result of model.fit(...): I get loss, acc, val_loss and val_acc. I suppose those values represent the loss and accuracy on training and validation, but where can I find the loss on the test set?
@Simone You can use model.evaluate on the test set to get the loss and metrics over the test set. Just make sure you use the right variables.
I have used model.evaluate, and I get accuracy and loss, but I can't plot them because I can't distinguish the accuracy obtained on training from the accuracy obtained on test.
@Simone What do you mean can't distinguish?
I should have an accuracy on training, an accuracy on validation, and an accuracy on test; but I get only two values, val_acc and acc, for validation and training respectively. From model.evaluate(x_test, y_test) and model.metrics_names I get acc, the same as for training. What am I doing wrong?
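
For reference, a minimal sketch of the workflow described in these comments, assuming x_train/y_train and a held-out x_test/y_test (on newer Keras versions the history keys are 'accuracy'/'val_accuracy' instead of 'acc'/'val_acc'):

# Train on the training data only; Keras carves out the validation split itself.
history = model.fit(x_train, y_train, epochs=10, validation_split=0.2, shuffle=True)

# Evaluate on the held-out test data; returns [loss, accuracy] for this compile setup.
test_loss, test_acc = model.evaluate(x_test, y_test, verbose=0)

# Plot the training and validation curves, plus the single test accuracy as a reference line.
plt.plot(history.history['acc'], label='train')
plt.plot(history.history['val_acc'], label='validation')
plt.axhline(test_acc, color='r', linestyle='--', label='test')
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(loc='upper left')
plt.show()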
questionto42standswithUkraine

Try

import pandas as pd
from matplotlib import pyplot as plt
pd.DataFrame(history.history).plot(figsize=(8, 5))
plt.show()

This plots every metric recorded in the history (training and validation) in a single graph. Example:

https://i.stack.imgur.com/LfFC3.png


Ashok Kumar Jayaraman

Validate the model on the test data as shown below, and then plot the accuracy and loss:

model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
history = model.fit(X_train, y_train, nb_epoch=10, validation_data=(X_test, y_test), shuffle=True)
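
With validation_data=(X_test, y_test), the val_loss and val_accuracy entries in history.history (val_acc on older Keras versions) are computed on the test data, so they can be plotted with the same recipe as in the earlier answers; a minimal sketch:

plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()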

Tim Seed

You could also do it this way ...

from tensorflow.keras.callbacks import EarlyStopping

regressor.compile(optimizer='adam', loss='mean_squared_error', metrics=['accuracy'])
earlyStopCallBack = EarlyStopping(monitor='loss', patience=3)
history = regressor.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=EPOCHS, batch_size=BATCHSIZE, callbacks=[earlyStopCallBack])

For the plotting - I like plotly ... so

import plotly.graph_objects as go
from plotly.subplots import make_subplots

# Create figure with secondary y-axis
fig = make_subplots(specs=[[{"secondary_y": True}]])

# Add traces
fig.add_trace(
    go.Scatter( y=history.history['val_loss'], name="val_loss"),
    secondary_y=False,
)

fig.add_trace(
    go.Scatter( y=history.history['loss'], name="loss"),
    secondary_y=False,
)

fig.add_trace(
    go.Scatter( y=history.history['val_accuracy'], name="val accuracy"),
    secondary_y=True,
)

fig.add_trace(
    go.Scatter( y=history.history['accuracy'], name="accuracy"),
    secondary_y=True,
)

# Add figure title
fig.update_layout(
    title_text="Loss/Accuracy of LSTM Model"
)

# Set x-axis title
fig.update_xaxes(title_text="Epoch")

# Set y-axes titles
fig.update_yaxes(title_text="<b>primary</b> Loss", secondary_y=False)
fig.update_yaxes(title_text="<b>secondary</b> Accuracy", secondary_y=True)

fig.show()

https://i.stack.imgur.com/dDpga.png

Nothing wrong with either of the preceding methods. Please note that the Plotly graph has two scales: one for loss, the other for accuracy.

