I want to plot the output of this simple neural network:
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
history = model.fit(x_test, y_test, epochs=10, validation_split=0.2, shuffle=True)
model.test_on_batch(x_test, y_test)
model.metrics_names
I have already plotted the training and validation accuracy and loss:
print(history.history.keys())
# "Accuracy"
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()
# "Loss"
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()
Now I want to add and plot the accuracy on the test set from model.test_on_batch(x_test, y_test), but from model.metrics_names I get the same value 'acc' that I use for plotting the training accuracy with plt.plot(history.history['acc']). How can I plot the accuracy of the test set?
import keras
from matplotlib import pyplot as plt
history = model1.fit(train_x, train_y, validation_split=0.1, epochs=50, batch_size=4)
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='upper left')
plt.show()
https://i.stack.imgur.com/FmcMs.png
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='upper left')
plt.show()
https://i.stack.imgur.com/3HL0C.png
It is the same because you are training on the test set, not on the training set. Don't do that; just train on the training set:
history = model.fit(x_test, y_test, epochs=10, validation_split=0.2, shuffle=True)
should become:
history = model.fit(x_train, y_train, epochs=10, validation_split=0.2, shuffle=True)
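Once training uses the training set, the test-set numbers come from model.evaluate, which returns scalars in the same order as model.metrics_names. A minimal sketch of that pairing, with both return values mocked as plain lists (a real run would use the actual Keras outputs):

```python
# Sketch: model.evaluate(x_test, y_test) returns a list of scalars aligned
# with model.metrics_names, so zipping them gives a readable report.
# Both lists below are mocked stand-ins for the real Keras return values.
metrics_names = ["loss", "acc"]   # what model.metrics_names might contain
results = [0.35, 0.88]            # what model.evaluate(x_test, y_test) might return

report = dict(zip(metrics_names, results))
print(report)  # {'loss': 0.35, 'acc': 0.88}
```

The same pattern works for any number of metrics, since Keras keeps the two lists in matching order.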
I'm a bit confused by the result of model.fit( ... ): I get loss, acc, val_loss and val_acc. I assume these represent the training and validation loss and accuracy, but where can I find the loss value for the test set?
model.evaluate(x_test, y_test)
model.metrics_names
I get acc, the same as for training. What am I doing wrong?
Try
pd.DataFrame(history.history).plot(figsize=(8,5))
plt.show()
This builds a single chart containing all the history metrics available for every dataset in the history. Example:
https://i.stack.imgur.com/LfFC3.png
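To make the one-liner above concrete, here is a sketch with a mocked history dict standing in for history.history (the real dict is filled in by model.fit): DataFrame.plot draws one line per column, i.e. one curve per logged metric.

```python
import pandas as pd

# Mocked stand-in for history.history after a 3-epoch run.
history_history = {
    "loss":     [0.90, 0.60, 0.40],
    "acc":      [0.55, 0.70, 0.82],
    "val_loss": [0.95, 0.70, 0.50],
    "val_acc":  [0.50, 0.65, 0.78],
}

df = pd.DataFrame(history_history)
print(list(df.columns))  # ['loss', 'acc', 'val_loss', 'val_acc']
# df.plot(figsize=(8, 5)) followed by plt.show() draws all four curves at once
```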
Validate the model on the test data as shown below, then plot the accuracy and loss:
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
history = model.fit(X_train, y_train, epochs=10, validation_data=(X_test, y_test), shuffle=True)
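With validation_data=(X_test, y_test), the 'val_*' entries in history.history are computed on the test set after every epoch, so plotting them plots the test curves. A sketch with a hypothetical history dict (a real run would read these values from history.history):

```python
# Hypothetical per-epoch log, shaped like history.history after fitting
# with validation_data=(X_test, y_test); 'val_*' here means "on the test set".
history_history = {
    "loss":     [0.90, 0.60, 0.40],
    "acc":      [0.55, 0.70, 0.82],
    "val_loss": [0.95, 0.70, 0.50],  # test-set loss per epoch
    "val_acc":  [0.50, 0.65, 0.78],  # test-set accuracy per epoch
}

# plt.plot(history_history['val_acc']) would draw the test-accuracy curve;
# the final entry is the accuracy after the last epoch.
print(history_history["val_acc"][-1])  # 0.78
```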
You can also do it like this...
from tensorflow.keras.callbacks import EarlyStopping

regressor.compile(optimizer='adam', loss='mean_squared_error', metrics=['accuracy'])
earlyStopCallBack = EarlyStopping(monitor='loss', patience=3)
history = regressor.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=EPOCHS, batch_size=BATCHSIZE, callbacks=[earlyStopCallBack])
For the plot, I like Plotly... so:
import plotly.graph_objects as go
from plotly.subplots import make_subplots
# Create figure with secondary y-axis
fig = make_subplots(specs=[[{"secondary_y": True}]])
# Add traces
fig.add_trace(
    go.Scatter(y=history.history['val_loss'], name="val_loss"),
    secondary_y=False,
)
fig.add_trace(
    go.Scatter(y=history.history['loss'], name="loss"),
    secondary_y=False,
)
fig.add_trace(
    go.Scatter(y=history.history['val_accuracy'], name="val accuracy"),
    secondary_y=True,
)
fig.add_trace(
    go.Scatter(y=history.history['accuracy'], name="accuracy"),
    secondary_y=True,
)
# Add figure title
fig.update_layout(
title_text="Loss/Accuracy of LSTM Model"
)
# Set x-axis title
fig.update_xaxes(title_text="Epoch")
# Set y-axes titles
fig.update_yaxes(title_text="<b>primary</b> Loss", secondary_y=False)
fig.update_yaxes(title_text="<b>secondary</b> Accuracy", secondary_y=True)
fig.show()
https://i.stack.imgur.com/dDpga.png
Neither way of handling it is wrong. Note that the Plotly chart has two scales, one for the loss and another for the accuracy.
plt.plot(history.history['acc']) and plt.plot(history.history['val_acc']) should be changed to plt.plot(history.history['accuracy']) and plt.plot(history.history['val_accuracy']) (note: I am using Keras version 2.2.4).

tf.version.VERSION gives me '2.4.1'. I used 'accuracy' as the key and still got KeyError: 'accuracy', but 'acc' works. If you use metrics=["acc"], you need to call history.history['acc']. If you use metrics=["categorical_accuracy"] together with loss="categorical_crossentropy", you must call history.history['categorical_accuracy'], and so on. See history.history.keys() for all the options.
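As a compact reference, that mapping can be sketched as a dictionary; each tuple lists the training key and its 'val_' counterpart (the exact spellings depend on your Keras/TF version, as discussed above):

```python
# Sketch: which history.history keys a given metrics=[...] entry produces.
# Spellings are version-dependent: older Keras logs 'acc', TF 2.x 'accuracy'.
metric_to_history_keys = {
    "acc":                  ("acc", "val_acc"),                     # older Keras
    "accuracy":             ("accuracy", "val_accuracy"),           # TF 2.x
    "categorical_accuracy": ("categorical_accuracy", "val_categorical_accuracy"),
}

train_key, val_key = metric_to_history_keys["accuracy"]
print(train_key, val_key)  # accuracy val_accuracy
```

When in doubt, printing history.history.keys() after a short fit is the quickest way to see which spellings your installed version uses.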