
Has anyone gotten "AttributeError: 'str' object has no attribute 'decode'" while loading a saved Keras model?

After training, I saved both the complete Keras model and the weights alone using

model.save_weights(MODEL_WEIGHTS) and model.save(MODEL_NAME)

The model and the weights were saved successfully, without errors. I can load the weights just fine with model.load_weights, but when I try to load the saved model via load_model, I get the error:

File "C:/Users/Rizwan/model_testing/model_performance.py", line 46, in <module>
Model2 = load_model('nasnet_RS2.h5',custom_objects={'euc_dist_keras': euc_dist_keras})
File "C:\Users\Rizwan\AppData\Roaming\Python\Python36\site-packages\keras\engine\saving.py", line 419, in load_model
model = _deserialize_model(f, custom_objects, compile)
File "C:\Users\Rizwan\AppData\Roaming\Python\Python36\site-packages\keras\engine\saving.py", line 321, in _deserialize_model
optimizer_weights_group['weight_names']]
File "C:\Users\Rizwan\AppData\Roaming\Python\Python36\site-packages\keras\engine\saving.py", line 320, in <listcomp>
n.decode('utf8') for n in
AttributeError: 'str' object has no attribute 'decode'

I have never received this error before and have always been able to load my models successfully. I am using Keras 2.2.4 with the TensorFlow backend, Python 3.6. My training code is:

from keras_preprocessing.image import ImageDataGenerator
from keras import backend as K
from keras.models import load_model, Model
from keras.layers import Dense
from keras.applications.nasnet import NASNetMobile
from keras.callbacks import ReduceLROnPlateau, TensorBoard, ModelCheckpoint, EarlyStopping
import pandas as pd

MODEL_NAME = "nasnet_RS2.h5"
MODEL_WEIGHTS = "nasnet_RS2_weights.h5"
def euc_dist_keras(y_true, y_pred):
    return K.sqrt(K.sum(K.square(y_true - y_pred), axis=-1, keepdims=True))


def main():
    # Here, we initialize the "NASNetMobile" model type and customize the final
    # feature regressor layer.
    # NASNet is a neural network architecture developed by Google.
    # This architecture is specialized for transfer learning, and was discovered via Neural Architecture Search.
    # NASNetMobile is a smaller version of NASNet.
    model = NASNetMobile()
    model = Model(model.input, Dense(1, activation='linear', kernel_initializer='normal')(model.layers[-2].output))

    # model = load_model('current_best.hdf5', custom_objects={'euc_dist_keras': euc_dist_keras})

    # This model will use the "Adam" optimizer.
    model.compile("adam", euc_dist_keras)
    lr_callback = ReduceLROnPlateau(monitor='val_loss', factor=0.2, patience=5, min_lr=0.003)
    # This callback will log model stats to Tensorboard.
    tb_callback = TensorBoard()
    # This callback will checkpoint the best model at every epoch.
    mc_callback = ModelCheckpoint(filepath='current_best_mem3.h5', verbose=1, save_best_only=True)
    es_callback = EarlyStopping(monitor='val_loss', min_delta=0, patience=4, verbose=0, mode='auto', baseline=None, restore_best_weights=True)

    # This is the train DataSequence.
    # These are the callbacks.
    # callbacks = [lr_callback, tb_callback, mc_callback]
    callbacks = [lr_callback, tb_callback, es_callback]

    train_pd = pd.read_csv("./train3.txt", delimiter=" ", names=["id", "label"], index_col=None)
    test_pd = pd.read_csv("./val3.txt", delimiter=" ", names=["id", "label"], index_col=None)
    # train_pd = pd.read_csv("./train2.txt", delimiter=" ", header=None, index_col=None)
    # test_pd = pd.read_csv("./val2.txt", delimiter=" ", header=None, index_col=None)
    # model.summary()

    batch_size = 32
    datagen = ImageDataGenerator(rescale=1. / 255)
    train_generator = datagen.flow_from_dataframe(dataframe=train_pd, directory="./images", x_col="id", y_col="label",
                                                  has_ext=True, class_mode="other", target_size=(224, 224),
                                                  batch_size=batch_size)
    valid_generator = datagen.flow_from_dataframe(dataframe=test_pd, directory="./images", x_col="id", y_col="label",
                                                  has_ext=True, class_mode="other", target_size=(224, 224),
                                                  batch_size=batch_size)

    STEP_SIZE_TRAIN = train_generator.n // train_generator.batch_size
    STEP_SIZE_VALID = valid_generator.n // valid_generator.batch_size
    model.fit_generator(generator=train_generator,
                        steps_per_epoch=STEP_SIZE_TRAIN,
                        validation_data=valid_generator,
                        validation_steps=STEP_SIZE_VALID,
                        callbacks=callbacks,
                        epochs=20)

    # we save the model.
    model.save_weights(MODEL_WEIGHTS)
    model.save(MODEL_NAME)


if __name__ == '__main__':
    # freeze_support() here if program needs to be frozen
    main()

arno_v

For me, the solution was to downgrade the h5py package (to 2.10.0 in my case); apparently, reverting only Keras and TensorFlow to the correct versions was not enough.
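In practice, that downgrade is a single pip command (the exact pin that works may differ for your setup):

pip install h5py==2.10.0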


More information on this issue: github.com/tensorflow/tensorflow/issues/44467
From the link alexhg posted: "We will have people working on making TF work with h5py >= 3 in the future, but this will only land in TF 2.5 or later." The problem occurs because TensorFlow cannot work with h5py v3 and later; 2.10.0 is the latest 2.x.y release.
It worked! I was trying to load a Keras model in .h5 format and then save it as a tflite model.
Unfortunately, there is no 2.10.0 cp95 wheel for a 2 GHz quad-core Intel Core i5 processor, so I get an unsupported error, while 3.1.0 has this problem.
This worked for me, thank you so much! A general rule of thumb is to check TensorFlow, Keras, or whatever major library you use, and match it against the other dependencies such as numpy, h5py, opencv, etc. Use common sense and intuition to adjust the versions.
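As a quick way to follow that rule of thumb, a small snippet along these lines (assuming both tensorflow and the standalone keras package are installed; drop whichever you do not use) prints the versions actually present in the failing environment:

import h5py
import numpy
import tensorflow
import keras

# Show the installed versions so they can be compared against known-good combinations.
print("tensorflow:", tensorflow.__version__)
print("keras:", keras.__version__)
print("h5py:", h5py.__version__)
print("numpy:", numpy.__version__)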
Sheetal Mangesh Pandrekar

I downgraded my h5py package with the following command,

pip install 'h5py==2.10.0' --force-reinstall

restarted my ipython kernel, and it worked.


Using this exact command caused an OSError due to a missing RECORD file. But conda install 'h5py==2.10.0' worked.
The key is to restart the kernel.
Ibra

For me, the installed h5py version was newer than the one I had before.
Setting it back to 2.10.0 fixed it.


Codemaker

Downgrade the h5py package with the following command to resolve the issue:

pip install h5py==2.10.0 --force-reinstall

desertnaut

I had the same problem and solved it by passing compile=False to load_model:

model_ = load_model('path to your model.h5', custom_objects={'Scale': Scale()}, compile=False)
sgd = SGD(lr=1e-3, decay=1e-6, momentum=0.9, nesterov=True)
# Pass the SGD instance defined above (not the string 'sgd'), so its settings are actually used.
model_.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])

Why do we need custom_objects={'Scale': Scale()}?
In any case, compile=False gives me this error: File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/saving.py", line 229, in load_model model_config = json.loads(model_config.decode('utf-8')) AttributeError: 'str' object has no attribute 'decode'
I have the same problem, but compile=False makes no difference :(
It did not help.
Augusto Cesar de Camargo

Save in the TF file format instead of h5py: save_format='tf'. In my case:

model.save_weights("NMT_model_weight.tf",save_format='tf')

TypeError: save_weights() got an unexpected keyword argument 'save_format'
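The TypeError in the comment above is what standalone Keras 2.2.x raises, because its save_weights() has no save_format argument; the tf.keras API shipped with TensorFlow 2.x does accept it. A rough sketch of the tf.keras variant (the model and file names are only placeholders):

import tensorflow as tf

# Placeholder model, standing in for whatever model you actually trained.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])

# Weights in the TensorFlow checkpoint format (no h5py involved).
model.save_weights("NMT_model_weight", save_format="tf")
# Whole model in the SavedModel format, which also sidesteps the h5py issue.
model.save("NMT_model_savedmodel", save_format="tf")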
Eric Fournie

This is probably due to the model having been saved from a different version of Keras. I ran into the same problem when loading a model generated by tensorflow.keras (similar, I believe, to keras 2.1.6 for tf 1.12) from keras 2.2.6.

You can load the weights with model.load_weights and then re-save the complete model from the Keras version you want to use.
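A minimal sketch of that workaround applied to the question's model (the re-saved filename is just an example):

from keras.applications.nasnet import NASNetMobile
from keras.layers import Dense
from keras.models import Model

# Rebuild exactly the same architecture that was used for training.
base = NASNetMobile()
model = Model(base.input, Dense(1, activation='linear', kernel_initializer='normal')(base.layers[-2].output))

# Loading only the weights avoids the code path that raises the decode error.
model.load_weights("nasnet_RS2_weights.h5")

# Re-save the complete model from the Keras version you are actually running,
# so load_model can read it back later.
model.save("nasnet_RS2_resaved.h5")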


But it also happens on the same machine I used to train the model. Same error...
Yes, you are right, with model.load_weights I can do that, but I would like to know why I cannot load the whole model architecture.
pcampana

The solution that worked for me was:

pip3 uninstall keras
pip3 uninstall tensorflow
pip3 install --upgrade pip
pip3 install tensorflow
pip3 install keras

Azaria Gebremichael

I was still getting this error with tensorflow==2.4.1, h5py==2.1.0, and Python 3.8 in my environment. What fixed it was downgrading the Python version to 3.6.9.
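If you manage environments with conda, a fresh environment pinned to that interpreter might look like this (the environment name is arbitrary, and the h5py pin follows the downgrade advice earlier in this thread):

conda create -n keras-py36 python=3.6.9
conda activate keras-py36
pip install tensorflow==2.4.1 h5py==2.10.0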