
Does model.compile() initialize all the weights and biases in Keras (tensorflow backend)?

When I start training a model, there is no previously saved model, so I can use model.compile() safely. I have now saved the model to an h5 file via a checkpoint callback for further training.

Say I want to train the model further. I am confused at this point: can I use model.compile() here? And should it be placed before or after the model = load_model() statement? If model.compile() reinitializes all the weights and biases, I should place it before the model = load_model() statement.

After reading some discussions, it seems to me that model.compile() is only needed when no model has been saved previously. Once I have saved the model, there is no need to use model.compile(). Is that true or false? And when I want to predict with the trained model, should I call model.compile() before predicting?


Daniel Möller

When to use?

If you're using compile, surely it must be after load_model(). After all, you need a model to compile. (PS: load_model automatically compiles the model with the optimizer that was saved along with the model)
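A minimal sketch of that workflow, assuming a hypothetical checkpoint file checkpoint.h5 (e.g. written by a ModelCheckpoint callback) and training arrays x_train / y_train that are not shown here:

```python
from tensorflow import keras

# "checkpoint.h5" is a hypothetical path to a previously saved model.
model = keras.models.load_model("checkpoint.h5")

# load_model restored the architecture, the weights, and the saved optimizer,
# so the model is already compiled and training can simply continue:
# model.fit(x_train, y_train, epochs=5)
```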

What does compile do?

Compile defines the loss function, the optimizer and the metrics. That's all.

It has nothing to do with the weights and you can compile a model as many times as you want without causing any problem to pretrained weights.

You need a compiled model to train (because training uses the loss function and the optimizer). But it's not necessary to compile a model for predicting.
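For example, here is a toy model with made-up shapes and random data, showing that predict() works before compile(), while fit() requires it:

```python
import numpy as np
from tensorflow import keras

# Toy model and random data, just to illustrate the rule.
model = keras.Sequential([keras.Input(shape=(4,)), keras.layers.Dense(1)])
x = np.random.rand(8, 4).astype("float32")
y = np.random.rand(8, 1).astype("float32")

preds = model.predict(x)              # works: predicting only needs the weights

model.compile(optimizer="adam", loss="mse")
model.fit(x, y, epochs=1, verbose=0)  # training needs the loss and the optimizer
```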

Do you need to use compile more than once?

Only if:

You want to change one of these:

Loss function

Optimizer / Learning rate

Metrics

The trainable property of some layer

You loaded (or created) a model that is not compiled yet. Or your load/save method didn't consider the previous compilation.
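The second case typically happens when only the weights were saved. A sketch, assuming a hypothetical weights-only file weights.h5 and a matching architecture:

```python
from tensorflow import keras

# Rebuild the architecture and load weights only; this model was never compiled.
model = keras.Sequential([keras.Input(shape=(4,)), keras.layers.Dense(1)])
model.load_weights("weights.h5")   # hypothetical file written by model.save_weights()

# The loss and optimizer were never restored, so compile before training again.
model.compile(optimizer="adam", loss="mse")
```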

Consequences of compiling again:

If you compile a model again, you will lose the optimizer states.

This means that your training will suffer a little at the beginning, until the optimizer re-adjusts its learning rate, momentum estimates, etc. But there is absolutely no damage to the weights (unless, of course, your initial learning rate is so big that the first training step wildly changes the fine-tuned weights).
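A self-contained sketch (toy model, random data, tf.keras API where model.optimizer.iterations tracks the step count) showing both effects: recompiling resets the optimizer state but leaves the weights untouched:

```python
import numpy as np
from tensorflow import keras

model = keras.Sequential([keras.Input(shape=(4,)), keras.layers.Dense(1)])
model.compile(optimizer="adam", loss="mse")
x = np.random.rand(32, 4).astype("float32")
y = np.random.rand(32, 1).astype("float32")
model.fit(x, y, epochs=2, verbose=0)

weights_before = [w.copy() for w in model.get_weights()]
print(int(model.optimizer.iterations))   # > 0: the optimizer has accumulated state

# Recompile, e.g. to lower the learning rate.
model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-4), loss="mse")

print(int(model.optimizer.iterations))   # 0: the optimizer state is gone
unchanged = all((a == b).all() for a, b in zip(weights_before, model.get_weights()))
print(unchanged)                         # True: the weights are exactly the same
```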


Would you lose all optimizer states after a re-compile even when you initially saved the model with include_optimizer=True?
You need to recompile with the same optimizer... but I'm not sure it's possible.
@DanielMöller: Does it affect model.outputs, i.e., what is the difference between the outputs before and after compilation? Also, it would be great if you could explain the use of compile=False in the load_model(filepath, compile=True/False) argument.
Nothing changes. Compiling just sets the "optimizer" and "loss" function for "training", that's all. If you want to load a model and will not train it, you don't need to compile it. 1 - compile=True: the model loads and compiles with the same settings as saved. 2 - compile=False: you load only the model, without the optimizer.
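In code, the difference between the two flags looks roughly like this (checkpoint.h5 is again a hypothetical saved model):

```python
from tensorflow import keras

# compile=True (the default): loss, metrics and optimizer state are restored,
# so the model is ready to keep training.
training_model = keras.models.load_model("checkpoint.h5")

# compile=False: only the architecture and weights are loaded; predict() works,
# but you must call compile() yourself before any further fit().
inference_model = keras.models.load_model("checkpoint.h5", compile=False)
```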
Thank you for reminding me of trainable. But there is no problem at all with setting weights.
nbro

Don't forget that you also need to compile the model after changing the trainable flag of a layer, e.g. when you want to fine-tune a model like this:

1. load the VGG model without the top classifier
2. freeze all the layers (i.e. trainable = False)
3. add some layers to the top
4. compile and train the model on some data
5. un-freeze some of the layers of VGG by setting trainable = True
6. compile the model again (DON'T FORGET THIS STEP!)
7. train the model on some data
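A rough sketch of that recipe with VGG16; the head size, learning rates, and number of un-frozen layers are arbitrary choices for illustration:

```python
from tensorflow import keras

# 1-2. Load VGG16 without the top classifier and freeze it.
base = keras.applications.VGG16(include_top=False, weights="imagenet",
                                input_shape=(224, 224, 3), pooling="avg")
base.trainable = False

# 3. Add a new classifier on top.
outputs = keras.layers.Dense(10, activation="softmax")(base.output)
model = keras.Model(base.input, outputs)

# 4. Compile and train only the new head.
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_data, epochs=5)

# 5. Un-freeze the last few VGG layers.
for layer in base.layers[-4:]:
    layer.trainable = True

# 6. Compile again so the trainable change takes effect (don't forget this step!).
model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-5),
              loss="categorical_crossentropy", metrics=["accuracy"])

# 7. Continue training, now also updating the un-frozen layers.
# model.fit(train_data, epochs=5)
```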


What would be the result of not compiling the model after changing a layer's trainable flag?
@Kake_Fisk The change would not be effective, i.e. the trainability status of the layer would remain as it was before the last compile method invocation.
