When I start training a model, there is no model saved previously, so I can use model.compile() safely. I have now saved the model in an h5 file for further training, using a checkpoint.
Say I want to train the model further. I am confused at this point: can I use model.compile() here? And should it be placed before or after the model = load_model() statement? If model.compile() reinitializes all the weights and biases, I should place it before the model = load_model() statement.
After reading some discussions, it seems to me that model.compile() is only needed when I have no model saved previously; once I have saved the model, there is no need to use model.compile(). Is this true or false? And when I want to predict using the trained model, should I use model.compile() before predicting?
When to use?
If you're using compile, surely it must be after load_model(). After all, you need a model to compile. (PS: load_model automatically compiles the model with the optimizer that was saved along with the model.)
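A minimal sketch of that point, assuming TensorFlow/Keras and an illustrative file name "saved.h5": the model saved to h5 carries its compile settings, so the restored model can keep training without any new compile() call.

```python
import numpy as np
from tensorflow import keras

# Build, compile, and save a tiny model; the optimizer and loss
# are stored in the h5 file along with the weights.
model = keras.Sequential([keras.layers.Dense(1, input_shape=(3,))])
model.compile(optimizer="rmsprop", loss="mae")
model.save("saved.h5")

# load_model restores the compile configuration automatically,
# so fit() works immediately -- no compile() needed.
restored = keras.models.load_model("saved.h5")
x = np.random.rand(8, 3).astype("float32")
y = np.random.rand(8, 1).astype("float32")
restored.fit(x, y, epochs=1, verbose=0)
```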
What does compile do?
Compile defines the loss function, the optimizer and the metrics. That's all.
It has nothing to do with the weights and you can compile a model as many times as you want without causing any problem to pretrained weights.
You need a compiled model to train (because training uses the loss function and the optimizer). But it's not necessary to compile a model for predicting.
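To illustrate that prediction works without compiling (a sketch, TensorFlow/Keras assumed): an uncompiled model has weights and can run a forward pass; it only lacks the loss and optimizer that fit() would need.

```python
import numpy as np
from tensorflow import keras

# A freshly built model -- note there is no model.compile() call.
model = keras.Sequential([keras.layers.Dense(2, input_shape=(5,))])

# predict() only runs the forward pass, so it works anyway.
out = model.predict(np.zeros((4, 5), dtype="float32"), verbose=0)
print(out.shape)  # (4, 2)
```

Calling fit() on this model, by contrast, would raise an error complaining that the model has not been compiled.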
Do you need to use compile more than once?
Only if:
You want to change one of these:
Loss function
Optimizer / Learning rate
Metrics
The trainable property of some layer
You loaded (or created) a model that is not compiled yet. Or your load/save method didn't consider the previous compilation.
Consequences of compiling again:
If you compile a model again, you will lose the optimizer states.
This means that your training will suffer a little at the beginning until it adjusts the learning rate, the momentums, etc. But there is absolutely no damage to the weights (unless, of course, your initial learning rate is so big that the first training step wildly changes the fine tuned weights).
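A quick sketch of that claim (TensorFlow/Keras assumed): recompiling swaps the optimizer and discards its state, but leaves every weight bit-for-bit identical.

```python
import numpy as np
from tensorflow import keras

model = keras.Sequential([keras.layers.Dense(1, input_shape=(2,))])
model.compile(optimizer="adam", loss="mse")
before = [w.copy() for w in model.get_weights()]

# Compile again with a different optimizer: optimizer state is reset...
model.compile(optimizer="sgd", loss="mse")
after = model.get_weights()

# ...but the weights themselves are untouched.
assert all(np.array_equal(a, b) for a, b in zip(before, after))
```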
Don't forget that you also need to compile the model after changing the trainable flag of a layer, e.g. when you want to fine-tune a model like this:
1. load VGG model without top classifier
2. freeze all the layers (i.e. trainable = False)
3. add some layers to the top
4. compile and train the model on some data
5. un-freeze some of the layers of VGG by setting trainable = True
6. compile the model again (DON'T FORGET THIS STEP!)
7. train the model on some data
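The steps above can be sketched as follows. To keep the example lightweight and runnable, a tiny Dense model stands in for VGG; for the real thing you would use keras.applications.VGG16(include_top=False, ...) as the base.

```python
import numpy as np
from tensorflow import keras

x = np.random.rand(16, 8).astype("float32")
y = np.random.randint(0, 2, size=(16, 1))

# 1. the "VGG without top" base (stand-in model here)
base = keras.Sequential([keras.layers.Dense(4, input_shape=(8,))], name="base")

# 2. freeze all base layers
base.trainable = False

# 3. add some layers to the top
model = keras.Sequential([base, keras.layers.Dense(1, activation="sigmoid")])

# 4. compile and train the new top
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(x, y, epochs=1, verbose=0)

# 5. un-freeze the base for fine-tuning
base.trainable = True

# 6. compile again -- required for the trainable change to take effect
model.compile(optimizer=keras.optimizers.Adam(1e-5),
              loss="binary_crossentropy")

# 7. fine-tune on some data
model.fit(x, y, epochs=1, verbose=0)
```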
What is the difference between model.outputs after compilation and before compilation? Also, it would be great if you could explain the use of the compile=False argument in load_model(model, compile=False/True).
1 - compile=True: the model will load and compile with the same settings as saved. 2 - compile=False: you will load only the model, without the optimizer.
You need to compile again after changing trainable. But there is no problem at all with setting weights.
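A sketch of the compile argument discussed above (TensorFlow/Keras assumed; "m.h5" is an illustrative file name): compile=False loads only the architecture and weights, which is all you need for prediction.

```python
import numpy as np
from tensorflow import keras

# Save a compiled model to h5.
model = keras.Sequential([keras.layers.Dense(1, input_shape=(3,))])
model.compile(optimizer="adam", loss="mse")
model.save("m.h5")

# compile=False: weights and architecture only, no optimizer restored.
inference_model = keras.models.load_model("m.h5", compile=False)
pred = inference_model.predict(np.zeros((2, 3), dtype="float32"), verbose=0)
print(pred.shape)  # (2, 1)
```

Skipping the compile step this way is a common choice for inference-only deployments, since the optimizer state is dead weight there.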