
How big should batch size and number of epochs be when fitting a model?

My training set has 970 samples and validation set has 243 samples.

How big should batch size and number of epochs be when fitting a model to optimize the val_acc? Is there any sort of rule of thumb to use based on data input size?

I would say this highly depends on your data. If you are just playing around with some simple task, like an XOR classifier, a few hundred epochs with a batch size of 1 are enough to get around 99.9% accuracy. For MNIST I mostly experienced reasonable results with a batch size of around 10 to 100 and fewer than 100 epochs. Without details about your problem, your architecture, your learning rules / cost functions, your data and so on, this cannot be answered accurately.
Is there a way to include all the data in every training epoch?
@kRazzyR Actually, all of the data is considered in every training epoch; it is just split into batches. If you want to include all the data in a single batch, use a batch size equal to the length of the data, as in the sketch below.
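A minimal sketch of that full-batch case, assuming an already compiled Keras model and NumPy arrays x_train / y_train (hypothetical names):

    # every sample goes into one batch per epoch by using the dataset length
    model.fit(x_train, y_train, batch_size=len(x_train), epochs=10)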

Lucas Ramadan

Since you have a pretty small dataset (~ 1000 samples), you would probably be safe using a batch size of 32, which is pretty standard. It won't make a huge difference for your problem unless you're training on hundreds of thousands or millions of observations.

To answer your questions on Batch Size and Epochs:

In general: Larger batch sizes result in faster progress in training, but don't always converge as fast. Smaller batch sizes train slower, but can converge faster. It's definitely problem dependent.

In general, the models improve with more epochs of training, to a point. They'll start to plateau in accuracy as they converge. Try something like 50 and plot number of epochs (x axis) vs. accuracy (y axis). You'll see where it levels out.
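A minimal sketch of that plot, using the History object returned by model.fit (the compiled model and the x_train / y_train arrays are assumed to exist already):

    import matplotlib.pyplot as plt

    # train for ~50 epochs and record the per-epoch metrics
    history = model.fit(x_train, y_train,
                        validation_split=0.2,
                        batch_size=32, epochs=50)

    # plot number of epochs (x axis) vs. accuracy (y axis) to spot the plateau
    # (older Keras versions use the keys 'acc' / 'val_acc' instead of 'accuracy')
    plt.plot(history.history['accuracy'], label='train')
    plt.plot(history.history['val_accuracy'], label='validation')
    plt.xlabel('epoch')
    plt.ylabel('accuracy')
    plt.legend()
    plt.show()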

What is the type and/or shape of your data? Are these images, or just tabular data? This is an important detail.


The batch size should pretty much be as large as possible without exceeding memory. The only other reason to limit the batch size is that if you fetch the next batch concurrently while training on the current batch, you may waste time fetching it (because it is so large and memory allocation can take a significant amount of time) after the model has already finished fitting the current batch. In that case it might be better to fetch batches more quickly to reduce model downtime.
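One common way to overlap batch preparation with training is a prefetching input pipeline; a minimal sketch with tf.data (the batch and buffer sizes are arbitrary, and older TensorFlow versions spell the constant tf.data.experimental.AUTOTUNE):

    import tensorflow as tf

    # prepare the next batch on the CPU while the current batch is training
    dataset = (tf.data.Dataset.from_tensor_slices((x_train, y_train))
               .shuffle(1024)
               .batch(256)
               .prefetch(tf.data.AUTOTUNE))

    model.fit(dataset, epochs=20)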
I often see values for batch size which are a multiple of 8. Is there a formal reason for this choice?
Does training for more epochs result in overfitting? Does having more data and fewer epochs result in underfitting?
@Peter. This may be helpful stackoverflow.com/questions/44483233/….
georgeawg

Great answers above. Everyone gave good inputs.

Ideally, this is the sequence of the batch sizes that should be used:

{1, 2, 4, 8, 16} - slow 

{[32, 64], [128, 256]} - good starters

[32, 64] - CPU

[128, 256] - GPU for more boost
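A hedged sketch of simply trying the "good starter" values and comparing validation accuracy (build_model here is an assumed helper that returns a freshly compiled model with an accuracy metric; the data arrays are placeholders):

    # try each candidate batch size and record the best validation accuracy
    results = {}
    for bs in [32, 64, 128, 256]:
        m = build_model()
        h = m.fit(x_train, y_train,
                  validation_data=(x_val, y_val),
                  batch_size=bs, epochs=20, verbose=0)
        results[bs] = max(h.history['val_accuracy'])
    print(results)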

For me, these values were very bad. I ended up using a batch size of 3000 for my model, which is way more than what you proposed here.
Hmm, is there any source for stating this as a given fact?
Here's a cited source using these batch sizes on a CNN model. Hope it is of good use to you. ~Cheers arxiv.org/pdf/1606.02228.pdf#page=3&zoom=150,0,125
This seems to be a gross oversimplification. Batch size generally depends on the per-item complexity of your input set as well as the amount of memory you're working with. In my experience, I get the best results by gradually scaling my batch size: I start with 1 and double the batch size every n hours of training, with n depending on the complexity or size of the dataset, until I reach the memory limits of my machine. Then I continue to train on the largest batch size possible for as long as possible.
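A rough sketch of that schedule (the time budget per stage and the maximum batch size are arbitrary assumptions, and the model / data names are placeholders):

    import time

    batch_size = 1
    max_batch_size = 2048        # assumed memory limit of the machine
    hours_per_stage = 2          # the "n hours" from the comment, chosen arbitrarily

    while batch_size < max_batch_size:
        start = time.time()
        # keep training at the current batch size until the stage's time budget runs out
        while time.time() - start < hours_per_stage * 3600:
            model.fit(x_train, y_train, batch_size=batch_size, epochs=1, verbose=0)
        batch_size *= 2

    # then continue on the largest batch size that fits, for as long as possible
    model.fit(x_train, y_train, batch_size=max_batch_size, epochs=50)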
tauseef_CuriousGuy

I use Keras to perform non-linear regression on speech data. Each of my speech files gives me features stored as 25000 rows in a text file, with each row containing 257 real-valued numbers. I use a batch size of 100 and 50 epochs to train a Sequential model in Keras with 1 hidden layer. After 50 epochs of training it converges quite well to a low val_loss.
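A minimal sketch of that kind of setup; only the 257 input features, the single hidden layer, batch size 100 and 50 epochs come from the answer, while the layer width, output size and loss are assumptions:

    from keras.models import Sequential
    from keras.layers import Dense

    # one hidden layer, 257 real-valued features per row, linear output for regression
    model = Sequential([
        Dense(128, activation='relu', input_shape=(257,)),
        Dense(1)
    ])
    model.compile(optimizer='adam', loss='mse')
    model.fit(x_train, y_train,
              validation_data=(x_val, y_val),
              batch_size=100, epochs=50)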


Devrath Mohanty

I used Keras to perform non-linear regression for market mix modelling. I got the best results with a batch size of 32 and epochs = 100 while training a Sequential model in Keras with 3 hidden layers. Generally a batch size of 32 or 25 is good, with epochs = 100, unless you have a large dataset. In the case of a large dataset you can go with a batch size of 10 and epochs between 50 and 100. Again, the above-mentioned figures have worked fine for me.


The value for the batch size should preferably be a power of 2. stackoverflow.com/questions/44483233/…
"For large dataset, batch size of 10...", isn't the understanding correct that more the batch size, better it is, as gradients are averaged over a batch
alexanderdavide

tf.keras.callbacks.EarlyStopping

With Keras you can make use of tf.keras.callbacks.EarlyStopping, which automatically stops training once the monitored loss has stopped improving. You can allow a number of epochs with no improvement using the patience parameter.

It helps you find the plateau, from which you can go on refining the number of epochs, or it may even suffice to reach your goal without having to deal with the number of epochs at all.
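A minimal sketch (the monitored metric, the patience value and the training arguments are illustrative):

    import tensorflow as tf

    early_stop = tf.keras.callbacks.EarlyStopping(
        monitor='val_loss',          # stop once the validation loss stops improving
        patience=5,                  # allow 5 epochs without improvement
        restore_best_weights=True)

    # set epochs generously high; training halts early once the loss plateaus
    model.fit(x_train, y_train,
              validation_data=(x_val, y_val),
              batch_size=32, epochs=500,
              callbacks=[early_stop])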


I concur with @alexanderdavide. The early stopping callback should always be used; then one doesn't have to deal with the number of epochs at all.
Vojtech Stas

https://i.stack.imgur.com/qCTxz.png

In this article it is said:

Stochastic means 1 sample, minibatch means a batch of a few samples, and batch means the full training dataset. This I found here.

PROS of a smaller batch: faster training, less RAM needed

CONS: the smaller the batch, the less accurate the estimate of the gradient will be

In this paper, they tried batch sizes of 256, 512 and 1024, and the performance of all models was within one standard deviation of each other. This means that the batch size didn't have any significant influence on performance.

Final word:

If you have a problem with RAM = decrease batch size

If you need to compute faster = decrease batch size

If the performance decreased after using a smaller batch = increase batch size

If you find this post useful, please upvote and comment. I took the time to share it with you. Thanks.


J R

From one study, a rule of thumb is that batch size and learning rate are highly correlated when it comes to achieving good performance.

A high learning rate in the study below means 0.001; a small learning rate means 0.0001.

In my case, I usually use a high batch size of 1024 to 2048 for a dataset of a million records, for example, with the learning rate at 0.001 (the default of the Adam optimizer). However, I also use a cyclical learning rate scheduler which changes this value during fitting, which is another topic.

from the study:

'In this paper, we compared the performance of CNN using different batch sizes and different learning rates. According to our results, we can conclude that the learning rate and the batch size have a significant impact on the performance of the network. There is a high correlation between the learning rate and the batch size, when the learning rates are high, the large batch size performs better than with small learning rates. We recommend choosing small batch size with low learning rate. In practical terms, to determine the optimum batch size, we recommend trying smaller batch sizes first(usually 32 or 64), also keeping in mind that small batch sizes require small learning rates. The number of batch sizes should be a power of 2 to take full advantage of the GPUs processing. Subsequently, it is possible to increase the batch size value till satisfactory results are obtained.' - https://www.sciencedirect.com/science/article/pii/S2405959519303455
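A hedged sketch combining that setup (a large batch size with Adam at 0.001) with a simple hand-rolled triangular cyclical schedule; this is not any particular library's implementation, and all the concrete values are illustrative:

    import tensorflow as tf

    def triangular_lr(epoch, lr):
        # cycle the learning rate between base_lr and max_lr every `cycle` epochs
        base_lr, max_lr, cycle = 1e-4, 1e-3, 10
        pos = epoch % cycle
        half = cycle / 2
        scale = pos / half if pos < half else (cycle - pos) / half
        return base_lr + (max_lr - base_lr) * scale

    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
                  loss='binary_crossentropy', metrics=['accuracy'])

    model.fit(x_train, y_train,
              batch_size=1024,   # large batch for a dataset on the order of a million records
              epochs=100,
              callbacks=[tf.keras.callbacks.LearningRateScheduler(triangular_lr)])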


devforfu

The number of epochs is up to you, depending on when the validation loss stops improving further. The batch size can be estimated like this:


# Use this function to find a batch size for training the model
# (a rough heuristic based on model size, GPU availability and free memory)

    def FindBatchSize(model):
        """model: model architecture that is yet to be trained"""
        import os, gc, psutil
        from keras import backend as K
        BatchFound = 16

        try:
            total_params = int(model.count_params())
            GCPU = "CPU"
            # find whether a GPU is available
            try:
                if K.tensorflow_backend._get_available_gpus() == []:
                    GCPU = "CPU"
                else:
                    GCPU = "GPU"
            except Exception:
                # fall back to TensorFlow's device list on older setups
                from tensorflow.python.client import device_lib
                gpus = [x.name for x in device_lib.list_local_devices()
                        if x.device_type == 'GPU']
                GCPU = "GPU" if gpus else "CPU"

            # decide the batch size based on GPU availability and model complexity
            if GCPU == "GPU" and os.cpu_count() > 15 and total_params < 1000000:
                BatchFound = 64
            if os.cpu_count() < 16 and total_params < 500000:
                BatchFound = 64
            if GCPU == "GPU" and os.cpu_count() > 15 and 1000000 <= total_params < 2000000:
                BatchFound = 32
            if GCPU == "GPU" and os.cpu_count() > 15 and 2000000 <= total_params < 10000000:
                BatchFound = 16
            if GCPU == "GPU" and os.cpu_count() > 15 and total_params >= 10000000:
                BatchFound = 8
            if os.cpu_count() < 16 and total_params > 5000000:
                BatchFound = 8
            if total_params > 100000000:
                BatchFound = 1

            # shrink the batch size further if system memory is already heavily used
            memory_used = psutil.virtual_memory().percent
            if memory_used > 75.0:
                BatchFound = 8
            if memory_used > 85.0:
                BatchFound = 4
            if memory_used > 90.0:
                BatchFound = 2
            if total_params > 100000000:
                BatchFound = 1

            print("Batch Size:  " + str(BatchFound))
            gc.collect()
        except Exception:
            pass

        return BatchFound

big ouch ......
