
How does the "number of workers" parameter in PyTorch dataloader actually work?

If num_workers is 2, does that mean it will put 2 batches in RAM and send 1 of them to the GPU, or does it put 3 batches in RAM and then send 1 of them to the GPU? What actually happens when the number of workers is higher than the number of CPU cores? I tried it and it worked fine, but how does it work? (I thought the maximum number of workers I could choose was the number of cores.) If I set num_workers to 3 and during training there are no batches in memory for the GPU, does the main process wait for its workers to read the batches, or does it read a single batch itself (without waiting for the workers)?

might be of interest: discuss.pytorch.org/t/…

Shihab Shahriar Khan

When num_workers > 0, only the workers retrieve data; the main process won't. So with num_workers=2 you have at most 2 workers simultaneously putting data into RAM, not 3.

A CPU can usually run on the order of 100 processes without trouble, and these worker processes aren't special in any way, so having more workers than CPU cores is fine. But is it efficient? That depends on how busy your CPU cores are with other tasks, the speed of your CPU, the speed of your hard disk, etc. In short, it's complicated, so setting the number of workers to the number of cores is a good rule of thumb, nothing more.

Nope. Remember that the DataLoader doesn't just return whatever happens to be available in RAM right now; it uses a batch_sampler to decide which batch to return next. Each batch is assigned to a worker, and the main process waits until the desired batch has been retrieved by its assigned worker.
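The ordering guarantee can be sketched with a toy dataset (the RangeDataset class and the sizes here are made up for illustration): even with two workers prefetching concurrently, batches are yielded in the order the sampler fixed, not in whatever order the workers finish.

```python
import torch
from torch.utils.data import DataLoader, Dataset

class RangeDataset(Dataset):
    """Toy dataset whose items are just their own indices."""
    def __len__(self):
        return 8
    def __getitem__(self, idx):
        return idx

# Two worker processes prefetch batches in parallel, but the main
# process still yields them in sequential sampler order.
loader = DataLoader(RangeDataset(), batch_size=2, num_workers=2)
batches = [b.tolist() for b in loader]
print(batches)  # [[0, 1], [2, 3], [4, 5], [6, 7]]
```

Raising num_workers changes how far ahead batches are prefetched into RAM, but never the order in which they come out of the loader.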

Lastly to clarify, it isn't DataLoader's job to send anything directly to GPU, you explicitly call cuda() for that.
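A minimal sketch of that division of labor (the tensor shapes and the fake_batches list are made up, standing in for what a DataLoader would yield): the loader hands you CPU tensors, and the transfer to the GPU is a separate, explicit step in your training loop.

```python
import torch

# Pick the GPU if one is available; otherwise stay on the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Stand-in for batches coming out of a DataLoader (always on the CPU).
fake_batches = [torch.randn(4, 3) for _ in range(2)]

for batch in fake_batches:
    batch = batch.to(device)  # one explicit transfer per batch
    # ... forward / backward pass would go here ...

print(batch.device.type)
```

Doing the transfer once per batch in the loop, rather than per sample inside the Dataset, keeps the number of host-to-device copies small.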

EDIT: Don't call cuda() inside the Dataset's __getitem__() method; see @psarka's comment below for the reasoning.


Just a remark on the last sentence: it is probably not a good idea to call .cuda() in the Dataset object, as that moves each sample (rather than the whole batch) to the GPU separately, incurring a lot of overhead.
I also want to add that setting the number of workers higher than 0 on Windows might lead to errors (cf. discuss.pytorch.org/t/…).