Hi,

The computation always runs on the first graphics card (cuda:0). Is there any way to choose which card is used?

Besides, I get an error when simulating wavs inside a PyTorch DataLoader with multiple worker sub-processes (num_workers > 0): the processing breaks and returns an initialization error from gpuRIR. Has anyone else noticed this problem and knows how to solve it?

Thanks.

Best regards,
Nelson
At the moment, the library doesn't include an option to choose the GPU. It would be a nice feature to add in the future, but I have neither the time to implement it right now nor a multi-GPU machine to test it on.

About the PyTorch DataLoader with multiple sub-processes: I haven't used gpuRIR in that context, but multiple sub-processes are typically used when the DataLoader runs on the CPU, so you can generate your batch on the CPU while the neural network runs on the GPU. Could you be running out of GPU memory?
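(Not part of gpuRIR itself, but a common workaround for the GPU-selection part: restricting which devices the CUDA runtime can see with the standard CUDA_VISIBLE_DEVICES environment variable makes the chosen physical card appear as cuda:0 to the process. A minimal sketch, assuming the variable is set before anything initializes CUDA:)

```python
import os

# Expose only physical GPU 1 to this process; it will then show up as cuda:0.
# This must run before PyTorch, gpuRIR, or anything else touches the CUDA runtime.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"

import torch
import gpuRIR  # noqa: E402  (imported after setting the variable on purpose)

print(torch.cuda.device_count())  # only the selected card is visible
```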
I also encountered the DataLoader issue. It seems to be caused by the default start method of torch.multiprocessing.

The CUDA runtime does not support the fork start method, so use torch.multiprocessing.set_start_method() to switch from fork to spawn or forkserver. See https://pytorch.org/docs/stable/notes/multiprocessing.html?highlight=set_start_method
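A minimal sketch of that fix (the dataset below is a placeholder, not a real gpuRIR call):

```python
import torch
import torch.multiprocessing as mp
from torch.utils.data import Dataset, DataLoader


class SimulatedRIRDataset(Dataset):
    """Placeholder dataset: each worker would call gpuRIR inside __getitem__."""

    def __len__(self):
        return 8

    def __getitem__(self, idx):
        # A gpuRIR.simulateRIR(...) call would go here; a dummy tensor stands in for the RIR.
        return torch.zeros(16000)


if __name__ == "__main__":
    # The CUDA runtime cannot be used in a forked subprocess, so switch the
    # worker start method to 'spawn' (or 'forkserver') before creating the DataLoader.
    mp.set_start_method("spawn", force=True)

    loader = DataLoader(SimulatedRIRDataset(), batch_size=4, num_workers=2)
    for batch in loader:
        print(batch.shape)  # torch.Size([4, 16000])
```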