
How to run inference on multiple GPUs on a single machine? #159

Open
tiga-dudu opened this issue Jan 21, 2025 · 0 comments

Comments

@tiga-dudu

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 1.55 GiB (GPU 1; 23.64 GiB total capacity; 21.21 GiB already allocated; 120.25 MiB free; 21.39 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
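A minimal way to act on the trace's own max_split_size_mb hint is sketched below; the 128 value is an arbitrary example, and this setting only mitigates fragmentation of reserved-but-unallocated memory, it cannot make an over-sized model fit:

```python
# Sketch: follow the error's own hint about allocator fragmentation.
# PYTORCH_CUDA_ALLOC_CONF is read when the CUDA caching allocator initializes,
# so it must be set before the first tensor is placed on the GPU.
import os
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"  # 128 is an example value

import torch  # first CUDA use must come after the env var is set
```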

Even after changing model = model.to(gpu) to model = model.half().to(gpu), it still runs out of memory.

Setting export CUDA_VISIBLE_DEVICES=0,1,2,3 doesn't help either.
Can someone help me?
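CUDA_VISIBLE_DEVICES only controls which GPUs the process can see; it does not split one model across them. Below is a minimal sketch of sharding the weights over all four visible cards, assuming the checkpoint loads through Hugging Face transformers with accelerate installed; the repository's actual loading code may differ, and the model path is a placeholder:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "path/to/checkpoint"  # hypothetical placeholder

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,  # half precision, as attempted above
    device_map="auto",          # shard layers across GPUs 0-3 instead of filling one card
)

prompt = "Hello"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

With device_map="auto", accelerate spreads the layers over the GPUs exposed by CUDA_VISIBLE_DEVICES and moves activations between them during generation, so a model that overflows a single 24 GiB card can still run if its fp16 weights fit in the combined memory.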
