2024-07-28T02:19:03.072034Z INFO text_generation_launcher: Model supports up to 8192 but tgi will now set its default to 4096 instead. This is to save VRAM by refusing large prompts in order to allow more users on the same hardware. You can increase that size using --max-batch-prefill-tokens=8242 --max-total-tokens=8192 --max-input-tokens=8191.
System Info
Intel Gaudi 2 x 8 server
docker run -p 8080:80 -v $volume:/data --runtime=habana \
  -e PT_HPU_ENABLE_LAZY_COLLECTIVES=true \
  -e HABANA_VISIBLE_DEVICES=all \
  -e OMPI_MCA_btl_vader_single_copy_mechanism=none \
  --cap-add=sys_nice --ipc=host \
  ghcr.io/huggingface/tgi-gaudi:2.0.1 \
  --model-id $model --sharded true --num-shard 8 \
  --max-input-tokens 4096 --max-total-tokens 8192 --max-batch-prefill-tokens 8242
2024-07-28T02:19:03.072034Z INFO text_generation_launcher: Model supports up to 8192 but tgi will now set its default to 4096 instead. This is to save VRAM by refusing large prompts in order to allow more users on the same hardware. You can increase that size using --max-batch-prefill-tokens=8242 --max-total-tokens=8192 --max-input-tokens=8191.

Information
Tasks
Reproduction
docker run -p 8080:80 -v $volume:/data --runtime=habana \
  -e PT_HPU_ENABLE_LAZY_COLLECTIVES=true \
  -e HABANA_VISIBLE_DEVICES=all \
  -e OMPI_MCA_btl_vader_single_copy_mechanism=none \
  --cap-add=sys_nice --ipc=host \
  ghcr.io/huggingface/tgi-gaudi:2.0.1 \
  --model-id $model --sharded true --num-shard 8 \
  --max-input-tokens 4096 --max-total-tokens 8192 --max-batch-prefill-tokens 8242
Expected behavior
No warning. The command already sets --max-input-tokens 4096, --max-total-tokens 8192, and --max-batch-prefill-tokens 8242 explicitly, so the launcher should not report that it is falling back to a 4096 default.
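For reference, if the intent were to use the model's full 8192-token context rather than just silence the message, the values the warning itself suggests could be applied to the same command. A minimal sketch (untested; assumes the same $model and $volume as above, with the flag values taken verbatim from the launcher's log):

docker run -p 8080:80 -v $volume:/data --runtime=habana \
  -e PT_HPU_ENABLE_LAZY_COLLECTIVES=true \
  -e HABANA_VISIBLE_DEVICES=all \
  -e OMPI_MCA_btl_vader_single_copy_mechanism=none \
  --cap-add=sys_nice --ipc=host \
  ghcr.io/huggingface/tgi-gaudi:2.0.1 \
  --model-id $model --sharded true --num-shard 8 \
  --max-input-tokens 8191 --max-total-tokens 8192 --max-batch-prefill-tokens 8242
# Expectation (unverified): with --max-input-tokens raised to the suggested 8191,
# the launcher should no longer report overriding the input length to 4096.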