
How to deploy multiple models on a node with multiple GPUs #165

Open
jjjjohnson opened this issue Sep 14, 2023 · 0 comments
Labels
bug Something isn't working

Comments

@jjjjohnson

Description

Suppose I have 5 GPT models, each with TP=2, and I want to deploy them on a machine with 8 GPUs. Is this possible? If so, how do I control the GPU allocation? I tried setting CUDA_VISIBLE_DEVICES when launching the Triton server, but it does not work.

Reproduced Steps

Tried setting CUDA_VISIBLE_DEVICES when launching the Triton server.
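For context, a minimal sketch of the kind of per-process pinning attempted: one `tritonserver` process per TP=2 model, each restricted to a GPU pair via `CUDA_VISIBLE_DEVICES`. The model-repository paths, ports, and GPU-assignment scheme below are illustrative assumptions, not from this issue; with 5 models and only 8 GPUs, at least two models would have to share a pair. This script only prints the launch commands rather than executing them:

```shell
# Hypothetical sketch (paths/ports are placeholders): map each model index
# to a pair of GPU ids, wrapping around the 8 available GPUs.
gpus_for_model() {
  a=$(( $1 * 2 % 8 ))
  echo "${a},$(( a + 1 ))"
}

# Print one launch command per model; model 4 wraps and shares GPUs 0,1.
for i in 0 1 2 3 4; do
  echo "CUDA_VISIBLE_DEVICES=$(gpus_for_model $i) tritonserver" \
       "--model-repository=/models/model_${i} --http-port=$(( 8000 + i ))"
done
```

Whether Triton actually honors `CUDA_VISIBLE_DEVICES` set this way for TP>1 models is exactly the question raised above.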
@jjjjohnson jjjjohnson added the bug Something isn't working label Sep 14, 2023