Argument passing for TP degree #772
Comments
@VenkateshPasumarti thank you for your feedback. You can use the
Thanks for the reply @dacorvo. Can I know how to run different models in parallel, something like locking those cores to a certain task?
@VenkateshPasumarti you can restrict the number of visible cores by using environment variables, but for that each model must run in a separate process (please refer to the AWS Neuron SDK documentation to see how).
Thanks for the reply and information @dacorvo |
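The per-process core restriction suggested above can be sketched as follows. This is a minimal sketch: the `launch_on_cores` helper is hypothetical, but `NEURON_RT_VISIBLE_CORES` is the Neuron runtime environment variable used to limit which cores a process can see, so two models started in separate processes each get their own cores.

```python
import os
import subprocess
import sys

def launch_on_cores(cores: str, script: str) -> subprocess.CompletedProcess:
    """Run `script` in a separate process restricted to the given NeuronCores.

    The Neuron runtime inside that process will only see the cores listed in
    NEURON_RT_VISIBLE_CORES (e.g. "0-1" for cores 0 and 1).
    """
    env = dict(os.environ, NEURON_RT_VISIBLE_CORES=cores)
    return subprocess.run(
        [sys.executable, "-c", script],
        env=env, capture_output=True, text=True,
    )

# Each model runs in its own process; here a probe script just reports
# which cores the child process was restricted to.
probe = 'import os; print(os.environ["NEURON_RT_VISIBLE_CORES"])'
print(launch_on_cores("0-1", probe).stdout.strip())  # -> 0-1 (model A)
print(launch_on_cores("2-3", probe).stdout.strip())  # -> 2-3 (model B)
```

In each child process you would then load and run one model; because the restriction is per process, the two models cannot contend for the same cores.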
Feature request
I was trying to generate text embeddings for a Mistral-based model using Sentence Transformers, but I am facing a memory issue: the complete model is loaded onto a single NeuronCore, which raises out-of-memory errors, since the Mistral model requires 16 GB and one NeuronCore has 16 GB of memory. So I would like an argument to activate multiple cores when generating with optimum-neuron.
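The memory constraint described above can be checked with rough arithmetic: a checkpoint that nominally matches the per-core capacity still fails once runtime overhead is included, which is why more than one core is needed. The helper below and its 20% overhead factor are illustrative assumptions, not optimum-neuron API.

```python
import math

def min_tp_degree(model_gb: float, core_gb: float = 16.0,
                  overhead: float = 1.2) -> int:
    """Smallest tensor-parallel degree whose combined core memory fits the
    model, with a rough multiplier for runtime/activation overhead
    (the 1.2 factor is an assumption for illustration)."""
    return math.ceil(model_gb * overhead / core_gb)

# A ~16 GB Mistral checkpoint does not fit on one 16 GB NeuronCore once
# overhead is included, so at least two cores are needed.
print(min_tp_degree(16.0))  # -> 2
# A smaller ~7 GB model fits on a single core.
print(min_tp_degree(7.0))   # -> 1
```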
Motivation
I need to activate multiple cores, and also to be able to run two models in parallel on different cores.
Your contribution
I was able to run smaller models, but I am facing issues with larger models.