
[Feature] Support for Inference with LoRA Adapter #847

Closed
kamillle opened this issue Jul 31, 2024 · 2 comments

Motivation

By serving multiple LoRA adapters, a single inference server can exhibit a variety of behaviors. This can reduce the number of servers needed for deployment, leading to cost savings. From a training perspective, since there is no need to fine-tune the entire model, we can iterate through experimental cycles more quickly.
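For context on the training-side claim: LoRA freezes the base weight matrix W and trains only a low-rank update ΔW = B·A, so each adapted layer trains r·(d_in + d_out) parameters instead of d_in·d_out. A rough back-of-the-envelope comparison (the dimensions below are illustrative, not tied to any specific model):

```python
# Illustrative parameter count for one 4096x4096 linear layer with rank-16 LoRA.
d_in, d_out, r = 4096, 4096, 16

full_finetune = d_in * d_out      # train all of W: 16,777,216 params
lora_update = r * (d_in + d_out)  # train only A (r x d_in) and B (d_out x r): 131,072 params

print(f"LoRA trains ~{100 * lora_update / full_finetune:.1f}% of the layer's parameters")
# -> LoRA trains ~0.8% of the layer's parameters
```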

Related resources

vLLM: https://github.com/vllm-project/vllm
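For reference, a minimal sketch of the requested serving pattern using the vLLM engine cited above: the base model is loaded once with LoRA enabled, and a per-request LoRARequest routes each prompt through a different adapter. The model name and adapter paths are placeholders; the feature request asks for an equivalent capability in this project.

```python
from vllm import LLM, SamplingParams
from vllm.lora.request import LoRARequest

# Load the shared base model once, with LoRA support enabled.
llm = LLM(model="meta-llama/Llama-2-7b-hf", enable_lora=True)
params = SamplingParams(temperature=0.0, max_tokens=64)

# Each request can name a different adapter; the base weights stay shared,
# so one server provides several fine-tuned behaviors.
sql_out = llm.generate(
    ["Write a SQL query that counts users by country."],
    params,
    lora_request=LoRARequest("sql-lora", 1, "/path/to/sql-adapter"),
)
summary_out = llm.generate(
    ["Summarize this support ticket: ..."],
    params,
    lora_request=LoRARequest("summarize-lora", 2, "/path/to/summarize-adapter"),
)
```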

zhyncs (Member) commented Jul 31, 2024

Hi @kamillle, thank you for your attention and valuable suggestions. Support for LoRA is on our roadmap (#634); please stay tuned.

kamillle (Author)

@zhyncs I'm looking forward to it! Thank you.
