Add Llama to model test matrix #703
Conversation
@@ -30,6 +30,9 @@ jobs:
          pip install pytest
          pip install -e .[test]
          if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
+     - name: Install model-specific dependencies
+       run: |
+         pip install llama-cpp-python
Is this the bit which needs special care to get CUDA enabled, @Harsha-Nori?
Although that wouldn't apply to this particular workflow file, it will once #694 gets merged and this is updated.
Yeah, we'll want to figure out the test hardware configuration and then set the right flags accordingly: https://github.com/abetlen/llama-cpp-python?tab=readme-ov-file#installation-configuration
E.g. with CUDA enabled, we'll want to do:
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
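For reference, a sketch of what the workflow step might look like with a CUDA build — the step name and the assumption that the runner has the CUDA toolkit installed are hypothetical, not part of this PR:

```yaml
      # Hypothetical CUDA-enabled variant of the new install step.
      # Assumes the runner image already provides the CUDA toolkit;
      # without it, the cuBLAS build of llama-cpp-python will fail.
      - name: Install model-specific dependencies
        run: |
          CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
```

The `CMAKE_ARGS` environment variable is passed through to the package's CMake build, so the flag only takes effect when the wheel is built from source rather than installed from a prebuilt binary.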
Are you OK with this, @Harsha-Nori?
LGTM
Working to get Llama models into the test matrix.