Model link for Llama-3-instruct 70B is wrong #1369
Comments
@slobentanzer Thanks for reporting. This will be fixed by PR #1370 for the next release.
This issue is stale because it has been open for 7 days with no activity.
This issue was closed because it has been inactive for 5 days since being marked as stale.
Describe the bug
Trying to use the built-in Llama-3-instruct 70B (gguf) from the Python client (.launch_model()) fails with this traceback (excerpt):

I guess it should be /Meta-Llama-3-70B-Instruct-Q4_K_M.gguf in the file path, right?
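The expected path above follows the common gguf filename layout. A minimal sketch of that pattern (the helper name and the exact template are my assumptions, inferred only from the path quoted in this report; this is not part of xinference):

```python
# Hypothetical helper illustrating the gguf filename pattern implied by the
# path above: <base>-<size>B-Instruct-<quantization>.gguf
def gguf_filename(base: str, size_in_billions: int, quantization: str) -> str:
    return f"{base}-{size_in_billions}B-Instruct-{quantization}.gguf"

print(gguf_filename("Meta-Llama-3", 70, "Q4_K_M"))
# Meta-Llama-3-70B-Instruct-Q4_K_M.gguf
```

If the registry points at a file that does not match this pattern, the launch would fail with a file-not-found style error, which would be consistent with the traceback described here.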
I did (xinference 0.3.10):
I don't think other details matter for this issue, but feel free to correct me on that. :)