Issues: meta-llama/llama-stack

#365 Llama-guard and remote::vllm model name mismatch (opened Nov 4, 2024 by stevegrubb)
#361 llama-stack-client: command not found (opened Nov 3, 2024 by alexhegit)
#357 LlamaGuard, routing, and vllm (opened Nov 1, 2024 by stevegrubb)
#350 Run ollama GPU distribution failed (opened Oct 31, 2024 by alexhegit)
#344 vLLM can't find model from llama download (opened Oct 29, 2024 by stevegrubb)
#337 High GPU power consumption even in standby (opened Oct 28, 2024 by JoseGuilherme1904)
#332 Ollama inference issue: llama3.2 not registered (opened Oct 27, 2024 by akhtet)
#320 Support AMD ROCm GPU distribution (opened Oct 25, 2024 by AlexHe99)
#273 I keep getting 405 forbidden (opened Oct 21, 2024 by whiteSkar)
#257 pytorch CUDA not found in host that has CUDA with working pytorch (opened Oct 16, 2024 by nikolaydubina; labels: question, further information is requested)