
docs(windows): update quickstart section #3563

Merged · 1 commit merged into TabbyML:main on Dec 14, 2024
Conversation

@aand18 (Contributor) commented on Dec 14, 2024:

The command must be run as Administrator, or it fails:

C:\tabby_x86_64-windows-msvc-cuda122>.\tabby.exe serve --model Qwen2.5-Coder-1.5B --chat-model Qwen2-1.5B-Instruct --device cuda
⠹ 0.162 s   Starting...
2024-12-14T02:45:52.034764Z  WARN llama_cpp_server::supervisor: crates\llama-cpp-server\src\supervisor.rs:98: llama-server <embedding> exited with status code -1073741515, args: `Command { std: "C:\\tabby_x86_64-windows-msvc-cuda122\\llama-server.exe" "-m" "C:\\Users\\user1\\.tabby\\models\\TabbyML\\Nomic-Embed-Text\\ggml\\model-00001-of-00001.gguf" "--cont-batching" "--port" "30888" "-np" "1" "--log-disable" "--ctx-size" "4096" "-ngl" "9999" "--embedding" "--ubatch-size" "4096", kill_on_drop: true }`
1.211 s   Starting...
2024-12-14T02:45:53.044967Z  WARN llama_cpp_server::supervisor: crates\llama-cpp-server\src\supervisor.rs:98: llama-server <embedding> exited with status code -1073741515 (same args as above)
2.178 s   Starting...
2024-12-14T02:45:54.061157Z  WARN llama_cpp_server::supervisor: crates\llama-cpp-server\src\supervisor.rs:98: llama-server <embedding> exited with status code -1073741515 (same args as above)
...
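For reference, -1073741515 is the signed rendering of the NTSTATUS value 0xC0000135 (STATUS_DLL_NOT_FOUND), which Windows reports when a process fails to start because a required DLL could not be located. A quick PowerShell sketch of the conversion (the exit code is the only input; nothing here is Tabby-specific):

# Format the signed 32-bit exit code as an unsigned NTSTATUS value.
# -1073741515 -> 0xC0000135, i.e. STATUS_DLL_NOT_FOUND.
'0x{0:X8}' -f -1073741515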

The server keeps retrying with the same warning; after 30+ seconds:

33.506 s   Starting...
2024-12-14T02:18:18.280545Z  WARN llama_cpp_server::supervisor: crates\llama-cpp-server\src\supervisor.rs:98: llama-server <embedding> exited with status code -1073741515 (same args as above)
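If elevation is the fix, a minimal sketch of launching the same quickstart command as Administrator from PowerShell (Start-Process -Verb RunAs triggers the UAC consent prompt; the path and model names are the ones from the command above):

# Launch tabby.exe elevated; -Verb RunAs requests Administrator rights via UAC.
Start-Process -FilePath "C:\tabby_x86_64-windows-msvc-cuda122\tabby.exe" `
    -ArgumentList "serve", "--model", "Qwen2.5-Coder-1.5B", "--chat-model", "Qwen2-1.5B-Instruct", "--device", "cuda" `
    -Verb RunAs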
@aand18 changed the title from "Update run instructions for Windows" to "docs(windows): update quickstart section" on Dec 14, 2024
@aand18 marked this pull request as draft on December 14, 2024
@aand18 marked this pull request as ready for review on December 14, 2024
@wsxiaoys merged commit 683f630 into TabbyML:main on Dec 14, 2024
1 check failed