
Commit: fix: troubleshooting link

jaluma committed Aug 7, 2024
1 parent 80830a4 commit 6539231
Showing 1 changed file with 2 additions and 2 deletions.

fern/docs/pages/installation/installation.mdx (2 additions, 2 deletions)
@@ -312,7 +312,7 @@ $env:CMAKE_ARGS='-DLLAMA_CUBLAS=on'; poetry run pip install --force-reinstall --

  If your installation was correct, you should see a message similar to the following next
  time you start the server `BLAS = 1`. If there is some issue, please refer to the
- [troubleshooting](#/installation/getting-started/troubleshooting#guide-for-building-llama-cpp-with-cuda-support) section.
+ [troubleshooting](#/installation/getting-started/troubleshooting#building-llama-cpp-with-nvidia-gpu-support) section.

```console
llama_new_context_with_model: total VRAM used: 4857.93 MB (model: 4095.05 MB, context: 762.87 MB)
```

@@ -345,7 +345,7 @@ CMAKE_ARGS='-DLLAMA_CUBLAS=on' poetry run pip install --force-reinstall --no-cac

  If your installation was correct, you should see a message similar to the following next
  time you start the server `BLAS = 1`. If there is some issue, please refer to the
- [troubleshooting](#/installation/getting-started/troubleshooting#guide-for-building-llama-cpp-with-cuda-support) section.
+ [troubleshooting](#/installation/getting-started/troubleshooting#building-llama-cpp-with-nvidia-gpu-support) section.

```
llama_new_context_with_model: total VRAM used: 4857.93 MB (model: 4095.05 MB, context: 762.87 MB)
```
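The docs being edited here tell the reader to confirm GPU support by looking for `BLAS = 1` in the server's startup output. As an illustration only (not part of this commit), here is a minimal Python sketch that checks a captured log for that flag; the sample log text is an assumption modeled on the lines shown in the diff above:

```python
import re

def blas_enabled(log_text: str) -> bool:
    """Return True if a llama.cpp startup log reports `BLAS = 1`."""
    match = re.search(r"BLAS\s*=\s*(\d)", log_text)
    return match is not None and match.group(1) == "1"

# Hypothetical startup log, based on the output shown in the docs diff:
startup_log = (
    "llama_new_context_with_model: total VRAM used: 4857.93 MB "
    "(model: 4095.05 MB, context: 762.87 MB)\n"
    "AVX = 1 | AVX2 = 1 | BLAS = 1 | ..."
)
print(blas_enabled(startup_log))  # True
```

If the check returns `False`, the docs point to the troubleshooting section linked in this commit for rebuilding llama-cpp-python with CUDA support.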
