From d47ff6dd4b007ea7419cf564b7a5941b3439284e Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Alex=20Gr=C3=B6nholm?=
Date: Mon, 16 Dec 2024 18:00:05 +0200
Subject: [PATCH] Updated ROCm installation instructions

The updated installation instructions allow the GPU to be used,
as per the upstream instructions.
---
 README.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
index e00456580..fa5b2b08b 100644
--- a/README.md
+++ b/README.md
@@ -173,12 +173,12 @@ pip install llama-cpp-python \
-hipBLAS (ROCm)
+HIP (ROCm)
 
-To install with hipBLAS / ROCm support for AMD cards, set the `GGML_HIPBLAS=on` environment variable before installing:
+To install with HIP / ROCm support for AMD cards, set the `GGML_HIP=on` environment variable before installing:
 
 ```bash
-CMAKE_ARGS="-DGGML_HIPBLAS=on" pip install llama-cpp-python
+CMAKE_ARGS="-DGGML_HIP=on" pip install llama-cpp-python
 ```