
Commit

Update LLM pipeline to support GPU parameter with llama.cpp backend, closes #724
davidmezzetti committed May 28, 2024
1 parent 468ed7a commit b3a99f0
Showing 1 changed file with 3 additions and 0 deletions.
src/python/txtai/pipeline/llm/llama.py
@@ -45,6 +45,9 @@ def __init__(self, path, template=None, **kwargs):
         # Check if this is a local path, otherwise download from the HF Hub
         path = path if os.path.exists(path) else self.download(path)

+        # Default GPU layers if not already set
+        kwargs["n_gpu_layers"] = kwargs.get("n_gpu_layers", -1 if kwargs.get("gpu", True) else 0)
+
         # Create llama.cpp instance
         self.llm = Llama(path, verbose=kwargs.pop("verbose", False), **kwargs)
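For context, here is a minimal usage sketch (not part of the commit) showing the new parameter. It assumes the llama.cpp backend is selected via a GGUF model path; the model path below is a placeholder. With this change, gpu=False maps to n_gpu_layers=0 (CPU only), the default gpu=True maps to n_gpu_layers=-1 (offload all layers), and an explicitly passed n_gpu_layers is left untouched.

from txtai.pipeline import LLM

# Placeholder GGUF path - a .gguf file selects the llama.cpp backend
path = "TheBloke/Mistral-7B-OpenOrca-GGUF/mistral-7b-openorca.Q4_K_M.gguf"

# gpu=False -> n_gpu_layers defaults to 0 (run fully on CPU)
llm = LLM(path, gpu=False)

# Default (gpu=True) -> n_gpu_layers defaults to -1 (offload all layers)
llm = LLM(path)

# An explicit n_gpu_layers is kept as-is, regardless of the gpu flag
llm = LLM(path, n_gpu_layers=16)

print(llm("Where is one place to visit in Washington, DC?"))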
