
Updated to allow the selection of GPU for embedding where there is more than one available #1734

Open
sgresham wants to merge 2 commits into main
Conversation

sgresham (Contributor)

Updated to allow the selection of GPU for embedding where there is more than one available. Defaults to cuda[0] or cpu if cuda is not available. Commented reference in settings.yaml under embedding.
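
In rough terms, the behaviour described above amounts to something like the following sketch (the function name and parameter are illustrative, not the exact code in the diff):

import torch

def select_embedding_device(gpu: str | None = None) -> torch.device:
    # Use the explicitly configured device (e.g. "cuda:1") when given,
    # otherwise default to cuda:0 if CUDA is available, else fall back to CPU.
    if gpu is not None:
        return torch.device(gpu)
    if torch.cuda.is_available():
        return torch.device("cuda:0")
    return torch.device("cpu")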

@imartinez (Collaborator) left a comment

I'm not so sure about this one. May be too specific to Nvidia setups.

@@ -7,7 +7,7 @@
from private_gpt.settings.settings import Settings

logger = logging.getLogger(__name__)

import torch
Collaborator

I'd move this to the try block within the "huggingface" case. There is no general "torch" dependency declared in pyproject.toml, so this could break execution entirely for people not using huggingface. Actually, we may need to add torch to embeddings-huggingface = ["llama-index-embeddings-huggingface"] as

# Optional Huggingface related dependency
torch = {version = "^2.2.1", optional = true}

embeddings-huggingface = ["torch", "llama-index-embeddings-huggingface"] 

in pyproject.toml.

I think the huggingface package from llama-index already depends on torch, but given we are now importing it explicitly, we should also declare the dependency ourselves.
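
For reference, a minimal sketch of the lazy-import approach suggested here (the helper name is illustrative, and the error message mirrors the existing one):

def load_torch_device() -> "torch.device":
    # Import torch lazily so users without the embeddings-huggingface
    # extra are not affected by the missing optional dependency.
    try:
        import torch
    except ImportError as e:
        raise ImportError(
            "Local dependencies not found, install with `poetry install --extras embeddings-huggingface`"
        ) from e
    # Default to the first CUDA device, falling back to CPU.
    return torch.device("cuda:0" if torch.cuda.is_available() else "cpu")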

device = torch.device("cuda:0")
else:
# If CUDA is not available, use CPU
device = torch.device("cpu")
Collaborator

What happens with laptops using a GPU that is not Nvidia based? For example, a MacBook running a Metal GPU? Will this force them onto the CPU and make embedding slower?

Contributor

This logic looks similar to llama_index.core.utils.infer_torch_device, which handles Metal (mps).
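
For comparison, a device-inference sketch that also covers Metal (the function name here is illustrative; the llama-index helper may differ in detail):

import torch

def infer_device() -> str:
    # Prefer CUDA, then Apple Metal (mps), then fall back to CPU.
    if torch.cuda.is_available():
        return "cuda"
    if torch.backends.mps.is_available():
        return "mps"
    return "cpu"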

settings.yaml (Outdated)
@@ -54,6 +54,7 @@ embedding:
# Should be matching the value above in most cases
mode: huggingface
ingest_mode: simple
# gpu: cuda[0] # if you have more than one GPU and you want to select another. defaults to cuda[0], or cpu if cuda not available
Collaborator
You'd need to include this new setting in settings.py as well.
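
Roughly, assuming the existing pydantic-based EmbeddingSettings model in settings.py, the addition could look like this (field name and description are illustrative):

from pydantic import BaseModel, Field

class EmbeddingSettings(BaseModel):
    mode: str
    ingest_mode: str = "simple"
    # New optional setting mirroring the commented example in settings.yaml.
    gpu: str | None = Field(
        None,
        description="Torch device for embeddings, e.g. 'cuda:0'. Defaults to cuda:0 if CUDA is available, otherwise cpu.",
    )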

@@ -28,9 +28,33 @@ def __init__(self, settings: Settings) -> None:
"Local dependencies not found, install with `poetry install --extras embeddings-huggingface`"
) from e

# Get the number of available GPUs
num_gpus = torch.cuda.device_count()
Collaborator
Adding code to the codebase just to print information is not a good practice. I'd remove this whole block of prints.
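
If the information is still useful for troubleshooting, a single debug log line through the existing logger would be less intrusive, for example:

num_gpus = torch.cuda.device_count()
logger.debug("Found %d CUDA device(s); using device %s", num_gpus, device)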
