
Model to device owlv2 CUDA problem #2

Open
R3xpook opened this issue Jan 4, 2024 · 0 comments

Comments


R3xpook commented Jan 4, 2024

Hello, I noticed that the model doesn't automatically use CUDA even when a CUDA device is available. I'm not an expert, but I made some modifications in owlv2_model.py: I added .to(DEVICE) to move the model and its inputs to the GPU.

After the model instantiation (around line 37):

self.model = Owlv2ForObjectDetection.from_pretrained(
"google/owlv2-base-patch16-ensemble"
).to(DEVICE)

And for the inputs in the predict method (around line 44):

inputs = self.processor(text=texts, images=image, return_tensors="pt").to(DEVICE)

I don't know if this is the correct or most elegant way, but it works.
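For reference, a minimal sketch of how the two changes fit together. The class and method names here are illustrative, not the repository's exact owlv2_model.py; it assumes the standard transformers Owlv2Processor / Owlv2ForObjectDetection API:

# Minimal sketch (illustrative, not the repo's exact code): pick a device once
# and move both the model weights and the processed inputs onto it.
import torch
from PIL import Image
from transformers import Owlv2Processor, Owlv2ForObjectDetection

DEVICE = "cuda" if torch.cuda.is_available() else "cpu"

class Owlv2Model:
    def __init__(self):
        self.processor = Owlv2Processor.from_pretrained(
            "google/owlv2-base-patch16-ensemble"
        )
        # Move the model to the GPU when one is available
        self.model = Owlv2ForObjectDetection.from_pretrained(
            "google/owlv2-base-patch16-ensemble"
        ).to(DEVICE)

    def predict(self, image: Image.Image, texts):
        # Inputs must live on the same device as the model,
        # otherwise the forward pass raises a device-mismatch error
        inputs = self.processor(
            text=texts, images=image, return_tensors="pt"
        ).to(DEVICE)
        with torch.no_grad():
            outputs = self.model(**inputs)
        return outputs

The key point is that both tensors and weights end up on the same device; moving only the model (or only the inputs) would still fail with a device-mismatch error.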
