How to disable CUDA support #569
It's a CMake option. You can use CMake's GUI (ccmake or cmake-gui) or the command line to disable it when you initially invoke cmake.
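For reference, a minimal command-line sketch; the option name `DLIB_USE_CUDA` and the separate build directory are assumptions based on a typical recent dlib checkout:

```sh
# Configure the C++ build with CUDA disabled (DLIB_USE_CUDA is the CMake
# option in recent dlib releases), then build as usual.
mkdir build && cd build
cmake .. -DDLIB_USE_CUDA=0
cmake --build . --config Release
```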
Sorry to bother you, but I disabled USE_CUDA in cmake-gui and even deleted CUDA_DIR, leaving it blank. After `cmake --build .`, I ran `python3 setup.py install` to build the Python bindings, and it still finds CUDA:
This is really not what I want to see. Setting CUDA_VISIBLE_DEVICES to blank in the environment does not help at all.
You don't call cmake directly if you are using Python, so I'm not sure what you mean when you say you compiled something with cmake. In any case, you can pass build options directly to setup.py.
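As an illustration of passing such an option through setup.py, here is a sketch; the exact flag syntax (`--set` vs. `--no`) has varied across dlib releases, so treat both forms below as assumptions to verify against your version:

```sh
# Newer dlib releases forward CMake cache entries via --set (assumed syntax):
python3 setup.py install --set DLIB_USE_CUDA=0

# Older releases used a --no flag instead (also an assumption to verify):
python3 setup.py install --no DLIB_USE_CUDA
```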
If I run
Is it possible to disable CUDA support at runtime? Maybe with some environment variable?
No.
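What you can do at runtime is check how the installed build was compiled. A small sketch, assuming a recent dlib Python module that exposes the `DLIB_USE_CUDA` attribute:

```python
import dlib

# DLIB_USE_CUDA reflects how this dlib build was compiled; it cannot be
# flipped at runtime, only by rebuilding without CUDA.
if dlib.DLIB_USE_CUDA:
    print("dlib was built with CUDA support")
else:
    print("dlib was built without CUDA support")
```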
What prevents toggling support at runtime? What if two modules were built (one with CUDA and one without), with toggling or switching handled at the access layer? This is important because in one step of my pipeline I need accurate face extraction using a CNN on a high-resolution feed (which runs into memory issues due to a malloc problem, see #1725). The workaround would be running that step on the CPU. The other step would still need to run on the GPU.
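A hypothetical sketch of that access-layer idea, assuming two separately built dlib extensions installed under the invented names `dlib_cpu` and `dlib_cuda` (dlib does not actually ship such packages); the backend is chosen once via an environment variable:

```python
import importlib
import os

# Invented module names for illustration only: two dlib builds, one compiled
# with CUDA and one without, installed side by side under different names.
_backend = os.environ.get("DLIB_BACKEND", "cpu").lower()
dlib = importlib.import_module("dlib_cuda" if _backend == "cuda" else "dlib_cpu")

# Downstream code imports this shim instead of dlib directly, so each pipeline
# step can pick the CPU or GPU build without rebuilding anything.
```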
There is no deep reason why dlib couldn't be upgraded to support this. See #1852. But it is not currently an option. However, in this case what you are suggesting is not a good idea. This stuff runs much faster on the GPU than CPU. If your image is really super huge, you should just chop the image up into parts and run them individually. Like cut the image into 4 subimages or something like that. How best to divide it up depends on your application (i.e. where faces can appear, how big they might be, do you need to divide over scale space, etc.)
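A rough sketch of that tiling approach, using dlib's CNN face detector from the Python API; the 2x2 grid, the overlap margin, and the model filename `mmod_human_face_detector.dat` are assumptions chosen for illustration:

```python
import dlib
import numpy as np

# Assumed model filename; the CNN face detector weights are downloaded
# separately from dlib.net.
detector = dlib.cnn_face_detection_model_v1("mmod_human_face_detector.dat")

def detect_faces_tiled(img, rows=2, cols=2, margin=64):
    """Run the CNN detector on overlapping tiles of a large image and map
    detections back into full-image coordinates."""
    h, w = img.shape[0], img.shape[1]
    tile_h, tile_w = h // rows, w // cols
    found = []
    for r in range(rows):
        for c in range(cols):
            # Pad each tile by a margin so faces straddling a tile border
            # are still fully visible in at least one tile.
            top = max(r * tile_h - margin, 0)
            left = max(c * tile_w - margin, 0)
            bottom = min((r + 1) * tile_h + margin, h)
            right = min((c + 1) * tile_w + margin, w)
            tile = np.ascontiguousarray(img[top:bottom, left:right])
            for det in detector(tile, 0):
                rect = det.rect
                # Shift the detection from tile coordinates back to
                # full-image coordinates.
                found.append(dlib.rectangle(rect.left() + left,
                                            rect.top() + top,
                                            rect.right() + left,
                                            rect.bottom() + top))
    return found
```

Here `img` would be a numpy image (e.g. from `dlib.load_rgb_image`), and because the tiles overlap, faces near tile borders can be reported twice, so a de-duplication pass (e.g. by rectangle overlap) would still be needed.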
I have a project that contains both dlib and TensorFlow models, and at inference time I just get a CUDA error.
I don't know why. How can I disable CUDA support by default so that I can use the two at the same time?