This repository has been archived by the owner on Jan 22, 2024. It is now read-only.

nvidia-docker-plugin exits with "Error: Not supported" after GPU detection. #40

Closed
elezar opened this issue Jan 26, 2016 · 9 comments

@elezar
Member

elezar commented Jan 26, 2016

I am trying to run the nvidia-docker-plugin locally, but get the following error:

$> sudo -u nvidia-docker nvidia-docker-plugin -s /var/lib/nvidia-docker
nvidia-docker-plugin | 2016/01/26 09:57:13 Loading NVIDIA management library
nvidia-docker-plugin | 2016/01/26 09:57:13 Loading NVIDIA unified memory
nvidia-docker-plugin | 2016/01/26 09:57:13 Discovering GPU devices
nvidia-docker-plugin | 2016/01/26 09:57:13 Error: Not Supported

(this is after following the other steps described in the wiki).

The output from nvidia-smi is as follows:

$> nvidia-smi
Tue Jan 26 09:56:53 2016       
+------------------------------------------------------+                       
| NVIDIA-SMI 352.63     Driver Version: 352.63         |                       
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Quadro K2100M       Off  | 0000:01:00.0     Off |                  N/A |
| N/A   55C    P0    N/A /  N/A |    194MiB /  2047MiB |     13%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0      1084    G   /usr/bin/X                                     104MiB |
|    0      2054    G   /usr/bin/gnome-shell                            74MiB |
|    0      3097    G   ...s-passed-by-fd --v8-snapshot-passed-by-fd     5MiB |
+-----------------------------------------------------------------------------+
@elezar
Member Author

elezar commented Jan 26, 2016

After adding further debug output to the code, it seems that the errors are caused by the following lines in nvml.go:
assert(C.nvmlDeviceGetPowerManagementLimit(dev, &power))
assert(C.nvmlDeviceGetCpuAffinity(dev, C.uint(len(mask)), (*C.ulong)(&mask[0])))

Most likely my laptop GPU does not support these properties, and the calls are returning NVML_ERROR_NOT_SUPPORTED instead of NVML_SUCCESS. Is there a more graceful way to handle this?

The assert(C.nvmlDeviceGetPowerUsage(d.handle, &power)) in the function Status also fails when running curl http://localhost:3476/v1.0/gpu/status (assuming the calls to the first two functions have been removed).

Looking at the output of nvidia-smi, I would assume that any field that is shown as N/A will also fail.
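
For reference, here is a minimal sketch of what a more tolerant query could look like, assuming cgo bindings along the lines of those in nvml.go (the errorString helper is hypothetical; this is not the actual nvidia-docker code):

```go
package nvml

// #cgo LDFLAGS: -lnvidia-ml
// #include <nvml.h>
import "C"

import "fmt"

// errorString is a hypothetical helper that turns an NVML return code into
// a Go error (nil on NVML_SUCCESS).
func errorString(ret C.nvmlReturn_t) error {
	if ret == C.NVML_SUCCESS {
		return nil
	}
	return fmt.Errorf("nvml: %s", C.GoString(C.nvmlErrorString(ret)))
}

// powerManagementLimit returns the power limit in milliwatts, or nil when
// the device does not support the query (as many mobile GPUs do not),
// instead of aborting device discovery with an assert.
func powerManagementLimit(dev C.nvmlDevice_t) (*uint, error) {
	var power C.uint

	switch ret := C.nvmlDeviceGetPowerManagementLimit(dev, &power); ret {
	case C.NVML_SUCCESS:
		p := uint(power)
		return &p, nil
	case C.NVML_ERROR_NOT_SUPPORTED:
		return nil, nil // report as N/A, like nvidia-smi does
	default:
		return nil, errorString(ret)
	}
}
```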

@flx42 added the bug label Jan 26, 2016
@3XX0
Member

3XX0 commented Jan 26, 2016

That's a known issue. Mobility cards are poorly supported by NVML, and most of the calls will fail.
This means the REST API will be pretty much useless. Maybe we should add an option to disable the REST API completely.

Note that while using nvidia-docker-plugin will fail, using nvidia-docker in standalone mode should work.

@elezar
Member Author

elezar commented Jan 27, 2016

Yes, I understand that my card is most likely to blame.

I don't think this means the REST API is useless, though. Would it not make sense to handle some of the properties differently, giving the client some indication that they are not available (nvidia-smi does this by printing N/A)?

For power, simply returning zero may make sense, and although CPUs with higher core counts also show NUMA effects (Hyperthreading further complicates this), one should be able to find a sensible default for the CPU affinity too.
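
To illustrate the kind of default I have in mind for the affinity case, a rough sketch (again assuming cgo bindings like those in nvml.go; not actual project code) that falls back to an "any CPU" mask when the query is unsupported:

```go
package nvml

// #cgo LDFLAGS: -lnvidia-ml
// #include <nvml.h>
import "C"

import "fmt"

// cpuAffinity returns the device's ideal CPU affinity mask. When NVML
// reports the query as unsupported, it falls back to a mask with every bit
// set (no affinity preference) instead of failing device discovery.
func cpuAffinity(dev C.nvmlDevice_t) ([]C.ulong, error) {
	mask := make([]C.ulong, 16) // 1024 CPU bits on a 64-bit system

	switch ret := C.nvmlDeviceGetCpuAffinity(dev, C.uint(len(mask)), &mask[0]); ret {
	case C.NVML_SUCCESS:
		return mask, nil
	case C.NVML_ERROR_NOT_SUPPORTED:
		for i := range mask {
			mask[i] = ^C.ulong(0) // sensible default: any CPU
		}
		return mask, nil
	default:
		return nil, fmt.Errorf("nvml: %s", C.GoString(C.nvmlErrorString(ret)))
	}
}
```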

@lukeyeager
Member

If it helps, I've found that NVML can find memory info for all cards, but not necessarily utilization or temperature.

https://github.com/NVIDIA/DIGITS/blob/v3.1.0/digits/device_query.py#L215-L235
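
Put in terms of the plugin's Go code, that would mean something like the sketch below (assuming cgo bindings like those in nvml.go; this is not the DIGITS code, which is Python): query memory unconditionally and treat utilization and temperature as optional.

```go
package nvml

// #cgo LDFLAGS: -lnvidia-ml
// #include <nvml.h>
import "C"

import "fmt"

// deviceStatus queries memory info unconditionally (it appears to be
// available on all cards) and treats utilization and temperature as
// optional, leaving them nil when NVML reports them as unsupported.
func deviceStatus(dev C.nvmlDevice_t) (usedMem uint64, gpuUtil, temp *uint, err error) {
	var mem C.nvmlMemory_t
	if ret := C.nvmlDeviceGetMemoryInfo(dev, &mem); ret != C.NVML_SUCCESS {
		return 0, nil, nil, fmt.Errorf("nvml: %s", C.GoString(C.nvmlErrorString(ret)))
	}
	usedMem = uint64(mem.used)

	var util C.nvmlUtilization_t
	if C.nvmlDeviceGetUtilizationRates(dev, &util) == C.NVML_SUCCESS {
		u := uint(util.gpu)
		gpuUtil = &u // left nil otherwise, i.e. reported as N/A
	}

	var t C.uint
	if C.nvmlDeviceGetTemperature(dev, C.NVML_TEMPERATURE_GPU, &t) == C.NVML_SUCCESS {
		v := uint(t)
		temp = &v // left nil otherwise, i.e. reported as N/A
	}

	return usedMem, gpuUtil, temp, nil
}
```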

@flx42
Member

flx42 commented Jan 27, 2016

I agree we should handle this case more gracefully; that's why I labeled it as a "bug".

@3XX0
Member

3XX0 commented Jan 27, 2016

Handling it gracefully would mean having default values for everything (not just a few calls).
Given the current implementation, that's going to be difficult.
I contacted the NVML team about this a while ago; maybe things will change on their end.

@3XX0
Member

3XX0 commented Jan 27, 2016

@lukeyeager Did you check on GeForce Kepler products?

@3XX0
Member

3XX0 commented Jan 27, 2016

@elezar So regarding your card specifically, this seems to be an issue with NVML.
Can you provide us with the output of nvidia-smi -q?

In any case, we should start working on handling unsupported features gracefully.

@elezar
Member Author

elezar commented Jan 28, 2016

@3XX0 I have run nvidia-smi -q on my machine both on the host and using nvidia-docker (after creating the driver volume manually). Please find the output attached in the files below.

nvidia-docker-smi-q.stdout.txt
nvidia-smi-q.stdout.txt

There seem to be many properties that nvidia-smi cannot determine on my card. For comparison, I have also included the output for the Titan Z and K80s that I have available.
nvidia-smi-q-k80.stdout.txt
nvidia-smi-q-titan.stdout.txt
