How to identify model is using GPU? #163

Open
PritiDrishtic opened this issue May 17, 2023 · 0 comments
Comments

@PritiDrishtic

I exported the KMAX model using export_model.py on a GPU (Tesla T4).

Please advise how I can determine whether this model is actually using the GPU, since its inference performance is nearly identical to that of the pretrained model running in CPU mode.

I am running inference on an image; the TensorFlow logs from the script are attached below:

I tensorflow/core/common_runtime/executor.cc:1197] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'input_tensor' with dtype uint8 and shape [1530,2720,3]      [[{{node input_tensor}}]]
2023-05-17 05:35:42.665202: I tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:424] Loaded cuDNN version 8800
2023-05-17 05:35:43.606321: I tensorflow/compiler/xla/service/service.cc:169] XLA service 0x7f29702d1dc0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2023-05-17 05:35:43.606368: I tensorflow/compiler/xla/service/service.cc:177]   StreamExecutor device (0): Tesla T4, Compute Capability 7.5
2023-05-17 05:35:44.473843: I ./tensorflow/compiler/jit/device_compiler.h:180] Compiled cluster using XLA!  This line is logged at most once for the lifetime of the process.
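One way to check device usage directly from Python (a minimal sketch, not specific to export_model.py or KMAX, assuming TensorFlow 2.x) is to list the GPUs TensorFlow can see and enable device-placement logging, which prints the device each op actually runs on:

```python
import tensorflow as tf

# List the GPUs visible to TensorFlow; an empty list means ops will run on CPU.
gpus = tf.config.list_physical_devices("GPU")
print("Visible GPUs:", gpus)

# Log the placement of every op (e.g. /device:GPU:0 vs /device:CPU:0).
tf.debugging.set_log_device_placement(True)

# Any op executed after this prints its assigned device in the logs.
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.matmul(a, a)
```

If the matmul is placed on `/device:GPU:0`, TensorFlow is using the GPU; if everything lands on `/device:CPU:0` despite a GPU being listed, the model or input pipeline may be forcing CPU execution.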
