About the inference time of the network #8

Open
wulalalalalalala opened this issue Feb 27, 2022 · 3 comments

wulalalalalalala commented Feb 27, 2022

Thanks for your contribution.

But the inference time I measured is quite different from the one mentioned in the paper: it takes more than 100 ms for a 4K image. The code I used is as follows.
```python
import numpy as np
import torch

# B_transformer is the network defined in this repository
model = B_transformer().cuda()
model.eval()  # switch to eval mode for inference timing
a = torch.randn(1, 3, 1024, 1024).cuda()

starter = torch.cuda.Event(enable_timing=True)
ender = torch.cuda.Event(enable_timing=True)
repetitions = 100
timings = np.zeros((repetitions, 1))

with torch.no_grad():
    # GPU warm-up
    for _ in range(50):
        enhanced_image = model(a)

    # measure performance
    for rep in range(repetitions):
        torch.cuda.synchronize()
        starter.record()
        enhanced_image = model(a)
        ender.record()
        # wait for GPU sync
        torch.cuda.synchronize()
        curr_time = starter.elapsed_time(ender)  # milliseconds
        timings[rep] = curr_time

mean_syn = np.sum(timings) / repetitions
std_syn = np.std(timings)
print(mean_syn)
```
Is this right?
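
As a cross-check on the measurement itself, here is a minimal sketch using `torch.utils.benchmark`, which performs its own warm-up and CUDA synchronization; it assumes `model` and `a` are built exactly as in the snippet above:

```python
import torch
from torch.utils import benchmark

# model and a are constructed as in the snippet above
timer = benchmark.Timer(
    stmt="model(a)",
    globals={"model": model, "a": a},
)

# grad mode is thread-local, so wrapping timeit() in no_grad applies to the timed statement
with torch.no_grad():
    measurement = timer.timeit(100)  # 100 timed runs after an internal warm-up

print(measurement.mean * 1e3, "ms per forward pass")
```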

@zzr-idam
Owner

Our model is quick for inference; the slow times you are seeing may have something to do with a cold start.

@wulalalalalalala
Author

Thank you for your reply, but my code already includes a GPU warm-up step.
If you don't mind, could you provide the correct code for measuring the inference time?

@zhangn77

> Our model is quick for inference; the slow times you are seeing may have something to do with a cold start.

I have the same question. The runtime I measured on a Tesla V100 GPU for a 1024×1024 image comes to about 100 ms, which is quite different from the number in your paper (9 ms for a 4K image). Can you share the code you used for measuring the inference time?
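
For what it's worth, here is a hedged sketch of a check at an actual 4K input (treating "4K" as 3840×2160 and trying fp16 autocast are assumptions on my side, not details from the paper), assuming the network accepts arbitrary resolutions:

```python
import torch

# assumes model = B_transformer().cuda().eval() as above, and that the
# network accepts arbitrary input resolutions
a_4k = torch.randn(1, 3, 2160, 3840).cuda()

starter = torch.cuda.Event(enable_timing=True)
ender = torch.cuda.Event(enable_timing=True)

# fp16 autocast is an assumption, not something stated in the paper
with torch.no_grad(), torch.cuda.amp.autocast():
    for _ in range(10):  # warm-up
        model(a_4k)
    torch.cuda.synchronize()
    starter.record()
    for _ in range(20):
        model(a_4k)
    ender.record()
    torch.cuda.synchronize()

print(starter.elapsed_time(ender) / 20, "ms per 4K forward pass")
```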
