Accelerad's rtrace.exe does not decrease calculation time #9
Comments
I'm not sure I understand the issue. Do you think that Accelerad should use more memory on the GPU? Accelerad's GPU memory usage is quite minimal, but it is not limited by anything other than the amount of memory available. In most cases, Accelerad uses global memory, which is also quickly accessible by GPUs. Regarding speed, it's hard to say what speed-up you should expect without knowing more about the model you are running. Accelerad's parallelism is at the primary ray level, so if you have few sensor points, there will be no speed-up. As a side note, it appears that you are trying to run multiple Accelerad instances simultaneously. This is not recommended because it forces the GPU to do context switching, which is slower than running the Accelerad instances one after the other.
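To illustrate that last point, here is a minimal sketch of the recommended pattern: all sensor points go to a single rtrace call (giving the GPU many primary rays to parallelize over), and instances run one after the other rather than concurrently. The octree names, sensor file, and rtrace option values are placeholders, not taken from this thread:

```python
import subprocess

# Hypothetical octree names; in a bifacial_radiance workflow each
# timestamp would have its own octree.
octrees = ["scene_0800.oct", "scene_0900.oct", "scene_1000.oct"]

for oct_file in octrees:
    # Feed ALL sensor points to one rtrace call, and run the instances
    # sequentially to avoid GPU context switching between processes.
    # Depending on the install, the binary may be named accelerad_rtrace.
    with open("sensor_points.txt") as pts, open(oct_file + ".dat", "w") as out:
        subprocess.run(
            ["rtrace", "-h", "-I+", "-ab", "2", "-ad", "1024", oct_file],
            stdin=pts, stdout=out, check=True,
        )
```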
Hi Nathaniel, so now the question for me is why I get completely different results when doing the ray tracing with Accelerad. I will do more testing and try to figure it out. I will come back to you if I need more help.
I'd have to know more about your model and settings to understand why the results you get differ. However, common reasons for different results according to posts on the user group include bad sizing of the irradiance cache and geometry placed far from the origin. These and other factors are discussed on the documentation page.
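One way to test the irradiance-cache hypothesis is to run a CPU build of Radiance and the Accelerad build with identical options and ambient caching disabled, then compare the outputs. This is a sketch only; the install paths and option values are illustrative assumptions, not from this thread:

```python
import subprocess

def run_rtrace(binary, oct_file, points_file, out_file):
    """Run one rtrace binary with a fixed option set so that any remaining
    difference between outputs is not caused by differing parameters."""
    # -aa 0 disables the irradiance cache in classic Radiance; consult the
    # Accelerad documentation for how it sizes and controls its own cache.
    args = [binary, "-h", "-I+", "-ab", "2", "-aa", "0", oct_file]
    with open(points_file) as pts, open(out_file, "w") as out:
        subprocess.run(args, stdin=pts, stdout=out, check=True)

# Hypothetical install paths for a CPU Radiance build and an Accelerad build.
run_rtrace("/usr/local/radiance/bin/rtrace", "scene.oct", "pts.txt", "cpu.dat")
run_rtrace("/usr/local/accelerad/bin/rtrace", "scene.oct", "pts.txt", "gpu.dat")
```

If the two result files agree with caching disabled, the irradiance cache settings are the likely source of the discrepancy.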
Hi Nathaniel,
I am referring to an issue that I already opened a few days ago on the GitHub page of bifacial_radiance by NREL: https://github.com/NREL/bifacial_radiance/issues/458.
I am using the Python package bifacial_radiance to access the Radiance software. Irradiance analysis is performed by calling the rtrace function within bifacial_radiance. I recently switched from running the simulations locally on Windows (11th Gen Intel(R) Core(TM) i7-1185G7 @ 3.00GHz with 4 cores, no GPU) to a Linux computer with an NVIDIA Tesla M10 with 5 multiprocessors. I successfully installed Radiance and then Accelerad.
The software finds the GPU; however, memory usage is limited to approximately 700-800 MiB per rtrace process.
My question is: why is memory usage limited to these 700 MiB per process?
Running multiple simulations at once (one for each timestamp) did not change anything.
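For reference, per-process GPU memory can be observed roughly as follows (a sketch assuming nvidia-smi is on the PATH; this snippet is illustrative and not from the original post):

```python
import subprocess

# Standard nvidia-smi query options; prints one line per GPU process with
# its memory footprint, which is where a figure like 700-800 MiB per
# rtrace process shows up.
result = subprocess.run(
    ["nvidia-smi", "--query-compute-apps=pid,process_name,used_memory",
     "--format=csv"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)
```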
Unfortunately, the results from these simulations also do not agree with comparable ones from the Windows runs...
Is there something else that I can try?
Thanks in advance!