BOPTEST slows down over the course of a simulation #520
Comments
Thanks @kbenne for reporting. I've done some memory and simulation time profiling with the following case: Ubuntu 18.04 VM. Below is a plot of computer memory use over the time taken to complete the simulation period (memory recorded every 1 s of real time) for the different arrangements described above. Note the reduction in memory consumption and total simulation time provided by using python arrays, instead of numpy arrays or python lists, for appending and storing the data. I therefore propose changing the implementation to use python arrays for data storage instead of the current numpy arrays.

The plot above was created using the attached python script and development on this branch: https://github.com/ibpsa/project1-boptest/tree/issue520_arrayAppend. The latest commit on that branch uses python arrays. The remaining to-dos I see are to adjust for numerical differences in the unit test results and to review.
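For reference, here is a minimal sketch of the kind of comparison described above. It is not the attached script; the step count and stored values are illustrative only. It times and measures peak memory for appending one value per step with a python list, a python array, and a numpy array.

```python
# Sketch only: compare time and peak memory of three appending strategies.
import array
import time
import tracemalloc

import numpy as np

N_STEPS = 50_000  # illustrative; an annual BOPTEST run takes many more steps


def run(strategy):
    tracemalloc.start()
    t0 = time.perf_counter()
    if strategy == 'list':
        store = []
        for i in range(N_STEPS):
            store.append(float(i))
    elif strategy == 'array':
        store = array.array('d')
        for i in range(N_STEPS):
            store.append(float(i))
    else:  # 'numpy'
        store = np.array([], dtype=float)
        for i in range(N_STEPS):
            # np.append copies the whole array on every call
            store = np.append(store, float(i))
    elapsed = time.perf_counter() - t0
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return elapsed, peak


for strategy in ('list', 'array', 'numpy'):
    elapsed, peak = run(strategy)
    print(f"{strategy:>6}: {elapsed:.3f} s, peak {peak / 1e6:.1f} MB")
```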
See also the discussion regarding single or double float precision here: #521 (comment).
Closed by #522.
We have noticed that the time required to step the simulation increases (roughly linearly) over the course of a simulation. This isn't as noticeable for two-week test scenarios, but it becomes significant for an annual run: we are seeing almost a fourfold increase in step time by the end of an annual simulation.
After some investigation, it appears that the problem is related to the y_store and u_store data structures, which hold the historical simulation data and grow over the simulation time. Because these structures are numpy arrays, they are copied on every step when new data is appended. Here is a graph of the simulation step time over the course of an annual simulation.
This issue relates to #240, which introduced numpy arrays because they are more memory efficient than Python lists. The problem we are seeing now is that appending to a numpy array creates a copy on every step, which is computationally expensive.
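To illustrate the copy-on-append behavior, here is a generic numpy example (not BOPTEST code):

```python
# Illustrative only: np.append returns a brand-new array on every call,
# so appending one step's data copies everything stored so far.
import numpy as np

store = np.zeros(5)
grown = np.append(store, 1.0)
print(grown is store)      # False: a new array object was allocated
print(grown.base is None)  # True: it owns a fresh buffer, i.e. a full copy
# As the stored history grows, the per-step cost grows with it,
# roughly O(n) per step and O(n^2) over the whole simulation.
```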
The solution may be to use a Python array (https://docs.python.org/3/library/array.html), which should offer memory-efficient storage of numerical values while avoiding a copy on each step.
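A minimal sketch of what such a store could look like, using a hypothetical ResultStore wrapper rather than BOPTEST's actual data structures: appends are amortized constant time, and the single copy happens only when results are retrieved.

```python
# Sketch with hypothetical names; not BOPTEST's actual API.
import array

import numpy as np


class ResultStore:
    """Append-only store backed by array.array, which grows without
    copying the full history on each step."""

    def __init__(self):
        self._t = array.array('d')
        self._y = array.array('d')

    def append(self, t, y):
        # Appending to array.array does not copy the existing data.
        self._t.append(t)
        self._y.append(y)

    def as_numpy(self):
        # One copy at retrieval time, instead of one copy per step.
        return np.array(self._t), np.array(self._y)


# Usage sketch: one append per simulation step (illustrative values).
store = ResultStore()
for step in range(10):
    store.append(step * 30.0, 20.0 + 0.1 * step)
t, y = store.as_numpy()
print(t.shape, y.shape)
```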