Cross merge rc-3.5.4 to dev #2333
Conversation
The new icx compiler is a bit smarter about checking for memory errors, and a new one was discovered that causes OpenFAST to crash when calling MAP_End. The error occurs because the OutputList nodes contain pointers to memory that has already been freed. The OutputList was constructed by appending nodes from other lists (containing pointers to string memory) without allocating new memory; when the original lists were freed, those pointers became invalid, but the OutputList then tried to free them again, resulting in a double-free error. To fix this, set_output_list was changed to allocate new memory for copies of these nodes.
Fix crash in MAP_End when using Intel's new icx compiler and disable caching in setup-python GH action
Update GHCR doc, remove old Dockerfile
* FEA: Add python package outline
* REF: Rename python distribution/package to `openfast_python` / `openfast`
* FEA: Move OpenFAST readers/writers over from WEIS
* DEP: Add `pcrunch` dependency
* REF: Update import paths from `weis` to `openfast`
* DOC: Add contributors from WEIS to `authors` in `pyproject.toml`
* DOC: Add brief explanation of `openfast` package to sub-readme
* OPS: Fix format for authors names/emails
* TST: Copy test from WEIS
* OPS: Move `pyproject.toml` to top level
* OPS: Publish `openfast` package to PyPI on release
* TST: Add test data for `openfast` python package
* REF: Remove extra python files
* DOC: Adjust readme wording
* DOC: Use link to specific git ref instead of `main` branch in readme
* OPS: Add note about not relying on the `octue-openfast` package
* OPS: Allow workflow dispatch of `deploy` workflow
* WIP: Temporarily change name of python package
* Deleting files related to running OpenFAST, restructuring to be IO reading and writing only
* Setting ROSCO as optional, removing lin
* Updating the test files before move to r-test
* Removing rosco and pcrunch as deps
* OpenFAST Output & Lin reader
* Adding Output reader to test
* Pointing test to one r-test case, removing test_data
* Changing library name to openfast_io
* OPS: Move poetry files into distribution root and rename package
* REF: Rename package to `openfast_io`
* DEP: Add `rosco` as optional dependency
* DOC: Add installation instructions to python package readme
* DOC: Fix docker commands for GHCR images
* DOC: Add python package installation to docs
* DOC: Update python package readme
* WIP: Temporarily rename python package
* OPS: Set working directory for python package build and publish
* WIP: Temporarily change python package version
* FIX: Update python package import paths
* WIP: Increment temporary version number
* Replaced references to weis within code
* Added Apache-2.0 license to pyproject.toml

Co-authored-by: Mayank Chetan <mayankchetan@gmail.com>
Always build openfastcpplib as shared. Use BUILD_OPENFAST_CPP_DRIVER to enable/disable the openfastcpp executable so that yaml-cpp isn't required for openfastcpplib.
…ddressing or null pointer dereferencing
- CountWords would scan past the end of the line
- ProcessComFile Cleanup would dereference a null pointer when getting the NextFile pointer
- WrScr would hide a character string allocation, which caused a memory issue in ifx
Some of these values were not getting zeroed out. This occasionally led to spurious root acceleration values when the memory was previously occupied by something non-zero.
Also moved the zeroing into the `else` branch of the error check instead of after it -- otherwise we could have triggered a memory violation and never gotten the error back.
# Read rows from file, raise exception on failure
try:
    vals = np.genfromtxt(fid, dtype=np.float64, max_rows=n)
@mayankchetan This doesn't necessarily need to be fixed here, but it turns out that genfromtxt uses a lot of memory, and it's more efficient to do the following (https://stackoverflow.com/questions/49076386/why-does-np-genfromtxt-initially-use-up-a-large-amount-of-memory-for-large-dat), though you may want to check this code first:
Replace:
vals = np.genfromtxt(fid, dtype=np.float64, max_rows=n)
with:
vals = np.empty([n, m], np.float64)
for i in range(n):
    vals[i, :] = fid.readline().split()
Thank you for the heads up! Will make the changes.
Ready to merge
Feature or improvement description
Some improvements from the 3.5.4 release are needed in dev for further work there.

Related issue, if one exists
Impacted areas of the software
The primary change is the addition of the openfast_python package for file I/O conversions.

Additional supporting information
@mayankchetan would like to update the openfast_python converters for the current dev branch.

Test results, if applicable
No test results should change.