Add AVHRRMTA_G-NAVO-L2P-v1.0 and AVHRRMTB_G-NAVO-L2P-v1.0 #15
Conversation
Useful links for future reference on shared memory in Kubernetes: kubernetes/kubernetes#63126
podaac/merger/merge_worker.py
resized_arr = resize_var(ds_var, var_meta, max_dims)

# Limit how much memory we allocate to 60 MB
while memory_limit.value + resized_arr.nbytes > 60000000 and not out_queue.empty():
Should this value (60000000) be an environment variable so we could change it at some point in the future? How does this affect the processing? Does it just take longer? I'm wondering if we are at risk of not being able to use concise on certain data because we're limiting ourselves to 60 MB. Could there be a dataset that won't fit into this 60 MB and therefore couldn't be processed?
- I can update it to read an env variable with a default (a sketch of what that could look like is below). I was hoping we could set the shared memory size via Harmony somewhere, but it doesn't seem like we can change it at the moment.
- The version of Kubernetes that Harmony is currently running doesn't have a way to set the shared memory size. We can mount memory into shared memory, but it ends up taking half the memory of the Kubernetes pod, which is the default when no size is specified. There is a fix for this in a later version of Kubernetes. Another thing Harmony would need to enable is LocalStorageCapacityIsolation, which I'm not sure about yet.
- It doesn't take any longer: concise runs two processes, one read and one write, and it finishes when the write is done. The read is faster than the write, so even if we throttle the read, the whole thing only finishes as fast as the write process.
- We are limited to the 60 MB in the sense that if a variable is bigger than that, it will still try to write it to shared memory and crash.
- Another option is to go with single-core processing, so we don't use shared memory at all until we have a way to set the shared memory size limit in Harmony.
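If we go the env-variable route, a minimal sketch of the idea (the variable name `MAX_SHARED_MEMORY_BYTES` and the helper are hypothetical, not the actual merge_worker.py code):

```python
import os

# 60 MB stays the default if the (hypothetical) env variable is unset.
DEFAULT_MEMORY_LIMIT = 60_000_000


def get_memory_limit() -> int:
    """Read the shared-memory allocation cap from the environment."""
    return int(os.environ.get("MAX_SHARED_MEMORY_BYTES", DEFAULT_MEMORY_LIMIT))
```

The throttle loop in the diff would then compare against `get_memory_limit()` instead of the hard-coded `60000000`.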
Kudos, SonarCloud Quality Gate passed!
Description
Add AVHRRMTA_G-NAVO-L2P-v1.0 and AVHRRMTB_G-NAVO-L2P-v1.0 to associations.txt
Issue #10: handle empty granules
Overview of work done
The fork process dies suddenly with signal 7, which is a SIGBUS (memory alignment) error on Linux. This doesn't happen when we run concise locally, but when run in Harmony we get this error. I added code to limit how much memory is allocated and to wait for the write process to delete the allocated space before continuing to read and allocating more memory. Running in single-core mode also works for these collections. I tried to increase memory via env variables for the Docker container, but it didn't seem to have any effect and the fork process still died.
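For context, the limiting logic is essentially a bounded producer/consumer wait on the read side. A simplified sketch, assuming a polling sleep and reusing the `memory_limit` and `out_queue` names from the diff above (the helper itself is hypothetical):

```python
import time

# Cap on shared memory held by the read process; the PR hard-codes 60000000 bytes.
MEMORY_LIMIT_BYTES = 60_000_000


def throttle_read(memory_limit, out_queue, nbytes, poll_interval=0.1):
    """Block the read side until allocating `nbytes` more stays under the cap,
    or until the output queue is empty (nothing left for the writer to free)."""
    while memory_limit.value + nbytes > MEMORY_LIMIT_BYTES and not out_queue.empty():
        time.sleep(poll_interval)
```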
Bashing into the concise Docker container and running concise via the merge CLI resulted in the error below.
Shared memory (/dev/shm) is only 64 MB; will ask the Harmony team to see if we can increase it.
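For reference, the /dev/shm capacity can be confirmed from inside the container with a quick check like this (a diagnostic sketch, not part of the PR):

```python
import shutil

# /dev/shm is the tmpfs backing Python's shared memory; in the Harmony pods
# it reports roughly 64 MB total.
usage = shutil.disk_usage("/dev/shm")
print(f"/dev/shm total: {usage.total / 1e6:.0f} MB, free: {usage.free / 1e6:.0f} MB")
```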
Overview of verification done
Tested the collections in UAT with local Harmony.
Overview of integration done
PR checklist:
See Pull Request Review Checklist for pointers on reviewing this pull request