change critical-heap-percentage to leave out max 200MB (upper limit of 99%) #507
Conversation
…f 99% for critical-heap)
Tests checking for the native timer being loaded occasionally fail because some previous tests initialize the system in a different way. Now re-initialize the native timer for such a case.
- Track off-heap size in BucketRegion and add to getSizeInMemory() and getTotalBytes(). This fixes the callers of getTotalBytes(), including rebalancing and determination of the smallest bucket in the SD layer.
- Update SnappyRegionStatsCollectorFunction to avoid separate collection of off-heap size.
LGTM
@@ -300,21 +300,22 @@ protected void setDefaultVMArgs(Map<String, Object> map, boolean hostData,
     if (maxHeapStr != null && maxHeapStr.equals(this.initialHeapSize)) {
       String criticalHeapStr = (String)map.get(CRITICAL_HEAP_PERCENTAGE);
       if (criticalHeapStr == null) {
-        // for larger heaps, keep critical as 95% and 90% for smaller ones;
+        // for larger heaps, keep critical as 95-99% and 90% for smaller ones;
         // also limit memory remaining beyond critical to 4GB
Is this comment no longer applicable ("limit memory remaining beyond critical to 4GB")?
@sumwale the change looks good. From the comment in Jira, it looks like for cases other than ingestion a 200 MB headroom is sufficient to avoid an OOME in most cases, and for the ingestion case this will likely avoid a LowMemoryException while there is still some space available?
Yes, with these limits the GC does kick in before CRITICAL_UP and frees up space.
…per limit of 99%) (TIBCOSoftware#507)
- For large heaps >40GB leave out max 1GB, for <2GB heaps leave 90% like before, while for the rest leave 200MB for critical-heap-percentage. This translates to 97.5% for 8GB and 98.8% for 16GB. An upper limit of 99% is still applied to be a bit on the safe side.
- Track off-heap size in BucketRegion and add to getSizeInMemory() and getTotalBytes(). This fixes the callers of getTotalBytes(), including rebalancing and determination of the smallest bucket in snappydata.
- Update SnappyRegionStatsCollectorFunction to avoid separate collection of off-heap size.
- Tests checking for the native timer being loaded occasionally fail because some previous tests initialize the system in a different way. Now re-initialize the native timer for such a case.
Changes proposed in this pull request
- For large heaps >40GB leave out max 1GB, for <2GB heaps leave 90% like before, while for the rest leave 200MB for critical-heap-percentage. This translates to 97.5% with 8GB and 98.8% for 16GB. An upper limit of 99% is still applied to be a bit on the safe side.
- Track off-heap size in BucketRegion and add to getSizeInMemory() and getTotalBytes(). This fixes the callers of getTotalBytes(), including rebalancing and determination of the smallest bucket in SD.
- Update SnappyRegionStatsCollectorFunction to avoid separate collection of off-heap size.
- Tests checking for the native timer being loaded occasionally fail because some previous tests initialize the system in a different way. Now re-initialize the native timer for such a case.
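The sizing rule described in this PR (90% below 2GB, a 200MB headroom otherwise, a 1GB headroom above 40GB, capped at 99%) can be sketched as below. This is a minimal illustration of the heuristic as stated in the PR description, not the actual `setDefaultVMArgs` code; the class and method names are hypothetical, and the real implementation's exact thresholds and rounding may differ.

```java
// Sketch of the critical-heap-percentage heuristic from this PR's description.
// Assumption: thresholds are taken verbatim from the PR text; names are made up.
public class CriticalHeapSketch {
  static final long MB = 1024L * 1024L;
  static final long GB = 1024L * MB;

  static double criticalHeapPercentage(long maxHeapBytes) {
    if (maxHeapBytes < 2L * GB) {
      return 90.0; // small heaps: keep the old 90% default
    }
    // very large heaps (>40GB) leave out at most 1GB; the rest leave 200MB
    long headroom = (maxHeapBytes > 40L * GB) ? GB : 200L * MB;
    double pct = 100.0 * (1.0 - (double) headroom / maxHeapBytes);
    return Math.min(pct, 99.0); // upper limit of 99% to be on the safe side
  }

  public static void main(String[] args) {
    // ~97.56 for 8GB and ~98.78 for 16GB; the PR description rounds these
    // to 97.5% and 98.8% respectively
    System.out.println(criticalHeapPercentage(8L * GB));
    System.out.println(criticalHeapPercentage(16L * GB));
  }
}
```

A fixed 200MB headroom, rather than a fixed percentage, is what lets the critical threshold grow with heap size (97.5% at 8GB, 98.8% at 16GB) while the 99% cap keeps very large heaps from cutting the margin too thin.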
Patch testing
precheckin
Is precheckin with -Pstore clean?
in progress
ReleaseNotes changes
Document the default critical-heap-percentage and the conditions under which a user may want to tweak it.
Other PRs
TIBCOSoftware/snappydata#1403