[Improvement] record split across writer buffer to save memory #157
Conversation
Codecov Report
@@ Coverage Diff @@
## master #157 +/- ##
============================================
+ Coverage 57.38% 57.43% +0.04%
+ Complexity 1208 1205 -3
============================================
Files 150 150
Lines 8209 8196 -13
Branches 775 771 -4
============================================
- Hits 4711 4707 -4
+ Misses 3255 3249 -6
+ Partials 243 240 -3
}
// `require` is the memory this record still needs; the copy path below runs
// only when a current buffer exists and has free space past nextOffset.
int require = calculateMemoryCost(length);
int hasCopied = 0;
if (require > 0 && buffer != null && buffer.length - nextOffset > 0) {
How about:
// comments
if (require > 0) {
  // comments
  if (buffer != null) {
    int hasCopied = xxx;
    // comments
    if (hasCopied > 0) {
    }
  }
}
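Structured this way, the require > 0 fast path is checked before any buffer bookkeeping, the buffer-reuse branch states its precondition explicitly, and the hasCopied accounting stays local to the branch that produced it.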
Follow this suggestion
@summaryzb For this optimization, I think there is still one more memory copy
Yeah, it indeed has one more memory copy
For the situation with buffer = 3k and record length = 2.8k, after inserting 1000 records:
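A back-of-the-envelope comparison under those assumptions: without splitting, each 2.8k record occupies its own 3k buffer, so 1000 records allocate 1000 × 3k = 3000k while holding only 2800k of data, wasting roughly 200k. With splitting, the same 2800k packs into ⌈2800k / 3k⌉ = 934 buffers (about 2802k), so the only waste is the tail of the final buffer.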
In the Spark client, all memory is requested from the executor, so there shouldn't be any critical problems such as memory leaks or OOM.
No obvious benefit can be gained from this PR, so closing it.
What changes were proposed in this pull request?
Records added to WriterBuffer will be split across byte-array buffers, so every byte array is fully used.
Why are the changes needed?
Previously, if for example the length of every record is 2k, each time we add a record we create a buffer of 3k length by default and wrap the previous buffer as a WrappedBuffer, which wastes 1k of memory in every WrappedBuffer. Applying this PR saves that memory.
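Below is a minimal sketch of the splitting idea, assuming a 3k default buffer size; the class and names here (SplittingBuffer, addRecord, fullBuffers) are hypothetical illustrations, not the actual WriterBuffer code:

import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: records are copied across fixed-size byte arrays so
// every allocated buffer is completely filled before a new one is created.
public class SplittingBuffer {
  private final int bufferSize;                 // e.g. 3k by default
  private final List<byte[]> fullBuffers = new ArrayList<>();
  private byte[] current;                       // buffer currently being filled
  private int nextOffset;                       // next free byte in `current`

  public SplittingBuffer(int bufferSize) {
    this.bufferSize = bufferSize;
  }

  // Copy the record into the current buffer; when the buffer fills up, seal
  // it and continue the remainder in a fresh buffer, so no tail bytes are wasted.
  public void addRecord(byte[] record) {
    int copied = 0;
    while (copied < record.length) {
      if (current == null || nextOffset == bufferSize) {
        if (current != null) {
          fullBuffers.add(current);
        }
        current = new byte[bufferSize];
        nextOffset = 0;
      }
      int toCopy = Math.min(record.length - copied, bufferSize - nextOffset);
      System.arraycopy(record, copied, current, nextOffset, toCopy);
      copied += toCopy;
      nextOffset += toCopy;
    }
  }
}

The trade-off, as the review thread above notes, is the extra copy work when a record straddles a buffer boundary.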
Does this PR introduce any user-facing change?
No
How was this patch tested?
Passes unit tests.