HADOOP-18383. Codecs with @DoNotPool annotation are not closed causing memory leak #4585
Conversation
cc @sunchao
Thanks @kevins-29, the fix looks good to me. However, is this addressing the issue mentioned in HADOOP-12007? My understanding is that the issue there is related to …
```java
gzipCodec.createOutputStream(new ByteArrayOutputStream(), compressor)) {
  outputStream.write(1);
  fail("Compressor from Codec with @DoNotPool should not be useable after returning to CodecPool");
} catch (NullPointerException exception) {
```
NPE is the best we can do?
Unfortunately I couldn't find another way to test that the underlying `Compressor`/`Decompressor` has been closed. There is `finished`, but that is set by `reset()` and has different semantics.
Could we add a check in the place where we would encounter the null and throw a friendlier exception from there, something like an already-closed exception?
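The suggestion above can be sketched as a small, self-contained toy: a guard flag is checked before compressing, and a descriptive exception replaces the NPE the caller would otherwise hit. Class and method names here are illustrative, not the actual Hadoop implementation.

```java
import java.io.IOException;

// Hypothetical descriptive exception, per the review suggestion.
class AlreadyClosedException extends IOException {
    AlreadyClosedException(String message) { super(message); }
}

class SketchCompressor {
    private boolean ended = false;

    // In the real codec, end() would release native zlib resources.
    void end() { ended = true; }

    int compress(byte[] buf, int off, int len) throws IOException {
        if (ended) {
            // Friendlier failure than the NullPointerException seen in the test.
            throw new AlreadyClosedException("compress called on finalized compressor");
        }
        return 0; // actual compression omitted in this sketch
    }
}

public class Demo {
    public static void main(String[] args) {
        SketchCompressor compressor = new SketchCompressor();
        compressor.end(); // simulate the pool ending a @DoNotPool compressor
        try {
            compressor.compress(new byte[8], 0, 8);
        } catch (IOException e) {
            System.out.println(e.getClass().getSimpleName() + ": " + e.getMessage());
        }
    }
}
```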
@sunchao Should I create a new Jira ticket for this?
@kevins-29, yes please create a new JIRA for this issue, and change the PR title afterwards. Thanks.
🎊 +1 overall
This message was automatically generated.
💔 -1 overall
This message was automatically generated.
LGTM. cc @viirya too since he authored some of the Gzip code.
LGTM, but why are there so many failures in CI?
There's one unit test failure, which is unrelated.
Merged to trunk, thanks @kevins-29!
@kevins-29 could you open a PR targeting …?
@sunchao will do. |
Description of PR
Explicitly call `end()` when returning `Compressor` or `Decompressor` implementations with the `@DoNotPool` annotation to the `CodecPool`.

How was this patch tested?
I created the following project to demonstrate the leak. You can run the demo with …
and then monitor the memory usage using …
Results - Before Patch
Results - After Patch
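The core of the change described above — ending non-poolable codecs instead of caching them — can be sketched as a self-contained toy. The class names and the marker annotation below are illustrative stand-ins, not the actual Hadoop code.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Stand-in for Hadoop's @DoNotPool marker: instances must not be cached.
@Retention(RetentionPolicy.RUNTIME)
@interface DoNotPool {}

interface Compressor {
    void end(); // frees native/direct-buffer resources
}

@DoNotPool
class GzipLikeCompressor implements Compressor {
    boolean ended = false;
    @Override
    public void end() { ended = true; }
}

public class CodecPoolSketch {
    // Sketch of returnCompressor: a @DoNotPool instance is never cached,
    // so end() is called explicitly to avoid leaking its resources.
    static void returnCompressor(Compressor compressor) {
        if (compressor == null) {
            return;
        }
        if (compressor.getClass().isAnnotationPresent(DoNotPool.class)) {
            compressor.end();
            return;
        }
        // ...the real pool would reset and cache poolable instances here.
    }

    public static void main(String[] args) {
        GzipLikeCompressor compressor = new GzipLikeCompressor();
        returnCompressor(compressor);
        System.out.println("ended=" + compressor.ended);
    }
}
```

Before the fix, the annotated branch was missing: the instance was neither cached nor ended, so its native resources lingered until finalization, producing the growth seen in the before-patch results.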