Fix potential OOM when parsing logs in workflows #36
📝 Description
`WorkflowRun.getLogInputStream` reads the entire log output into a `ByteArrayOutputStream`, then constructs a `ByteArrayInputStream` from that buffer, which we then wrap in an `InputStreamReader` (https://github.com/jenkinsci/workflow-job-plugin/blob/1551f82/src/main/java/org/jenkinsci/plugins/workflow/job/WorkflowRun.java#L1105). If the log is large, this can cause a `java.lang.OutOfMemoryError`. Instead, use `build.getLogReader()`, which simply constructs an `InputStreamReader` that wraps the underlying source log session.

For `hudson.model.Run` this should have no impact, as its `getLogReader()` just constructs an `InputStreamReader` the same way we used to.

💎 Type of change
🚦 How Has This Been Tested?
Deployed to our Jenkins that (due to some overactive logging facilities) spewed ~130MB of logs, causing an OOM in the log parser.
Here's the exception we hit before this change was deployed:
🏁 Checklist: