[SPARK-24307][CORE] Add conf to revert to old code. #21867
Conversation
In case there are any issues in converting FileSegmentManagedBuffer to ChunkedByteBuffer, add a conf to go back to the old code path.
Test build #93520 has finished for PR 21867 at commit
    // SPARK-24307 undocumented "escape-hatch" in case there are any issues in converting to
    // to ChunkedByteBuffer, to go back to old code-path. Can be removed post Spark 2.4 if
    // new path is stable.
    if (conf.getBoolean("spark.fetchToNioBuffer", false)) {
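The diff above gates the revert behind a boolean conf. As a minimal, self-contained sketch of this "escape hatch" pattern — assuming a plain key-value map in place of SparkConf, with `FallbackDemo` and the path labels being illustrative names, not Spark's actual API:

```scala
// Illustrative sketch of a conf-gated fallback between two code paths.
// Names here are hypothetical; only the conf key matches the PR discussion.
object FallbackDemo {
  // Stand-in for SparkConf.getBoolean(key, default)
  def getBoolean(conf: Map[String, String], key: String, default: Boolean): Boolean =
    conf.get(key).map(_.toBoolean).getOrElse(default)

  def fetch(conf: Map[String, String]): String =
    if (getBoolean(conf, "spark.network.remoteReadNioBufferConversion", false))
      "old-nio-path"              // escape hatch: revert to the pre-SPARK-24307 behavior
    else
      "chunked-byte-buffer-path"  // default: the new conversion path
}
```

The point of the pattern is that the new path stays the default, while operators who hit a regression can flip one undocumented conf to restore the old behavior until the flag is removed.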
can we have a better prefix, rather than just spark. ?
Since this condition is immutable, can we define a new variable whose value is assigned out of this method to reduce overhead?
sure -- the fetch-to-disk conf is "spark.maxRemoteBlockSizeFetchToMem", which is why I stuck with just the "spark." prefix. Also, on second thought, I'll make the rest of the name more specific too, since there is lots of "fetching" this doesn't affect.
how about "spark.network.remoteReadNioBufferConversion"?
Maybe we'd better rename "spark.maxRemoteBlockSizeFetchToMem" too?
    if (data != null) {
      return Some(ChunkedByteBuffer.fromManagedBuffer(data, chunkSize))
    // SPARK-24307 undocumented "escape-hatch" in case there are any issues in converting to
    // to ChunkedByteBuffer, to go back to old code-path. Can be removed post Spark 2.4 if
nit: to ChunkedByteBuffer -> ChunkedByteBuffer
oops, thanks for catching that. fixed
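The diff above relies on ChunkedByteBuffer.fromManagedBuffer to turn one large buffer into fixed-size chunks. As a rough, standalone illustration of what such a conversion involves — `toChunks` is a hypothetical helper, not Spark's actual implementation, and it operates on a plain java.nio.ByteBuffer rather than a ManagedBuffer:

```scala
import java.nio.ByteBuffer

// Illustrative only: split a ByteBuffer into read-only views of at most
// chunkSize bytes each, without copying the underlying data.
def toChunks(buf: ByteBuffer, chunkSize: Int): Seq[ByteBuffer] = {
  require(chunkSize > 0, "chunkSize must be positive")
  val src = buf.duplicate()  // leave the caller's position/limit untouched
  val chunks = scala.collection.mutable.ArrayBuffer.empty[ByteBuffer]
  while (src.hasRemaining) {
    val n = math.min(chunkSize, src.remaining())
    val slice = src.slice()
    slice.limit(n)              // restrict this view to the next n bytes
    chunks += slice
    src.position(src.position() + n)
  }
  chunks.toSeq
}
```

Chunking matters here because a single contiguous NIO buffer caps out at 2 GB, while a chunked representation can cover larger file segments.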
LGTM
Test build #93545 has finished for PR 21867 at commit
Test build #93548 has finished for PR 21867 at commit
retest this please
Test build #93566 has finished for PR 21867 at commit
LGTM
retest this please
Test build #93574 has finished for PR 21867 at commit
Thanks! Merged to master.
In case there are any issues in converting FileSegmentManagedBuffer to ChunkedByteBuffer, add a conf to go back to the old code path.
Followup to 7e84764