HADOOP-18146: ABFS: Added changes for expect hundred continue header #4039 #5516
Conversation
HADOOP-18146. ABFS: Added changes for expect hundred continue header (apache#4039)
This change lets the client react pre-emptively to server load without getting to 503 and the exponential backoff which follows. This stops performance suffering so much as capacity limits are approached for an account.
Contributed by Anmol Asrani
:::: AGGREGATED TEST RESULT ::::
HNS-OAuth: [INFO] Results:
HNS-SharedKey: [INFO] Results:
NonHNS-SharedKey: [INFO] Results:
NonHNS-OAuth: [INFO] Results:
AppendBlob-HNS-OAuth: [INFO] Results:
Time taken: 106 mins 44 secs.
🎊 +1 overall
This message was automatically generated.
@steveloughran, as discussed on #4039, I have backported the change to branch-3.3. Requesting you to kindly review it. Thanks.
steveloughran
left a comment
LGTM
+1
Added Expect Hundred Continue header to all append requests
-> Heavy load from a Hadoop cluster led to high resource utilization at the FE (front-end) nodes. Investigation on the server side indicated payload buffering at Http.Sys as the cause: payloads of requests that eventually fail due to throttling limits were also being buffered, since buffering is triggered before the FE can start processing the request.
Approach: the client sends the Append HTTP request with an Expect: 100-continue header but holds back the payload until the server replies with HTTP 100 (Continue). We add this header to all append requests to reduce load on the server; see the sketch below.
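For illustration only, here is a minimal sketch of the Expect: 100-continue handshake using plain java.net.HttpURLConnection; this is not the ABFS driver code, and the URL, HTTP method, and payload are placeholders. It shows the idea the PR relies on: headers go out first, and the body is only transmitted once the server has agreed to receive it, so a throttled request never uploads its payload.

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.ProtocolException;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class ExpectContinueSketch {
  public static void main(String[] args) throws Exception {
    byte[] payload = "example append data".getBytes(StandardCharsets.UTF_8);

    // Placeholder endpoint; a real ABFS append targets the account's DFS endpoint.
    URL url = new URL("https://example.dfs.core.windows.net/container/file?action=append");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("PUT");
    conn.setDoOutput(true);
    conn.setFixedLengthStreamingMode(payload.length);
    // Ask the server to acknowledge with "100 Continue" before the body is sent.
    conn.setRequestProperty("Expect", "100-continue");

    try (OutputStream out = conn.getOutputStream()) {
      // With Expect: 100-continue set, the connection waits for the interim
      // 100 response before the body bytes are actually transmitted.
      out.write(payload);
      System.out.println("HTTP status: " + conn.getResponseCode());
    } catch (ProtocolException e) {
      // If the server rejects the request up front (e.g. it is throttling),
      // the payload is never sent and the client can back off immediately.
      System.err.println("Server declined before payload was sent: " + e);
    }
  }
}
```

The benefit is exactly the one described above: when the account is near its capacity limits, a rejected append costs only the request headers rather than a fully buffered payload plus a 503 and exponential backoff.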