
Convert RestGetMappingAction to chunked encoding #89906

Conversation

original-brownbear (Member)

Straightforward conversion of this endpoint, plus a basic test for a response large enough to be split into a few chunks.

relates #89838

@original-brownbear original-brownbear added :Distributed Coordination/Network Http and internode communication implementations >refactoring v8.5.0 labels Sep 8, 2022
@elasticsearchmachine elasticsearchmachine added the Team:Distributed (Obsolete) Meta label for distributed team (obsolete). Replaced by Distributed Indexing/Coordination. label Sep 8, 2022
@elasticsearchmachine (Collaborator)

Pinging @elastic/es-distributed (Team:Distributed)

Review comment on this part of the diff:

    RestStatus.OK,
    ChunkedRestResponseBody.fromXContent(
        () -> Iterators.concat(
            Iterators.single((b, p) -> b.startObject()),
original-brownbear (Member, Author)

I'm not quite sure why GetMappingsResponse serializes as a fragment, nor why we had the complicated wrapping here. I didn't want to change that in this PR, so for now I wrapped it manually.
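To make the pattern in the snippet above concrete, here is a minimal sketch in plain Java (not the Elasticsearch API): a chunked body is a sequence of small writer lambdas drained one at a time, and because the response renders as an XContent fragment, the endpoint wraps it in an explicit startObject()/endObject() pair, modelled here by the "{" and "}" chunks. StringBuilder stands in for XContentBuilder; all names are illustrative.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class ChunkedBodySketch {
    // Render a response as one chunk per writer, with a manually added
    // object wrapper around the fragment, mirroring the Iterators.concat
    // structure in the diff above.
    public static List<String> renderChunks(List<String> indices) {
        List<Consumer<StringBuilder>> writers = new ArrayList<>();
        writers.add(b -> b.append('{'));                 // wrapper: startObject
        for (int i = 0; i < indices.size(); i++) {
            final String name = indices.get(i);
            final String sep = i + 1 < indices.size() ? "," : "";
            writers.add(b -> b.append('"').append(name).append("\":{\"mappings\":{}}").append(sep));
        }
        writers.add(b -> b.append('}'));                 // wrapper: endObject
        List<String> chunks = new ArrayList<>();
        for (Consumer<StringBuilder> writer : writers) { // one chunk per writer
            StringBuilder b = new StringBuilder();
            writer.accept(b);
            chunks.add(b.toString());
        }
        return chunks;
    }
}
```

The point of the shape is that each index's mappings can be serialized (and its buffer released) independently, instead of building the whole response in one buffer.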

    @LuceneTestCase.SuppressFileSystems(value = "HandleLimitFS") // we sometimes have >2048 open files
    public class RestGetMappingsIT extends HttpSmokeTestCase {

        public void testGetLargeMappingsResponse() throws Exception {
Contributor

Hmm, I think that if we approach testing chunked encoding like this across all APIs, we'll end up adding a lot of costly tests. Could we instead reduce the chunk size in these smoke tests so that chunking is forced at a more reasonable scale?

Also I think this test passes even without the production code changes. Can we assert that the REST response really did arrive in multiple chunks?

original-brownbear (Member, Author)

Hmm, now that I think about it more, maybe this test is pointless. I really wanted a test that actually triggers a round of chunked encoding where the channel becomes non-writable at least once, but on reflection that's probably redundant. I'll remove it and rely on the existing REST tests (which will now see a chunked response) and our Netty-specific tests around flushing messages. I think that's good enough here. Sorry for the noise.

Contributor

Maybe I'm confused, but I was expecting that we'd send the response in chunks regardless of writability. I think it's worth checking that, but MAX_BYTES_PER_WRITE = 1 << 18 is just a bit big for a test.
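The suggestion above amounts to splitting one logical body into several fixed-size network writes; shrinking the cap in a test forces the multi-write path without building a huge response. A self-contained sketch (the constant's role mirrors MAX_BYTES_PER_WRITE from the discussion; the class name is made up):

```java
import java.util.ArrayList;
import java.util.List;

public class WriteSplitter {
    // Split a serialized body into writes of at most maxBytesPerWrite bytes,
    // the same way a capped network write splits one response body.
    public static List<byte[]> split(byte[] body, int maxBytesPerWrite) {
        List<byte[]> writes = new ArrayList<>();
        for (int off = 0; off < body.length; off += maxBytesPerWrite) {
            int len = Math.min(maxBytesPerWrite, body.length - off);
            byte[] chunk = new byte[len];
            System.arraycopy(body, off, chunk, 0, len);
            writes.add(chunk);
        }
        return writes;
    }
}
```

With a 10-byte body and a 4-byte cap this yields three writes of 4, 4, and 2 bytes, which is why a small cap makes multi-chunk behaviour cheap to exercise in a smoke test.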

original-brownbear (Member, Author)

Yes, the response is always chunked, but the fact that it serializes correctly is already tested elsewhere. Adding a test that verifies we actually get chunked bytes back seems quite redundant: unless someone wilfully changes the code to revert to a normal response (which I'll make very hard shortly, PR incoming :)), this doesn't guard against anything, does it?

Contributor

Ehh, maybe I'm paranoid, but I could imagine some future refactoring accidentally putting all the objects into a single chunk, quietly destroying the memory efficiency of this API.
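A guard against exactly that regression can be as small as counting the chunks a body iterator yields and failing when everything collapsed into one. This is a hypothetical sketch, not the assertion added in the PR; the class and method names are invented for illustration:

```java
import java.util.Iterator;

public class ChunkCountGuard {
    // Drain the body iterator and count how many chunks it produced.
    public static int countChunks(Iterator<?> body) {
        int n = 0;
        while (body.hasNext()) {
            body.next();
            n++;
        }
        return n;
    }

    // Fail if a supposedly chunked response collapsed into a single chunk,
    // which would silently lose the memory efficiency of chunked encoding.
    public static void assertChunked(Iterator<?> body) {
        if (countChunks(body) < 2) {
            throw new AssertionError("response arrived as a single chunk");
        }
    }
}
```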

original-brownbear (Member, Author)

That makes sense :) How about b6517df to make sure that won't happen?

DaveCTurner (Contributor) left a review

Yes I think that's good enough, especially if you have some other change in progress to make it impossible to accidentally skip chunking entirely.

original-brownbear (Member, Author)

Thanks David!
