do not reparse JSON responses in a loop #172
Conversation
7f6cf45 to 7c609ad
With very large responses, we were looping over HTTP chunks: accumulating them, trying to parse the JSON response, and going for another iteration if the data was not complete yet, so we ended up parsing the same data very frequently. This commit first accumulates the data entirely, then parses it.
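For context, a rough sketch of the pattern this commit removes. The types and names here are placeholders for illustration (a stand-in `Response` instead of the router's `graphql::Response`), not the actual router code: every new chunk triggers a fresh parse attempt over everything buffered so far.

```rust
use bytes::{Bytes, BytesMut};
use futures::{Stream, StreamExt};
use serde::Deserialize;

// Placeholder for the real `graphql::Response` type.
#[derive(Deserialize)]
struct Response {
    data: Option<serde_json::Value>,
}

async fn parse_while_streaming<S>(mut chunks: S) -> Option<Response>
where
    S: Stream<Item = Bytes> + Unpin,
{
    let mut buffer = BytesMut::new();
    while let Some(chunk) = chunks.next().await {
        buffer.extend_from_slice(&chunk);
        // Re-parses the whole buffer on every iteration: incomplete JSON
        // fails, so we loop again and deserialize the same bytes once more.
        if let Ok(response) = serde_json::from_slice::<Response>(&buffer) {
            return Some(response);
        }
    }
    None
}
```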
7c609ad to 87259f8
LGTM
lgtm!
        None
    }
},
serde_json::from_slice::<graphql::Response>(&current_payload_bytes).unwrap_or_else(
yay for unwrap or else :D
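For readers skimming the diff, here is a small, self-contained illustration of that `unwrap_or_else` fallback pattern: if `serde_json::from_slice` fails, build a fallback value instead of panicking. `Payload` and the error message are assumptions for the example, not the router's `graphql::Response` or its actual error handling.

```rust
use serde::Deserialize;

#[derive(Deserialize, Debug, Default)]
struct Payload {
    data: Option<serde_json::Value>,
    #[serde(default)]
    errors: Vec<String>,
}

// Parse the bytes, falling back to a synthetic error payload on failure
// rather than unwrapping and panicking.
fn parse_or_error(bytes: &[u8]) -> Payload {
    serde_json::from_slice::<Payload>(bytes).unwrap_or_else(|err| Payload {
        data: None,
        errors: vec![format!("could not deserialize response: {err}")],
    })
}
```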
Potential fix for #144
related: #33 #80
With very large responses, we were looping over HTTP chunks: accumulating them, trying to parse the JSON response, and going for another iteration if the data was not complete yet, so we ended up parsing the same data very frequently.
This commit first accumulates the data entirely, then parses it once, as sketched below.
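A minimal sketch of that approach, with `Response` again standing in for the router's `graphql::Response` (placeholder types, not the actual implementation): drain the whole body into one buffer, then deserialize exactly once.

```rust
use bytes::{Bytes, BytesMut};
use futures::{Stream, StreamExt};
use serde::Deserialize;

// Placeholder for the real `graphql::Response` type.
#[derive(Deserialize)]
struct Response {
    data: Option<serde_json::Value>,
}

async fn parse_after_accumulating<S>(mut chunks: S) -> serde_json::Result<Response>
where
    S: Stream<Item = Bytes> + Unpin,
{
    let mut buffer = BytesMut::new();
    // Accumulate the entire payload first.
    while let Some(chunk) = chunks.next().await {
        buffer.extend_from_slice(&chunk);
    }
    // Single parse over the complete payload, instead of one attempt per chunk.
    serde_json::from_slice::<Response>(&buffer)
}
```

On a payload split into many chunks this does one parse instead of one attempt per chunk, which is where the win on large responses comes from.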
We should investigate how this interacts with @stream.
Edit: @stream is not currently supported, so this can be merged right now; we'll revisit it once we add that support (we're keeping the Stream of responses to that end).
Performance results

main (d9b3c43): spending most of the time deserializing strings (I was testing a products subgraph where the name is 40MB long)

this PR: there's definitely an improvement on large responses. It probably won't have a big impact on small responses that can fit in one chunk.