The client sending too much data is now reported as a ProtocolError::limits_exceeded #5956
Conversation
namespace future_details {
template <typename T>
class Promise;

template <typename T>
class CopyablePromiseHolder;
The changes in this file are just to make some of our testing a little cleaner.
};

template <typename T>
util::Future<T> wait_for_future(util::Future<T>&& input, std::chrono::milliseconds max_ms)
This works like the timed_sleep_waiting_for helper we use in a bunch of places, except that it synchronously waits on a Future. If we like the way this works, I'll go through and clean up other places where we futz with std::atomics and the event loop to busy-wait for a condition to become true.
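For readers outside the codebase, here is a rough standalone analogue of the helper's intent written against std::future rather than realm's util::Future (whose API differs), so the timeout behaviour and the exception used here are illustrative assumptions only: block until the future is ready or the deadline passes, then hand the future back so the caller can call .get() on it.

#include <chrono>
#include <future>
#include <stdexcept>
#include <utility>

// Sketch only: wait up to max_ms for the result to become available,
// then return the future so the caller can retrieve the value.
template <typename T>
std::future<T> wait_for_future(std::future<T>&& input, std::chrono::milliseconds max_ms)
{
    if (input.wait_for(max_ms) != std::future_status::ready)
        throw std::runtime_error("timed out waiting for future to become ready");
    return std::move(input);
}

int main()
{
    std::promise<int> p;
    auto fut = p.get_future();
    p.set_value(7); // in the real tests, the sync error handler fulfills the promise
    return wait_for_future(std::move(fut), std::chrono::milliseconds(100)).get() == 7 ? 0 : 1;
}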
 * Do not use this type to try to fill a Promise from multiple places or threads.
 */
template <typename T>
class future_details::CopyablePromiseHolder {
We have a bunch of places in the codebase (mainly our tests) where we make a std::shared_ptr of a Promise so it can be captured in a std::function or other copyable type. This just lets us avoid that overkill by re-using the fact that the internal shared state of a Promise/Future is a ref-counted type.
The changes look good - I have one usage question, marked inline.
};
auto pf = util::make_promise_future<SyncError>();
config.sync_config->error_handler =
    [sp = util::CopyablePromiseHolder(std::move(pf.promise))](auto, SyncError error) mutable {
Should we always use this when passing a promise into the lambda captures? Does sp = std::move(pf.promise) not work as expected?
It depends on what the lambda is going to be used for. If you need to store the lambda in a std::function, where all captures are required to be copyable, then you need to wrap a Promise (which is not copyable) in something that is. If you're storing the lambda in a util::UniqueFunction (which only requires its captures to be moveable, not copyable), then you don't need this. Likewise, if you aren't storing your lambda in any type-erasing wrapper at all, you don't need this.
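To make the copyability requirement concrete, here is a self-contained sketch that uses std::promise in place of realm's util::Promise (both are move-only); the realm types are not used here, so treat this purely as an illustration rather than project code.

#include <functional>
#include <future>
#include <memory>

int main()
{
    std::promise<int> p; // move-only, like util::Promise

    // Capturing by move yields a move-only closure...
    auto lambda = [p = std::move(p)]() mutable { p.set_value(1); };
    lambda(); // calling it directly is fine; the problem is only type erasure

    // ...which cannot be stored in a std::function, because std::function
    // requires the wrapped callable (and hence every capture) to be copyable:
    // std::function<void()> bad = std::move(lambda); // would not compile

    // The shared_ptr workaround described earlier in the thread: wrap the
    // promise in a copyable handle so the closure becomes copyable again.
    auto sp = std::make_shared<std::promise<int>>();
    auto fut = sp->get_future();
    std::function<void()> ok = [sp] { sp->set_value(1); };
    ok();
    return fut.get() == 1 ? 0 : 1;
}

As noted above, CopyablePromiseHolder provides the same copyability without the extra shared_ptr allocation by reusing the fact that the Promise/Future shared state is already ref-counted.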
Thank you for the explanation!
LGTM
auto error = wait_for_future(std::move(pf.future), std::chrono::minutes(5)).get();
REQUIRE(error.error_code == make_error_code(sync::ProtocolError::limits_exceeded));
REQUIRE(error.message == "Sync websocket closed because the server received a message that was too large: "
                         "read limited at 16777217 bytes");
Can you add a check that the server action is indeed a ClientReset?
Sure!
…nification

* origin/master: (22 commits)
  Fix a race condition in error reporting for async open (#5968)
  Updated release notes
  version bump to 12.11.0
  Fix a deadlock with simultaneous sync and nonsync commit notifications while tearing down RealmCoordinator (#5948)
  Track the last line seen and report it when there's an uncaught exception
  Verify that the same key is used when opening multiple DBs for a file
  Add stricter checks for valid reads on encrypted files
  Run core tests with encryption enabled on windows (#5967)
  Fix assertion failures after calling SectionedResults::reset_section_callback() (#5965)
  Add sync integration test for in-memory mode (#5955)
  Added realm core version to app login request (#5961)
  The client sending too much data is now reported as a ProtocolError::limits_exceeded (#5956)
  Updated release notes
  release core v12.10.0
  Fix C-API for realm_object_get_parent (#5960)
  fixed a bug in recovery when copying embedded lists with a single link property prefix in the path (#5957)
  Js/sectioned notification (#5926)
  Use named apikey callbacks
  Fix IN query with TypedLink as left operator
  Fix a use-after-free in client reset when the original RealmCoordinator is gone (#5949)
  ...
What, How & Why?
The server sends close code 1009 when it receives a message larger than it can process. This happens inside the websocket library, so it is impractical for it to send a true "in-band" error message, which is how we normally signal client reset cases. Given that an oversized message (typically a giant single transaction) is a "sticky" condition that will keep happening whenever we reconnect, we should treat it as a client reset condition.
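As a rough sketch of the mapping described here (this is not realm-core's actual implementation; the type and function names below are placeholders), the idea is that the out-of-band RFC 6455 close code 1009 ("message too big") is translated into the same limits_exceeded protocol error an in-band message would produce, and flagged as requiring a client reset because retrying would just hit the limit again.

#include <cstdint>

// Placeholder stand-ins for sync::ProtocolError and the client-reset decision.
enum class ProtocolError { other_error, limits_exceeded };

struct MappedError {
    ProtocolError error;
    bool requires_client_reset;
};

MappedError map_websocket_close_code(std::uint16_t close_code)
{
    // 1009 is "sticky": the oversized changeset would be re-sent and rejected
    // again on every reconnect, so the way forward is a client reset, not a retry.
    if (close_code == 1009)
        return {ProtocolError::limits_exceeded, true};
    return {ProtocolError::other_error, false};
}

int main()
{
    auto mapped = map_websocket_close_code(1009);
    return (mapped.error == ProtocolError::limits_exceeded && mapped.requires_client_reset) ? 0 : 1;
}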
☑️ ToDos