Add idempotency #6748
Conversation
@dplewis Thanks, that worked.
This PR is ready for review. It's rather extensive, but the added docs and test cases provide a good entry point for review, I think.
LGTM! Let's see how this goes. @davimacedo You want to give it a look over?
@mrmarcsmith If you have time, can you take a look at the PR? I saw you worked on PCIdempotency.
@dplewis, that's a good point; I should add a quick comparison for future readers. Parse Server Idempotency (this PR):
@mtrezza As @dplewis pointed out, I authored the PCIdempotency module. I was only able to spend a couple of minutes looking at the code, but this looks like a really well-thought-out implementation. Here is what I really liked:
I'm planning on taking a closer look tonight, but overall, great job man! I will happily deprecate PCIdempotency in favor of this solution in the module README when this is prod-ready.
@mrmarcsmith Thanks for your work on PCIdempotency; it provided a good starting point when I researched how to design this solution.
@mrmarcsmith Do you think we should ignore the duplicate request or throw an error? Every request could throw this error in theory. @mtrezza This is my only concern now that I re-reviewed it.
In my research for a design, the consensus was that one needs to send a response. The whole network chain is waiting for an HTTP response, and not sending one may create more complexity down the line, where people need custom solutions according to their architecture. To prevent this complexity, I found two common approaches:
- respond to the duplicate request with an error
- respond to the duplicate request with a cached copy of the original response
I think even if a cached response is implemented, either approach should be optional in Parse Server, so the developer can choose what best fits their use case.
@dplewis Of those two options, throwing an error makes sense to me, especially if the first response was lost in cyberspace, which might have been the whole reason the request was sent again. @mtrezza is right, though, that the "best" way to do it is to cache and send the response again, as long as the parameters are the same (mismatched parameters would be an error). This is the way Stripe implements their idempotency, and it makes the most sense to me. @mtrezza We can brainstorm, but I can't think of many "cans of worms" that this would open if we are only using this for network idempotency, especially since the TTL will ensure the data won't be too stale. It would essentially just allow the client to say "I missed that, could you say that again?" without creating two objects, etc. As I understand the way we use requests, only one response is honored per HTTP request, so no matter which response is actually received, the app should behave as expected.
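To make the cached-response approach discussed here concrete, below is a minimal in-memory sketch of Stripe-style idempotency: replay the original response when the same key repeats with identical parameters, and treat a key reuse with different parameters as an error. The names `cache`, `hashParams`, and `handleRequest` are illustrative and not part of this PR, which returns an error instead of replaying responses.

```javascript
// Illustrative in-memory sketch of cached-response idempotency.
// Not the implementation in this PR.
const cache = new Map(); // idempotency key -> { paramsHash, response }

function hashParams(params) {
  return JSON.stringify(params); // simplistic stand-in for a real hash
}

function handleRequest(key, params, compute) {
  const entry = cache.get(key);
  if (entry) {
    if (entry.paramsHash !== hashParams(params)) {
      throw new Error('Idempotency key reused with different parameters');
    }
    return entry.response; // replay the original response
  }
  const response = compute(params);
  cache.set(key, { paramsHash: hashParams(params), response });
  return response;
}

// The first request computes a response; the duplicate replays it.
let calls = 0;
const r1 = handleRequest('key-1', { a: 1 }, () => { calls++; return { objectId: 'abc' }; });
const r2 = handleRequest('key-1', { a: 1 }, () => { calls++; return { objectId: 'xyz' }; });
console.log(calls, r1.objectId === r2.objectId); // 1 true
```

A production variant would store entries in a shared database rather than process memory, so duplicates are caught across server instances.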
@mtrezza Can you resolve the conflict? @mrmarcsmith Can you approve it? |
@mrmarcsmith The challenges with caching responses I see are:
Stripe can adapt their idempotency strategy specifically to their architecture and cost/benefit evaluation, whereas Parse Server can find itself in various scenarios where it may or may not be desirable to send cached data, given the above-mentioned implications.
Generally yes, but this is not the purpose of the TTL index. The TTL index says "how long do you want me to remember a request for deduplication". The question here is "what level of data staleness is acceptable for your use case?" I can't see introducing response caching without including a separate parameter and mechanism to manage data staleness. I have had this feature running for some weeks in a production system of Parse Server, and I observe that duplicate requests can arrive with a delay of several seconds; in a rare case it was almost 1 min. That is why I set the default TTL of the index to 5 min, but data that is 5 min old may be considered unusable. I'm all for adding a cached response enhancement (in a separate PR, once we have some data points on this experimental feature), but I think it should be optional and not replace the current approach of returning an error.
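The deduplication-window semantics described above ("how long do you want me to remember a request") can be sketched as follows. This is an illustrative in-memory model with hypothetical names; the PR itself uses a MongoDB TTL index on the `_Idempotency` class rather than an in-memory map.

```javascript
// Sketch: a deduplication window (TTL) for request IDs, independent of
// any response caching. Illustrative only, not the PR's implementation.
class DedupWindow {
  constructor(ttlMs) {
    this.ttlMs = ttlMs;
    this.seen = new Map(); // requestId -> timestamp of first arrival
  }
  // Returns true if this request ID was already seen within the TTL window.
  isDuplicate(requestId, now = Date.now()) {
    const first = this.seen.get(requestId);
    if (first !== undefined && now - first < this.ttlMs) return true;
    this.seen.set(requestId, now);
    return false;
  }
}

const dedup = new DedupWindow(5 * 60 * 1000); // 5 min, the default TTL mentioned above
console.log(dedup.isDuplicate('req-1', 0));             // false: first arrival
console.log(dedup.isDuplicate('req-1', 55 * 1000));     // true: duplicate ~1 min later
console.log(dedup.isDuplicate('req-1', 10 * 60 * 1000)); // false: outside the window
```

Note that the TTL only bounds how long duplicates are detected; it says nothing about whether a cached response would still be fresh enough to replay, which is the separate staleness concern raised here.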
LGTM, Thank you @mtrezza for your hard work on this PR!
@mrmarcsmith Thanks for the discussion, I think it would be great if we could put our heads together at some point in the future to brainstorm how to go about response caching.
@mtrezza Can you regenerate the CLI definitions to remove the conflict? I can merge it in after.
@dplewis Ready for merge; the coverage change must be related to the merged master, because this PR is fully covered.
Adds idempotency enforcement for client requests as new middleware for select routes.
Related to issue: #6744
TODO:
- add `_Idempotency` to schema for tracking client requests with unique and TTL index
- Parse Error 159: `DUPLICATE_REQUEST` (only referenced; has to be added to Parse JS SDK via add idempotency error "duplicate request" Parse-SDK-JS#1189)
- add Parse Server docs (experimental feature)

Potential Future Improvements:
- remove `_created_at`, `_updated_at` from `_Idempotency` class to optimize performance

Notes:
- enforced for `POST`, `PUT` routes, as `GET`, `DELETE` are already idempotent

Configuration example:
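The configuration example itself did not survive extraction here. As a hedged sketch of what such a configuration may look like, based on the path patterns and 5 min default TTL discussed in this thread (option and field names should be verified against the merged Parse Server docs):

```javascript
// Sketch of a Parse Server configuration enabling idempotency enforcement.
// Option names are assumptions; check the official docs for the exact API.
const { ParseServer } = require('parse-server');

const api = new ParseServer({
  appId: 'myAppId',
  masterKey: 'myMasterKey',
  databaseURI: 'mongodb://localhost:27017/parse',
  idempotencyOptions: {
    paths: ['functions/.*', 'jobs/.*'], // request paths to enforce (regex patterns)
    ttl: 300, // seconds to remember a request for deduplication (5 min default)
  },
});
```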
Other path examples:
- `functions/.*` = enforce for all functions
- `jobs/.*` = enforce for all jobs
- `classes/.*` = enforce for all classes
- `users` = enforce for user creation and update (because user has custom path)
- `installations` = enforce for installation creation and update (because installation has custom path)