WIP: Push scaling #3080
Conversation
What is missing for this to be available in master?
@adirgan probably testing and deploying to SNS/Lambdas, making sure no regressions exist, rebasing, testing with Google Cloud as well (PubSub + GCF), etc...
@flovilmart I do not think it is a problem that causes any instability; it is only a problem with the status of the push itself.
I can see that when at least one recipient is marked, the push is properly marked as sent.
@flovilmart It is not that it has no audience. The notifications are per user, and if a user does not want to receive them, they are excluded by placing the codes of the alerts they do not want to receive in a list. The server still sends the push even if the user does not want to receive it, because the platform that issues the alert is completely separate from Parse. As you can see in the image, the pushes for those with no audience stay in "sending".
@flovilmart But it also happens when you send to many users and some do not get the push for some reason, such as having uninstalled the application, etc.
@flovilmart Before updating, with the previous version of parse-server without the push scaling changes, the status always ended up as "sent" when sending a push, even for those who had no audience.
I have a pretty good idea why it may happen, I'll check for a fix.
It's not clear to me from browsing this PR, but is the applicationId sent with each push request into the queue adapter? It doesn't look like it is.
It does: https://github.com/ParsePlatform/parse-server/blob/master/src/Push/PushQueue.js#L53 Each item enqueued has:
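The snippet from that comment was not captured in this excerpt. As a purely illustrative sketch, an enqueued work item could look roughly like this; only the presence of applicationId is confirmed in the thread, the other field names and values are assumptions:

```js
// Hypothetical shape of an enqueued push work item.
// Only applicationId is confirmed in this thread; the other fields are assumptions.
const pushWorkItem = {
  body: { data: { alert: 'Hello!' } },       // the push payload
  query: { where: { deviceType: 'ios' } },   // installations targeted by this batch
  pushStatus: { objectId: 'abc123' },        // reference to the _PushStatus row to update
  applicationId: 'myAppId'                   // lets the worker resolve the right app config
};
```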
doh - yeah just found it myself. excellent, thx.
This works pretty well at not locking up the server, but there's no backpressure on push, so if the publisher is held up, or it's all running with the default setup and the sending backs up, RAM still just grows.
Not sure what you mean. Also, we can't solve all dimensioning problems. Node being single threaded, it's quite impossible to 'nice' the push sending process or reduce its priority. I'm not sure how streams would solve the issue, if they solve anything. From what I see in what I wrote, we could limit the number of 'concurrent' messages being processed at once in the PushWorker with an internal queue that would just hold the received messages, dequeuing and processing them one at a time. This way, it would be guaranteed that only one batch hits memory at any given time. What do you think?
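A minimal sketch of that idea, assuming a `processItem` callback that sends one batch and returns a promise (the names here are illustrative, not the actual parse-server API):

```js
// Hypothetical in-process FIFO that holds received work items and
// processes them one at a time, so only one batch is in memory at once.
class SerialWorkQueue {
  constructor(processItem) {
    this.processItem = processItem; // e.g. (workItem) => pushWorker.run(workItem)
    this.pending = [];
    this.running = false;
  }

  enqueue(workItem) {
    this.pending.push(workItem);
    if (!this.running) this._drain();
  }

  async _drain() {
    this.running = true;
    while (this.pending.length > 0) {
      const item = this.pending.shift();
      await this.processItem(item); // finish this batch before touching the next
    }
    this.running = false;
  }
}
```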
I think reactive streams could solve the issue because they support back pressure, even the JS implementation, and almost every other language also has an implementation. Just holding the received messages isn't enough; the rate at which the database is read (well, the rate at which the messages are generated) needs to be throttled. For even modest installation counts the input side can overwhelm RAM, as it builds up in the queue faster than it can be processed. Throwing it all directly to SQS via the adapter is likely to solve the issue, but that doesn't help the many folks who don't run in a cloud and don't want to run their own queue.
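To illustrate the read-side throttling being discussed, here is a sketch assuming generic `countInstallations` and `sendBatch` helpers (hypothetical names, not parse-server APIs): the next slice is only produced once the previous one has been handled, so the queue never grows faster than pushes can be delivered.

```js
// Hypothetical back-pressured producer: generate one batch worth of work,
// wait for it to be sent, then generate the next.
async function pushWithBackPressure(countInstallations, sendBatch, where, batchSize) {
  const total = await countInstallations(where);
  for (let skip = 0; skip < total; skip += batchSize) {
    // No read-ahead: the next slice is only produced once this one is done.
    await sendBatch({ where, skip, limit: batchSize });
  }
}
```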
@flovilmart Any documentation on this?
Not at the moment (unfortunately); this is just an internal re-implementation. We'll follow up with examples of queues and consumers at a later time.
@flovilmart Can I assume that this should work for nearly 8 million installations? I just updated to version 2.3.2 and I'm still unable to send pushes to any large number of installations.
@dvanwinkle are you on AWS? If so, you can give this a try; I've tested it, and if you want to try it, I can document it and answer any questions / fix any issues for you.
@acinader Unfortunately, we're on Azure. While it isn't a bad service, most of the people on it aren't doing OSS. I may take your linked PR and see if I can repurpose it for one of Azure's queue services.
That would be awesome! I'm planning to implement it for Google Cloud Pub/Sub, which would give the holy trinity of cloud services. The next 'big' step would be the lightweight sender (that can report the status), which would not require parse-server but just a client.
This PR broke https://www.npmjs.com/package/parse-server-onesignal-push-adapter. I'm getting a new unsubscribed user which has no device on the OneSignal dashboard, so it seems the deviceToken is not getting sent somehow.
This PR broke my push adapters; I cannot update parse-server with a custom push adapter. I will investigate all the changes when I have some time. Minor changes should not break existing code.
@mihai-iorga you're 100% right, the changes should be backwards compatible, and they are designed to be backwards compatible. Can you open a proper issue, filling out the issue template, so we can have a look?
@flovilmart yes, as soon as I have some spare time to investigate, I will do that. Thanks.
Do you have, or plan to have, an adapter implementation for use with AWS SQS?
@ortimanu This works well for me... #3080 (comment) Edit: I should probably note, it's also on NPM.
@dvanwinkle awesome! Thanks!
Sounds like a big improvement we've been waiting on for a long time... Is there an estimate of when we will see this included in a release version?
@kontextbewusst This is in the current version. You just have to also install the plugin that's relevant to you.
@dvanwinkle thank you. Is there any kind of documentation for the right config? Also, which plugin are you referring to?
This PR attempts to solve the scaling issues related to sending large amounts of pushes.
What's in there?
The responsibility of the PushQueue is to split the work (configurable by batchSize) and publish it so it's available to the PushWorker.
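As a rough sketch of that splitting idea (hypothetical helper names, not the actual PushQueue code): count the targeted installations, slice the query into pages of batchSize, and publish one work item per slice for a PushWorker to consume.

```js
// Rough sketch of the batch-splitting idea, with hypothetical helpers.
async function enqueue(where, batchSize, countInstallations, publish, applicationId) {
  const total = await countInstallations(where);
  const batches = Math.ceil(total / batchSize);
  for (let i = 0; i < batches; i++) {
    // Each work item carries a bounded slice of the installation query.
    publish('pushWorkItem', JSON.stringify({
      applicationId,
      query: { where, limit: batchSize, skip: i * batchSize },
    }));
  }
}
```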
The responsibility of the PushWorker is to consume the messages published by the PushQueue. All the work is represented by a PushWorkItem.
A new adapter is introduced, ParseMessageQueue. It shares the same API as the ParsePubSub, but is a 1..1 instead of a 1..* (fan-out). The messages in the ParseMessageQueue should be consumed by the worker only once (otherwise the push would be sent twice).
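A sketch of what a custom ParseMessageQueue adapter could look like, assuming it mirrors the publisher/subscriber shape implied above; the createPublisher/createSubscriber names and the EventEmitter-style subscriber are assumptions for illustration, not a documented contract.

```js
const { EventEmitter } = require('events');

// Hypothetical SQS-flavored message queue adapter sketch.
class ExampleMessageQueueAdapter {
  static createPublisher(config) {
    return {
      publish(channel, messageStr) {
        // e.g. send messageStr to a queue named after `channel`
      },
    };
  }

  static createSubscriber(config) {
    const subscriber = new EventEmitter();
    subscriber.subscribe = function(channel) {
      // e.g. start polling the queue for `channel` and, for each message,
      // emit it exactly once: subscriber.emit('message', channel, messageStr)
    };
    subscriber.unsubscribe = function(channel) {
      // stop polling
    };
    return subscriber;
  }
}

module.exports = ExampleMessageQueueAdapter;
```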
It is also possible to disable the push worker on parse-server and defer the execution to the message queue. What we have in mind is integration with SQS/Lambda or Google Cloud Pub/Sub / Cloud Functions.
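The wiring might look something like the following; the option names below are guesses for illustration only, since this PR does not document the final configuration keys.

```js
// Hypothetical configuration sketch: queueOptions, messageQueueAdapter and
// disablePushWorker are illustrative guesses, not confirmed parse-server options.
const { ParseServer } = require('parse-server');
const ExampleMessageQueueAdapter = require('./ExampleMessageQueueAdapter'); // from the sketch above

const api = new ParseServer({
  appId: 'myAppId',
  masterKey: 'myMasterKey',
  push: { /* usual push adapter / credentials config */ },
  // Run parse-server as a pure publisher: enqueue batches to an external
  // queue (e.g. SQS) and let a Lambda / Cloud Function consume and send them.
  queueOptions: {
    messageQueueAdapter: ExampleMessageQueueAdapter,
    disablePushWorker: true,
  },
});
```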
The default ParseMessageQueue implementation is based on an EventEmitter, where only the last subscriber is active.
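A toy model of that "only the last subscriber is active" behavior (a simplified illustration, not the actual implementation):

```js
const { EventEmitter } = require('events');

// Simplified model of an EventEmitter-backed 1..1 queue: subscribing to a
// channel replaces any previous handler, so each message is handled only once.
const bus = new EventEmitter();

function subscribe(channel, handler) {
  bus.removeAllListeners(channel); // only the last subscriber stays active
  bus.on(channel, handler);
}

function publish(channel, message) {
  bus.emit(channel, message);
}

subscribe('push', msg => console.log('worker A', msg));
subscribe('push', msg => console.log('worker B', msg)); // worker A is replaced
publish('push', '{"applicationId":"myAppId"}');          // only worker B logs
```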
Fixes #2977