One common pain point in deploying Bolt apps to serverless (FaaS) platforms such as AWS Lambda is that the process terminates as soon as the HTTP response is complete. In Bolt terms, sending the HTTP response is called `ack()`, at least when using the default `ExpressReceiver`. There are two ways this problem manifests in Bolt apps (both sketched below):
1. For Events API requests, the Bolt receiver will `ack()` the incoming request before dispatching the payload to the middleware/listeners. That means the process can terminate before any of the user's listeners have run, resulting in an unreliable app.
2. For all other requests, the best practice is for the listener to `ack()` the request as soon as possible, and perform I/O and potentially long-running work after that. If the `ack()` takes longer than 3 seconds, the Slack user will see an error.
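For illustration, here's a minimal sketch of where both cases bite on Lambda. The event name, action ID, and `doSlowWork` helper are made up for the example:

```typescript
import { App } from '@slack/bolt';

const app = new App({
  token: process.env.SLACK_BOT_TOKEN,
  signingSecret: process.env.SLACK_SIGNING_SECRET,
});

// Hypothetical stand-in for I/O that could outlive the remaining execution window.
async function doSlowWork(): Promise<void> {
  // e.g. write to a database or call another API
}

// Case 1: Events API. The receiver has already ack()ed before this listener
// runs, so on a FaaS platform the process can be frozen or terminated before
// the work below completes.
app.event('app_mention', async ({ event, say }) => {
  await doSlowWork();
  await say(`Hi <@${event.user}>`);
});

// Case 2: everything else. Best practice is to ack() right away, but once the
// HTTP response has gone out, the platform may stop the process before the
// remaining work finishes.
app.action('approve_button', async ({ ack, respond }) => {
  await ack();
  await doSlowWork();          // may be cut short
  await respond('Approved!');
});
```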
After several feedback conversations, a specific remedy was proposed: if we could configure the Bolt app to always expect the user to manually `ack()` (effectively removing #1 above), and the user only did so after they knew all work was complete, then we could be confident that the serverless platform wouldn't terminate the process prematurely.
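To make the proposal concrete, here is roughly what that configuration and a listener could look like. The option name `manualAcknowledge` and the `ack` argument on an Events API listener are inventions for this sketch; they don't exist in Bolt today, and (as constraint 3 below notes) the current TypeScript types don't model this:

```typescript
import { App } from '@slack/bolt';

const app = new App({
  token: process.env.SLACK_BOT_TOKEN,
  signingSecret: process.env.SLACK_SIGNING_SECRET,
  manualAcknowledge: true, // hypothetical flag: never ack() automatically
});

// Hypothetical shape: Events API listeners would receive ack() too.
app.event('app_mention', async ({ event, say, ack }) => {
  await say(`Hi <@${event.user}>`); // finish all the work first…
  await ack();                      // …then send the HTTP response, so the
                                    // platform has no reason to stop the process early
});
```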
It seems to me that this would work, as long as the user was willing to accept these three constraints:
1. Never take more than 3 seconds to completely handle an event.
2. Forfeit the ability to reuse middleware shared by the community (because that middleware will likely only handle the default configuration and depend on knowing that Events API events have already been `ack()`ed).
3. Don't rely on the TypeScript types for `.event()` listener/middleware arguments to be correct, since they will continue to reflect that `ack` isn't supplied. Marking that argument as optional would impose a cost on users of the default config, and getting a different set of types to conditionally flow into those listeners when that configuration is set is complex and maybe not possible.
I'd like to know how folks feel about these constraints and whether the outcome would be worth it to them.
While thinking about this, please keep in mind that this isn't the only approach Bolt could pursue to support serverless platforms, but it does seem like the one that's quickest to ship. There's at least one more idea being explored: using SQS to decouple the `ack()` from the lifetime of the listeners. It's yet to be determined how much complexity that solution would involve for users, but it's likely not too high. It's also yet to be determined how much later something like this would ship if we went in that direction, but I'd estimate it's on the order of a month.
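For anyone unfamiliar with that idea, one way such a decoupling is often sketched (purely illustrative, not a design Bolt has committed to; function names, the queue-URL environment variable, and the omission of Slack signature verification are all assumptions) is a pair of Lambdas:

```typescript
import { SQSClient, SendMessageCommand } from '@aws-sdk/client-sqs';
import type { APIGatewayProxyEvent, APIGatewayProxyResult, SQSEvent } from 'aws-lambda';

const sqs = new SQSClient({});

// Receiving Lambda: enqueue the raw payload, then return 200 — that response
// is the ack(), and it goes out well within Slack's 3-second window.
export const receiver = async (event: APIGatewayProxyEvent): Promise<APIGatewayProxyResult> => {
  await sqs.send(new SendMessageCommand({
    QueueUrl: process.env.SLACK_EVENTS_QUEUE_URL!, // assumed environment variable
    MessageBody: event.body ?? '',
  }));
  return { statusCode: 200, body: '' };
};

// Worker Lambda: triggered by SQS, so its lifetime is no longer tied to the
// HTTP response and the listeners can take as long as they need.
export const worker = async (event: SQSEvent): Promise<void> => {
  for (const record of event.Records) {
    const payload = JSON.parse(record.body);
    // ...dispatch `payload` to listener logic / do the long-running work here...
  }
};
```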
Requirements (place an `x` in each of the `[ ]`)

- [x] I've read and understood the Contributing guidelines and have done my best effort to follow them.
As no one has mentioned needing the manual acknowledgment feature in addition to `processBeforeResponse`, we don't have any further actions to take on this issue.
Let me know if you have a different view. Having waited a few weeks for responses, we can safely close this issue now.
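For reference, `processBeforeResponse` is the existing option that addresses case #1 above by holding the HTTP response until all listeners have completed (so the 3-second limit then applies to the listener as a whole):

```typescript
import { App } from '@slack/bolt';

const app = new App({
  token: process.env.SLACK_BOT_TOKEN,
  signingSecret: process.env.SLACK_SIGNING_SECRET,
  // Delay sending the HTTP response (the ack for Events API requests)
  // until all matching listeners have finished running.
  processBeforeResponse: true,
});
```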