Fix #168 #354 Add `logger`, `client` to the list of args sent through to listeners #359
Conversation
Codecov Report
@@            Coverage Diff             @@
##           master     #359      +/-   ##
==========================================
+ Coverage   83.13%   83.68%   +0.54%
==========================================
  Files           7        7
  Lines         510      527      +17
  Branches      148      151       +3
==========================================
+ Hits          424      441      +17
  Misses         57       57
  Partials       29       29
Continue to review full report at Codecov.
src/types/actions/index.ts
Outdated
@@ -34,6 +36,8 @@ export type SlackAction = BlockAction | InteractiveMessage | DialogSubmitAction
 * this case `ElementAction` must extend `BasicElementAction`.
 */
export interface SlackActionMiddlewareArgs<Action extends SlackAction = SlackAction> {
  logger: Logger;
instead of adding `logger` and `client` properties to this and all the other `AnyMiddlewareArgs` union members, i suggest that we add them to this line: https://github.com/slackapi/bolt/blob/3d59cf78999934dd81d8d18f4e514e24d001279c/src/types/middleware.ts#L24
this is where `context` and `next` are added to arguments, and it allows us not to repeat these properties, which have no dependence on the type of event that's being triggered.
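For illustration, here is a minimal sketch of that approach, assuming type names that roughly match src/types/middleware.ts at that commit (the `Context` and `NextMiddleware` shapes below are simplified assumptions, not the exact definitions):

```typescript
import { Logger } from '@slack/logger';
import { WebClient } from '@slack/web-api';

// Simplified stand-ins for the existing middleware types (assumptions).
interface Context {
  [key: string]: any;
}
type NextMiddleware = (error?: Error) => Promise<void> | void;

// Adding `logger` and `client` at the single point where `context` and `next`
// are already merged in means every listener/middleware receives them without
// repeating the properties on each member of the AnyMiddlewareArgs union.
export type Middleware<Args> = (
  args: Args & {
    context: Context;
    next: NextMiddleware;
    logger: Logger;
    client: WebClient;
  },
) => void | Promise<void>;
```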
Thank you very much 🙇 I'll update the changes with your suggestion!
Ah I should have done this. Will do today.
Done!
src/App.ts
Outdated
/** Say function might be set below */
say?: SayFn,
/** Respond function might be set below */
respond?: RespondFn,
/** Ack function might be set below */
ack?: AckFn<any>,
} = {
  logger: this.logger,
  client: this.client,
i agree with this change, but i had a thought that i think is worth sharing.

Should we supply listeners with a `WebClient` instance that is initialized with a token, similar to how `say` is bound to a token and a channel?

In favor:

- reduces the amount of boilerplate needed to call Web API methods in the most common cases - you won't have to pull in `context` and pick out the `userToken` or `botToken` yourself as often.
- aligns well with listener arguments being "smart" while global variables stay generic.

Against:

- hides the important choice that app developers need to make about which token to use when calling a method. the most common case is the `botToken`, but the opportunity to change that choice to `userToken` becomes very hidden, to the point where that functionality is unlikely to be discovered.
- managing instances of `WebClient` becomes a hard problem. if we create a new instance for each incoming event, then any benefit from instances being aware of rate limit handling is negated, since each instance will have an empty queue, will eagerly attempt each request at least once, and multiple instances can deplete the rate limit in a very suboptimal way. if we keep a pool of instances around (one instance per token), this could cause memory bloat because there's no deterministic way to free our references. if i had to pick, i would pick the latter, and then later, if users complained about memory bloat, find a solution to that problem (maybe an additional API to clean out client instances or a configurable TTL for instances). (a rough sketch of such a per-token pool follows this comment.)

we should care about this choice now. going back on this is not going to be easy, even after a breaking API change, because the usage patterns in example code will look the same but produce unexpected results if we change the behavior later.
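For illustration only, here is a rough sketch of the "one instance per token" pooling idea mentioned above (`clientPool` and `getClientForToken` are hypothetical names, not part of Bolt; `WebClient` and `WebClientOptions` come from `@slack/web-api`):

```typescript
import { WebClient, WebClientOptions } from '@slack/web-api';

// One WebClient per token, so each client's internal rate-limit queue can
// actually coordinate requests instead of starting empty on every event.
const clientPool: Map<string, WebClient> = new Map();

function getClientForToken(token: string, options: WebClientOptions = {}): WebClient {
  const existing = clientPool.get(token);
  if (existing !== undefined) {
    return existing;
  }
  const created = new WebClient(token, options);
  clientPool.set(token, created);
  return created;
}

// The memory-bloat concern above could later be addressed by an eviction API
// (e.g. clientPool.delete(token)) or a configurable TTL per entry.
```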
In my understanding, there are six patterns we can consider, as listed below.

- Option A) current implementation
  - A-1) current implementation in this PR
- Option B) Support only bot token clients
  - B-1) Have only `client`, which is not the same as `app.client`
  - B-2) Have `client` and `botTokenClient`
- Option C) Support both bot and user token clients
  - C-1) Have three types of clients
  - C-2) Have `client` as an alias of `botTokenClient`
  - C-3) Have only `botTokenClient` and `userTokenClient`
Option A) current implementation
When a Bolt app needs to use user tokens with `WebClient`, it has to manually create a `WebClient` as below. In that case, we would probably need to add `clientSettings` (equivalent to this) to `Context`. Without that object, it would be bothersome to reuse the settings for `WebClient` initialization.
app.command('/hi', async ({ ack, client, context }) => {
  // A-1 just an alias of `app.client`
  await client.chat.postMessage({ token: context.botToken, ... });
  // user token client
  if (context.userToken) {
    const userTokenClient = new WebClient(
      context.userToken,
      context.clientSettings // new one that is equivalent to https://github.com/slackapi/bolt/blob/%40slack/bolt%401.5.0/src/App.ts#L151-L157
    );
    await userTokenClient.chat.postMessage({...});
  } else {
    // handle the pattern
  }
});
Pros:
- Can keep it simple
- No difference between single team auth and multi-team auth
- No need to implement an instance pool
Cons:
- Rate limit handling by `WebClient` instances is negated
- Instantiating a `WebClient` can be bothersome
A-1) current implementation in this PR
| name | type | bound token | memo |
|---|---|---|---|
| `client` | `WebClient` | undefined | the same as `app.client` |
Option B) Support only bot token clients
Regarding user token clients, the way to initialize them is the same as in Option A).
app.command('/hi', async ({ ack, client, botTokenClient, context }) => {
  // B-1 bot token client
  await client.chat.postMessage({...}); // with underlying bot token
  // B-2 bot token client
  await client.chat.postMessage({ token: context.botToken, ... });
  await botTokenClient.chat.postMessage({...}); // with underlying bot token
  // user token client - the same as in Option A)
});
B-1) Have only `client`, which is not the same as `app.client`

| name | type | bound token | memo |
|---|---|---|---|
| `client` | `WebClient` | `context.botToken` | |
B-2) Have `client` and `botTokenClient`

| name | type | bound token | memo |
|---|---|---|---|
| `client` | `WebClient` | undefined | the same as `app.client` |
| `botTokenClient` | `WebClient` | `context.botToken` | |
Option C) Support both bot and user token clients
app.command('/hi', async ({ ack, client, botTokenClient, userTokenClient, context }) => {
  await client.chat.postMessage({ token: context.botToken, ... });
  await botTokenClient.chat.postMessage({...}); // with underlying bot token
  if (typeof userTokenClient !== 'undefined') {
    await userTokenClient.chat.postMessage({...}); // with underlying user token
  } else {
    // handle the pattern
  }
});
Pros:
- Would be handy for developers
- Can take advantage of rate limit handling
Cons:
- Need to implement an internal instance pool and manage its memory consumption
- Documentation/education cost: `userTokenClient` is always missing for single team auth
C-1) Have three types of clients

| name | type | bound token | memo |
|---|---|---|---|
| `client` | `WebClient` | undefined | the same as `app.client` |
| `botTokenClient` | `WebClient` | `context.botToken` | |
| `userTokenClient` | `WebClient?` (optional) | `context.userToken` | |
C-2) Have `client` as an alias of `botTokenClient`

| name | type | bound token | memo |
|---|---|---|---|
| `client` | `WebClient` | `context.botToken` | an alias of `botTokenClient` |
| `botTokenClient` | `WebClient` | `context.botToken` | |
| `userTokenClient` | `WebClient?` (optional) | `context.userToken` | |
C-3) Have only `botTokenClient` and `userTokenClient`

| name | type | bound token | memo |
|---|---|---|---|
| `botTokenClient` | `WebClient` | `context.botToken` | |
| `userTokenClient` | `WebClient?` (optional) | `context.userToken` | |
My proposal - C-1 without pooling in the beginning
I'm not sure if there is a huge demand for `userTokenClient` pooling in particular yet. Also, considering that the current implementation of `WebClient` does not take rate-limiting tiers into account (it has a single queue that works globally, regardless of tiers), just having a single `WebClient` instance doesn't fit all possible cases. If a developer really needs to keep the number of requests under control, having dedicated `WebClient` instances for critical endpoints (correct me if my understanding of this is wrong) would be much safer.
If we can come up with a public interface design that allows us to change the internals later, we can hold off on implementing instance pooling until we really get many requests for a pooling feature.
For these reasons, I prefer C-1) without instance pooling this time. But I'm open to other options.
What do you think?
This is a great analysis @seratch! I have a few thoughts:

- Client settings: I think we should also consider using the same settings for all instances (even for those with bound tokens). My reasoning: considering the settings we accept [`agent` (for proxying), `slackApiUrl`, `logger`, `logLevel`, `tls`], I can't see any reason one might want to use them for specific instances and not across all instances. For example, if my app needs to use a proxy to reach the public internet, all instances of `WebClient` must be aware of that proxy. Can we all agree on this?

- Instance pooling and rate limits: Thank you for pointing out a misunderstanding I had about how rate limits are evaluated. Specifically, I was under the impression that rate limits were enforced per token. In actuality, they are evaluated per workspace, per method. Here's a quote from the changelog that makes this very clear:

  > If your app has 10 user tokens and 1 bot user token belonging to a single workspace, all 11 tokens draw from the rate limit pool assigned by the method's associated tier.

  This makes it apparent that `WebClient`'s queue pause/resume logic isn't quite optimal, because it doesn't maintain separate queues per method. Maybe this is something we can further optimize in `@slack/web-api` later.

  If we choose not to do any instance pooling, and instead initialize new `client`s for each incoming request, I believe the result is even less optimal. Many instances (possibly using different tokens) communicating with the same workspace for the same method can consume from the same rate limit pool. This will result in more failing requests being sent than necessary.

  In Bolt, we have enough information (`source.teamId`) to do instance pooling by workspace. However, instances of `WebClient` are bound to a single token, as opposed to all the tokens within a workspace. There's no single "right" `token` to initialize this shared instance with. Therefore the closest we can get to optimal (the fewest number of instances we need to create) is to pool based on the token.

  Is this optimization worth the added complexity of implementation? At this point, I think no. If we find that this becomes an issue in the future, we can implement changes in `WebClient`, pool by token and/or by team, or brainstorm other ideas. Thanks for making a convincing case @seratch!

- Token binding and argument names: There are a few main concerns that I think we are trying to balance.

  - Listeners (and middleware) should be able to depend on the client instance(s) always having a token, or always not having a token, but not sometimes having a token.
  - Listener argument names should feel light and simple. Don't make the user think if Bolt can provide an opinion that most users will agree with.
  - Allow users to explicitly control things when they care about the specifics. Don't inhibit users from disagreeing with a Bolt-provided opinion.

  I think a good balance can be achieved in the following solution (a minimal sketch of this token selection follows after this comment):

  | name | type | bound token | memo |
  |---|---|---|---|
  | `client` | `WebClient` | `botToken ?? userToken` | same token selection as `say()` |

  While `client` isn't bound to a known-in-advance token, the `authorize` function is the only place you should care. We've already made it clear in Bolt that bot tokens should be used over user tokens (that's how we create `say()`). That will suit most people, but there should always be a way to make an explicit choice. Users can use the `app.client` instance when they want an unbound client and are willing to make that explicit choice. This has the nice property that it preserves the validity of any previous example code - that example code was just making an explicit choice - it's not outdated. New sample code using the `client` argument is taking advantage of the convenience of a Bolt-provided opinion.

  I think creating both `botTokenClient` and `userTokenClient` sacrifices too much of the lightness and simplicity of the API. That explicit choice for everything feels overbearing, especially as the number of use cases for user tokens is going to shrink over time.

  The biggest problem I see is: what happens when `authorize` returns neither a user token nor a bot token? Currently, `say()` will use an `undefined` token, which means that unless the user explicitly passes one at the call site, all calls will fail. The easiest answer would be to just adopt `say()`'s behavior. Another option is to emit an error to the global error handler before the event is processed. [If we choose to do the latter, we should only do it in the next `semver:major` update, as this might break existing code. The intention of our current interface is that one of these tokens is required, but it's never been enforced.] I think this problem still exists if we go with any of the previous options @seratch listed, since they all seem to assume `authorize` will return a bot token. I don't have a strong opinion for or against these two options. I'm also open to other options to solve this problem.
@seratch thoughts?
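For illustration, here is a minimal sketch of the proposal above, assuming an `authorize` result with optional `botToken`/`userToken` on the context (the `buildClientArg` helper and the `AuthorizeTokens` interface are hypothetical names, not Bolt APIs):

```typescript
import { WebClient, WebClientOptions } from '@slack/web-api';

// Assumed shape of the relevant part of the authorize result / context.
interface AuthorizeTokens {
  botToken?: string;
  userToken?: string;
}

// Pick the token the same way say() does: prefer the bot token, fall back to
// the user token. With TypeScript 3.7+ this is the nullish coalescing operator.
function buildClientArg(tokens: AuthorizeTokens, options: WebClientOptions = {}): WebClient {
  const token = tokens.botToken ?? tokens.userToken;
  // If authorize returned neither token, this mirrors say()'s current behavior:
  // the client has no bound token, so calls fail unless a token is passed
  // explicitly at the call site.
  return new WebClient(token, options);
}
```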
PS. i feel cool for being able to use the brand new nullish coalescing operator in the real world 😎.
> Client settings: I think we should also consider using the same settings for all instances (even for those with bound tokens). Can we all agree on this?
Yes, I agree with this.
> Instance pooling and rate limits
I think the plan described here is feasible enough. I believe we can go with the plan to have an instance pool by tokens.
Besides, I'd like to mention a possible implementation idea for a proper way to handle rate limiting in `@slack/web-api`. Allowing developers to have a shared queue tied to `team_id` among multiple `WebClient` instances (probably by passing the shared queue to the `WebClient` constructor; it might reject if the token is not for the given queue's team, verified via `auth.test`) looks fine to me. With this approach, we can handle tiers per app, per workspace, most properly. I'll create another issue in the node-slack-sdk repo.
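To make the shared-queue idea more concrete, here is a purely hypothetical sketch (this is not an existing `@slack/web-api` option; `sharedQueues`, `queueForTeam`, and `callWithTeamQueue` are made-up names, and `p-queue` merely stands in for whatever queue implementation would be shared):

```typescript
import PQueue from 'p-queue';

// Hypothetical: one queue per team_id, shared by every client used for that
// workspace, so scheduling (and eventually rate-limit pauses) is coordinated
// per workspace rather than per token.
const sharedQueues: Map<string, PQueue> = new Map();

function queueForTeam(teamId: string): PQueue {
  let queue = sharedQueues.get(teamId);
  if (queue === undefined) {
    queue = new PQueue({ concurrency: 3 });
    sharedQueues.set(teamId, queue);
  }
  return queue;
}

// Schedule any Web API call on the team's shared queue.
async function callWithTeamQueue<T>(teamId: string, task: () => Promise<T>): Promise<T> {
  return (await queueForTeam(teamId).add(task)) as T;
}

// Usage (placeholders): a WebClient from @slack/web-api bound to any token for
// team T0123456 would route its calls through the same queue:
//   await callWithTeamQueue('T0123456', () => client.chat.postMessage({ channel, text }));
```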
> Token binding and argument names

Thanks for the clear advice here. I totally agree that we should keep the Bolt API simple. From that perspective, expecting the same behavior as the `say()` function makes a lot of sense.
> The easiest answer would be to just adopt `say()`'s behavior. Another option is to emit an error to the global error handler before the event is processed. [If we choose to do the latter, we should only do it in the next `semver:major` update, as this might break existing code.]
As Bolt is opinionated to always use bot tokens, I'm sure it's fine to raise a global error when the `authorize` function doesn't return a `botToken`. But I also don't have a strong opinion like "we should start doing this validation in the next major version". So, I'm comfortable going with the first one (adopting `say()`'s behavior for `client`).
If everything is clear to others as well and there are no other things to discuss here, I can work on the implementation now. I'll come up with further changes to make this PR ready to merge soon. 💪
Okay great!
Just to be clear, I am okay with moving forward without instance pooling at all or with instance pooling by token. If we think the pooling implementation will take more time to implement and test, then I’m okay with that coming later in another PR too. Totally up to you.
> Besides, I'd like to mention a possible implementation idea for a proper way to handle rate limiting in @slack/web-api.
This sounds really exciting! Looking forward to seeing how this might work. I thought about exposing a configuration option for a queue, which would be any object that implements a specific interface with the queue operations. I never thought about the effect of sharing this queue across instances to achieve a per-workspace, per-method rate limit strategy, but that is smart!
Sounds like we’re all aligned 🍻
@aoberoi @stevengill I've implemented all the necessary things here. Could you review this when you have a chance?
Looks great @seratch!
looks good to me too! thanks @seratch
I will make a pull request to reflect this change to the @slack/bolt@next branch.
Summary
This pull request depends on #358. The essential change that fixes #354 is c571f44.
Requirements (place an `x` in each `[ ]`)