It's unclear based on the current documentation how spawn requests for already existing backends resolve conflicting grace periods. For example, if I spawn a backend for lock `foo` with a `grace_period` of 60s and then 15s later spawn it again with the lock `foo` and a `grace_period` of 300s, my understanding is that a duplicate backend will not be started. But the behavior of the `grace_period` is now undefined.
Will it:
- remain at 60s with 45s left,
- remain at 60s with 60s left,
- now be at 300s with 255s left, or
- now be at 300s with 300s left?
Ideally I'd like an enum field on the spawn request that can define which strategy to use (defaulting to the existing behavior for backwards compat, I imagine). However, a strategy that my team in particular would benefit from is one where the grace period is reset to the newly requested value only if it's larger than the previous grace period, e.g. `GracePeriodStrategy.LARGEST_WINS`.
The reason for this is that we have two codepaths which spawn backends. The first is triggered by users who are connecting to a backend. In this codepath we set a generous grace period, anticipating that users may become disconnected and reconnect. The second is automation in our backend which spawns backends with a small grace period since (assuming no users are online already) we know the scope of interaction is small. In this pattern, largest-wins allows us to upgrade to a longer session if a user does connect, and prevents downgrading when automation runs while a user is already online.
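To make the proposal concrete, here is a minimal sketch of the resolution logic I have in mind. The `GracePeriodStrategy` enum and `resolve_grace_period` function are hypothetical names, not part of the existing API, and `KEEP_EXISTING` assumes the current behavior is to ignore the duplicate request's value:

```python
from enum import Enum

class GracePeriodStrategy(Enum):
    # Hypothetical names; KEEP_EXISTING assumes the current default behavior.
    KEEP_EXISTING = "keep_existing"  # ignore the new request's grace period
    LARGEST_WINS = "largest_wins"    # adopt the new value only if it is larger

def resolve_grace_period(existing: int, requested: int,
                         strategy: GracePeriodStrategy) -> int:
    """Return the grace period (seconds) to apply after a duplicate spawn
    request hits an already-running backend."""
    if strategy is GracePeriodStrategy.LARGEST_WINS and requested > existing:
        return requested
    return existing

# The scenario above: a 60s grace period, re-spawned with 300s -> upgraded.
assert resolve_grace_period(60, 300, GracePeriodStrategy.LARGEST_WINS) == 300
# Automation re-spawning with a small grace period never downgrades.
assert resolve_grace_period(300, 60, GracePeriodStrategy.LARGEST_WINS) == 300
```

Whether the winning value restarts the countdown from its full duration or only replaces the configured duration (the "255s left" vs "300s left" question above) would still need to be defined.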
A spawn request does not currently indicate that the backend is not idle. One of the assumptions behind that is that a spawn request is initiated for the purpose of connecting (immediately) from a client, so that connection is what will mark the backend as not idle.