Redis: Multi-instance app session expiration inconsistency (v2) #32
I have a feeling this change to the expiration key is the cause: since the attribute is presumably random for each pod, each instance ends up with a different key. I'll test with a local fork to be sure. https://github.com/fasthttp/session/blob/v1.1.8/const.go#L14
After some investigation and testing I can confirm that change is what's actually causing the issue. Everything works fine if I force it to just be the following: `const expirationAttrKey = "__store:expiration__"`
This commit is what we're using downstream to fix it; however, I'm waiting to hear back from you before making a PR. I think logically it should be a configurable value that defaults back to a random one, i.e. if the config string isn't blank/empty/whitespace-only, use it to configure the var; otherwise use the already randomly generated string. Though it's possible there's a method for this already, I couldn't find one.
Hi @james-d-elliott, sorry for the delayed answer! You're right, well spotted! It was my mistake. Could you make a PR to fix it, please?
@savsgio Yes I can, what's the preferred method? A static value? Or something that can be configured? |
A static value is ok |
* Between v1 and v2 the session expiration attr key became a random value.
* This caused sessions generated in one app/daemon/etc. to have expirations not aligning with others.
* This patch basically restores the functionality found in v1.
* Fixes #32
I have not yet 100% confirmed the version this occurred in, but I think it was the upgrade from 1.1.8 to 2.0.0.
Basically, if you use the Redis provider in a multi-instance app on something like Kubernetes, the provider does not seem to obtain the correct session expiration (after using Store.SetExpiration() and Session.Save()) from the Redis store, leading to a situation where the session expiration is not consistent between instances.
I have also experimented with Traefik's sticky load balancing. In that case, as soon as the load-balancer cookie expires and the client hits a new pod, the second pod appears to be completely unaware of the change to the expiration.
To me it makes sense that the session expiration should be available to all instances, but maybe I'm incorrect?