
Logout does not invalidate auth cookie #44

Open
ssuresh1750 opened this issue Mar 24, 2020 · 7 comments

Comments

@ssuresh1750

Issue

Logging out of Keymaster does not invalidate the auth_cookie, i.e. the JWT that is issued upon login.

Steps to reproduce

  1. Log in to Keymaster. Using a proxy such as Burp, observe the auth_cookie issued upon successful login. This is a JWT whose expiry is around 12 hours. Send this request to the Burp Repeater.
  2. Now click logout. The request we sent to the Burp Repeater still succeeds despite the logout.
  3. Additionally, if we log in again, the old JWT/auth_cookie is still valid.

Impact

If a session can still be used after logging out, the effective lifetime of the session is extended, giving third parties who may have intercepted the session token more time to impersonate the user.

Remediation

One possible solution would be to store a “blacklist” of all tokens that are no longer valid but have not yet expired. One can use a DB with a TTL option on documents, set to the amount of time left until the token expires. Redis is a good option for this, allowing fast in-memory access to the list. Then, in middleware of some kind that runs on every authorized request, check whether the provided token is in the blacklist. If it is, throw an unauthorized error; otherwise, let the JWT verification handle the request and determine whether the JWT has expired or is still active.
(Ref - https://medium.com/devgorilla/how-to-log-out-when-using-jwt-a8c7823e8a6)

@cviecco
Contributor

cviecco commented Mar 25, 2020

Adding another dependency seems against the resiliency guarantees of Keymaster. However, a blacklist kept in the DB and propagated to the machines every X seconds would be a decent trade-off. The question then is how often to copy the blacklist. Is 60 seconds too long? Do you have any opinions on this?

@rgooch
Member

rgooch commented Mar 25, 2020

Why not treat this the same way as when a U2F token is registered/changed/deleted? It's an event originating in one Keymaster that simply needs to be recorded and distributed.

@cviecco
Contributor

cviecco commented Mar 25, 2020

At the end of the day, this is just a desire to have a blacklist of explicitly disabled tokens (a revocation list). Since querying this list must be inline with operations, the check needs to be local to the verifying box (to avoid both performance AND resiliency issues). So the question becomes how to distribute this list. My suggestion is to use another table in the DB to hold all revoked tokens and have something copy this list into a local map/db for the actual inline use.

The communication pattern for any blacklist is either:

  1. keymasterd --> common db --> keymasterd* (every other keymaster).
  2. keymasterd --> keymasterd* (every other keymaster).

I don't like option 2 because we would need to: create a communication path between all keymasters (not a requirement now), and have each keymaster keep a history of its known revoked certs so that a new keymaster can populate this list.

Speed of propagation into the list is not a big issue, as this is protecting against explicitly 'abusing' the token; i.e. this attack requires help from the user-agent and/or a malicious MITM.

@rgooch
Member

rgooch commented Mar 25, 2020

Agreed that option (1) is preferred, as it doesn't add a new dependency or a new type of capability (cross-Keymaster discovery and communication).
The point of my previous comment is that I think this data copy can be done on the same schedule as the token DB copy, currently 5 minutes.

@ssuresh1750
Author

ssuresh1750 commented Mar 26, 2020

> Adding another dependency seems against the resiliency guarantees of Keymaster. However, a blacklist kept in the DB and propagated to the machines every X seconds would be a decent trade-off. The question then is how often to copy the blacklist. Is 60 seconds too long? Do you have any opinions on this?

My 2 cents on this: 60 seconds should be okay; ideally, as short as possible. Thanks!

@rgooch
Member

rgooch commented Mar 26, 2020

There's a tradeoff with how hard we hammer the database. I think that 5 minutes is fine, especially considering that if you've been subjected to a MitM attack, you've got far bigger problems (they can simply prevent you from logging out, or even fake the logout).

@rgooch
Member

rgooch commented Jan 22, 2021

I don't think the code should dictate the refresh interval. We should add a configuration option (with the default being 5 minutes to preserve current behaviour). Site administrators can then make the tradeoff themselves.
I do think that a single DB refresh interval should be used for both distributing the U2F token list and any CRL.


3 participants