TrustedAppsReplacement #61
Conversation
Added a todo to update the rules based on Tim's feedback
In general, blacklists seem like a bad idea altogether in a world in which any agent can get any number of WebIDs, because any agent can simply mint a new ID different from the one that was banned. A monotonic, positive logic for allowing things, like we have in the current ACL system, is needed.
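For contrast, below is a minimal sketch of the allow-only (monotonic) pattern the current ACL system uses; the resource and WebID URIs are illustrative, and anything not explicitly granted is simply not allowed rather than blacklisted:

```
# Minimal sketch of an allow-only WAC authorization.
# URIs are illustrative; nothing is denied by name, it is just never granted.
@prefix acl: <http://www.w3.org/ns/auth/acl#> .

<#bobGrantsAlice>
    a acl:Authorization ;
    acl:agent <https://alice.example/profile/card#me> ;   # only explicitly listed agents gain access
    acl:accessTo <https://bob.example/data/notes.ttl> ;
    acl:mode acl:Read .
```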
Proposals/TrustedAppsReplacement.md (outdated)

> In the following scenario, Alice is going to go to `https://badguys.com` not knowing that it's a malicious site. Bob, however, does know about badguys, and has set up access control rules to prevent badguys from being able to take his data.

> This deviates from the current dpop flow in the following ways:
Can you add a link to the 'current dpop flow'?
> ```
> solid:oidcIssuer <https://idp.com>,
>     <https://otheridp.com>.
> ```
> The important thing to note here is the `solid:oidcIssuer` that links a variable number of issuers to Alice. All these issuers are trusted by Alice, but may not be trusted by other parties. In the case that a party does not trust one issuer, Alice should use a different one.
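For readers without the surrounding diff, a complete WebID profile along these lines might look like the following minimal sketch; the profile URI and the `foaf` typing are assumptions beyond the quoted snippet, while the issuer URIs are the ones shown above:

```
# Hypothetical profile document for Alice listing two trusted OIDC issuers.
@prefix foaf:  <http://xmlns.com/foaf/0.1/> .
@prefix solid: <http://www.w3.org/ns/solid/terms#> .

<https://alice.example/profile/card#me>
    a foaf:Person ;
    solid:oidcIssuer <https://idp.com>,
                     <https://otheridp.com> .
```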
I don't quite follow the intent of `solid:oidcIssuer` here. The `rdfs:comment` for the term in the Solid vocabulary (https://www.w3.org/ns/solid/terms#oidcIssuer) states: "The preferred OpenID Connect issuer URI for a given Web ID", so what's the reasoning behind having multiple here? Is the intent to allow Alice to list all the IdPs that she personally trusts (which might not even include the IdP that she uses to log in)? If so, then maybe it should be a separate predicate like `solid:trustedOidcIssuer`?

And what do you mean by 'Alice should use a different one'? i.e. where does Alice pick an OIDC issuer?

(BTW, I know asking multiple questions in a single comment is awkward, which is why READMEs should have short lines (e.g. 80 characters) and should be split whenever a new concept or nuance is being written about. It just means questions can be asked more specifically.)
Perhaps the `rdfs:comment` should be changed to say "A trusted OpenID Connect issuer URI for a given WebID". The discussion on multiple OPs is here.

Every `solid:oidcIssuer` is trusted by Alice. Resource Servers MUST reject any tokens that are signed by issuers that are not listed in Alice's WebID, as stated in step 6 of the request-sending flow.
So, I guess it falls on me to say "huh, I don't understand" :-) My main concern right now is the performance of common requests for resources.

First, the way that ACLs used to work was that we just identified the relevant ACL resource, augmented that with group listings, and had that in an in-memory triple store; then we practically had a closed world. It was beautiful, it was simple. It could be cached with a write-through proxy, and we could easily make sure it was up to date. Moreover, once the ACL was determined, no further HTTP requests were needed to determine the effective permissions.

So I'd like to understand whether it is possible to design an algorithm that doesn't have this problem, or whether the impact of the performance hit can be minimized somehow, either because it becomes easy to cache, or because the cost would only be incurred if the target RS (e.g. Bob's server) has a plausible suspicion. And that's where I don't quite see the possibility to make it lighter in the network flow diagram: it seems to me to require three requests between Alice's request for Bob's resource and Bob's response, in all cases. Moreover, these requests happen in the open world, i.e. it is hard to mitigate the performance hit by prefetching, because it might just be anyone making the request.

If there was some way around point 3 or 4 where Bob's RS could go "yeah, definitely safe" and jump right to the response in most cases, that would be comforting, but I don't understand if that's doable now. Or it might just be that I simply don't understand what happens, but at least I need to say so :-) Anyway, it would be great to have that clarification.
@kjetilk Caching is certainly an issue that wasn't considered on this first pass. So, I think once resources are allocated to look at this, we should do another pass with caching as a consideration.
I think any endorsement service must be prepared to revoke endorsements for various reasons. Any alternative to distributed revocation lists would require guaranteed access to some authority which maintains its revocations internally, e.g. any implementation of OCSP. Any time you cache the results of consulting such an authority, you're caching the negative responses, which locally copies the subset of the revocation list that you've cared about in the past. A distributed revocation list gets out in front of offline vulnerabilities at the expense of caching a bunch of stuff you'll never need.
I agree with this point; blacklists don't seem like a sustainable pattern. The effort to maintain those blacklists seems much higher than the effort of creating 'fresh' WebIDs. I see the use case proposed by @zenomt differently; as I understand it, it is something like "all members of this group except this specific person (while also a member of that group)", and it reminds me of

To replace the first Trusted Apps experiment, I think we need to focus on that "client restrictions" part. To match the scope of that first experiment, we could just include an access mode (e.g. read-only) in the capability that the app presents to the RS. Once we sort out how we use shapes etc., we should be able to just add that additional constraint to that capability.
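For comparison, here is roughly how the first Trusted Apps experiment expressed a read-only client grant, as a minimal sketch assuming its `acl:trustedApp` / `acl:origin` pattern; the app origin URI is illustrative:

```
# Sketch: read-only grant to one app origin in the owner's profile,
# in the style of the first Trusted Apps experiment.
@prefix acl: <http://www.w3.org/ns/auth/acl#> .

<https://alice.example/profile/card#me>
    acl:trustedApp [
        acl:origin <https://app.example.org> ;   # illustrative app origin
        acl:mode   acl:Read                      # access mode baked into the grant
    ] .
```

The idea above would carry the same kind of access-mode constraint inside the capability the app presents to the RS, rather than in the owner's profile.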
@jaxoncreed I want to point out that #48, which you reference here as "TagAC", is intended and proposed as a complete replacement for "trusted apps".
Yep. Perhaps I should make it clearer in this document that the examples are specifically using the definition of Tags from the tag proposal, and that the tag proposal contains other things that aren't referenced here.
Actually, I think that whenever possible we should separate out the parts of proposals that could get incorporated independently, for example labeling with tags and relying on a designated prefix for IRIs denoting a user's client authorizations. Just because the same PR includes both doesn't mean that we can't cherry-pick from that PR. An authorization document (with a specific prefix) sounds very similar to a capability credential passed by reference (a capability URL). How the RS would verify that the user issued them does indeed differ. I think both could use some kind of tag labels in the document / credential itself.
Huh, I thought I responded yesterday... Anyway. It is not just about caching, but also about the functional dependencies that we require. If we require an open world for every request, it is likely to cause performance problems one way or the other. Caching is one way to mitigate that, but we should try not to get into that situation in the first place.
Doesn't this mean you have to either know the future or never change your mind? You can require that the expressivity of your ACL language be limited to positive assertions, which gets you around the implied revocations when principals are added to a negated dynamic group. But you can't predict a signatory getting new information (a WebID was stolen, or the owner is up to no good) and redacting a WebID. I'm not sure how minting fresh WebIDs would get around this problem, except maybe by abandoning a signatory and switching to a fresh replica of it.
As usual, my question here is regarding the exact problem we are solving. One remark already: I strongly dislike the notion of "semi-trusted". It gives rise to all kinds of confusion about three-quarter- and one-third-trusted. What is actually meant seems to be an orthogonal dimension to trust, namely whether the delegation (not the trust) is complete or partial.
Agreed. I think I'll remove mentions of the term "semi-trusted".
@jaxoncreed, in the enterprise scenarios which you have mentioned during the last two panel meetings, where in practice would a resource controller want to define restrictions for clients? I have the impression that this would work more as a 'global policy' than as a per-resource rule. Possibly an enterprise would want to have client restrictions defined at the root of the storage, and maybe even an enterprise-wide whitelist which would apply across a number of storage instances.
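One way to picture that 'global policy' idea is a single rule at the root of the storage that applies to everything beneath it. The sketch below assumes a WAC-style ACL with `acl:default` for inheritance; the `acl:origin` whitelist entry merely stands in for whatever client-restriction mechanism this proposal settles on, and all URIs are illustrative:

```
# Sketch: one authorization at the storage root acting as a storage-wide policy.
@prefix acl: <http://www.w3.org/ns/auth/acl#> .

<#enterprisePolicy>
    a acl:Authorization ;
    acl:agentGroup <https://corp.example/groups#staff> ;   # illustrative employee group
    acl:origin <https://approved-app.corp.example> ;       # illustrative whitelisted client origin
    acl:accessTo <https://storage.corp.example/> ;
    acl:default  <https://storage.corp.example/> ;         # applies to everything under the root
    acl:mode acl:Read, acl:Write .
```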
I'm going to merge this in, as it's only a proposal.
In this PR we propose half of a replacement for trusted apps.
The other half is coming later.