
Add read-only interceptor to use on storage providers #1849

Merged
merged 1 commit into from
Jul 9, 2021

Conversation

micbar
Member

@micbar micbar commented Jun 30, 2021

Description

For migration scenarios it can be handy to use a storage provider in read-only mode. For example, if we use the ownCloud SQL driver to connect to an ownCloud Classic instance, read-only mode is a vital step in the migration process.

Changes

  • Implement a read-only interceptor
  • Return a proper WebDAV response body in some error cases as well

How it works

  • The read-only interceptor uses an allowlist of known request types.
  • Known request types which change data on the storage are blocked and return a gRPC error, which is forwarded to WebDAV.
  • Unknown request types are also blocked and return a gRPC error.
  • The interceptor changes the grants on the resources, which causes WebDAV to return read-only permissions.
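The allow/deny logic above can be sketched as a gRPC unary interceptor. This is an illustrative, self-contained sketch, not reva's actual implementation: the method names in the allowlist and the `handler` type are stand-ins (the real code wraps `grpc.UnaryHandler` and uses the full CS3 method set).

```go
package main

import (
	"errors"
	"fmt"
)

// Illustrative allowlist of read-only CS3 storage provider methods.
// The actual set lives in reva's "readonly" interceptor.
var readOnlyMethods = map[string]bool{
	"/cs3.storage.provider.v1beta1.ProviderAPI/Stat":                 true,
	"/cs3.storage.provider.v1beta1.ProviderAPI/ListContainer":        true,
	"/cs3.storage.provider.v1beta1.ProviderAPI/InitiateFileDownload": true,
}

// handler mimics grpc.UnaryHandler without pulling in the grpc module.
type handler func(req interface{}) (interface{}, error)

var errPermissionDenied = errors.New("permission denied: storage is read-only")

// readOnlyInterceptor blocks any method not on the allowlist. Unknown
// methods are rejected by default, matching the PR's behaviour of
// blocking unknown request types.
func readOnlyInterceptor(method string, req interface{}, next handler) (interface{}, error) {
	if !readOnlyMethods[method] {
		return nil, errPermissionDenied
	}
	return next(req)
}

func main() {
	next := func(req interface{}) (interface{}, error) { return "ok", nil }

	// A read-only request passes through to the wrapped handler.
	if res, err := readOnlyInterceptor("/cs3.storage.provider.v1beta1.ProviderAPI/Stat", nil, next); err == nil {
		fmt.Println("Stat allowed:", res)
	}
	// A mutating (or unknown) request is rejected before it reaches the storage.
	if _, err := readOnlyInterceptor("/cs3.storage.provider.v1beta1.ProviderAPI/Delete", nil, next); err != nil {
		fmt.Println("Delete blocked:", err)
	}
}
```

In the real interceptor the returned error is a gRPC permission-denied status, which the WebDAV layer then translates into an appropriate HTTP response code and body.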

WebUI

  • The WebUI reflects the WebDAV permissions and "greys out" or hides all actions that would change the files on the storage.

Config

  • To enable the interceptor, add it to the storage provider config.

Example storage-home.toml

# This storage-home.toml config file will start a reva service that:
# - authenticates grpc storage provider requests using the internal jwt token
# - authenticates http upload and download requests using basic auth
# - serves the home storage provider on grpc port 12000
# - serves http dataprovider for this storage on port 12001
#   - /data - dataprovider: file up and download
#
# The home storage will inject the username into the path and jail users into
# their home directory

[shared]
jwt_secret = "Pive-Fumkiu4"
gatewaysvc = "localhost:19000"

[grpc]
address = "0.0.0.0:12000"
interceptors = [
  "readonly"
]

# This is a storage provider that grants direct access to the wrapped storage
# TODO same storage id as the /oc/ storage provider
# if we have an id, we can directly go to that storage, no need to wrap paths
# we have a locally running dataprovider
# this is where clients can find it
# the context path wrapper reads the username from the context and prefixes the relative storage path with it
[grpc.services.storageprovider]
driver = "ocis"
mount_path = "/home"
mount_id = "123e4567-e89b-12d3-a456-426655440000"
expose_data_server = true
data_server_url = "http://localhost:12001/data"
enable_home_creation = true

[grpc.services.storageprovider.drivers.ocis]
root = "/var/tmp/reva/data"
enable_home = true
treetime_accounting = true
treesize_accounting = true
#user_layout = 
# do we need owner for users?
#owner = 95cb8724-03b2-11eb-a0a6-c33ef8ef53ad 


[http]
address = "0.0.0.0:12001"

[http.services.dataprovider]
driver = "ocis"
temp_folder = "/var/tmp/reva/tmp"

[http.services.dataprovider.drivers.ocis]
root = "/var/tmp/reva/data"
enable_home = true
treetime_accounting = true
treesize_accounting = true

Known Issue

  • Currently we need to add the interceptor to both storagehome and storageusers, which share the same physical storage (which is IMO weird and will be changed in the future).

@micbar micbar requested a review from labkode as a code owner June 30, 2021 15:28
@update-docs

update-docs bot commented Jun 30, 2021

Thanks for opening this pull request! The maintainers of this repository would appreciate it if you would create a changelog item based on your changes.

@micbar micbar requested a review from butonic June 30, 2021 15:29
@micbar
Member Author

micbar commented Jun 30, 2021

@labkode It took me a while to figure out the interceptors concept in reva. It works now when you add the interceptor to the storage provider config.

@labkode
Member

labkode commented Jul 1, 2021

@micbar awesome, this can be useful in many places, not only for migrations — for example, to give access to read-only storages.

Can you add in the PR description or in some example config how to enable it?

The CI complains about some tests.

@micbar
Member Author

micbar commented Jul 1, 2021

The CI complains about some tests.

Yes. I am on it. Still working on the WebDAV response codes and response body.

@ishank011
Contributor

@micbar @labkode can this be combined with the scopes we introduced recently? It does a similar job and adding the read-only scope would be a good addition.

@micbar
Member Author

micbar commented Jul 5, 2021

@ishank011 Can you point me to some resources / docs / tickets regarding the scopes?

@ishank011
Contributor

ishank011 commented Jul 7, 2021

Hi @micbar. There's a brief description in the PR #1669 and the workflow is detailed here https://codimd.web.cern.ch/XTib-1TzTyqx2IZJJOq5pA. I'll add some proper documentation as well.

As a summary, the auth provider returns the scope for which the token is valid. For example, for basic auth and in ocis, this token has the 'owner' scope, i.e., unrestricted access to all resources. For public shares, users are restricted to only that particular share and resource. We can add a similar read-only scope, and make it configurable in the auth providers. The checks can be done like this. I skipped the reader/editor checks because of issues with the WebDAV response codes as well. So it'll be good to fix those.

@micbar micbar force-pushed the read-only branch 5 times, most recently from cfe1699 to 13ba6cc Compare July 7, 2021 15:08
@micbar
Member Author

micbar commented Jul 7, 2021

@ishank011
IMO your use case is different, if I understand correctly.

I am trying to protect a storage by keeping it in a forced read-only mode to make sure that nothing changes "on disk" regardless of the accessing user and the auth method.

I would really like to keep the scope narrow in this PR. We need this for a migration scenario where you have a parallel usage of the same storage by two instances where only one instance has write access.

@micbar
Member Author

micbar commented Jul 7, 2021

@labkode I added an example config to the top post.

butonic
butonic previously approved these changes Jul 8, 2021
Contributor

@butonic butonic left a comment


omg so many fixes!

@labkode
Member

labkode commented Jul 9, 2021

@micbar @ishank011 let's keep the context of this PR for sysadmins to force a storage in read-only.
The scope for auth tokens is much more fine-grained, and I think this way is simpler for a sysadmin to configure the storage to be put in RO.

@labkode labkode merged commit 21e154f into cs3org:master Jul 9, 2021
thmour pushed a commit to thmour/reva that referenced this pull request Jul 12, 2021