
Direct Download Link for Up- and Download - Was "Broker" #12286

Closed

karlitschek opened this issue Nov 19, 2014 · 34 comments

@karlitschek
Contributor

Let's track the Broker requirements and the progress here.

@karlitschek
Contributor Author

@icewind1991

@DeepDiver1975
Member

Is this the direct download link for the client?

@karlitschek
Contributor Author

Yes. Download and upload

@DeepDiver1975
Member

Ok - please keep Klaas and me in the loop. We have talked about this over the past few days, as it was the subject of discussions with our friends here in Switzerland ....

@karlitschek
Contributor Author

Sure

@craigpg craigpg added this to the 2014-sprint-08-current milestone Nov 20, 2014
@DeepDiver1975
Member

Yes. Download and upload

Hmm - can we do this in small steps and first start with download?

I'm assuming we already have enough work ahead of us just getting download working properly under all circumstances: various protocols and various auth mechanisms, plus the fact that in some cases direct download will not be allowed at all (e.g. with encryption enabled).

Furthermore, with upload the logic is no longer straightforward with respect to chunking, encryption, general hook processing and so on ....

@karlitschek
Contributor Author

Sure. I'm a big fan of doing things in small steps :-)

@ogoffart

Here is what the desktop client already does in mirall 1.7:

Mirall understands two new WebDAV properties in the http://owncloud.org/ns namespace:

  • dDU: stands for "direct download URL" and is simply a URL that points to the file; it needs to be a URL specific to this version of the file (that etag).
  • dDC: stands for "direct download Cookie" and holds the cookies that will be sent with the GET request. It is just the raw Cookie header value (example: 541bf4fc82bd8=8ebbl4istpe83qk3o70dak2647) and it replaces any other cookies.

Notes:

  • We don't read the etag from the GET reply; the etag has to be the one that was sent in the PROPFIND, and the file must not have changed.
  • We don't issue a Range header to resume when the direct URL is used (because the server we implemented this feature for did not support it).
  • (Don't blame me for the name of the properties, that's how it is)

Of course, everything can be changed. But this is how it is in mirall 1.7 if we want to keep compatibility with it.
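
For illustration, here is a rough Python sketch of how a client could consume these two properties (mirall itself is C++; the `requests`-based flow, server URL, and helper names below are assumptions, not mirall code):

```python
# Hypothetical sketch of the dDU/dDC handshake described above (not
# mirall's actual C++ code). Server URL, path, and auth are placeholders.
import xml.etree.ElementTree as ET

import requests

OC_NS = "http://owncloud.org/ns"

PROPFIND_BODY = f"""<?xml version="1.0"?>
<d:propfind xmlns:d="DAV:" xmlns:oc="{OC_NS}">
  <d:prop><d:getetag/><oc:dDU/><oc:dDC/></d:prop>
</d:propfind>"""

def download(base_url, path, auth):
    # Ask for the direct-download properties alongside the etag.
    resp = requests.request(
        "PROPFIND", f"{base_url}/remote.php/webdav/{path}",
        data=PROPFIND_BODY, headers={"Depth": "0"}, auth=auth,
    )
    resp.raise_for_status()
    tree = ET.fromstring(resp.content)
    ddu = tree.findtext(f".//{{{OC_NS}}}dDU")
    ddc = tree.findtext(f".//{{{OC_NS}}}dDC")
    if not ddu:
        # No direct URL advertised: plain WebDAV GET instead.
        return requests.get(f"{base_url}/remote.php/webdav/{path}", auth=auth)
    # dDC is the raw Cookie header value and replaces any other cookies;
    # no Range header is sent, and the etag from the PROPFIND is kept.
    headers = {"Cookie": ddc} if ddc else {}
    return requests.get(ddu, headers=headers)
```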

@DeepDiver1975
Member

Thinking about the great variety of external systems we support, we need to enhance this a bit:

  • support of multiple protocols (ftp, sftp, samba, swift, s3 ....)
  • each protocol comes with its own way of authentication (user and password - @LukasReschke questionable from a security POV, I guess - cookies, OAuth headers; for S3 it would be some kind of secret+key .....)

@LukasReschke
Member

I actually don't quite get what this broker feature is about - I assume it should allow clients to upload to the storage backend directly instead of piping it through ownCloud?

Can't comment on something that I don't know what it is about. Let me comment on that once we have a specification ready ;-)

@karlitschek
Contributor Author

@LukasReschke yes, this is what it is :-)

@karlitschek
Contributor Author

@DeepDiver1975 I think we can concentrate on HTTP with basic auth for now. Providing the right link and credentials is the responsibility of the storage backend.

@karlitschek
Contributor Author

HTTP should also cover S3 and Swift.

@DeepDiver1975
Member

Can't comment on something that I don't know what it is about. Let me comment on that once we have a specification ready ;-)

Well - I need your input now as we are brainstorming 😉

First of all, let's focus on download - in the case of ftp, the server would need to tell the client(s) to download a file from an ftp server, which requires a user and a password.

This might result in leakage of credentials, and the question is how to treat this.

@LukasReschke
Member

The really tricky question here is how we can transfer the credentials to the client. If we just expose them via the API without any kind of protection, a simple XSS vulnerability would allow an attacker to steal all login credentials.

I can think of the following solutions:

  1. We encrypt the credentials with the login credentials of the user. Those credentials are also known to the sync clients. (Easier to implement)
  2. We encrypt the credentials with a private key known to the client and to the server. Getting this key always requires authenticating with the proper credentials. (Would be better in my opinion)
  3. Getting the credentials via the API always requires re-authenticating. (Most probably the easiest to implement)
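
A minimal sketch of option 1, assuming a PBKDF2 key derived from the user's login password and symmetric encryption via the Python `cryptography` package; the function names are made up for illustration:

```python
# Illustrative sketch of option 1 (not ownCloud's implementation):
# seal the backend credentials with a key derived from the user's
# login password.
import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.hashes import SHA256
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def _derive_key(password: str, salt: bytes) -> bytes:
    kdf = PBKDF2HMAC(algorithm=SHA256(), length=32, salt=salt,
                     iterations=600_000)
    return base64.urlsafe_b64encode(kdf.derive(password.encode()))

def seal(login_password: str, backend_secret: str) -> bytes:
    salt = os.urandom(16)  # stored alongside the ciphertext
    token = Fernet(_derive_key(login_password, salt)).encrypt(
        backend_secret.encode())
    return salt + token

def unseal(login_password: str, blob: bytes) -> str:
    salt, token = blob[:16], blob[16:]
    return Fernet(_derive_key(login_password, salt)).decrypt(token).decode()
```

Since the sync client already knows the login password, it can derive the same key; the server only has to keep the salt and ciphertext.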

@DeepDiver1975
Member

HTTP should also cover S3 and Swift.

s3 and swift use http as transport - yes - but basic auth will not work. Both add their own mechanisms for auth (multiple???) and require additional headers afaik

@DeepDiver1975
Member

@DeepDiver1975 I think we can concentrate on HTTP with basic auth for now. Providing the right link and credentials is the responsibility of the storage backend.

from an implementation pov I totally agree - but we should respect extensibility for the future

@DeepDiver1975
Member

The really tricky question here is how we can transfer the credentials to the client. If we just expose them via the API without any kind of protection, a simple XSS vulnerability would allow an attacker to steal all login credentials.

This is why I want your feedback - now 😉 - THX

I can think of the following solutions:

We encrypt the credentials with the login credentials of the user. Those credentials are also known to the sync clients. (Easier to implement)

Sounds reasonable to me.

We encrypt the credentials with a private key known to the client and to the server. Getting this key always requires authenticating with the proper credentials. (Would be better in my opinion)

Which will introduce some kind of key management - 🙊

@DeepDiver1975
Member

S3 auth (scroll down a bit - the example upload shows some more http headers) - http://docs.aws.amazon.com/AmazonS3/latest/dev/RESTAuthentication.html#RESTAuthenticationExamples

I really doubt that we can stick with simple HTTP URLs - we would need to use an S3 library in the clients and pass along all required parameters.

Furthermore - S3 supports a kind of time-based access token. The server (ownCloud) could request such an access token on behalf of the client, which would then be returned to the client.
The question now is: generating these access tokens has to be triggered by some action - what would that action be? A solution might be a broker entry point where the client can request the access token - but this will result in an additional HTTP request.

For sure we need a file size limit governing when to use the direct download URL in such a case.
Furthermore, enabling direct download should be a per-storage setting.
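
For reference, the time-based token already exists on S3 in the form of pre-signed URLs. A hedged sketch with `boto3` (bucket, key, and TTL are placeholders) of what the server could generate on the client's behalf:

```python
# Sketch of the time-based token as it exists on S3: a pre-signed URL
# generated via boto3. The idea is that ownCloud would generate this
# on the client's behalf; names here are placeholders.
import boto3

s3 = boto3.client("s3")

def presigned_download_url(bucket: str, key: str, ttl: int = 600) -> str:
    # The signature and expiry are embedded in the URL itself, so the
    # client needs nothing beyond a plain HTTP GET - no extra headers.
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": bucket, "Key": key},
        ExpiresIn=ttl,
    )
```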

@LukasReschke
Member

Another option would be:

1a. Client uploads a public key to the server when initially setting up
1b. Admin provisions a public key for the user
2. Server encrypts the credentials for the public key
3. Client decrypts with the private key

This would allow for somewhat "more security", as in a provisioning scenario no additional key can be added. Therefore, if an attacker steals your ownCloud credentials, he gains no access to further storage backends. Though, admittedly, this will add a huge overhead. The "use the password for encryption" approach also works for me, but is obviously somewhat "less secure".
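
A hypothetical sketch of this public-key variant, using RSA-OAEP from the Python `cryptography` package; the key size and sample secret are illustrative only:

```python
# Hypothetical sketch of the public-key option above; not a real
# ownCloud flow, just the shape of steps 1a/1b through 3.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# 1a/1b: the client (or the admin, via provisioning) registers a public key.
client_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)
server_copy_of_public_key = client_key.public_key()

# 2: the server encrypts the backend credentials for that public key.
sealed = server_copy_of_public_key.encrypt(b"s3-secret-key", OAEP)

# 3: only the client, holding the private key, can decrypt them.
assert client_key.decrypt(sealed, OAEP) == b"s3-secret-key"
```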

@karlitschek
Contributor Author

@DeepDiver1975 I wasn't aware of the additional headers. This sucks.
@LukasReschke good ideas, but we have to do it in a way that is compatible with existing storage systems.

@LukasReschke
Member

Then let's encrypt the credentials with the user's password for now, for the sake of simplicity. I'd suggest that we add this as a login hook, to avoid storing the user's plaintext credentials within the session. The encrypted credentials are then stored within the session (or somewhere in the DB).

That means we need a way to indicate that the clients have to re-authenticate (or re-send the basic auth header) in case a new backend has been added. I'd really prefer not to store the password if we don't have to. I know that there are cases where we need this, but here we can avoid it :-)

@guruz
Contributor

guruz commented Nov 24, 2014

(mirall supports URL + HTTP cookie at the moment. Maybe it's better to focus on backends that can do this before trying to be super generic and never finishing the feature. I don't know what the requirements for the broker were. It's an optional feature after all)

@karlitschek
Contributor Author

I agree with @guruz

@icewind1991
Contributor

Another issue with authentication is that the user should not always have full access to the storage. If an admin sets up a WebDAV mount for a user which is restricted to a subfolder on the WebDAV server, we can't simply give the client authentication info for the WebDAV server (whether it be a cookie or user/pw).

@karlitschek
Contributor Author

@icewind1991 true. The idea is that the storage backend handles this somehow. The password could be a one-time password, for example.

@DeepDiver1975
Member

(mirall supports URL + HTTP cookie at the moment. Maybe it's better to focus on backends that can do this before trying to be super generic and never finishing the feature. I don't know what the requirements for the broker were. It's an optional feature after all)

Besides using ownCloud via a dav mount, there is not a single backend which allows cookies - and per the pure WebDAV spec we shouldn't even allow session cookies - so this is not really an option from my POV.

Regarding "super generic and never finished" - we better think about what we want to have/want before implementing something we can throw away later.

The password could be a one-time password, for example.

A one-time password will require a second call, as described above.

@karlitschek @MTRichards can we define a list of storages that we want to support? Each item should have a priority/release assignment attached. Then we can look for a storage-specific solution which leaves some freedom for extensibility.

@dragotin dragotin changed the title Broker Direct Download Link for Up- and Download - Was "Broker" Nov 24, 2014
@PVince81 PVince81 mentioned this issue Nov 28, 2014
@DeepDiver1975
Member

@karlitschek @MTRichards can I ask you for the feedback requested above? THX

@MTRichards
Contributor

I think the real question is what sorts of external storage solutions can support this approach.

Clearly SWIFT and S3 make sense, so I would start there.

After that, can you get something like this from FTP, SharePoint and CIFS/Windows Network Drives? That would be great... but I'm not sure it is possible.

@karlitschek
Contributor Author

Hmm, not sure FTP, SharePoint or CIFS will work. We would have to implement the corresponding protocols in our clients. But even then this won't work, because they are usually firewalled and not accessible via the internet.
So I suggest we focus first on HTTP-based protocols like SWIFT and S3. And let's also keep in mind the custom backends that are used by certain educational institutions.

@DeepDiver1975
Member

Ok - swift and s3 first.

I'll take care to coordinate this with the client teams.

@DeepDiver1975
Member

Ok - swift and s3 first.

I'll take care to coordinate this with the client teams.

@dragotin @danimo @ogoffart @guruz

Please share your thoughts/ideas regarding integration of C/C++ SDKs for SWIFT and S3 protocols. THX

@DeepDiver1975
Member

In the first implementation, the direct download link (for S3 and Swift backends) will be a link to the ownCloud instance itself, which will generate a temporary/one-time access token.

Sample direct download URL:

https://cloud.example.com/remote.php/direct/{fileId}

The final target URL will be generated and returned as a redirect to the client - the client will follow the redirect accordingly.

Furthermore, in case the redirect URL is unreachable or the ownCloud server returns an error (e.g. the token could not be generated), the client shall fall back to the regular WebDAV-based download.

This approach will work as long as tokens can be generated.
There is no need to add any additional libraries to the clients.
We still have to think about a solution for systems where no token can be generated (for whatever reason).
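
A minimal client-side sketch of this flow, assuming the /remote.php/direct/{fileId} entry point above (the WebDAV fallback is left as a stub; this is not the shipped client code):

```python
# Sketch of the redirect-and-fallback flow described above.
import requests

def direct_download(base_url: str, file_id: str, auth) -> bytes:
    try:
        resp = requests.get(
            f"{base_url}/remote.php/direct/{file_id}",
            auth=auth,
            allow_redirects=True,  # follow the redirect to the real target
            timeout=30,
        )
        resp.raise_for_status()
        return resp.content
    except requests.RequestException:
        # Token generation failed or the redirect target is unreachable:
        # fall back to the regular WebDAV-based download.
        return webdav_download(base_url, file_id, auth)

def webdav_download(base_url: str, file_id: str, auth) -> bytes:
    raise NotImplementedError("regular WebDAV GET goes here")
```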

Let's generate explicit tickets now for:

  • S3 - generating token on GET
  • SWIFT - generating token on GET
  • mirall - allow redirect on direct download link @guruz

@DeepDiver1975
Member

Closed within the scope of OC 8 - we will continue in OC 8.1: #13149

@lock lock bot locked as resolved and limited conversation to collaborators Aug 14, 2019