Direct Download Link for Up- and Download - Was "Broker" #12286
Is this the direct download link for the client? |
Yes. Download and upload |
Ok - please keep Klaas and myself in the loop. We have talked about this over the past days, as it was the subject of discussions with our friends here in Switzerland .... |
Sure |
Hmm - can we do this in small steps and first start with download? I'm assuming we already have enough to work on to get download properly working under all circumstances (various protocols and various auth mechanisms, in addition to the fact that in some cases direct download will not be allowed, e.g. with enabled encryption). Furthermore, with upload the logic is no longer straightforward with respect to chunking, encryption, general hooks processing and so on .... |
Sure. I'm a big fan of doing things in small steps :-) |
Here is what the desktop client does already in mirall 1.7: Mirall understands two new webdav properties in the PROPFIND response - a direct download URL plus an HTTP cookie to send along when fetching it (see the sketch below).
Notes: Of course, everything can be changed. But this is how it is in mirall 1.7 if we want to keep compatibility with it. |
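For illustration, a minimal client-side sketch of such a PROPFIND in Python. The property names (`oc:downloadURL` for the link, `oc:dDC` for the cookie) are assumptions here, not confirmed in this thread:

```python
# Sketch: ask the server for the direct-download hints of a single file.
# The oc:downloadURL / oc:dDC property names are assumptions.
import xml.etree.ElementTree as ET
import requests

PROPFIND_BODY = """<?xml version="1.0"?>
<d:propfind xmlns:d="DAV:" xmlns:oc="http://owncloud.org/ns">
  <d:prop>
    <oc:downloadURL/>
    <oc:dDC/>
  </d:prop>
</d:propfind>"""

def fetch_direct_download_hints(dav_url, auth):
    resp = requests.request("PROPFIND", dav_url, data=PROPFIND_BODY,
                            headers={"Depth": "0"}, auth=auth)
    resp.raise_for_status()
    ns = {"oc": "http://owncloud.org/ns"}
    root = ET.fromstring(resp.content)
    # Both properties are optional; None means "use the regular webdav download".
    return (root.findtext(".//oc:downloadURL", namespaces=ns),
            root.findtext(".//oc:dDC", namespaces=ns))
```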
Thinking about the great variety of external systems we support, we need to enhance this a bit: |
I actually don't quite get what this broker feature is about - I assume it should allow clients to upload to the storage backend directly instead of piping it through ownCloud? I can't comment on something when I don't know what it is about. Let me comment on that once we have a specification ready ;-) |
@LukasReschke yes, this is what it is :-) |
@DeepDiver1975 I think we can concentrate on http with basic auth for now. Providing the right link and credentials is the responsibility of the storage backend |
Http should also cover s3 and swift |
Well - I need your input now as we are brainstorming 😉 First of all let's focus on download first - in case of ftp the server would need to tell the client(s) to download a file from an ftp server which requires a user and a password. This might result in leakage of credentials and the question is on how to treat this. |
That's the really tricky question here: how can we transfer the credentials to the client? If we just expose them via the API without any kind of protection, a simple XSS vulnerability allows an attacker to steal all login credentials. I can think of the following solutions:
- encrypt the credentials with the user's password, so they are only readable while the user is logged in
- encrypt the credentials with a per-client key
|
s3 and swift use http as transport - yes - but basic auth will not work. Both add their own mechanisms for auth (multiple???) and require additional headers afaik |
from an implementation pov I totally agree - but we should respect extensibility for the future |
This is why I want your feedback - now 😉 - THX
Sounds reasonable to me.
Which will introduce some kind of key management - 🙊 |
S3 auth (scroll down a bit - the example upload shows some more http headers): http://docs.aws.amazon.com/AmazonS3/latest/dev/RESTAuthentication.html#RESTAuthenticationExamples I really doubt that we can stick with simple http urls - we need to use an s3 library in the clients and pass over all required parameters. Furthermore, s3 supports some kind of time-based access token. The server (owncloud) could request such an access token on behalf of the client, which would then be returned to the client (see the sketch below). For sure we need to have a file size limit on when to use the direct download url in such a case. |
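The "time-based access token" corresponds to what S3 calls a pre-signed URL. A minimal sketch with boto3; bucket, key and expiry are hypothetical values:

```python
# Sketch: the owncloud server requests a time-limited download url on
# behalf of the client. Bucket/key names here are hypothetical.
import boto3

s3 = boto3.client("s3")
url = s3.generate_presigned_url(
    ClientMethod="get_object",
    Params={"Bucket": "example-bucket", "Key": "path/to/file.dat"},
    ExpiresIn=600,  # token is valid for 10 minutes
)
# `url` can now be handed to the client as the direct download link;
# no s3 credentials ever leave the server.
```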
Another option would be:
1a. Client uploads its public key to the server when initially setting up.
This would allow for somewhat "more security", as in a provisioning scenario no additional key can be added. Therefore, if an attacker steals your ownCloud credentials he gains no access to further storage backends (see the sketch below). Though, admittedly, this will add a huge overhead. The "use the password for encryption" approach also works for me, but is obviously somewhat "less secure". |
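A minimal sketch of that option, assuming the server holds the client's previously uploaded RSA public key (PEM) and encrypts the backend credentials so only that client can read them; the Python `cryptography` package is used and all names are illustrative:

```python
# Sketch: encrypt backend credentials with the client's uploaded public key.
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def encrypt_for_client(client_public_key_pem: bytes, credentials: bytes) -> bytes:
    public_key = serialization.load_pem_public_key(client_public_key_pem)
    # RSA-OAEP is sufficient here because credentials are small payloads.
    return public_key.encrypt(
        credentials,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None),
    )

# Only the client holding the matching private key can decrypt, so stealing
# the ownCloud login alone does not expose the storage backend credentials.
```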
@DeepDiver1975 I wasn't aware of the additional headers. This sucks |
Then let's encrypt the credentials with the user's password for now, for the sake of simplicity. I'd suggest that we add this as a login hook to prevent storing the users' credentials within the session. The encrypted credentials are then stored within the session (or somewhere in the DB). That means we need a way to indicate that the clients have to re-authenticate (or re-send the basic auth header) in case a new backend has been added (see the sketch below). - I'd really prefer not to store the password if we don't have to. I know that there are cases where we need this, but here we can prevent it :-) |
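A minimal sketch of that login-hook idea, assuming a key is derived from the user's login password and used to encrypt the backend credentials before they go into the session or DB (PBKDF2 + Fernet from the Python `cryptography` package; parameters are illustrative):

```python
# Sketch: derive an encryption key from the user's password at login time,
# encrypt the backend credentials, and store only the ciphertext + salt.
import base64
import os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def encrypt_credentials(password: bytes, credentials: bytes):
    salt = os.urandom(16)
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=600_000)
    key = base64.urlsafe_b64encode(kdf.derive(password))
    return salt, Fernet(key).encrypt(credentials)

# If a new backend is added while the user is logged out, the server has
# neither the password nor the derived key, so the client must
# re-authenticate first - exactly the re-auth signal mentioned above.
```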
(mirall supports URL + HTTP cookie at the moment. Maybe it's better to focus on backends that can do this before trying to be super generic and never finishing the feature. I don't know what the requirements for the broker were. It's an optional feature after all) |
I agree with @guruz |
Another issue with authentication is that the user should not always have full access to the storage: if an admin sets up a webdav mount for a user which is restricted to a subfolder on the webdav server, we can't simply give the client the authentication info for the webdav server (whether it be a cookie or user/pw) |
@icewind1991 true. The idea is that the storage backend handles this somehow. The password could be a one-time password, for example |
Besides using owncloud via a dav mount there is no single backend which allows cookies - and as per the pure webdav spec we should not even allow session cookies - so this is not really an option from my pov. Regarding "super generic and never finished" - we'd better think about what we want before implementing something we can throw away later.
A one-time password will require a second call as described above. @karlitschek @MTRichards can we define a list of storages that we want to support? Attached to the items we should have a priority/release assignment. And then we search for a storage-specific solution which retains some freedom of extensibility. |
@karlitschek @MTRichards can I ask you for the requested feedback as of above? THX |
I think the real question is what sorts of external storage solutions can support this approach. Clearly SWIFT and S3 make sense, so I would start there. After that, can you get something like this from FTP, SharePoint and CIFS/Windows Network Drives? That would be great...but not sure it is possible. |
Hmm, not sure FTP, SharePoint or CIFS will work. We would have to implement the corresponding protocols in our clients. But even then this won't work because they are usually firewalled and not accessible via the internet. |
Ok - swift and s3 first. I'll take care to coordinate this with the client teams. |
In the first implementation the direct download link (for s3 and swift backends) will contain a link to the owncloud instance itself, which will generate a temporary/one-time access token. Sample direct download url:
The final target url will be generated and returned as a redirect to the client - the client will follow the redirect accordingly. Furthermore, in case the redirected url is unreachable or the owncloud server returns an error (e.g. the token could not be generated), the client shall fall back to the regular webdav-based download (see the sketch below). This approach will work as long as tokens can be generated. Let's generate explicit tickets now for: |
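A minimal sketch of the client-side behaviour described here, using the Python `requests` library; urls and function names are hypothetical:

```python
# Sketch: try the direct download link first, fall back to plain webdav.
import requests

def download(direct_url: str, webdav_url: str, auth) -> bytes:
    try:
        # The owncloud endpoint answers with a redirect to the storage
        # backend (e.g. a pre-signed s3 url); requests follows it by default.
        resp = requests.get(direct_url, auth=auth, timeout=30)
        resp.raise_for_status()
        return resp.content
    except requests.RequestException:
        # Token generation failed or the redirect target is unreachable:
        # fall back to the regular webdav-based download.
        resp = requests.get(webdav_url, auth=auth, timeout=30)
        resp.raise_for_status()
        return resp.content
```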
closed within the scope of oc8 - we will continue in OC8.1 #13149 |
Let's track the Broker requirements and the progress here