Shares Jail PROPFIND returns different File IDs for the same item #9933

Open
felix-schwarz opened this issue Aug 28, 2024 · 11 comments
Labels: Priority:p2-high (Escalation, on top of current planning, release blocker), QA:p2, QA:team, Type:Bug

felix-schwarz commented Aug 28, 2024

Describe the bug

A folder shared from one user to another is identified by different File IDs in the Shares Jail, depending on the location targeted by the PROPFIND.

Steps to reproduce

  1. Share folder Universe from user A to user B
  2. As user B, perform a PROPFIND on the Shares Jail root folder.
  3. As user B, perform a PROPFIND on the Universe folder in the Shares Jail.
  4. Compare the <oc:id>s returned for the Universe folder item (see the curl sketch below).
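
To compare the two responses from the command line, a minimal sketch like the following can be used (the space ID is taken from the examples below; the host, credentials and the xmllint/grep post-processing are placeholders for illustration only):

# PROPFIND on the Shares Jail root (Depth: 1) lists the mountpoints, including Universe
curl -X PROPFIND -u userB:password -H "Depth: 1" \
  'https://demo.owncloud.com/dav/spaces/a0ca6a90-a365-4782-871e-d44447bbc668$a0ca6a90-a365-4782-871e-d44447bbc668/' \
  | xmllint --format - | grep '<oc:id>'

# PROPFIND on the shared folder itself (Depth: 1)
curl -X PROPFIND -u userB:password -H "Depth: 1" \
  'https://demo.owncloud.com/dav/spaces/a0ca6a90-a365-4782-871e-d44447bbc668$a0ca6a90-a365-4782-871e-d44447bbc668/Universe/' \
  | xmllint --format - | grep '<oc:id>'

# The <oc:id> reported for /Universe/ differs between the two responses.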

Expected behavior

Only one File ID is returned for an item in the Shares Jail, irrespective of the PROPFIND path and depth used.

Actual behavior

The File ID is different depending on the (Shares Jail) path targeted by the PROPFIND.

Setup

This was reproduced with demo.owncloud.com.

Examples

PROPFIND on / inside the Shares Jail with Depth: 1:

(request path: /dav/spaces/a0ca6a90-a365-4782-871e-d44447bbc668$a0ca6a90-a365-4782-871e-d44447bbc668/)

Response:

<?xml version="1.0"?>
<d:multistatus xmlns:s="http://sabredav.org/ns" xmlns:d="DAV:" xmlns:oc="http://owncloud.org/ns">
  <d:response>
    <d:href>/dav/spaces/a0ca6a90-a365-4782-871e-d44447bbc668$a0ca6a90-a365-4782-871e-d44447bbc668/</d:href>
    <d:propstat>
      <d:prop>
        <d:resourcetype>
          <d:collection/>
        </d:resourcetype>
        <d:getlastmodified>Wed, 28 Aug 2024 20:18:11 GMT</d:getlastmodified>
        <d:getetag>"abfb1a5bbbd08eb8f253e37c51fef1f7"</d:getetag>
        <oc:id>a0ca6a90-a365-4782-871e-d44447bbc668$a0ca6a90-a365-4782-871e-d44447bbc668!a0ca6a90-a365-4782-871e-d44447bbc668</oc:id>
        <oc:size>0</oc:size>
        <oc:permissions/>
        <oc:favorite>0</oc:favorite>
        <oc:owner-id>katherine</oc:owner-id>
        <oc:owner-display-name>Katherine Johnson</oc:owner-display-name>
      </d:prop>
      <d:status>HTTP/1.1 200 OK</d:status>
    </d:propstat>
  </d:response>
  <d:response>
    <d:href>/dav/spaces/a0ca6a90-a365-4782-871e-d44447bbc668$a0ca6a90-a365-4782-871e-d44447bbc668/Universe/</d:href>
    <d:propstat>
      <d:prop>
        <d:resourcetype>
          <d:collection/>
        </d:resourcetype>
        <d:getlastmodified>Wed, 28 Aug 2024 20:18:11 GMT</d:getlastmodified>
        <d:getetag>"abfb1a5bbbd08eb8f253e37c51fef1f7"</d:getetag>
        <oc:id>a0ca6a90-a365-4782-871e-d44447bbc668$a0ca6a90-a365-4782-871e-d44447bbc668!166d1210-cdb9-50ab-9f1e-ecb9ef12a304:4c510ada-c86b-4815-8820-42cdf82c3d51:3ef59385-34ba-4055-b705-f4f05f1388a4</oc:id>
        <oc:size>0</oc:size>
        <oc:permissions>SDNVCK</oc:permissions>
        <oc:favorite>0</oc:favorite>
        <oc:share-types>
          <oc:share-type>0</oc:share-type>
        </oc:share-types>
      </d:prop>
      <d:status>HTTP/1.1 200 OK</d:status>
    </d:propstat>
  </d:response>
  <d:response>
    <d:href>/dav/spaces/a0ca6a90-a365-4782-871e-d44447bbc668$a0ca6a90-a365-4782-871e-d44447bbc668/Theory/</d:href>
    <d:propstat>
      <d:prop>
        <d:resourcetype>
          <d:collection/>
        </d:resourcetype>
        <d:getlastmodified>Wed, 28 Aug 2024 20:10:22 GMT</d:getlastmodified>
        <d:getetag>"202a52840b4f000e6b7997c6d9e025a9"</d:getetag>
        <oc:id>a0ca6a90-a365-4782-871e-d44447bbc668$a0ca6a90-a365-4782-871e-d44447bbc668!166d1210-cdb9-50ab-9f1e-ecb9ef12a304:4c510ada-c86b-4815-8820-42cdf82c3d51:fdb56a22-9792-41b1-8538-ad66f31c67fb</oc:id>
        <oc:size>2567402</oc:size>
        <oc:permissions>SDNVCK</oc:permissions>
        <oc:favorite>0</oc:favorite>
        <oc:share-types>
          <oc:share-type>0</oc:share-type>
        </oc:share-types>
      </d:prop>
      <d:status>HTTP/1.1 200 OK</d:status>
    </d:propstat>
  </d:response>
</d:multistatus>

Notice how the response contains the following <oc:id> for /Universe/:
a0ca6a90-a365-4782-871e-d44447bbc668$a0ca6a90-a365-4782-871e-d44447bbc668!166d1210-cdb9-50ab-9f1e-ecb9ef12a304:4c510ada-c86b-4815-8820-42cdf82c3d51:3ef59385-34ba-4055-b705-f4f05f1388a4

PROPFIND on /Universe/ inside the Shares Jail with Depth: 1:

(request path: /dav/spaces/a0ca6a90-a365-4782-871e-d44447bbc668$a0ca6a90-a365-4782-871e-d44447bbc668/Universe/)

Response:

<?xml version="1.0"?>
<d:multistatus xmlns:s="http://sabredav.org/ns" xmlns:d="DAV:" xmlns:oc="http://owncloud.org/ns">
  <d:response>
    <d:href>/dav/spaces/a0ca6a90-a365-4782-871e-d44447bbc668$a0ca6a90-a365-4782-871e-d44447bbc668/Universe/</d:href>
    <d:propstat>
      <d:prop>
        <d:resourcetype>
          <d:collection/>
        </d:resourcetype>
        <d:getlastmodified>Wed, 28 Aug 2024 20:19:07 GMT</d:getlastmodified>
        <d:getetag>"78306138f800b46e89628999cf69d532"</d:getetag>
        <oc:id>166d1210-cdb9-50ab-9f1e-ecb9ef12a304$4c510ada-c86b-4815-8820-42cdf82c3d51!17634536-719e-47c4-bf8b-e8b97188259a</oc:id>
        <oc:size>993045</oc:size>
        <oc:permissions>SDNVCK</oc:permissions>
        <oc:favorite>0</oc:favorite>
        <oc:share-types>
          <oc:share-type>0</oc:share-type>
        </oc:share-types>
      </d:prop>
      <d:status>HTTP/1.1 200 OK</d:status>
    </d:propstat>
  </d:response>
  <d:response>
    <d:href>/dav/spaces/a0ca6a90-a365-4782-871e-d44447bbc668$a0ca6a90-a365-4782-871e-d44447bbc668/Universe/IMG_0004.jpeg</d:href>
    <d:propstat>
      <d:prop>
        <d:resourcetype/>
        <d:getlastmodified>Wed, 08 Aug 2012 21:29:49 GMT</d:getlastmodified>
        <d:getcontentlength>993045</d:getcontentlength>
        <d:getcontenttype>image/jpeg</d:getcontenttype>
        <d:getetag>"f64ca9ad4b5bbf0a20d091beee2c4473"</d:getetag>
        <oc:id>166d1210-cdb9-50ab-9f1e-ecb9ef12a304$4c510ada-c86b-4815-8820-42cdf82c3d51!d43e8715-6fd7-4736-9790-24f274987de7</oc:id>
        <oc:size>993045</oc:size>
        <oc:permissions>DNVW</oc:permissions>
        <oc:favorite>0</oc:favorite>
      </d:prop>
      <d:status>HTTP/1.1 200 OK</d:status>
    </d:propstat>
  </d:response>
</d:multistatus>

Notice how the response contains the following <oc:id> for /Universe/:
166d1210-cdb9-50ab-9f1e-ecb9ef12a304$4c510ada-c86b-4815-8820-42cdf82c3d51!17634536-719e-47c4-bf8b-e8b97188259a

Additional context

Different File IDs for the same item cause issues in the iOS app, which uses them to track the item's location and lifecycle:

  • disappearing folder contents: https://github.com/owncloud/enterprise/issues/6842
  • breaks the internal chain of item IDs, so items inside shared folders are erroneously seen as "root" shared items themselves, leading to side effects like the wrong actions being shown to the user. For example, Unshare (which of course fails) is shown, but Delete (which would work and be useful) is not.
felix-schwarz (Author) commented:

As noticed by @jesmrec, this issue could be the same as #8455 (and, by extension, #8766).

butonic (Member) commented Sep 2, 2024

When listing the share jail root we currently return the resourceid of the mountpoint, which resides in the share jail. As a result, any resource in the share jail has a resource id starting with a0ca6a90-a365-4782-871e-d44447bbc668$a0ca6a90-a365-4782-871e-d44447bbc668!.

When listing the content of a folder in the share jail we return the resource id of the shared resource and the contained files.

We could try returning the mount point resourceid when listing the mount point, but IIRC there are use cases where clients, maybe web, need to correlate the resourceid from the PROPFIND with resource ids from the sharedWithMe response.

Hm maybe that would also be cleaner ... @kulmann @JammingBen do we do some kind of resource id comparison magic there?

WebDAV currently has no way to show both the mount point resourceid AND the shared resource id. The Graph API makes this transparent: the driveItem for a mount point has an id and a remoteItem representing the shared resource with its own id.
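
For illustration, a hedged sketch of how a client could read both ids from the Graph response; the /graph/v1beta1 URL prefix, host, credentials and the jq filter are assumptions, while the id and remoteItem.id fields follow the driveItem structure described above:

# List received shares and print the mount point id next to the id of the
# actually shared resource (remoteItem). The URL prefix may differ per server version.
curl -u userB:password \
  'https://demo.owncloud.com/graph/v1beta1/me/drive/sharedWithMe' \
  | jq '.value[] | {name, mountPointId: .id, sharedResourceId: .remoteItem.id}'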

I guess it would be semantically more correct to also return the mount point id for the shared folder when access happens via the share jail.

felix-schwarz (Author) commented:

Thanks for the details.

In the end, the only thing that matters to the iOS app (and presumably the Android app, too) is that:

  • every item in the Shares Jail has a consistent <oc:id>, regardless of the path targeted by the PROPFIND
  • no special handling of these File IDs is necessary (be it lookup/coordination with the drive list, or performing special processing on the ID)

butonic (Member) commented Sep 4, 2024

I agree ... just need to find the prio for this ....

butonic (Member) commented Sep 4, 2024

Let me take a step back. Why are you even using the share jail? The graph /me/drives endpoint and /me/drive/sharedWithMe both return a list of driveItems that contain the remoteItem with the original driveItem id.

Let me be clear: the share jail is a broken and deprecated way of accessing shares. It forces us to invent ids for resources that can be accessed in two ways:

  1. via the share jail PROPFIND /dav/spaces/a0ca6a90-a365-4782-871e-d44447bbc668$a0ca6a90-a365-4782-871e-d44447bbc668/Universe
  2. via the resource itself: PROPFIND /dav/spaces/166d1210-cdb9-50ab-9f1e-ecb9ef12a304$4c510ada-c86b-4815-8820-42cdf82c3d51!17634536-719e-47c4-bf8b-e8b97188259a

The latter not only works for the owner of the space but also for users that have been granted access by a share.
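
A sketch of that direct access, using the resource id from the example above (host and credentials are placeholders):

# PROPFIND directly on the shared resource by its own id; this works for the
# owner of the space as well as for users who received access via a share.
curl -X PROPFIND -u userB:password -H "Depth: 0" \
  'https://demo.owncloud.com/dav/spaces/166d1210-cdb9-50ab-9f1e-ecb9ef12a304$4c510ada-c86b-4815-8820-42cdf82c3d51!17634536-719e-47c4-bf8b-e8b97188259a' \
  | xmllint --format - | grep '<oc:id>'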

Instead of making a PROPFIND against the share jail root, clients should use the /me/drives/sharedWithMe endpoint to get a list of all shares, their mount point name and the underlying remoteItem.

Question: what id is the server supposed to return for a PROPFIND on /dav/spaces/a0ca6a90-a365-4782-871e-d44447bbc668$a0ca6a90-a365-4782-871e-d44447bbc668/Universe/With/Sub/Sub/Sub/Folder? An id from the share jail, I suppose? Great, now every resource has two ids: one when accessing it via the share jail and one when accessing it by its actual space.

This makes it impossible to correlate shares with the fake ids from the share jail.

So, why don't we return the actual resource id for direct children of the share jail?

Because that is a mountpoint. A different resource than the shared resource. With its own id. In the share jail.

It gets more complicated with WOPI, because it uses the fileid to access the resource. That is why we return the real id when stating a mountpoint.

We could try returning a share jail id for mountpoints, but I don't know off the top of my head if that still works with WOPI.

Let me create a PR.

BTW, this has been part of the code for a long time:

	// FIXME when stating a share jail child we need to rewrite the id and use the share
	// jail space id as the mountpoint has a different id than the grant
	// but that might be problematic for eg. wopi because it needs the correct id? ...
	// ... but that should stat the grant anyway

	// FIXME when navigating via /dav/spaces/a0ca6a90-a365-4782-871e-d44447bbc668 the web ui seems
	// to continue navigating based on the id of resources, causing the path to change. Is that related to WOPI?

butonic (Member) commented Sep 4, 2024

Semantically, the graph /me/drive/sharedWithMe and the webdav /dav/spaces/{sharejailid} endpoints should be equivalent. You get a list of all shares and their mountpoint names on both. On the graph you also get the share state and the actually shared remoteItem id. We could even add a query filter to only list accepted shares.

Actually, the idea was that clients could query the /me/drives endpoint and would get not only spaces but also shares as spaces with type mountpoint. One endpoint to get a list of all spaces AND shares that the user has access to.

The share jail was just added as a stopgap for clients until they had implemented the drive lookup properly. I may have failed to make this clear to all teams, but I think we should still go there...
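
A sketch of that lookup (the $filter syntax follows the OData convention used by the Graph API; whether this exact filter is supported, as well as the host and credentials, are assumptions):

# List all drives the user can access; received shares appear as drives of type "mountpoint".
curl -u userB:password \
  'https://demo.owncloud.com/graph/v1.0/me/drives?$filter=driveType%20eq%20%27mountpoint%27' \
  | jq '.value[] | {name, driveType, webDavUrl: .root.webDavUrl}'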

felix-schwarz (Author) commented:

Thanks for the detailed answer and your perspective on this.

Due to time constraints I can't go into the same level of detail, but here are some of the problems that using the drive list directly poses for the iOS client:

  • for mountpoint drives, the shared item is also the root item of the WebDAV endpoint, which breaks many underlying assumptions of the client (for example "every file has a containing folder", "root folders don't have a name", …), making adoption a lot harder. If there were always a (virtual) root folder that contains just the shared item, adoption would be a lot easier and more straightforward.
  • for the File Provider, which provides the VFS to the iOS system Files.app, 20 MB of memory usage is a hard limit, meaning that exceeding that amount of memory usage leads to immediate termination without prior warning
    • parsing large JSON objects (like a drives list with a large number of shares) therefore inherently carries a higher risk of immediate termination than WebDAV XML, which is parsed incrementally as a stream
    • while combining items from different (WebDAV) sources is something the iOS app's SDK is prepared for, it also comes at a memory cost, which, again, increases the risk of immediate termination
    • for perspective: in the most recent iOS version, it's not even possible to debug the File Provider anymore because that makes the memory usage balloon past 20 MB the moment the process is started. Before the first line of the File Provider's code is even run, the linked (system) libraries already consume a large part of it.

In summary: moving away from the Shares Jail to a drive-endpoint-driven implementation in its current form will require a non-trivial amount of work and time. I'd therefore welcome it if, through a fix like cs3org/reva#4835, the Shares Jail could provide consistent IDs to allow users to fully use the iOS app until we get there.

butonic (Member) commented Sep 26, 2024

Phew, I found the time to debug this in the context of WOPI. Things have gotten better and all Stat requests are using the internal resource id, be that accessing an office document as the owner, as a recipient via the share jail, or via a public link. We now always use the correct fileid.

Let me bump cs3org/reva#4835 and run a full ocis CI as well. Then we can merge this.

tbsbdr (Contributor) commented Sep 30, 2024

QA missing

butonic (Member) commented Sep 30, 2024

QA as in "let clients test this"? @ScharfViktor and I went over the test suite. One test scenario needs to be updated, and we added it to the expected failures. The fix got merged to master ... backporting it to stable5 might solve some client issues.

saw-jan (Member) commented Oct 1, 2024

I tested with different dav paths and here's what I found:

Note

old and new paths don't have the Shares space-id prefixed
spaces path has the Shares space-id prefixed

Need confirmation whether this is the expected behavior.

  1. old:
  • root:
    curl -XPROPFIND 'https://localhost:9200/webdav/Shares' -udemo:demo -vk
  • folder:
    curl -XPROPFIND 'https://localhost:9200/webdav/Shares/Universe' -udemo:demo -vk
    same ids (NO Shares space-id prefix) ❗
    <d:href>/webdav/Shares/Universe/</d:href>
    <d:propstat>
      <d:prop>
        <oc:id>1b4334ac-6184-4d9e-bf34-0d9f1d52ab27$8dc8c61f-08ad-4d7a-a4b3-5691a82fb8b0!cbc094dc-a5f8-415a-a4ee-bde36314fe0d</oc:id>
        <oc:fileid>1b4334ac-6184-4d9e-bf34-0d9f1d52ab27$8dc8c61f-08ad-4d7a-a4b3-5691a82fb8b0!cbc094dc-a5f8-415a-a4ee-bde36314fe0d</oc:fileid>
        <oc:spaceid>1b4334ac-6184-4d9e-bf34-0d9f1d52ab27$8dc8c61f-08ad-4d7a-a4b3-5691a82fb8b0</oc:spaceid>
        <oc:file-parent>1b4334ac-6184-4d9e-bf34-0d9f1d52ab27$8dc8c61f-08ad-4d7a-a4b3-5691a82fb8b0!8dc8c61f-08ad-4d7a-a4b3-5691a82fb8b0</oc:file-parent>
        <oc:name>Universe</oc:name>
  2. new:
  • root:
    curl -XPROPFIND 'https://localhost:9200/dav/files/demo/Shares' -udemo:demo -vk
  • folder:
    curl -XPROPFIND 'https://localhost:9200/dav/files/demo/Shares/Universe' -udemo:demo -vk
    same ids (NO Shares space-id prefix) ❗
    <d:href>/dav/files/demo/Shares/Universe/</d:href>
    <d:propstat>
      <d:prop>
        <oc:id>1b4334ac-6184-4d9e-bf34-0d9f1d52ab27$8dc8c61f-08ad-4d7a-a4b3-5691a82fb8b0!cbc094dc-a5f8-415a-a4ee-bde36314fe0d</oc:id>
        <oc:fileid>1b4334ac-6184-4d9e-bf34-0d9f1d52ab27$8dc8c61f-08ad-4d7a-a4b3-5691a82fb8b0!cbc094dc-a5f8-415a-a4ee-bde36314fe0d</oc:fileid>
        <oc:spaceid>1b4334ac-6184-4d9e-bf34-0d9f1d52ab27$8dc8c61f-08ad-4d7a-a4b3-5691a82fb8b0</oc:spaceid>
        <oc:file-parent>1b4334ac-6184-4d9e-bf34-0d9f1d52ab27$8dc8c61f-08ad-4d7a-a4b3-5691a82fb8b0!8dc8c61f-08ad-4d7a-a4b3-5691a82fb8b0</oc:file-parent>
        <oc:name>Universe</oc:name>
  3. spaces:
  • root:
    curl -XPROPFIND 'https://localhost:9200/dav/spaces/a0ca6a90-a365-4782-871e-d44447bbc668$a0ca6a90-a365-4782-871e-d44447bbc668' -udemo:demo -vk
  • folder:
    curl -XPROPFIND 'https://localhost:9200/dav/spaces/a0ca6a90-a365-4782-871e-d44447bbc668$a0ca6a90-a365-4782-871e-d44447bbc668/Universe' -udemo:demo -vk
    same ids (Shares space-id prefix) ❓
    <d:href>/dav/spaces/a0ca6a90-a365-4782-871e-d44447bbc668$a0ca6a90-a365-4782-871e-d44447bbc668/Universe/</d:href>
    <d:propstat>
      <d:prop>
        <oc:id>a0ca6a90-a365-4782-871e-d44447bbc668$a0ca6a90-a365-4782-871e-d44447bbc668!1b4334ac-6184-4d9e-bf34-0d9f1d52ab27:8dc8c61f-08ad-4d7a-a4b3-5691a82fb8b0:580e67f7-941e-40e3-a2be-98b30131ecdf</oc:id>
        <oc:fileid>a0ca6a90-a365-4782-871e-d44447bbc668$a0ca6a90-a365-4782-871e-d44447bbc668!1b4334ac-6184-4d9e-bf34-0d9f1d52ab27:8dc8c61f-08ad-4d7a-a4b3-5691a82fb8b0:580e67f7-941e-40e3-a2be-98b30131ecdf</oc:fileid>
        <oc:spaceid>a0ca6a90-a365-4782-871e-d44447bbc668$a0ca6a90-a365-4782-871e-d44447bbc668</oc:spaceid>
        <oc:file-parent>1b4334ac-6184-4d9e-bf34-0d9f1d52ab27$8dc8c61f-08ad-4d7a-a4b3-5691a82fb8b0!8dc8c61f-08ad-4d7a-a4b3-5691a82fb8b0</oc:file-parent>
        <oc:name>Universe</oc:name>
