
eTags do not propagate consistently #4251

Closed
kiranparajuli589 opened this issue Jul 21, 2022 · 6 comments
Labels: Priority:p2-high (Escalation, on top of current planning, release blocker), Type:Bug


@kiranparajuli589
Contributor

Description

eTags should be propagated consistently on file operations such as COPY and MOVE. With the current state of the oCIS server, eTag propagation is executed correctly most of the time, but occasionally it is not.

We have tests for eTag propagation after COPY operations on files/folders at https://github.com/owncloud/core/blob/master/tests/acceptance/features/apiWebdavEtagPropagation2/copyFileFolder.feature. These tests failed in the oCIS CI multiple times and are now skipped with owncloud/core#40225.

One assumption was that the eTag propagation might be asynchronous, so we added retries to the eTag assertions. But even with multiple retries and waits in between, the scenarios did not pass 100% of the time; some runs still failed. So asynchronous eTag propagation is not really the issue, the eTag simply never gets updated.
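
For illustration, here is a minimal sketch of how such a check can be done by hand against a local oCIS instance: read the root collection's eTag via PROPFIND, perform a COPY, and poll the eTag a few times. The base URL, the admin:admin demo credentials, and the /src.txt and /dst.txt paths are assumptions for a local dev setup, not part of the original report.

```bash
#!/usr/bin/env bash
# Sketch only: manually check eTag propagation after a COPY against a local oCIS.
# Assumptions (adjust to your setup): oCIS at https://localhost:9200, the
# admin:admin demo user, and an existing file /src.txt under the WebDAV root.

BASE="https://localhost:9200/remote.php/webdav"
AUTH="admin:admin"

etag_of() {
  # Read the getetag property of a resource via PROPFIND with Depth: 0.
  curl -ks -u "$AUTH" -X PROPFIND -H "Depth: 0" "$BASE/$1" \
    | grep -o 'getetag>[^<]*' | head -n1
}

before=$(etag_of "")   # eTag of the root collection before the COPY

# COPY /src.txt to /dst.txt; this should propagate a new eTag up to the root.
curl -ksf -u "$AUTH" -X COPY -H "Destination: $BASE/dst.txt" "$BASE/src.txt" -o /dev/null \
  || { echo "COPY failed"; exit 1; }

# Poll a few times, to rule out propagation that is merely asynchronous.
for _ in $(seq 1 5); do
  after=$(etag_of "")
  if [ "$after" != "$before" ]; then
    echo "root eTag changed: $before -> $after"
    exit 0
  fi
  sleep 1
done

echo "root eTag did NOT change after COPY ($before)"
exit 1
```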

Related

@kiranparajuli589
Contributor Author

This is appearing in MOVE scenarios too: https://drone.owncloud.com/owncloud/ocis/13909/50/7

@dragonchaser
Member

dragonchaser commented Aug 10, 2022

I could reproduce it locally with ocis, NOT with revad standalone.
I am assuming that there is some race condition happening in ocis.
It was impossible to isolate anything from the logs that would explain that behaviour.
Any input is appreciated.

To reproduce:

```bash
## Run integration tests
export PATH_TO_CORE=../testrunner
export PATH_TO_OCIS=../ocis
export TEST_WITH_GRAPH_API=true

### Ocis
make test-acceptance-api \
  TEST_SERVER_URL=https://localhost:9200 \
  TEST_OCIS=true \
  SKELETON_DIR=/<path-to-your-dev-env>/testrunner/apps/testing/data/apiSkeleton \
  OCIS_SKELETON_STRATEGY="" \
  OCIS_REVA_DATA_ROOT='/home/<user>/.ocis/' \
  BEHAT_FEATURE=/<path-to-your-dev-env>/testrunner/tests/acceptance/features/apiWebdavEtagPropagation1/moveFileFolder.feature:124 \
  BEHAT_FILTER_TAGS='~@skipOnOcis-OCIS-Storage'
```

The error should occur randomly in roughly 1 out of 10 runs.
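
Since the failure is intermittent, a simple way to catch it is to run the single scenario in a loop until it fails. This is just a sketch reusing the make invocation above; the skeleton and data-root paths are placeholders exactly as in the original command and must be adapted to your environment.

```bash
# Sketch: repeat the flaky scenario until it fails (paths are placeholders,
# same as in the make invocation above).
for i in $(seq 1 20); do
  echo "=== run $i ==="
  make test-acceptance-api \
    TEST_SERVER_URL=https://localhost:9200 \
    TEST_OCIS=true \
    SKELETON_DIR=/<path-to-your-dev-env>/testrunner/apps/testing/data/apiSkeleton \
    OCIS_SKELETON_STRATEGY="" \
    OCIS_REVA_DATA_ROOT='/home/<user>/.ocis/' \
    BEHAT_FEATURE=/<path-to-your-dev-env>/testrunner/tests/acceptance/features/apiWebdavEtagPropagation1/moveFileFolder.feature:124 \
    BEHAT_FILTER_TAGS='~@skipOnOcis-OCIS-Storage' \
    || { echo "Failed on run $i"; break; }
done
```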

@dragonchaser
Member

Since we tracked it down to the cache, here is a cdperf result comparing cached and uncached runs:

| Test | with cache | without cache | delta |
| --- | --- | --- | --- |
| most-used-sizes-upload | 0h06m04.3s | 0h06m16.8s | 12.5s |
| propfind-deep-100-files-45-nested-folders | 0h08m49.4s | 0h09m14.7s | 26.1s |
| propfind-deep-1000-files-5-nested-folders | 0h09m21.5s | 0h09m46.1s | 24.4s |
| propfind-flat-1000-files | 0h03m07.8s | 0h03m14.4s | 6.6s |
| upload-delete-restore-many-large | 0h00m57.4s | 0h00m31.2s | -26.2s |
| upload-delete-restore-many-small | 0h00m32.3s | 0h00m32.2s | -0.1s |
| upload-delete-trash-many-large | 0h00m44.2s | 0h00m45.7s | 1.5s |
| upload-delete-trash-many-small | 0h00m30.8s | 0h00m31.3s | 0.5s |
| upload-download-delete-many-large | 0h00m34.7s | 0h00m35.8s | 0.9s |
| upload-download-delete-many-small | 0h00m51.6s | 0h00m53.1s | 1.5s |
| download-delete-with-new-user | X 0h00m00.3s | X 0h00m00.2s | 0.1s |
| propfind-deep-rename | 0h01m36.2s | 0h01m39.1s | 2.9s |
| share-with-new-user | 0h00m44.2s | 0h00m45.5s | 1.3s |

@dragotin
Contributor

Why would the cache slow down the upload-delete test?
And in the propfind-deep cases, is the upload time measured as well?

@dragonchaser dragonchaser added Priority:p2-high Escalation, on top of current planning, release blocker and removed Priority:p1-urgent Consider a hotfix release with only that fix labels Aug 22, 2022
@dragonchaser dragonchaser removed their assignment Dec 1, 2022
@dragotin
Contributor

@dragonchaser @micbar is this done in the sense that this issue could be closed?

@micbar
Contributor

micbar commented Dec 27, 2023

Seems that everything was done here.

@dragonchaser moved it to "done" but forgot to close.

@micbar micbar closed this as completed Dec 27, 2023