
client removes file on server if it could not create VFS placeholder file #3444

Open · isdnfan opened this issue Jun 15, 2021 · 51 comments

isdnfan commented Jun 15, 2021

I found some other issues where the client deleted files (like #1433), but my issue looks like a VFS-specific problem.

Expected behaviour

The client should not remove the server file if creation of the local placeholder file fails (maybe mark the local file as "dirty" or fall back to a full file download).


#=#=#=#=# Propagation starts 2021-06-14T13:01:05Z (last step: 368 msec, total: 368 msec)
||InstantUpload/Camera/20200306_234227.jpg|64|2|1583530946|638ef5870efff25b01a8063e9cfc17b7|4046796|00748461occ5uwmhix0f|2|Couldn't create placeholder info|0|4046796|1583530946||
||InstantUpload/Camera/20200314_120134.jpg|64|2|1584180094|9e934f897c5641ba2c4bc21139f7e5ee|9056787|00748459occ5uwmhix0f|2|Couldn't create placeholder 

Actual behaviour

For some reason the client could not create placeholder files. I have no idea why it even tried to create them - the majority of the affected files existed and had successfully synced months before (at least the client reported a successful sync). As a result, the client folder had no files anymore, and the subsequent sync removed all the files on the server side. Fortunately this happened to only a few hundred files and I could recover them from the server trashbin.


But the problem still existed: the client complained it could not create placeholder files, and removed the files again!

At the same time, access to existing placeholder files (with the blue cloud icon) was not possible; the error was "0x8007016A The cloud file provider is not running." This error is often reported for OneDrive, and a computer restart is the recommended solution. After a client restart I can access placeholder files again. I have no clue how to find out whether some process or service crashed - the event logs don't say anything.


Steps to reproduce

no idea.

Maybe this is related: a short time before, I added a huge folder with my photo archive (600 GB, >50k files). From my impression it synced successfully (with VFS), but it might have introduced performance problems in the client. The client feels totally overloaded - clicking on settings or the properties of any folder results in minutes of unresponsive UI and heavy CPU usage.


The files were initially removed by the client and I recovered them from the server trashbin. The removed files didn't reside in this huge folder but in other ones: InstantUpload and 2-3 others (completely random in my eyes). After this happened twice I switched the specific folders to "Always available locally" and the client successfully downloaded all the files.

Client configuration

Client version: Version 3.2.2stable-Win64 (build 20210527)
Operating system: Win 10 1909
OS language: EN
Installation path of client:

Server configuration

Nextcloud version: 21.0.2 (docker/apache)
Storage backend (external storage): mysql

Logs

I have a debug archive and Nextcloud_sync.log from the problematic period, but I'm not willing to upload the complete file for privacy reasons (debug archives are up to 25 MB, with an extracted log of 300 MB). Please advise how to find and extract the relevant parts.

  1. Client logfile:
    Since 3.1: Under the "General" settings, you can click on "Create Debug Archive ..." to pick the location of where the desktop client will export the logs and the database to a zip file.
    On previous releases: Via the command line: nextcloud --logdebug --logwindow or nextcloud --logdebug --logfile log.txt
    (See also https://docs.nextcloud.com/desktop/3.0/troubleshooting.html#log-files)

  2. Web server error log:

  3. Server logfile: nextcloud log (data/nextcloud.log):

@allexzander (Contributor)

@isdnfan can you upload logs here https://cloud.nextcloud.com/s/botn4JSYfMR83xt

isdnfan (Author) commented Jun 18, 2021

Hi @allexzander, thank you for your time. I uploaded the debug archive nc-vfs-camera.zip and Nextcloud_sync.log. From the latter it looks like the sync cycle at 10:07 failed to create a lot of placeholder files:

#=#=#=#=# Propagation starts 2021-06-14T10:07:20Z (last step: 43256 msec, total: 43256 msec)
||InstantUpload/Camera/20200403_105701.mp4|64|2|1585904238|82ed7969538b101de46ad76200aae59f|28445625|00748486occ5uwmhix0f|2|Couldn't create placeholder info|0|28445625|1585904238||
||InstantUpload/Camera/20200408_175417.jpg|64|2|1586361256|3da2601dcb91e63ed48917809c7a86f1|8062342|00748496occ5uwmhix0f|2|Couldn't create placeholder info|0|8062342|1586361256||
||InstantUpload/Camera/20200408_175418.jpg|64|2|1586361258|efdcc469711e24e715053592fe059068|7272802|00748497occ5uwmhix0f|2|Couldn't create placeholder info|0|7272802|1586361258||

And then something happens at 2021-06-14T10:08:22 - most likely the deletion, but I can't see it easily from this log:

10:08:22||InstantUpload/Camera/20200408_175418.jpg|2|1|1586361258|efdcc469711e24e715053592fe059068|7272802|00748497occ5uwmhix0f|4||204|7272802|1586361258|3f7140fa-4854-4ea7-adc6-bf047d1b4c8f|
10:08:22||InstantUpload/Camera/20200408_175417.jpg|2|1|1586361256|3da2601dcb91e63ed48917809c7a86f1|8062342|00748496occ5uwmhix0f|4||204|8062342|1586361256|b7a0e688-eccb-44f2-9f84-7224ce7b57cd|
10:08:22||InstantUpload/Camera/20200403_105701.mp4|2|1|1585904238|82ed7969538b101de46ad76200aae59f|28445625|00748486occ5uwmhix0f|4||204|28445625|1585904238|edb92e1b-5181-4ef7-989d-4cef229d748f|

In total the problem happened twice that day: once at 10:xx, then I restored the files around 13:00 and they got deleted again about an hour later, around 14:xx.

@allexzander (Contributor)

@isdnfan Thank you. We have received your logs.

Are you able to create files manually in the location mentioned in the logs? Can those files be synced if you disable VFS mode and use the same location for your local sync folder?

isdnfan (Author) commented Jun 18, 2021

As I stated above:

After this happened twice I switched the specific folders to "Always available locally" and the client successfully downloaded all the files.

But I think there was some general problem with VFS, as I was unable to access non-hydrated files at this time (error "0x8007016A The cloud file provider is not running."). After a reboot, access to the non-hydrated files worked again.

Now I switched the folder back to "Free up local space" and the files were replaced by placeholders within seconds.

UPDATE: #3452 and #3447 look related to me. I could imagine that I killed VFS by sending the client to sleep, or maybe I even killed the client when it was unresponsive for a long time.

@Discostu36

I had a similar but not exactly the same error today:

  1. I had my whole Nextcloud synced to my hard drive
  2. I enabled virtual files for two folders containing a few thousand files
  3. Nextcloud Desktop gave me many (many many!) warnings
  4. To stop the obviously broken process, I deactivated virtual files
  5. Instead of now downloading the original files again, Nextcloud Desktop started deleting about 3,500 files from these folders on the server
  6. I had to undelete the files from the trash; then Nextcloud Desktop synced them to my hard drive again

I'd be willing to share my debug file, but not in public.

@github-actions

This bug report did not receive an update in the last 4 weeks. Please take a look again and update the issue with new details, otherwise the issue will be automatically closed in 2 weeks. Thank you!

@github-actions github-actions bot added the stale label Jul 31, 2021
isdnfan (Author) commented Aug 5, 2021

For me the issue did not happen again, but as far as I can see the root cause was never found, so the problem could reoccur.

@allexzander (Contributor)

@Discostu36 If you happen to have this debug log still around, it would be nice if you could upload it to https://cloud.nextcloud.com/s/ozbSCx5wGDrtRGQ

@allexzander (Contributor)

@isdnfan It could've been improved by fixing numerous other bugs with VFS between 3.2.2 and 3.3.0.

@Discostu36

@Discostu36 If you happen to have this debug log still around, it would be nice if you could upload it to https://cloud.nextcloud.com/s/ozbSCx5wGDrtRGQ

I deleted it some days ago, but it might still be in the trash; I will have a look this evening.

@allexzander (Contributor)

@Discostu36 Sorry for not paying attention to this earlier; somehow I missed your reply. If the file is no longer there, you may also want to try the latest 3.3.0 version to see if the issue is still present. https://nextcloud.com/install/#install-clients

isdnfan (Author) commented Aug 6, 2021

It could've been improved by fixing numerous other bugs with VFS between 3.2.2 and 3.3.0.

@allexzander it's true, I have the feeling the new version works better in terms of performance and stability. But I didn't see any change addressing the three underlying problems we see here:

  • the client always considers the local file state as the reference: if a file doesn't exist locally, it is removed from the cloud
  • if there is a problem with VFS (the client fails to create a placeholder file), the client doesn't remember the problem
  • if there is a changed/new file on the server, the client touches/syncs the whole directory (which in my case made other placeholder files fail as well, so the problem of one file was multiplied to all files in the directory), and as a result it removes all the contents of the directory where this file lives, including previously synced files untouched for ages

The first issue is hard: I don't know if there is a good solution. It is easy to monitor file changes while the client is running, but what should happen if the user removes local files while the client is stopped? Prefer server/client, or raise a conflict? I would prefer manual problem resolution in this case. Additional interaction is bad in terms of user experience, but it gives the user a chance to avoid data loss (depending on the server trashbin settings, files may be completely removed). As a reference: MS OneDrive asks for additional confirmation to remove files in the cloud if the user removes a lot of files locally.

The second issue is easier to handle in my eyes:

  • if there is a problem with VFS, mark/remember that this file is broken locally and try to sync it from the server next time (repeat until successful)
  • if there are multiple issues with VFS, completely stop syncing to avoid ending up with a different state on the next run

I have no idea about the third one - in my eyes there is no reason to sync/touch the whole directory if only one file changed. Maybe there is a reason, but even then the client should become more fail-safe: if there is some local problem (VFS problem, local hardware issue, exhausted local storage, permission problem), the client should stop syncing until the problem is resolved (or, even better, only upload new files to the cloud).

I really appreciate your feedback (at least as documentation of how it is expected to work).
Feel free to close the issue if you feel all these problems are already addressed or not relevant at all.

@github-actions github-actions bot removed the stale label Aug 6, 2021
@allexzander allexzander added the confirmed bug approved by the team label Aug 7, 2021
@mgallien (Collaborator)

(quoting @isdnfan's full comment from Aug 6, 2021 above)

Do you have any information that could help us understand why the files could not be created?

The idea is that, to improve reliability, we can always make it retry, but that would only partially solve your problem. After all, you want your files, right?

isdnfan (Author) commented Sep 3, 2021

@mgallien I don't get your point. I have no idea what caused the problem. I remember the client was unstable at that time - it was eating CPU, created huge local DB files, and actions in the UI lagged. I'm sure I killed the client multiple times; additionally I might have interrupted some operation by suspending the PC in the middle of it. As a result, VFS was broken (placeholder file creation failed) - the rest was fine, and the client successfully downloaded all the files from the affected folder after I changed it to "make available locally". New client versions look more stable, so the issue might not happen anymore (or less frequently). But the issue uncovers some facts about the sync process which could be improved to make the client safer - especially the fact that the client doesn't remember that the local state is unhealthy and therefore removes files from the server is really bad and worth attention (as is the question of why the whole folder is touched when only one file changes).

I'm happy to discuss the logic of the sync process - maybe I don't understand something. Described more generically, the issue that happened here is:

  • the file exists on the server and doesn't exist on the client
  • the client removes the file from the server

In my eyes the logic must be different, at least in these cases:

  • when the client performs its first sync after a restart
  • when the previous sync was not successful / had errors

In other words, a client must not delete files on the server until it is confident that the local state is healthy and holds a full copy of the user's files. Otherwise it should replicate the server state, because the server is the only instance that knows what happened during the time the client didn't run or didn't sync properly.

@ImanuelBertrand

I'm currently experiencing the same problem. Two days ago, the client (3.3.2 on Windows 10, "wincfapi" used for virtual files) deleted 12.000 files, ~55GB in total.
They are still in the trashbin, so they are not completely lost, but restoring them via the web frontend (1) is not feasible and (2) didn't work because of file-locking exceptions (I'm using Redis for file locks, in case that's relevant).
Is there a way to restore them via CLI? Otherwise I'm already working on locally restoring a server backup and uploading the files through the client (on a different machine than the client that deleted the files). I guess I'll have to resolve the file locks prior to that, though.

Support for virtual files is now disabled in the client.
I have created a debug archive, but the logs don't cover the relevant time frame; they start today.

The client (that deleted the files) is installed on a new laptop. At first, the sync with virtual files seemed to work fine (no DELETE calls in the web server log), so it was not an issue with the initial setup of the client.
First call in the webserver logs: 2021-08-28 12:48:20 +0200
First DELETE call: 2021-09-03 23:28:35 +0200
The client was running fine for 8 days and then started to delete all files.
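(For anyone trying to pin down the same moment in their own logs, here is a minimal sketch; it assumes a standard Apache combined access log at /var/log/apache2/access.log and the default WebDAV endpoint - adjust the log path and user path to your setup.)

# list the first WebDAV DELETE requests issued by a sync client
grep '"DELETE /remote.php/dav/files/' /var/log/apache2/access.log | head -n 20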

The log excerpt is from today, prior to disabling virtual files.
The filenames were the ones I tried to recover in the web frontend.

excerpt.log

isdnfan (Author) commented Sep 5, 2021

Thank you @ImanuelBertrand for showing that the issue exists on new versions as well. Do you have any idea what the root cause might be for the placeholder files failing to be created? Did you recognize any atypical pattern or any issues (maybe with other programs)? Anything interesting in the Windows event logs?

svenb1234 commented Sep 5, 2021

because the server is the only instance that knows what happened during the time the client didn't run or didn't sync properly.

This is not true if a server was migrated; then the client knows better what was added/removed while the server was down. However, I agree that avoiding data loss is the main objective. Thus files should be kept on the server if one cannot be 100% sure that deleting them was triggered by the user.

Adding files or keeping them by mistake is a lot less of a problem compared to silently deleting files.

ImanuelBertrand commented Sep 5, 2021

I remember nothing atypical.

2021-09-03 23:25:15 +0200: EL: System booted
2021-09-03 23:25:16 +0200: EL: Error related to Microsoft Wi-Fi Direct Virtual Adapter
2021-09-03 23:25:25 +0200: EL: Windows complained about an incorrect license key (fixed in the meantime)
2021-09-03 23:25:25 +0200: EL: Many entries of "Die Anmeldeinformationen in der Anmeldeinformationsverwaltung wurden gelesen." ("The credentials stored in Credential Manager were read")
2021-09-03 23:25:37 +0200: EL: Windows Defender status successfully set to SECURITY_PRODUCT_STATE_ON
2021-09-03 23:25:58 +0200: ManicTime started tracking
2021-09-03 23:27:09 +0200: Started Dell Mobile Connect
2021-09-03 23:28:44 +0200: Opened Settings to uninstall Dell Mobile Connect
2021-09-03 23:30:08 +0200: EL: System powered down

The entries prefixed with EL are from the Windows event log.
The difference between the system clocks of the laptop and the server (regarding the server logs above) is within one second.
The system is a Dell XPS, if that's somehow similar to your system.

mgallien (Collaborator) commented Sep 6, 2021

@mgallien I don't get your point

There are two points in reply:

  1. Rest assured that we share your concerns about reliability; many of the changes in versions 3.2 and 3.3 were made for that reason, and unfortunately we are not yet done.
  2. I would also like to understand how to trigger this, so that the trigger condition can be removed.

My point is that any error in placeholder file creation would block the sync, and I guess that is not what you want. We then need to understand why this is happening, no matter how much time we spend on improving sync reliability.

ImanuelBertrand commented Sep 6, 2021

The way I see it, blocking the sync would be preferable to deleting files. Of course that distinction is only relevant as long as the issue is not found and fixed.
I would have been grateful for a message like this: "Couldn't create placeholder info. Retry or proceed (proceeding will delete affected files on all devices)".
I mean, "grateful" would probably be an overstatement since it is still an error message, but I'd rather have an annoying error message than data loss.

isdnfan (Author) commented Sep 6, 2021

@mgallien

My point is that any error in placeholder file creation would block the sync

This is exactly what I suggest (at least as long as the client works as it does now), because not stopping the sync results in files being deleted on the subsequent run.

@ImanuelBertrand

I would have been grateful for a message like this: "Couldn't create placeholder info..."

Exactly: if you stop syncing and give the user a good hint where/how to provide a bug report, chances are higher that you get the reports and data you are looking for.

Silently going forward and removing files results in:

  • a high risk of data loss (because users may not notice that data was lost - for me it was pure luck that I saw this in the activity feed)
  • completely uncoordinated user reactions (reports through different channels with different wording)

I completely agree with @ImanuelBertrand - a hard fail is better than continuing somehow and causing data loss (or at least a lot of restore work). I think we all agree this is a really bad situation which should never happen - but it happens. Three users managed to identify the problem and report it to the same bug report within a short time since VFS was released. I suggested some mitigations - I have no idea if they are suitable or complete nonsense - I didn't receive any response.

isdnfan (Author) commented Sep 6, 2021

@svenb1234

This is not true if a server was migrated; then the client knows better what was added/removed while the server was down.

I only partially agree. In general we must consider the server as the most stable part. The scenario you describe only works as long as only one client is involved. What if you have 5 clients? Should each client move/add/change files only because its local data is different from the server? What if you restore the server for some reason?

There are a lot of moving parts in the system, but in general the sync is built around the server - it must be the "root of trust". If the server has crashed or has been restored, the admin should perform some action (maybe there is a way to automate it) to inform the clients that a new full sync is required. The other way round, the client should never take priority over the server; it must only perform actions when it knows the action is intended, e.g. remove a file only after a successful sync cycle (full sync on start).

There are definitely ways to improve the sync, like keeping a journal (like a database transaction log) so one doesn't have to rewind all the history - but in general every endpoint must ensure it doesn't work on an invalid/incomplete data set.

@martijnvandijk

This just bit me too. Deleted close to 40k files from my server before I noticed.

@diditopher

I believe a similar issue exists with VFS disabled, possibly caused by long file paths with more than 260 characters.
Either way, when I rename specific folders on one client, the other client subsequently deletes all contents of the newly renamed folder.

There are also reports of the client deleting files from the server if the client's disk runs out of free space.

I think the underlying issue is that the client doesn't remember if it fails to create a file locally. Ignoring the issue and continuing as if the file was successfully created is just asking for trouble.

isdnfan (Author) commented Oct 2, 2021

I recently noticed #3731 - the client stops the sync cycle if a file is blocked by an AV program - all good (despite the fact that it crashes). Something similar must happen for other problems: if the sync fails for some reason, inform the user and stop until the problem is fixed.

kottonau commented Oct 9, 2021

Happened to me as well after enabling VFS. We went back and disabled VFS again since this never happened before.

nextcloud: 21.0.5
client: 3.3.5

isdnfan (Author) commented Dec 3, 2021

Another problem, #4016, could have been avoided by the measures I suggested before:

the client always considers the local file state as the reference:

which must not be the case!

In general we must consider the server as the most stable part

@francescop75

Client 3.4.1/Windows, server 22.2.0 - this problem is still happening with VFS.

daralph commented Jan 26, 2022

Had the same problem within my organisation: 6,800 files deleted.
Prior to that, the Windows client had a sync error and then deleted the files.
Virtual Files enabled.

@mgallien (Collaborator)

To all: we have integrated a fix that should solve this issue:
#4191
If you can test it and provide feedback, that would be highly appreciated.

tob123 commented Jan 31, 2022

Assuming you want feedback: my files were in the middle of being deleted. I upgraded to 3.4.2 and I see the deletion just resumes.
More detail:
a) Folders that I have recently worked on still have their files intact. Older files in older folders have been removed, or are in the process of being removed.
b) As soon as I saw my client start removing files, I also revoked access to various external storage mountpoints (a few of them read-only).

Environment: latest Nextcloud 22 server, Windows 10, desktop client 3.4.2, VFS active.

What do you mean by "the issue is fixed"?
Should I reset the sync state on the Nextcloud client first, and if so, how?

@allexzander (Contributor)

@tob123 If you are using the same sync folder with the new 3.4.2 client, then, indeed, the deletion may continue, as the folder is still in the state where files have been removed from it, and the desktop client will also delete them on the server.
Best if you can start the client with the Internet connection turned off, remove the old sync folder, and add a new one. Then the deletion should stop.

@svenb1234

I could not find this hint in the release notes. How is John Doe supposed to know this, let alone how to fix it? Shouldn't the update take care of solving the issue?

tob123 commented Feb 5, 2022

@allexzander thanks for the tip. I wanted to see whether I could restore from Nextcloud's trashbin instead of classical backups. It's in progress now, based on some steps I thought would be useful to share, although restoring this way is by far not a smooth experience yet.

Summary:
- use curl to get the XML data from the trashbin
- filter based on your needs (deletion time, etc.)
- use curl again to execute the MOVE (this is quite slow, probably due to logging in each time, but at least it runs stably in my case)

I could not find a ready-to-use Python WebDAV client that could do this job any more simply; also, Python requests does not support "MOVE" natively.
Documentation on how Nextcloud can be reached over WebDAV:
https://docs.nextcloud.com/server/latest/developer_manual/client_apis/WebDAV/basic.html

Details:

Step 1: use curl to see what is in the trashbin.

curl -u <YOUR_USERNAME>:<YOUR_PASSWORD> '<YOUR_NEXTCLOUD_URI>/remote.php/dav/trashbin/<YOUR_USERNAME>/trash' -X PROPFIND --data '<?xml version="1.0" encoding="UTF-8"?>
<d:propfind  xmlns:d="DAV:" xmlns:oc="http://owncloud.org/ns" xmlns:nc="http://nextcloud.org/ns">
  <d:prop>
        <d:getlastmodified />
        <d:getetag />
        <d:getcontenttype />
        <d:resourcetype />
        <oc:fileid />
        <oc:permissions />
        <oc:size />
        <d:getcontentlength />
        <nc:trashbin-filename />
        <nc:trashbin-original-location />
        <nc:trashbin-deletion-time />
  </d:prop>
</d:propfind>' 

this will create output similar to:

<?xml version="1.0"?>
<d:multistatus xmlns:d="DAV:" xmlns:s="http://sabredav.org/ns" xmlns:oc="http://owncloud.org/ns" xmlns:nc="http://nextcloud.org/ns">
  <d:response>
    <d:href>/remote.php/dav/trashbin/demo/trash/</d:href>
    <d:propstat>
      <d:prop>
        <d:resourcetype>
          <d:collection/>
        </d:resourcetype>
      </d:prop>
      <d:status>HTTP/1.1 200 OK</d:status>
    </d:propstat>
    <d:propstat>
      <d:prop>
        <d:getlastmodified/>
        <d:getetag/>
        <d:getcontenttype/>
        <oc:fileid/>
        <oc:permissions/>
        <oc:size/>
        <d:getcontentlength/>
        <nc:trashbin-filename/>
        <nc:trashbin-original-location/>
        <nc:trashbin-deletion-time/>
      </d:prop>
      <d:status>HTTP/1.1 404 Not Found</d:status>
    </d:propstat>
  </d:response>
  <d:response>
    <d:href>/remote.php/dav/trashbin/demo/trash/Nextcloud%20intro.mp4.d1643722857</d:href>
    <d:propstat>
      <d:prop>
        <d:getlastmodified>Tue, 01 Feb 2022 13:40:57 GMT</d:getlastmodified>
        <d:getetag>1643722857</d:getetag>
        <d:getcontenttype>video/mp4</d:getcontenttype>
        <d:resourcetype/>
        <oc:fileid>1458</oc:fileid>
        <oc:permissions>GD</oc:permissions>
        <oc:size>3963036</oc:size>
        <d:getcontentlength>3963036</d:getcontentlength>
        <nc:trashbin-filename>Nextcloud intro.mp4</nc:trashbin-filename>
        <nc:trashbin-original-location>Nextcloud intro.mp4</nc:trashbin-original-location>
        <nc:trashbin-deletion-time>1643722857</nc:trashbin-deletion-time>
      </d:prop>
      <d:status>HTTP/1.1 200 OK</d:status>
    </d:propstat>
  </d:response>
</d:multistatus>

Step 2: filter what you want to restore.

You can use the tools yq/jq to get the files you want. For the next step you need the reference that appears in the XML as href (in the example above: /remote.php/dav/trashbin/demo/trash/Nextcloud%20intro.mp4.d1643722857).
link to yq: https://kislyuk.github.io/yq/
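A minimal filtering sketch, assuming the PROPFIND response from step 1 was saved to a file trash.xml and that the xq wrapper shipped with the yq package above is installed; the .d<unixtime> suffix of each trashbin entry encodes the deletion time, and 1643722000 is just a placeholder timestamp:

# pull every href out of the PROPFIND response, drop the trash root itself,
# and keep only entries deleted at or after the given unix timestamp (.d<unixtime> suffix)
xq -r '.. | ."d:href"? // empty' trash.xml \
  | grep -v '/trash/$' \
  | awk -F'\\.d' '$NF >= 1643722000' > hrefs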

Have a list of files ready in a file called "hrefs"; in the example below there are 2 files for user "demo":

/remote.php/dav/trashbin/demo/trash/Nextcloud%20intro.mp4.d1643722857
/remote.php/dav/trashbin/demo/trash/Reasons%20to%20use%20Nextcloud.pdf.d1643722880

Step 3: restore using curl.

Recommendation: use screen, VNC or something else that makes sure you do not lose your session while the following runs.
For ~10,000 files this can take a long time (more than 8 hours).

for i in $(cat hrefs); do
  # last path component = the trashbin entry name (e.g. "Nextcloud%20intro.mp4.d1643722857")
  file_ref=$(echo "$i" | awk -F'/' '{print $NF}')
  echo "$file_ref"
  # double quotes around the header are required so ${file_ref} gets expanded
  curl -u <YOUR_USERNAME>:<YOUR_PASSWORD> -X MOVE --header "Destination: <YOUR_NEXTCLOUD_URI>/remote.php/dav/trashbin/<YOUR_USERNAME>/restore/${file_ref}" <YOUR_NEXTCLOUD_URI>${i}
done

mgallien (Collaborator) commented Feb 9, 2022

@tob123 whoah this is really nice !!!!
I will make sure to have a look
thanks !

tob123 commented Feb 9, 2022

Thanks for the feedback. I could restore everything I wanted this way (~35,000 files). The advantage seems to be that it is quite consistent (an item that got removed gets pulled out of the trashbin back to its original location). It comes at a cost in terms of performance and the time needed to find what you want restored. Using screen is recommended when you execute step 3; I will add that.

tob123 commented Feb 11, 2022

Some more feedback on client 3.4.2: from my end it runs stably now. Some remarks/concerns are still left, see below.
Environment:
server on latest 22.x
clients 3.4.2

remarks:

  1. It's the second time I have encountered this issue, and as a workaround I created (again) a new empty directory so a clean sync could start. I guess time will tell whether the issue comes back or not, and I will of course let you know.
  2. I read here and there that there are some conditions/designs in the Nextcloud client for deleting files. What are those conditions?
  3. Smaller issue: if the server is down and I try to open a file that is not synced yet, it retains this state when the server is back up (I cannot download/open the file; it keeps giving the same error that it cannot be opened). Workaround: sync/download the folder the file is in.
  4. As I remarked in "Delete files as first action when syncing" #4070: VFS syncs on demand, but it still administers every file it can see and is in the end a sync engine. 90% of my data does not need to be in this "sync administration". So perhaps one could consider an interface that does no syncing to Nextcloud but only allows online access and in-place modifications (similar to how RaiDrive connects to WebDAV, for example)? Or, to put it differently: let the desktop client behave similarly to the mobile client for some of my data. For starters, just two buckets (one that does not sync, one that does) would be a big help. (Again: VFS now forces me to choose "sync administration" for all data available to me as a user.)

For remark 3: should I create a new issue/bug?

Tobias.

eibex commented Feb 18, 2022

I thought 3.4.2 was stable as I haven't had an issue in a long time.

I randomly checked my taskbar after launching Windows only to find TWO instances of Nextcloud running.
I closed one of them as it was full of "Couldn't create placeholder" errors. The other then showed a green tick.

A few seconds later the other one started showing the same errors. I forcefully closed it, only to find 705 files in the trashbin (luckily they weren't E2EE files).

The client was trying to "update" thousands of files, deleting them for good.

If I hadn't randomly opened the taskbar, I would have lost thousands of files.

@george2asenov

We have the same issue. More than 5,000 files were deleted/moved to the NC trash bin. We have found that turning the client PC off and on triggers it somehow: first it shows the error that it can't update VFS data, then it starts deleting whole folders with hundreds of documents.
Now we have to tick every single file to restore it from there, which is painfully slow (the restore itself, not only the clicking). Many errors say it can't restore. We tried to debug this slow restore, but the server is nearly idle.
We are considering moving away from NC - it is not reliable. Not being able to sync your files is not so bad, but deleting all your files from both the server and locally is a nightmare.

tob123 commented Apr 12, 2022

@george2asenov I encountered similar issues, but the behavior is healthy now (using Nextcloud latest 22 or 23 AND desktop version 3.4.4). I agree the issue is (or at least has been) severe. Can you share what versions you are running (including for the users that had the issue)?

bessw commented Apr 24, 2022

I've got two problems with VFS (I'm not absolutely sure if they are related to this issue, I can create a new issue if not):

  • Yesterday after enabling VFS it started to create placeholder files for previously excluded folders (as expected).
    In a folder with images it created a placeholder file for the gpx track but then for the first two images (P1080184.JPG and P1080185.JPG):

    • it created .P1080184.JPG.~3656 and .P1080185.JPG.~1b02 and reported an error that it couldn't sync the metadata for the images and stopped syncing
    • on the next retry it created ..P1080184.JPG.~3656.~2c96
  • After that I removed the sync profile, moved the folder away and started a fresh sync with VFS enabled. The previous issue didn't reappear, but now I get errors for files with .bin, .dll and .info extension:
<file_path_and_basename>.bin: Die Datei wurde vom Server gelöscht ("The file was deleted from the server") (Sabre\DAV\Exception\NotFound)
    As far as I can tell the files still exist on the server and it doesn't delete them.

    • UPDATE: since these files aren't important, I "fixed" it by adding them to .sync-exclude.lst; now the placeholder files are displayed as sync pending, but the error message is gone and the sync client stopped retrying all the time.
      => Nothing was deleted, even though the error message said otherwise; therefore I think it's unrelated to this issue.

@george2asenov

@george2asenov I encountered similar issues, but the behavior is healthy now (using Nextcloud latest 22 or 23 AND desktop version 3.4.4). I agree the issue is (or at least has been) severe. Can you share what versions you are running (including for the users that had the issue)?

Nextcloud Hub II (23.0.3)

This is the one user that faced the issue, but that is enough when the files are important documents.

@rgreeott

We have seen this exact issue with NC 24.0.5 and client version 3.6.2. We have up to 60 clients syncing, with a typical load of 32.

@lorenzo93

Just had the same problem: a whole directory of music files completely erased. Is it possible that 2 years later this problem is still not addressed?

@ballit6782

We've just had thousands of files from multiple groups deleted by a single user without them doing anything, presumably from the same problem. Restoring wasn't too bad thanks to occ trashbin:restore and being able to determine the deletion date from the last modification time of the emptied directories.
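(For reference, a minimal sketch of the restore command mentioned above, assuming shell access to the server, that the web server user is www-data, and a hypothetical user id "alice"; note that this restores the whole trashbin content of that user:)

sudo -u www-data php occ trashbin:restore alice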

However, this was a very stressful moment, as we had multiple users reporting that all of the data from their collective was gone. We have many users, so it is hard to know which ones use Windows in VFS mode, and even harder to make sure they update the client. We're also not sure that updating is going to fix the problem.

Our server version is 28.0.2, client version is unknown at this time (we're not even sure which user caused this yet, and it is very hard to investigate).

@pierreozoux (Member)

https://github.com/nextcloud/server/blob/master/config/config.sample.php#L2008-L2020

:)
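(A minimal sketch of how this could be applied, assuming the linked config.sample.php lines refer to the minimum.supported.desktop.version option and that 3.4.2 is the oldest client you want to allow; run it as the web server user:)

sudo -u www-data php occ config:system:set minimum.supported.desktop.version --value="3.4.2"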

isdnfan (Author) commented Feb 21, 2024 via email

@george2asenov

This issue happened again on 27.x. The app version is 3.4.1 on Windows 10.0.19045. The client logs just show the errors (see screenshot) right from the PC/app startup.
Also, the trash wasn't working (i.e. not showing anything), which made it a lot worse. An NC update fixed the trash issue.
I have updated to 28.0.2 and hope it will fix the issue.

q-wertz (Contributor) commented Feb 22, 2024

I think you should update the CLIENT from version 3.4.1 (which is from Dec 20, 2021 and now over 2 years old) and not the server.

How should the server distinguish between intentional removals and ones that were caused by a bug in the client?

Additionally, you can refer to the hint @pierreozoux gave about denying such old clients access to your server:

https://github.com/nextcloud/server/blob/master/config/config.sample.php#L2008-L2020

:)

As long as you cannot reproduce the behavior on a new client version, the issue seems to be an issue with your local administration.

Insprill commented Apr 2, 2024

I believe I've just run into this issue; correct me if this is unrelated. The client is saying "Access denied" when syncing a few folders, which I'm assuming is something I broke recently, but it's then deleting all of those folders on the server. I went to look for a video today and noticed that several folders were missing (>70GB!), and I had to recover them from the server's trash bin.

@ImanuelBertrand

This sounds like the same issue to me, or at least a similar one. Do you have VFS enabled?
In my case the log contained the message "Couldn't create placeholder info", which might be a consequence of denied file or folder access.
