--monitor not working as expected #1074
Very much a local drive. Before stopping the service, I locally created a file in /Lightburn called t.txt with no data. After starting the debug log, that file did not upload, so I created a new file, /Lightburn/t2.txt, and put some random text in it. Still, no upload.
@tibmeister
The above log shows:
All of this was performed on Ubuntu 20.04.1 LTS. In addition to the debug log, please can you provide:
Just double-checked, and no upload was performed even though the log says there was, so there's a disconnect somewhere.
Also, in the debug log provided, there is no entry for either t.txt or t2.txt. Please can you generate the debug log whilst replicating the issue. Below is an example using your config as garnered from your current debug log file - no issue in creating and uploading a file:
I created t2.txt while the debug log process was running. I am noticing in the regular logs (pasted in the original post) that the number of items being processed in the one folder is constantly changing. Is this something you expect to see based on your experience with the Graph API? I have not messed with the Graph API, so I am not very familiar with some of its potential oddities.
t2.txt does not appear anywhere within the log file at all. There is no entry in the file provided.
This is normal:
The Graph API bundles 'items' on OneDrive into change sets, where each change set can have between 200 and 300 items.
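For context, the paging behaviour being described can be sketched as follows. This is not the client's actual code, just a minimal illustration of walking a Microsoft Graph `/delta` feed, where each response page (a "change set") carries a batch of items plus either an `@odata.nextLink` (more pages follow) or an `@odata.deltaLink` (feed exhausted); the `fetch_page` callable is a stand-in for the real HTTP request.

```python
def iterate_delta(fetch_page, url):
    """Yield every item from a paged Graph delta response.

    fetch_page(url) must return a dict with a "value" list of items and,
    optionally, "@odata.nextLink" pointing at the next page. When no
    nextLink is present, the feed is exhausted (a real response would then
    carry "@odata.deltaLink" for the next incremental query).
    """
    while url:
        page = fetch_page(url)
        yield from page.get("value", [])
        url = page.get("@odata.nextLink")  # None -> no more pages
```

Because items arrive in these fixed-size batches, a client must iterate through every returned item to decide whether it matches its own sync rules.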
I was suspecting that, since I know that directory has way more than 300 files. One thing: since that directory is not in my sync_list, why are we processing it at all? Is there something in the config file I am missing?
I even get this, so something's wonky:
Unfortunately, due to how the Graph API handles objects, all objects returned from OneDrive need to be iterated through to determine a match. The API lacks a more granular way to iterate through folders and items to determine matches.
OK .. start from scratch.
If everything is in-sync:
OK, this will take some time; at last count I have somewhere around 300k+ files out there. Actually, the one directory in the logs above, opti_back, has 460k files itself. So breaking this into 200-300 item chunks will take most of the night, if not longer.
OK .. it also might be worth setting up a 'new' OneDrive personal account to ensure the client is doing what you want, without having to process that volume of files. I need to find out / work out why logs are not showing files being created / moved / deleted, or why those files are not even showing up in the application debug log file. Not having so many files to read through will make that a lot simpler. Further to this, this setting in your current configuration is not correct:
What this means is that every 60 seconds it is going to query OneDrive for changes. How often is your data being changed on OneDrive - by using Office 365? Surely that data is not changing every 60 seconds. If the process of checking all your files takes several hours, it is really pointless to kick it off again so quickly after it completes when your rate of change is very low or all your data changes are local. The default for this is 300 seconds, and it should be 'tuned' based on the rate of data change performed by manipulating data or files on OneDrive itself. I suspect that you might even be better off increasing that value to '43200' to perform a OneDrive check 12 hours after the completion of the previous scan.
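As a sketch, the tuning suggested above would look like this in the client's config file (assuming the default location `~/.config/onedrive/config` and the client's `key = "value"` syntax):

```
# Query OneDrive for remote changes every 12 hours instead of every 60 seconds.
# 43200 seconds = 12 hours; the shipped default is 300 seconds.
monitor_interval = "43200"
```

The right value depends on how often data actually changes on the OneDrive side; a low interval only helps when remote changes are frequent.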
Could this be an issue with kernel limits for inotify? It probably also depends on the number of directories, but with this number of files I guess the kernel infrastructure for watching files and notifying the application might be overloaded.
Could be; however, there should then be warnings generated re inotify not having enough file descriptors .. and in the debug log there is no reference to that, nor any inotify events at all.
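For anyone wanting to check the inotify theory themselves, the kernel limits can be inspected via the standard `/proc` interface on Linux (the 524288 figure below is just an illustrative value, not a recommendation from this thread):

```shell
# Inspect the current inotify limits:
cat /proc/sys/fs/inotify/max_user_watches
cat /proc/sys/fs/inotify/max_user_instances

# To raise the watch limit at runtime, one could run:
#   sudo sysctl fs.inotify.max_user_watches=524288
# (add the line to /etc/sysctl.conf to make it persistent)
```

If the watch count needed exceeds `max_user_watches`, applications normally receive an error when adding watches, which matches the observation above that warnings would be expected in the log.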
Sorry for the delay; I got busy over the last few days and got pulled away. So, after the full sync completed, I was able to perform CRUD operations on files and folders, and anything local instantly replicated up, while remote items replicated down according to the schedule (60 seconds). It seems that while a full sync is being performed, either no filesystem notification occurs, or it gets set into a queue, and as long as the full sync finishes and the service isn't restarted, the notification is processed. No remote changes are replicated during the sync either. I was able to verify that by starting a full sync, making a change in the middle, and, after the sync completed, watching the change happen. I repeated the same but interrupted the service in the middle, and the notification never got processed. Not sure if the service is a single-threaded application or if you could move the sync and inotify functions into separate threads. As of now, I would say that things are syncing as expected, as long as I don't make changes during this periodic full sync. Since I do have a rather large number of files in my OneDrive, the full sync takes about 3 hours each time it runs.
Correct. This is due to the OneDrive API and how events are currently handled. These local events, however, are queued and handled after the data from OneDrive has been synced. This is why it is very important to set the interval setting discussed above appropriately. Please also note that Microsoft OneDrive has performance limitations if you have > 300K files stored on OneDrive. This is something that I cannot fix. Microsoft has acknowledged in the past that this performance degradation starts at the 100K files stored level.
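The queue-then-drain behaviour described in this exchange can be sketched as follows. This is not the client's actual implementation, just an illustration of why events made during a full sync surface only after the sync completes, and why interrupting the service mid-sync loses them:

```python
from collections import deque

pending = deque()   # events captured while a full sync is running
processed = []      # stand-in for events actually uploaded/applied

def handle(event):
    processed.append(event)          # real code would upload the change here

def on_fs_event(event, sync_in_progress):
    if sync_in_progress:
        pending.append(event)        # deferred until the sync finishes
    else:
        handle(event)

def drain_queue():
    # Called once the full sync completes; if the process is killed before
    # this runs, the queued (in-memory) events are simply lost.
    while pending:
        handle(pending.popleft())
```

This matches the observation above: a change made mid-sync is applied after the sync finishes, but not if the service is restarted first.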
Refer to #232 for further details. Closing issue as not a bug or issue.
Can multiple --single-directory arguments be passed on the command line? I think this may be the better approach. I was hoping the sync_list would only look at the folders entered for sync, or that there could be a way to run in a mode that excludes everything except what is explicitly defined in sync_list. Kinda like doing selective sync on the Windows client. This would take the load off the service by not scanning everything, but rather just saying "give me folder x" and "give me folder y".
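For reference, the selective-sync idea floated here maps onto the client's `sync_list` file. A minimal sketch, assuming the default location `~/.config/onedrive/sync_list` and one folder path per line relative to the sync root (the folder names below are placeholders):

```
# ~/.config/onedrive/sync_list - only paths listed here are synced
Documents
Lightburn
```

Whether this avoids walking the full remote change feed is a separate question; as discussed above, the Graph API still returns all changed items, which then have to be matched against these rules.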
This issue has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs. |
Bug Report Details
Describe the bug
When making local changes, be it modify, add, or delete of files, the change is not reflected in OneDrive online even after waiting for the monitor period, currently set to 60 seconds. Restarting the service (systemd) will pull the deleted files back down instead of deleting them from online.
No errors in the logs; running with --enable-logging and --verbose from the systemd service file.
Application and Operating System Details:
OS: Ubuntu 20.04.1 LTS (Focal Fossa) (kernel 5.4.0-48-generic)
Are you using a headless system (no gui) or with a gui installed? GUI
OneDrive Account Type: Personal
Did you build from source or install from a package? Source
If you installed from source, what is your DMD or LDC compiler version (dmd --version or ldmd2 --version): DMD64 D Compiler v2.090.1
Application configuration: Output of onedrive --display-config
curl --version: 7.68.0
Log file filled with: