shiori update doesn't clear cache after it's finished, maxing out /tmp/ dir. #966
I came to report just this. I have my suspicions that it has to do with a mismatch of packages used to create and remove the files. In these lines the file is created with the stdlib but the afero abstraction is used to clean up instead. That abstraction provides its own […]
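To illustrate the kind of mismatch being described, here is a minimal, hypothetical sketch (not shiori's actual code): a temp file created with the stdlib lands in the real temp dir, while an afero `BasePathFs` rooted elsewhere (the data-dir path below is made up) resolves the same name relative to its base, so the delete misses the real file:

```go
package main

import (
	"fmt"
	"os"

	"github.com/spf13/afero"
)

func main() {
	// Created with the stdlib: the file lands in the real temp dir,
	// e.g. /tmp/archive123456789.
	f, err := os.CreateTemp("", "archive")
	if err != nil {
		panic(err)
	}
	f.Close()

	// Cleaned up through an afero filesystem rooted at a hypothetical
	// data dir: BasePathFs prepends its base, so Remove targets
	// <dataDir>/tmp/archive123456789 instead, and the real file leaks.
	dataFs := afero.NewBasePathFs(afero.NewOsFs(), "/var/lib/shiori")
	fmt.Println(dataFs.Remove(f.Name())) // remove fails: no such file
}
```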
It looks like there's also a double delete 🤔
OK, I can confirm the cause of the issue. I executed under […]
As seen, using the abstraction constructs a different path because the […]
BTW, using […]
To quickly back up shiori before this, do I need to copy a directory or a database file or what? Then I can build and test that version fine.
I would say only copy the binary, but if you want to be extra cautious, maybe copy the data dir.
It didn't work - I cloned your forked repo, switched to the branch, and built it with the command. I ran it using the binary ([…]
What do the files look like for you? Mine were […]
Looking more closely, I think I understand why I didn't solve your issue. I was dealing with the leak itself by running […]. @Monirzadeh I think a reasonable fix would be to have an env var to limit concurrency here, what do you think? If that's OK with you I may make a PR. I can first do a test version for @Skivling to test.
Oh, there is a semaphore already there, never mind. I still think the concurrency is the issue, though. Maybe the archives are individually too big. Concurrency is set to 10 simultaneous archives right now.
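For reference, a minimal sketch of the channel-based semaphore pattern under discussion; the limit of 10 matches the number mentioned above, but the function and helper names are made up:

```go
package main

import "sync"

// processBookmarks runs at most `limit` archive jobs at once
// (illustrative names, not shiori's actual code).
func processBookmarks(ids []int, limit int) {
	sem := make(chan struct{}, limit) // counting semaphore
	var wg sync.WaitGroup
	for _, id := range ids {
		wg.Add(1)
		sem <- struct{}{} // take a slot before starting work
		go func(id int) {
			defer wg.Done()
			defer func() { <-sem }() // give the slot back when done
			archiveBookmark(id)      // fetch and write one archive
		}(id)
	}
	wg.Wait()
}

func archiveBookmark(id int) { /* hypothetical per-bookmark work */ }

func main() { processBookmarks([]int{1, 2, 3}, 10) }
```

Note that this caps how many jobs run at once, not how much temp space they use, which is why individually large archives can still fill /tmp.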
@Skivling this branch on my fork should be useful to confirm or reject the hypothesis: test/limit_concurrency.
The limit concurrency worked 🎉, although now I think I didn't switch branches properly the first time and actually just built the main branch, because I only put […]
OK. The limit concurrency branch was required to fix the issue.
Setting it as an env variable would be good. Maybe on my system I'd set it to 3 so it isn't as slow but still avoids the issue. I'd be happy to test a new branch.
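A minimal sketch of what such a knob could look like; the variable name SHIORI_UPDATE_CONCURRENCY is hypothetical, and the default of 10 is taken from this thread, not from shiori's actual config:

```go
package main

import (
	"fmt"
	"os"
	"strconv"
)

// updateConcurrency reads a hypothetical SHIORI_UPDATE_CONCURRENCY env var
// and falls back to the default of 10 mentioned in this thread.
func updateConcurrency() int {
	if v := os.Getenv("SHIORI_UPDATE_CONCURRENCY"); v != "" {
		if n, err := strconv.Atoi(v); err == nil && n > 0 {
			return n
		}
	}
	return 10
}

func main() { fmt.Println(updateConcurrency()) }
```

With something like this, `SHIORI_UPDATE_CONCURRENCY=3 shiori update 500-600` would match the value suggested above (again, the variable name is made up).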
Yes, that's what mine are like.
@Monirzadeh @fmartingr since adding a config flag may be invasive I'd like to have your input before doing so.
Re: the FS abstraction. I would add another domain that represents the tmp filesystem, which may or may not be based on the data dir depending on config. By default it should still be […]
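A rough sketch of that idea, assuming afero; the function name, the config flag, and the tmp/ subdirectory are all hypothetical:

```go
package main

import (
	"os"
	"path/filepath"

	"github.com/spf13/afero"
)

// newTmpFs builds the "tmp filesystem" domain described above: rooted at a
// tmp/ subdirectory of the data dir when configured, or at the system temp
// dir by default. Every name here is hypothetical.
func newTmpFs(useDataDir bool, dataDir string) afero.Fs {
	base := os.TempDir()
	if useDataDir {
		base = filepath.Join(dataDir, "tmp")
	}
	return afero.NewBasePathFs(afero.NewOsFs(), base)
}

func main() { _ = newTmpFs(false, "") }
```

Routing both creation and deletion through one such filesystem would also remove the stdlib/afero path mismatch discussed earlier.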
Hey @Oppen, I'm good on your two proposals. Notes: […]
IIUC, there are two different kinds of concurrency at play here. What the semaphore limits is asynchronous processing, because it limits the number of […]
I'm all for avoiding/postponing complexity, but understood the abstraction was the preferred way.
Another possibility would be to limit what we actually care about, which is the number/size of temporary files. It would be more complex, but if it sounds sensible I may give it some thought.
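One way to sketch that, using x/sync's weighted semaphore to budget total temp-file bytes instead of job count; the 512 MiB budget and the helper name are made up:

```go
package main

import (
	"context"

	"golang.org/x/sync/semaphore"
)

// tmpBudget caps how many temp-file bytes may exist at once; the 512 MiB
// figure is an arbitrary placeholder.
var tmpBudget = semaphore.NewWeighted(512 << 20)

// withTmpSpace reserves size bytes of the budget while fn runs, blocking
// (or failing via ctx) until enough space is free.
func withTmpSpace(ctx context.Context, size int64, fn func() error) error {
	if err := tmpBudget.Acquire(ctx, size); err != nil {
		return err
	}
	defer tmpBudget.Release(size)
	return fn()
}

func main() {
	_ = withTmpSpace(context.Background(), 10<<20, func() error { return nil })
}
```

The catch is that the size of an archive isn't known until it's downloaded, so a real implementation would need to reserve an estimate or acquire space incrementally.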
I sent a new PR that I think implements that idea. Testing and review welcome.
Describe the bug / actual behavior
I ran `shiori update 500-600` to update the descriptions, archives and images for 100 bookmarks. After it completed, the last few hadn't worked because they 'ran out of storage'. I have 50GB free on my disk, so I didn't know what that meant. It turns out it had filled up the `/tmp/` directory, which I think is RAM / cache storage. This resulted in my computer running very slowly until I restarted and it was cleared.

Expected behavior
After a bookmark update is finished, shiori shouldn't need anything in `/tmp/` anymore and should clear it.
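A minimal sketch of that expected lifecycle, assuming the same package handles both creation and cleanup (the function is hypothetical and the write step is elided):

```go
package main

import "os"

// writeArchive stages an archive in a temp file and guarantees the file is
// gone when the function returns, whether it succeeds or fails.
func writeArchive() error {
	f, err := os.CreateTemp("", "archive") // e.g. /tmp/archive123456789
	if err != nil {
		return err
	}
	defer os.Remove(f.Name()) // same package, same path: nothing left behind
	defer f.Close()
	// ... write the archive to f, then move it into the data dir ...
	return nil
}

func main() { _ = writeArchive() }
```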
To Reproduce
Steps to reproduce the behavior:

1. Run `shiori update` with a certain amount of entries, depending on how much RAM you have. E.g. it took about 100 entries to fill up my 4GB of RAM.
2. Look in `/tmp/` for files prefixed with either `archive` or `image` and suffixed by random numbers. Notice that the `/tmp/` directory has little or no space left in it.