Add retention policy with GB or MB limitation #1885
If the cleaning (vacuuming?) requires space, it should also be taken into account.

What do you mean, sorry?
Some useful queries for this task:

```sql
-- sqlite
SELECT page_count * page_size AS size FROM pragma_page_count(), pragma_page_size();
```

This will return the total DB size, including free pages, so it should take VACUUM into account.

```sql
-- postgresql
SELECT pg_size_pretty( pg_database_size('dbname') );
```
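The SQLite query above can be run from any client; here is a minimal sketch using Python's stdlib `sqlite3` (the in-memory database and `db_size_bytes` helper name are illustrative, not part of nwaku):

```python
import sqlite3

def db_size_bytes(conn: sqlite3.Connection) -> int:
    """Total database size in bytes, including free pages."""
    (size,) = conn.execute(
        "SELECT page_count * page_size FROM pragma_page_count(), pragma_page_size()"
    ).fetchone()
    return size

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE messages (id INTEGER PRIMARY KEY, payload BLOB)")
print(db_size_bytes(conn))  # a small multiple of the page size
```

Because free pages are counted, this figure reflects the on-disk footprint rather than just the live data, which is what matters when enforcing a filesystem size limit.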
The algorithm that deletes and moves messages around requires memory and disk space.

Ah yes, you are absolutely right! The user might need to leave some space. It would be interesting to add this recommendation in the

Added the changes, with the test case, and tested on the machine several times.
Weekly update:

* chore: add retention policy with GB or MB limitation #1885
* chore: add retention policy with GB or MB limitation
* chore: updated code post review - retention policy
* ci: extract discordNotify to separate file
* ci: push images to new wakuorg/nwaku repo
* ci: enforce default Docker image tags strictly
* ci: push GIT_REF if it looks like a version
* fix: update wakuv2 fleet DNS discovery enrtree (https://github.com/status-im/infra-misc/issues/171)
* chore: resolving DNS IP and publishing it when no extIp is provided (#2030)
* feat(coverage): Add simple coverage (#2067)
  * Add test aggregator to all directories.
  * Implement coverage script.
* fix(ci): fix name of discord notify method (also use absolute path to load Groovy script)
* chore(networkmonitor): refactor setConnectedPeersMetrics, make it partially concurrent, add version (#2080)
  * add more metrics, refactor how most metrics are calculated
  * rework metrics table fillup
  * reset connErr to make sure we honour successful reconnection
* chore(cbindings): Adding cpp example that integrates the `libwaku` (#2079)
* fix(ci): update the dependency list in pre-release WF (#2088)
* chore: adding NetConfig test suite (#2091)

Signed-off-by: Jakub Sokołowski <jakub@status.im>
Co-authored-by: Jakub Sokołowski <jakub@status.im>
Co-authored-by: Anton Iakimov <yakimant@gmail.com>
Co-authored-by: gabrielmer <101006718+gabrielmer@users.noreply.github.com>
Co-authored-by: Álex Cabeza Romero <alex93cabeza@gmail.com>
Co-authored-by: Vaclav Pavlin <vaclav@status.im>
Co-authored-by: Ivan Folgueira Bande <128452529+Ivansete-status@users.noreply.github.com>
Co-authored-by: NagyZoltanPeter <113987313+NagyZoltanPeter@users.noreply.github.com>
Weekly update: ongoing work on the MUID concept at the message level.

Closing this issue; it seems feature complete. Also made changes related to infra-level testing.
Hey @ABresting! Although the implementation so far is awesome, I think we cannot close this issue yet, because we need to double-check that it works correctly on Postgres.
Can be completely de-prioritized per nwaku PM meeting 2023-11-07.
Does not seem to work for PostgreSQL:
Justification of the de-prioritization?
The current implementation is correct, but the database size is not properly updated when deleting rows, as per how Postgres works; i.e., the rows are only marked as "deletable". Therefore, we have these options:
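To illustrate the Postgres behaviour described above (the `messages` table and `storedAt` column are hypothetical names, not nwaku's actual schema):

```sql
-- postgresql: DELETE only marks tuples as dead
DELETE FROM messages WHERE storedAt < now() - interval '30 days';
SELECT pg_size_pretty(pg_database_size(current_database()));  -- size barely changes

-- plain VACUUM makes the dead space reusable by Postgres,
-- but usually does not return it to the operating system
VACUUM messages;

-- VACUUM FULL rewrites the table and does shrink the files,
-- at the cost of an exclusive lock while it runs
VACUUM FULL messages;
```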
@fryorcraken @ABresting @apentori @jakubgs @yakimant - what is the best approach from your point of view to maintain the Postgres database size under a certain level?
When using (1), I guess the trick is to ensure it is performed before the database reaches the maximum size, right? So we may need to operate with some buffer, where we delete and VACUUM before the max size is reached. The 20% needs to be selected so that the VACUUM works efficiently and also so that we don't end up VACUUMing every 10 s.
I believe that initially it was designed to work like this: when the retention limit is reached, remove 20% of the rows. Like you suggest, we can now assume that if the user has given a size limit of X, then from this we can calculate the new retention limit (95% of the input) just to be on the safe side. But the issue before was with SQLite; maybe with this PR it has been resolved.
Hey there! IMO, approach 1 (VACUUM) would be the best option, but we need to adapt the code, because right now it would apply the database reduction each time the retention policy is executed. Let me elaborate with an example:
(cc @fryorcraken @ABresting)
@Ivansete-status, I need to read more to understand PG vacuuming.

BTW, did you have a look at

Thanks for the comments and suggestions!
I don't really see the point of an automatic VACUUM. We don't care if filesystem space is used; we care that we don't hit the filesystem size limit and crash and burn.
Best article on SQLite vacuuming I've seen:
Note we are rescoping this under waku-org/pm#119 to match the new process.
I'm closing this issue as per the work done in #2506
Background
We came across an issue in HK-prod node where the SQLite database reached 39GB.
We need a mechanism to apply a retention policy so that the database doesn't grow beyond a certain number of GB or MB.
Acceptance criteria
The Waku node with Store mounted should be able to limit the maximum space occupied by the underlying database.
This applies to both sqlite and postgres.
The new retention policy should support the following formats:
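The concrete formats are elided above. As an illustration only, a size-based policy string might look like `size:10GB` or `size:512MB` (this exact syntax, and the `parse_size_policy` helper, are assumptions for the sketch, not nwaku's documented CLI format):

```python
import re

# Hypothetical accepted formats, e.g. "size:10GB" or "size:512MB".
_SIZE_RE = re.compile(r"^size:(\d+(?:\.\d+)?)\s*(GB|MB)$", re.IGNORECASE)

def parse_size_policy(policy: str) -> int:
    """Return the configured size limit in bytes, or raise ValueError."""
    m = _SIZE_RE.match(policy.strip())
    if not m:
        raise ValueError(f"invalid retention policy: {policy!r}")
    value, unit = float(m.group(1)), m.group(2).upper()
    factor = 1024 ** 3 if unit == "GB" else 1024 ** 2
    return int(value * factor)

print(parse_size_policy("size:10GB"))   # 10737418240
print(parse_size_policy("size:512MB"))  # 536870912
```

Parsing the limit down to bytes up front keeps the retention check itself a single integer comparison against the database size.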