Add more examples of how to use synapse_auto_compressor #97
After some hours it reported at the end:

WOW! 150 million rows in 3307*500 = 1,700,000 rooms! Unfortunately the database grew from 56 GB to 59 GB in /var/lib/postgres, although the gzipped SQL export of the table

So we should add this to the README too: run VACUUM after compressing to actually free up the disk space. (I thought we could also delete the three extra tables in the database:
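The VACUUM step above could be sketched like this (assumptions: the Synapse database is named `synapse`, you can connect as the `postgres` OS user, and `state_groups_state` is the large state table the compressor rewrites; adjust names to your setup):

```shell
# Plain VACUUM marks dead rows as reusable inside PostgreSQL but does not
# shrink the files on disk. VACUUM FULL rewrites the table and returns the
# space to the operating system, at the cost of an exclusive lock on the
# table while it runs -- so expect Synapse to stall during this.
sudo -u postgres psql synapse -c "VACUUM FULL state_groups_state;"
```

Running a plain `VACUUM` (without `FULL`) avoids the lock but will only stop the database from growing further, not reclaim the space already used.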
+1 for more examples of how to use synapse_auto_compressor and the VACUUM step.
The README only suggests:
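For context, the invocation form being discussed looks roughly like this (a sketch, not copied from the README: the connection URL is a placeholder, and I am assuming `-p` takes the PostgreSQL URL, `-c` the chunk size, and `-n` the number of chunks, matching the values mentioned in this thread):

```shell
# Compress state in chunks of 500 state groups, stopping after 100 chunks.
synapse_auto_compressor -p postgresql://user:pass@localhost/synapse -c 500 -n 100
```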
As a layman, I understand this to mean that it will only check 100 rooms and then quit.

I got this result after 100 chunks:

This seems to have worked.
Then I started it again with
-n 10000
to try to compress my 56 GB database. Is this the best option, or could you provide more information in the README, please?
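To see whether a run (plus VACUUM) actually shrank things, the on-disk size can be checked before and after; a minimal sketch, again assuming the database is named `synapse`:

```shell
# Report the total on-disk size of the Synapse database in human-readable form.
sudo -u postgres psql -c "SELECT pg_size_pretty(pg_database_size('synapse'));"
```

Comparing this number rather than the size of /var/lib/postgres avoids counting WAL files and other databases in the cluster.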