
postgres data growth is abnormal #828

Closed
KentaroWaki opened this issue Mar 2, 2023 · 8 comments

Labels
question Further information is requested

Comments

@KentaroWaki

We are running two cardano-graphql systems, and only one of them has seen an unusual increase in disk usage. The figure below illustrates the problem.
(The disk usage drops partway through because we added more disks.)

[Screenshot from 2023-03-02 10-33-28: graph of disk usage over time]

Do you know the cause?
And how can I solve it?

@KentaroWaki KentaroWaki added the question Further information is requested label Mar 2, 2023
@KentaroWaki
Author

The following directory is much larger than its counterpart on the normal system:

/data/cardano/docker-volumes/postgresql-data/base/16384
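
The number at the end of that path is a database OID (directories under base/ are named by the OID of each database), so it can be mapped back to a database name with a quick query run from any database in the cluster:

-- base/<oid> directories are named after database OIDs
SELECT oid, datname FROM pg_database WHERE oid = 16384;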

@rhyslbw
Contributor

rhyslbw commented Mar 2, 2023

pg-boss is used to manage off-chain asset sync. Maybe see if there's unusual growth in its tables.
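
One way to check (a sketch, assuming pg-boss's default pgboss schema) is to list the total on-disk size of each table in that schema, including indexes and TOAST data:

-- Total size (heap + indexes + TOAST) per table in the pgboss schema
SELECT c.oid::regclass AS table_name,
       pg_size_pretty(pg_total_relation_size(c.oid)) AS total_size
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname = 'pgboss'
  AND c.relkind = 'r'
ORDER BY pg_total_relation_size(c.oid) DESC;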

@KentaroWaki
Author

KentaroWaki commented Mar 15, 2023

@rhyslbw

Thank you for your advice.
I have confirmed that pgboss is taking up a lot of space.

cexplorer=# SELECT oid::regclass,
cexplorer-#        reltoastrelid::regclass,
cexplorer-#        pg_relation_size(reltoastrelid) AS toast_size
cexplorer-# FROM pg_class
cexplorer-# WHERE relkind = 'r'
cexplorer-#   AND reltoastrelid <> 0
cexplorer-# ORDER BY 3 DESC;
                       oid                       |       reltoastrelid        |  toast_size
-------------------------------------------------+----------------------------+--------------
 "Asset"                                         | pg_toast.pg_toast_17396    | 288902365184
 pgboss.archive                                  | pg_toast.pg_toast_586623   | 250579001344
 pgboss.job                                      | pg_toast.pg_toast_586603   | 199992918016
 tx_metadata                                     | pg_toast.pg_toast_17161    |   3312222208
 collateral_tx_out                               | pg_toast.pg_toast_16876    |     40542208
 script                                          | pg_toast.pg_toast_17106    |     33259520
 datum                                           | pg_toast.pg_toast_16892    |      2777088
 pg_rewrite                                      | pg_toast.pg_toast_2618     |       917504
 pg_statistic                                    | pg_toast.pg_toast_2619     |       688128
 tx_out                                          | pg_toast.pg_toast_17169    |       507904
 redeemer_data                                   | pg_toast.pg_toast_17062    |         8192
 cost_model                                      | pg_toast.pg_toast_16884    |         8192
 hdb_catalog.hdb_metadata                        | pg_toast.pg_toast_15529600 |         8192


What should I do?
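
One commonly used remedy for an oversized pgboss.archive, given that it holds completed-job history (an assumption about this deployment; confirm the archived jobs are disposable before running this), is to truncate it, since TRUNCATE returns the space to the operating system immediately, which plain VACUUM does not:

-- Only run this if the completed-job history is not needed;
-- unlike VACUUM, TRUNCATE frees the disk space at once
TRUNCATE TABLE pgboss.archive;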

@evilone144

I had the same issue; the resolution was to run VACUUM on the Asset table:

vacuum verbose "Asset"

This is quite a long operation, so I used screen to keep the session alive while it ran. You can try running it on other tables as well. It may also increase CPU usage while it runs.

Once done, the size decreased and did not grow again.
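
If you want to watch a long-running plain VACUUM from a second session, PostgreSQL 9.6 and later expose its progress in a built-in view (a small sketch):

-- Observe a running VACUUM from another psql session
SELECT pid, relid::regclass AS table_name, phase,
       heap_blks_scanned, heap_blks_total
FROM pg_stat_progress_vacuum;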

@minzak

minzak commented May 1, 2023

I have the same issue; I hope vacuum helps.

@KentaroWaki
Author

Resolved.

@ayka98i

ayka98i commented Sep 5, 2023

I am also facing the same issue, and the vacuum command did not resolve it.
I couldn't shrink the size at all. Does this mean there was no unnecessary data?

Also, I haven't tried it, but it seems that unless you use the FULL option, the vacuum command does not return disk space to the OS. Is there anything else I should try?
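
For reference, a minimal sketch of that route, assuming a maintenance window is acceptable: plain VACUUM only marks space inside the table files as reusable, while VACUUM FULL rewrites the table and gives the freed space back to the operating system, holding an ACCESS EXCLUSIVE lock for the whole run.

-- First check whether there are dead tuples left to reclaim at all
SELECT relname, n_live_tup, n_dead_tup
FROM pg_stat_user_tables
WHERE relname = 'Asset';

-- Rewrites the table into new files and returns space to the OS;
-- blocks all reads and writes on "Asset" while it runs
VACUUM (FULL, VERBOSE) "Asset";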

Below are the results of applying the solutions proposed in this issue.

cexplorer=# SELECT oid::regclass,
cexplorer-#        reltoastrelid::regclass,
cexplorer-#        pg_relation_size(reltoastrelid) AS toast_size
cexplorer-# FROM pg_class
cexplorer-# WHERE relkind = 'r'
cexplorer-#   AND reltoastrelid <> 0
cexplorer-# ORDER BY 3 DESC;
                       oid                       |      reltoastrelid      |  toast_size
-------------------------------------------------+-------------------------+--------------
 "Asset"                                         | pg_toast.pg_toast_17793 | 406678102016
 tx_metadata                                     | pg_toast.pg_toast_16914 |   3630678016
 collateral_tx_out                               | pg_toast.pg_toast_17576 |     94806016
 script                                          | pg_toast.pg_toast_17116 |     44531712
 datum                                           | pg_toast.pg_toast_16597 |     34185216
 redeemer_data                                   | pg_toast.pg_toast_17621 |      3457024
 pg_statistic                                    | pg_toast.pg_toast_2619  |       696320
 tx_out                                          | pg_toast.pg_toast_16574 |       507904
 pg_rewrite                                      | pg_toast.pg_toast_2618  |       466944
 hdb_catalog.hdb_metadata                        | pg_toast.pg_toast_17410 |         8192
 cost_model                                      | pg_toast.pg_toast_17134 |         8192
 
cexplorer=# vacuum verbose "Asset";
INFO:  vacuuming "public.Asset"
INFO:  scanned index "Asset_pkey" to remove 6097 row versions
DETAIL:  CPU: user: 0.50 s, system: 0.23 s, elapsed: 3.99 s
INFO:  "Asset": removed 6097 row versions in 1652 pages
DETAIL:  CPU: user: 0.02 s, system: 0.00 s, elapsed: 0.10 s
INFO:  index "Asset_pkey" now contains 8746213 row versions in 104006 pages
DETAIL:  2909 index row versions were removed.
0 index pages have been deleted, 0 are currently reusable.
CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.
INFO:  "Asset": found 3589 removable, 1006403 nonremovable row versions in 24463 out of 320475 pages
DETAIL:  0 dead row versions cannot be removed yet, oldest xmin: 309382880
There were 85654 unused item pointers.
Skipped 0 pages due to buffer pins, 288629 frozen pages.
0 pages are entirely empty.
CPU: user: 0.58 s, system: 0.29 s, elapsed: 4.26 s.
INFO:  vacuuming "pg_toast.pg_toast_17793"
INFO:  scanned index "pg_toast_17793_index" to remove 85486 row versions
DETAIL:  CPU: user: 0.71 s, system: 1.73 s, elapsed: 27.47 s
INFO:  "pg_toast_17793": removed 85486 row versions in 21578 pages
DETAIL:  CPU: user: 0.07 s, system: 0.14 s, elapsed: 0.99 s
INFO:  index "pg_toast_17793_index" now contains 24325 row versions in 856374 pages
DETAIL:  85486 index row versions were removed.
855158 index pages have been deleted, 854935 are currently reusable.
CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.
INFO:  "pg_toast_17793": found 82486 removable, 6917 nonremovable row versions in 23337 out of 49643323 pages
DETAIL:  0 dead row versions cannot be removed yet, oldest xmin: 309382880
There were 38817 unused item pointers.
Skipped 0 pages due to buffer pins, 49618069 frozen pages.
0 pages are entirely empty.
CPU: user: 1.34 s, system: 2.06 s, elapsed: 30.53 s.
VACUUM
cexplorer=# SELECT oid::regclass,
cexplorer-#        reltoastrelid::regclass,
cexplorer-#        pg_relation_size(reltoastrelid) AS toast_size
cexplorer-# FROM pg_class
cexplorer-# WHERE relkind = 'r'
cexplorer-#   AND reltoastrelid <> 0
cexplorer-# ORDER BY 3 DESC;
                       oid                       |      reltoastrelid      |  toast_size
-------------------------------------------------+-------------------------+--------------
 "Asset"                                         | pg_toast.pg_toast_17793 | 406678102016
 tx_metadata                                     | pg_toast.pg_toast_16914 |   3630678016
 collateral_tx_out                               | pg_toast.pg_toast_17576 |     94806016
 script                                          | pg_toast.pg_toast_17116 |     44531712
 datum                                           | pg_toast.pg_toast_16597 |     34185216
 redeemer_data                                   | pg_toast.pg_toast_17621 |      3457024
 pg_statistic                                    | pg_toast.pg_toast_2619  |       696320
 tx_out                                          | pg_toast.pg_toast_16574 |       507904
 pg_rewrite                                      | pg_toast.pg_toast_2618  |       466944
 hdb_catalog.hdb_metadata                        | pg_toast.pg_toast_17410 |         8192
 cost_model                                      | pg_toast.pg_toast_17134 |         8192

@minzak

minzak commented Sep 5, 2023

Try running it several times, restart, and run it several times again.
But the percentage reclaimed won't be much, unfortunately.
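
To see how much of the table is actually reclaimable before spending more vacuum passes on it, the pgstattuple contrib extension reports exact dead-tuple and free-space ratios (a sketch; the extension must be installed, and the function scans the whole table):

-- pgstattuple does a full scan, so expect I/O comparable to a vacuum pass
CREATE EXTENSION IF NOT EXISTS pgstattuple;
SELECT table_len, dead_tuple_percent, free_percent
FROM pgstattuple('"Asset"');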
