We worked with the database group to see how much disk space is freed up now that we have started periodically reaping old records. A full dump and restore of the DEV database brought it down from 649 GB to 581 GB; the dump took ~2 hours and the restore 8.5 hours.
We are now testing a VACUUM FULL of the 649 GB database to see how long that takes.
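For reference, VACUUM FULL rewrites each table into a fresh file and returns the reclaimed space to the operating system, but it holds an exclusive lock on each table while it runs. A minimal sketch of the kind of command under test (the option flags and the per-table variant are illustrative, not specified above):

```sql
-- Rewrite every table in the current database, returning the space
-- freed by reaped rows to the OS; each table is locked exclusively
-- while it is rewritten, so the application loses access meanwhile.
VACUUM (FULL, VERBOSE, ANALYZE);

-- Alternatively, target just the large table discussed below:
VACUUM (FULL, VERBOSE, ANALYZE) dataproduct;
```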
From Olga Vlasova:
"Is there any plan to partition the dataproduct table, or better yet, to split it into smaller tables? If you have any archive/historical data, you could copy it into tables with the same schema for easy access. Currently the table is too large for proper maintenance: 600 GB in dev and 1733 GB in production. We cannot complete a backup and restore quickly, and a vacuum will also take significant time while locking your one big table, so you will not have access during the vacuum. You need to take all this timing and the difficulty of maintaining such a large table into account, and think about how to improve it without the additional downtime we have to take now."
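For concreteness, a minimal sketch of the partitioning Olga suggests, assuming dataproduct can be range-partitioned on a creation timestamp (the column names here are hypothetical; the real schema will differ):

```sql
-- Hypothetical sketch: a range-partitioned replacement for dataproduct,
-- keyed on a creation timestamp, so each slice of history lives in its
-- own table that can be backed up, vacuumed, or archived independently.
CREATE TABLE dataproduct_new (
    id      bigint      NOT NULL,
    created timestamptz NOT NULL,  -- hypothetical partition key
    payload jsonb                  -- stand-in for the real columns
) PARTITION BY RANGE (created);

-- One partition per year; maintenance then works per-partition instead
-- of locking one 1733 GB table.
CREATE TABLE dataproduct_2023 PARTITION OF dataproduct_new
    FOR VALUES FROM ('2023-01-01') TO ('2024-01-01');
CREATE TABLE dataproduct_2024 PARTITION OF dataproduct_new
    FOR VALUES FROM ('2024-01-01') TO ('2025-01-01');

-- Detaching a partition keeps the historical rows queryable as an
-- ordinary table with the same schema, matching the archive-table
-- suggestion above.
ALTER TABLE dataproduct_new DETACH PARTITION dataproduct_2023;
```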
Yes, the number of rows in the tables is staying relatively constant, but the overall space taken by the database is still growing because we can't back up/restore or vacuum efficiently.
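One way to see where that space is going, sketched below: pg_stat_user_tables reports the dead tuples left behind by the reaper, which plain (auto)vacuum marks reusable inside the table file but never returns to the filesystem:

```sql
-- Show the tables with the most dead tuples: rows deleted by reaping
-- that still occupy file space until a VACUUM FULL (or equivalent
-- rewrite) shrinks the files on disk.
SELECT relname,
       n_live_tup,
       n_dead_tup,
       last_autovacuum,
       pg_size_pretty(pg_total_relation_size(relid)) AS total_size
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;
```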