
Consider partitioning large tables in the DB #103

Open
StevenCTimm opened this issue Jun 4, 2020 · 2 comments

Comments

@StevenCTimm

We worked with the database group to see how much disk space is freed up now that we have started doing periodic reaping of old records. A full dump and restore of the DEV database reduced it from 649 GB to 581 GB; the dump took ~2 hours and the restore 8.5 hours.
We are now testing a VACUUM FULL of the 649 GB database to see how long that takes.
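For reference, the dump and restore above can often be sped up by using pg_dump's directory format with parallel jobs. A minimal sketch, assuming a database named `dev_db` and 8 available cores (both are assumptions, not values from this issue):

```shell
# Hypothetical sketch: parallel dump/restore of a large database.
# -Fd = directory format (required for parallel dump), -j = number of parallel jobs.
pg_dump -Fd -j 8 -f /backups/dev_db.dump dev_db

# Restore into a freshly created database, also in parallel.
createdb dev_db_restored
pg_restore -j 8 -d dev_db_restored /backups/dev_db.dump
```

Parallelism helps most when the data is spread across many tables; a single 600 GB table still serializes on that one table, which is part of the motivation for partitioning discussed below.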

From Olga Vlasova:
"Is there any plan to partition the dataproduct table, or better yet, to split it into smaller tables?

Alternatively, if you have any archive/historical data, you can create tables with the same schema and copy it into them for easier access.

Currently the table is too large for proper maintenance (600 GB in dev, 1733 GB in production).
We cannot complete a backup and restore quickly, and a vacuum will also take significant time while locking your one big table; you will not have access during the vacuum process.

You need to take all of this timing and the difficulty of maintaining such a large table into account, and think about how to improve it without the additional downtime we currently have to take."
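The partitioning Olga suggests could be done with PostgreSQL's declarative range partitioning (available since PostgreSQL 10). A minimal sketch, assuming a hypothetical `created_at` timestamp column as the partition key; the actual dataproduct schema is not shown in this issue, so the column names below are illustrative only:

```sql
-- Hypothetical sketch: range-partition dataproduct by a created_at column.
-- All column names here are assumptions; adapt to the real schema.
CREATE TABLE dataproduct_partitioned (
    id          bigint      NOT NULL,
    created_at  timestamptz NOT NULL,
    payload     jsonb
) PARTITION BY RANGE (created_at);

-- One partition per year. Each partition is a separate physical table,
-- so backups, VACUUM, and reindexing run per partition.
CREATE TABLE dataproduct_2019 PARTITION OF dataproduct_partitioned
    FOR VALUES FROM ('2019-01-01') TO ('2020-01-01');
CREATE TABLE dataproduct_2020 PARTITION OF dataproduct_partitioned
    FOR VALUES FROM ('2020-01-01') TO ('2021-01-01');

-- Reaping old records becomes a fast metadata operation instead of a
-- long DELETE followed by an expensive vacuum:
-- ALTER TABLE dataproduct_partitioned DETACH PARTITION dataproduct_2019;
-- DROP TABLE dataproduct_2019;
```

This also addresses the maintenance concerns above: old partitions that no longer change can be skipped during routine vacuuming, and a vacuum of one partition does not lock the others.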

@StevenCTimm added the production (issue filed by the production team) and operations labels, then removed the production label, on Jun 4, 2020
@jcpunk
Contributor

jcpunk commented Nov 17, 2020

Now that the reaper channel is running, is the tablespace still enormous?

@StevenCTimm
Author

Yes. The number of rows in the tables is staying relatively constant, but the overall space taken by the database is still growing because we can't back up, restore, or vacuum efficiently.
