On 3/8/24 the database ran out of space and became unavailable. I increased the allocation by hand, in the console, from 120GB to 140GB and enabled storage auto-scaling. That solved the immediate problem, but as a follow-up we should:
- Check that storage is being used the way we expect, i.e. that the growth comes from the space needed by new analysis runs, not from cruft or temporary data being left around.
- Decide what parameters we want, specifically:
  - whether to keep auto-scaling enabled
  - what max limit to set, if any
  - whether to switch from gp2 to gp3 storage
- Change the parameters in the Terraform/tfvars to match those decisions (see the sketch below).
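For reference, a minimal sketch of what the Terraform change might look like, assuming we manage the instance with a standard `aws_db_instance` resource. The resource name, variable name, sizes, and the move to gp3 are placeholders for whatever we decide, not the actual values in our config:

```hcl
# Placeholder variable so the max limit can live in the tfvars file.
variable "db_max_allocated_storage" {
  description = "Upper limit (GB) for RDS storage auto-scaling; 0 disables it."
  type        = number
  default     = 0
}

resource "aws_db_instance" "analysis" {
  # ... engine, instance class, etc. unchanged ...

  # Current allocation after the manual bump in the console.
  allocated_storage = 140

  # Setting max_allocated_storage above allocated_storage enables storage
  # auto-scaling; 0 disables it.
  max_allocated_storage = var.db_max_allocated_storage

  # gp3 decouples IOPS/throughput from volume size; keep "gp2" if we decide
  # not to switch.
  storage_type = "gp3"
}
```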
Note: if we decide to turn off storage auto-scaling, we should raise the threshold on the CloudWatch alarm so it fires at a higher value. If we stick with auto-scaling, an alarm is less critical, though it would still make sense to have one that fires before we hit the max limit. A rough sketch of such an alarm follows.
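This is only a sketch of a low-free-storage alarm on the RDS `FreeStorageSpace` metric; the alarm name, threshold, and notification target are assumptions to be replaced with our real values:

```hcl
resource "aws_cloudwatch_metric_alarm" "db_free_storage" {
  alarm_name          = "rds-free-storage-low"
  namespace           = "AWS/RDS"
  metric_name         = "FreeStorageSpace"
  statistic           = "Average"
  period              = 300
  evaluation_periods  = 3
  comparison_operator = "LessThanThreshold"

  # Fire when less than ~20 GB is free (FreeStorageSpace is reported in bytes),
  # i.e. well before we hit whatever max limit we pick.
  threshold = 20 * 1024 * 1024 * 1024

  dimensions = {
    DBInstanceIdentifier = aws_db_instance.analysis.identifier
  }

  # Placeholder notification target; point this at our real alerting topic.
  alarm_actions = [aws_sns_topic.alerts.arn]
}
```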