gfix -sweep -clean #7732
Replies: 6 comments 20 replies
-
Why have you created this as a discussion and not as an issue?
-
Hello Fabiano, as you probably know, in 99% of cases requested features appear in the next major version of Firebird; in this case, I guess, it will be 6. Regards,
-
Rephrase this suggestion as "security measure: zero freed database pages" and it will have a higher chance of implementation.
-
Agree that as a security measure this may be useful. But as for your backup process... there is a much faster way to perform regular backups using multilevel nbackup. Backups with level > 0 execute much faster (nbackup saves only modified pages; moreover, starting with FB3 it reads only them). I doubt your customers change all 500+ GB of data every day.
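The multilevel scheme mentioned above can be sketched roughly as follows (paths and scheduling are hypothetical examples, not from the thread; check the nbackup documentation for your Firebird version):

```shell
# Level-0 backup: a full copy of the database (run, e.g., weekly)
nbackup -B 0 /data/mydb.fdb /backups/mydb.L0.nbk

# Level-1 backup: only pages changed since the last level-0 (run, e.g., daily)
nbackup -B 1 /data/mydb.fdb /backups/mydb.L1.nbk

# Restore: pass the backup chain in order, starting from level 0
nbackup -R /data/restored.fdb /backups/mydb.L0.nbk /backups/mydb.L1.nbk
```

Because each level > 0 only contains pages modified since the previous level, the daily backup of a 500+ GB database copies only the changed fraction of it.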
-
If it's about security... wouldn't it be more effective and efficient if the garbage collector automatically (without gfix -sweep) set the deleted/unused parts to 0x00 (if it doesn't already do this)? It could also reset entire pages.
-
I also don't see much better security with zeroed empty pages. As for backup purposes: nbackup might be tuned not to copy the content of empty pages into a level-0 backup. As for LGPD and "anonymize users' data": AFAIK, a database under such requirements should not store any sensitive data in open/complete form, so that is a completely different task with very different solutions.
-
Hello!
I would suggest that gfix -sweep have a feature to clean database pages. Nowadays a backup followed by a gfix -sweep only marks the data page as FREE, but the data itself remains in the page. If I ZIP the database file, it becomes bigger than necessary because of this. I would like to mark the data page as FREE and ZERO (0x00) its content.
Why:
We have a handful of customers whose Firebird database file is more than 500 GB in size (our biggest customer has 666 GB today and grows by 40 GB/month).
We have some problems related to databases of this size. For example, a simple gbak backup runs for over 6 hours (on a database server with 128 GB RAM and 50+ processors). A database restore takes more than 24 hours to complete.
I would like to make a FAST backup, where I do an "alter database begin backup", copy the database file itself, then do an "alter database end backup". On the database copy I remove all indexes (PK, FK, etc.) and do a gfix -sweep that zeroes the freed database pages. This way, when I ZIP the database file, I can shrink it a lot.
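The compression benefit behind this request can be illustrated with a small standard-library sketch (not Firebird-specific; the page size is just a plausible example): a freed page that still holds old record data compresses poorly, while the same page zeroed out compresses to almost nothing.

```python
import os
import zlib

PAGE_SIZE = 16 * 1024  # an example database page size

# A page marked FREE but not wiped: still full of old record data
# (random bytes stand in for incompressible leftover content).
stale_page = os.urandom(PAGE_SIZE)

# The same page after a hypothetical "sweep and zero" pass.
zeroed_page = b"\x00" * PAGE_SIZE

print("stale page compresses to:", len(zlib.compress(stale_page)), "bytes")
print("zeroed page compresses to:", len(zlib.compress(zeroed_page)), "bytes")
```

Across a multi-hundred-gigabyte file with many freed pages, that per-page difference is exactly why the ZIP of a swept-but-not-zeroed database stays "bigger than necessary".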