feat: Modify optimized compaction to cover edge cases #25594

Merged: 32 commits, Jan 14, 2025
Changes from 3 commits

Commits (32)
d631314
feat: Modify optimized compaction to cover edge cases
devanbenz Dec 16, 2024
67849ae
feat: Modify the PR to include optimized compaction
devanbenz Dec 17, 2024
827e859
feat: Use named variables for PlanOptimize
devanbenz Dec 17, 2024
5387ca3
feat: adjust test comments
devanbenz Dec 17, 2024
3153596
feat: code removal from debugging
devanbenz Dec 17, 2024
83d28ec
feat: setting BlockCount idx value to 1
devanbenz Dec 17, 2024
f896a01
feat: Adjust testing and add sprintf for magic vars
devanbenz Dec 18, 2024
f15d9be
feat: need to use int64 instead of int
devanbenz Dec 18, 2024
54c8e1c
feat: touch
devanbenz Dec 18, 2024
403d888
feat: Adjust tests to include lower level planning function calls
devanbenz Dec 18, 2024
23d12e1
feat: Fix up some tests that I forgot to adjust
devanbenz Dec 18, 2024
d3afb03
feat: fix typo
devanbenz Dec 18, 2024
cf657a8
feat: touch
devanbenz Dec 18, 2024
fc6ca13
feat: Call SingleGenerationReason() once by initializing a
devanbenz Dec 19, 2024
4fc4d55
feat: clarify file counts for reason we are not fully compacted
devanbenz Dec 19, 2024
c93bdfb
feat: grammar typo
devanbenz Dec 19, 2024
2dd5ef4
feat: missed a test when updating the variable! whoops!
devanbenz Dec 19, 2024
479de96
feat: Add test for another edge case found;
devanbenz Dec 20, 2024
c392906
feat: Remove some overlapping tests
devanbenz Dec 20, 2024
f444518
feat: Adds check for block counts and adjusts tests to use require.Ze…
devanbenz Dec 26, 2024
5e4e2da
feat: Adds test for planning lower level TSMs with block sizes at agg…
devanbenz Dec 26, 2024
c315b1f
chore: rerun ci
devanbenz Dec 26, 2024
eb0a77d
feat: Add a mock backfill test with mixed generations, mixed levels, …
devanbenz Dec 26, 2024
1bac192
Merge branch 'master-1.x' into db/4201/compaction-bugs
devanbenz Jan 6, 2025
371f960
feat: Fix a merge conflict where a var was renamed from fs -> fss
devanbenz Jan 6, 2025
5a614c4
feat: Adding more tests reversing and mixing up some of the
devanbenz Jan 9, 2025
3748c36
feat: Begin 'compacting' tests in to single test
devanbenz Jan 13, 2025
0799f00
feat: create loop for tests where there should be no further compaction
devanbenz Jan 13, 2025
3e69f2d
feat: cleanup
devanbenz Jan 13, 2025
976291a
feat: Add test names to the testing struct
devanbenz Jan 13, 2025
0a2ba1e
feat: Use t.Run instead of declaring the test name in the requires
devanbenz Jan 14, 2025
8c908c5
feat: Reverse block counts
devanbenz Jan 14, 2025
6 changes: 3 additions & 3 deletions tsdb/engine/tsm1/compact.go
@@ -469,7 +469,7 @@ func (c *DefaultPlanner) Plan(lastWrite time.Time) ([]CompactionGroup, int64) {
var skip bool

// Skip the file if it's over the max size and contains a full block and it does not have any tombstones
if len(generations) > 2 && group.size() > uint64(tsdb.MaxTSMFileSize) && c.FileStore.BlockCount(group.files[0].Path, 1) == tsdb.DefaultMaxPointsPerBlock && !group.hasTombstones() {
if len(generations) > 2 && group.size() > uint64(tsdb.MaxTSMFileSize) && c.FileStore.BlockCount(group.files[0].Path, 1) >= tsdb.DefaultMaxPointsPerBlock && !group.hasTombstones() {
Contributor
I'm slightly confused whether this is still what we want to do. We skip a group (i.e., a generation) here if it is large (sum of all files is larger than the largest permissible single file), and the first file has the default maximum points per block and there are no tombstones.

This seems to be mixing metrics from the first file in the generation (points per block) with metrics from the whole generation (combined file size). Do we need to look at the points per block of all the files in the generation? Why are we skipping a generation if it is larger than a single file can be? What's the significance of that?

I understand the original code had this strange mix of conditionals, but do we understand why, and whether we should continue with them? At the very least, the comment Skip the file if... is misleading, because we are skipping a generation which may contain more than one file, are we not?

Author
@devanbenz Jan 9, 2025

Yes, I think the comment is a bit misleading. I was mostly keeping Plan and PlanLevel as is, but I would have no problem with modifying their existing logic. Perhaps, instead of checking individual file block counts and the entire group size against 2 GB, I could check all the files in the group and all the block counts in the group. Some pseudo code:

if gens <= 1
	skip

filesAtMaxSize = 0
filesAtMaxBlocks = 0

for file in generation
	if file.size >= maxSize
		filesAtMaxSize++
	if file.blocks >= maxBlocks
		filesAtMaxBlocks++

if filesAtMaxSize >= 2 || filesAtMaxBlocks >= 2 || has tombstones
	plan
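The proposed heuristic could be sketched in Go roughly as follows. The type names, thresholds, and the `shouldPlan` helper are illustrative only, not part of the actual tsm1 package:

```go
package main

import "fmt"

// fileStat is a hypothetical stand-in for the planner's per-file stats.
type fileStat struct {
	size   uint64 // bytes
	blocks int    // points per block
}

// Illustrative thresholds; the real values live in the tsdb package.
const (
	maxSize   = 2 * 1024 * 1024 * 1024 // 2 GB single-file limit
	maxBlocks = 1000                   // default max points per block
)

// shouldPlan applies the proposed per-file heuristic: plan the
// generation when at least two files are at the size or block limit,
// or when tombstones are present.
func shouldPlan(files []fileStat, hasTombstones bool) bool {
	filesAtMaxSize, filesAtMaxBlocks := 0, 0
	for _, f := range files {
		if f.size >= maxSize {
			filesAtMaxSize++
		}
		if f.blocks >= maxBlocks {
			filesAtMaxBlocks++
		}
	}
	return filesAtMaxSize >= 2 || filesAtMaxBlocks >= 2 || hasTombstones
}

func main() {
	full := []fileStat{{maxSize, maxBlocks}, {maxSize, maxBlocks}}
	small := []fileStat{{1024, 10}}
	fmt.Println(shouldPlan(full, false))  // two files at both limits → plan
	fmt.Println(shouldPlan(small, false)) // one small file → skip
}
```

Unlike the current check, this looks at every file in the generation rather than only the first file's block count mixed with the whole group's size.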

Contributor

After consideration, I think you were right, @devanbenz, to change Plan and PlanLevel minimally. While their algorithms are obtuse, we shouldn't change them in this PR or at this time, to minimize the risk in what is already a large change to compaction.

skip = true
}

@@ -545,7 +545,7 @@ func (c *DefaultPlanner) Plan(lastWrite time.Time) ([]CompactionGroup, int64) {
// Skip the file if it's over the max size and contains a full block or the generation is split
// over multiple files. In the latter case, that would mean the data in the file spilled over
// the 2GB limit.
if g.size() > uint64(tsdb.MaxTSMFileSize) && c.FileStore.BlockCount(g.files[0].Path, 1) == tsdb.DefaultMaxPointsPerBlock {
if g.size() > uint64(tsdb.MaxTSMFileSize) && c.FileStore.BlockCount(g.files[0].Path, 1) >= tsdb.DefaultMaxPointsPerBlock {
davidby-influx marked this conversation as resolved.
start = i + 1
}

@@ -589,7 +589,7 @@ func (c *DefaultPlanner) Plan(lastWrite time.Time) ([]CompactionGroup, int64) {
}

// Skip the file if it's over the max size and it contains a full block
if gen.size() >= uint64(tsdb.MaxTSMFileSize) && c.FileStore.BlockCount(gen.files[0].Path, 1) == tsdb.DefaultMaxPointsPerBlock && !gen.hasTombstones() {
if gen.size() >= uint64(tsdb.MaxTSMFileSize) && c.FileStore.BlockCount(gen.files[0].Path, 1) >= tsdb.DefaultMaxPointsPerBlock && !gen.hasTombstones() {
davidby-influx marked this conversation as resolved.
startIndex++
continue
}
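Each hunk above relaxes an equality check on the first file's block count to `>=`. The reason is that blocks written by aggressive compaction can hold more than the default maximum points per block, so an `==` test would fail to recognize such files as already fully compacted. A minimal sketch, with illustrative constants and a hypothetical helper (the real values live in the tsdb package):

```go
package main

import "fmt"

// Illustrative constants; not the actual tsdb package values.
const (
	defaultMaxPointsPerBlock    = 1000
	aggressiveMaxPointsPerBlock = 10000
)

// skipFullyCompacted reports whether a file whose blocks hold
// blockCount points should be treated as already fully compacted.
// An == comparison would miss files produced by aggressive
// compaction, whose blocks exceed the default size; >= covers both.
func skipFullyCompacted(blockCount int) bool {
	return blockCount >= defaultMaxPointsPerBlock
}

func main() {
	fmt.Println(skipFullyCompacted(defaultMaxPointsPerBlock))    // default-sized blocks → skip
	fmt.Println(skipFullyCompacted(aggressiveMaxPointsPerBlock)) // aggressive blocks → skip
	fmt.Println(skipFullyCompacted(500))                         // partially filled blocks → compact
}
```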
186 changes: 127 additions & 59 deletions tsdb/engine/tsm1/compact_test.go
@@ -2272,15 +2272,15 @@ func TestDefaultPlanner_PlanOptimize_LargeMultiGeneration(t *testing.T) {
}

_, cgLen := cp.PlanLevel(1)
require.Equal(t, int64(0), cgLen, "compaction group length; PlanLevel(1)")
require.Zero(t, cgLen, "compaction group length; PlanLevel(1)")
_, cgLen = cp.PlanLevel(2)
require.Equal(t, int64(0), cgLen, "compaction group length; PlanLevel(2)")
require.Zero(t, cgLen, "compaction group length; PlanLevel(2)")
_, cgLen = cp.PlanLevel(3)
require.Equal(t, int64(0), cgLen, "compaction group length; PlanLevel(3)")
require.Zero(t, cgLen, "compaction group length; PlanLevel(3)")

tsmP, pLenP := cp.Plan(time.Now().Add(-time.Second))
require.Equal(t, 0, len(tsmP), "compaction group; Plan()")
require.Equal(t, int64(0), pLenP, "compaction group length; Plan()")
require.Zero(t, len(tsmP), "compaction group; Plan()")
require.Zero(t, pLenP, "compaction group length; Plan()")

tsm, pLen, _ := cp.PlanOptimize()
require.Equal(t, 1, len(tsm), "compaction group")
@@ -2325,15 +2325,15 @@ func TestDefaultPlanner_PlanOptimize_SmallSingleGeneration(t *testing.T) {
}

_, cgLen := cp.PlanLevel(1)
require.Equal(t, int64(0), cgLen, "compaction group length; PlanLevel(1)")
require.Zero(t, cgLen, "compaction group length; PlanLevel(1)")
_, cgLen = cp.PlanLevel(2)
require.Equal(t, int64(0), cgLen, "compaction group length; PlanLevel(2)")
require.Zero(t, cgLen, "compaction group length; PlanLevel(2)")
_, cgLen = cp.PlanLevel(3)
require.Equal(t, int64(0), cgLen, "compaction group length; PlanLevel(3)")
require.Zero(t, cgLen, "compaction group length; PlanLevel(3)")

tsmP, pLenP := cp.Plan(time.Now().Add(-time.Second))
require.Equal(t, 0, len(tsmP), "compaction group; Plan()")
require.Equal(t, int64(0), pLenP, "compaction group length; Plan()")
require.Zero(t, len(tsmP), "compaction group; Plan()")
require.Zero(t, pLenP, "compaction group length; Plan()")

tsm, pLen, gLen := cp.PlanOptimize()
require.Equal(t, 1, len(tsm), "compaction group")
@@ -2375,15 +2375,15 @@ func TestDefaultPlanner_PlanOptimize_SmallSingleGenerationUnderLevel4(t *testing
}

_, cgLen := cp.PlanLevel(1)
require.Equal(t, int64(0), cgLen, "compaction group length; PlanLevel(1)")
require.Zero(t, cgLen, "compaction group length; PlanLevel(1)")
_, cgLen = cp.PlanLevel(2)
require.Equal(t, int64(0), cgLen, "compaction group length; PlanLevel(2)")
require.Zero(t, cgLen, "compaction group length; PlanLevel(2)")
_, cgLen = cp.PlanLevel(3)
require.Equal(t, int64(0), cgLen, "compaction group length; PlanLevel(3)")
require.Zero(t, cgLen, "compaction group length; PlanLevel(3)")

tsmP, pLenP := cp.Plan(time.Now().Add(-time.Second))
require.Equal(t, 0, len(tsmP), "compaction group; Plan()")
require.Equal(t, int64(0), pLenP, "compaction group length; Plan()")
require.Zero(t, len(tsmP), "compaction group; Plan()")
require.Zero(t, pLenP, "compaction group length; Plan()")

tsm, pLen, gLen := cp.PlanOptimize()
devanbenz marked this conversation as resolved.
require.Equal(t, 1, len(tsm), "compaction group")
@@ -2432,14 +2432,15 @@ func TestDefaultPlanner_FullyCompacted_SmallSingleGeneration(t *testing.T) {
require.False(t, compacted, "is fully compacted")

_, cgLen := cp.PlanLevel(1)
require.Equal(t, int64(0), cgLen, "compaction group length; PlanLevel(1)")
require.Zero(t, cgLen, "compaction group length; PlanLevel(1)")
_, cgLen = cp.PlanLevel(2)
require.Equal(t, int64(0), cgLen, "compaction group length; PlanLevel(2)")
require.Zero(t, cgLen, "compaction group length; PlanLevel(2)")
_, cgLen = cp.PlanLevel(3)
require.Equal(t, int64(0), cgLen, "compaction group length; PlanLevel(3)")
require.Zero(t, cgLen, "compaction group length; PlanLevel(3)")

_, cgLen = cp.Plan(time.Now().Add(-1))
require.Equal(t, int64(0), cgLen, "compaction group length; Plan()")
tsmP, pLenP := cp.Plan(time.Now().Add(-time.Second))
require.Zero(t, len(tsmP), "compaction group; Plan()")
require.Zero(t, pLenP, "compaction group length; Plan()")

_, cgLen, genLen := cp.PlanOptimize()
require.Equal(t, int64(1), cgLen, "compaction group length")
@@ -2472,19 +2473,20 @@ func TestDefaultPlanner_FullyCompacted_SmallSingleGeneration_Halt(t *testing.T)
require.True(t, compacted, "is fully compacted")

_, cgLen := cp.PlanLevel(1)
require.Equal(t, int64(0), cgLen, "compaction group length; PlanLevel(1)")
require.Zero(t, cgLen, "compaction group length; PlanLevel(1)")
_, cgLen = cp.PlanLevel(2)
require.Equal(t, int64(0), cgLen, "compaction group length; PlanLevel(2)")
require.Zero(t, cgLen, "compaction group length; PlanLevel(2)")
_, cgLen = cp.PlanLevel(3)
require.Equal(t, int64(0), cgLen, "compaction group length; PlanLevel(3)")
require.Zero(t, cgLen, "compaction group length; PlanLevel(3)")

_, cgLen = cp.Plan(time.Now().Add(-1))
require.Equal(t, int64(0), cgLen, "compaction group length; Plan()")
tsmP, pLenP := cp.Plan(time.Now().Add(-time.Second))
require.Zero(t, len(tsmP), "compaction group; Plan()")
require.Zero(t, pLenP, "compaction group length; Plan()")

cgroup, cgLen, genLen := cp.PlanOptimize()
require.Equal(t, []tsm1.CompactionGroup(nil), cgroup, "compaction group")
require.Equal(t, int64(0), cgLen, "compaction group length")
require.Equal(t, int64(0), genLen, "generation count")
require.Zero(t, cgLen, "compaction group length")
require.Zero(t, genLen, "generation count")
}
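The test diffs above replace `require.Equal(t, int64(0), …)` with `require.Zero(t, …)`. testify's `require.Zero` compares a value against the zero value of its own type, so `int64` lengths and `int` results of `len()` both pass without explicit casts. A minimal sketch of the semantics, using a hypothetical `isZero` helper that mimics the assertion without importing testify:

```go
package main

import (
	"fmt"
	"reflect"
)

// isZero mimics testify's require.Zero: it compares a value against
// the zero value of its own dynamic type, so int64(0) and int(0)
// both qualify without explicit conversions in the test.
func isZero(v interface{}) bool {
	if v == nil {
		return true
	}
	return reflect.DeepEqual(v, reflect.Zero(reflect.TypeOf(v)).Interface())
}

func main() {
	var cgLen int64
	tsmP := []string{}
	fmt.Println(isZero(cgLen))     // int64 zero value
	fmt.Println(isZero(len(tsmP))) // int zero from len()
	fmt.Println(isZero(int64(3)))  // non-zero
}
```

This is why the rewritten assertions can drop the `int64(0)` casts that the old `require.Equal` calls needed.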

// This test is added to account for a single generation that has a group size
@@ -2552,14 +2554,15 @@ func TestDefaultPlanner_FullyCompacted_LargeSingleGenerationUnderAggressiveBlock
require.False(t, compacted, "is fully compacted")

_, cgLen := cp.PlanLevel(1)
require.Equal(t, int64(0), cgLen, "compaction group length; PlanLevel(1)")
require.Zero(t, cgLen, "compaction group length; PlanLevel(1)")
_, cgLen = cp.PlanLevel(2)
require.Equal(t, int64(0), cgLen, "compaction group length; PlanLevel(2)")
require.Zero(t, cgLen, "compaction group length; PlanLevel(2)")
_, cgLen = cp.PlanLevel(3)
require.Equal(t, int64(0), cgLen, "compaction group length; PlanLevel(3)")
require.Zero(t, cgLen, "compaction group length; PlanLevel(3)")

_, cgLen = cp.Plan(time.Now().Add(-1))
require.Equal(t, int64(0), cgLen, "compaction group length; Plan()")
tsmP, pLenP := cp.Plan(time.Now().Add(-time.Second))
require.Zero(t, len(tsmP), "compaction group; Plan()")
require.Zero(t, pLenP, "compaction group length; Plan()")

_, cgLen, genLen := cp.PlanOptimize()
require.Equal(t, int64(1), cgLen, "compaction group length")
@@ -2603,14 +2606,15 @@ func TestDefaultPlanner_FullyCompacted_LargeSingleGenerationMaxAggressiveBlocks(
require.True(t, compacted, "is fully compacted")

_, cgLen := cp.PlanLevel(1)
require.Equal(t, int64(0), cgLen, "compaction group length; PlanLevel(1)")
require.Zero(t, cgLen, "compaction group length; PlanLevel(1)")
_, cgLen = cp.PlanLevel(2)
require.Equal(t, int64(0), cgLen, "compaction group length; PlanLevel(2)")
require.Zero(t, cgLen, "compaction group length; PlanLevel(2)")
_, cgLen = cp.PlanLevel(3)
require.Equal(t, int64(0), cgLen, "compaction group length; PlanLevel(3)")
require.Zero(t, cgLen, "compaction group length; PlanLevel(3)")

_, cgLen = cp.Plan(time.Now().Add(-1))
require.Equal(t, int64(0), cgLen, "compaction group length; Plan()")
tsmP, pLenP := cp.Plan(time.Now().Add(-time.Second))
require.Zero(t, len(tsmP), "compaction group; Plan()")
require.Zero(t, pLenP, "compaction group length; Plan()")

cgroup, cgLen, genLen := cp.PlanOptimize()
require.Equal(t, []tsm1.CompactionGroup(nil), cgroup, "compaction group")
@@ -2656,14 +2660,15 @@ func TestDefaultPlanner_FullyCompacted_LargeSingleGenerationNoMaxAggrBlocks(t *t
require.True(t, compacted, "is fully compacted")

_, cgLen := cp.PlanLevel(1)
require.Equal(t, int64(0), cgLen, "compaction group length; PlanLevel(1)")
require.Zero(t, cgLen, "compaction group length; PlanLevel(1)")
_, cgLen = cp.PlanLevel(2)
require.Equal(t, int64(0), cgLen, "compaction group length; PlanLevel(2)")
require.Zero(t, cgLen, "compaction group length; PlanLevel(2)")
_, cgLen = cp.PlanLevel(3)
require.Equal(t, int64(0), cgLen, "compaction group length; PlanLevel(3)")
require.Zero(t, cgLen, "compaction group length; PlanLevel(3)")

_, cgLen = cp.Plan(time.Now().Add(-1))
require.Equal(t, int64(0), cgLen, "compaction group length; Plan()")
tsmP, pLenP := cp.Plan(time.Now().Add(-time.Second))
require.Zero(t, len(tsmP), "compaction group; Plan()")
require.Zero(t, pLenP, "compaction group length; Plan()")

cgroup, cgLen, genLen := cp.PlanOptimize()
require.Equal(t, []tsm1.CompactionGroup(nil), cgroup, "compaction group")
@@ -2712,19 +2717,20 @@ func TestDefaultPlanner_FullyCompacted_ManySingleGenLessThen2GBMaxAggrBlocks(t *
require.True(t, compacted, "is fully compacted")

_, cgLen := cp.PlanLevel(1)
require.Equal(t, int64(0), cgLen, "compaction group length; PlanLevel(1)")
require.Zero(t, cgLen, "compaction group length; PlanLevel(1)")
_, cgLen = cp.PlanLevel(2)
require.Equal(t, int64(0), cgLen, "compaction group length; PlanLevel(2)")
require.Zero(t, cgLen, "compaction group length; PlanLevel(2)")
_, cgLen = cp.PlanLevel(3)
require.Equal(t, int64(0), cgLen, "compaction group length; PlanLevel(3)")
require.Zero(t, cgLen, "compaction group length; PlanLevel(3)")

_, cgLen = cp.Plan(time.Now().Add(-1))
require.Equal(t, int64(0), cgLen, "compaction group length; Plan()")
tsmP, pLenP := cp.Plan(time.Now().Add(-time.Second))
require.Zero(t, len(tsmP), "compaction group; Plan()")
require.Zero(t, pLenP, "compaction group length; Plan()")

cgroup, cgLen, genLen := cp.PlanOptimize()
require.Equal(t, []tsm1.CompactionGroup(nil), cgroup, "compaction group")
require.Equal(t, int64(0), cgLen, "compaction group length")
require.Equal(t, int64(0), genLen, "generation count")
require.Zero(t, cgLen, "compaction group length")
require.Zero(t, genLen, "generation count")
}

// This test is added to account for a single generation that has a group size
Expand Down Expand Up @@ -2768,14 +2774,15 @@ func TestDefaultPlanner_FullyCompacted_ManySingleGenLessThen2GBNotMaxAggrBlocks(
require.False(t, compacted, "is fully compacted")

_, cgLen := cp.PlanLevel(1)
require.Equal(t, int64(0), cgLen, "compaction group length; PlanLevel(1)")
require.Zero(t, cgLen, "compaction group length; PlanLevel(1)")
_, cgLen = cp.PlanLevel(2)
require.Equal(t, int64(0), cgLen, "compaction group length; PlanLevel(2)")
require.Zero(t, cgLen, "compaction group length; PlanLevel(2)")
_, cgLen = cp.PlanLevel(3)
require.Equal(t, int64(0), cgLen, "compaction group length; PlanLevel(3)")
require.Zero(t, cgLen, "compaction group length; PlanLevel(3)")

_, cgLen = cp.Plan(time.Now().Add(-1))
require.Equal(t, int64(0), cgLen, "compaction group length; Plan()")
tsmP, pLenP := cp.Plan(time.Now().Add(-time.Second))
require.Zero(t, len(tsmP), "compaction group; Plan()")
require.Zero(t, pLenP, "compaction group length; Plan()")

_, cgLen, genLen := cp.PlanOptimize()
require.Equal(t, int64(1), cgLen, "compaction group length")
@@ -2853,21 +2860,82 @@ func TestDefaultPlanner_FullyCompacted_ManySingleGen2GBLastLevel2(t *testing.T)
}

_, cgLen := cp.PlanLevel(1)
require.Equal(t, int64(0), cgLen, "compaction group length; PlanLevel(1)")
require.Zero(t, cgLen, "compaction group length; PlanLevel(1)")
_, cgLen = cp.PlanLevel(2)
require.Equal(t, int64(0), cgLen, "compaction group length; PlanLevel(2)")
require.Zero(t, cgLen, "compaction group length; PlanLevel(2)")
_, cgLen = cp.PlanLevel(3)
require.Equal(t, int64(0), cgLen, "compaction group length; PlanLevel(3)")
require.Zero(t, cgLen, "compaction group length; PlanLevel(3)")

_, cgLen = cp.Plan(time.Now().Add(-1))
require.Equal(t, int64(0), cgLen, "compaction group length; Plan()")
tsmP, pLenP := cp.Plan(time.Now().Add(-time.Second))
require.Zero(t, len(tsmP), "compaction group; Plan()")
require.Zero(t, pLenP, "compaction group length; Plan()")

tsm, cgLen, genLen := cp.PlanOptimize()
require.Equal(t, int64(1), cgLen, "compaction group length")
require.Equal(t, int64(3), genLen, "generation count")
require.Equal(t, len(expFiles), len(tsm[0]), "tsm files in compaction group")
}

// This test checks that any TSM generations with block counts over the default
// maximum are skipped when planned with the default planner's Plan() and PlanLevel().
devanbenz marked this conversation as resolved.
func TestDefaultPlanner_PlanOverAggressiveBlocks(t *testing.T) {
data := []tsm1.FileStat{
{
Path: "01-02.tsm1",
Size: 251 * 1024 * 1024,
},
{
Path: "01-03.tsm1",
Size: 1 * 1024 * 1024,
},
{
Path: "02-02.tsm1",
Size: 251 * 1024 * 1024,
},
{
Path: "02-03.tsm1",
Size: 1 * 1024 * 1024,
},
{
Path: "03-02.tsm1",
Size: 251 * 1024 * 1024,
},
{
Path: "03-03.tsm1",
Size: 1 * 1024 * 1024,
},
}

fs := &fakeFileStore{
PathsFn: func() []tsm1.FileStat {
return data
},
}
blocks := []int{
tsdb.AggressiveMaxPointsPerBlock,
tsdb.AggressiveMaxPointsPerBlock,
tsdb.AggressiveMaxPointsPerBlock,
tsdb.AggressiveMaxPointsPerBlock,
tsdb.AggressiveMaxPointsPerBlock,
tsdb.AggressiveMaxPointsPerBlock,
}
err := fs.SetBlockCounts(blocks)
require.NoError(t, err, "SetBlockCounts")

cp := tsm1.NewDefaultPlanner(fs, tsdb.DefaultCompactFullWriteColdDuration)

_, cgLen := cp.PlanLevel(1)
require.Zero(t, cgLen, "compaction group length; PlanLevel(1)")
_, cgLen = cp.PlanLevel(2)
require.Zero(t, cgLen, "compaction group length; PlanLevel(2)")
_, cgLen = cp.PlanLevel(3)
require.Zero(t, cgLen, "compaction group length; PlanLevel(3)")

tsmP, pLenP := cp.Plan(time.Now().Add(-time.Second))
require.Zero(t, len(tsmP), "compaction group; Plan()")
require.Zero(t, pLenP, "compaction group length; Plan()")
}

func TestDefaultPlanner_PlanOptimize_Tombstones(t *testing.T) {
data := []tsm1.FileStat{
{