Iceberg backlog controller #24990

Merged

Conversation

mmaslankaprv
Member

Added a datalake backlog controller to dynamically adjust the datalake
scheduling group shares. A simple proportional controller tracks the
average size of the partition translation backlog and calculates how far
it is from the desired value. The number of shares given to the datalake
scheduling group is proportional to the difference between the desired
and current backlog values.

The rationale behind the controller is that datalake translation is
almost always CPU bound, so the translation rate (translated bytes per
second) can be controlled by giving the translator more CPU time when
required.
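
For concreteness, the update rule amounts to something like the following sketch (the type and field names are illustrative only, not the actual Redpanda types or configuration properties):

    #include <algorithm>

    // Minimal sketch of the proportional controller described above.
    struct backlog_controller_sketch {
        double setpoint_bytes; // desired average per-partition backlog
        double coefficient;    // proportional gain: shares per byte of error
        int min_shares;        // lower bound for the scheduling group shares
        int max_shares;        // upper bound for the scheduling group shares

        int calculate_shares(double avg_backlog_bytes) const {
            // Positive error means the backlog is larger than desired, i.e.
            // the translator is falling behind and needs more CPU time.
            const double error = avg_backlog_bytes - setpoint_bytes;
            const int shares = static_cast<int>(coefficient * error);
            return std::clamp(shares, min_shares, max_shares);
        }
    };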

Backports Required

  • none - not a bug fix
  • none - this is a backport
  • none - issue does not exist in previous branches
  • none - papercut/not impactful enough to backport
  • v24.3.x
  • v24.2.x
  • v24.1.x

Release Notes

Improvements

  • controlling the Iceberg translation backlog by dynamically adjusting the datalake scheduling group shares

@mmaslankaprv mmaslankaprv requested a review from a team as a code owner January 31, 2025 14:53
@mmaslankaprv mmaslankaprv marked this pull request as draft January 31, 2025 14:53
@mmaslankaprv mmaslankaprv force-pushed the icebreg-backlog-controller branch 2 times, most recently from 3f2293e to 29b8682 on January 31, 2025 15:41
@mmaslankaprv mmaslankaprv marked this pull request as ready for review January 31, 2025 15:55
@vbotbuildovich
Collaborator

vbotbuildovich commented Jan 31, 2025

CI test results

test results on build#61441
test_id test_kind job_url test_status passed
rptest.tests.datalake.compaction_test.CompactionGapsTest.test_translation_no_gaps.cloud_storage_type=CloudStorageType.S3 ducktape https://buildkite.com/redpanda/redpanda/builds/61441#0194bd63-58b6-4a65-b64b-e15a9b2bc4d5 FLAKY 1/3
test results on build#61505
test_id test_kind job_url test_status passed
gtest_raft_rpunit.gtest_raft_rpunit unit https://buildkite.com/redpanda/redpanda/builds/61505#0194cc09-7557-4daf-a06c-7196f3f6bdd4 FLAKY 1/2
rptest.tests.compaction_recovery_test.CompactionRecoveryTest.test_index_recovery ducktape https://buildkite.com/redpanda/redpanda/builds/61505#0194cc26-fde4-4df0-8738-d63deeef3870 FLAKY 1/2
rptest.tests.compaction_recovery_test.CompactionRecoveryTest.test_index_recovery ducktape https://buildkite.com/redpanda/redpanda/builds/61505#0194cc56-d88e-4904-8a98-5e01c498521f FLAKY 1/4
rptest.tests.compaction_recovery_test.CompactionRecoveryUpgradeTest.test_index_recovery_after_upgrade ducktape https://buildkite.com/redpanda/redpanda/builds/61505#0194cc26-fde5-4d08-85ca-01d94d1976c4 FLAKY 1/2
rptest.tests.datalake.datalake_e2e_test.DatalakeE2ETests.test_topic_lifecycle.cloud_storage_type=CloudStorageType.S3.filesystem_catalog_mode=True ducktape https://buildkite.com/redpanda/redpanda/builds/61505#0194cc26-fde5-4d08-85ca-01d94d1976c4 FLAKY 1/2
rptest.tests.e2e_shadow_indexing_test.ShadowIndexingWhileBusyTest.test_create_or_delete_topics_while_busy.short_retention=True.cloud_storage_type=CloudStorageType.ABS ducktape https://buildkite.com/redpanda/redpanda/builds/61505#0194cc56-d88c-446f-84f3-b47ecae42e16 FLAKY 1/2
rptest.tests.scaling_up_test.ScalingUpTest.test_scaling_up_with_recovered_topic ducktape https://buildkite.com/redpanda/redpanda/builds/61505#0194cc56-d88c-446f-84f3-b47ecae42e16 FLAKY 1/2

Contributor

@andrwng andrwng left a comment

This is cool! Have you tried it out on a cluster yet? Curious if you have any empirical improvements worth sharing?

@@ -1845,8 +1845,21 @@ uint64_t disk_log_impl::size_bytes_after_offset(model::offset o) const {
uint64_t size = 0;
for (size_t i = _segs.size(); i-- > 0;) {
auto& seg = _segs[i];
if (seg->offsets().get_base_offset() < o) {
break;
auto& offsets = seg->offsets();
Contributor

Just curious if you think this is worth backporting? This function seems to be used in Raft at least, and I'm wondering about the effects on other callers.

Member Author

Hmm, I think we may backport it, although I'm not sure we really need it: this code is only used by the tiered storage code, so maybe it is not required.

Contributor

Ah, I thought it was also used by Raft to compute the bytes lag on learners. In any case, it isn't obvious that it's a big win anyway.

@@ -340,6 +340,32 @@ partition_translator::do_translation_for_range(
co_return std::move(result.value());
}

std::optional<size_t> partition_translator::translation_backlog() const {
Contributor

I'm curious whether there's actually a correlation between translation time and bytes on disk, particularly in the face of compressed batches or complex schemas. I guess the thinking is that larger messages likely imply more complex payloads, but just thinking out loud, I wonder if something as simple as the number of pending records would be usable here.

Not really advocating for any changes, I just thought I'd pose the question, and also point out that this won't work for read replicas or restored/FPM-ed partitions without implementing a similar sizing method for cloud storage.

Member Author

I strongly agree with your point, i.e. the actual amount of work definitely depends on schema complexity, compression, message size, etc. I don't think this changes the relation that backlog work = some_coefficient * bytes_to_translate; in other words, the CPU cycles required for translation will always be roughly proportional to the number of bytes to translate. The parameters you mentioned will influence the proportionality coefficient.
In the future we may modify this function to factor in the other complexities.

Contributor

cpu cycles required for translation will always be proportional to number of bytes to translate

This is what I'm curious about specifically: if two records have the same schema but one has 1 byte per field while the other has 100 bytes per field, does the latter end up taking meaningfully more CPU for parquet writing? I guess size affects things like comparisons, and flush frequency maybe. Just curious if we know if translation is dominated by those factors, vs schema parsing.

src/v/config/configuration.cc (outdated, resolved)
src/v/config/configuration.cc (outdated, resolved)
src/v/datalake/backlog_controller.h (outdated, resolved)
src/v/datalake/backlog_controller.cc (outdated, resolved)
Contributor

@bharathv bharathv left a comment

LGTM generally, a few clarifying questions.

ss::timer<> _sampling_timer;
long double _current_sample{0};

int _min_shares{1};
Contributor

Wondering if the min should default to at least 100.

Member Author

I don't think that's necessary: if the translator can keep up with the work at the lowest possible priority, we can leave it there; otherwise the priority will be increased. From my experiments, when producer throughput fluctuates, even 1 share is enough to promptly translate the remaining backlog while the ingest rate is lower.

update,
update);

_scheduling_group.set_shares(static_cast<float>(update));
Contributor

Is there any use in just adjusting the weights of translation and produce, something like

translation_shares + delta
produce_shares - delta

In its current form I think it shuffles the entire share distribution, which effectively means (for example) that fetch gets fewer shares if there is a translation lag; is that the intention?
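
For concreteness, the suggested alternative is roughly the following (hypothetical names, not existing Redpanda code):

    #include <algorithm>

    // Move a delta of CPU shares between the translation and produce
    // scheduling groups so that their sum stays constant and unrelated
    // groups (e.g. fetch) keep their relative weight. Assumes both groups
    // start at or above min_shares.
    void shift_shares(float& translation_shares, float& produce_shares, float delta) {
        constexpr float min_shares = 1.0f;
        delta = std::clamp(
          delta, min_shares - translation_shares, produce_shares - min_shares);
        translation_shares += delta;
        produce_shares -= delta;
    }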

Member Author

This is a good question. What we change is the ratio between the datalake and the other scheduling groups, which introduces a penalty for other tasks. Let me chat with the perf team to see what they think about that.

src/v/config/configuration.cc (outdated, resolved)
src/v/datalake/datalake_manager.cc (outdated, resolved)
"the "
"backlog controller will try to maintain.",
{.needs_restart = needs_restart::no, .visibility = visibility::tunable},
5_MiB,
Contributor

I was initially considering whether the default size should be slightly larger than 5 MiB. However, a smaller default has the advantage of smaller proportional changes to scheduling weights. This allows the system to correct itself more quickly than with a larger default (e.g., 1 GiB), where weight changes would be drastic and throttling more significant (bigger sawtooth). Therefore, we likely prefer smaller values for faster system adjustment, am I thinking about it correctly?

Member Author

The reason I used a small value here is a bit different. I think the absolute change of the backlog size will always be the same, no matter whether the setpoint is 5_MiB or 1_GiB. I set the value small to promote reading data from the batch cache; in my experiments, the larger the value, the higher the rate of cache misses.

Contributor

Hmm ya, the batch cache hits reasoning makes sense.

Previously, `disk_log_impl::size_bytes_after_offset` returned a very
rough approximation of the partition size after an offset, as the result
accuracy was limited to the segment size. Changed the code to return a
better approximation by using the segment index. When the requested
offset falls within a segment, the closest segment index entry is
queried; the entry contains a position in the segment file, which is an
offset in bytes from the beginning of the file. By subtracting that
position from the file size, the code calculates the remaining part of
the segment with the accuracy of the segment index entry interval, which
by default is 32 KiB.

Signed-off-by: Michał Maślanka <[email protected]>
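
Roughly, the per-segment calculation described above looks like the following standalone sketch (a hypothetical helper, not the actual disk_log_impl code):

    #include <cstdint>
    #include <optional>

    // A segment index entry: the byte position in the segment file of the
    // batch that starts at `offset`. Entries are written roughly every 32 KiB.
    struct index_entry {
        int64_t offset;
        uint64_t file_position;
    };

    // Bytes remaining in a segment after the requested offset, accurate to
    // the index sampling interval. `entry` is the closest index entry at or
    // below the requested offset, if any.
    uint64_t segment_bytes_after_offset(
      uint64_t segment_size_bytes, std::optional<index_entry> entry) {
        if (!entry) {
            // No index entry available: fall back to the old behavior and
            // count the whole segment.
            return segment_size_bytes;
        }
        return segment_size_bytes - entry->file_position;
    }
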
Added a `partition_translator` method that calculates the translation
backlog. The backlog is the number of bytes in a partition that are
ready to be translated but have not yet gone through the translation
process.

Signed-off-by: Michał Maślanka <[email protected]>
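
In other words, per partition the backlog reduces to the log size past the translation point, roughly (hypothetical variable names, reusing the size_bytes_after_offset() approximation from the previous commit):

    // Hypothetical sketch, not the actual partition_translator code:
    uint64_t backlog_bytes = log->size_bytes_after_offset(next_offset_to_translate);
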
Added properties that are responsible for changing the behavior of the
backlog controller.

Signed-off-by: Michał Maślanka <[email protected]>
@mmaslankaprv mmaslankaprv force-pushed the icebreg-backlog-controller branch 2 times, most recently from e8b76f6 to f57a03b on February 3, 2025 11:00
Added a datalake backlog controller to dynamically adjust the `datalake`
scheduling group shares. A simple proportional controller tracks the
average size of the partition translation backlog and calculates how far
it is from the desired value. The number of shares given to the `datalake`
scheduling group is proportional to the difference between the desired
and current backlog values.

The rationale behind the controller is that datalake translation is
almost always CPU bound, so the translation rate (translated bytes per
second) can be controlled by giving the translator more CPU time when
required.

Signed-off-by: Michał Maślanka <[email protected]>
@mmaslankaprv mmaslankaprv force-pushed the icebreg-backlog-controller branch from f57a03b to bf78e62 on February 3, 2025 11:51

Contributor

@andrwng andrwng left a comment

Handling for read replicas and topic recovery can, I think, be handled in some follow-up work.

@mmaslankaprv mmaslankaprv merged commit a24b40d into redpanda-data:dev Feb 4, 2025
16 checks passed
@mmaslankaprv mmaslankaprv deleted the icebreg-backlog-controller branch February 4, 2025 07:48