
Refactor of ActiveDefrag to reduce latencies #1242

Open · wants to merge 1 commit into base: unstable

Conversation

@JimB123 (Contributor) commented Oct 31, 2024

Refer to: #1141

This update refactors the defrag code to:

  • Make the overall code more readable and maintainable
  • Reduce latencies incurred during defrag processing

With this update, the defrag cycle time is reduced to 500us, with more frequent cycles. This results in much more predictable latencies, with a dramatic reduction in tail latencies.

(See #1141 for more complete details.)

This update focuses mostly on the high-level processing and does NOT address lower-level functions that aren't currently time-bound (e.g. activeDefragSdsDict() and moduleDefragGlobals()). These are out of scope for this update and are left for a future update.

I fixed kvstoreDictLUTDefrag because it was using up to 7ms on a CME single shard.

During unit tests, the following max latencies were measured (in verbose mode):

| Unit test name | Old latency (ms) | New latency (ms) |
|---|---|---|
| Active defrag main dictionary: cluster | 8 | 0 |
| Active defrag big keys: cluster | 8 | 0 |
| Active defrag main dictionary: standalone | 8 | 0 |
| Active defrag - AOF loading | 20 | 0 |
| Active defrag big keys: standalone | 8 | 0 |
| Active defrag big list: standalone | 8 | 0 |

In addition, the following test was run on both old and new versions of the software:

# Create a fragmented host
./src/valkey-benchmark -r 10000000 -n 10000000 -d 3 -t set 
./src/valkey-benchmark -r 9000000 -n 10000000 -d 11 -t set 
./src/valkey-benchmark -r 8000000 -n 10000000 -d 19 -t set 
./src/valkey-benchmark -r 7000000 -n 10000000 -d 27 -t set

# Enable defrag while running some traffic
./src/valkey-cli config set activedefrag yes; ./src/valkey-benchmark -r 7000000 -c 1 -n 1000000 -l -t get

Configuration was set so that both old/new clusters would use 10% defrag CPU.
Defrag time OLD: 120 sec
Defrag time NEW: 105 sec
Times were based on a 5-second poll. (I didn't have logs running.)

The improvement in run time is believed to be due to the unskewed scheduling of the new code, which delivers a more accurate 10% of the CPU.

This is the OLD distribution for the GET benchmark while defragging:

Latency by percentile distribution:
0.000% <= 0.023 milliseconds (cumulative count 5)
50.000% <= 0.031 milliseconds (cumulative count 953602)
96.875% <= 0.039 milliseconds (cumulative count 980927)
98.438% <= 0.047 milliseconds (cumulative count 990566)
99.219% <= 0.055 milliseconds (cumulative count 993349)
99.609% <= 0.351 milliseconds (cumulative count 996097)
99.805% <= 0.535 milliseconds (cumulative count 998320)
99.902% <= 0.559 milliseconds (cumulative count 999088)
99.951% <= 0.759 milliseconds (cumulative count 999516)
99.976% <= 10.191 milliseconds (cumulative count 999765)
99.988% <= 10.231 milliseconds (cumulative count 999889)
99.994% <= 10.263 milliseconds (cumulative count 999941)
99.997% <= 10.335 milliseconds (cumulative count 999971)
99.998% <= 10.687 milliseconds (cumulative count 999985)
99.999% <= 10.759 milliseconds (cumulative count 999994)
100.000% <= 10.823 milliseconds (cumulative count 999997)
100.000% <= 11.167 milliseconds (cumulative count 999999)
100.000% <= 11.175 milliseconds (cumulative count 1000000)
100.000% <= 11.175 milliseconds (cumulative count 1000000)

This is the equivalent NEW distribution:

Latency by percentile distribution:
0.000% <= 0.023 milliseconds (cumulative count 8)
50.000% <= 0.031 milliseconds (cumulative count 934278)
93.750% <= 0.039 milliseconds (cumulative count 969396)
98.438% <= 0.055 milliseconds (cumulative count 985215)
99.219% <= 0.535 milliseconds (cumulative count 993656)
99.609% <= 0.543 milliseconds (cumulative count 997369)
99.805% <= 0.551 milliseconds (cumulative count 998365)
99.902% <= 0.583 milliseconds (cumulative count 999070)
99.951% <= 0.783 milliseconds (cumulative count 999515)
99.976% <= 1.023 milliseconds (cumulative count 999775)
99.988% <= 1.047 milliseconds (cumulative count 999903)
99.994% <= 1.071 milliseconds (cumulative count 999945)
99.997% <= 1.175 milliseconds (cumulative count 999970)
99.998% <= 1.295 milliseconds (cumulative count 999985)
99.999% <= 1.527 milliseconds (cumulative count 999994)
100.000% <= 1.543 milliseconds (cumulative count 999997)
100.000% <= 1.551 milliseconds (cumulative count 999999)
100.000% <= 1.735 milliseconds (cumulative count 1000000)
100.000% <= 1.735 milliseconds (cumulative count 1000000)

You can see in the new distribution a very slight increase in latencies up through the 99.951% percentile, in exchange for a huge reduction in tail latencies.

A CONTROL test WITHOUT Active Defrag running shows a very similar distribution with a variation of roughly 500us in the tail latencies (as expected):

Latency by percentile distribution:
0.000% <= 0.023 milliseconds (cumulative count 7)
50.000% <= 0.031 milliseconds (cumulative count 958519)
96.875% <= 0.039 milliseconds (cumulative count 983380)
98.438% <= 0.047 milliseconds (cumulative count 991683)
99.219% <= 0.055 milliseconds (cumulative count 993548)
99.609% <= 0.319 milliseconds (cumulative count 996120)
99.805% <= 0.527 milliseconds (cumulative count 998137)
99.902% <= 0.543 milliseconds (cumulative count 999134)
99.951% <= 0.567 milliseconds (cumulative count 999575)
99.976% <= 0.615 milliseconds (cumulative count 999759)
99.988% <= 0.767 milliseconds (cumulative count 999879)
99.994% <= 0.911 milliseconds (cumulative count 999940)
99.997% <= 1.015 milliseconds (cumulative count 999975)
99.998% <= 1.031 milliseconds (cumulative count 999991)
99.999% <= 1.039 milliseconds (cumulative count 999995)
100.000% <= 1.055 milliseconds (cumulative count 999998)
100.000% <= 1.063 milliseconds (cumulative count 999999)
100.000% <= 1.071 milliseconds (cumulative count 1000000)
100.000% <= 1.071 milliseconds (cumulative count 1000000)

codecov bot commented Oct 31, 2024

Codecov Report

Attention: Patch coverage is 94.70405% with 17 lines in your changes missing coverage. Please review.

Project coverage is 70.77%. Comparing base (a37dee4) to head (825da4f).
Report is 91 commits behind head on unstable.

| Files with missing lines | Patch % | Lines |
|---|---|---|
| src/defrag.c | 94.49% | 17 Missing ⚠️ |
Additional details and impacted files
@@             Coverage Diff              @@
##           unstable    #1242      +/-   ##
============================================
+ Coverage     70.62%   70.77%   +0.15%     
============================================
  Files           114      114              
  Lines         61694    63182    +1488     
============================================
+ Hits          43569    44718    +1149     
- Misses        18125    18464     +339     
| Files with missing lines | Coverage Δ |
|---|---|
| src/ae.c | 75.00% <100.00%> (+0.09%) ⬆️ |
| src/config.c | 78.83% <ø> (+0.13%) ⬆️ |
| src/dict.c | 97.33% <100.00%> (+0.07%) ⬆️ |
| src/kvstore.c | 96.31% <100.00%> (+0.16%) ⬆️ |
| src/server.c | 87.69% <100.00%> (-0.97%) ⬇️ |
| src/server.h | 100.00% <ø> (ø) |
| src/defrag.c | 91.71% <94.49%> (+5.95%) ⬆️ |

... and 86 files with indirect coverage changes

@JimB123 force-pushed the active-defrag branch 4 times, most recently from ffdf32a to 4054b1f on November 5, 2024 at 18:44
@madolson (Member) commented Nov 5, 2024

@zvi-code Would you mind also helping take a look at this PR?

@zvi-code commented Nov 7, 2024

@JimB123, I skimmed through the code at a very high level; it looks good (and the results you posted are very promising!). I will follow up with a more detailed review of the code.

Could you please provide an overview of how the new running time is calculated? Specifically, you removed server.hz from the time-limit calculation. Are you saying we should stop using it in other places (like expiry)? Is there a tradeoff? Could there be an imbalance between cron jobs that are scheduled based on different mechanisms? Also, can you benchmark your change with a large number of clients? (I think dynamic-hz is set to true by default.)

@JimB123 (Contributor, Author) commented Nov 7, 2024

@zvi-code The Valkey event loop supports creation of various timer events. server.hz is used to control the rate of one specific timer event, the serverCron timer. This timer handles various periodic maintenance tasks and is, by default, executed every 100ms (10 times/sec, as indicated by the server.hz value). It makes sense to have a single timer for these periodic maintenance tasks, as we don't need a separate timer for every little periodic action.

But this serverCron is not the only timer. Evictions, for example, use a timer with a 0ms delay - it fires continuously (every event loop cycle) if we are over maxmemory and critical eviction processing is required. Modules have the ability to create various timers. (At Amazon, we have other internal timers.)

For defrag, there are two things that need to happen. First, we need to decide whether to begin defragging. It makes sense to decide this in the main serverCron; we don't need a special timer just to check whether we should perform defrag. However, once that decision is made and we begin the defrag process, it does not make sense to perform the defragmentation from the 100ms serverCron. A dedicated timer is warranted, just as is done with eviction.

In the old code, if we target 10% of the CPU, that was done by letting defrag run for (a continuous) 10ms every time the 100ms serverCron fired. That's just a bad idea!

In the new code, with the same 10% CPU target, we run the defrag job for only 500us, but schedule it on its own dedicated timer so that it can run more often. The code modulates the frequency rather than the duration. (In extreme cases, anti-starvation must still modulate the duration, as starvation impacts the frequency.)

are you saying we should stop using [server.hz] in other places (like in expiry) ?

I haven't looked at the lazy expiration before, but looking at it now, that's exactly what I'm saying. We shouldn't be doing ANYTHING in serverCron that attempts to use a measured percentage of the CPU. The server cron should remain a fast and predictable timer.

I think dynamic_hz is set to true by default

OK, wow, this is another thing I haven't looked at before. IMO, dynamic_hz is misguided. We have this nice periodic job (serverCron) that runs every 100ms. But one of the things it does is to perform certain actions that iterate over a variable list of clients. It doesn't want to create latency - so it only iterates over a few clients each cycle. BUT - now it might take a very long time to iterate over the list of clients, given the 100ms frequency. So we've decided to perform serverCron more often. This means that we are performing a lot of miscellaneous tasks (like checking if we need to defrag) more often - just because there are more clients? That doesn't make sense. A better solution would be to have a separate timer for a clients-cron. And THAT timer could be run at a higher rate (and dynamically adjusted) based on the number of clients.

This dynamic_hz creates the very weird behavior that with the OLD defrag, if there are more clients, latency caused by defrag would be reduced. That's a super strange behavior. In the new defrag code, the number of clients doesn't affect defrag latencies. It's now consistently low. In fact, the default 500us setting will result in latencies that are half of the lowest possible (1%) setting on the old defrag (at the standard HZ rate).

@madolson self-requested a review on November 7, 2024 at 20:26
@madolson (Member) left a comment
Mostly looks good, much more readable than before! Some minor comments.

(Review comments left on src/ae.c, valkey.conf, and src/defrag.c.)
@JimB123 (Contributor, Author) left a comment

Replied to some of @madolson comments. Will publish update.

(Replies left on src/ae.c, src/defrag.c, and valkey.conf.)
@madolson (Member) left a comment

LGTM. @zvi-code would still like you to take a look if you have time.

@madolson (Member) commented Nov 12, 2024

@valkey-io/core-team there is a small but major change here, in the form of a config that determines how much time we spend on each active defrag cycle. Please take a look and provide 👍 or 👎. You can see the definition here:

# The time spent (in microseconds) of the periodic active defrag process.  This
# affects the latency impact of active defrag on client commands.  Smaller numbers
# will result in less latency impact at the cost of increased defrag overhead.
# active-defrag-cycle-us 500

Other than that, this change just better amortizes the active-defrag work to reduce latency spikes.

@madolson madolson added release-notes This issue should get a line item in the release notes major-decision-pending Major decision pending by TSC team labels Nov 12, 2024
@zvi-code commented:

@JimB123, I added several comments and will continue to complete the review tomorrow.

serverLog(LL_VERBOSE, "Starting active defrag, frag=%.0f%%, frag_bytes=%zu, cpu=%d%%",
frag_pct, frag_bytes, cpu_pct);
}
return cpu_pct;


This code is very weird; I suggest just setting the value of server.active_defrag_cpu_percent internally, as it was before.

all_stages_finished = 1;
defrag_stage = 0;
}
// This function shouldn't be called unless there's work to do.


I find this confusing: this function is called from the event loop, so it's not as though someone could call it in the wrong state... why is it needed?

* or making it more aggressive. */
run_with_period(1000) {
computeDefragCycles();
static defragKeysCtx ctx; // STATIC - this persists


Why do we need all these statics? Can't we avoid them and carry the context in the already-existing context in privdata (or similar)?

defrag.current_stage = listNodeValue(listFirst(defrag.remaining_stages));
listDelNode(defrag.remaining_stages, listFirst(defrag.remaining_stages));
// Initialize the stage with endtime==0
doneStatus status = defrag.current_stage->stage_fn(0, defrag.current_stage->target, defrag.current_stage->privdata);


Can't we just have an explicit init function that is called when the stage is removed from the list and started? The current approach makes the code less clear and mixes two totally separate logic paths.

} else {
/* The timeproc will go away when we return AE_NOMORE.
* Mark this now, indicating a normal termination. */
defrag.timeproc_id = AE_DELETED_EVENT_ID;


Why is this needed? Don't you override it anyway in endDefragCycle?

* cycle or adjusting the CPU percentage for an existing cycle. */
if (hasActiveChildProcess()) return;

server.active_defrag_cpu_percent = computeDefragCpuPercent();


This will make us query jemalloc (in zmalloc_get_allocator_info) every time databasesCron is called; the previous code tried to avoid this frequent call. Is this change intentional?

long long timeproc_id;
monotime timeproc_end_time;
long timeproc_overage_us;
};


Can we have an explanation of the structs and their members? And can all the structs be defined at the beginning, together with the documentation you already have there?

monotime timeproc_end_time;
long timeproc_overage_us;
};
static struct DefragContext defrag;


I would also move this to the beginning.
