Wait for handover state update before serving the requests #7028

Open · wants to merge 4 commits into main

Conversation

@yux0 (Contributor) commented Dec 20, 2024

What changed?

Wait for handover state update before serving the requests.

Why?

We need to wait on the handover state update before the redirection interceptor runs, so the request is served only after the replication state is updated and the interceptor routes it to the correct endpoint.
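Roughly, the new interceptor is intended to behave like the sketch below. This is a sketch only: the struct fields, `namespaceFromRequest`, and `waitForHandoverComplete` are placeholders inferred from the diff hunks in this PR, not the actual implementation.

```go
package interceptor

import (
	"context"

	"google.golang.org/grpc"
)

// NamespaceHandoverInterceptor is a sketch of the interceptor added in this PR.
// Field and helper names are placeholders inferred from the quoted diff hunks.
type NamespaceHandoverInterceptor struct {
	enabledForNS            func(ns string) bool                       // feature flag / dynamic config
	waitForHandoverComplete func(ctx context.Context, ns string) error // blocks while ns is in handover
}

func (i *NamespaceHandoverInterceptor) Intercept(
	ctx context.Context,
	req interface{},
	info *grpc.UnaryServerInfo,
	handler grpc.UnaryHandler,
) (interface{}, error) {
	ns := namespaceFromRequest(req) // placeholder: extract the namespace name from the request
	if i.enabledForNS(ns) {
		// Block until the namespace leaves REPLICATION_STATE_HANDOVER (or the context
		// expires), so the downstream redirection interceptor sees the post-failover
		// replication state and routes the request to the correct endpoint.
		if err := i.waitForHandoverComplete(ctx, ns); err != nil {
			return nil, err
		}
	}
	return handler(ctx, req)
}

func namespaceFromRequest(req interface{}) string {
	// placeholder: the real code reads the namespace from the request payload
	return ""
}
```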

How did you test it?

TODO

Potential risks

Documentation

Is hotfix candidate?

@yux0 yux0 requested a review from a team as a code owner December 20, 2024 23:57
@yux0 yux0 requested review from yycptt and hai719 December 20, 2024 23:57
err := ni.checkReplicationState(nsEntry, info.FullMethod)
if err != nil {
var nsErr error
nsEntry, nsErr = ni.namespaceRegistry.RefreshNamespaceById(nsEntry.ID())
Member

Hmm, this will keep refreshing the namespace? I mean, all requests will perform a refresh upon retry.

Also, if the handover error comes from the history service, the refresh here is unnecessary.

Contributor Author

Makes sense. We can also just wait for the ns refresh instead of doing this proactive refresh.

return err
}
return ni.checkReplicationState(namespaceEntry, fullMethod)
// Only retry on ErrNamespaceHandover, other errors will be handled by the retry interceptor
Member

Shall we just let the retry interceptor handle the error, and have this interceptor block until the ns is no longer in the handover state?

Contributor Author

I did not go with this because the retry interceptor should be the innermost interceptor. If that is the case, can I add a handover interceptor after the retry interceptor?

Member

I couldn't remember why we moved the retry interceptor to be the innermost interceptor, but I think there must be a reason. I think we should fully understand that to make sure the retry logic here won't cause any issues.

> can I add a handover interceptor after the retry interceptor?

Hmm, then why not just retry the handover error in the retry interceptor?

I guess I was proposing something different in my comment, i.e. block on the ns state change instead of retrying (by registering a callback to the namespace registry).
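For reference, the "block on ns state change" idea would look roughly like this, a sketch only, assuming the RegisterStateChangeCallback/UnregisterStateChangeCallback API quoted later in this PR; imports are omitted and the callback key type, handover check, and channel handling are illustrative.

```go
// waitForHandoverEnd blocks until nsID leaves the handover state or ctx ends.
func waitForHandoverEnd(ctx context.Context, registry namespace.Registry, nsID namespace.ID) error {
	cbID := uuid.New()
	stateChanged := make(chan struct{}, 1)
	registry.RegisterStateChangeCallback(cbID, func(ns *namespace.Namespace, deletedFromDb bool) {
		// Signal once the target namespace is no longer in REPLICATION_STATE_HANDOVER.
		if ns.ID() == nsID && ns.ReplicationState() != enumspb.REPLICATION_STATE_HANDOVER {
			select {
			case stateChanged <- struct{}{}:
			default:
			}
		}
	})
	defer registry.UnregisterStateChangeCallback(cbID)

	select {
	case <-ctx.Done():
		return ctx.Err()
	case <-stateChanged:
		return nil
	}
}
```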

Contributor Author

I cannot wait for the ns callback because the error might be returned from the history nodes, and there is no callback on the frontend in that case.

@@ -215,10 +215,14 @@ func (ti *TelemetryInterceptor) RecordLatencyMetrics(ctx context.Context, startT
userLatencyDuration = time.Duration(val)
metrics.ServiceLatencyUserLatency.With(metricsHandler).Record(userLatencyDuration)
}
handoverRetryLatency := time.Duration(0)
if val, ok := metrics.ContextCounterGet(ctx, metrics.NamespaceHandoverRetryLatency.Name()); ok {
handoverRetryLatency = time.Duration(val)
Member

So we don't actually emit a metric for this latency?

Contributor Author

This follows the existing pattern to exclude the backoff retry delay from the no-user latency.
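i.e. something along these lines continuing the hunk above (a sketch; `ServiceLatencyNoUserLatency`, `startTime`, and the exact arithmetic are my reading of the existing pattern, not necessarily the final code):

```go
// Exclude both the user latency and the handover retry delay from the
// "no user latency" measurement, mirroring how userLatencyDuration is handled above.
noUserLatency := time.Since(startTime) - userLatencyDuration - handoverRetryLatency
if noUserLatency < 0 {
	noUserLatency = 0
}
metrics.ServiceLatencyNoUserLatency.With(metricsHandler).Record(noUserLatency)
```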

Contributor Author

We need to chat about this.

Comment on lines 266 to 270
response, err = handler(ctx, req)
if retryCount > 1 {
retryLatency = retryLatency + handoverRetryPolicy.ComputeNextDelay(0, retryCount, nil)
}
retryCount++
Member

It's a bit hard to follow, as it's calculating the retryLatency for the delay taken before the current attempt, but the code lives after the handler call...
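One way to make it read more naturally (a sketch, not the PR's code; `isHandoverError` and the sleep are illustrative placeholders):

```go
var retryLatency time.Duration
for retryCount := 1; ; retryCount++ {
	response, err = handler(ctx, req)
	if !isHandoverError(err) || ctx.Err() != nil { // placeholder check for ErrNamespaceHandover
		break
	}
	// Accumulate exactly the delay we are about to wait before the next attempt,
	// so the bookkeeping sits next to the backoff itself.
	delay := handoverRetryPolicy.ComputeNextDelay(0, retryCount, nil)
	retryLatency += delay
	time.Sleep(delay) // or a context-aware wait
}
```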

@yux0 yux0 changed the title from "Retry handover error during graceful failover" to "Wait for handover state update before serving the requests" on Feb 3, 2025
if namespaceData.ReplicationState() == enumspb.REPLICATION_STATE_HANDOVER {
cbID := uuid.New()
waitReplicationStateUpdate := make(chan struct{})
i.namespaceRegistry.RegisterStateChangeCallback(cbID, func(ns *namespace.Namespace, deletedFromDb bool) {
Member

We probably should change the impl to listen for changes on a specific namespace...
Can be done in a later PR.
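A per-namespace subscription could look roughly like this (purely hypothetical API shape, not what the registry exposes today):

```go
// Hypothetical: subscribe to state changes of a single namespace instead of
// filtering inside a global state-change callback.
type perNamespaceRegistry interface {
	RegisterNamespaceStateChangeCallback(key any, id namespace.ID, cb func(ns *namespace.Namespace))
	UnregisterStateChangeCallback(key any)
}
```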

if i.enabledForNS(namespaceName.String()) {
startTime := i.timeSource.Now()
defer func() {
metrics.HandoverWaitLatency.With(i.metricsHandler).Record(time.Since(startTime))
Member

I guess we only want to emit the metric when the namespace actually waited on the handover state.
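e.g. move the timing into the handover branch so it is only recorded when the request actually blocked (a sketch; names follow the hunk above, structure is illustrative):

```go
if namespaceData.ReplicationState() == enumspb.REPLICATION_STATE_HANDOVER {
	startTime := i.timeSource.Now()
	defer func() {
		// Only requests that actually waited on the handover state reach this point.
		metrics.HandoverWaitLatency.With(i.metricsHandler).Record(time.Since(startTime))
	}()
	// ... register the state-change callback and block until the state changes ...
}
```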

@@ -267,6 +269,7 @@ func GrpcServerOptionsProvider(
namespaceLogInterceptor.Intercept, // TODO: Deprecate this with a outer custom interceptor
metrics.NewServerMetricsContextInjectorInterceptor(),
authInterceptor.Intercept,
namespaceHandoverInterceptor.Intercept,
Member

Can you add some comments explaining why it must be in this place?
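For example (the rationale in the comments below is inferred from the PR description, not copied from the final code):

```go
authInterceptor.Intercept,
// Must run after auth (unauthenticated calls should never block on handover)
// and before redirection, so that by the time the redirection interceptor
// picks a target cluster the post-handover replication state is already visible.
namespaceHandoverInterceptor.Intercept,
redirectionInterceptor.Intercept,
```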

Member

Is the handover state checking logic in StateValidationIntercept removed?

Contributor Author

I did not remove it in this PR since I added the feature flag. Once we are confident, we can remove it.

Member

Do you want to disable the check in StateValidationIntercept if the feature flag is enabled?
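e.g. something like this in StateValidationIntercept (a sketch; the flag accessor name is an assumption):

```go
if !ni.handoverWaitEnabled(nsEntry.Name().String()) {
	// Old behavior: return the handover error here when the new
	// wait-on-handover interceptor is disabled for this namespace.
	if err := ni.checkReplicationState(nsEntry, info.FullMethod); err != nil {
		return nil, err
	}
}
```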

@@ -267,6 +269,7 @@ func GrpcServerOptionsProvider(
namespaceLogInterceptor.Intercept, // TODO: Deprecate this with a outer custom interceptor
metrics.NewServerMetricsContextInjectorInterceptor(),
authInterceptor.Intercept,
namespaceHandoverInterceptor.Intercept,
redirectionInterceptor.Intercept,
Member

I guess theoretically we could retry in the redirection interceptor, since it has the knowledge of whether the redirect will actually happen or not?

Contributor Author

I guess so, but the logic could be weird. What should we do if the redirect will not happen?

case <-waitReplicationStateUpdate:
}
i.namespaceRegistry.UnregisterStateChangeCallback(cbID)
if err != nil {
Member

nit: we don't need this check?

methodName string,
) (waitTime *time.Duration, retErr error) {
if _, ok := allowedMethodsDuringHandover[methodName]; ok {
return
Member

nit: returning nil, nil here is surprising for the caller, I think.
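i.e. an explicit form such as (sketch):

```go
if _, ok := allowedMethodsDuringHandover[methodName]; ok {
	return nil, nil // explicitly: no wait and no error for methods allowed during handover
}
```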
