Add semi-sync watcher to unblock primaries blocked on semi-sync ACKs #17763
base: main
Conversation
Signed-off-by: Manan Gupta <[email protected]>
Review Checklist

Hello reviewers! 👋 Please follow this checklist when reviewing this Pull Request.

- General
- Tests
- Documentation
- New flags
- If a workflow is added or modified
- Backward compatibility
Codecov Report

Attention: Patch coverage is

Additional details and impacted files

@@            Coverage Diff             @@
##             main   #17763      +/-   ##
==========================================
- Coverage   67.96%   67.90%    -0.07%
==========================================
  Files        1586     1587        +1
  Lines      255195   255374      +179
==========================================
- Hits       173451   173415       -36
- Misses      81744    81959      +215

☔ View full report in Codecov by Sentry.
Signed-off-by: Manan Gupta <[email protected]>
}

// NewWatcher creates a new Watcher.
func NewWatcher(env tabletenv.Env) *Watcher {
Can we skip all of this work when durability policy != semi_sync?
I will look into it when I hook this up with the state manager. I think I'll do it in the Open function, but nowhere else, specifically to handle the use case where the durability policy changes in a running cluster.
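As a rough sketch of that idea (not the PR's actual code: the trimmed-down struct and the semiSyncEnabled hook are assumptions for illustration), the gating would live in Open, so the check is re-evaluated whenever the watcher is reopened, e.g. after a durability policy change:

```go
import "sync"

// Illustrative sketch only: this struct and the semiSyncEnabled hook are
// assumptions, not the fields of the real Watcher.
type Watcher struct {
	mu              sync.Mutex
	isOpen          bool
	isBlocked       bool
	isWriting       bool
	semiSyncEnabled func() bool // hypothetical hook into the durability policy
}

// Open starts the watcher, but skips all of the work when the durability
// policy does not use semi-sync. Because the check runs on every Open, a
// policy change in a running cluster takes effect the next time the watcher
// is opened.
func (w *Watcher) Open() {
	w.mu.Lock()
	defer w.mu.Unlock()
	if w.isOpen || !w.semiSyncEnabled() {
		return
	}
	w.isOpen = true
	// ... start the background monitoring goroutine here ...
}
```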
func (w *Watcher) stillBlocked() bool {
	w.mu.Lock()
	defer w.mu.Unlock()
	return w.isOpen && w.isBlocked
}

// checkAndSetIsWriting checks if the watcher is already writing to the DB.
// If it is not, then it sets the isWriting field and signals the caller.
func (w *Watcher) checkAndSetIsWriting() bool {
	w.mu.Lock()
	defer w.mu.Unlock()
	if w.isWriting {
		return false
	}
	w.isWriting = true
	return true
}
I don't think this affects the query hot path, right? If it does, then it might be worth e.g. using 1 byte for the status and using bits in there for isWriting, isBlocked, isOpen etc. so that we can use atomics for reading them, CAS for conditional changes, etc. If nothing else, it's probably worth moving these to atomic.Bool so that e.g. checkAndSetIsWriting can be one atomic call:
return w.isWriting.CompareAndSwap(false, true)
It makes the code simpler, clearer, less contentious, and more efficient.
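A minimal sketch of the atomic.Bool alternative being suggested; the struct below mirrors the field names in the PR but is illustrative, not the actual type:

```go
import "sync/atomic"

// watcherState is an illustrative stand-in for the Watcher's status fields,
// laid out as atomic.Bool values instead of mutex-guarded bools.
type watcherState struct {
	isOpen    atomic.Bool
	isBlocked atomic.Bool
	isWriting atomic.Bool
}

// checkAndSetIsWriting collapses to a single compare-and-swap: only the caller
// that flips isWriting from false to true gets true back.
func (w *watcherState) checkAndSetIsWriting() bool {
	return w.isWriting.CompareAndSwap(false, true)
}

// stillBlocked reads both flags without taking a mutex.
func (w *watcherState) stillBlocked() bool {
	return w.isOpen.Load() && w.isBlocked.Load()
}
```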
I don't think performance is too much of a concern, and the choice between multiple bool fields behind a mutex vs. atomic.Bool, I think, becomes a matter of preference. I, for one, prefer the former because it means only one boolean value transitions at a point in time; with atomic bool values, a value can change even when you've just read it.
Signed-off-by: Manan Gupta <[email protected]>
Signed-off-by: Manan Gupta <[email protected]>
Signed-off-by: Manan Gupta <[email protected]>
Signed-off-by: Manan Gupta <[email protected]>
Description
Related Issue(s)
#17709
Checklist
Deployment Notes