CORE-8618 crash_tracker: record uncaught startup exceptions #24854
base: dev
Conversation
CI test results
test results on build #60910
test results on build #60955
test results on build #61003
force-push: rebase (compare 1dc3a86 to 09ccff8)
this is looking pretty cool!
_prepared_cd.crash_message.resize(4096, '\0');
_prepared_cd.stacktrace.resize(4096, '\0');
_prepared_cd.addition_info.resize(4094, '\0');
`\0` is the default fill value for `resize(size)`, so the explicit second argument is redundant.
// Create the crash recorder file
auto f = co_await ss::open_file_dma(
  _crash_report_file_name.c_str(),
  ss::open_flags::create | ss::open_flags::rw);
Consider adding `truncate` and `exclusive` to these open flags.
_crash_report_file_name.c_str(),
ss::open_flags::create | ss::open_flags::rw);
co_await f.close();
co_await ss::sync_directory(_crash_report_file_name.parent_path().string());
are you protecting against a certain scenario by syncing the directory here? a comment would probably be helpful since it's likely important?
I've added this comment:
Sync the parent dir to ensure that the new file is observable to the ::open() call below and to later restarts of the process
Let me know if that explains it well enough.
it doesn't hurt anything, but you don't really need to do the sync for these cases--it's only good for if the actual node crashes.
for (const auto& frag : _serde_output) {
    auto written = ::write(_fd, frag.get(), frag.size());
    if (written == -1) {
        return false;
same here?
I wouldn't expect us to be able to continue writing to the file descriptor if an earlier ::write() call failed. Am I missing something here? That said, I also don't see any problem with continuing to try (the man page doesn't say this would be an issue), so I've changed the code to keep trying subsequent writes.
co_await ss::sync_directory(
  _crash_report_file_name.parent_path().string());
sync is probably unnecessary, but maybe you could provide a little context on what you're trying to protect against?
This is the comment I've added:
Sync the parent dir to ensure that later restarts of the process cannot observe this file
Let me know if it explains it well enough.
visibility is handled by the kernel. process restarts don't need the sync. it doesn't hurt anything, except performance if you were doing many syncs.
src/v/crash_tracker/recorder.cc
Outdated
"{}_{}{}", time_now, random_int, crash_report_suffix);
if (co_await ss::file_exists(try_name.string())) {
    // Try again in the rare case of a collision
    co_await ss::sleep(1ms);
what's the sleep for? you're not going to try again with the same filename, right?
Yeah, this is not needed. I've removed it now.
std::lock_guard<ss::util::spinlock> g(_writer_lock);
co_await _writer.release();
can you comment on the use of a spinlock here compared to a blocking mutex? generally i would think that holding a spinlock for anything other than the tiniest of cpu-bound critical sections (and release here is performing I/O) would not perform as well as a blocking mutex where the kernel can put a blocking thread to sleep and understand the chain of dependencies.
I've added this comment now:
The writer has shared state, so accessing it from multiple threads
is guarded by this lock. We are using the async-signal safe
ss::util::spinlock instead of more efficient locking mechanisms like the
seastar primitives or std::mutex to ensure that we can safely use the
lock in a signal handler context.
This lock will be used in signal handlers in a follow-up PR so it needs to be async-signal safe. The pthread mutex isn't: https://man7.org/linux/man-pages/man3/pthread_mutex_lock.3.html.
The standard seastar concurrency primitives are also not available in the signal handler because the signal handler runs outside of the reactor. ss::util::spinlock
is specifically an async-signal safe spinlock that seastar uses in its signal handlers: https://github.com/scylladb/seastar/blob/c3f33c116ecde6f37cea6f656e1dfed2a4cd26a2/include/seastar/util/spinlock.hh#L85-L88
I am not worried about the performance overhead of the spinlock because all of the code paths where the lock is used are outside of performance-critical sections (startup, shutdown, mid-crash), and I don't expect the lock to be contended except in some highly unlikely scenarios (e.g. a segfault while already recording a failure).
// If the recorder is already locked, give up to prevent possible deadlocks.
// For example, an assertion failure while recording a segfault may lead to
// a deadlock if we unconditionally waited for the lock here.
std::unique_lock<ss::util::spinlock> g(_writer_lock, std::try_to_lock);
same here. why a spinlock?
Answered here: #24854 (comment)
force-push: address code review feedback (compare 09ccff8 to c7e2ba2)
Defines a `${datadir}/crash_reports` directory to store crash reports. It needs to be special-cased in the `compute_storage.py` script to ensure it isn't mistaken for a log file directory.
Define a custom exception for `crash_loop_limit_reached` to allow pattern matching on it in a follow-up commit. We specifically do not want to record `crash_loop_limit_reached` errors as crashes because they are generally not informative. Recording them as real crashes would build up garbage on disk, which would lead to real crash logs expiring earlier.
Introduces the core stateful writer which is used to write out `crash_description` serde objects to crash report files. It is able to write out the crash reports in an async-signal-safe way by:
* using only async-signal-safe syscalls
* pre-allocating all the memory necessary to construct, serialize and write our `crash_description` objects to disk
* using only non-allocating methods

The `prepared_writer` is hooked up to the `recorder` in a later commit of this PR. Ref: https://man7.org/linux/man-pages/man7/signal-safety.7.html
Implement the logic to choose a unique crash log file name, and hook up the code for preparing the crash recording code and the crash file cleanup code on clean shutdown.
Implement the recording of startup exceptions into crash files.
Add some simple ducktape tests to assert that crash reports are being generated (or not generated) as expected. The contents of the crash files are going to be validated as a follow-up once the contents are actually printed to logs.
force-push: rebased to dev now that the base PR (#24828) merged (compare c7e2ba2 to cfae5ae)
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
lgtm, just one remaining question
void prepared_writer::write() {
    vassert(_state == state::filled, "Unexpected state: {}", _state);
    _state = state::written;
Is this transition correct if `try_write_crash()` fails? Should failure be represented in the state machine?
> Is this transition correct if `try_write_crash()` fails?

Yes, it's correct, though I get that the naming is a bit misleading. Maybe `consumed` would be a better name to represent this state.

> Should failure be represented in the state machine?

We could add a separate `failure` state in the state machine, but at the moment it is identical to the `written` state, so I think it's simpler as is.
cd.stacktrace.begin() + pos,
cd.stacktrace.size() - pos,
nitpick: You could use std::next/prev if there's a lint complaint about pointer arithmetic here.
Nice, thank you! Let me carry this onto a follow-up PR unless any more changes come up for this PR.
while (written < frag.size()) {
    auto res = ::write(
      _fd, frag.get() + written, frag.size() - written);
    if (res == -1) {
did you consider retrying on EINTR? it's niche, but still possible. and you did state that you were avoiding seastar I/O because you cared about signal handling...
auto f = co_await ss::open_file_dma(
  _crash_report_file_name.c_str(),
  ss::open_flags::create | ss::open_flags::rw | ss::open_flags::truncate
  | ss::open_flags::exclusive);
my original comment about using `exclusive` here was actually a note to myself to revisit after finishing the review, but i forgot. the problem with `exclusive` is that this open will fail if the file already exists on disk. is that going to be an issue, or is exclusive behavior desired? my hunch is that you may not want it, but the note to myself was just to ask if you wanted it.
co_await ss::sync_directory(
  _crash_report_file_name.parent_path().string());
visibility is handled by the kernel. process restarts don't need the sync. it doesn't hurt anything, except performance if you were doing many syncs.
@pgellert the note about the exclusive flag is probably the important one
This PR adds functionality to record crashes caused by an exception during startup as crash reports to disk. This is the first step to implement the core crash recording project which aims to persist information about redpanda broker crashes to disk to help debugging.
The PR implements the core functionality of preparing state to allow writing crash reports in an async-signal-safe way, so that this code can be reused in signal handlers in follow-up PRs.
The crash reports are going to be stored under the data directory. The filenames are constructed as:
`${datadir}/crash_reports/${broker_start_time_ms}_${random}.crash`

Reports are serialized using `serde` and `iobuf` (with pre-reserved memory), and the implementation relies on their allocation-free behaviour to ensure that writing crash files in signal handlers is reliable even in OOM scenarios. (This will become important in a follow-up PR when the signal handlers are actually hooked up.)

To write the crash reports, we need to use the low-level `open()` function here instead of the seastar API or higher-level C++ primitives because we need to be able to manipulate the file using async-signal-safe, allocation-free functions inside signal handlers. Ref: https://man7.org/linux/man-pages/man7/signal-safety.7.html

Fixes https://redpandadata.atlassian.net/browse/CORE-8618
Backports Required
Release Notes