Garbage collection thread safety issues on 1.11 #56871
If I run the same code on 1.10, I don't see any hanging, so I think this is a regression. (When I say "hang", I mean it completely freezes up and I can't even interrupt it.)
Just tried backporting the simpler fix for #56801, but it doesn't seem to fix this issue on 1.11, so this error seems independent.
Did some preliminary investigation. Looks like it's hanging in the "wait-for-the-world" phase. The patch below prints on either side of the stop-the-world wait, so if "Before STW" appears without "After STW", the collector is stuck waiting for some thread to reach a safepoint. Patch:

```diff
diff --git a/src/gc.c b/src/gc.c
index fd9ad71d8a..5e2f34c81c 100644
--- a/src/gc.c
+++ b/src/gc.c
@@ -3880,7 +3880,9 @@ JL_DLLEXPORT void jl_gc_collect(jl_gc_collection_t collection)
     jl_fence();
     gc_n_threads = jl_atomic_load_acquire(&jl_n_threads);
     gc_all_tls_states = jl_atomic_load_relaxed(&jl_all_tls_states);
+    jl_safe_printf("Before STW\n");
     jl_gc_wait_for_the_world(gc_all_tls_states, gc_n_threads);
+    jl_safe_printf("After STW\n");
     JL_PROBE_GC_STOP_THE_WORLD();
     uint64_t t1 = jl_hrtime();
```

Results:
What OS/chip are you running on? I'm having a hard time reproducing on Linux + AMD x86.
I could reproduce reliably on a stock M2, at least (macOS + aarch64).
The bug was in Darwin-specific code paths (#53868).
P.S. Could we add this as a test? I can open a PR if so.
I think the garbage collector might have some thread-safety issues on 1.11?
After running into various GC issues with SymbolicRegression.jl and PySR (reported in #56735 and #56759), I've been trying to break the GC with more minimal examples in the hope of generating fixes.
Here is one such example I found that results in the GC completely freezing.
The idea is to spawn multiple tasks that allocate and occasionally trigger GC.
This pushes the GC into running sweeps concurrently and potentially reveals data races in the parts of the code that modify the allocation map or page metadata.
If you run this loop ~2-3 times in a REPL, you should hit a freeze.
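The snippet itself didn't survive in this thread, so here is a minimal sketch of the kind of stress loop described. This is a hypothetical reconstruction, not the original code: it assumes tasks spawned with `Threads.@spawn` that allocate heavily and periodically call `GC.gc()`, run with multiple threads (e.g. `julia -t 8`).

```julia
using Base.Threads

# Hypothetical reconstruction of the stress loop described above, not the
# exact code from the report. Each task churns the allocator with
# short-lived objects and occasionally forces a full collection, so
# sweeps run while other threads are still allocating.
function gc_stress(; ntasks = 2 * nthreads(), iters = 10_000)
    @sync for _ in 1:ntasks
        Threads.@spawn for i in 1:iters
            x = [rand(64) for _ in 1:32]  # short-lived garbage
            if i % 500 == 0
                GC.gc()                   # periodically trigger a collection
            end
        end
    end
end

gc_stress()  # call this 2-3 times in the REPL to try to hit the freeze
```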
I wonder if this is from the alloc map and page metadata being accessed by multiple GC threads (??). Or maybe it's just from the malloc memory leak identified in the other thread.
I didn't see obvious locking here, so I think there might be thread races: if multiple collector threads modify metadata concurrently (like changing page states), that could cause corruption.
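To make that suspected failure mode concrete, here is a small demo of unsynchronized read-modify-write in general; it is not the actual GC metadata code. Non-atomic updates from multiple threads lose writes, while atomic updates don't:

```julia
using Base.Threads

# A plain Ref stands in for shared metadata; Atomic is the fixed version.
counter = Ref(0)
atomic_counter = Atomic{Int}(0)

@sync for _ in 1:nthreads()
    Threads.@spawn for _ in 1:100_000
        counter[] += 1                  # racy non-atomic read-modify-write
        atomic_add!(atomic_counter, 1)  # synchronized update
    end
end

println(counter[])         # typically less than nthreads() * 100_000
println(atomic_counter[])  # always exactly nthreads() * 100_000
```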
Now, the good news is that this seems to be fixed on nightly. I'm not sure what fixed it; maybe someone can point me to an issue I missed that has since been resolved. (Though it doesn't seem to be fixed yet on the `release-1.11` branch.) In any case, it might be useful to have some of these tests in CI, so such issues don't show up again? A sketch of what that could look like follows.
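A hedged sketch of such a CI test, assuming the stress loop above is saved as `gc_stress.jl` (the file name, thread count, and timeout are all placeholders): run the loop in a subprocess and fail if it doesn't exit in time.

```julia
using Test

@testset "GC stress loop terminates" begin
    # Assumed to contain the stress loop sketched above.
    script = joinpath(@__DIR__, "gc_stress.jl")
    proc = run(`$(Base.julia_cmd()) --threads=8 $script`; wait = false)
    # Poll until the subprocess exits, with a generous time budget.
    status = timedwait(() -> process_exited(proc), 300.0)
    status === :ok || kill(proc)  # don't leave a hung process behind
    @test status === :ok
end
```

Running it in a subprocess matters here: a hang in the GC would otherwise freeze the test runner itself instead of producing a failure.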