Fix deadlock when a path is re-locked while the cache is locked globally #133
base: master
Conversation
I hate watching PRs sit, but I don't know enough about this to be able to verify the change without reproduction steps. Could I trouble you to tell me how to trigger this bug?
Can anyone test this fix? I like the approach, but I don't know how to test the solution.
It's been a long time, but I dug up this test case that I believe was triggering the problem. It's a little arcane (and may no longer work), but the gist is you run a bunch of workers which all read some random bytes from yas3fs, then, while they're running, you reset the caches. Before this patch, they were locking up every time.
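For anyone who wants to reproduce it without the original script, here is a rough sketch of the setup described above (not the gist itself). The mount path, worker count, and file set are placeholders, and the cache-reset step is left to the operator, since how you trigger it depends on the deployment.

```python
import os
import random
import threading
import time

# Placeholder: point this at a yas3fs mount that already contains some files.
MOUNT_POINT = '/mnt/yas3fs'
FILES = [os.path.join(MOUNT_POINT, name) for name in os.listdir(MOUNT_POINT)]

def reader(stop):
    # Each worker keeps reading a few random bytes from random files on the mount.
    while not stop.is_set():
        path = random.choice(FILES)
        try:
            with open(path, 'rb') as f:
                size = os.fstat(f.fileno()).st_size
                if size:
                    f.seek(random.randrange(size))
                    f.read(4096)
        except IOError:
            pass  # entries can churn while the cache is being reset

stop = threading.Event()
workers = [threading.Thread(target=reader, args=(stop,)) for _ in range(16)]
for w in workers:
    w.start()

# While the readers are running, reset the yas3fs cache from another terminal
# (how to do that is deployment-specific, so it is not shown here). Before this
# patch the readers would hang during the reset; with it they keep running.
time.sleep(60)

stop.set()
for w in workers:
    w.join()
```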
Finally managed to get a test case that reliably triggers the deadlock, and this patch does resolve the issue. Could I get someone who actually knows Python to take a look at the test and make sure I'm not testing nothing? https://gist.github.com/Liath/49fece9fc6dca640a4d9ecedb7b3c4a7
See comments for details on the deadlock condition.
I don't like that it relies on the internal property _RLock__owner, but it was the least invasive way I could think of to do this. I'd appreciate someone more in-the-know finding a better way.
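For context, this is roughly what reading that internal attribute looks like; a minimal sketch, not the patch itself, and the helper names are made up here. It only works with the pure-Python RLock of Python 2-era threading (which yas3fs targeted at the time); Python 3's default C implementation exposes no owner attribute.

```python
import threading

try:
    from thread import get_ident      # Python 2
except ImportError:
    from threading import get_ident   # Python 3

def held_by_current_thread(rlock):
    # Best-effort check that the calling thread already owns `rlock`.
    # The pure-Python RLock keeps its owner in the name-mangled attribute
    # _RLock__owner (a thread ident or, in older versions, a Thread object).
    owner = getattr(rlock, '_RLock__owner', None)
    return owner == get_ident() or owner is threading.current_thread()

def held_by_another_thread(rlock):
    # True when some other thread currently owns `rlock`.
    owner = getattr(rlock, '_RLock__owner', None)
    return owner is not None and not held_by_current_thread(rlock)
```

A less invasive alternative might be to track lock ownership explicitly alongside each lock rather than peeking at threading internals, at the cost of touching more call sites.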