Process-Level Locking and Connection Management in redb for Multi-Process Access #932
Hi @cberner,

First of all, great work on redb! I am currently using it in a project that requires concurrent database access from multiple processes. I understand that redb supports multi-threaded access within a single process through its MVCC architecture. However, I have encountered challenges when extending this access model across multiple processes, as redb does not inherently provide process-level locking mechanisms and, as noted in previous discussions, there are no plans to include this functionality.

To address this, I implemented an external file-based locking system to manage inter-process synchronization. This involves acquiring a file lock each time a database connection is established and releasing it upon closure. Locking around the connection is essential, so I cannot keep the connection open for long durations, as this would prevent other processes from establishing their own connections.

My primary concern is that this approach may result in frequent disk I/O operations, which could negatively impact performance. Am I missing something, or is there a more efficient way to manage this scenario? Your suggestions and guidance would be highly appreciated.
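For concreteness, here is a minimal sketch of the lock-per-connection pattern described above, assuming the `fs2` crate for OS-level advisory file locking. The paths, table definition, and `with_exclusive_db` helper are invented for the example and are not part of redb's API.

```rust
// Minimal sketch of the lock-per-connection pattern described above.
// Assumes the `fs2` crate for advisory file locking; the paths, table
// definition, and helper name are illustrative, not part of redb's API.
use std::fs::OpenOptions;

use fs2::FileExt;
use redb::{Database, ReadableTable, TableDefinition};

const TABLE: TableDefinition<&str, u64> = TableDefinition::new("counters");

fn with_exclusive_db<T>(
    db_path: &str,
    lock_path: &str,
    f: impl FnOnce(&Database) -> Result<T, redb::Error>,
) -> Result<T, Box<dyn std::error::Error>> {
    // Take an OS-level advisory lock before touching the redb file.
    let lock_file = OpenOptions::new()
        .create(true)
        .write(true)
        .open(lock_path)?;
    lock_file.lock_exclusive()?; // blocks until no other process holds it

    // Open, use, and drop the database while the lock is held.
    let db = Database::create(db_path)?;
    let result = f(&db)?;
    drop(db);

    // Explicitly release the lock (it is also released when lock_file drops).
    fs2::FileExt::unlock(&lock_file)?;
    Ok(result)
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let hits = with_exclusive_db("shared.redb", "shared.redb.lock", |db| {
        let txn = db.begin_write()?;
        let count = {
            let mut table = txn.open_table(TABLE)?;
            let count = table.get("hits")?.map(|v| v.value()).unwrap_or(0) + 1;
            table.insert("hits", &count)?;
            count
        };
        txn.commit()?;
        Ok(count)
    })?;
    println!("hits = {hits}");
    Ok(())
}
```

Each call pays for acquiring the lock and re-opening the database, which is the repeated disk I/O cost the question is concerned about.

Comments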
I suggest switching to a database like LMDB that supports multi-process concurrency, or implementing an IPC mechanism where one process opens the redb file and the other processes make requests to the database process. Closing and opening a redb database is not going to be fast.
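For illustration, here is a rough sketch of the second suggestion: a single owner process opens the redb database and serves other processes over a Unix domain socket. The socket path, table, and one-line GET/PUT protocol are assumptions made up for this example, not anything provided by redb itself.

```rust
// Minimal sketch of the "database owner process" idea: one process opens
// the redb file and serves other processes over a Unix domain socket.
use std::io::{BufRead, BufReader, Write};
use std::os::unix::net::{UnixListener, UnixStream};

use redb::{Database, ReadableTable, TableDefinition};

const TABLE: TableDefinition<&str, &str> = TableDefinition::new("kv");

fn handle_client(db: &Database, stream: UnixStream) -> Result<(), Box<dyn std::error::Error>> {
    let mut reader = BufReader::new(stream.try_clone()?);
    let mut writer = stream;
    let mut line = String::new();
    while reader.read_line(&mut line)? > 0 {
        let parts: Vec<&str> = line.trim_end().splitn(3, ' ').collect();
        match parts.as_slice() {
            // "GET <key>" -> value or "<none>"
            ["GET", key] => {
                let txn = db.begin_read()?;
                let table = txn.open_table(TABLE)?;
                let value = table.get(*key)?.map(|v| v.value().to_string());
                writeln!(writer, "{}", value.unwrap_or_else(|| "<none>".into()))?;
            }
            // "PUT <key> <value>" -> "OK"
            ["PUT", key, value] => {
                let txn = db.begin_write()?;
                {
                    let mut table = txn.open_table(TABLE)?;
                    table.insert(*key, *value)?;
                }
                txn.commit()?;
                writeln!(writer, "OK")?;
            }
            _ => writeln!(writer, "ERR unknown command")?,
        }
        line.clear();
    }
    Ok(())
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let db = Database::create("shared.redb")?;

    // Make sure the table exists so read transactions can open it.
    let txn = db.begin_write()?;
    txn.open_table(TABLE)?;
    txn.commit()?;

    let listener = UnixListener::bind("/tmp/redb-owner.sock")?;
    for stream in listener.incoming() {
        // Handled sequentially for brevity; a real server would hand each
        // connection to a worker thread and rely on redb's MVCC internally.
        handle_client(&db, stream?)?;
    }
    Ok(())
}
```

Other processes would connect to the socket and send requests instead of opening the redb file themselves. The trade-off is the extra IPC machinery and a round trip per request, but all redb access stays inside one process, where redb's MVCC already handles concurrency.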
Thank you for your response and suggestions! Switching to a database like LMDB is certainly a viable option, but it defeats the purpose of using a pure Rust-based database, which was one of the main reasons for choosing redb in the first place. Implementing an IPC mechanism is an interesting idea, but it introduces additional complexity and overhead for managing communication between processes.