The other bit that might be worth mentioning is the advantage of a door vs., say, a Unix socket: during a door call, the scheduler directly transfers control to the server thread, so scheduler overhead is minimal compared to other forms of IPC. There's a bit more to it, but probably a bit much for a tutorial.
It would be good to nail down the language as precisely as possible here. Is this the same as bypassing the scheduler? Do we avoid a context switch? Are we guaranteed that the server thread shares the same CPU time slice as the client thread? Is it appropriate to make a comparison to cooperative scheduling, or has that got different implications?
Or, instead of explaining it (which is above my head for the moment anyhow), it might be suitable to just claim that doors are faster than other forms of IPC and either cite the comparison given in the Stevens book or whip up an alternate client & server using sockets, and do the same data transfer comparison on that.
It's probably easier to just say it's faster, especially for small (in terms of CPU usage) requests.
It doesn't bypass the scheduler (that'd be bad -- a nefarious process could hog some or all of the available CPUs if it did). ISTR the net effect is that the client's remaining time slice is effectively loaned to the server thread, so either thread can still go off-CPU if the time slice is exceeded.
Normally, an RPC request over, say, a pipe or localhost socket means the client gets put on a wait queue: it has to wait until the scheduler decides to run the server process, and then wait again until the scheduler decides to run the client thread (which could be a while on a busy system). With a door, the scheduler immediately switches to the server thread and then back to the client, skipping all that queueing (so you could say it takes a shortcut through the scheduler), though still subject to time-slice limits.
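To make the handoff concrete, here is a hedged sketch of the server side using the doors API on Solaris/illumos (the door path `/tmp/echo_door` and the echo behavior are invented for illustration; this won't build on other platforms):

```c
/* Solaris/illumos only: sketch of an echo server exposed as a door. */
#include <door.h>
#include <fcntl.h>
#include <stropts.h>	/* fattach() */
#include <unistd.h>

#define	DOOR_PATH	"/tmp/echo_door"	/* hypothetical rendezvous file */

/*
 * Runs on a thread from the door server pool; this is the thread the
 * client's door_call() is handed off to.
 */
static void
echo_proc(void *cookie, char *argp, size_t arg_size,
    door_desc_t *dp, uint_t n_desc)
{
	/* door_return() does not return: control goes straight back to the client. */
	(void) door_return(argp, arg_size, NULL, 0);
}

int
main(void)
{
	int did = door_create(echo_proc, NULL, 0);

	/* Create a rendezvous file and attach the door to it. */
	(void) close(open(DOOR_PATH, O_CREAT | O_RDWR, 0644));
	(void) fattach(did, DOOR_PATH);

	(void) pause();		/* door calls are serviced by pool threads */
	return (0);
}
```

A client would `open(DOOR_PATH, O_RDONLY)`, fill in a `door_arg_t` with its request and reply buffers, and issue `door_call(fd, &arg)`; that `door_call()` is where the direct client-to-server thread handoff described above happens.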
As u/jking13 puts it:
> It would be good to nail down the language as precisely as possible here. Is this the same as bypassing the scheduler? Do we avoid a context switch? Are we guaranteed that the server thread shares the same CPU time slice as the client thread? Is it appropriate to make a comparison to cooperative scheduling, or has that got different implications?