I'm running a Matrix server (Synapse) with Caddy in front of it as a reverse proxy. Synapse is single-threaded, and the only way to handle requests on multiple cores is to run several worker processes and use a reverse proxy to load balance between them.
In some cases Synapse's documentation recommends choosing the worker based on the user who made the request or on the room ID that the request relates to. This improves cache efficiency because all requests that relate to the same user or room go to the same worker, and workers don't share caches. The reverse proxy has to extract the room ID/username from an HTTP header or from the URI path and pick a worker based on that. The documentation provides an example of how to do this in nginx: https://element-hq.github.io/synapse/latest/workers.html#load-balancing
As far as I can tell this setup is not currently possible in Caddy. For example, the nginx configuration in the Synapse documentation extracts the username of the user making the request from an access token passed in an HTTP header. Caddy's `header Authorization` load balancing policy does not do the same thing: the same user can have several different access tokens if they're logged in on multiple devices, so requests coming from the same user on two different devices could be routed to two different upstreams.
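For reference, the closest thing I can configure today looks roughly like this (the worker addresses are made up for illustration):

```
reverse_proxy 127.0.0.1:8083 127.0.0.1:8084 {
	# Hashes the whole Authorization header value. Two devices of the same
	# user carry two different access tokens, so their requests can land
	# on different upstreams even though they belong to the same user.
	lb_policy header Authorization
}
```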
Same thing with balancing by room ID. In the Matrix API the room ID is part of the URI path (for example https://spec.matrix.org/v1.13/server-server-api/#backfilling-and-retrieving-missing-events), and there is currently no way to route all requests for the same room to the same upstream. Using `uri_hash` as the load balancing policy doesn't work because it hashes not just the room ID but the rest of the URI as well, so two requests that reference the same room through two different endpoints could be routed to two different upstreams (see the example requests below).
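For instance, these two federation requests both refer to the same room but hash to different values under `uri_hash` (the room ID is a made-up example):

```
GET  /_matrix/federation/v1/backfill/!abcdef:example.org?limit=50&v=...
POST /_matrix/federation/v1/get_missing_events/!abcdef:example.org
```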
One way to solve this would be to implement a load balancing policy that uses a regular expression to extract a key from the URI or from a header and then uses that key to pick an upstream; a rough sketch of what that could look like is below.
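A minimal sketch of such a policy as a custom Caddy selection module, assuming the `Selector` interface from the `reverseproxy` package. The module name, config fields, and the plain modulo hashing are all made up for illustration, and the Caddyfile adapter is omitted:

```go
package regexhash

import (
	"hash/fnv"
	"net/http"
	"regexp"

	"github.com/caddyserver/caddy/v2"
	"github.com/caddyserver/caddy/v2/modules/caddyhttp/reverseproxy"
)

func init() {
	caddy.RegisterModule(RegexHashSelection{})
}

// RegexHashSelection picks an upstream by hashing the first capture group
// of Pattern applied to either a header value or the request URI.
type RegexHashSelection struct {
	Header  string `json:"header,omitempty"`  // header to match; if empty, the URI is used
	Pattern string `json:"pattern,omitempty"` // regexp with one capture group

	re *regexp.Regexp
}

// CaddyModule returns the Caddy module information.
func (RegexHashSelection) CaddyModule() caddy.ModuleInfo {
	return caddy.ModuleInfo{
		ID:  "http.reverse_proxy.selection_policies.regex_hash",
		New: func() caddy.Module { return new(RegexHashSelection) },
	}
}

// Provision compiles the configured regular expression.
func (r *RegexHashSelection) Provision(ctx caddy.Context) error {
	re, err := regexp.Compile(r.Pattern)
	if err != nil {
		return err
	}
	r.re = re
	return nil
}

// Select hashes the extracted key so that all requests carrying the same
// key land on the same upstream, as long as that upstream is available.
func (r RegexHashSelection) Select(pool reverseproxy.UpstreamPool, req *http.Request, _ http.ResponseWriter) *reverseproxy.Upstream {
	source := req.URL.RequestURI()
	if r.Header != "" {
		source = req.Header.Get(r.Header)
	}

	key := source
	if m := r.re.FindStringSubmatch(source); len(m) > 1 {
		key = m[1] // e.g. the room ID, or the user part of the access token
	}

	n := len(pool)
	if n == 0 {
		return nil
	}

	// Simple modulo hash for illustration; walk the pool from the hashed
	// position until an available upstream is found.
	h := fnv.New32a()
	h.Write([]byte(key))
	start := int(h.Sum32() % uint32(n))
	for i := 0; i < n; i++ {
		upstream := pool[(start+i)%n]
		if upstream.Available() {
			return upstream
		}
	}
	return nil
}

// Interface guards (assuming these interfaces from caddy and reverseproxy).
var (
	_ caddy.Provisioner     = (*RegexHashSelection)(nil)
	_ reverseproxy.Selector = (*RegexHashSelection)(nil)
)
```

With something like this compiled in (e.g. via xcaddy), the regex could capture the room ID from the path or the user part of the access token from the Authorization header, and all requests carrying the same key would stick to the same worker while it's available.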