
Create a reverse proxy server #250

Closed
twifkak opened this issue May 6, 2022 · 11 comments · Fixed by #359

twifkak commented May 6, 2022

Create a reverse proxy server in Rust, similar to Web Packager Server, as a wrapper around the sxg_rs library. This could be run as a typical server or as a service on Google Cloud Run (see docs).

twifkak commented May 17, 2022

Of the available libraries for implementing a reverse proxy, hyper-reverse-proxy seems like the most correct (it removes hop-by-hop headers, including those named in Connection, etc.). It's built on top of hyper, so it supports a bunch of things (notably, streamed responses).
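
For reference, a minimal sketch of what that hop-by-hop stripping involves (not hyper-reverse-proxy's actual code; the fixed header list is the RFC 7230 §6.1 set):

```rust
use http::header::{HeaderMap, CONNECTION};

// The fixed hop-by-hop set from RFC 7230 §6.1.
const HOP_BY_HOP: &[&str] = &[
    "connection", "keep-alive", "proxy-authenticate", "proxy-authorization",
    "te", "trailer", "transfer-encoding", "upgrade",
];

fn strip_hop_by_hop(headers: &mut HeaderMap) {
    // Headers named in the Connection header are also hop-by-hop.
    let listed: Vec<String> = headers
        .get_all(CONNECTION)
        .iter()
        .filter_map(|v| v.to_str().ok())
        .flat_map(|v| v.split(','))
        .map(|s| s.trim().to_ascii_lowercase())
        .collect();
    for name in HOP_BY_HOP.iter().copied().chain(listed.iter().map(String::as_str)) {
        headers.remove(name);
    }
}
```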

We should also set X-Forwarded-Host, Forwarded: host=, and Via on outgoing requests; that lib doesn't do this.
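
Something like this hypothetical helper (the function name and the "sxg-rs" Via pseudonym are made up for illustration):

```rust
use http::{HeaderValue, Request};

fn add_forwarding_headers<B>(req: &mut Request<B>, host: &str) {
    let headers = req.headers_mut();
    if let Ok(v) = HeaderValue::from_str(host) {
        headers.insert("x-forwarded-host", v);
    }
    // A production proxy should append to an existing Forwarded list
    // rather than overwrite it.
    if let Ok(v) = HeaderValue::from_str(&format!("host={host}")) {
        headers.insert("forwarded", v);
    }
    // Via: received-protocol + pseudonym (RFC 7230 §5.7.1).
    headers.insert("via", HeaderValue::from_static("1.1 sxg-rs"));
}
```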

We'll need some storage/sync mechanism for ACME & OCSP.

We could use routerify for the non-proxied URLs (cert-chain+cbor and validity), but we'd need to test thoroughly that it avoids the inner URL != outer URL bugs that plagued several Go routers. Alternatively, we could build our own simple router using something like RegexSet or trie_rs.
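
A minimal sketch of the RegexSet approach, with anchored patterns so the whole path must match (avoiding prefix confusion); the paths here are illustrative, not the actual endpoints:

```rust
use regex::RegexSet;

enum Route {
    CertChain,
    Validity,
    Proxy,
}

fn route(path: &str, routes: &RegexSet) -> Route {
    // RegexSet::matches returns the indices of all matching patterns.
    match routes.matches(path).iter().next() {
        Some(0) => Route::CertChain,
        Some(1) => Route::Validity,
        _ => Route::Proxy,
    }
}

fn main() {
    let routes = RegexSet::new([
        r"^/\.sxg/cert\.cbor$",
        r"^/\.sxg/validity$",
    ])
    .unwrap();
    assert!(matches!(route("/.sxg/cert.cbor", &routes), Route::CertChain));
    assert!(matches!(route("/index.html", &routes), Route::Proxy));
}
```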

Alternatives considered include:

twifkak self-assigned this May 17, 2022

twifkak commented May 19, 2022

Another server exists in playground/, but it would take more effort to productionize (e.g., adding all of the above plus the features in hyper-reverse-proxy), and it would add Node as a dependency. It would be nice to have something that runs in Rust only.

twifkak commented May 19, 2022

To start with, I'll implement it with file storage, for the basic "run a server on a VM" case.
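
Roughly this shape, assuming tokio and a hypothetical FileStorage type (the storage trait it would implement is elided):

```rust
use std::path::PathBuf;
use tokio::fs;

// File-backed storage for ACME/OCSP state; key-to-file mapping is illustrative.
struct FileStorage {
    dir: PathBuf,
}

impl FileStorage {
    async fn read(&self, key: &str) -> std::io::Result<Option<Vec<u8>>> {
        match fs::read(self.dir.join(key)).await {
            Ok(bytes) => Ok(Some(bytes)),
            Err(e) if e.kind() == std::io::ErrorKind::NotFound => Ok(None),
            Err(e) => Err(e),
        }
    }

    async fn write(&self, key: &str, value: &[u8]) -> std::io::Result<()> {
        fs::write(self.dir.join(key), value).await
    }
}
```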

I split off #276 for Google Cloud Run support.

twifkak commented May 24, 2022

@antiphoton @quangIO It appears hyper requires handler futures to implement Send for multithreading (see details). SxgWorker's async fns do not return Send futures, for a variety of reasons:

  1. js_sys doesn't implement Send.
  2. dyn doesn't implement Send unless specified (dyn Foo + Send) -- which would break js_sys support.
  3. Probably more reasons.

Options to solve this:

  1. block_on() SxgWorker fns. Presumably this eliminates parallelism -- perhaps that's OK for computations that aren't I/O bound?
  2. Remove dyn from the code base; use type parameters instead. Simplest conceptually, but adds line noise and brittleness -- whenever we add a new type parameter, we have to update all callers.
  3. Change types like Runtime to traits with Send and non-Send impls. Least line noise, but a bit of duplication.

I'm leaning towards 3 where possible, falling back to 2 for less pervasive types (see the sketch of 2 below). However, I'm not an expert in Rust and would appreciate your opinions. It seems @quangIO was considering 2 in #278; what are the implications for WasmWorker?
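
For concreteness, a sketch of option 2; the type and trait names are illustrative stand-ins, not the actual sxg-rs types:

```rust
use async_trait::async_trait;

// async_trait's default (non-?Send) form requires the returned future
// to be Send.
#[async_trait]
trait Fetcher {
    async fn fetch(&self, url: &str) -> Result<Vec<u8>, String>;
}

struct NoopFetcher;

#[async_trait]
impl Fetcher for NoopFetcher {
    async fn fetch(&self, _url: &str) -> Result<Vec<u8>, String> {
        Ok(Vec::new())
    }
}

// Generic instead of Box<dyn Fetcher>: when F is Send, the handler future
// is Send and tokio may move the task between threads.
struct Runtime<F: Fetcher> {
    fetcher: F,
}

#[tokio::main]
async fn main() {
    let rt = Runtime { fetcher: NoopFetcher };
    // Compiles because NoopFetcher (and hence the spawned future) is Send.
    let body = tokio::spawn(async move { rt.fetcher.fetch("/").await })
        .await
        .unwrap()
        .unwrap();
    assert!(body.is_empty());
}
```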

antiphoton commented May 24, 2022

js_sys::Function does not implement Send; hence Fetcher, Signer, and Storage do not implement Send; hence struct Runtime does not implement Send.
Box<dyn Fetcher>, Arc<dyn Fetcher>, and Arc<F> (for F: Fetcher) can't implement Send as long as Fetcher does not implement Send.

I would suggest constructing Runtime inside hyper's handler function, so that the Runtime no longer needs to be sent between threads.

twifkak commented May 24, 2022

> I would suggest constructing Runtime inside hyper's handler function, so that the Runtime no longer needs to be sent between threads.

I'm already attempting to construct it inside the closure passed to service_fn, and that's resulting in the "future is not Send" error [1]. I'm not positive, but I believe the multithreaded runtime needs each individual task to implement Send because it reserves the right to reschedule it on a different thread; see the Tokio docs.

[1] In particular, it's not the call to service_fn but to server::Builder::serve that triggers the error.
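
For context, the shape of the code in question (hyper 0.14-style API; the handler body is a stand-in):

```rust
use std::convert::Infallible;
use hyper::service::{make_service_fn, service_fn};
use hyper::{Body, Request, Response, Server};

async fn handle(_req: Request<Body>) -> Result<Response<Body>, Infallible> {
    // Hypothetically construct the per-request Runtime here; if it contains
    // a !Send type (e.g. js_sys::Function), this future becomes !Send.
    Ok(Response::new(Body::from("ok")))
}

#[tokio::main]
async fn main() {
    let make_svc = make_service_fn(|_conn| async {
        Ok::<_, Infallible>(service_fn(handle))
    });
    let addr: std::net::SocketAddr = ([127, 0, 0, 1], 3000).into();
    // serve() is where "future is not Send" surfaces: the multithreaded
    // executor may move each task between threads.
    Server::bind(&addr).serve(make_svc).await.unwrap();
}
```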

twifkak commented May 26, 2022

I'll give it another day of effort, but if that fails, I may just try spawn_blocking and test that it can handle concurrent requests. I note that warp calls the handler synchronously and warp probably knows what it's doing.
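
Roughly what I have in mind (a hypothetical wrapper; futures::executor::block_on stands in for whatever drives the !Send future on the blocking thread):

```rust
use tokio::task;

// Run the !Send SxgWorker computation on a blocking thread; futures'
// single-threaded executor can drive a !Send future to completion there,
// at the cost of parallelism within this task.
async fn sign_on_blocking_thread(payload: Vec<u8>) -> Vec<u8> {
    task::spawn_blocking(move || {
        futures::executor::block_on(async move {
            // Stand-in for the actual !Send signing work.
            payload
        })
    })
    .await
    .expect("blocking task panicked")
}
```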

I found myself wishing for some language features:

  • Constraint kinds so I didn't have to duplicate code between RuntimeSend and RuntimeNoSend impls.
  • Type-changing struct update, since our tests use struct update syntax quite heavily, but that won't work if we get rid of dyn.

antiphoton commented May 31, 2022

In WebAssembly applications with JsValue, Runtime is !Send; in pure Rust applications, Runtime is Send.

We may consider using conditional compilation in Rust (like predefined macros in C++) and declare Runtime, Fetcher, and Signer as !Send only when the wasm feature is enabled in sxg-rs (like this).
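
i.e., something like this sketch (the trait name is illustrative):

```rust
use async_trait::async_trait;

// The trait's futures are (?Send) only under the wasm feature; otherwise
// the default async_trait form requires them to be Send.
#[cfg_attr(feature = "wasm", async_trait(?Send))]
#[cfg_attr(not(feature = "wasm"), async_trait)]
pub trait Fetcher {
    async fn fetch(&self, url: &str) -> Result<Vec<u8>, String>;
}
```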

twifkak commented May 31, 2022

Oh, really good idea.

twifkak commented Jun 22, 2022

TODO: Refactor this pattern from #314 into a macro in utils:

```rust
#[cfg_attr(feature = "wasm", async_trait(?Send))]
#[cfg_attr(not(feature = "wasm"), async_trait)]
```
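
A hypothetical macro_rules! wrapper for that pattern (an attribute macro, by contrast, would require a separate proc-macro crate):

```rust
use async_trait::async_trait;

// Wraps an item with the feature-gated async_trait attributes above.
macro_rules! maybe_send_async_trait {
    ($item:item) => {
        #[cfg_attr(feature = "wasm", async_trait(?Send))]
        #[cfg_attr(not(feature = "wasm"), async_trait)]
        $item
    };
}

maybe_send_async_trait! {
    pub trait Fetcher {
        async fn fetch(&self, url: &str) -> Result<Vec<u8>, String>;
    }
}
```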

twifkak commented Aug 12, 2022

Abandoning the attribute macro idea, as the maintenance overhead is not worth the benefit. It would require an additional proc-macro crate.
