Multiprocess manager does not restart worker if there is only 1 #2390
Replies: 2 comments 1 reply
-
Is anyone able to shed some light on this? The FastAPI documentation suggests using a single worker process when running in a container cluster, but this approach is incompatible with also using `--limit-max-requests`. Is this a bug, or is there some other way to accomplish this that I am missing?
-
I think your understanding is on point. In uvicorn's main entry point, inside the `try` block, you have:

```python
if config.should_reload:
    sock = config.bind_socket()
    ChangeReload(config, target=server.run, sockets=[sock]).run()
elif config.workers > 1:
    sock = config.bind_socket()
    Multiprocess(config, target=server.run, sockets=[sock]).run()
else:
    server.run()
```

You are in the `config.workers > 1` case. Now look at `Multiprocess.keep_subprocess_alive`. In this function, you can see that when a child process dies, another process is started:

```python
def keep_subprocess_alive(self) -> None:
    if self.should_exit.is_set():
        return  # parent process is exiting, no need to keep subprocess alive

    for idx, process in enumerate(self.processes):
        if process.is_alive():
            continue

        process.kill()  # process is hung, kill it
        process.join()

        if self.should_exit.is_set():
            return  # pragma: full coverage

        logger.info(f"Child process [{process.pid}] died")
        process = Process(self.config, self.target, self.sockets)
        process.start()
        self.processes[idx] = process
```

Now in the `Server` class, the check against the request limit is:

```python
if self.config.limit_max_requests is not None:
    return self.server_state.total_requests >= self.config.limit_max_requests
```

So from my understanding, `limit_max_requests` is actually respected, in the sense that it kills your child process when the number of requests is exceeded. However, the implementation in `Multiprocess` is such that if a process is killed, another is created; with a single worker there is no `Multiprocess` parent, so nothing restarts the exited server.
-
If I start uvicorn with just 1 worker and set a limit for the maximum number of requests, the application shuts down after that number of requests. If I instead use 2 workers, the workers are restarted after handling the specified number of requests.

I can tell from the logging that when I use 2 workers, a parent process is started (`INFO: Started parent process`). I assume that this parent process is responsible for starting new workers, and that it does not run when using 1 worker. So in a sense I understand what is happening, but I find it quite unintuitive. Is there some reason not to let a solitary worker be restarted?

main.py

Start the application with `uvicorn --limit-max-requests 3 --workers 1 main:app`, send three requests (`curl localhost:8000`), and the entire application shuts down. Start with `uvicorn --limit-max-requests 3 --workers 2 main:app` instead, send three requests, and only the worker that handled them is shut down; a new one is started in its place.

I'm using Python 3.12.1, FastAPI 0.111.0, and uvicorn 0.30.1.
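The main.py attachment itself isn't reproduced in the thread. For anyone trying to replicate this, an app of roughly the following shape should behave the same; a bare ASGI callable is assumed here to avoid the FastAPI dependency, but a one-route FastAPI app works equally well:

```python
# main.py -- minimal stand-in for the app referenced above (assumed, not
# the original attachment). Any ASGI app that answers GET / will do.
async def app(scope, receive, send):
    assert scope["type"] == "http"
    await send({
        "type": "http.response.start",
        "status": 200,
        "headers": [(b"content-type", b"text/plain")],
    })
    await send({"type": "http.response.body", "body": b"ok"})
```

Run it with the same commands as above (`uvicorn --limit-max-requests 3 --workers 1 main:app`) and hit it with `curl localhost:8000`.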