From 180db933bfef8f0d8a90ca7321f71d53f452d4b9 Mon Sep 17 00:00:00 2001
From: Bedapudi Praneeth
Date: Sun, 3 Dec 2023 12:07:52 +0530
Subject: [PATCH] Update README.md

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index ed2b511..63a1902 100644
--- a/README.md
+++ b/README.md
@@ -67,6 +67,6 @@ docker run -it -p8080:8080 fastdeploy_echo_json
 
 ### Where not to use fastDeploy?
 - non cpu/gpu heavy models that are better of running parallely rather than in batch
-- if your predictor calls some external API or uploads to s3 etc
+- if your predictor calls some external API or uploads to s3 etc in a blocking way
 - io heavy non batching use cases (eg: query ES or db for each input)
 - for these cases better to directly do from rest api code (instead of consumer producer mechanism) so that high concurrency can be achieved