Download collected data #62
Conversation
Force-pushed from 2f1a783 to fc88a80
Hmm, flake8 complains about
Great work, except for the issue on line 270 in api.py
Force-pushed from 3f13b1d to d0631d6
Thank you very much for the PR.
It is really good overall, but I pointed out some details worth fixing.
Also, please note there is a merge conflict in
Sure, I will add it in a separate PR :-)
I thought it was recommended to use
Very good points, Jirka, and valid concerns. At this stage of our prototype we need to balance between production and prototyping concerns. The best thing to do now is to make sure we track all of this debt in our backlog. Once this is merged, please open 2 separate issues for both of these.
Force-pushed from d0631d6 to 3b38fa9
Thanks for the reviews! PR updated, PTAL.
Yes, it's one of the recommended usages, but since gunicorn is not an asynchronous server, we couldn't use it in the backend code even if we wanted to (async support is one of the features of uvicorn and FastAPI) - but right now it doesn't matter, since we don't use it. And since we will probably use nginx in prod (#61), we can freely switch to uvicorn when needed; uvicorn + nginx is one of the recommended setups for prod.
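For reference (not part of this PR), the way the uvicorn docs describe combining the two in production is to run gunicorn as the process manager with uvicorn worker processes. A sketch, where the module path `app.main:app` is a hypothetical placeholder for the real FastAPI app:

```shell
# gunicorn supervises the processes; the uvicorn worker class provides
# the async server. "app.main:app" is a placeholder module path.
gunicorn app.main:app \
    --workers 4 \
    --worker-class uvicorn.workers.UvicornWorker \
    --bind 0.0.0.0:8000
```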
+1. I somehow figured out how to do the multi-container solution, so only the gunicorn concern remains - but switching to nginx (#61) shouldn't be hard to do.
Force-pushed from 5e337f0 to 43c9189
I spent most of my day today trying to deploy this, and unfortunately it can't work with the current design in the OpenShift environment we got :/
This can't be configured: So... what can we do?
I would argue for
Force-pushed from 23e94f5 to 4105ed2
Force-pushed from 4105ed2 to 004ca02
This solution raises two concerns for the future once we have a lot of data collected:

- It will be a while until the data is tarred, creating a delay before the actual download (after cca 200 feedbacks, a really noticeable delay :/). Look at this example with only 300 (tiny!!) feedbacks:

      [root@backend persistent]# time tar -zcf results.tar.gz results/
      real    0m9.473s
      user    0m9.328s
      sys     0m0.483s

- Downloading also takes some time -> thus blocking the whole worker during this. IIRC we have 8 workers, thus 8 downloads and the API is unresponsive.

Solution:
- How to solve the delay before download: fedora-copr#64
- The issue above is the slowest; once that is resolved and people still complain, do this: fedora-copr#65
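The blocking concern is easy to see in miniature. A minimal sketch (hypothetical paths and function names, not the PR's actual handler code) of what the download path effectively does, plus the "build the archive off the request path" direction that #64 suggests:

```python
import tarfile
import tempfile
import threading
from pathlib import Path

def create_archive(src_dir: Path, dest: Path) -> Path:
    """Tar and gzip src_dir. With thousands of feedback files this takes
    seconds, so calling it inside a request handler blocks that worker."""
    with tarfile.open(dest, "w:gz") as tar:
        tar.add(src_dir, arcname=src_dir.name)
    return dest

def create_archive_in_background(src_dir: Path, dest: Path) -> threading.Thread:
    """Direction suggested in #64: build the archive outside the
    request/response cycle so the worker stays responsive."""
    t = threading.Thread(target=create_archive, args=(src_dir, dest))
    t.start()
    return t

# Demo with a tiny fake "results" directory (hypothetical layout).
with tempfile.TemporaryDirectory() as tmp:
    results = Path(tmp) / "results"
    results.mkdir()
    (results / "feedback-1.json").write_text('{"ok": true}')
    archive = Path(tmp) / "results.tar.gz"
    thread = create_archive_in_background(results, archive)
    thread.join()  # a real handler would return immediately instead
    print(archive.exists())  # → True
```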
Force-pushed from 004ca02 to b313655
LGTM, let's deploy and test it out :)
Makefile
Outdated
@@ -1,6 +1,9 @@
CONTAINER_ENGINE ?= $(shell command -v podman 2> /dev/null || echo docker)
This doesn't look right to me. At least, the default must be docker-compose, not docker.
[jkadlcik@zeratul log-detective-website]$ make build-prod
/usr/bin/podman -f docker-compose.prod.yaml build --no-cache
Error: unknown shorthand flag: 'f' in -f
make: *** [Makefile:5: build-prod] Error 125
This works though
CONTAINER_ENGINE=docker-compose make build-prod
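The `command -v ... || echo` fallback pattern from the Makefile can be checked on its own in a shell. A sketch, where `no-such-engine-xyz` is a made-up name standing in for a binary that is not installed:

```shell
# command -v fails (prints nothing) for a missing binary, so the || branch
# supplies the fallback value - the same trick the Makefile uses.
ENGINE=$(command -v no-such-engine-xyz 2> /dev/null || echo docker-compose)
echo "$ENGINE"   # → docker-compose
```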
You are right, I forgot to add -compose to it, thanks for noticing :)
Fixed - could you please try whether it works with docker-compose for you?
- ... creating env files (TODO: environment variables are still hardcoded in some places)
- moving useful tools from the base image to the result image in production, to have these tools available in the terminal in OpenShift
- adjusting the OpenShift deployment documentation
Force-pushed from b313655 to e3133c3
Already deployed at https://log-detective.com/ :)
Fixes #49
This solution raises two concerns for the future once we have a lot of data collected:

- It will be a while until the data is tarred, creating a delay before the actual download (after cca 200 feedbacks, a really noticeable delay :/). Look at the example with only 300 (tiny!!) feedbacks:

      [root@backend persistent]# time tar -zcf results.tar.gz results/
      real    0m9.473s
      user    0m9.328s
      sys     0m0.483s

- Downloading also takes some time -> thus blocking the whole worker during this. IIRC we have 8 workers, thus 8 downloads and the API is unresponsive.

Solution:
- How to solve the delay before download: Data tarring before downloads will be really slow #64
- The issue above is the slowest; once that is resolved and people still complain, do this: Downloads may block API #65
This also creates a good base for resolving #33 - the only thing needed is to periodically download the data somewhere.
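One way the periodic download for #33 could look, as a minimal sketch: the endpoint path `/api/download` and the `/backups` destination are assumptions for illustration, not the real API route.

```shell
# Hypothetical crontab entry: fetch the archive daily at 03:00.
# "/api/download" and "/backups" are placeholders; % must be escaped in cron.
0 3 * * * curl -fsSL https://log-detective.com/api/download -o /backups/results-$(date +\%F).tar.gz
```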