Dockerfile: fix port bindings #184
Comments
Thanks. This has been an issue for some time, and it's not specific to Docker by the way. I normally just set … Maybe I'll incorporate your workaround, but I'm not sure how solid it is. Do you think we could fix this properly, like providing two separate settings? I don't understand what the first commit (the healthcheck change) has to do with any of this. Can you explain?
We're using a proxy in our environment, so we set the HTTPS_PROXY (etc.) env vars globally in all our Docker containers so that we generally don't have to think about it. However, for the healthchecks we don't want the requests to go through the proxy, so we empty the proxy env vars. I suppose it's mainly an issue with some of our other Docker containers, so it might not be necessary for the Pipeline 2 image. Then again, it probably won't hurt either. The docker-entrypoint.sh method feels slightly like a hack, but it works fine, and I don't know of a better way to configure it. Should there be a PIPELINE2_WS_HOST_INTERNAL or similar that could be set to, for instance, 0.0.0.0, to tell the engine what interface to bind to, while using PIPELINE2_WS_HOST only for the URLs in the API?
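A minimal sketch of the healthcheck workaround described here, assuming curl is available in the image and that the web service answers on the default port 8181 with an /ws/alive endpoint (both assumptions, not confirmed by the thread):

```sh
# Clear the proxy variables for this one command only, so the healthcheck
# request goes directly to the web service inside the container instead of
# being routed through the globally configured proxy.
env HTTP_PROXY= HTTPS_PROXY= http_proxy= https_proxy= \
    curl -f http://localhost:8181/ws/alive || exit 1
```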
I see. Yes, I guess the change might be useful for the Pipeline 2 image. But I still don't really understand why you need it. Why exactly don't you want to use a proxy for the health checks? Sorry, I guess my knowledge about HTTP proxies is just too limited.
Sure, but I'm thinking also about situations without Docker. A proper solution would be preferable because people would not have to do workarounds like this.
Yes, something like that is what I was thinking about. So you think this makes sense? I was just wondering whether I was missing something and there was another obvious solution. Maybe let's call it …
The public domain/port mapping is routed through the web tree (a routing table, as I understand it) of our host provider (the national library), and on to a specific Docker Swarm host and port. By at least using the hostname and port of one of the Docker Swarm nodes, the container won't fail if there's an issue in the web tree for some reason.

Docker Swarm maps the ports of all its nodes to ports on Docker services. When a request is made to the Docker service, it is forwarded to a healthy container belonging to that service, so a container has to already be healthy to receive the request. And in any case, if there were multiple containers belonging to the service (I don't think that would be possible in the case of DP2, though), the request would likely end up in a different container of the same service. So the container needs to reference itself directly for the healthcheck. The hostname of the container is not known until the container is running, so we use … We set …
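A hypothetical docker-entrypoint.sh fragment showing one way the container could reference itself at runtime; the thread does not show the exact commands used, so this is only a sketch:

```sh
#!/bin/sh
# The container's hostname is only known once the container is running,
# so resolve it in the entrypoint and hand it to the engine before
# starting the real command.
PIPELINE2_WS_HOST="$(hostname)"
export PIPELINE2_WS_HOST
exec "$@"
```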
Sure, sounds good. Could we maybe have a separate one for the listen port as well? Just in case it turns out we need to override the hostname for some reason in the future?
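To make the proposal concrete, a hypothetical sketch of the split; PIPELINE2_WS_HOST_INTERNAL comes from the suggestion above, while PIPELINE2_WS_PORT_INTERNAL and the assumption that a PIPELINE2_WS_PORT variable exists alongside PIPELINE2_WS_HOST are placeholders, not settled names:

```sh
# Hypothetical split between what the engine binds to (the *_INTERNAL
# names, placeholders) and what it advertises in the API's href URLs.
export PIPELINE2_WS_HOST_INTERNAL=0.0.0.0        # interface to bind to
export PIPELINE2_WS_PORT_INTERNAL=8181           # port to listen on
export PIPELINE2_WS_HOST=pipeline.example.org    # hostname used in href URLs
export PIPELINE2_WS_PORT=443                     # port used in href URLs
```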
OK, updated the issue: daisy/pipeline-framework#153. Thanks, I think I now understand the purpose of disabling the proxy, even though I can't follow everything in your explanation. Basically, using a proxy for the health check doesn't make much sense because the service is running within the container itself. That's it, right? But then why are we even using …
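An alternative to emptying the proxy variables entirely would be the conventional NO_PROXY mechanism, assuming the healthcheck targets localhost:

```sh
# Most HTTP clients honour NO_PROXY/no_proxy and skip the proxy for the
# listed hosts, so the healthcheck stays local while other outbound
# requests still go through the proxy.
export NO_PROXY=localhost,127.0.0.1
export no_proxy=localhost,127.0.0.1
```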
It should work with plain old Docker (I use it in the example on http://daisy.github.io/pipeline/Get-Help/User-Guide/Installation/), so maybe your issue is a Swarm thing? (By the way, this other issue, which I forgot about, seems related: daisy/pipeline#542.)
Right.
Yeah, slightly related. Outbound HTTP(S) requests, except those going to the WS (for any reason), should go through a proxy if a proxy is configured. But we don't need that at NLB anymore, and I don't know if anyone else needs it.
… it in href attributes

This makes the href attributes actually usable (note that they are currently not used in the CLI and web UI client). Before, the org.daisy.pipeline.ws.host, org.daisy.pipeline.ws.port and org.daisy.pipeline.ws.path properties, which are used to configure the "listen address" of the web server, were also used in the href attributes, but this doesn't always make sense.

Related issues:
- daisy/pipeline-framework#153
- daisy/pipeline-assembly#184
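For reference, the three properties named in the commit message, written as JVM system properties; the values and the use of JAVA_OPTS are illustrative assumptions, not taken from the commit:

```sh
# Before the change these configured both the listen address and the
# host/port/path placed in href attributes; after it they only
# configure the listen address.
JAVA_OPTS="$JAVA_OPTS \
  -Dorg.daisy.pipeline.ws.host=0.0.0.0 \
  -Dorg.daisy.pipeline.ws.port=8181 \
  -Dorg.daisy.pipeline.ws.path=/ws"
```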
The current setup has issues with port bindings when running in Docker. I don't remember if this is only an issue when running in Docker Swarm (and Docker Compose), or also when running with host networking.
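For context, a sketch of how the conflict shows up with plain Docker; the image name is a placeholder:

```sh
# Publishing the port requires the engine to bind to 0.0.0.0 inside the
# container, but with a single PIPELINE2_WS_HOST setting that value then
# also leaks into the href attributes returned by the API.
docker run --detach \
    --publish 8181:8181 \
    --env PIPELINE2_WS_HOST=0.0.0.0 \
    pipeline2-image   # placeholder image name
```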
See also: nlbdev/nordic-epub3-dtbook-migrator#386
It might be possible to cherry-pick these two commits: