.. index:: deployment, docker, docker-compose, compose
- Docker 1.10+
- Docker Compose 1.6+
Before you begin, check out the ``production.yml`` file in the root of this project. Keep note of how it provides configuration for the following services (an abridged sketch follows below):

django
    your application running behind Gunicorn;

postgres
    PostgreSQL database with the application's relational data;

redis
    Redis instance for caching;

caddy
    Caddy web server with HTTPS on by default.
Provided you have opted for Celery (via setting ``use_celery`` to ``y``), there are two more services:

celeryworker
    running a Celery worker process;

celerybeat
    running a Celery beat process.
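For orientation, here is what such a layout can look like. This is only a hedged sketch: the build context, env file path, and images are assumptions, and the file generated for your project will differ in detail::

    version: '2'

    volumes:
      postgres_data: {}

    services:
      django:
        build:
          context: .
          dockerfile: ./compose/production/django/Dockerfile  # assumed path
        depends_on:
          - postgres
          - redis
        env_file:
          - ./.envs/.production/.django

      postgres:
        image: postgres
        volumes:
          - postgres_data:/var/lib/postgresql/data

      redis:
        image: redis

      # celeryworker and celerybeat are added when Celery is enabled
      caddy:
        image: caddy  # assumed; the generated project may build its own image
        depends_on:
          - django
        ports:
          - "0.0.0.0:80:80"
          - "0.0.0.0:443:443"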
The majority of services above are configured through the use of environment variables. Just check out :ref:`envs` and you will know the drill.
To obtain logs and information about crashes in a production setup, make sure that you have access to an external Sentry instance (e.g. by creating an account with sentry.io), and set the ``SENTRY_DSN`` variable.
You will probably also need to set up the mail backend, for example by adding a Mailgun API key and a Mailgun sender domain; otherwise, the account creation view will crash and result in a 500 error when the backend attempts to send an email to the account owner.
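As an illustration, the relevant entries in ``.envs/.production/.django`` might look like the following. The variable names are assumptions based on a typical setup and the values are placeholders, so check them against your own settings module::

    # Placeholder DSN -- use the one shown in your Sentry project settings
    SENTRY_DSN=https://<key>@sentry.io/<project-id>

    # Assumed Mailgun variable names; values are placeholders
    MAILGUN_API_KEY=key-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    MAILGUN_DOMAIN=mg.example.com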
If you are deploying to AWS, you can use an IAM role in place of AWS credentials, after which it's safe to remove ``AWS_ACCESS_KEY_ID`` and ``AWS_SECRET_ACCESS_KEY`` from ``.envs/.production/.django``. To do it, create an IAM role and attach it to the existing EC2 instance, or create a new EC2 instance with that role. The role should have, at minimum, the ``AmazonS3FullAccess`` policy attached.
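One possible way to do this with the AWS CLI is sketched below; the role name, profile name, instance ID, and the trust policy file are all placeholders you would substitute with your own::

    # Create a role that EC2 instances may assume
    # (ec2-trust-policy.json must allow ec2.amazonaws.com as principal)
    aws iam create-role --role-name my-app-s3-role \
        --assume-role-policy-document file://ec2-trust-policy.json

    # Grant the role full S3 access
    aws iam attach-role-policy --role-name my-app-s3-role \
        --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess

    # Wrap the role in an instance profile and attach it to the instance
    aws iam create-instance-profile --instance-profile-name my-app-s3-profile
    aws iam add-role-to-instance-profile --instance-profile-name my-app-s3-profile \
        --role-name my-app-s3-role
    aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 \
        --iam-instance-profile Name=my-app-s3-profile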
SSL (Secure Sockets Layer) is a standard security technology for establishing an encrypted link between a server and a client, typically, in this case, a web server (website) and a browser. Not having HTTPS means that malicious network users can sniff authentication credentials between your website and end users' browsers.
It is always better to deploy a site behind HTTPS, and this will only become more crucial as web services extend to the IoT (Internet of Things). For this reason, we have set up a number of security defaults to help make your website secure:
- If you are not using a subdomain of the domain name set in the project, then remember to put your staging/production IP address in the ``DJANGO_ALLOWED_HOSTS`` environment variable (see :ref:`settings`) before you deploy your website; an example entry is sketched after this list. Failure to do this will mean you will not have access to your website through the HTTP protocol.

- Access to the Django admin is set up by default to require HTTPS in production or once live.
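For instance, the entry in ``.envs/.production/.django`` might read as follows; the domain and the IP address are placeholders::

    # Comma-separated; replace with your own domain and server IP
    DJANGO_ALLOWED_HOSTS=.example.com,203.0.113.10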
The Caddy web server used in the default configuration will get you a valid certificate from Let's Encrypt and renew it automatically. All you need to do to enable this is to make sure that your DNS records are pointing to the server Caddy runs on.
You can read more about this under Automatic HTTPS in the Caddy docs.
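To sanity-check the DNS side before going live, you can query your domain and compare the answer with the server's public IP; ``example.com`` is a placeholder here::

    # Both should print the public IP of the server Caddy runs on
    dig +short example.com
    dig +short www.example.com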
Postgres saves its database files to the ``postgres_data`` volume by default. Change that if you want something else, and make sure to take backups, since this is not done automatically.
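A minimal manual backup sketch, assuming the default ``postgres`` service and that ``POSTGRES_USER`` and ``POSTGRES_DB`` are set inside the container (as they are with the official image)::

    # -T disables TTY allocation so the dump can be redirected cleanly
    docker-compose -f production.yml exec -T postgres \
        sh -c 'pg_dump -U "$POSTGRES_USER" "$POSTGRES_DB"' > backup.sql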
You will need to build the stack first. To do that, run::

    docker-compose -f production.yml build

Once this is ready, you can run it with::

    docker-compose -f production.yml up

To run the stack and detach the containers, run::

    docker-compose -f production.yml up -d

To run a migration, open up a second terminal and run::

    docker-compose -f production.yml run --rm django python manage.py migrate

To create a superuser, run::

    docker-compose -f production.yml run --rm django python manage.py createsuperuser

If you need a shell, run::

    docker-compose -f production.yml run --rm django python manage.py shell

To check the logs out, run::

    docker-compose -f production.yml logs
If you want to scale your application, run::

    docker-compose -f production.yml scale django=4
    docker-compose -f production.yml scale celeryworker=2
.. warning::

    Don't try to scale ``postgres``, ``celerybeat``, or ``caddy``.
To see how your containers are doing, run::

    docker-compose -f production.yml ps
Once you are ready with your initial setup, you want to make sure that your application is run by a process manager, so that it survives reboots and restarts automatically in case of an error. You can use the process manager you are most familiar with. All it needs to do is run ``docker-compose -f production.yml up`` in your project's root directory.
If you are using ``supervisor``, you can use this file as a starting point::

    [program:{{cookiecutter.project_slug}}]
    command=docker-compose -f production.yml up
    directory=/path/to/{{cookiecutter.project_slug}}
    redirect_stderr=true
    autostart=true
    autorestart=true
    priority=10
Move it to ``/etc/supervisor/conf.d/{{cookiecutter.project_slug}}.conf`` and run::

    supervisorctl reread
    supervisorctl start {{cookiecutter.project_slug}}
For a status check, run::

    supervisorctl status