Getting your Django app on the internet is - pardon my language - a pain in the ass.
You need to daemonize your app with something like Supervisor, then you need to set up a production-ready database, then you need to set up a web server using the likes of Nginx or Apache. And when it’s all done, you need to set up files and directories to collect and maintain logs from your app.
You could do all that, or you could use Docker.
Here’s what you’ll learn
In this tutorial you’ll learn to deploy a Django app with Docker and docker-compose. By the end of the tutorial, you’ll have a single command that you can use with *any* Django app to deploy it to a server.
The prerequisites
To complete this tutorial, you will need:
- Familiarity with using the command line.
- Familiarity with Python virtual environments. We will use a virtual environment to isolate the dependencies of our project. To install the python virtual environment module, use pip install virtualenv.
- A basic understanding of YAML files.
Step 0 — Installing Dependencies
To follow this tutorial, you will need both Docker and Docker-Compose on your machine. Follow the steps below depending on your operating system.
Installing on Ubuntu
You can install Docker Engine on your system by using the convenience script provided by Docker.
$ curl -fsSL https://get.docker.com -o get-docker.sh
$ sudo sh get-docker.sh
To install Docker-Compose, first download the binary from the GitHub repo, then apply executable permissions to it.
$ sudo curl -L "https://github.com/docker/compose/releases/download/1.25.5/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
$ sudo chmod +x /usr/local/bin/docker-compose
Installing on a Mac
Docker Desktop for Mac already includes Docker-Compose. To install Docker Desktop on Mac, download the package from here and run the installer.
Note: If the above steps don’t work for you, you can go to this link and find more information depending on your operating system.
Step 1 — Setting Up The Django Project
To follow along, you can create a sample Django project or apply the steps to your existing project. I suggest making a sample app first and then applying the same setup to your existing project, so that you can tweak things to suit your purpose.
Create a new virtual environment.
$ mkdir deployment-project && cd deployment-project
$ virtualenv venv
$ source venv/bin/activate
Install Django and make a new Django project.
$ pip install django
$ django-admin startproject djangoproject
List out the requirements in a requirements.txt file.
$ pip freeze > djangoproject/requirements.txt
In the djangoproject/settings.py file, change ALLOWED_HOSTS to the following:
ALLOWED_HOSTS = ["localhost", "127.0.0.1", "0.0.0.0"]
Doing this allows us to access the Django app from outside the container.
Step 2 — Writing The Dockerfile
The first step to containerising our Django app is making a Dockerfile. In the djangoproject directory, make a new file named Dockerfile and put the following in it:
# Use the official Python image from the Docker Hub
FROM python:3.8.2
# PYTHONUNBUFFERED sends Python output straight to the terminal without buffering.
# PYTHONDONTWRITEBYTECODE stops Python from writing .pyc files (the __pycache__/ directories).
ENV PYTHONUNBUFFERED 1
ENV PYTHONDONTWRITEBYTECODE 1
# Make a new directory to put our code in.
RUN mkdir /code
# Change the working directory.
# Every command after this will be run from the /code directory.
WORKDIR /code
# Copy the requirements.txt file.
COPY ./requirements.txt /code/
# Upgrade pip
RUN pip install --upgrade pip
# Install the requirements.
RUN pip install -r requirements.txt
# Copy the rest of the code.
COPY . /code/
This Dockerfile is used to build the image for your Django app. Each line is a step of the build. We will change it quite a bit later, but for now, it works.
Why you should copy the requirements.txt file before copying the code
While building the image, Docker caches each stage of the build. So if you change your code but the requirements.txt file stays the same, Docker doesn’t have to install all the requirements again the next time it builds the image.
Step 3 — Writing the Docker-Compose File
To use Docker-Compose, we need to make a docker-compose.yaml file. In this file, we will define three different containers as services and connect them with each other. Then we can use a single command to run our containers together. These services are:
- the Django app container,
- the database container and
- the Nginx webserver container.
We will add each of these services to the docker-compose.yaml file one by one, starting with the web app. Make a new docker-compose.yaml file in the deployment-project directory with the following contents:
version: '3'
services:
  web:
    container_name: django
    build: djangoproject/
    command: python manage.py runserver 0.0.0.0:8000
    ports:
      - 8000:8000
Above, we defined a service for the web app. Let’s break it down and see what each option does:
- container_name defines the hostname of the container. This can be used by other containers to refer to this container.
- build takes the path of the Dockerfile directory.
- command takes a command to run in the container after start-up. In this case, we run the Django server on all available interfaces at port 8000.
- ports takes a list of ports to bind to the ports of your host machine. Above, we bind port 8000 of the container to port 8000 of your host. This means that you can access the Django app at localhost:8000 on your machine.
Let’s test if everything works until now. Run the following command.
$ docker-compose up --build
You should see the Django app running inside the container. The --build flag tells Docker Compose to build the image before starting the container.
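If you’d rather check from the command line than a browser, a tiny script like the one below will do it. This is just a convenience sketch and not part of the project; the file name check_server.py is made up.

# check_server.py -- a hypothetical helper, not part of the tutorial project.
# Sends a GET request to the containerised dev server and prints the status code.
from urllib.request import urlopen

with urlopen("http://localhost:8000/", timeout=5) as response:
    print("Django dev server responded with status", response.status)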
Step 4 — Setting Up The Database
Let’s set up the database now. We will be using a PostgreSQL database for our app. To set this up, we will have to complete the following steps:
- Add a database service to the docker-compose file.
- Define environment variables for the PostgreSQL database.
- Configure our Django app to use the database we just created. This includes connecting to the database, making migrations and migrating.
Change the docker-compose.yaml file to look like the following:
version: '3'
services:
  db:
    container_name: postgresdb
    image: postgres:latest
    restart: always
    env_file:
      - project.env
    ports:
      - 5432:5432
    volumes:
      - postgres-data:/var/lib/postgresql/data
  web:
    container_name: django
    build: djangoproject/
    command: python manage.py runserver 0.0.0.0:8000
    env_file:
      - project.env
    ports:
      - 8000:8000
    depends_on:
      - db
volumes:
  postgres-data:
Again, a breakdown of the changes:
- image tells Docker which image to use. In this case, we tell it to pull whichever image is associated with the latest tag for postgres.
- restart: always makes sure the database always restarts when it exits. Other options for this are no, on-failure and unless-stopped.
- env_file is used to put the environment variables in a file and tell Docker to initialize the container with those variables. We will create this file soon.
- volumes is used for data persistence. It would be a shame if our database was empty every time we started it and destroyed its data every time we turned it off. Docker uses volumes to store data that must persist on disk. Above, we define a new volume and tell Docker to save the contents of /var/lib/postgresql/data inside that volume. Note that we have to define volumes in the volumes section at the bottom.
- depends_on tells Docker that the web service depends on the db service.
Now, let’s create the project.env file we mentioned above. Make a new file called project.env in the deployment-project directory with the following contents. We’ll add more to this file later.
POSTGRES_USER=userone
POSTGRES_PASSWORD=secretpassword
POSTGRES_DB=project_db
DATABASE=postgres
DATABASE_HOST=postgresdb
DATABASE_PORT=5432
Next, we will configure the Django app to connect to the database. To connect to the database, Django needs to have a driver installed. For PostgreSQL, that driver is psycopg2.
$ pip install psycopg2-binary
You should install psycopg2-binary instead of psycopg2 if you want to use psycopg2 without installing PostgreSQL on your system. Now, update the requirements.txt file:
$ pip freeze > djangoproject/requirements.txt
Now we will configure the Django settings. First, make a new file called keyconfig.py in the djangoproject/djangoproject directory. We will save credentials and other sensitive information here. The purpose of using a keyconfig instead of directly loading environment variables in settings.py is that you can separate parts of your config using classes. Add the following contents to the keyconfig.py file:
import os

class Database:
    NAME = os.getenv('POSTGRES_DB')
    USER = os.getenv('POSTGRES_USER')
    PASSWORD = os.getenv('POSTGRES_PASSWORD')
    HOST = os.getenv('DATABASE_HOST')
    PORT = os.getenv('DATABASE_PORT')

class Secrets:
    SECRET_KEY = "SuperSecretSecretKey"
Open djangoproject/djangoproject/settings.py and import the keyconfig.py file like this:
...
from djangoproject.keyconfig import Database, Secrets
Now, set the Secret key:
SECRET_KEY = Secrets.SECRET_KEY
And the database:
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": Database.NAME,
        "USER": Database.USER,
        "PASSWORD": Database.PASSWORD,
        "HOST": Database.HOST,
        "PORT": Database.PORT,
    }
}
Now, we want to run makemigrations and migrate every time the container starts up. But the problem is that the database container can take longer to initialise, and if we run these commands before the database is up and running, it can result in an error. So we need a way to make sure the database has started. For this, we use an entrypoint script. In the djangoproject directory, make a new file called entrypoint.sh with the following contents.
#!/bin/sh

if [ "$DATABASE" = "postgres" ]; then
    echo "Waiting for postgres..."
    while ! nc -z $DATABASE_HOST $DATABASE_PORT; do
        sleep 0.1
    done
    echo "PostgreSQL started"
fi

# Make migrations and migrate the database.
echo "Making migrations and migrating the database."
python manage.py makemigrations --noinput
python manage.py migrate --noinput

exec "$@"
The above script first waits for the database to start up. We do this by using netcat to check the database port in a while loop. Then it makes the migrations and applies them. The --noinput flag is used because the commands run inside a script, so they should not prompt the user for input. Next, we need to make the file executable so that Docker can run it.
$ sudo chmod +x entrypoint.sh
Now change the Dockerfile to use this script. Add the following lines at the end of the Dockerfile:
RUN apt-get update && apt-get install -y netcat
ENTRYPOINT ["/code/entrypoint.sh"]
We need to install netcat as it is not installed by default. Once again, let’s test if everything is running. Go to the deployment-project directory and spin up the containers.
$ docker-compose up --build -d
You can use the -d flag to make the containers run in the background.
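As an aside: if you would rather not install netcat in the image, the same wait loop can be written in Python with the database driver we already have. The sketch below is an alternative, not what this tutorial uses, and the file name wait_for_db.py is hypothetical.

# wait_for_db.py -- a hypothetical alternative to the nc loop in entrypoint.sh.
# Retries a real PostgreSQL connection until the db container accepts it.
import os
import time

import psycopg2

while True:
    try:
        conn = psycopg2.connect(
            dbname=os.getenv("POSTGRES_DB"),
            user=os.getenv("POSTGRES_USER"),
            password=os.getenv("POSTGRES_PASSWORD"),
            host=os.getenv("DATABASE_HOST"),
            port=os.getenv("DATABASE_PORT"),
        )
        conn.close()
        break
    except psycopg2.OperationalError:
        print("Waiting for postgres...")
        time.sleep(0.5)
print("PostgreSQL started")

entrypoint.sh could then run python wait_for_db.py in place of the nc loop, and the netcat install could be dropped from the Dockerfile.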
Step 5 — Using Nginx to Serve the Django app
In this step, we will set up another container to use Nginx to serve our Django project. For this, we will do the following:
- Install and use gunicorn as our WSGI server.
- Configure nginx as a reverse proxy for the gunicorn server.
- Add an nginx service to the docker-compose file.
Gunicorn is a production-grade WSGI server that we will use to serve our Django project. To begin, let’s install gunicorn.
$ pip install gunicorn
To use gunicorn as our WSGI server, we use the following command:
$ gunicorn djangoproject.wsgi:application --bind 0.0.0.0:8000 --workers=4
In the above command, djangoproject.wsgi is the name of the WSGI module to use. The --bind flag takes the address of the socket to bind to; in this case, we tell it to bind to port 8000 on all available interfaces. The --workers flag takes the number of worker processes gunicorn should start; we set it to 4 here.
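The worker count does not have to be hard-coded either. Gunicorn can read its settings from a config file, and a common rule of thumb is roughly (2 x CPU cores) + 1 workers. The gunicorn.conf.py below is an optional sketch; the rest of this tutorial keeps the flags on the command line.

# gunicorn.conf.py -- an optional sketch, not used by the rest of this tutorial.
import multiprocessing

bind = "0.0.0.0:8000"
# Rule of thumb: (2 x CPU cores) + 1 worker processes.
workers = multiprocessing.cpu_count() * 2 + 1

You would then start the server with gunicorn djangoproject.wsgi:application -c gunicorn.conf.py.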
Once again, update the requirements.txt file.
$ pip freeze > djangoproject/requirements.txt
Next, let’s make a configuration file for the nginx server. Inside the deployment-project directory, make a directory called nginx.
$ mkdir nginx && cd nginx
$ touch nginx.conf
Open the nginx.conf file and add the following:
upstream djangoapp {
    server django:8000;
}

server {
    listen 80;
    listen [::]:80;

    location / {
        proxy_pass http://djangoapp;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }
}
Next, we add the nginx service to the docker-compose file and also use gunicorn for the Django app. Open the docker-compose.yaml file and change it to the following:
version: '3'
services:
  db:
    container_name: postgresdb
    image: postgres:latest
    restart: always
    env_file:
      - project.env
    ports:
      - 5432:5432
    volumes:
      - postgres-data:/var/lib/postgresql/data
  web:
    container_name: django
    build: djangoproject/
    command: >
      gunicorn djangoproject.wsgi:application --bind 0.0.0.0:8000 --workers=4
    env_file:
      - project.env
    expose:
      - 8000
    depends_on:
      - db
  nginx:
    container_name: nginx
    image: nginx:mainline-alpine
    restart: always
    ports:
      - 1337:80
    volumes:
      - ./nginx:/etc/nginx/conf.d
    depends_on:
      - web
volumes:
  postgres-data:
- In the volumes section of the nginx service, we mount the local ./nginx directory to the /etc/nginx/conf.d directory in the container.
- In depends_on, we say that the nginx service depends on the web service.
- Instead of using ports in the web service, we use expose to expose port 8000 of the container to other containers on the network. But in the nginx service, we use ports to bind port 80 of the container to port 1337 of the host machine.
The difference between ports and expose: use ports when you want to access the port from the host machine’s localhost. Use expose when you want to expose the port only to other containers in the network.
Again, test that everything is working by spinning up the containers.
$ docker-compose up --build
Test that you can see the web server by going to localhost:1337 in your web browser.
Step 6 — The Dockerfile, Revisited
Before configuring the static files, we will take another look at the Dockerfile and restructure it. Previously, we were running the processes inside the Django container as the root user. But, according to the Docker documentation, the best practice is to run your processes as a non-privileged user within the container. So, in this step, we will restructure the Dockerfile and also make a new user to run our app inside the container.
Make a new Dockerfile with the following contents.
FROM python:3.8.2
ENV PYTHONUNBUFFERED 1
ENV PYTHONDONTWRITEBYTECODE 1
RUN apt-get update \
&& apt-get install -y netcat
# Create an app user in the app group.
RUN useradd --user-group --create-home --no-log-init --shell /bin/bash app
ENV APP_HOME=/home/app/web
# Create the staticfiles directory. This avoids permission errors.
RUN mkdir -p $APP_HOME/staticfiles
# Change the workdir.
WORKDIR $APP_HOME
COPY requirements.txt $APP_HOME
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
Copy the code and make the app user the owner of the directory.
COPY . $APP_HOME
RUN chown -R app:app $APP_HOME
Change the user to app and run the entrypoint.
USER app:app
ENTRYPOINT ["/home/app/web/entrypoint.sh"]
Test that everything works by spinning up the containers.
$ docker-compose up --build -d
Verify that the server is running by going to localhost:1337 in your browser.
Step 7 — Serving Static Files with Nginx
We need to configure Nginx to serve our static files because Django does not serve static files in production. First, we configure Django to use a staticfiles directory. Open settings.py and add the following in the static files section:
STATIC_URL = "/static/"
STATIC_ROOT = os.path.join(BASE_DIR, "staticfiles")
We tell Django that the static files are going to be served at the /static/ path, and that the files are going to be in the staticfiles directory.
Next, open the nginx.conf file and add the following:
server {
    ...

    location /static/ {
        alias /home/app/web/staticfiles/;
    }
}
We tell Nginx to serve the contents of the staticfiles directory at the /static/ path.
Next, we need to edit the entrypoint script to collect the static files. Add the collectstatic command to the entrypoint.sh file, just before the final exec line:
#!/bin/sh
...
python manage.py collectstatic --noinput
exec "$@"
To let Nginx access the static files collected by Django, we will store them in a volume and add this volume to both the web and nginx services.
  web:
    container_name: django
    build: djangoproject/
    command: >
      gunicorn djangoproject.wsgi:application --bind 0.0.0.0:8000 --workers=4
    env_file:
      - project.env
    expose:
      - 8000
    depends_on:
      - db
    volumes:
      - staticfiles:/home/app/web/staticfiles
  nginx:
    container_name: nginx
    image: nginx:mainline-alpine
    restart: always
    ports:
      - 1337:80
    volumes:
      - ./nginx:/etc/nginx/conf.d
      - staticfiles:/home/app/web/staticfiles
    depends_on:
      - web
And add this volume to the volumes section:
volumes:
  postgres-data:
  staticfiles:
Check that the static files have loaded by spinning up the containers and going to the admin page at localhost:1337/admin/. You should see the CSS loaded.
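If you want a scripted check instead of eyeballing the admin page, the snippet below requests one of the admin stylesheets through Nginx. The script itself is just a convenience sketch and not part of the project; /static/admin/css/base.css is where Django’s admin CSS ends up after collectstatic.

# check_static.py -- a hypothetical helper for verifying Nginx serves the static files.
from urllib.request import urlopen

with urlopen("http://localhost:1337/static/admin/css/base.css", timeout=5) as response:
    body = response.read()
    print("Got status", response.status, "and", len(body), "bytes of CSS")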
Conclusion
If you’ve made it this far, you have a ready-to-deploy Django app that can be up and running with just one command. You can use this configuration for your own Django apps and deploy them to a server with only a few extra steps.
The full code for this guide is available on GitHub here.