The steps below will get you up and running with a local development environment. All of these commands assume you are in the root of your generated project.
Prerequisites:
- Docker; if you don't have it yet, follow the installation instructions;
- Docker Compose; refer to the official documentation for the installation guide.
Currently PostgreSQL (the `psycopg2` Python package) is not installed inside Docker containers for Windows users, while it is required by the generated Django project. To fix this, add `psycopg2` to the list of requirements inside `requirements/base.txt`:

```
# Python-PostgreSQL Database Adapter
psycopg2==2.6.2
```

Doing this will prevent the project from being installed in a Windows-only environment (i.e. without Docker). If you want to use this project without Docker, make sure to remove `psycopg2` from the requirements again.
This can take a while, especially the first time you run this particular command on your development system:
```bash
$ docker-compose -f local.yml build
```
Generally, if you want to emulate the production environment, use `production.yml` instead. The same goes for any other action you might need to perform: whenever a switch is required, just swap the file.
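For example, building against the production configuration is simply `docker-compose -f production.yml build`; the same pattern applies to `up`, `run`, and the other commands shown below.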
This brings up both Django and PostgreSQL. The first time it is run it might take a while to get started, but subsequent runs will occur quickly.
Open a terminal at the project root and run the following for local development:
```bash
$ docker-compose -f local.yml up
```
You can also set the environment variable `COMPOSE_FILE` pointing to `local.yml` like this:

```bash
$ export COMPOSE_FILE=local.yml
```

And then run:

```bash
$ docker-compose up
```

To run in detached (background) mode, just:

```bash
$ docker-compose up -d
```
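When you are done, `docker-compose -f local.yml down` (or just `docker-compose down` with `COMPOSE_FILE` set) stops and removes the containers again.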
As with any shell command that we wish to run in our container, this is done using the `docker-compose -f local.yml run --rm` command:

```bash
$ docker-compose -f local.yml run --rm django python manage.py migrate
$ docker-compose -f local.yml run --rm django python manage.py createsuperuser
```

Here, `django` is the target service we are executing the commands against.
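Any other management command works the same way; for instance, `docker-compose -f local.yml run --rm django python manage.py shell` drops you into a Django shell inside the container.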
When `DEBUG` is set to `True`, the host is validated against `['localhost', '127.0.0.1', '[::1]']`. This is adequate when running a `virtualenv`. For Docker, in `config.settings.local`, add your host development server IP to `INTERNAL_IPS` or `ALLOWED_HOSTS` if the variable exists.
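A minimal sketch of what that might look like in the local settings module (the addresses below are examples only; substitute the IP of your own Docker host or Docker Machine):

```python
# config/settings/local.py -- illustrative values only; adjust the addresses to your setup.
ALLOWED_HOSTS = ["localhost", "127.0.0.1", "0.0.0.0"]

# Let tools such as django-debug-toolbar treat requests from the Docker host as internal.
INTERNAL_IPS = ["127.0.0.1", "10.0.2.2"]  # 10.0.2.2 is an example Docker Machine/VM address
```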
This is an excerpt from your project's `local.yml`:

```yaml
# ...
postgres:
  build:
    context: .
    dockerfile: ./compose/production/postgres/Dockerfile
  volumes:
    - local_postgres_data:/var/lib/postgresql/data
    - local_postgres_data_backups:/backups
  env_file:
    - ./.envs/.local/.postgres
# ...
```
The most important thing for us here is the `env_file` section listing `./.envs/.local/.postgres`. Generally, the stack's behavior is governed by a number of environment variables (env(s), for short) residing in `.envs/`; for instance, this is what we generate for you:
```
.envs
├── .local
│   ├── .django
│   └── .postgres
└── .production
    ├── .caddy
    ├── .django
    └── .postgres
```
By convention, for any service `sI` in environment `e` (you know `someenv` is an environment when there is a `someenv.yml` file in the project root), given `sI` requires configuration, a `.envs/.e/.sI` service configuration file exists.
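For example, the `postgres` service running in the `local` environment (the one defined by `local.yml`) is configured by `.envs/.local/.postgres`.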
Consider the aforementioned `.envs/.local/.postgres`:

```
# PostgreSQL
# ------------------------------------------------------------------------------
POSTGRES_HOST=postgres
POSTGRES_DB=<your project slug>
POSTGRES_USER=XgOWtQtJecsAbaIyslwGvFvPawftNaqO
POSTGRES_PASSWORD=jSljDz4whHuwO3aJIgVBrqEml5Ycbghorep4uVJ4xjDYQu0LfuTZdctj7y0YcCLu
```
The three envs we are presented with here are `POSTGRES_DB`, `POSTGRES_USER`, and `POSTGRES_PASSWORD` (by the way, their values have also been generated for you). You might have figured out already where these definitions will end up; it's all the same with the `django` and `caddy` service container envs.
One final touch: should you ever need to merge `.envs/.production/*` into a single `.env`, run `merge_production_dotenvs_in_dotenv.py`:

```bash
$ python merge_production_dotenvs_in_dotenv.py
```

The `.env` file will then be created, with all your production envs residing beside each other.
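Conceptually, the merge just concatenates the production dotenv files into one `.env`. A minimal sketch of that idea (this is not the actual generated script; the paths are assumed from the `.envs/` layout shown above):

```python
# Sketch only -- the generated project ships its own merge_production_dotenvs_in_dotenv.py.
# Paths are assumed from the .envs/ layout shown above.
from pathlib import Path

production_dir = Path(".envs") / ".production"

# Concatenate every production dotenv file into a single .env at the project root.
merged = "\n".join(dotenv.read_text() for dotenv in sorted(production_dir.iterdir()))
Path(".env").write_text(merged)
```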
If you are using Docker Machine: this tells our computer that all future commands are specifically for the `dev1` machine. Using the `eval` command we can switch machines as needed:

```bash
$ eval "$(docker-machine env dev1)"
```
If you are using the following within your code to debug:

```python
import ipdb; ipdb.set_trace()
```

Then you may need to run the following for it to work as desired:

```bash
$ docker-compose -f local.yml run --rm --service-ports django
```
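The `--service-ports` flag publishes the ports declared for the `django` service, so the development server stays reachable in your browser while the command runs in the foreground and you interact with the debugger in that terminal. As an illustration (the view below is hypothetical, not part of the generated project), a breakpoint dropped into a view pauses execution right there:

```python
# Hypothetical view, for illustration only: any view in your apps works the same way.
from django.http import HttpResponse

def example_view(request):
    import ipdb; ipdb.set_trace()  # execution pauses here; interact in the attached terminal
    return HttpResponse("made it past the breakpoint")
```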
In order for `django-debug-toolbar` to work, designate your Docker Machine IP with `INTERNAL_IPS` in `local.py`.
When developing locally you can use MailHog for email testing, provided `use_mailhog` was set to `y` on setup. To proceed:

- make sure the `mailhog` container is up and running;
- open up `http://127.0.0.1:8025`.
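MailHog's web interface at that address captures and displays the emails your local `django` container sends, so nothing needs to be delivered to a real inbox during development.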
Flower is a "real-time monitor and web admin for Celery distributed task queue".
Prerequisites:
- `use_docker` was set to `y` on project initialization;
- `use_celery` was set to `y` on project initialization.
By default, it's enabled both in local and production environments (`local.yml` and `production.yml` Docker Compose configs, respectively) through a `flower` service. For added security, `flower` requires its clients to authenticate; the credentials come from the `CELERY_FLOWER_USER` and `CELERY_FLOWER_PASSWORD` environment variables defined in the corresponding environment's `.envs/.local/.django` and `.envs/.production/.django` files. Check out `localhost:5555` and see for yourself.
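Both `.django` files are therefore expected to contain `CELERY_FLOWER_USER` and `CELERY_FLOWER_PASSWORD` entries; their values are generated for you, just like the Postgres credentials shown earlier, and they are what the Flower login prompt asks for.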