- Python 3 and Java 11 or higher
- For local running/development: the Postgres client library (for apt users: `apt install libpq-dev`)
- Docker and Docker Compose if running the system or the database from a container
- For running without Docker: set up a Postgres instance on port 5432
Python dependencies are managed via pip-compile, starting from a core set of dependencies (contained in the `requirements.in` and `requirements_dev.in` files) which is used to generate the full set of pinned dependencies (contained in the `requirements.txt` and `requirements_dev.txt` files). Dependencies are furthermore split between production ones (`requirements.in` and `requirements.txt`) and development-only ones (`requirements_dev.in` and `requirements_dev.txt`): this keeps the Docker image smaller and reduces the dependency tracking workload (and the number of Dependabot pull requests).

pip-compile should be installed by running `pip install pip-tools`.

The dependency files are generated by running `./create_python_requirements`, which also checks for the latest available versions of the dependencies.
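The script itself is the source of truth for its exact contents; a minimal sketch of the standard pip-compile workflow it presumably wraps, assuming default pip-tools behaviour:

```bash
# Sketch (assumption) of what ./create_python_requirements roughly runs.
# --upgrade asks pip-compile to resolve against the latest available versions.
pip-compile --upgrade requirements.in --output-file requirements.txt
pip-compile --upgrade requirements_dev.in --output-file requirements_dev.txt
```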
- Create a python environment (one-time instruction): `python3 -m venv env`
- Activate it using: `source env/bin/activate`
- Install the requirements (one-time instruction): `pip install -r requirements.txt -r requirements_dev.txt`
- Set up the database via Docker (`docker compose up db`) or connect your own Postgres instance
- Run the migrations from the root folder: `python src/manage.py migrate`
- Create the InfluxDB buckets: `python src/manage.py initbuckets`
- Run the server from the root folder: `python src/manage.py runserver` (the server runs on http://127.0.0.1:8000/)
- To run the tests: `python src/manage.py test`
- To run pylint: `find src -name "*.py" | xargs pylint`
This method should be sufficient for most development tasks, as any file changes you make are monitored by StatReloader. To start the environment, run: `docker compose up --build`

Note: omit `--build` to skip building the container and reuse the image cached from the last build.
After this, you can access the application on http://127.0.0.1:8000/ and a pgAdmin instance on http://127.0.0.1:5050/ with `admin@admin.com`/`admin` as username/password. From pgAdmin, add a new server by giving it a name and the database credentials matching the Postgres variables in the `.env` file (by default: host `db`, port `5432`, username `postgres`, password `postgres`).
InfluxDB can be accessed at http://localhost:8086/ (username: `admin`, password: `adminpwd`).
Grafana runs on http://localhost:3000/ (username: `admin`, password: `adminpwd`).
The datasource and dashboard configurations for Grafana can be changed from `grafana/provisioning/grafana-datasources.yml` and `grafana/dashboards/grafana-dashboard.yml` respectively. New dashboards can also be created in Grafana and exported as JSON, then added to `grafana/dashboards`, to be loaded when the container restarts.
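Dashboards can also be pulled out of a running Grafana over its HTTP API instead of the UI export; a sketch, where `<dashboard-uid>` is a placeholder for the UID visible in the dashboard's URL:

```bash
# Illustrative: fetch a dashboard as JSON via the Grafana HTTP API.
# The response wraps the dashboard in a {"dashboard": ..., "meta": ...} envelope;
# keep only the "dashboard" object before dropping it into grafana/dashboards.
curl -s -u admin:adminpwd http://localhost:3000/api/dashboards/uid/<dashboard-uid>
```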
To reset the containers and remove the volumes, run the `./reset_docker.sh` script.
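The script itself has the authoritative steps; a reset of this kind typically boils down to something like:

```bash
# Assumption about what reset_docker.sh roughly does:
# stop the containers, delete their volumes, and clean up leftover containers.
docker compose down --volumes --remove-orphans
```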
- In case SSL certificates are used, create a volume (named, for example, `delfitlm_certificates`) and copy `server.pem` and `server.key` into it (a volume-population sketch follows the example `.env` below). Ensure they are owned by root and that permissions are 644 before copying them.
- Set up the firewall bouncer for CrowdSec; instructions are in `crowdsec/README.md`.
- Configure the `.env` file with the preferred settings and add the website hostname. Example `.env`:
```
SECRET_KEY=
MY_HOST=localhost
SMTP_HOST=
SMTP_PORT=25
FROM_EMAIL=delfi@tudelft.nl
POSTGRES_PORT=5432
POSTGRES_HOST=db
POSTGRES_USER=postgres
POSTGRES_PASSWORD=postgres
POSTGRES_DB=delfitlm
INFLUX_USERNAME=admin
INFLUX_PASSWORD=adminpwd
INFLUX_BUCKET=default
INFLUXDB_V2_TOKEN=adminpwd
INFLUXDB_V2_ORG=Delfi Space
GRAFANA_ADMIN_USER=admin
GRAFANA_ADMIN_USER_PWD=adminpwd
GF_SERVER_DOMAIN=localhost
GF_INFLUXDB_V2_TOKEN=adminpwd
SATNOGS_TOKEN=
CROWDSEC_LAPI=
```
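`SECRET_KEY` is intentionally left blank above. Besides the `djecrety` helper described in the deployment steps below, a value can be generated with Django's own bundled utility:

```bash
# Prints a random key suitable for SECRET_KEY using Django's own helper.
python -c "from django.core.management.utils import get_random_secret_key; print(get_random_secret_key())"
```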
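As referenced in the SSL bullet above, one way to create and populate the certificate volume (file locations are illustrative; adjust them to where your certificates live):

```bash
# Create the named volume and copy the certificates into it via a throwaway
# container; the files end up owned by root with 644 permissions.
docker volume create delfitlm_certificates
docker run --rm -v delfitlm_certificates:/certs -v "$(pwd)":/src alpine \
  sh -c "cp /src/server.pem /src/server.key /certs/ && chown root:root /certs/* && chmod 644 /certs/*"
```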
- Build and run the Docker deployment (runs on port 80, the default web port): `docker compose -f docker-compose.yml -f docker-compose-deploy.yml up --build`
- Access the container to initialize Django (only required the first time): `docker exec -it delfitlm-app-1 /bin/bash`
- Run the database migration to create the tables (only required the first time): `python manage.py migrate`
- Create a superuser (admin user) (only required the first time): `python manage.py createsuperuser`
- Generate a Django secret key with `python manage.py djecrety` and copy it to the `.env` file.
- Create the InfluxDB buckets: `python src/manage.py initbuckets`

Note: as before, omit `--build` to skip building the container and reuse the image cached from the last build.
- To run the unit tests, execute `python manage.py test` from within the `src` folder.
- To compile the coverage report, run the `./run_coverage.sh` script; the report will appear in `src/htmlcov/index.html`.
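The script's exact contents may vary; in plain coverage.py commands, the equivalent is roughly:

```bash
# Rough equivalent of ./run_coverage.sh (assumption): run the test suite under
# coverage measurement, then emit the HTML report into src/htmlcov/.
cd src
coverage run manage.py test
coverage html
```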
To back up the database, run: `docker exec -t your-db-container pg_dumpall -c -U your-db-user > dump.sql`

For this project: `docker exec -t delfitlm-db-1 pg_dumpall -c -U postgres > dump.sql`

To restore the database, run: `cat dump.sql | docker exec -i your-db-container psql -U your-db-user -d your-db-name`

For this project: `cat dump.sql | docker exec -i delfitlm-db-1 psql -U postgres -d delfitlm`
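For recurring backups (for example from a cron job), the same dump command with a timestamped file name avoids overwriting previous dumps:

```bash
# Illustrative variant of the backup command above with a dated output file.
docker exec -t delfitlm-db-1 pg_dumpall -c -U postgres > "dump_$(date +%F_%H-%M).sql"
```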
To change the password of the postgres user:

- Update `reset_postgres_password.sql` with the new password (a sketch of its likely contents follows this list).
- Run: `cat reset_postgres_password.sql | docker exec -i delfitlm-db-1 psql -U postgres`
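The repository's `reset_postgres_password.sql` is not reproduced here; presumably it holds a single `ALTER USER` statement along these lines:

```bash
# Sketch (assumption) of the SQL file's contents, written out via a heredoc.
cat > reset_postgres_password.sql <<'SQL'
ALTER USER postgres WITH PASSWORD 'new-password-here';
SQL
```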
To change the InfluxDB admin password and rotate the API token:

- Enter the container: `docker exec -it delfitlm-influxdb-1 /bin/bash`
- Change the password using: `influx user password -n admin -t INFLUXDB_V2_TOKEN`
- Find the token ID using: `influx auth find -t OLD_INFLUXDB_V2_TOKEN`
- Create a new token with: `influx auth create --org 'Delfi Space' --all-access -t OLD_INFLUXDB_V2_TOKEN`
- Delete the old token: `influx auth delete --id OLD_INFLUXDB_V2_TOKEN_ID -t NEW_INFLUXDB_V2_TOKEN`
To change the Grafana admin password: `docker exec -it delfitlm-grafana-1 grafana-cli admin reset-admin-password newpassword`
The Django admin page can be used to elevate user permissions, assign roles, or block accounts; the admin account is used to manage these user roles and permissions.
The file `src/transmission/processing/satellites.py` maintains the satellites we are operating. It contains the NORAD ID and activity status used for map tracking and other monitoring purposes. When new satellites are launched or decommissioned, the status in this file should be updated accordingly. The statuses `Operational` (functional satellite in orbit) and `Non Operational` (orbiting satellite that is no longer functional) mean the location of the satellite will be tracked, while the other statuses are simply displayed on the front page.