
Local development setup


Requirements

This document was written on an Ubuntu 22.04 desktop, so the instructions may need some adjustment on other distributions.

Ubuntu 22.04 uses Python 3.10, so that's the version of Python we'll be using (where it's needed).
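
To confirm which Python version your system provides, check it directly (on Ubuntu 22.04 this should report a 3.10.x release):

$ python3 --version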

Windows WSL2

These instructions have been reported to work without any changes on Windows WSL2.

Set up the prerequisites

Install needed packages

$ sudo apt -y install docker.io docker-buildx docker-compose-v2
## NOTE: You may need to remove the corresponding docker plugins first if the above command fails
$ sudo apt -y install build-essential curl docker-compose pwgen python3-venv xvfb

Note

If you're installing on GitHub Codespaces, you need to run the following commands:
$ sudo apt purge moby-buildx && sudo apt install containerd

Add your user to the "docker" group

$ sudo usermod -aG docker $USER
$ newgrp docker
$ sudo systemctl restart docker
$ sudo chmod 666 /var/run/docker.sock
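
As an optional sanity check that your user can now talk to the Docker daemon, run the standard hello-world image:

$ docker run --rm hello-world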

Install Node Version Manager

$ curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash
## You may need to save the installer as a script file first, then change the shebang to point to the correct shell

Now log out of your desktop, then back in again, so the group change takes effect and nvm becomes available.
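
After logging back in, you can verify both changes took effect. These are standard shell checks, nothing Redash-specific:

$ id -nG            ## "docker" should appear in the list of groups
$ nvm --version     ## should print the installed nvm version, e.g. 0.39.7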

Install NodeJS version 18

$ nvm install --lts 18
$ nvm alias default 18
$ nvm use 18

Confirm version 18 of NodeJS is active:

$ nvm list

Install Yarn 1.x

$ npm install -g yarn@1.22.22
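
A quick check that the toolchain is in place (the versions shown match what the steps above install):

$ node --version    ## should report an 18.x release
$ yarn --version    ## should report 1.22.22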

Clone the Redash source code and install the NodeJS dependencies

$ git clone https://github.com/getredash/redash
$ cd redash
$ yarn

Compile and build

Redash uses GNU Make to run most tasks, so if you're not sure about something, it's often a good idea to look through the Makefile. 😄
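
If you want a quick overview of the available targets, you can list them straight from the Makefile. This is just a generic grep over the file, not a Redash-specific helper:

$ grep -E '^[a-zA-Z0-9_.-]+:' Makefile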

Build the Redash front end

$ make build

Build local Redash Docker image

$ make compose_build

On my desktop (Ryzen 5600X) the first build took about 12 minutes to complete. After that it's much faster, at about a minute and a half each time.

It's a good idea to check that the Docker images were built ok. We do that by asking Docker to list the local images, which should include these three new ones. It's important that the "CREATED" column shows them to be very recent... if it doesn't, they're old images left over from something else. 😉

$ docker image list
REPOSITORY         TAG       IMAGE ID       CREATED         SIZE
redash_scheduler   latest    85bc2dc57801   2 minutes ago   1.38GB
redash_server      latest    85bc2dc57801   2 minutes ago   1.38GB
redash_worker      latest    85bc2dc57801   2 minutes ago   1.38GB

Start Redash locally, using the docker images you just built

$ make create_database
$ make up

The docker compose ps command should show that all of the Docker containers are running:

$ docker compose ps
       Name                     Command                  State                                  Ports                            
---------------------------------------------------------------------------------------------------------------------------------
redash_email_1       bin/maildev                      Up (healthy)   1025/tcp, 1080/tcp, 0.0.0.0:1080->80/tcp,:::1080->80/tcp    
redash_postgres_1    docker-entrypoint.sh postg ...   Up             0.0.0.0:15432->5432/tcp,:::15432->5432/tcp                  
redash_redis_1       docker-entrypoint.sh redis ...   Up             6379/tcp                                                    
redash_scheduler_1   /app/bin/docker-entrypoint ...   Up             5000/tcp                                                    
redash_server_1      /app/bin/docker-entrypoint ...   Up             0.0.0.0:5001->5000/tcp,:::5001->5000/tcp,                   
                                                                     0.0.0.0:5678->5678/tcp,:::5678->5678/tcp                    
redash_worker_1      /app/bin/docker-entrypoint ...   Up             5000/tcp

The Redash web interface should also be available at http://localhost:5001, ready to be configured:

(screenshot: the initial Redash setup page at http://localhost:5001)
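
If you'd rather check from the command line, a simple request to the mapped server port should succeed (this uses plain curl against the port shown in the compose output above):

$ curl -sSf -o /dev/null http://localhost:5001/ && echo "Redash is responding"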

Once you've confirmed everything works the way you want, shut down the containers with:

$ make down

Set up Python for local backend development

Install the Ubuntu packages needed by various data sources:

$ sudo apt install -y --no-install-recommends default-libmysqlclient-dev freetds-dev libffi-dev libpq-dev \
    python3-dev libsasl2-dev libsasl2-modules-gssapi-mit libssl-dev unixodbc-dev xmlsec1

Then create a Python virtual environment, so Python libraries can be installed safely without affecting the rest of the system:

$ python3 -m venv ~/redashvenv1
$ source ~/redashvenv1/bin/activate

When the Python virtual environment is active in your session, it changes the prompt to look like this:

(redashvenv1) $
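
When you're finished working in the virtual environment, you can leave it again with the standard deactivate command:

(redashvenv1) $ deactivate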

With that done, install the rest of the Python dependencies:

(redashvenv1) $ pip3 install wheel  # "wheel" needs to be installed by itself first
(redashvenv1) $ pip3 install --upgrade black ruff launchpadlib pip setuptools
(redashvenv1) $ pip3 install poetry
(redashvenv1) $ poetry install --only main,all_ds,dev

Configuring Pre-commit

Before committing changes to GitHub or creating a pull request, the source code needs to be checked and formatted for certain quality standards:

(redashvenv1) $ make format
pre-commit run --all-files
isort....................................................................Passed
black....................................................................Passed
flake8...................................................................Passed

Enable the pre-commit hook so these checks run automatically before each commit:

(redashvenv1) $ pre-commit install
(redashvenv1) $ git commit -m 'Added xxx'

Next step: Testing
