**Warning:** This is for local development only, and deliberately removes authentication for ease of use.
A small project for running CloudQuery locally, using Docker.
It includes:
- CloudQuery
- Postgres
- Grafana
- Docker
1. Get deployTools credentials from Janus.
2. In the project root, run the following, and follow the resulting instructions:

   ```sh
   ./scripts/setup.sh
   ```

3. Start Docker.
4. Run:

   ```sh
   npm start -w dev-environment
   ```

   or:

   ```sh
   ./packages/dev-environment/script/start
   ```

   This will start the Docker containers, and CloudQuery will start collecting data.

5. Wait for tables to start being populated. Usually the first tables show up after a few seconds, but this can take as long as a minute (see the `psql` sketch after this list for a quick check).
6. Open Grafana at http://localhost:3000, and start querying the data.
7. To restart on your local machine, delete the container in Docker and go back to step 2.
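To check that tables are being populated (step 5), you can list them with `psql`. This is a minimal sketch: the host, port, user, and database below are assumed Postgres defaults, so adjust them to match your local containers.

```sh
# List the tables CloudQuery has created so far.
# Connection details are assumptions (Postgres defaults) — adjust as needed.
psql -h localhost -p 5432 -U postgres -d postgres -c '\dt'
```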
**Note:** You can also use other Postgres clients, such as `psql`, to query the data, or even your IDE!
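As an illustration, once tables exist you can query one directly from the command line. The table and column names below (`aws_s3_buckets`, `name`, `region`) are assumptions based on what CloudQuery's AWS plugin typically creates; substitute a table that actually exists in your database.

```sh
# Query an example CloudQuery table; aws_s3_buckets is an assumed name.
psql -h localhost -p 5432 -U postgres -d postgres \
  -c 'SELECT name, region FROM aws_s3_buckets LIMIT 10;'
```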
To test the Prisma migration container, you can run the following script:

```sh
./packages/dev-environment/script/start-prisma-migrate-test
```

This will run the `run-prisma-migrate.sh` script.
To develop locally once the tables have been populated, follow the steps in the repocop README.
The local instance of CloudQuery executes plugins sequentially, in the order they appear in the config file. If you're particularly interested in Snyk data, you can move the Snyk plugin to the top of the list in the config file, and that data will be collected first.
If CloudQuery can't detect credentials for Snyk or GitHub, it will skip those jobs and still collect data from the other sources. If you're not interested in GitHub data, you don't need to generate a token.
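If you do want those sources collected, a minimal sketch is to export the tokens before starting the stack. The variable names here are hypothetical — check which names the config file actually reads.

```sh
# Provide credentials so CloudQuery doesn't skip the Snyk/GitHub jobs.
# Variable names are assumptions — confirm them against the config file.
export GITHUB_TOKEN="<your-github-token>"
export SNYK_TOKEN="<your-snyk-token>"
npm start -w dev-environment
```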
- Use the same configuration files as PROD?