We're excited that you're interested in contributing to Opik! There are many ways to contribute, from writing code to improving the documentation.
The easiest way to get started is to:
- Submit bug reports and feature requests
- Review the documentation and submit Pull Requests to improve it
- Speak or write about Opik and let us know
- Upvote popular feature requests to show your support
- Review our Contributor License Agreement
Thanks for taking the time to submit an issue; it's the best way to help us improve Opik!
Before submitting a new issue, please check the existing issues to avoid duplicates.
To help us understand the issue you're experiencing, please provide steps to reproduce it, including a minimal code snippet. This helps us diagnose and fix the issue more quickly.
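Alongside the snippet, it also helps to include your environment details; one way to capture them (assuming the SDK was installed with pip):
# Capture the Python and Opik SDK versions for the report
python --version
pip show opik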
Feature requests are welcome! To help us understand the feature you'd like to see, please provide:
- A short description of the motivation behind this request
- A detailed description of the feature you'd like to see, including any code snippets if applicable
If you are in a position to submit a PR for the feature, feel free to open a PR!
The Opik project is made up of five main sub-projects:
- `apps/opik-documentation`: The Opik documentation website
- `deployment/installer`: The Opik installer
- `sdks/python`: The Opik Python SDK
- `apps/opik-frontend`: The Opik frontend application
- `apps/opik-backend`: The Opik backend server
In addition, Opik relies on:
- Clickhouse: Used to store traces, spans and feedback scores
- MySQL: Used to store metadata associated with projects, datasets, experiments, etc.
- Redis: Used for caching
In order to run the development environment, you will need to have the following tools installed:
- `minikube` - https://minikube.sigs.k8s.io/docs/start
- More tools:
  - `bash completion` / `zsh completion`
  - `kubectx` and `kubens` - easy switching between contexts/namespaces for kubectl - https://github.com/ahmetb/kubectx
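For example, with `kubectx` and `kubens` installed, you can point kubectl at the minikube cluster and the `opik` namespace (the namespace created by the install script):
kubectx minikube
kubens opik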
The local development environment is based on minikube. Once you have minikube installed, you can run it using:
minikube start
You can then run Opik and its dependencies (Clickhouse, Redis, MySQL, etc.) using:
./build_and_run.sh
This script supports the following options:
--no-build Skip the build process
--no-fe-build Skip the FE build process
--no-helm-update Skip helm repo update
--local-fe Run FE locally (For frontend developers)
--help Display help message
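For example, to reuse the existing images and skip the helm repo update:
./build_and_run.sh --no-build --no-helm-update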
Note
The first time you run the `build_and_run.sh` script, it can take a few minutes to install everything.
To check that the application is running, you can access the FE at http://localhost:5173
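If the FE does not come up, you can check that the pods are running (the `opik` namespace assumes the default install):
kubectl get pods -n opik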
Connecting to Clickhouse
You can run the `clickhouse-client` with:
kubectl exec -it chi-opik-clickhouse-cluster-0-0-0 -- clickhouse-client
After the client is connected, you can check the databases with
show databases;
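You can also run one-off queries without an interactive session using the client's standard `--query` flag; for example, to list the tables in the `opik` database:
kubectl exec -it chi-opik-clickhouse-cluster-0-0-0 -- clickhouse-client --query "SELECT name FROM system.tables WHERE database = 'opik'"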
Minikube commands
List the pods that are running
kubectl get pods
To restart a pod, just delete it; k8s will start a new one:
kubectl delete pod <pod name>
There is no clean way to delete the databases, so if you need to do that, it's better to delete the namespace and then install again. Run
kubectl delete namespace opik
and in parallel (in another terminal window/tab) run
kubectl patch chi opik-clickhouse --type json --patch='[ { "op": "remove", "path": "/metadata/finalizers" } ]'
after the namespace is deleted, run
./build_and_run.sh --no-build
to install everything again
Stop minikube
minikube stop
The next time you start minikube, it will run everything with the same configuration and data you had before.
The documentation is made up of three main parts:
- `apps/opik-documentation/documentation`: The Opik documentation website
- `apps/opik-documentation/python-sdk-docs`: The Python reference documentation
- `apps/opik-documentation/rest-api-docs`: The REST API reference documentation
The documentation website is built using Docusaurus and is located in `apps/opik-documentation/documentation`.
In order to run the documentation website locally, you need to have `npm` installed. Once installed, you can run the documentation locally using the following command:
cd apps/opik-documentation/documentation
# Install dependencies - Only needs to be run once
npm install
# Run the documentation website locally
npm run start
You can then access the documentation website at http://localhost:3000. Any change you make to the documentation will be updated in real-time.
The Python SDK reference documentation is built using Sphinx and is located in `apps/opik-documentation/python-sdk-docs`.
In order to run the Python SDK reference documentation locally, you need to have `python` and `pip` installed. Once installed, you can run the documentation locally using the following command:
cd apps/opik-documentation/python-sdk-docs
# Install dependencies - Only needs to be run once
pip install -r requirements.txt
# Run the python sdk reference documentation locally
make dev
The Python SDK reference documentation will be built and available at http://127.0.0.1:8000. Any change you make to the documentation will be updated in real-time.
Setting up your development environment:
In order to develop features in the Python SDK, you will need to have Opik running locally. You can follow the instructions in the Configuring your development environment section, or run Opik locally with Docker Compose:
cd deployment/docker-compose
# Starting the Opik platform
docker compose up --detach
# Configure the Python SDK to point to the local Opik deployment
opik configure --use_local
The Opik server will be running on http://localhost:5173.
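As a quick sanity check before using the SDK, you can confirm that all containers are up:
docker compose ps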
Submitting a PR:
The Python SDK is available under `sdks/python` and can be installed locally using `pip install -e sdks/python`.
Before submitting a PR, please ensure that your code passes the test suite:
cd sdks/python
pytest tests/
and the linter:
cd sdks/python
pre-commit run --all-files
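While iterating on the tests, you can narrow the run with standard pytest selection flags (the expression below is only an example):
# Run only the tests whose names match the given expression
pytest tests/ -k "some_test_name" -v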
Note
If your changes impact public-facing methods or docstrings, please also update the documentation. You can find more information about updating the docs in the documentation contribution guide.
The Opik frontend is a React application that is located in `apps/opik-frontend`.
In order to run the frontend locally, you need to have `npm` installed. Once installed, you can run the frontend locally using the following command:
# Run the backend locally with the flag "--local-fe"
./build_and_run.sh --local-fe
cd apps/opik-frontend
# Install dependencies - Only needs to be run once
npm install
# Run the frontend locally
npm run start
You can then access the development frontend at http://localhost:5173/. Any change you make to the frontend will be updated in real-time.
You will need to open the FE using http://localhost:5173/, ignoring the output of the `npm run start` command, which will recommend opening http://localhost:5174/. If http://localhost:5174/ is opened, the BE will not be accessible.
Before submitting a PR, please ensure that your code passes the test suite, the linter and the type checker:
cd apps/opik-frontend
npm run e2e
npm run lint
npm run typecheck
In order to run the external services (Clickhouse, MySQL, Redis), you can use the `build_and_run.sh` script or `docker-compose`:
cd deployment/docker-compose
docker compose up clickhouse redis mysql -d
The Opik backend is a Java application that is located in `apps/opik-backend`.
In order to run the backend locally, you need to have `java` and `maven` installed. Once installed, you can run the backend locally using the following command:
cd apps/opik-backend
# Build the Opik application
mvn clean install
# Start the Opik application
java -jar target/opik-backend-{project.pom.version}.jar server config.yml
Replace `{project.pom.version}` with the version of the project in the pom file.
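If you are unsure of the current version, one way to read it from the pom is the Maven help plugin:
mvn help:evaluate -Dexpression=project.version -q -DforceStdout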
Once the backend is running, you can access the Opik API at http://localhost:8080.
Before submitting a PR, please ensure that your code passes the test suite:
cd apps/opik-backend
mvn test
Tests leverage the `testcontainers` library to run integration tests against real instances of the external services. Ports are randomly assigned by the library to avoid conflicts.
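To iterate on a single test class, the standard Surefire selection flag works (the class name here is only an example):
mvn test -Dtest=TracesResourceTest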
Health Check
To see your application's health, open http://localhost:8080/healthcheck
Run migrations
DDL migrations
The project handles DDL migrations using Liquibase. Such migrations are located at `apps/opik-backend/src/main/resources/liquibase/{{DB}}/migrations` and executed via `apps/opik-backend/run_db_migrations.sh`. This process is automated via the Docker image and Helm chart.
In order to run DB DDL migrations manually, you will need to run:
- Check pending migrations
java -jar target/opik-backend-{project.pom.version}.jar {database} status config.yml
- Run migrations
java -jar target/opik-backend-{project.pom.version}.jar {database} migrate config.yml
- Create schema tag
java -jar target/opik-backend-{project.pom.version}.jar {database} tag config.yml {tag_name}
- Rollback migrations
java -jar target/opik-backend-{project.pom.version}.jar {database} rollback config.yml --count 1
OR
java -jar target/opik-backend-{project.pom.version}.jar {database} rollback config.yml --tag {tag_name}
Replace `{project.pom.version}` with the version of the project in the pom file. Replace `{database}` with `db` for MySQL migrations and with `dbAnalytics` for ClickHouse migrations.
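For example, checking pending MySQL migrations with a hypothetical version 1.0.0:
java -jar target/opik-backend-1.0.0.jar db status config.yml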
Requirements:
- Such migrations have to be backward compatible, which means:
  - New fields must be optional or have default values
  - In order to remove a column, all references to it must be removed at least one release before the column is dropped at the DB level
  - Renaming a column is forbidden unless the table is not currently being used
  - Renaming a table is forbidden unless the table is not currently being used
  - For more complex migrations, apply a transition phase. Refer to Evolutionary Database Design
- It has to be independent of the code
- It must not cause downtime
- It must have a unique name
- It must contain a rollback statement or, in case a rollback is not possible, the word `empty`. Refer to the Liquibase rollback documentation
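As an illustration only, a minimal backward-compatible changeset in Liquibase's formatted-SQL style could look like this (the file, table and column names are hypothetical):
# Sketch of a new-column migration with a default value and a rollback statement
cat > src/main/resources/liquibase/db/migrations/000099_add_example_column.sql <<'EOF'
--liquibase formatted sql
--changeset your_name:000099_add_example_column
ALTER TABLE example_table ADD COLUMN example_column VARCHAR(255) DEFAULT NULL;
--rollback ALTER TABLE example_table DROP COLUMN example_column;
EOF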
DML migrations
In such cases, migrations will not run automatically. They have to be run manually by the system admin via the database client. These migrations are documented via `CHANGELOG.md` and placed at `apps/opik-backend/data-migrations`, together with all instructions required to run them.
Requirements:
- Such migrations have to be backward compatible, which means:
  - Data shouldn't be deleted unless 100% safe
  - It must not prevent rollback to the previous version
  - It must not degrade performance after running
  - For more complex migrations, apply a transition phase. Refer to Evolutionary Database Design
- It must contain detailed instructions on how to run it
- It must be batched appropriately to avoid disrupting operations
- It must not cause downtime
- It must have a unique name
- It must contain a rollback statement or, in case a rollback is not possible, the word `empty`. Refer to the Liquibase rollback documentation
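As a sketch of appropriate batching, a manual MySQL backfill could loop over bounded UPDATEs until no rows are left to change (the statement, table and connection details are hypothetical):
# Repeat a bounded UPDATE until it affects no more rows
while true; do
  rows=$(mysql -N -e "UPDATE example_table SET new_col = old_col WHERE new_col IS NULL AND old_col IS NOT NULL LIMIT 1000; SELECT ROW_COUNT();" opik)
  [ "$rows" -eq 0 ] && break
done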
Accessing Clickhouse
You can curl the ClickHouse REST endpoint with:
echo 'SELECT version()' | curl -H 'X-ClickHouse-User: opik' -H 'X-ClickHouse-Key: opik' 'http://localhost:8123/' -d @-
Sample result: 23.8.15.35
Running SHOW DATABASES from within `clickhouse-client` produces output like:
SHOW DATABASES

Query id: a9faa739-5565-4fc5-8843-5dc0f72ff46d

┌─name───────────────┐
│ INFORMATION_SCHEMA │
│ opik               │
│ default            │
│ information_schema │
│ system             │
└────────────────────┘

5 rows in set. Elapsed: 0.004 sec.
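The same credentials work for any ad-hoc statement over the REST endpoint; for example, listing the databases:
echo 'SHOW DATABASES' | curl -H 'X-ClickHouse-User: opik' -H 'X-ClickHouse-Key: opik' 'http://localhost:8123/' -d @-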