---
title: Quickstart
description: Start with Vibranium Dome
---

## Setup

The project has multiple components; the easiest way to start is with Docker Compose or Helm charts. VibraniumDome supports both the OpenAI API and the Azure AI API.

[OpenSearch](https://opensearch.org/docs/latest/quickstart/#starting-your-cluster) requires the max virtual memory areas setting `vm.max_map_count` to be `262144`, but in some environments it is `65530`, so check the value first and apply the workaround if needed.

Check the current value:

```
docker run -u root -it opensearchproject/opensearch:2.9.0 bash -c "cat /proc/sys/vm/max_map_count"
```

If that command returns `65530`, run the following to increase it temporarily:

```
docker run --rm --privileged alpine sysctl -w vm.max_map_count=262144
```

Then run the first command again to make sure the new value is set to `262144`.

On a Linux machine, OpenSearch needs to own its filesystem volumes, so run:

```
mkdir vibraniumdome-opensearch/vibraniumdome-opensearch-data1
mkdir vibraniumdome-opensearch/vibraniumdome-opensearch-data2
sudo chown -R 1000:1000 vibraniumdome-opensearch/vibraniumdome-opensearch-data1
sudo chown -R 1000:1000 vibraniumdome-opensearch/vibraniumdome-opensearch-data2
```

Set the `OPENAI_API_KEY` environment variable in the [environment file](https://github.com/genia-dev/vibraniumdome/blob/main/vibraniumdome-shields/.env.example); you can generate one [here](https://platform.openai.com/api-keys).

Clone the repository:

```
git clone https://github.com/genia-dev/vibraniumdome
cd vibraniumdome
```

With docker-compose:

```
docker-compose up
```

With Helm:

```
helm install test-release helm/
```

Navigate to [Vibranium-App](http://localhost:3000/).

There are two authentication providers in the app: Google and basic credentials. To make the Google login work, you need to register an OAuth 2.0 application on Google Cloud Platform. The default credentials for the basic login are username `admin@admin.com` and password `admin`; you can change them here.
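
As a purely illustrative sketch (the variable names below are hypothetical, not the app's documented configuration — check the app's environment file for the actual keys), the Google OAuth client credentials would typically be supplied through environment variables such as:

```
# Hypothetical variable names, for illustration only
GOOGLE_CLIENT_ID="<your-oauth-client-id>"
GOOGLE_CLIENT_SECRET="<your-oauth-client-secret>"
```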

The previous `docker-compose up` command uses the existing Docker images from Docker Hub. If you would like to modify and build the project from source locally, you can run docker-compose in build mode:

```
cp vibraniumdome-app/.env.example .env
docker-compose -f docker-compose.build.yml up --build
```

## Analyze your first LLM interaction

There are two ways to start scanning your LLM application: by code, or by using a Streamlit app that is included in the Docker Compose & Helm Chart.

### Analyze LLM Interactions by Streamlit app

After you run the whole system as described above, navigate to the VibraniumDome-Streamlit-App.

In the left pane of the Streamlit app, in the Environment Variables section, configure the OpenAI API Key. By default, the Streamlit app uses http://localhost:5001 as the Vibranium Dome Base URL (optional); you can change this to point to a different URL.

Now you can chat in the app, and see the results in Vibranium-App.

### Analyze LLM Interactions by code

Both the old OpenAI SDK version 0.28.1 and the new 1.* OpenAI SDK versions are supported. Make sure to set the OPENAI_API_KEY environment variable in both .env files, here and here.

#### Install the Vibranium Dome SDK

This step should be done in the agent code; it can be skipped for the server installation.
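
For example (a minimal sketch; the package name matches the demos below, and you may want to pin a specific version):

```
pip3 install vibraniumdome-sdk
```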

#### Create a demo application with the old OpenAI SDK in `main.py`

```
python3 -m venv venv
source venv/bin/activate
pip3 install vibraniumdome-sdk==0.3.0 openai==0.28.1
```
```
import os
import openai

from vibraniumdome_sdk import VibraniumDome

openai.api_key = os.getenv("OPENAI_API_KEY")

# Instrument the OpenAI SDK; use your own agent name here
VibraniumDome.init(app_name="set_your_agent_name_here")


def main():
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Who won the world series in 2020?"},
            {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."},
            {"role": "user", "content": "Where was it played?"},
        ],
        temperature=0,
        request_timeout=5,
        user="user-123456",
        headers={"x-session-id": "abcd-1234-cdef"},
    )

    print(response)


if __name__ == "__main__":
    main()
```

#### Create a demo application with the new OpenAI SDK in `main.py`

```
python3 -m venv venv
source venv/bin/activate
pip3 install vibraniumdome-sdk openai
```

Newer versions of VibraniumDome require setting `VIBRANIUM_DOME_API_KEY`; if it is not provided, the default key defined in the VibraniumDome system is used.
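
For example (a minimal sketch — this assumes the key is read from an environment variable, and the value shown is only a placeholder):

```
export VIBRANIUM_DOME_API_KEY="<your-vibranium-dome-api-key>"
```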

```
import os

from openai import OpenAI
from vibraniumdome_sdk import VibraniumDome

# Instrument the OpenAI SDK; use your own agent name here
VibraniumDome.init(app_name="set_your_agent_name_here")

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
completion = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Tell me a joke about opentelemetry"}],
    user="user-123456",
    extra_headers={"x-session-id": "abcd-1234-cdef"},
)

print(completion.choices[0].message.content)
```

#### Run the demo application

```
python3 main.py
```

Navigate to Vibranium-App to see the results.


You can control the Vibranium Dome service URL by setting the `VIBRANIUM_DOME_BASE_URL` environment variable (default: `http://localhost:5001`).
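
For example, to point the SDK at a non-default deployment (the URL below is only a placeholder):

```
export VIBRANIUM_DOME_BASE_URL="http://your-vibranium-dome-host:5001"
```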