Add dashboard to documentation (#585)
* dashboard docs

* Update azure.mdx

* Update anyscale.mdx

* Update project.mdx

* Update prompts.mdx

---------

Co-authored-by: Dhruv Chawla <43818888+Dominastorm@users.noreply.github.com>
shrjain1312 and Dominastorm authored Mar 5, 2024
1 parent 4c24609 commit 737b82f
Showing 15 changed files with 253 additions and 1 deletion.
Binary file added docs/assets/dashboard/dashboard_home.png
Binary file added docs/assets/dashboard/dashboard_project1.png
Binary file added docs/assets/dashboard/eval.png
Binary file added docs/assets/dashboard/eval_logs.png
Binary file added docs/assets/dashboard/eval_select_metrics.png
Binary file added docs/assets/dashboard/prompt.png
Binary file added docs/assets/dashboard/prompt_select.png
69 changes: 69 additions & 0 deletions docs/dashboard/evaluations.mdx
@@ -0,0 +1,69 @@
---
title: Evaluations
---

### What are Evaluations?

Using UpTrain, you can run evaluations on 20+ pre-configured metrics, such as:
1. [Context Relevance](/predefined-evaluations/context-awareness/context-relevance): Evaluates how relevant the retrieved context is to the question asked.

2. [Factual Accuracy](/predefined-evaluations/context-awareness/factual-accuracy): Evaluates whether the generated response is factually correct and grounded in the provided context.

3. [Response Completeness](/predefined-evaluations/response-quality/response-completeness): Evaluates whether the response answers all aspects of the question asked.

You can find the complete list of UpTrain's supported metrics [here](/predefined-evaluations/overview).
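
If you'd rather script these checks than use the dashboard, here is a minimal sketch using UpTrain's Python client (the `Settings`/`EvalLLM` usage mirrors the Supported LLMs pages; the data values and API key are placeholders):

```python
from uptrain import EvalLLM, Evals, Settings

# Same schema as the dashboard dataset: question, response, context.
data = [{
    "question": "What is UpTrain?",
    "response": "UpTrain is an open-source tool to evaluate LLM applications.",
    "context": "UpTrain is an open-source platform to evaluate and improve LLM applications.",
}]

settings = Settings(model="gpt-3.5-turbo", openai_api_key="sk-********")  # your key here
eval_llm = EvalLLM(settings)

# Run the three metrics listed above.
results = eval_llm.evaluate(
    data=data,
    checks=[Evals.CONTEXT_RELEVANCE, Evals.FACTUAL_ACCURACY, Evals.RESPONSE_COMPLETENESS],
)
print(results)
```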

### How does it work?

<Steps>
<Step title = "Create a new Project">
Click on `Create New Project` from Home
<Frame>
<img src="/assets/dashboard/dashboard_home.png" />
</Frame>
</Step>
<Step title = "Enter Project Information">
<Frame>
<img src="/assets/dashboard/dashboard_project1.png" />
</Frame>
* `Project name:` Create a name for your project
* `Dataset name:` Create a name for your dataset
* `Project Type:` Select `Evaluations`
* `Choose File:` Upload your Dataset
Sample Dataset:
```jsonl
{"question":"","response":"","context":""}
{"question":"","response":"","context":""}
```
* `Evaluation LLM:` Select an LLM to run evaluations
</Step>
<Step title = "Select Evaluations to Run">
<Frame>
<img src="/assets/dashboard/eval_select_metrics.png" />
</Frame>
</Step>
<Step title = "View Evaluations">
You can see all the evaluations run using UpTrain.
<Frame>
<img src="/assets/dashboard/eval.png" />
</Frame>

You can also see individual logs.
<Frame>
<img src="/assets/dashboard/eval_logs.png" />
</Frame>
</Step>
</Steps>
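
For reference, a populated row of the sample dataset above might look like this (values are purely illustrative):

```jsonl
{"question":"What is UpTrain?","response":"UpTrain is an open-source tool to evaluate LLM applications.","context":"UpTrain is an open-source platform to evaluate and improve LLM applications."}
```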

<CardGroup cols={1}>
<Card
title="Have Questions?"
href="https://join.slack.com/t/uptraincommunity/shared_invite/zt-1yih3aojn-CEoR_gAh6PDSknhFmuaJeg"
icon="slack"
color="#808080"
>
Join our community for any questions or requests
</Card>

</CardGroup>

38 changes: 38 additions & 0 deletions docs/dashboard/getting_started.mdx
@@ -0,0 +1,38 @@
---
title: Getting Started
---

### What is UpTrain Dashboard?

The UpTrain dashboard is a web-based interface that allows you to evaluate your LLM applications.

It is a self-hosted dashboard that runs on your local machine, and you don't need to write any code to use it.

You can use the dashboard to evaluate your LLM applications, view the results, manage prompts, run experiments, and perform root cause analysis.

<Note>Before you start, ensure you have Docker installed on your machine. If not, you can install it from [here](https://docs.docker.com/get-docker/).</Note>
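
You can quickly confirm that Docker is available from a terminal:

```bash
# Prints the installed version if Docker is set up correctly
docker --version
```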

### How to install?

The following commands will download the UpTrain dashboard and start it on your local machine:
```bash
# Clone the repository
git clone https://github.com/uptrain-ai/uptrain
cd uptrain

# Run UpTrain
bash run_uptrain.sh
```

<CardGroup cols={1}>
<Card
title="Have Questions?"
href="https://join.slack.com/t/uptraincommunity/shared_invite/zt-1yih3aojn-CEoR_gAh6PDSknhFmuaJeg"
icon="slack"
color="#808080"
>
Join our community for any questions or requests
</Card>

</CardGroup>

52 changes: 52 additions & 0 deletions docs/dashboard/project.mdx
@@ -0,0 +1,52 @@
---
title: Create a Project
---

### What is a Project?

Using the UpTrain Dashboard, you can manage all your projects.

We support two types of projects:
* **[Evaluations](/dashboard/evaluations):** Run evaluations on your queries, documents, and LLM responses
* **[Prompts](/dashboard/prompts):** Find the best way to ask questions to your LLM using prompt iteration, experimentation, and evaluations

### How does it work?

<Steps>
<Step title = "Create a new Project">
Click on `Create New Project` from Home
<Frame>
<img src="/assets/dashboard/dashboard_home.png" />
</Frame>
</Step>
<Step title = "Enter Project Information">
* `Project name:` Create a name for your project
* `Dataset name:` Create a name for your dataset
* `Project Type:` Choose between `Evaluations` and `Prompts`
* `Choose File:` Upload your Dataset
Sample Dataset:
```jsonl
{"question":"", "response":"", "context":""}
{"question":"", "response":"", "context":""}
```
* `Evaluation LLM:` Select an LLM to run evaluations
<Frame>
<img src="/assets/dashboard/dashboard_project1.png" />
</Frame>
</Step>
</Steps>

Now that you have created a project, you can run evaluations or experiment with prompts.

<CardGroup cols={1}>
<Card
title="Have Questions?"
href="https://join.slack.com/t/uptraincommunity/shared_invite/zt-1yih3aojn-CEoR_gAh6PDSknhFmuaJeg"
icon="slack"
color="#808080"
>
Join our community for any questions or requests
</Card>

</CardGroup>

69 changes: 69 additions & 0 deletions docs/dashboard/prompts.mdx
@@ -0,0 +1,69 @@
---
title: Prompts
---

### What are Prompts?

Using UpTrain, you can manage your prompt iterations and experiment with them across 20+ pre-configured evaluation metrics, such as:
1. [Context Relevance](/predefined-evaluations/context-awareness/context-relevance): Evaluates how relevant the retrieved context is to the question asked.

2. [Factual Accuracy](/predefined-evaluations/context-awareness/factual-accuracy): Evaluates whether the generated response is factually correct and grounded in the provided context.

3. [Response Completeness](/predefined-evaluations/response-quality/response-completeness): Evaluates whether the response answers all aspects of the question asked.

You can find the complete list of UpTrain's supported metrics [here](/predefined-evaluations/overview).
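
To get a feel for what prompt experimentation measures, here is a rough sketch that scores the responses produced by two prompt variants on the same metric and compares them (the responses, the API key, and the `score_response_completeness` result field are assumptions based on UpTrain's Python client):

```python
from uptrain import EvalLLM, Evals, Settings

question = "What is UpTrain?"
context = "UpTrain is an open-source platform to evaluate and improve LLM applications."

# Hypothetical responses generated by two different prompt versions.
candidates = {
    "prompt_v1": "UpTrain evaluates LLM apps.",
    "prompt_v2": "UpTrain is an open-source platform for evaluating and improving LLM applications.",
}

eval_llm = EvalLLM(Settings(model="gpt-3.5-turbo", openai_api_key="sk-********"))

for name, response in candidates.items():
    result = eval_llm.evaluate(
        data=[{"question": question, "context": context, "response": response}],
        checks=[Evals.RESPONSE_COMPLETENESS],
    )
    # Assumed result field; each check typically adds a score_* column.
    print(name, result[0]["score_response_completeness"])
```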

### How does it work?

<Steps>
<Step title = "Create a new Project">
Click on `Create New Project` from Home
<Frame>
<img src="/assets/dashboard/dashboard_home.png" />
</Frame>
</Step>
<Step title = "Enter Project Information">
<Frame>
<img src="/assets/dashboard/dashboard_project1.png" />
</Frame>
* `Project name:` Create a name for your project
* `Dataset name:` Create a name for your dataset
* `Project Type:` Select `Prompts`
* `Choose File:` Upload your Dataset
Sample Dataset:
```jsonl
{"question":"","response":"","context":""}
{"question":"","response":"","context":""}
```
* `Evaluation LLM:` Select an LLM to run evaluations
</Step>
<Step title = "Enter your Prompt">
<Frame>
<img src="/assets/dashboard/prompt_select.png" />
</Frame>
</Step>
<Step title = "Select Evaluations to Run">
<Frame>
<img src="/assets/dashboard/eval_select_metrics.png" />
</Frame>
</Step>
<Step title = "View Prompts">
You can see all the evaluations run on your prompts using UpTrain.
<Frame>
<img src="/assets/dashboard/prompt.png" />
</Frame>
</Step>
</Steps>

<CardGroup cols={1}>
<Card
title="Have Questions?"
href="https://join.slack.com/t/uptraincommunity/shared_invite/zt-1yih3aojn-CEoR_gAh6PDSknhFmuaJeg"
icon="slack"
color="#808080"
>
Join our community for any questions or requests
</Card>

</CardGroup>

5 changes: 5 additions & 0 deletions docs/llms/anyscale.mdx
@@ -23,6 +23,11 @@ ANYSCALE_API_KEY = "esecret_***********************"

settings = Settings(model='anyscale/mistralai/Mistral-7B-Instruct-v0.1', anyscale_api_key=ANYSCALE_API_KEY)
```
<Note>
The model name should start with `anyscale/` for UpTrain to recognize that you are using a model hosted on Anyscale.

For example, if you are using `mistralai/Mistral-7B-Instruct-v0.1` via Anyscale, the model name should be `anyscale/mistralai/Mistral-7B-Instruct-v0.1`.
</Note>

We have used Mistral-7B-Instruct-v0.1 for this example. You can find a full list of available models [here](https://docs.endpoints.anyscale.com/category/supported-models).

5 changes: 5 additions & 0 deletions docs/llms/azure.mdx
@@ -43,6 +43,11 @@ You can use your Azure API key to run LLM evaluations using UpTrain.
settings = Settings(model = 'azure/*', azure_api_key=AZURE_API_KEY, azure_api_version=AZURE_API_VERSION, azure_api_base=AZURE_API_BASE)
eval_llm = EvalLLM(settings)
```
<Note>
The model name should start with `azure/` for UpTrain to recognize that you are using a model hosted on Azure.

For example, if you are using `gpt-35-turbo` via Azure, the model name should be `azure/gpt-35-turbo`.
</Note>
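Putting the note into practice, a concrete configuration might look like this (assuming `gpt-35-turbo` is the name of your Azure deployment; the `AZURE_*` variables are the same ones defined above):

```python
settings = Settings(
    model='azure/gpt-35-turbo',  # replace with your own deployment name
    azure_api_key=AZURE_API_KEY,
    azure_api_version=AZURE_API_VERSION,
    azure_api_base=AZURE_API_BASE,
)
eval_llm = EvalLLM(settings)
```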
</Step>

<Step title="Evaluate data using UpTrain">
6 changes: 5 additions & 1 deletion docs/llms/mistral.mdx
@@ -41,7 +41,11 @@ You can use your Mistral API key to run LLM evaluations using UpTrain.
settings = Settings(model = 'mistral/mistral-tiny', mistral_api_key=MISTRAL_API_KEY)
eval_llm = EvalLLM(settings)
```
We use GPT-3.5 Turbo by default; you can use any other OpenAI model as well
<Note>
The model name should start with `mistral/` for UpTrain to recognize that you are using Mistral.

For example, if you are using `mistral-tiny`, the model name should be `mistral/mistral-tiny`.
</Note>
</Step>

<Step title="Evaluate data using UpTrain">
10 changes: 10 additions & 0 deletions docs/mint.json
@@ -156,6 +156,16 @@
}
]
},
{
"group": "Dashboard",
"version": "v1",
"pages": [
"dashboard/getting_started",
"dashboard/project",
"dashboard/evaluations",
"dashboard/prompts"
]
},
{
"group": "Supported LLMs",
"version": "v1",
