chore: update cookbooks (#37)
* fix: update ab-testing cookbook

* fix: context relevancy -> context utilization

* fix: create a dataset cookbook

* fix: evaluate agent runs

* fix: distributed tracing

* fix: evaluate user satisfaction cookbook

* fix: langgraph example cookbook

* fix: RAG LlamaIndex cookbook

* fix: monitor conversational ai agent

* fix: multimodal conversational ai cookbook

* fix: add example image

* fix: relevancy -> utilization

* fix: swap LLMs

* fix: literalai version

* fix: version literalai

* fix: typescript cookbooks versions & co

* fix: add experiment comparison
desaxce authored Nov 13, 2024
1 parent 42a192f commit d79587a
Showing 66 changed files with 9,104 additions and 6,618 deletions.
3 changes: 3 additions & 0 deletions .gitignore
@@ -5,5 +5,8 @@
**/dist
**/node_modules

+# Uvicorn CPython
+**/*.pyc
+
.env
.python-version
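
The added pattern keeps compiled CPython bytecode out of the repository. It can be verified with git's ignore matcher; the path below is hypothetical, any `*.pyc` under the repo should now match:

```bash
git check-ignore -v python/ab-testing-client-side/__pycache__/app.cpython-311.pyc
```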
14 changes: 7 additions & 7 deletions README.md
@@ -14,16 +14,16 @@ For more information, find the full documentation [here](https://docs.getliteral

| Name | Category | Description |
| -------------------------------------------------------------------------------------------------- | ---------------------------------- | ------------------------------------------------------------------------------------------------------------------------ |
-| [Context Relevancy with Ragas](/python/context-relevancy-ragas/)                                    | Evaluation                         | Build a RAG application and evaluate this with RAGAS based on context relevancy.                                           |
+| [Context Utilization with Ragas](/python/context-utilization-ragas/)                                | Evaluation                         | Build a RAG application and evaluate it with RAGAS based on context utilization.                                           |
| [Evaluate User Satisfaction - Customer Support Conversations](/python/evaluate-user-satisfaction/) | Evaluation | Retrieve your Customer Support Conversations from Literal AI and evaluate user satisfaction on this conversational data. |
| [LlamaIndex Integration](/python/llamaindex-integration/) | Observability | Build a Q&A application with LLamaIndex and monitor it with Literal AI. |
| [Evaluate Agent Runs with Tools](/python/evaluate-agent-runs/) | Observability (Tools) & Evaluation | Build a simple agent which can use two tools. Monitor and evaluate the tool usage. |
| [A/B Testing Client-Side](/python/ab-testing-client-side/) | Evaluation | Build two prompts, randomly assign to new conversations and A/B test on a metric. |
-| [Create a Dataset](/python/create-a-dataset/)                                                       | Evaluation                         | Create a Literal AI Dataset from the SDK                                                                                   |
-| [Distributed Tracing](/python/distributed-tracing/)                                                 | Observability                      | Trace a distributed (TS and Py) service                                                                                    |
-| [Monitor a Conversational AI agent](/python/monitor-conversational-ai-agent/)                       | Observability                      | Monitor a Conversational AI agent, built in FastAPI                                                                        |
-| [Monitor a Multimodal chatbot](/python/multimodal-conversational-ai/)                               | Observability                      | Monitor a multimodal Conversational AI agent, built with OpenAI                                                            |
-| [LangGraph example](/python/langgraph-example/)                                                     | Observability                      | Example with LangGraph : a graph flow with tool use                                                                        |
+| [Create a Dataset](/python/create-a-dataset/)                                                       | Evaluation                         | Create a Literal AI Dataset from the SDK                                                                                   |
+| [Distributed Tracing](/python/distributed-tracing/)                                                 | Observability                      | Trace a distributed (TS and Py) service                                                                                    |
+| [Monitor a Conversational AI agent](/python/monitor-conversational-ai-agent/)                       | Observability                      | Monitor a Conversational AI agent, built in FastAPI                                                                        |
+| [Monitor a Multimodal chatbot](/python/multimodal-conversational-ai/)                               | Observability                      | Monitor a multimodal Conversational AI agent, built with OpenAI                                                            |
+| [LangGraph example](/python/langgraph-example/)                                                     | Observability                      | Example with LangGraph : a graph flow with tool use                                                                        |

### TypeScript

@@ -35,6 +35,6 @@ For more information, find the full documentation [here](https://docs.getliteral
| [Chatbot using Next.js, OpenAI and Literal AI](/typescript/nextjs-openai/) | Observablity | Create a personalized and monitored chatbot with OpenAI, Next.js and Literal AI. |
| [Chatbot using Vercel ai SDK and Literal AI](/typescript/vercel-ai-sdk/) | Observablity | Create a personalized and monitored chatbot with Vercel ai SDK and Literal AI. |
| [Simple RAG using LanceDB, OpenAI and Literal AI](/typescript/lancedb-rag) | Observability | Create Simple RAG on Youtube Transcripts stored using LanceDB |
-| [Speech-to-Emoji: Next.js app to summarize audio with OpenAI Whisper, GPT-4o and Literal AI](/typescript/speech-to-emoji) | Observability | Create a simple web app that transcribes and summarizes audio using OpenAI Whisper, GPT-4 and Literal AI. |
+| [Speech-to-Emoji: Next.js app to summarize audio with OpenAI Whisper, GPT-4o and Literal AI](/typescript/speech-to-emoji) | Observability | Create a simple web app that transcribes and summarizes audio using OpenAI Whisper, GPT-4o and Literal AI. |
| [Interactive map with a copilot chat bot](/typescript/leaflet-interactive-map/) | Observability | A world map with a chat bot that is context aware and react to the user position on the map |
| [LangChain and LangGraph examples](/typescript/langchain-langgraph/) | Observability | Three examples with LangChain/LangGraph : a basic RAG, a graph flow with tool use, and a multi-agent flow |
1 change: 0 additions & 1 deletion python/ab-testing-client-side/.env.example
@@ -1,3 +1,2 @@
OPENAI_API_KEY=
LITERAL_API_KEY=
-LITERAL_API_URL=
11 changes: 7 additions & 4 deletions python/ab-testing-client-side/README.md
@@ -1,15 +1,18 @@
# Client-side A/B testing two prompts

-In this notebook, you will learn how to do a simple A/B test between two different prompts, using Literal AI's Thread observability but running the test client-side.
+In this notebook, you will learn how to do a simple A/B test between two different prompts,
+using Literal AI's Thread observability and running the test client-side.

## Setup

-To install dependencies, run:
+To install dependencies, run:

```bash
pip install -r requirements.txt
-```
+```

Create and set your Literal AI and OpenAI API keys in `.env`:

```bash
cp .env.example .env
-```
+```
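
For orientation, here is a minimal client-side sketch of the A/B assignment step this notebook builds on: a variant is chosen at random per conversation and used as the system prompt. The prompt texts and model name are hypothetical, and the Literal AI Thread tagging the cookbook performs is only indicated in a comment.

```python
import random

from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()  # expects OPENAI_API_KEY (and LITERAL_API_KEY) as in .env.example

openai_client = OpenAI()

# Two hypothetical prompt variants to compare; in the notebook the prompts
# are managed in Literal AI, here they are hard-coded for brevity.
PROMPTS = {
    "A": "You are a concise assistant. Answer in one sentence.",
    "B": "You are a friendly assistant. Answer with a short explanation.",
}


def answer(question: str) -> tuple[str, str]:
    """Randomly assign a prompt variant to this conversation and answer with it."""
    variant = random.choice(list(PROMPTS))
    completion = openai_client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name, swap for the one you use
        messages=[
            {"role": "system", "content": PROMPTS[variant]},
            {"role": "user", "content": question},
        ],
    )
    # In the cookbook, the Thread logged to Literal AI is tagged with the
    # chosen variant so conversations can later be scored per variant.
    return variant, completion.choices[0].message.content


if __name__ == "__main__":
    variant, reply = answer("What is A/B testing?")
    print(f"[variant {variant}] {reply}")
```

Running it a few times sends conversations to either variant; the notebook then aggregates a metric per variant to pick the A/B winner.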
95 changes: 49 additions & 46 deletions python/ab-testing-client-side/ab-testing-client-side.ipynb

Large diffs are not rendered by default.

Binary file modified python/ab-testing-client-side/img/threads.png
2 changes: 1 addition & 1 deletion python/ab-testing-client-side/requirements.txt
@@ -1,4 +1,4 @@
python-dotenv>=1.0.1
-literalai>=0.0.503
+literalai>=0.1.0
openai
matplotlib
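
To pick up the new lower bound locally, reinstalling from the updated requirements file (or upgrading the SDK directly) is enough, for example:

```bash
pip install -r requirements.txt
# or, for just the SDK
pip install -U "literalai>=0.1.0"
```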