
ez-openai: better openai python library for assistants and function calling #683

Open

irthomasthomas opened this issue Mar 4, 2024 · 2 comments
Labels

  • AI-Chatbots: Topics related to advanced chatbot platforms integrating multiple AI models
  • Code-Interpreter: OpenAI Code-Interpreter
  • data-validation: Validating data structures and formats
  • llm-function-calling: Function Calling with Large Language Models
  • programming-languages: Topics related to programming languages and their features
  • software-engineering: Best practice for software engineering

Comments

@irthomasthomas
Owner

TITLE: skorokithakis/ez-openai: Ez API, ez life

DESCRIPTION:

AGPL-3.0 license

Ez OpenAI

My opinion of the openai Python library is best illustrated by the fact that if you ask ChatGPT about it, it will usually hallucinate a more reasonable API. So, I wrote this library, because if I had to manually poll for a tool update again I would instigate the robot uprising myself.

Installation

Run this somewhere:

pip install ez-openai

Usage

Basic usage

Using Ez OpenAI is (hopefully) straightforward; otherwise, I've failed at the one thing I set out to do:

from ez_openai import Assistant

# To use a previously-created assistant:
ass = Assistant.get("asst_someassistantid")

# To create a new one:
ass = Assistant.create(
    name="Weatherperson",
    instructions="You are a helpful weatherperson.",
)

# You can store the ID for later.
assistant_id = ass.id

# Delete it when you're done.
ass.delete()

Function calling

No more wizardry, just plain Python functions:

from ez_openai import Assistant, openai_function
from enum import Enum

@openai_function(descriptions={
        "city": "The city to get the weather for.",
        "unit": "The temperature unit , either `c` or `f`.",
    })
def get_weather(city: str, unit: Enum("unit", ["c", "f"])):
    """Get the weather for a given city, and in the given unit."""
    # ...do some magic here to get the weather...
    print(f"I'm getting the weather for {city} woooooo")
    return {"temperature": 26, "humidity": "60%"}

ass = Assistant.create(
    name="Weatherperson",
    instructions="You are a helpful weatherperson.",
    functions=[get_weather]
)

# Or, if you already have one, you can fetch it (but still
# need to specify the functions).
ass = Assistant.get("asst_O5ZAsccgOOtgjrcgHhUMloSA", functions=[get_weather])

conversation = ass.conversation.create()

# Similarly, you can store the conversation's ID and fetch it later:
old_conversation = ass.conversation.get(conversation.id)

# The library will handle all the background function calls itself:
conversation.ask("Hi, what's the weather like in Thessaloniki and Athens right now?")
> I'm getting the weather for Thessaloniki woooooo
> I'm getting the weather for Athens woooooo
> "The weather today in both Thessaloniki and Athens is quite similar, with a
   temperature of 26°C and a humidity level at 60%. Enjoy a pleasant and comfortable
   day!"

Because assistants change (e.g. if you want to add some more functions), and it's tedious to create new ones every time, there's a helper method that will update an assistant with new functions/instructions:

from ez_openai import Assistant

ass = Assistant.get_and_modify(
    id="asst_someassistantid",
    name="Weatherperson",
    instructions="These are your new instructions.",
    functions=[get_weather, some_new_function]
)
gg ez

**URL:** [https://github.com/skorokithakis/ez-openai](https://github.com/skorokithakis/ez-openai)

irthomasthomas added the AI-Chatbots, Code-Interpreter, data-validation, llm-function-calling, programming-languages, and software-engineering labels on Mar 4, 2024
@irthomasthomas
Owner Author

Related issues

#396: astra-assistants-api: A backend implementation of the OpenAI beta Assistants API

### Details
Similarity score: 0.89
- [ ] [datastax/astra-assistants-api: A backend implementation of the OpenAI beta Assistants API](https://github.com/datastax/astra-assistants-api)

Astra Assistant API Service

A drop-in compatible service for the OpenAI beta Assistants API with support for persistent threads, files, assistants, messages, retrieval, function calling and more using AstraDB (DataStax's db as a service offering powered by Apache Cassandra and jvector).

Compatible with existing OpenAI apps via the OpenAI SDKs by changing a single line of code.

Getting Started

  1. Create an Astra DB Vector database
  2. Replace the following code:
client = OpenAI(
    api_key=OPENAI_API_KEY,
)

with:

client = OpenAI(
    base_url="https://open-assistant-ai.astra.datastax.com/v1", 
    api_key=OPENAI_API_KEY,
    default_headers={
        "astra-api-token": ASTRA_DB_APPLICATION_TOKEN,
    }
)

Or, if you have an existing Astra DB, you can pass your db_id in a second header:

client = OpenAI(
    base_url="https://open-assistant-ai.astra.datastax.com/v1", 
    api_key=OPENAI_API_KEY,
    default_headers={
        "astra-api-token": ASTRA_DB_APPLICATION_TOKEN,
        "astra-db-id": ASTRA_DB_ID
    }
)
  3. Create an assistant
assistant = client.beta.assistants.create(
  instructions="You are a personal math tutor. When asked a math question, write and run code to answer the question.",
  model="gpt-4-1106-preview",
  tools=[{"type": "retrieval"}]
)

By default, the service uses AstraDB as the database/vector store and OpenAI for embeddings and chat completion.

Third party LLM Support

We now support many third-party models for both embeddings and completion thanks to litellm. Pass the API key of your service using the api-key and embedding-model headers.

For AWS Bedrock, you can pass additional custom headers:

client = OpenAI(
    base_url="https://open-assistant-ai.astra.datastax.com/v1", 
    api_key="NONE",
    default_headers={
        "astra-api-token": ASTRA_DB_APPLICATION_TOKEN,
        "embedding-model": "amazon.titan-embed-text-v1",
        "LLM-PARAM-aws-access-key-id": BEDROCK_AWS_ACCESS_KEY_ID,
        "LLM-PARAM-aws-secret-access-key": BEDROCK_AWS_SECRET_ACCESS_KEY,
        "LLM-PARAM-aws-region-name": BEDROCK_AWS_REGION,
    }
)

And again, specify the custom model for the assistant.

assistant = client.beta.assistants.create(
    name="Math Tutor",
    instructions="You are a personal math tutor. Answer questions briefly, in a sentence or less.",
    model="meta.llama2-13b-chat-v1",
)

Additional examples including third party LLMs (bedrock, cohere, perplexity, etc.) can be found under examples.

To run the examples using poetry:

  1. Create a .env file in this directory with your secrets.
  2. Run:
poetry install
poetry run python examples/completion/basic.py
poetry run python examples/retreival/basic.py
poetry run python examples/function-calling/basic.py

Coverage

See our coverage report here.

Roadmap

  • Support for other embedding models and LLMs
  • Function calling
  • Pluggable RAG strategies
  • Streaming support

Suggested labels

{ "key": "llm-function-calling", "value": "Integration of function calling with Large Language Models (LLMs)" }

#129: Few-shot and function calling - API - OpenAI Developer Forum

### Details
Similarity score: 0.87
- [ ] [Few-shot and function calling - API - OpenAI Developer Forum](https://community.openai.com/t/few-shot-and-function-calling/265908/10)

The thing to understand here is that function calling introduced a new role for the chat prompt messages ("role": "function"). To use few-shot examples with chat model prompts you provide a series of alternating (possibly 'fake') messages that show how the assistant did / should respond to a given user input. With function calling the principle is the same but rather than providing a series of alternating user-assistant example messages, you provide alternating user-function messages.

e.g.

schema = {
    "type": "object",
    "properties": {
        "object_type": {"type": "string"},
        "geometry": {
            "type": "array",
            "items": {
                "type": "number"
            }
        }
    },
    "required": ["object_type", "geometry"]
}

example_response_1 = "{\"object_type\": \"point\", \"geometry\": [2.3, 1.0]}"
example_response_2 = "{\"object_type\": \"line\", \"geometry\": [[1.0, 2.0], [3.0, 4.0]]}"

few_shot_function_calling_example = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=[
        {"role": "system", "content": "You are a system for returning geometric objects in JSON."},
        {"role": "user", "content": "give me a point"},
        {"role": "function", "name": "example_func", "content": example_response_1},
        {"role": "user", "content": "give me a line"},
        {"role": "function", "name": "example_func", "content": example_response_2},
        {"role": "user", "content": "give me a polygon"},
    ],
    functions=[{"name": "example_func", "parameters": schema}],
    function_call={"name": "example_func"},
    temperature=0,
)

print(few_shot_function_calling_example.choices[0].message)

{
  "content": null,
  "function_call": {
    "arguments": "{\"object_type\": \"polygon\", \"geometry\": [[0, 0], [0, 5], [5, 5], [5, 0]]}",
    "name": "example_func"
  },
  "role": "assistant"
}

#132: Examples - OpenAI API

### Details
Similarity score: 0.87
- [ ] [Examples - OpenAI API](https://platform.openai.com/examples/default-time-complexity?lang=curl)
Calculate time complexity

Find the time complexity of a function.
Prompt
SYSTEM
You will be provided with Python code, and your task is to calculate its time complexity.
USER
def foo(n, k):
    accum = 0
    for i in range(n):
        for l in range(k):
            accum += i
    return accum
Sample response
The time complexity of this code is O(n * k), where n is the value of the variable n and k is the value of the variable k. This is because there are two nested loops, one iterating n times and the other iterating k times. The accum += i statement inside the inner loop is executed n * k times, resulting in a time complexity of O(n * k).
API request
curl

curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [
      {"role": "system", "content": "You will be provided with Python code, and your task is to calculate its time complexity."},
      {"role": "user", "content": "def foo(n, k):\n    accum = 0\n    for i in range(n):\n        for l in range(k):\n            accum += i\n    return accum"}
    ],
    "temperature": 0,
    "max_tokens": 256
  }'


#399: openai-python api doc

### Details
Similarity score: 0.86
- [ ] [openai-python/api.md at main · openai/openai-python](https://github.com/openai/openai-python/blob/main/api.md)

Add error handling for failed API requests

Is this a bug or feature request?
Bug

What is the current behavior?
Currently, the application does not handle failed API requests, resulting in a poor user experience and potential loss of data.

What is the expected behavior?
The application should handle failed API requests gracefully, providing clear error messages to the user and, if possible, retrying the request.

What is the impact of this issue?
The lack of error handling can lead to confusion for users when API requests fail, and may cause them to lose data if they are not aware that the request has failed. Additionally, it can make it difficult for developers to diagnose and fix issues with the application.

Possible Solutions:

  1. Implement a global error handler that catches failed API requests and displays an appropriate error message to the user.
  2. If possible, implement a retry mechanism for failed API requests, to increase the chances of success on a subsequent attempt (see the sketch after this list).
  3. Log failed API requests for further analysis and debugging.
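
A minimal sketch of the retry and logging ideas, assuming the v1 openai Python client (APIConnectionError and RateLimitError are its real exception classes; the model name and backoff schedule here are arbitrary choices):

import logging
import time

import openai

client = openai.OpenAI()  # reads OPENAI_API_KEY from the environment
log = logging.getLogger(__name__)

def ask_with_retries(messages, max_attempts=3):
    """Call the chat API, retrying transient failures with backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            return client.chat.completions.create(
                model="gpt-3.5-turbo", messages=messages
            )
        except (openai.APIConnectionError, openai.RateLimitError) as err:
            # Log the failure for later analysis and debugging (solution 3).
            log.warning("Attempt %d/%d failed: %s", attempt, max_attempts, err)
            if attempt == max_attempts:
                raise  # surface a clear error to the caller (solution 1)
            time.sleep(2 ** attempt)  # exponential backoff: 2s, 4s, ...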

Steps to reproduce:

  1. Open the application.
  2. Trigger an API request (e.g. by submitting a form, or refreshing the page).
  3. Disconnect from the internet or otherwise prevent the API request from succeeding.
  4. Observe the lack of error handling and the poor user experience.

Additional context:
This issue has been identified as a priority for improving the reliability and user experience of the application. It is also an important step in ensuring that the application can be easily maintained and debugged by developers.

Suggested labels

{ "key": "ai-platform", "value": "Platforms and tools for implementing AI solutions" }

#638: Announcing function calling and JSON mode

### Details
Similarity score: 0.86
- [ ] [Announcing function calling and JSON mode](https://www.together.ai/blog/function-calling-json-mode)

Announcing function calling and JSON mode

DESCRIPTION:
JANUARY 31, 2024・BY TOGETHER AI
We are excited to introduce JSON mode & function calling on Together Inference! They are designed to provide you with more flexibility and control over your interactions with LLMs. We currently support these features in Mixtral, Mistral, and CodeLlama with more coming soon. In this post, we'll introduce and walk you through how to use JSON mode and function calling through the Together API!

Introduction to JSON mode and function calling

While both JSON mode and function calling can enhance your interaction with LLMs, it's important to understand that they are not interchangeable — they serve different purposes and offer unique benefits. Specifically:

  • JSON mode allows you to specify a JSON schema that will be used by the LLM to output data in this format. This means you can dictate the format and data types of the response, leading to a more structured and predictable output that can suit your specific needs.

  • Function calling enables LLMs to intelligently output a JSON object containing arguments for external functions that are defined. This is particularly useful when there is a need for real-time data access, such as weather updates, product information, or stock market data, or when you want the LLM to be aware of certain functions you’ve defined. It also makes it possible for the LLM to intelligently determine what information to gather from a user if it determines a function should be called. Our endpoint ensures that these function calls align with the prescribed function schema, incorporating necessary arguments with the appropriate data types.

JSON Mode

With JSON mode, you can specify a schema for the output of the LLM. While the OpenAI API does not inherently allow for the specification of a JSON schema, we augmented the response_format argument with schema. When a schema is passed in, we constrain the model to generate output that conforms to it.

Here's an example of how you can use JSON mode with Mixtral:

import os
import json
import openai
from pydantic import BaseModel, Field

# Create client
client = openai.OpenAI(
    base_url = "https://api.together.xyz/v1",
    api_key = os.environ['TOGETHER_API_KEY'],
)

# Define the schema for the output.
class User(BaseModel):
    name: str = Field(description="user name")
    address: str = Field(description="address")
    
# Generate
chat_completion = client.chat.completions.create(
    model="mistralai/Mixtral-8x7B-Instruct-v0.1",
    response_format={
        "type": "json_object", 
        "schema": User.model_json_schema()
    },
    messages=[
        {"role": "system", "content": "You are a helpful assistant that answers in JSON."},
        {"role": "user", "content": "Create a user named Alice, who lives in 42, Wonderland Avenue."}
    ],
)

created_user = json.loads(chat_completion.choices[0].message.content)
print(json.dumps(created_user, indent=2))

In this example, we define a schema for a User object that contains their name and address. The LLM then generates a response that matches this schema, providing a structured JSON object that we can use directly in our application in a deterministic way.

The expected output of this example is:

{
  "address": "42, Wonderland Avenue",
  "name": "Alice"
}
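
Because the schema came from a Pydantic model, you can round-trip the response through the same model to validate it. A small follow-on sketch (model_validate is Pydantic v2 and raises ValidationError on a mismatch):

# Validate the generated JSON against the model the schema came from.
user = User.model_validate(created_user)
print(user.name, "-", user.address)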

More Examples:

  • Array and Optional argument
  • Nested data types (see the sketch below)

For more detailed information, check out our documentation on JSON mode.
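
For the nested case the mechanism is identical; here is a brief sketch with hypothetical models:

from typing import List
from pydantic import BaseModel, Field

class Address(BaseModel):
    street: str = Field(description="street name and number")
    city: str = Field(description="city name")

class UserWithAddresses(BaseModel):
    name: str = Field(description="user name")
    addresses: List[Address] = Field(description="the user's known addresses")

# Passed exactly like the flat schema above:
#   response_format={"type": "json_object",
#                    "schema": UserWithAddresses.model_json_schema()}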

Function Calling

With function calling, the LLM will output a JSON object containing arguments for the external functions you define. Once the functions are defined, the LLM will intelligently determine whether one needs to be invoked and, if so, will suggest the appropriate function with the correct parameters in a JSON object. You can then execute the call within your application and relay the response back to the LLM to continue working.

Let's illustrate this process with a simple example: creating a chatbot that has access to weather data. The function is defined in tools:

import os
import json
import openai

# Create client
client = openai.OpenAI(
    base_url = "https://api.together.xyz/v1",
    api_key = os.environ['TOGETHER_API_KEY'],
)

# Define function(s)
tools = [
  {
    "type": "function",
    "function": {
      "name": "get_current_weather",
      "description": "Get the current weather in a given location",
      "parameters": {
        "type": "object",
        "properties": {
          "location": {
            "type": "string",
            "description": "The city and state, e.g. San Francisco, CA"
          },
          "unit": {
            "type": "string",
            "enum": [
              "celsius",
              "fahrenheit"
            ]
          }
        }
      }
    }
  }
]
    
# Generate
response = client.chat.completions.create(
    model="mistralai/Mixtral-8x7B-Instruct-v0.1",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the current temperature of New York?"}
    ],
    tools=tools,
    tool_choice="auto",
)

print(json.dumps(response.choices[0].message.dict()['tool_calls'], indent=2))

In this example, we define an external function that gets the current weather in a given location. We then use this function in our chat completion request. The AI model generates a response that includes calls to this function, providing real-time weather data for the requested locations. The expected output is:

[
  {
    "id": "...",
    "function": {
      "arguments": "{\"location\":\"New York\",\"unit\":\"fahrenheit\"}",
      "name": "get_current_weather"
    },
    "type": "function"
  }
]
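
The snippet above stops at the suggested call. To complete the loop the post describes, you execute the function in your own code and relay its output back to the model. A sketch, assuming get_current_weather is actually implemented in your application and reusing client and response from the example:

# Execute the suggested call locally, then relay the result back.
tool_call = response.choices[0].message.tool_calls[0]
args = json.loads(tool_call.function.arguments)
result = get_current_weather(**args)  # your real weather lookup

followup = client.chat.completions.create(
    model="mistralai/Mixtral-8x7B-Instruct-v0.1",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the current temperature of New York?"},
        response.choices[0].message,  # the assistant's suggested tool call
        {"role": "tool",
         "tool_call_id": tool_call.id,
         "content": json.dumps(result)},
    ],
)
print(followup.choices[0].message.content)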

More Examples:

  • Parallel function calling
  • No function calling
  • Multi-turn example

For more detailed information, check out our documentation on function calling.

Conclusion

We believe that JSON mode and function calling are a significant step forward, bringing a new level of versatility and functionality to AI applications. By enabling a more structured interaction with the model and allowing for specific types of outputs and behaviors, we're confident that it will be a valuable tool for developers.

We can't wait to see what you build on Together AI! For more info, check out our function calling and JSON mode docs.

Suggested labels

{'label-name': 'JSON-structure', 'label-description': 'Describes JSON schema usage and generation for structured data output in AI interactions.', 'gh-repo': 'knowledge-repo', 'confidence': 53.09}

#305: Home - LibreChat

### Details
Similarity score: 0.86
- [ ] [Home - LibreChat](https://docs.librechat.ai/index.html)


LibreChat

🪶 Features

🖥️ UI matching ChatGPT, including Dark mode, Streaming, and 11-2023 updates
💬 Multimodal Chat:
Upload and analyze images with GPT-4 and Gemini Vision 📸
More filetypes and Assistants API integration in Active Development 🚧
🌎 Multilingual UI:
English, 中文, Deutsch, Español, Français, Italiano, Polski, Português Brasileiro, Русский
日本語, Svenska, 한국어, Tiếng Việt, 繁體中文, العربية, Türkçe, Nederlands
🤖 AI model selection: OpenAI API, Azure, BingAI, ChatGPT, Google Vertex AI, Anthropic (Claude), Plugins
💾 Create, Save, & Share Custom Presets
🔄 Edit, Resubmit, and Continue messages with conversation branching
📤 Export conversations as screenshots, markdown, text, json.
🔍 Search all messages/conversations
🔌 Plugins, including web access, image generation with DALL-E-3 and more
👥 Multi-User, Secure Authentication with Moderation and Token spend tools
⚙️ Configure Proxy, Reverse Proxy, Docker, many Deployment options, and completely Open-Source
📃 All-In-One AI Conversations with LibreChat

LibreChat brings together the future of assistant AIs with the revolutionary technology of OpenAI's ChatGPT. Celebrating the original styling, LibreChat gives you the ability to integrate multiple AI models. It also integrates and enhances original client features such as conversation and message search, prompt templates and plugins.

With LibreChat, you no longer need to opt for ChatGPT Plus and can instead use free or pay-per-call APIs. We welcome contributions, cloning, and forking to enhance the capabilities of this advanced chatbot platform.

Suggested labels

"ai-platform"

irthomasthomas changed the title from "skorokithakis/ez-openai: Ez API, ez life" to "ez-openai: better openai python library for assistants and function calling" on Mar 4, 2024
@irthomasthomas
Owner Author

Related content

#683 - Similarity score: 1.0

#396 - Similarity score: 0.89

#129 - Similarity score: 0.88

#506 - Similarity score: 0.87

#305 - Similarity score: 0.87

#132 - Similarity score: 0.86
