
[ENHANCEMENT] Groups and Subgroups support #4532

Closed
taoari opened this issue Sep 5, 2024 · 1 comment
Labels: enhancement (New feature or request), triage (issues that need triage)

Comments


taoari commented Sep 5, 2024

Is your feature request related to a problem? Please describe.

In the following example, I make two OpenAI calls, which show up as two separate traces. It would be great if I could programmatically group the two calls into a single trace, or make one the parent of the other.

Sorry, I am new to Phoenix; this feature might already be supported, but I could not find it described anywhere in the documentation.

import os
os.environ['PHOENIX_PROJECT_NAME'] = 'my-llm-app'

import phoenix as px
from phoenix.trace.openai import OpenAIInstrumentor

# Initialize OpenAI auto-instrumentation
OpenAIInstrumentor().instrument()

from openai import OpenAI

# Initialize an OpenAI client
client = OpenAI(api_key=os.environ['OPENAI_API_KEY'])

## 1.

# Define a conversation with a user message
conversation = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello, can you help me with something?"}
]

# Generate a response from the assistant
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=conversation,
)

# Extract and print the assistant's reply
# The traces for the messages above will be available in the Phoenix app
assistant_reply = response.choices[0].message.content
print(assistant_reply)

## 2.

conversation.append({"role": "assistant", "content": assistant_reply})
conversation.append({"role": "user", "content": "are you a robot?"})
# Generate a response from the assistant
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=conversation,
)

# Extract and print the assistant's reply
# The traces for the messages above will be available in the Phoenix app
assistant_reply = response.choices[0].message.content
print(assistant_reply)

Describe the solution you'd like

Describe alternatives you've considered

Additional context

@taoari added the enhancement (New feature or request) and triage (issues that need triage) labels Sep 5, 2024
@github-project-automation github-project-automation bot moved this to 📘 Todo in phoenix Sep 5, 2024
@axiomofjoy (Contributor) commented:

Hey @taoari 👋 Thanks for filing this issue. I believe this is a duplicate of #2619. Closing, please feel free to add details to the linked issue.

@github-project-automation github-project-automation bot moved this from 📘 Todo to ✅ Done in phoenix Sep 6, 2024