feat: DIA-1362: Add custom LLM endpoint testing scripts #210
base: master
Conversation
""" | ||
app = FastAPI() | ||
|
||
TARGET_URL = os.getenv('TARGET_URL') |
Add a check to ensure `TARGET_URL` is set. If not, raise an exception or log an error to prevent runtime issues.
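One way to implement this suggestion is a fail-fast check right after reading the variable. This is only a sketch reusing the variable name from the excerpt above, not the PR's actual code:

```python
import os

# Upstream URL the proxy forwards to; refuse to start if it is missing
# rather than failing later at request time.
TARGET_URL = os.getenv('TARGET_URL')
if not TARGET_URL:
    raise RuntimeError("TARGET_URL environment variable is not set")
```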
```python
df = pd.DataFrame([["I'm happy"], ["I'm sad"], ["I'm neutral"]], columns=["input"])

results = asyncio.run(agent.arun(input=df))
print(results)
```
Add assertions to verify the expected behavior of the agent. Currently, the test only prints results without checking for correctness.
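A sketch of what such assertions could look like; the exact shape of `results` is an assumption here (one entry per input row), so the checks may need adjusting to the agent's real output format:

```python
# Continues the test excerpt above; assumes `results` is sized like the input.
results = asyncio.run(agent.arun(input=df))
assert results is not None
assert len(results) == len(df), "expected one prediction per input row"
```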
This PR adds tests to verify the Custom LLM endpoint connection.
How to create a custom LLM endpoint
- Run `ollama run llama3.1`
- Endpoint URL (with the `/v1` suffix): `http://localhost:11434/v1`
- Model name: `llama3.1`
- API key: `ollama`
- If `localhost` is not available to use, use ngrok or any other proxy server
- If authentication with a `user:pass` string is required, it must be provided as a base64 encoded parameter in the corresponding custom LLM endpoint field. For example, run `echo -n "user:pass" | base64`, and then use the string `Basic <base64 encoded credentials>` (a Python equivalent is sketched after this list).
- To test the authentication flow, use the `auth_proxy_server.py` script, launching it with `TARGET_URL=http://localhost:11434 EXPECTED_HEADER=secret uvicorn auth_proxy_server:app`, and provide `secret` as the Authentication header field in the Custom LLM endpoint connection (a minimal proxy sketch also follows the list).
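For reference, the base64 step above can also be reproduced in Python; `user:pass` is just the placeholder credential pair from the list:

```python
import base64

# Encode the placeholder credentials and build the header value.
credentials = base64.b64encode(b"user:pass").decode()
print(f"Basic {credentials}")  # -> Basic dXNlcjpwYXNz
```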
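The diff excerpt near the top of this page only shows the first lines of `auth_proxy_server.py`. A minimal sketch of the idea, forwarding requests to `TARGET_URL` only when the expected header value is present, might look like the following; the route, header name, and forwarding details are assumptions, not the script's actual code:

```python
import os

import httpx
from fastapi import FastAPI, HTTPException, Request, Response

app = FastAPI()

# Upstream endpoint to forward to and the header value the proxy expects.
TARGET_URL = os.getenv("TARGET_URL")
EXPECTED_HEADER = os.getenv("EXPECTED_HEADER")


@app.api_route("/{path:path}", methods=["GET", "POST"])
async def proxy(path: str, request: Request) -> Response:
    # Assumed header name: reject requests without the expected value.
    if request.headers.get("Authorization") != EXPECTED_HEADER:
        raise HTTPException(status_code=401, detail="missing or invalid credentials")

    # Forward the request body to the target server and relay the response back.
    async with httpx.AsyncClient() as client:
        upstream = await client.request(
            request.method,
            f"{TARGET_URL}/{path}",
            content=await request.body(),
            headers={"Content-Type": request.headers.get("Content-Type", "application/json")},
        )
    return Response(content=upstream.content, status_code=upstream.status_code)
```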