
_OpenAI(APIBaseModel) add default_headers in client_kwargs #2447

Merged 1 commit into superduper-io:main on Sep 10, 2024

Conversation

mirkenstein
Contributor

Added `default_headers` as an available key in `client_kwargs`

Description

Sometimes custom OpenAI-compatible endpoints require specific headers.

The OpenAI library supports this via the `default_headers` argument, but it is not available in the superduper adapter.

This was tested by connecting to a local TGI server that serves an OpenAI-compatible API.

from superduper_openai import OpenAIChatCompletion

llm = OpenAIChatCompletion(
    identifier="llm",
    model='meta-llama/Meta-Llama-3.1-8B-Instruct',
    client_kwargs={
        # All keys here are forwarded to the underlying OpenAI client.
        "base_url": "http://127.0.0.1:8080/v1",
        "api_key": "NA",
        # New: extra headers attached to every request.
        "default_headers": {"Content-Type": "application/json"},
    },
)
print(llm.predict("Tell me a joke"))
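
If the endpoint additionally requires authentication headers, the same key applies. A variant of the example above; the bearer token is a placeholder, not something from this PR:

from superduper_openai import OpenAIChatCompletion

llm = OpenAIChatCompletion(
    identifier="llm",
    model='meta-llama/Meta-Llama-3.1-8B-Instruct',
    client_kwargs={
        "base_url": "http://127.0.0.1:8080/v1",
        "api_key": "NA",
        # Placeholder token; supply whatever header the endpoint expects.
        "default_headers": {"Authorization": "Bearer <token>"},
    },
)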

It does not break existing functionality.
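
For context, a minimal sketch of what the change amounts to: `client_kwargs` is forwarded wholesale to the OpenAI client constructor, so any keyword that `openai.OpenAI(...)` accepts, including `default_headers`, works. The class and attribute names below are illustrative assumptions, not the actual superduper source:

import openai

class _OpenAISketch:
    """Illustration only: client_kwargs is handed straight to openai.OpenAI."""

    def __init__(self, model, client_kwargs=None):
        self.model = model
        self.client_kwargs = client_kwargs or {}
        # default_headers (like base_url and api_key) is accepted here;
        # the headers are then attached to every request the client sends.
        self.client = openai.OpenAI(**self.client_kwargs)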

Related Issues

NA

Checklist

  • Is this code covered by new or existing unit tests or integration tests?
  • Did you run make unit_testing and make integration-testing successfully?
  • Do new classes, functions, methods and parameters all have docstrings?
  • Were existing docstrings updated, if necessary?
  • Was external documentation updated, if necessary?

Additional Notes or Comments

Added `default_headers` as available key in `client_kwargs`
@blythed
Collaborator

blythed commented Sep 10, 2024

Nice PR - thanks!

blythed merged commit 502694a into superduper-io:main on Sep 10, 2024
2 checks passed