
Azure OpenAI support in 2.0.0? #303

Open
bw-Deejee opened this issue Oct 15, 2024 · 2 comments

Comments

@bw-Deejee

How do I get Azure OpenAI API to work in 2.0.0?

@bw-Deejee
Author

I actually got Azure OpenAI to work 🎉. It's not the cleanest solution, but here it is for whoever might find it interesting:

1. Create a new generator file in "goldenverba > components > generation" called AzureOpenAIGenerator.py, containing the following code (replace the placeholders with your URL/key):

import os
from dotenv import load_dotenv
from goldenverba.components.interfaces import Generator
from goldenverba.components.types import InputConfig
from goldenverba.components.util import get_environment
import httpx
import json

load_dotenv()


class AzureOpenAIGenerator(Generator):
    """
    Azure OpenAI Generator.
    """

    def __init__(self):
        super().__init__()
        self.name = "AzureOpenAI"
        self.description = "Using Azure OpenAI LLM models to generate answers to queries"
        self.context_window = 10000

        models = ["gpt-4o", "gpt-3.5-turbo"]

        self.config["Model"] = InputConfig(
            type="dropdown",
            value=models[0],
            description="Select an Azure OpenAI Model",
            values=models,
        )

        if os.getenv("AZURE_OPENAI_API_KEY") is None:
            self.config["API Key"] = InputConfig(
                type="password",
                value="<ADD YOUR AZURE API KEY HERE>",
                description="You can set your Azure OpenAI API Key here or set it as environment variable `AZURE_OPENAI_API_KEY`",
                values=[],
            )
        if os.getenv("AZURE_OPENAI_BASE_URL") is None:
            self.config["URL"] = InputConfig(
                type="text",
                value="https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME",
                description="You can change the Base URL here if needed",
                values=[],
            )

    async def generate_stream(
        self,
        config: dict,
        query: str,
        context: str,
        conversation: list[dict] = [],
    ):
        system_message = config.get("System Message").value
        model = config.get("Model", {"value": "gpt-3.5-turbo"}).value

        azure_key = get_environment(
            config, "API Key", "AZURE_OPENAI_API_KEY", "No Azure OpenAI API Key found"
        )
        azure_url = get_environment(
            config, "URL", "AZURE_OPENAI_BASE_URL", "https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME"
        )

        messages = self.prepare_messages(query, context, conversation, system_message)
        
        headers = {
            "Content-Type": "application/json",
            "api-key": azure_key,
        }
        data = {
            "messages": messages,
            "model": model,
            "stream": True,
        }

        async with httpx.AsyncClient() as client:
            async with client.stream(
                "POST",
                f"{azure_url}/chat/completions?api-version=2023-03-15-preview",
                json=data,
                headers=headers,
                timeout=None,
            ) as response:
                async for line in response.aiter_lines():
                    # Server-sent events: payload lines start with "data: ";
                    # skip keep-alive and other non-data lines.
                    if not line.startswith("data: "):
                        continue
                    if line.strip() == "data: [DONE]":
                        break
                    json_line = json.loads(line[6:])
                    choice = json_line["choices"][0]
                    if "delta" in choice and "content" in choice["delta"]:
                        yield {
                            "message": choice["delta"]["content"],
                            "finish_reason": choice.get("finish_reason"),
                        }
                    elif "finish_reason" in choice:
                        yield {
                            "message": "",
                            "finish_reason": choice["finish_reason"],
                        }


    def prepare_messages(
        self, query: str, context: str, conversation: list[dict], system_message: str
    ) -> list[dict]:
        messages = [
            {
                "role": "system",
                "content": system_message,
            }
        ]

        for message in conversation:
            messages.append({"role": message.type, "content": message.content})

        messages.append(
            {
                "role": "user",
                "content": f"Answer this query: '{query}' with this provided context: {context}",
            }
        )

        return messages
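
The streaming logic in generate_stream can be exercised in isolation if you want to sanity-check it. Here is a minimal sketch of the same server-sent-events parsing, with made-up sample lines (no Azure account or httpx call involved):

```python
import json

def parse_sse_line(line: str):
    """Parse one SSE line from a chat completions stream.
    Returns the text delta, or None for non-data, empty, or [DONE] lines."""
    if not line.startswith("data: "):
        return None
    payload = line[6:].strip()
    if payload == "[DONE]":
        return None
    choice = json.loads(payload)["choices"][0]
    return choice.get("delta", {}).get("content")

# Sample lines shaped like the Azure OpenAI streaming response
sample = [
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    'data: {"choices": [{"delta": {"content": "lo"}}]}',
    "data: [DONE]",
]
print("".join(c for line in sample if (c := parse_sse_line(line))))  # → Hello
```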

2. Add the new generator to the manager in "goldenverba > components > managers.py":

Add the import around line 68:

    from goldenverba.components.generation.AzureOpenAIGenerator import AzureOpenAIGenerator

Also add it to the generators list around line 110:

    generators = [
        OllamaGenerator(),
        OpenAIGenerator(),
        AnthropicGenerator(),
        CohereGenerator(),
        AzureOpenAIGenerator(),
    ]

and to the list around line 137:

    generators = [
        OpenAIGenerator(),
        AnthropicGenerator(),
        CohereGenerator(),
        AzureOpenAIGenerator(),
    ]

3. If you are (like me) using this behind a proxy, modify the code in "goldenverba > components > managers.py" around line 1215:

        import httpx

        async with httpx.AsyncClient(proxy="http://YOUR_PROXYSERVER:YOUR_PORT") as client:
            async for result in self.generators[generator].generate_stream(
                generator_config, query, context, conversation
            ):
                yield result

Don't ask me exactly how this works. I tried adding the proxy to the AsyncClient() in AzureOpenAIGenerator.py, but only the solution above worked for me.
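
For what it's worth, httpx honors the standard proxy environment variables by default (trust_env=True), so an untested alternative to patching managers.py might be to export the proxy before launching Verba:

```python
import os

# Untested assumption: httpx picks up HTTPS_PROXY/HTTP_PROXY when
# trust_env is True (the default), so setting this before Verba creates
# its AsyncClient may route the Azure calls through the proxy without
# any code changes. The proxy URL below is a placeholder.
os.environ["HTTPS_PROXY"] = "http://YOUR_PROXYSERVER:YOUR_PORT"
```

Equivalently, `export HTTPS_PROXY=http://YOUR_PROXYSERVER:YOUR_PORT` in the shell before starting Verba.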

@bw-Deejee bw-Deejee reopened this Oct 15, 2024
@thomashacker
Collaborator

Oh, thanks a lot! Nice work.
Would you be interested in creating a PR and adding Azure functionality to Verba? 🚀

@thomashacker added the Community PR label and removed the enhancement (New feature or request) label on Dec 7, 2024