【share】Custom Provider py file for OPENROUTER #296

ccsen opened this issue Jul 26, 2024 · 0 comments

ccsen commented Jul 26, 2024

You can add any model you want to use to MODELDICT and to the models list in the @provider decorator, and remember to add your API key (see the example after the code below):

from chainforge.providers import provider
from openai import OpenAI

# Add your OpenRouter API key here.
OPENROUTER_API_KEY = "your_key"

# OpenRouter exposes an OpenAI-compatible API, so the OpenAI client is reused with a custom base URL.
client = OpenAI(
  base_url="https://openrouter.ai/api/v1",
  api_key=OPENROUTER_API_KEY)

# JSON schemas to pass to react-jsonschema-form: one for this provider's settings and one to describe the settings UI.
OPENROUTER_SETTINGS_SCHEMA = {
  "settings": {
    "temperature": {
      "type": "number",
      "title": "temperature",
      "description": "Controls the 'creativity' or randomness of the response.",
      "default": 0.75,
      "minimum": 0,
      "maximum": 2.0,
      "multipleOf": 0.01,
    },
    "max_tokens": {
      "type": "integer",
      "title": "max_tokens",
      "description": "Maximum number of tokens to generate in the response.",
      "default": 8024,
      "minimum": 1,
      "maximum": 320123,
    },
  },
  "ui": {
    "temperature": {
      "ui:help": "Defaults to 1.0.",
      "ui:widget": "range"
    },
    "max_tokens": {
      "ui:help": "Defaults to 100.",
      "ui:widget": "range"
    },
  }
}



# Map the short model names shown in ChainForge to OpenRouter model IDs.
MODELDICT = {
    'llama-3.1-8b-instruct':'meta-llama/llama-3.1-8b-instruct',
    'llama-3.1-70b-instruct':'meta-llama/llama-3.1-70b-instruct',
    'llama-3.1-405b-instruct':'meta-llama/llama-3.1-405b-instruct',
    'gpt-4o-mini':'openai/gpt-4o-mini',
    'gpt-4o':'openai/gpt-4o',
    'gpt-4-turbo':'openai/gpt-4-turbo',
    'mistral-nemo':'mistralai/mistral-nemo',
    'gemma-2-27b-it':'google/gemma-2-27b-it',
    'gemma-2-9b-it':'google/gemma-2-9b-it',
    'gemini-flash-1.5':'google/gemini-flash-1.5',
    'gemini-pro-1.5':'google/gemini-pro-1.5',
    'deepseek-chat':'deepseek/deepseek-chat',
    'deepseek-coder': 'deepseek/deepseek-coder',
    'claude-3.5-sonnet':'anthropic/claude-3.5-sonnet',
    'qwen-2-72b-instruct':'qwen/qwen-2-72b-instruct',
    'qwen-72b-chat':'qwen/qwen-72b-chat',
    'qwen-110b-chat':'qwen/qwen-110b-chat',
    'llama-3-70b-instruct': 'meta-llama/llama-3-70b-instruct',
    'llama-3-8b-instruct': 'meta-llama/llama-3-8b-instruct',
    'auto': 'openrouter/auto'
}

# Our custom model provider for the OpenRouter text generation API.
@provider(name="OpenRouter",
          emoji="🚀",
          models=[
              'llama-3.1-8b-instruct',
              'llama-3.1-70b-instruct',
              'llama-3.1-405b-instruct',
              'gpt-4o-mini',
              'gpt-4o',
              'gpt-4-turbo',
              'mistral-nemo',
              'gemma-2-27b-it',
              'gemma-2-9b-it',
              'gemini-flash-1.5',
              'gemini-pro-1.5',
              'deepseek-chat',
              'deepseek-coder',
              'claude-3.5-sonnet',
              'qwen-2-72b-instruct',
              'qwen-72b-chat',
              'qwen-110b-chat',
              'llama-3-70b-instruct',
              'llama-3-8b-instruct',
              'auto',
          ],
          rate_limit="60",  # enter "sequential" for blocking; an integer N > 0 means at most N requests per minute.
          settings_schema=OPENROUTER_SETTINGS_SCHEMA)
def OpenRouterCompletion(prompt: str, model: str, temperature: float, **kwargs) -> str:
    # Look up the OpenRouter model ID and forward the prompt plus settings.
    completion = client.chat.completions.create(
        model=MODELDICT[model],
        temperature=temperature,
        max_tokens=kwargs.get("max_tokens"),  # forwarded from the settings schema when provided
        messages=[
            {"role": "user", "content": prompt},
        ],
    )
    return completion.choices[0].message.content
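
As noted above, adding a model means adding a matching entry in two places: a key/value pair in MODELDICT (short name to OpenRouter model ID, copied from openrouter.ai/models) and the same short name in the models list of the @provider decorator. If you want to sanity-check the file outside of ChainForge, a minimal sketch like the one below should work, assuming your key is filled in and 'gpt-4o-mini' is still listed in MODELDICT (the prompt and settings are just placeholders):

if __name__ == "__main__":
    # Quick manual test of the provider function, run outside of ChainForge.
    reply = OpenRouterCompletion(
        prompt="Say hello in one short sentence.",
        model="gpt-4o-mini",
        temperature=0.75,
        max_tokens=64,
    )
    print(reply)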