
Add custom server proxy support #81

Merged: 7 commits, Jul 11, 2023
Conversation

astrokin (Contributor) commented May 1, 2023

To follow OpenAI's requirements:

Remember that your API key is a secret! Do not share it with others or expose it in any client-side code (browsers, apps). Production requests must be routed through your own backend server where your API key can be securely loaded from an environment variable or key management service.

and basic operational safety, you should not call the OpenAI API directly from client-side code.
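Even during development, when you may still be calling OpenAI directly, the key should come from the environment rather than from source code. A minimal sketch (the `OPENAI_API_KEY` variable name is a convention assumed here, not something OpenAISwift requires):

```swift
import Foundation

// Load the secret from the environment instead of embedding it in the binary.
guard let apiKey = ProcessInfo.processInfo.environment["OPENAI_API_KEY"] else {
    fatalError("Set OPENAI_API_KEY before running")
}

// A direct request, for development only; production traffic should go
// through your own proxy server as described above.
var request = URLRequest(url: URL(string: "https://api.openai.com/v1/chat/completions")!)
request.httpMethod = "POST"
request.setValue("Bearer \(apiKey)", forHTTPHeaderField: "Authorization")
```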

With this PR you can create the OpenAISwift class with a custom Config that points to your own proxy server.

Usage example:

```swift
let openAISwift = OpenAISwift(
    config: OpenAISwift.Config(
        baseURL: "https://api.openai.myproxy.com",
        endpointProvider: OpenAIEndpointProvider(source: .proxy(path: { api -> String in
            switch api {
            case .completions:
                return "/completions"
            case .edits:
                return "/edits"
            case .chat:
                return "/chat/completions"
            case .images:
                return "/images/generations"
            case .embeddings:
                return "/embeddings"
            }
        }, method: { api -> String in
            return "POST"
        })),
        session: URLSession.shared,
        authorizeRequest: { request in
            // Here you can apply custom authorization to each request sent to
            // your server, e.g. add an "X-App-Token", "X-Encryption-Token",
            // or any other header your proxy expects.
            request.setValue("<super_secure_token>", forHTTPHeaderField: "X-App-Token")
        }))
```
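As a variation, the `authorizeRequest` hook can derive per-request credentials instead of attaching a static token. A hedged sketch, assuming the closure receives an `inout URLRequest` as in the example above; the HMAC scheme is hypothetical and your proxy would have to verify it:

```swift
import Foundation
import CryptoKit

// Hypothetical: sign each request body plus a timestamp with a secret shared
// with the proxy, so a captured header value cannot be replayed indefinitely.
let sharedSecret = SymmetricKey(data: Data("replace-with-real-secret".utf8))

let authorize: (inout URLRequest) -> Void = { request in
    let body = request.httpBody ?? Data()
    let timestamp = String(Int(Date().timeIntervalSince1970))
    let mac = HMAC<SHA256>.authenticationCode(for: body + Data(timestamp.utf8),
                                              using: sharedSecret)
    request.setValue(timestamp, forHTTPHeaderField: "X-Timestamp")
    request.setValue(Data(mac).map { String(format: "%02x", $0) }.joined(),
                     forHTTPHeaderField: "X-Signature")
}
```

The closure could then be passed as the `authorizeRequest` argument in place of the static-token closure shown above.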

The goal of these changes was to support a custom server with minimal code changes, following the existing logic and architecture.

* original: (35 commits)
  Update README.md
  Make completionTokens optional
  [ChatError] Fixed implementation
  add api for Embeddings
  Improve README docs
  Fixed typos
  Added support for other custom models
  update readme
  add image generation api
  optionals
  async/await
  Correct types
  Improve documentation and add logit_bias
  fix spaces
  change `UsageResult` to conform to  `Codable` instead of `Payload`
  Make the session configurable through a Config struct
  Update README.md
  rename `UsageResult` property names as camel case
  Add `usage` to `OpenAI` payload struct
  Chat error handling.
  ...
This reverts commit 44e2975.
astrokin added 2 commits June 12, 2023 16:07
* original:
  sendModeration endpoint
  added gpt4 enum
  Added CocoaPods install doc in README.md
  Added OpenAISwift.podspec

# Conflicts:
#	Sources/OpenAISwift/OpenAIEndpoint.swift
@adamrushy adamrushy merged commit 2cac42a into adamrushy:main Jul 11, 2023
@adamrushy (Owner) commented:

Nice job 👏🏼
