Merge branch 'coggsfl-sashabaranov-master'
coggsflod committed Oct 16, 2023
2 parents c82d5dc + 171cdfe commit 4f0ca08
Showing 26 changed files with 1,304 additions and 90 deletions.
88 changes: 88 additions & 0 deletions CONTRIBUTING.md
@@ -0,0 +1,88 @@
# Contributing Guidelines

## Overview
Thank you for your interest in contributing to the "Go OpenAI" project! By following these guidelines, we hope to ensure that your contributions are made smoothly and efficiently. The Go OpenAI project is licensed under the [Apache 2.0 License](https://github.com/sashabaranov/go-openai/blob/master/LICENSE), and we welcome contributions through GitHub pull requests.

## Reporting Bugs
If you discover a bug, first check the [GitHub Issues page](https://github.com/sashabaranov/go-openai/issues) to see if the issue has already been reported. If you're reporting a new issue, please use the "Bug report" template and provide detailed information about the problem, including steps to reproduce it.

## Suggesting Features
If you want to suggest a new feature or improvement, first check the [GitHub Issues page](https://github.com/sashabaranov/go-openai/issues) to ensure a similar suggestion hasn't already been made. Use the "Feature request" template to provide a detailed description of your suggestion.

## Reporting Vulnerabilities
If you identify a security concern, please use the "Report a security vulnerability" template on the [GitHub Issues page](https://github.com/sashabaranov/go-openai/issues) to share the details. The report will only be viewable by repository maintainers. You will be credited if the advisory is published.

## Questions for Users
If you have questions, please use [StackOverflow](https://stackoverflow.com/) or the [GitHub Discussions page](https://github.com/sashabaranov/go-openai/discussions).

## Contributing Code
There might already be a similar pull request submitted! Please search the existing [pull requests](https://github.com/sashabaranov/go-openai/pulls) before creating a new one.

### Requirements for Merging a Pull Request

The requirements to accept a pull request are as follows:

- Features not provided by the OpenAI API will not be accepted.
- The functionality of the feature must match that of the official OpenAI API.
- All pull requests should be written in Go according to common conventions, formatted with `goimports`, and free of warnings from tools like `golangci-lint`.
- Include tests and ensure all tests pass.
- Maintain test coverage without any reduction.
- All pull requests require approval from at least one Go OpenAI maintainer.

**Note:**
The merging method for pull requests in this repository is squash merge.

### Creating a Pull Request
- Fork the repository.
- Create a new branch and commit your changes.
- Push that branch to GitHub.
- Start a new Pull Request on GitHub. (Please use the pull request template to provide detailed information.)

**Note:**
If your changes introduce breaking changes, please prefix your pull request title with "[BREAKING_CHANGES]".

### Code Style
In this project, we adhere to the standard coding style of Go. Your code should maintain consistency with the rest of the codebase. To achieve this, please format your code using tools like `goimports` and resolve any syntax or style issues with `golangci-lint`.

**Run goimports:**
```
go install golang.org/x/tools/cmd/goimports@latest
```

```
goimports -w .
```

**Run golangci-lint:**
```
go install github.com/golangci/golangci-lint/cmd/golangci-lint@latest
```

```
golangci-lint run --out-format=github-actions
```

### Unit Test
Please create or update tests relevant to your changes. Ensure all tests run successfully to verify that your modifications do not adversely affect other functionalities.

**Run test:**
```
go test -v ./...
```

### Integration Test
Integration tests are run against the production version of the OpenAI API. These tests verify that the library is coded correctly against the actual behavior of the API, and will fail upon any incompatible change in the API.

**Notes:**
These tests send real network traffic to the OpenAI API and may reach rate limits. Temporary network problems may also cause the tests to fail.

**Run integration test:**
```
OPENAI_TOKEN=XXX go test -v -tags=integration ./api_integration_test.go
```

If the `OPENAI_TOKEN` environment variable is not available, integration tests will be skipped.

---

We wholeheartedly welcome your active participation. Let's build an amazing project together!
122 changes: 104 additions & 18 deletions README.md
@@ -10,12 +10,16 @@ This library provides unofficial Go clients for [OpenAI API](https://platform.op
* DALL·E 2
* Whisper

### Installation:
## Installation

```
go get github.com/coggsfl/go-openai
```
Currently, go-openai requires Go version 1.18 or greater.


## Usage

### ChatGPT example usage:

```go
@@ -479,6 +483,62 @@ func main() {
```
</details>

<details>
<summary>Embedding Semantic Similarity</summary>

```go
package main

import (
	"context"
	"log"

	openai "github.com/sashabaranov/go-openai"
)

func main() {
	client := openai.NewClient("your-token")

	// Create an EmbeddingRequest for the user query
	queryReq := openai.EmbeddingRequest{
		Input: []string{"How many chucks would a woodchuck chuck"},
		Model: openai.AdaEmbeddingV2,
	}

	// Create an embedding for the user query
	queryResponse, err := client.CreateEmbeddings(context.Background(), queryReq)
	if err != nil {
		log.Fatal("Error creating query embedding:", err)
	}

	// Create an EmbeddingRequest for the target text
	targetReq := openai.EmbeddingRequest{
		Input: []string{"How many chucks would a woodchuck chuck if the woodchuck could chuck wood"},
		Model: openai.AdaEmbeddingV2,
	}

	// Create an embedding for the target text
	targetResponse, err := client.CreateEmbeddings(context.Background(), targetReq)
	if err != nil {
		log.Fatal("Error creating target embedding:", err)
	}

	// Now that we have the embeddings for the user query and the target text, we
	// can calculate their similarity.
	queryEmbedding := queryResponse.Data[0]
	targetEmbedding := targetResponse.Data[0]

	similarity, err := queryEmbedding.DotProduct(&targetEmbedding)
	if err != nil {
		log.Fatal("Error calculating dot product:", err)
	}

	log.Printf("The similarity score between the query and the target is %f", similarity)
}
```
</details>

<details>
<summary>Azure OpenAI Embeddings</summary>

@@ -666,11 +726,16 @@ func main() {
client := openai.NewClient("your token")
ctx := context.Background()

// create a .jsonl file with your training data
// create a .jsonl file with your training data for conversational model
// {"prompt": "<prompt text>", "completion": "<ideal generated text>"}
// {"prompt": "<prompt text>", "completion": "<ideal generated text>"}
// {"prompt": "<prompt text>", "completion": "<ideal generated text>"}

// chat models are trained using the following file format:
// {"messages": [{"role": "system", "content": "Marv is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "What's the capital of France?"}, {"role": "assistant", "content": "Paris, as if everyone doesn't know that already."}]}
// {"messages": [{"role": "system", "content": "Marv is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "Who wrote 'Romeo and Juliet'?"}, {"role": "assistant", "content": "Oh, just some guy named William Shakespeare. Ever heard of him?"}]}
// {"messages": [{"role": "system", "content": "Marv is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "How far is the Moon from Earth?"}, {"role": "assistant", "content": "Around 384,400 kilometers. Give or take a few, like that really matters."}]}

// you can use openai cli tool to validate the data
// For more info - https://platform.openai.com/docs/guides/fine-tuning

@@ -683,29 +748,29 @@ func main() {
return
}

// create a fine tune job
// create a fine tuning job
// Streams events until the job is done (this often takes minutes, but can take hours if there are many jobs in the queue or your dataset is large)
// use below get method to know the status of your model
tune, err := client.CreateFineTune(ctx, openai.FineTuneRequest{
fineTuningJob, err := client.CreateFineTuningJob(ctx, openai.FineTuningJobRequest{
TrainingFile: file.ID,
Model: "ada", // babbage, curie, davinci, or a fine-tuned model created after 2022-04-21.
Model: "davinci-002", // gpt-3.5-turbo-0613, babbage-002.
})
if err != nil {
fmt.Printf("Creating new fine tune model error: %v\n", err)
return
}

getTune, err := client.GetFineTune(ctx, tune.ID)
fineTuningJob, err = client.RetrieveFineTuningJob(ctx, fineTuningJob.ID)
if err != nil {
fmt.Printf("Getting fine tune model error: %v\n", err)
return
}
fmt.Println(getTune.FineTunedModel)
fmt.Println(fineTuningJob.FineTunedModel)

// once the status of getTune is `succeeded`, you can use your fine tune model in Completion Request
// once the status of fineTuningJob is `succeeded`, you can use your fine tune model in Completion Request or Chat Completion Request

// resp, err := client.CreateCompletion(ctx, openai.CompletionRequest{
// Model: getTune.FineTunedModel,
// Model: fineTuningJob.FineTunedModel,
// Prompt: "your prompt",
// })
// if err != nil {
@@ -719,19 +784,40 @@ func main() {
</details>
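The chat-format `.jsonl` training file described in the comments above can be generated with the standard library alone. This is a sketch under assumed requirements (one JSON object per line); the types and helper name are illustrative, not part of the library:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

// message and example mirror the chat fine-tuning line format:
// {"messages": [{"role": ..., "content": ...}, ...]}
type message struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

type example struct {
	Messages []message `json:"messages"`
}

// writeJSONL encodes one example per line, as the fine-tuning API expects.
func writeJSONL(examples []example) (string, error) {
	var buf bytes.Buffer
	enc := json.NewEncoder(&buf) // Encode appends a trailing newline per value
	for _, ex := range examples {
		if err := enc.Encode(ex); err != nil {
			return "", err
		}
	}
	return buf.String(), nil
}

func main() {
	data, _ := writeJSONL([]example{
		{Messages: []message{
			{Role: "system", Content: "Marv is a factual chatbot that is also sarcastic."},
			{Role: "user", Content: "What's the capital of France?"},
			{Role: "assistant", Content: "Paris, as if everyone doesn't know that already."},
		}},
	})
	fmt.Print(data)
}
```

The resulting string can be written to a file and passed as the `FilePath` when uploading training data.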
See the `examples/` folder for more.

### Integration tests:
## Frequently Asked Questions

Integration tests are run against the production version of the OpenAI API. These tests verify that the library is coded correctly against the actual behavior of the API, and will fail upon any incompatible change in the API.
### Why don't we get the same answer when specifying a temperature field of 0 and asking the same question?

**Notes:**
These tests send real network traffic to the OpenAI API and may reach rate limits. Temporary network problems may also cause the tests to fail.
Even when specifying a temperature field of 0, it doesn't guarantee that you'll always get the same response. Several factors come into play.

**Run tests using:**
```
OPENAI_TOKEN=XXX go test -v -tags=integration ./api_integration_test.go
```
1. Go OpenAI Behavior: When you specify a temperature field of 0 in Go OpenAI, the `omitempty` tag causes that field to be removed from the request. Consequently, the OpenAI API applies the default value of 1.
2. Token Count for Input/Output: If there's a large number of tokens in the input and output, setting the temperature to 0 can still result in non-deterministic behavior. The closer a request gets to roughly 32k tokens, the more likely non-deterministic behavior becomes, even with a temperature of 0.

Due to the factors mentioned above, different answers may be returned even for the same question.

**Workarounds:**
1. Using `math.SmallestNonzeroFloat32`: By specifying `math.SmallestNonzeroFloat32` in the temperature field instead of 0, you can mimic the behavior of setting it to 0.
2. Limiting Token Count: By limiting the number of tokens in the input and output and especially avoiding large requests close to 32k tokens, you can reduce the risk of non-deterministic behavior.

By adopting these strategies, you can expect more consistent results.

**Related Issues:**
[omitempty option of request struct will generate incorrect request when parameter is 0.](https://github.com/sashabaranov/go-openai/issues/9)

### Does Go OpenAI provide a method to count tokens?

No, Go OpenAI does not offer a feature to count tokens, and there are no plans to provide such a feature in the future. However, if there's a way to implement a token counting feature with zero dependencies, it might be possible to merge that feature into Go OpenAI. Otherwise, it would be more appropriate to implement it in a dedicated library or repository.

For counting tokens, you might find the following links helpful:
- [Counting Tokens For Chat API Calls](https://github.com/pkoukk/tiktoken-go#counting-tokens-for-chat-api-calls)
- [How to count tokens with tiktoken](https://github.com/openai/openai-cookbook/blob/main/examples/How_to_count_tokens_with_tiktoken.ipynb)

**Related Issues:**
[Is it possible to join the implementation of GPT3 Tokenizer](https://github.com/sashabaranov/go-openai/issues/62)

## Contributing

If the `OPENAI_TOKEN` environment variable is not available, integration tests will be skipped.
By following the [Contributing Guidelines](https://github.com/sashabaranov/go-openai/blob/master/CONTRIBUTING.md), we hope to ensure that your contributions are made smoothly and efficiently.

## Thank you

21 changes: 19 additions & 2 deletions audio.go
@@ -63,6 +63,21 @@ type AudioResponse struct {
Transient bool `json:"transient"`
} `json:"segments"`
Text string `json:"text"`

httpHeader
}

type audioTextResponse struct {
	Text string `json:"text"`

	httpHeader
}

func (r *audioTextResponse) ToAudioResponse() AudioResponse {
	return AudioResponse{
		Text:       r.Text,
		httpHeader: r.httpHeader,
	}
}

// CreateTranscription — API call to create a transcription. Returns transcribed text.
@@ -102,9 +117,11 @@ func (c *Client) callAudioAPI(
}

if request.HasJSONResponse() {
err = c.sendRequest(ctx, req, &response)
err = c.sendRequest(req, &response)

Check failure on line 120 in audio.go (GitHub Actions / Sanity check): Function `sendRequest->requestImage` should pass the context parameter (contextcheck)
} else {
err = c.sendRequest(ctx, req, &response.Text)
var textResponse audioTextResponse
err = c.sendRequest(req, &textResponse)

Check failure on line 123 in audio.go (GitHub Actions / Sanity check): Function `sendRequest->requestImage` should pass the context parameter (contextcheck)
response = textResponse.ToAudioResponse()
}
if err != nil {
return AudioResponse{}, err
11 changes: 10 additions & 1 deletion chat.go
@@ -114,6 +114,13 @@ const (
FinishReasonNull FinishReason = "null"
)

func (r FinishReason) MarshalJSON() ([]byte, error) {
	if r == FinishReasonNull || r == "" {
		return []byte("null"), nil
	}
	return []byte(`"` + string(r) + `"`), nil // best effort to not break future API changes
}

type ChatCompletionChoice struct {
Index int `json:"index"`
Message ChatCompletionMessage `json:"message"`
@@ -135,6 +142,8 @@ type ChatCompletionResponse struct {
Model string `json:"model"`
Choices []ChatCompletionChoice `json:"choices"`
Usage Usage `json:"usage"`

httpHeader
}

// CreateChatCompletion — API call to Create a completion for the chat message.
@@ -158,6 +167,6 @@ func (c *Client) CreateChatCompletion(
return
}

err = c.sendRequest(ctx, req, &response)
err = c.sendRequest(req, &response)

Check failure on line 170 in chat.go (GitHub Actions / Sanity check): Function `sendRequest->requestImage` should pass the context parameter (contextcheck)
return
}
