
feat: Expose all parameters #7

Merged
merged 8 commits on Mar 17, 2023
70 changes: 29 additions & 41 deletions README.md
@@ -22,10 +22,9 @@ interactable commands you can use, and those are mostly examples.*
- [OpenAI.el](#openaiel)
- [📚 Documentation](#📚-documentation)
- [🔨 Usage](#🔨-usage)
- [The simplest example](#the-simplest-example)
- [📝 Customization](#📝-customization)
- [🔰 The simplest example](#🔰-the-simplest-example)
- [📨 Sending Request](#📨-sending-request)
- [📢 API functions](#📢-api-functions)
- [🖥 Setting Model](#🖥-setting-model)
- [🔗 References](#🔗-references)
- [Contribute](#contribute)

@@ -39,7 +38,19 @@ You will need to set up your API key before you can use this library.
(setq openai-key "[YOUR API KEY]")
```

### The simplest example
For requests that need your user identifier,

```elisp
(setq openai-user "[YOUR USER UID]")
```

> 💡 Tip
>
> The two variables `openai-key` and `openai-user` are the default values
> used when sending requests! However, you can still override them per
> request by passing the keywords `:key` and `:user`!
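
For example, a single request can carry its own credentials by passing the
keywords directly (a sketch; the placeholder strings are illustrative):

```elisp
;; Keyword arguments take precedence over the global
;; `openai-key' and `openai-user' for this request only.
(openai-completion "Hello!"
                   (lambda (data)
                     (message "%s" data))
                   :key "[ANOTHER API KEY]"    ; illustrative placeholder
                   :user "[ANOTHER USER UID]") ; illustrative placeholder
```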

### 🔰 The simplest example

Here is the simplest example that teaches you how to use this library. This is
a function with a `query` and a callback function.
@@ -50,37 +61,25 @@ a function with a `query` and a callback function.
(message "%s" data)))
```

### 📝 Customization
### 📨 Sending Request

Most arguments are extracted (except the required ones) as global variables.
For example, one variable `openai-completon-n` is defined in `openai-completion.el`
file. That variable is used for the completion request, for more information see
https://beta.openai.com/docs/api-reference/completions. The naming convention is
by the following pattern:
Most arguments are exposed in the argument list (except the required ones).

```
[PACKAGE NAME]-[API TYPE]-[NAME OF THE ARGUMENT]
```
For example, the request function `openai-completion` accepts the argument
`max-tokens`. From OpenAI's API reference:

For example:
> `max_tokens` integer Optional Defaults to 16
>
> The maximum number of tokens to generate in the completion.
>
> The token count of your prompt plus `max_tokens` cannot exceed the model's
> context length. Most models have a context length of 2048 tokens (except for
> the newest models, which support 4096).

```elisp
(setq openai-edit-temperature 1)
```

- `openai` - is the package name
- `edit` - is the api type, see [OpenAI API reference](https://platform.openai.com/docs/api-reference/introduction)
- `temperature` - is the argument for the [Edit](https://platform.openai.com/docs/api-reference/edits) request.

You can change the model for a single request without changing its global value.

```elisp
(let ((openai-edit-model "text-davinci-edit-001") ; use another model for this request,
(openai-edit-n 3)) ; and i want three outcomes
(openai-edit-create "What day of the wek is it?"
"Fix the spelling mistakes"
(lambda (data)
(message "%s" data))))
(openai-completion ...
...
:max-tokens 4096) ; max out tokens!
```

### 📢 API functions
@@ -101,17 +100,6 @@ For example:
- `file` - is the api type, see [OpenAI API reference](https://platform.openai.com/docs/api-reference/introduction)
- `list` - is the request name
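
Following that pattern, a call would look like the sketch below (the exact
signature of `openai-file-list` is an assumption; check its docstring):

```elisp
;; `openai' (package) + `file' (API type) + `list' (request name)
(openai-file-list (lambda (data)
                    (message "%s" data)))
```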

### 🖥 Setting Model

You can also choose which model you want to use by going to the
[api](https://api.openai.com/v1/models) website and looking at the id's.
For code usage you probably want something that starts with `code-` whereas
with more text related files you'll likely want something starting with `text-`.

```elisp
(setq openai-completion-model "NAME-HERE")
```

## 🔗 References

- [CodeGPT](https://marketplace.visualstudio.com/items?itemName=timkmecl.codegpt3)
189 changes: 62 additions & 127 deletions openai-completion.el
@@ -28,139 +28,62 @@

(require 'openai)

(defcustom openai-completon-model "text-davinci-003"
"ID of the model to use.

You can use the List models API to see all of your available models."
:type 'string
:group 'openai)

(defcustom openai-completon-suffix nil
"The suffix that comes after a completion of inserted text."
:type 'string
:group 'openai)

(defcustom openai-completon-max-tokens 4000
"The maximum number of tokens to generate in the completion.

The token count of your prompt plus max_tokens cannot exceed the model's context
length. Most models have a context length of 2048 tokens (except for the newest
models, which support 4096)."
:type 'integer
:group 'openai)

(defcustom openai-completon-temperature 1.0
"What sampling temperature to use.

Higher values means the model will take more risks. Try 0.9 for more creative
applications, and 0 (argmax sampling) for ones with a well-defined answer."
:type 'number
:group 'openai)

(defcustom openai-completon-top-p 1.0
"An alternative to sampling with temperature, called nucleus sampling, where
the model considers the results of the tokens with top_p probability mass.
So 0.1 means only the tokens comprising the top 10% probability mass are
considered.

We generally recommend altering this or `temperature' but not both."
:type 'number
:group 'openai)

(defcustom openai-completon-n 1
"How many completions to generate for each prompt."
:type 'integer
:group 'openai)

(defcustom openai-completon-stream nil
"Whether to stream back partial progress.

If set, tokens will be sent as data-only server-sent events as they become
available, with the stream terminated by a data: [DONE] message."
:type 'boolean
:group 'openai)

(defcustom openai-completon-logprobs nil
"Include the log probabilities on the logprobs most likely tokens, as well the
chosen tokens. For example, if logprobs is 5, the API will return a list of the
5 most likely tokens. The API will always return the logprob of the sampled
token, so there may be up to logprobs+1 elements in the response.

The maximum value for logprobs is 5."
:type 'integer
:group 'openai)

(defcustom openai-completon-echo nil
"Echo back the prompt in addition to the completion."
:type 'boolean
:group 'openai)

(defcustom openai-completon-stop nil
"Up to 4 sequences where the API will stop generating further tokens.
The returned text will not contain the stop sequence."
:type 'string
:group 'openai)

(defcustom openai-completon-presence-penalty 0
"Number between -2.0 and 2.0. Positive values penalize new tokens based on
whether they appear in the text so far, increasing the model's likelihood to
talk about new topics."
:type 'number
:group 'openai)

(defcustom openai-completon-frequency-penalty 0
"Number between -2.0 and 2.0.

Positive values penalize new tokens based on their existing frequency in the
text so far, decreasing the model's likelihood to repeat the same line verbatim."
:type 'number
:group 'openai)

(defcustom openai-completon-best-of 1
"Generates best_of completions server-side and returns the \"best\" (the one
with the highest log probability per token). Results cannot be streamed.

When used with `n', `best_of' controls the number of candidate completions and
`n' specifies how many to return – `best_of' must be greater than `n'."
:type 'integer
:group 'openai)

(defcustom openai-completon-logit-bias nil
"Modify the likelihood of specified tokens appearing in the completion."
:type 'list
:group 'openai)

;;
;;; API

;;;###autoload
(defun openai-completion (query callback)
"Query OpenAI with QUERY.

Argument CALLBACK is a function that receives one argument, which is the JSON data."
(cl-defun openai-completion ( prompt callback
&key
(key openai-key)
(model "text-davinci-003")
suffix
max-tokens
temperature
top-p
n
stream
logprobs
echo
stop
presence-penalty
frequency-penalty
best-of
logit-bias
(user openai-user))
"Send completion request.

Arguments PROMPT and CALLBACK are required for this type of request. PROMPT is
either the question or the instruction to OpenAI. CALLBACK is the function to
execute after the request is made.

Arguments KEY and USER are global options; however, you can override their
values by passing them in.

The rest of the arguments are optional; please see the OpenAI API reference
page for more information. Arguments here refer to MODEL, SUFFIX, MAX-TOKENS,
TEMPERATURE, TOP-P, N, STREAM, LOGPROBS, ECHO, STOP, PRESENCE-PENALTY,
FREQUENCY-PENALTY, BEST-OF, and LOGIT-BIAS."
(openai-request "https://api.openai.com/v1/completions"
:type "POST"
:headers `(("Content-Type" . "application/json")
("Authorization" . ,(concat "Bearer " openai-key)))
:data (json-encode
`(("model" . ,openai-completon-model)
("prompt" . ,query)
("suffix" . ,openai-completon-suffix)
("max_tokens" . ,openai-completon-max-tokens)
("temperature" . ,openai-completon-temperature)
("top_p" . ,openai-completon-top-p)
("n" . ,openai-completon-n)
;;("stream" . ,(if openai-completon-stream "true" "false"))
("logprobs" . ,openai-completon-logprobs)
;;("echo" . ,(if openai-completon-echo "true" "false"))
;;("stop" . ,openai-completon-stop)
;;("presence_penalty" . ,openai-completon-presence-penalty)
;;("frequency_penalty" . ,openai-completon-frequency-penalty)
;;("best_of" . ,openai-completon-best-of)
;;("logit_bias" . ,(if (listp openai-completon-logit-bias)
;; (json-encode openai-completon-logit-bias)
;; openai-completon-logit-bias))
("user" . ,openai-user)))
("Authorization" . ,(concat "Bearer " key)))
:data (openai--json-encode
`(("model" . ,model)
("prompt" . ,prompt)
("suffix" . ,suffix)
("max_tokens" . ,max-tokens)
("temperature" . ,temperature)
("top_p" . ,top-p)
("n" . ,n)
("stream" . ,stream)
("logprobs" . ,logprobs)
("echo" . ,echo)
("stop" . ,stop)
("presence_penalty" . ,presence-penalty)
("frequency_penalty" . ,frequency-penalty)
("best_of" . ,best-of)
("logit_bias" . ,logit-bias)
("user" . ,user)))
:parser 'json-read
:success (cl-function
(lambda (&key data &allow-other-keys)
@@ -169,6 +92,16 @@ Argument CALLBACK is a function that receives one argument, which is the JSON data."
;;
;;; Application

(defcustom openai-completion-max-tokens 4000
"The maximum number of tokens to generate in the completion."
:type 'integer
:group 'openai)

(defcustom openai-completion-temperature 1.0
"What sampling temperature to use."
:type 'number
:group 'openai)
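
These application-level defaults can be adjusted like any other user option,
e.g. (the values here are only illustrative):

```elisp
(setq openai-completion-max-tokens 256     ; shorter completions
      openai-completion-temperature 0.5)   ; less random sampling
```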

;;;###autoload
(defun openai-completion-select-insert (start end)
"Send the region to OpenAI and insert the result to the next paragraph.
@@ -197,7 +130,9 @@ START and END are selected region boundaries.
(fill-region original-point (point))
;; Highlight the region!
(call-interactively #'set-mark-command)
(goto-char (1+ original-point))))))))
(goto-char (1+ original-point)))))
:max-tokens openai-completion-max-tokens
:temperature openai-completion-temperature)))

;;;###autoload
(defun openai-completion-buffer-insert ()