
[feat] add gpt-4o-mini and set as default #220

Merged · 3 commits · Jul 22, 2024
2 changes: 1 addition & 1 deletion DESCRIPTION
@@ -1,7 +1,7 @@
Type: Package
Package: gptstudio
Title: Use Large Language Models Directly in your Development Environment
-Version: 0.4.0.9001
+Version: 0.4.0.9002
Authors@R: c(
person("Michel", "Nivard", , "m.g.nivard@vu.nl", role = c("aut", "cph")),
person("James", "Wade", , "github@jameshwade.com", role = c("aut", "cre", "cph"),
1 change: 1 addition & 0 deletions NEWS.md
@@ -2,6 +2,7 @@

 - Fixed a bug that showed the message "ChatGPT responded" even when another service was being used in "Chat in source" related addins. #213
 - Added claude-3.5-sonnet model from Anthropic.
+- Set gpt-4o-mini as the default model for OpenAI. #219

## gptstudio 0.4.0

4 changes: 2 additions & 2 deletions R/api_skeletons.R
@@ -137,7 +137,7 @@ new_gptstudio_request_skeleton_google <- function(
new_gptstudio_request_skeleton_azure_openai <- function(
url = "user provided with environmental variables",
api_key = Sys.getenv("AZURE_OPENAI_KEY"),
-    model = "gpt-3.5-turbo",
+    model = "gpt-4o-mini",
prompt = "What is a ggplot?",
history = list(
list(
@@ -249,7 +249,7 @@ gptstudio_create_skeleton <- function(service = "openai",
)
),
stream = TRUE,
-    model = "gpt-3.5-turbo",
+    model = "gpt-4o-mini",
...) {
switch(service,
"openai" = new_gptstudio_request_skeleton_openai(
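The two hunks above change only the default value of the `model` argument. A minimal standalone sketch of the pattern (a hypothetical `create_skeleton()`, not the package's full implementation) shows the effect: callers that omit `model` now get gpt-4o-mini, while an explicit choice still wins.

```r
# Hypothetical, simplified sketch of the dispatch in gptstudio_create_skeleton():
# a switch() over the service name, with the model defaulting to "gpt-4o-mini".
create_skeleton <- function(service = "openai", model = "gpt-4o-mini") {
  switch(service,
    "openai" = list(service = "openai", model = model),
    "azure_openai" = list(service = "azure_openai", model = model),
    stop("unknown service: ", service)  # unnamed last arg = switch() default
  )
}

create_skeleton()$model                  # "gpt-4o-mini" (new default)
create_skeleton(model = "gpt-4")$model   # "gpt-4" (explicit choice wins)
```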
2 changes: 1 addition & 1 deletion R/models.R
@@ -41,7 +41,7 @@ list_available_models.openai <- function(service) {
stringr::str_subset("vision", negate = TRUE) %>%
sort()

-  idx <- which(models == "gpt-3.5-turbo")
+  idx <- which(models == "gpt-4o-mini")
models <- c(models[idx], models[-idx])
return(models)
}
4 changes: 2 additions & 2 deletions R/service-openai_streaming.R
@@ -8,7 +8,7 @@
#' @param element_callback A callback function to handle each element
#' of the streamed response (optional).
#' @param model A character string specifying the model to use for chat completion.
-#'   The default model is "gpt-3.5-turbo".
+#'   The default model is "gpt-4o-mini".
#' @param openai_api_key A character string of the OpenAI API key.
#' By default, it is fetched from the "OPENAI_API_KEY" environment variable.
#' Please note that the OpenAI API key is sensitive information and should be
@@ -17,7 +17,7 @@
stream_chat_completion <-
function(messages = NULL,
element_callback = cat,
-           model = "gpt-3.5-turbo",
+           model = "gpt-4o-mini",
openai_api_key = Sys.getenv("OPENAI_API_KEY")) {
# Set the API endpoint URL
url <- paste0(getOption("gptstudio.openai_url"), "/chat/completions")
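Aside from the new default, note that `stream_chat_completion()` builds its endpoint from the `gptstudio.openai_url` option, which keeps the base URL configurable (e.g. for proxies or OpenAI-compatible gateways). A runnable sketch of just that line; the option value shown is the standard OpenAI base URL and is an assumption, not something set by this diff:

```r
# Endpoint construction as in stream_chat_completion(); the base URL set
# here is only an example value for the "gptstudio.openai_url" option.
options(gptstudio.openai_url = "https://api.openai.com/v1")
url <- paste0(getOption("gptstudio.openai_url"), "/chat/completions")
url  # "https://api.openai.com/v1/chat/completions"
```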
2 changes: 1 addition & 1 deletion man/gptstudio_create_skeleton.Rd

Some generated files are not rendered by default.

4 changes: 2 additions & 2 deletions man/stream_chat_completion.Rd


8 changes: 5 additions & 3 deletions tests/testthat/test-models.R
Collaborator:
Thanks for updating the tests

@@ -16,10 +16,11 @@ test_that("get_available_models works for openai", {
   service <- "openai"
   models <- get_available_models(service)
   expect_equal(models, c(
+    "gpt-4o-mini",
     "gpt-3.5-turbo", "gpt-3.5-turbo-0125", "gpt-3.5-turbo-1106",
     "gpt-3.5-turbo-16k", "gpt-4", "gpt-4-0125-preview", "gpt-4-0613",
     "gpt-4-1106-preview", "gpt-4-turbo", "gpt-4-turbo-2024-04-09",
-    "gpt-4-turbo-preview", "gpt-4o", "gpt-4o-2024-05-13"
+    "gpt-4-turbo-preview", "gpt-4o", "gpt-4o-2024-05-13", "gpt-4o-mini-2024-07-18"
   ))
})

@@ -73,8 +74,9 @@ test_that("get_available_models works for cohere", {
   service <- "cohere"
   models <- get_available_models(service)
   expect_equal(models, c(
-    "command-r", "command-nightly", "command-r-plus", "c4ai-aya-23",
-    "command-light-nightly", "command", "command-light"
+    "command-r", "command-r-plus", "c4ai-aya-23",
+    "command-light-nightly", "command-nightly",
+    "command", "command-light"
   ))
})
