Enhancement: GPT-4 Turbo #1149

Closed
Originalimoc opened this issue Nov 7, 2023 · 2 comments
Labels
enhancement New feature or request

Comments

@Originalimoc

Contact Details

No response

What features would you like to see added?

New model:

New GPT-4 Turbo: We announced GPT-4 Turbo, our most advanced model. It offers a 128K context window and knowledge of world events up to April 2023. We’ve reduced pricing for GPT-4 Turbo considerably: input tokens are now priced at $0.01/1K and output tokens at $0.03/1K, making it 3x and 2x cheaper respectively compared to the previous GPT-4 pricing. We’ve improved function calling, including the ability to call multiple functions in a single message, to always return valid functions with JSON mode, and improved accuracy on returning the right function parameters. Model outputs are more deterministic with our new reproducible outputs beta feature. You can access GPT-4 Turbo by passing gpt-4-1106-preview in the API, with a stable production-ready model release planned later this year.
Updated GPT-3.5 Turbo: The new gpt-3.5-turbo-1106 supports a 16K context window by default, and that 4x longer context is available at lower prices: $0.001/1K input, $0.002/1K output. Fine-tuning of this 16K model is available. Fine-tuned GPT-3.5 is much cheaper to use, with input token prices decreasing by 75% to $0.003/1K and output token prices by 62% to $0.006/1K. gpt-3.5-turbo-1106 joins GPT-4 Turbo with improved function calling and reproducible outputs.

gpt-4-1106-preview/128K context
gpt-3.5-turbo-1106/16K context
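
For reference, a minimal sketch (not LibreChat code) of calling the new models directly, assuming the openai Python package v1.x and an OPENAI_API_KEY in the environment; the messages and seed value are placeholders, and swapping in gpt-3.5-turbo-1106 works the same way:

```python
# Sketch only: exercising the new 1106 model, JSON mode, and the
# reproducible-outputs (seed) beta via the openai Python package v1.x.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-1106-preview",               # new 128K-context preview model
    messages=[
        {"role": "system", "content": "Reply in JSON."},
        {"role": "user", "content": "List three colors."},
    ],
    response_format={"type": "json_object"},  # JSON mode
    seed=42,                                  # reproducible outputs (beta)
)
print(response.choices[0].message.content)
```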

Additionally, gpt-4-vision-preview?:

Multimodal capabilities:
GPT-4 Turbo now supports visual inputs in the Chat Completions API, enabling use cases like caption generation and visual analysis. You can access the vision features by using the gpt-4-vision-preview model. This vision capability will be integrated into the production-ready version of GPT-4 Turbo when it comes out of preview later this year.
You can also integrate DALL·E 3 for image generation into your applications via the Image generation API.
We released text-to-speech capabilities through the newly introduced TTS model, which will read text for you using one of six natural-sounding voices.
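
For concreteness, here is a minimal sketch (again not LibreChat code, assuming the openai Python package v1.x) of the three endpoints described above; the image URL, prompts, voice, and output file name are placeholders:

```python
# Sketch only: vision input, DALL·E 3, and TTS via the openai Python package v1.x.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Visual input via gpt-4-vision-preview (Chat Completions API)
vision = client.chat.completions.create(
    model="gpt-4-vision-preview",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is in this image?"},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/photo.png"}},  # placeholder
        ],
    }],
    max_tokens=300,  # the vision preview defaults to a short completion
)
print(vision.choices[0].message.content)

# 2. DALL·E 3 via the Image generation API
image = client.images.generate(
    model="dall-e-3",
    prompt="A watercolor painting of a lighthouse at dawn",
    size="1024x1024",
    n=1,
)
print(image.data[0].url)

# 3. Text-to-speech via the new TTS model
speech = client.audio.speech.create(
    model="tts-1",
    voice="alloy",  # one of the six voices
    input="Hello from the new text-to-speech endpoint.",
)
speech.stream_to_file("speech.mp3")
```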

More details

None

Which components are impacted by your request?

General

Pictures

No response

Code of Conduct

  • I agree to follow this project's Code of Conduct
@Originalimoc Originalimoc added the enhancement New feature or request label Nov 7, 2023
@fuegovic
Collaborator

fuegovic commented Nov 7, 2023

This updated the context for the new 1106 models:
#1146

DALL-E 3 has been included in this update:
#1147

and here's a quote from @danny-avila on Discord:

Will soon be working on image recognition for the gpt-4-vision model and more to come with the latest API updates

@Originalimoc
Author

I forgot the .env file needs to be changed.
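
(For illustration only: assuming the deployment lists its OpenAI models via the OPENAI_MODELS variable in LibreChat's .env, the new model names would be appended there; check .env.example for the exact key and default list in your version.)

```
OPENAI_MODELS=gpt-4-1106-preview,gpt-4-vision-preview,gpt-3.5-turbo-1106,gpt-3.5-turbo,gpt-4
```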
