
feat: openAI explicit value for maxToken and temperature #659

Merged: 2 commits into k8sgpt-ai:main on Sep 18, 2023

Conversation

@panpan0000 (Contributor) commented Sep 15, 2023

When k8sgpt talks with vLLM, the default MaxToken is 16, which is far too small.
Since most models support at least 2048 tokens (e.g. Llama 1), this change sets a safe explicit value.

Closes # NA

📑 Description

✅ Checks

  • My pull request adheres to the code style of this project
  • My code requires changes to the documentation
  • I have updated the documentation as required
  • All the tests have passed

ℹ Additional Information

When k8sgpt talks to a local (on-premise) LLM, for example one served by vLLM (see the blog post https://k8sgpt.ai/blog/post-6/), a problem arises because the request from k8sgpt carries NO explicit MaxToken value.

From the vLLM log (screenshot), you will see temperature=0.7 and max_tokens=16.

Such a small max_tokens causes the following issue: the answers from the LLM are truncated (screenshot).

So this fix makes the values explicit, just like what was done in https://github.com/k8sgpt-ai/k8sgpt/blob/main/pkg/ai/cohere.go#L66.
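For illustration, here is a minimal Go sketch of what such an explicit setting looks like with the sashabaranov/go-openai client; the constant values and the helper name `getCompletion` are assumptions for this sketch, not the exact code merged in this PR:

```go
package ai

import (
	"context"

	openai "github.com/sashabaranov/go-openai"
)

const (
	// Assumed defaults: explicit values so an on-premise server such as
	// vLLM cannot fall back to its own tiny max_tokens default of 16.
	maxToken    = 2048 // most models (e.g. Llama 1) accept at least 2048 tokens
	temperature = 0.7  // the value observed in the vLLM log above
)

// getCompletion is a hypothetical helper showing where the constants apply.
func getCompletion(ctx context.Context, client *openai.Client, prompt string) (string, error) {
	resp, err := client.CreateChatCompletion(ctx, openai.ChatCompletionRequest{
		Model: openai.GPT3Dot5Turbo,
		Messages: []openai.ChatCompletionMessage{
			{Role: openai.ChatMessageRoleUser, Content: prompt},
		},
		MaxTokens:   maxToken,    // explicit, instead of the server-side default
		Temperature: temperature, // explicit, instead of the server-side default
	})
	if err != nil {
		return "", err
	}
	return resp.Choices[0].Message.Content, nil
}
```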

With this fix, the new vLLM log looks better (screenshots).

Regression Test

When switching back to online OpenAI, the result is still good (screenshot).

@panpan0000 panpan0000 requested review from a team as code owners September 15, 2023 05:46
@AlexsJones (Member) left a comment


  • Please can you add those as constants at the top of the file?
  • The PR name needs to be semantic, e.g. "feat: OpenAI: explicit value for MaxToken and Temp"


@panpan0000 panpan0000 changed the title OpenAI: explicit value for MaxToken and Temp feat: OpenAI- explicit value for MaxToken and Temp Sep 15, 2023
@panpan0000 (Contributor, Author)

  • Please can you add those as constants at the top of the file?
  • The PR name needs to be semantic, e.g. "feat: OpenAI: explicit value for MaxToken and Temp"

Nice suggestion!

Done. Thanks!

@panpan0000 panpan0000 changed the title feat: OpenAI- explicit value for MaxToken and Temp feat: OpenAI: explicit value for MaxToken and Temp Sep 15, 2023
@arbreezy (Member)

Why do we need to specify the Temperature value as a constant?

Is this worth doing across all AI backends?

@panpan0000 panpan0000 changed the title feat: OpenAI: explicit value for MaxToken and Temp feat: OpenAI explicit value for MaxToken and Temp Sep 15, 2023
@panpan0000 panpan0000 changed the title feat: OpenAI explicit value for MaxToken and Temp feat: openAI explicit value for maxToken and temperature Sep 15, 2023
@panpan0000 (Contributor, Author) commented Sep 15, 2023

Why do we need to specify the Temperature value as a constant?

Is this worth doing across all AI backends?

Hi @arbreezy:
(1) I learned from the previous code for the cohere backend: https://github.com/k8sgpt-ai/k8sgpt/blob/main/pkg/ai/cohere.go#L66
(2) The go-openai library sets no default temperature value (see the sketch below).
(3) So the actual values depend on the OpenAI API server: OpenAI makes it 0.7 by default, but if we build an on-premise LLM server, it may vary.
(4) To be honest, I think k8sgpt is a more serious use case than normal chat, so the temperature should not be too high or it may lead to hallucination; but I don't want to change the 0.7 value for now.
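To illustrate points (2) and (3): in go-openai, numeric request fields such as Temperature and MaxTokens carry `omitempty` JSON tags, so an unset field is dropped from the request body entirely and whichever server receives the call applies its own default. A minimal sketch under that assumption:

```go
package main

import (
	"encoding/json"
	"fmt"

	openai "github.com/sashabaranov/go-openai"
)

func main() {
	// Temperature and MaxTokens are left at their zero values here; because
	// of the omitempty tags they disappear from the JSON body, and the
	// serving side picks its own defaults (max_tokens=16 on vLLM, per the
	// logs above).
	req := openai.ChatCompletionRequest{
		Model: openai.GPT3Dot5Turbo,
		Messages: []openai.ChatCompletionMessage{
			{Role: openai.ChatMessageRoleUser, Content: "hi"},
		},
	}
	body, _ := json.Marshal(req)
	fmt.Println(string(body)) // no "temperature" or "max_tokens" keys present
}
```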

@arbreezy (Member)

@panpan0000 makes sense.
Two additional things to consider:
a) is it worth making the temperature value configurable by users?
b) I think it makes sense to have temperature explicitly defined in all our AI backends, e.g. Azure OpenAI.

Commits:
  • "When k8sgpt talks with vLLM, the default MaxToken is 16, which is far too small. Since most models support 2048 tokens (e.g. Llama 1), set a safe explicit value." (Signed-off-by: Peter Pan <Peter.Pan@daocloud.io>)
  • Signed-off-by: Peter Pan <Peter.Pan@daocloud.io>
@panpan0000 (Contributor, Author)

@panpan0000 makes sense. Two additional things to consider: a) is it worth making the temperature value configurable by users? b) I think it makes sense to have temperature explicitly defined in all our AI backends, e.g. Azure OpenAI.

Sure, @arbreezy. Already updated with a second commit: (1) added it for all backends; (2) added it as a cmd flag.

Can you please review again? Thanks a lot!
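For reference, a minimal sketch of what the cmd flag could look like with cobra/pflag (which the k8sgpt CLI is built on); the variable and command names here are hypothetical, not the merged code:

```go
package cmd

import (
	"github.com/spf13/cobra"
)

// temperature is a hypothetical package-level variable bound to the flag.
var temperature float32

// analyzeCmd stands in for whichever k8sgpt command gains the flag.
var analyzeCmd = &cobra.Command{
	Use:   "analyze",
	Short: "analyze the cluster and explain findings via the AI backend",
	Run: func(cmd *cobra.Command, args []string) {
		// ... pass `temperature` down to the AI backend client here ...
	},
}

func init() {
	// Expose the explicit default as a user-configurable --temperature flag.
	analyzeCmd.Flags().Float32Var(&temperature, "temperature", 0.7,
		"sampling temperature for the AI backend")
}
```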

@AlexsJones AlexsJones merged commit f55946d into k8sgpt-ai:main Sep 18, 2023
7 checks passed
@arbreezy (Member)

@panpan0000 @AlexsJones ah, I think we also need to set it in server mode, right?

@AlexsJones (Member)

@panpan0000 @AlexsJones ah, I think we also need to set it in server mode, right?

Feels like an oversight that needs fixing, yes.
