
[Feature]: ability to fine tune the prompt context #415

Open
laurentperez opened this issue Sep 9, 2024 · 3 comments


Description

Hello,

Context: ollama + llama3.1 8B.

The generated commit message is too rich; it acts as if the LLM did a code review of my code and gave me some suggestions.

Example:

[screenshot: example of the overly detailed, review-style commit message]

Suggested Solution

I'd like to drive the prompt as in "you are not a code reviewer. do not give me suggestions, instead tell me what I did and return a concise report of the changes, to get a compact git commit message"

How would you do this?
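
For reference, a rough sketch of what such a constrained prompt could look like when calling the Ollama chat API directly (this is not a knob opencommit exposes today; the endpoint is Ollama's `/api/chat`, and the system prompt wording is only an illustration):

```ts
// Minimal sketch (Node 18+, ESM, built-in fetch): ask a local Ollama server for a
// concise commit message, with a system prompt that forbids review-style suggestions.
const diff = process.argv[2] ?? ""; // e.g. the output of `git diff --staged`

const response = await fetch("http://localhost:11434/api/chat", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "llama3.1",
    stream: false,
    messages: [
      {
        role: "system",
        content:
          "You are not a code reviewer. Do not give suggestions. " +
          "Describe only what was changed and return a single concise git commit message.",
      },
      { role: "user", content: diff },
    ],
  }),
});

const data = await response.json();
console.log(data.message.content); // the generated commit message
```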

Alternatives

No response

Additional Context

No response


tanc commented Oct 8, 2024

Looking at the code here and here, it seems like it should be providing a decent prompt, but I get the same kind of result as @laurentperez does from ollama + llama3.1 or ollama + mistral-nemo.

│
◇  📝 Commit message generated
│
└  Generated commit message:
——————————————————
It seems like you're adding a new field called `memberOnly` to your GraphQL schema for the `ScientificArticle` type. Here's how the changes you've made affect the schema:


tanc commented Oct 8, 2024

I just tried with fewer files staged and the result was much better. Is this a tokens issue? I've upped the number of max tokens in OCO_TOKENS_MAX_INPUT and OCO_TOKENS_MAX_OUTPUT, but the problem remains.


tanc commented Oct 8, 2024

OK, this is due to Ollama having a default context size of 2048. This can be increased when making API calls, but this project doesn't do that. A workaround is to create a model with a larger context by doing the following (example for llama3.1):

  1. Create a Modelfile text file somewhere with the following content:

     FROM llama3.1

     PARAMETER num_ctx 8192
     PARAMETER num_predict -1

  2. Run the command `ollama create llama3.1-max-context -f ./Modelfile`
  3. You should see the new model in the list when running `ollama list`
  4. Change the model used by oco: `oco config set OCO_MODEL='llama3.1-max-context'`

This should allow for much bigger diffs to be pushed into the model to get a correct answer.
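
For completeness, the per-request alternative mentioned above: the Ollama API accepts an `options.num_ctx` field on each call, so a larger context window could be requested without a custom Modelfile. A minimal sketch of such a call (not something oco does today; the 8192/-1 values just mirror the Modelfile above):

```ts
// Minimal sketch (Node 18+, ESM, built-in fetch): pass num_ctx per request to a
// local Ollama server instead of baking it into a custom Modelfile.
const response = await fetch("http://localhost:11434/api/generate", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "llama3.1",
    prompt: "Summarize the staged changes as a concise git commit message.",
    stream: false,
    options: {
      num_ctx: 8192,   // context window for this request only
      num_predict: -1, // no cap on generated tokens, matching the Modelfile above
    },
  }),
});

const data = await response.json();
console.log(data.response); // the model's answer
```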
