feature: Improve the LLM prompts for tag generation #505
Conversation
Hey, thanks for taking the time to tweak the prompt and send the PR. However, I intentionally don't want to lose control over the prompt. I want to keep the ability to change the format of the output in the future to include extra stuff, etc. If I let people replace the prompt completely, their future upgrades might not be backward compatible. My plan is to give people the option to "customize" the prompt and add extra rules, but I don't want people to completely replace it. The next release allows people to add new rules, and I'm also planning on letting people modify the prebuilt rules. Now, regarding the base prompt: I'd be happy to accept tweaks to it if you think your prompt achieves better results with llama, but we'd probably also want to test it on gpt models as well.
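For context, a minimal TypeScript sketch of the customization model described here (all names hypothetical): the base rules and the output-format instruction stay under the project's control, and users can only append extra rules.

```typescript
// Hypothetical sketch: users may append rules, but the base prompt and
// output format are fixed so future releases can evolve them safely.
const BASE_RULES: string[] = [
  "Aim for 3-5 tags.",
  "Prefer singular nouns.",
];

// Keeping the output-format instruction out of user hands means a future
// release can change it (e.g. add fields) without breaking customizations.
const OUTPUT_FORMAT =
  'You must respond in JSON with the key "tags" and the value an array of strings.';

function buildTagPrompt(content: string, customRules: string[] = []): string {
  const rules = [...BASE_RULES, ...customRules]
    .map((rule, i) => `${i + 1}. ${rule}`)
    .join("\n");
  return [
    "You are an assistant that extracts tags from the document below.",
    "Rules:",
    rules,
    OUTPUT_FORMAT,
    "Document:",
    content,
  ].join("\n");
}
```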
This new prompt works well in gpt-4o-mini and llama3.1-8b.
(force-pushed from 124af70 to 567e63d)
So I've just pushed a commit (removing the initial one) which is basically just my new prompt, with slight changes to allow for the custom prompt feature. I've tried it with an example document in both models. Example:
gpt-4o-mini:

```json
{
  "tags": [
    "Transformer",
    "Artificial Intelligence",
    "Privacy",
    "Open Models",
    "Machine Learning"
  ]
}
```

llama 3.1 8B:

```json
{ "tags": ["Transformer", "Artificial Intelligence", "Machine Learning", "OpenAI", "Privacy"] }
```

What do you think?
I tried it out with 34 links and it failed on 7 of them (also llama 3).
I'm not saying that this should be merged outright. However, a 16% failure rate is already a lot better than the 100% failure rate I saw on my end. I'm pretty sure that the issue lies in the code assuming that the model's response is pure JSON. With this I usually see an almost perfect success rate. These changes wouldn't hurt OpenAI either, as the new prompt works well with gpt-4o-mini too.
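A rough sketch of the defensive parsing being suggested here (the function name is hypothetical): instead of assuming the completion is pure JSON, slice from the first `{` to the last `}` so that chatty prefixes are tolerated, then validate the shape before trusting it.

```typescript
// Extract a { "tags": [...] } object from a possibly chatty completion,
// e.g. one prefixed with "Your requested JSON document: ...".
function extractTags(completion: string): string[] {
  const start = completion.indexOf("{");
  const end = completion.lastIndexOf("}");
  if (start === -1 || end <= start) {
    throw new Error("no JSON object found in model response");
  }
  const parsed: unknown = JSON.parse(completion.slice(start, end + 1));
  // Validate the expected shape before using it.
  if (
    typeof parsed !== "object" ||
    parsed === null ||
    !Array.isArray((parsed as { tags?: unknown }).tags)
  ) {
    throw new Error("response does not match the expected tags schema");
  }
  return (parsed as { tags: unknown[] }).tags.filter(
    (t): t is string => typeof t === "string",
  );
}
```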
It makes things better for you, but worse for everyone else, so not very promising.
Hello!
I just found this great piece of software! I was thrilled to see that I can use my own OpenAI endpoint and point it towards my Llama 3.1 8B endpoint.
Status Quo
I think the standard prompt as set in `packages/shared/prompts.ts` could use some tweaking. The default prompt doesn't work at all with my model (which I think is a pretty commonly used one). This one does work great:
I've only tested this prompt manually with my model. With it, my 0% success rate turns into a 100% success rate, which surprises me; usually you need to handle the model giving an introduction like "Your requested JSON document: ..." by looking for the first `{` and the last `}`.

Thus
Long story short: this PR adds the `TEXT_TAG_PROMPT` and `IMAGE_TAG_PROMPT` environment variables. If not set, the already existing prompt is used; if set, it is used instead. Variables in the template are then interpolated like `{{this}}`, akin to Jinja2 templates (which the project may adopt in the future?). My template in this syntax:
Not Great

As is, this change breaks `AISettings.tsx`, as it would now require the server configuration to work. For my tests I simply removed the calls to the buildPrompt functions, but this of course isn't acceptable for merging. I'd need a little help here on how we should approach this.

Now it works with Llama 3.1 8B, yay!