
Prompt engineering required when using cheaper LLM #26

Open
rorywhite200 opened this issue Sep 6, 2024 · 1 comment
Labels: enhancement (New feature or request), P3 (Should be addressed at some point)

Comments

@rorywhite200
Collaborator

I realized that we have been using GPT-4o rather than GPT-4o mini (the more economical version). When I switch to the cheaper model (by enforcing it on the server), I notice a subtle deterioration in the tutor's quality; in particular, it seems to give away answers more quickly. I think we need some additional prompt engineering to get this working properly.
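For reference, here is a minimal sketch of the kind of change I have in mind, assuming the server makes an OpenAI Chat Completions call; the prompt wording, function name, and temperature are illustrative assumptions, not our actual code:

```python
# Minimal sketch (not our actual server code): pin gpt-4o-mini server-side
# and tighten the tutor system prompt so the model guides rather than reveals.
from openai import OpenAI

client = OpenAI()

# Smaller models tend to need more explicit guardrails in the prompt.
TUTOR_SYSTEM_PROMPT = (
    "You are a patient programming tutor. Never state the full solution or "
    "final answer directly. Ask one guiding question at a time, point to the "
    "relevant concept, and only reveal a complete answer after the student "
    "has made at least two genuine attempts and explicitly asks for it."
)

def tutor_reply(conversation: list[dict]) -> str:
    """Return the tutor's next message, with the model enforced server-side."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # enforced here regardless of what the client requests
        messages=[{"role": "system", "content": TUTOR_SYSTEM_PROMPT}, *conversation],
        temperature=0.3,
    )
    return response.choices[0].message.content
```

We would still need to iterate on the prompt text itself against real tutoring transcripts to confirm it stops the mini model from giving answers away.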

rorywhite200 added the enhancement label on Sep 6, 2024
@joelostblom
Owner

Maybe hold off on this for now, since we might explore another approach so that we don't have to pay for the LLM at all. My understanding is that nothing is prohibitively expensive right now?

joelostblom added the P3 label on Sep 30, 2024