
Use a different model to guess pip and respond #80

Merged
2 commits merged into main on Feb 16, 2024

Conversation

jakethekoenig (Member)

If we fine-tune a model, it will learn to always respond in Python surrounded with backticks, so it won't be able to suggest pip packages well. We need to use a different model for that.

model = "gpt-3.5-turbo"
else:
model = llm_model

custom_llm_provider = self.config.get("llm_custom_provider")
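For context, a minimal sketch of the split the diff is getting at: one model for generating code and a plain (non-fine-tuned) model for guessing pip package names. The `choose_models` helper, the `ft:` prefix check, and the overall control flow here are assumptions for illustration, not rawdog's actual implementation; only the `llm_model` and `llm_custom_provider` config keys come from the diff above.

```python
# Hypothetical sketch -- not rawdog's actual code.
def choose_models(config: dict) -> tuple[str, str]:
    llm_model = config.get("llm_model")
    custom_llm_provider = config.get("llm_custom_provider")

    # Assumption: OpenAI fine-tuned model names start with "ft:".
    is_finetuned = llm_model is not None and llm_model.startswith("ft:")

    if custom_llm_provider is None and is_finetuned:
        # A fine-tuned model always answers with fenced Python, so use a
        # stock model to guess pip package names in plain text.
        pip_model = "gpt-3.5-turbo"
    else:
        # Otherwise reuse the main model for both tasks.
        pip_model = llm_model or "gpt-3.5-turbo"

    return llm_model or "gpt-3.5-turbo", pip_model
```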
Member

What if the user is using a different LLM provider (like Ollama)? It would be kind of annoying to have to change the pip model as well. But I guess that's a very niche scenario, so it probably doesn't matter.

jakethekoenig (Member, Author)

It should only be an issue if they are using Ollama with a rawdog fine-tuned model. Otherwise they'll just get the same model for both llm_model and pip_model, which is what they would want.
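Using the hypothetical `choose_models` sketch above, that fallback looks like this: a custom-provider user who has not fine-tuned their model ends up with the same model for both jobs.

```python
# Hypothetical example: Ollama user without a rawdog fine-tuned model.
config = {"llm_model": "llama2", "llm_custom_provider": "ollama"}
llm_model, pip_model = choose_models(config)
assert llm_model == pip_model == "llama2"
```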

jakethekoenig merged commit 2ed7ac5 into main on Feb 16, 2024
2 checks passed