
feat: Add llama-3.3 models for Groq #11533

Merged
1 commit merged into langgenius:main on Dec 11, 2024

Conversation

pvoo
Contributor

@pvoo pvoo commented Dec 10, 2024

Summary

Added support for two new Groq LLM models: Llama 3.3 70B Specdec and Llama 3.3 70B Versatile. These models are part of Groq's latest offerings and provide enhanced capabilities compared to previous versions.
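In Dify's model runtime, adding a model like this typically means adding a YAML definition file under the Groq provider directory. A minimal sketch of what such a definition might look like (the field names follow Dify's model-runtime conventions, but the specific values — context size, parameter limits — are illustrative assumptions, not the exact contents of this PR):

```yaml
# Hypothetical sketch of a Dify model-runtime definition for a Groq model.
# Values below are placeholders; consult Groq's model docs for real limits.
model: llama-3.3-70b-versatile
label:
  en_US: Llama 3.3 70B Versatile
model_type: llm
model_properties:
  mode: chat
  context_size: 32768        # assumed; check Groq's published context window
parameter_rules:
  - name: temperature
    use_template: temperature
  - name: max_tokens
    use_template: max_tokens
    default: 512
    min: 1
    max: 8192                # assumed upper bound
```

A second, analogous file would cover the Specdec variant. Dify discovers these files per provider, so no backend code change is usually needed beyond the YAML itself.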

Screenshots

Before After
... ...

Checklist

  • This change requires a documentation update, included: Dify Document
  • I understand that this PR may be closed in case there was no previous discussion or issues. (This doesn't apply to typos!)
  • I've added a test for each change that was introduced, and I tried as much as possible to make a single atomic change.
  • I've updated the documentation accordingly.
  • I ran dev/reformat (backend) and cd web && npx lint-staged (frontend) to appease the lint gods

@dosubot dosubot bot added size:M This PR changes 30-99 lines, ignoring generated files. ⚙️ feat:model-runtime labels Dec 10, 2024
@pvoo pvoo changed the title Add llama-3.3 models for Groq feat: Add llama-3.3 models for Groq Dec 10, 2024
@dosubot dosubot bot added the lgtm This PR has been approved by a maintainer label Dec 11, 2024
@crazywoola crazywoola merged commit 80c52e0 into langgenius:main Dec 11, 2024
5 checks passed
iamjoel pushed a commit that referenced this pull request Dec 16, 2024
Labels
⚙️ feat:model-runtime lgtm This PR has been approved by a maintainer size:M This PR changes 30-99 lines, ignoring generated files.
2 participants