
Conversation


@Prabhatyadav60 Prabhatyadav60 commented Feb 7, 2026

Fixes #134

Description

The hardcoded model gemma2-9b-it has been decommissioned by Groq, causing the backend to crash with groq.BadRequestError as soon as any analysis (Bias, Chat, or Fact-Check) is run.

Changes Proposed

  • Updated all backend instances of gemma2-9b-it to the stable llama-3.3-70b-versatile model (a sketch of the affected call shape follows this list).
  • Verified compatibility with existing prompts.
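
For illustration, a minimal sketch of the kind of call that changes, assuming the standard groq Python client (the exact file contents are not reproduced here):

from groq import Groq

client = Groq()  # reads GROQ_API_KEY from the environment

response = client.chat.completions.create(
    # previously: model="gemma2-9b-it" (decommissioned by Groq)
    model="llama-3.3-70b-versatile",
    messages=[{"role": "user", "content": "Rate the political bias of this article ..."}],
)
print(response.choices[0].message.content)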

Verification

  • Bias Analysis: Success (Score generated).
  • Chat: Success (responds to context); a model-availability check is sketched below.
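
A quick availability check can confirm the replacement model is still served; this is a sketch assuming the groq Python SDK and a GROQ_API_KEY in the environment:

from groq import Groq

client = Groq()  # assumes GROQ_API_KEY is set in the environment

# Fail fast if the replacement model is not listed by the Groq API
available = {m.id for m in client.models.list().data}
assert "llama-3.3-70b-versatile" in available, "replacement model not served by Groq"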

Reasoning

  • Compatibility: llama-3.3-70b-versatile is served through the same Groq chat-completions client, so the swap requires no refactoring beyond the model string.
  • Capabilities: As a 70B parameter model, it offers superior reasoning for Bias and Fact-Check analysis compared to the decommissioned 9B model.
  • Stability: It is the current stable flagship on Groq, ensuring long-term reliability.

Summary by CodeRabbit

  • Chores
    • Updated the underlying AI model used across core processing modules to improve response quality and consistency.
    • No user-facing behavior or workflows were changed; existing prompts, outputs, and error handling remain the same.

@coderabbitai coderabbitai bot commented Feb 7, 2026

📝 Walkthrough

This PR updates the LLM model identifier from "gemma2-9b-it" to "llama-3.3-70b-versatile" across five backend modules. No other logic, control flow, or error handling is modified.

Changes

  • Bias Detection (backend/app/modules/bias_detection/check_bias.py): Replaced the Groq model parameter gemma2-9b-it → llama-3.3-70b-versatile in the API call.
  • Chat (backend/app/modules/chat/llm_processing.py): Updated the LLM invocation model string from gemma2-9b-it to llama-3.3-70b-versatile.
  • Facts Check (backend/app/modules/facts_check/llm_processing.py): Two chat completion calls changed to use llama-3.3-70b-versatile instead of gemma2-9b-it.
  • Langgraph Nodes (backend/app/modules/langgraph_nodes/judge.py, backend/app/modules/langgraph_nodes/sentiment.py): Replaced module-level/inline Groq model initializations with llama-3.3-70b-versatile (was gemma2-9b-it).

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~3 minutes

Poem

🐰 A tiny hop through files I tread,
Swapping names where LLMs are fed,
gemma bowed out, llama took the stage,
Five small edits on the code-stage page,
Nose a-twitch — the build moves ahead.

🚥 Pre-merge checks | ✅ 2 passed | ❌ 1 failed

❌ Failed checks (1 warning)
  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 0.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions that are missing them.

✅ Passed checks (2 passed)
  • Title check: ✅ Passed. The title accurately describes the primary change: replacing a decommissioned model with a new one across the backend.
  • Description check: ✅ Passed. Check skipped because CodeRabbit’s high-level summary is enabled.





@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
backend/app/modules/facts_check/llm_processing.py (1)

143-148: ⚠️ Potential issue | 🟡 Minor

Pre-existing bug: parsed is used unbound if JSON parsing fails.

Not introduced by this PR, but worth noting: if json.loads raises on line 144, the except block logs the error but doesn't set parsed or continue/skip. Line 148 then references parsed, which will raise NameError and crash the entire function.

This risk increases slightly with a model change, as the new model's output format may differ.

Suggested fix
             try:
                 parsed = json.loads(content)
             except Exception as parse_err:
                 logger.error(f"LLM JSON parse error: {parse_err}")
+                parsed = {
+                    "verdict": "Unknown",
+                    "explanation": f"Parse error: {parse_err}",
+                    "original_claim": claim,
+                    "source_link": source,
+                }
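
An alternative sketch, assuming the parse happens inside a per-claim loop (as the mention of continue/skip above suggests), is to drop the failed item instead of substituting a placeholder verdict:

try:
    parsed = json.loads(content)
except Exception as parse_err:
    logger.error(f"LLM JSON parse error: {parse_err}")
    continue  # skip this claim rather than leave `parsed` unbound

Either way, the function no longer raises NameError when the model returns malformed JSON.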
🧹 Nitpick comments (1)
backend/app/modules/bias_detection/check_bias.py (1)

64-64: Consider centralizing the model identifier into a shared constant.

The string "llama-3.3-70b-versatile" is now hardcoded in five separate files. The next time the model needs to change (as happened with this PR), every occurrence must be found and updated. A single constant (e.g., GROQ_MODEL in a shared config module or environment variable) would make future swaps a one-line change.

Example

In a shared config (e.g., backend/app/config.py):

GROQ_MODEL = os.getenv("GROQ_MODEL", "llama-3.3-70b-versatile")

Then in each module:

-            model="llama-3.3-70b-versatile",
+            model=GROQ_MODEL,
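
To make this concrete, a minimal sketch of the shared constant and its use; the app.config import path is an assumption about the project layout:

# backend/app/config.py (hypothetical location for the shared constant)
import os

GROQ_MODEL = os.getenv("GROQ_MODEL", "llama-3.3-70b-versatile")

# in each module, the hardcoded string becomes an import plus a reference
from app.config import GROQ_MODEL

response = client.chat.completions.create(
    model=GROQ_MODEL,      # was: model="llama-3.3-70b-versatile"
    messages=messages,     # the module's existing message list
)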

@Prabhatyadav60 Prabhatyadav60 force-pushed the fix/upgrade-deprecated-groq-model branch from 9e7592b to 3b4593a on February 7, 2026 at 07:20

