optimizations and formatting frontend and backend #114
Changes from all commits
```diff
@@ -4,7 +4,6 @@
 def embed_query(query: str):
     embeddings = embedder.encode(query).tolist()
     return embeddings
```
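For context, a minimal sketch of how this helper is typically wired up, assuming `embedder` is a sentence-transformers model; the model name and the usage below are illustrative, not taken from this PR:

```python
# Minimal sketch, assuming a sentence-transformers embedder; the model name is illustrative.
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")


def embed_query(query: str):
    # Encode the query and convert the numpy vector to a plain Python list.
    embeddings = embedder.encode(query).tolist()
    return embeddings


# Example usage: produces a list of floats suitable for a vector-store query.
vector = embed_query("What changed in this release?")
print(len(vector))  # embedding dimensionality, e.g. 384 for MiniLM
```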
```diff
@@ -1,11 +1,10 @@
 def error_handler(input):
     print("Error detected!")
     print(f"From: {input.get('error_from')}")
     print(f"Message: {input.get('message')}")
-    return {"status": "stopped_due_to_error",
-            "from": [input.get("error_from")],
-            "error": [input.get("message")]
-            }
+    return {
+        "status": "stopped_due_to_error",
+        "from": [input.get("error_from")],
+        "error": [input.get("message")],
+    }
```
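For reference, a small worked example of what this handler returns for a given error payload; the payload values are illustrative, but the keys match what `error_handler` reads via `input.get(...)`:

```python
# Illustrative payload; keys mirror what error_handler reads via input.get(...).
error_state = {
    "error_from": "generate_perspective",
    "message": "Missing or empty 'facts' in state",
}

result = error_handler(error_state)
print(result)
# {'status': 'stopped_due_to_error',
#  'from': ['generate_perspective'],
#  'error': ["Missing or empty 'facts' in state"]}
```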
```diff
@@ -13,10 +13,7 @@ class PerspectiveOutput(BaseModel):
 my_llm = "llama-3.3-70b-versatile"
 
-llm = ChatGroq(
-    model=my_llm,
-    temperature=0.7
-)
+llm = ChatGroq(model=my_llm, temperature=0.7)
```
Contributor

**Lower temperature for structured output (model ID confirmed valid)**

For schema-constrained generation, a lower temperature materially reduces parse/validation errors. The Groq model ID `llama-3.3-70b-versatile` was verified as valid and is commonly used with ChatGroq (see Groq's model listing and playground examples at console.groq.com).

Suggested change:

```diff
-llm = ChatGroq(model=my_llm, temperature=0.7)
+llm = ChatGroq(model=my_llm, temperature=0.2)
```
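A runnable sketch of the suggested setup, assuming `langchain_groq` is installed and `GROQ_API_KEY` is set; the `PerspectiveOutput` fields below are illustrative, not the repo's actual schema:

```python
# Sketch of the reviewer's suggestion; assumes langchain_groq is installed and
# GROQ_API_KEY is set. The PerspectiveOutput fields here are illustrative only.
from langchain_groq import ChatGroq
from pydantic import BaseModel


class PerspectiveOutput(BaseModel):
    perspective: str
    confidence: float


my_llm = "llama-3.3-70b-versatile"

# A lower temperature keeps the model closer to the requested schema,
# which reduces parse/validation failures with with_structured_output.
llm = ChatGroq(model=my_llm, temperature=0.2)
structured_llm = llm.with_structured_output(PerspectiveOutput)

result = structured_llm.invoke("Summarize the article's perspective in one sentence.")
print(result.perspective, result.confidence)
```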
```diff
 structured_llm = llm.with_structured_output(PerspectiveOutput)
```

```diff
@@ -37,24 +34,27 @@ def generate_perspective(state):
         elif not facts:
             raise ValueError("Missing or empty 'facts' in state")
 
-        facts_str = "\n".join([f"Claim: {f['original_claim']}\n"
-                               "Verdict: {f['verdict']}\nExplanation: "
-                               "{f['explanation']}" for f in state["facts"]])
-
-        result = chain.invoke({
-            "cleaned_article": text,
-            "facts": facts_str,
-            "sentiment": state.get("sentiment", "neutral")
-        })
+        facts_str = "\n".join(
+            [
+                f"Claim: {f['original_claim']}\n"
+                "Verdict: {f['verdict']}\nExplanation: "
+                "{f['explanation']}"
+                for f in state["facts"]
+            ]
+        )
```
Comment on lines +37 to +44

Contributor

**Critical: f-string interpolation bug in `facts_str` (`verdict` and `explanation` not rendered)**

Only the first segment is an f-string; the other segments are plain strings, so their braces are emitted literally. This changes behavior. Fix, and also avoid building an intermediate list:

```diff
-    facts_str = "\n".join(
-        [
-            f"Claim: {f['original_claim']}\n"
-            "Verdict: {f['verdict']}\nExplanation: "
-            "{f['explanation']}"
-            for f in state["facts"]
-        ]
-    )
+    facts_str = "\n".join(
+        (
+            f"Claim: {f['original_claim']}\n"
+            f"Verdict: {f['verdict']}\n"
+            f"Explanation: {f['explanation']}"
+        )
+        for f in facts
+    )
```
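A small standalone demonstration of the interpolation behavior flagged above (illustrative data, not from the PR): only string literals prefixed with `f` are interpolated, and adjacent plain literals keep their braces verbatim.

```python
f = {"original_claim": "X", "verdict": "False", "explanation": "No evidence"}

# Implicit string concatenation: only the first literal below is an f-string,
# so the braces in the second literal are emitted verbatim.
buggy = (
    f"Claim: {f['original_claim']}\n"
    "Verdict: {f['verdict']}\nExplanation: {f['explanation']}"
)
print(buggy)
# Claim: X
# Verdict: {f['verdict']}
# Explanation: {f['explanation']}

# Prefixing every literal with f interpolates all three fields.
fixed = (
    f"Claim: {f['original_claim']}\n"
    f"Verdict: {f['verdict']}\n"
    f"Explanation: {f['explanation']}"
)
print(fixed)
# Claim: X
# Verdict: False
# Explanation: No evidence
```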
```diff
+        result = chain.invoke(
+            {
+                "cleaned_article": text,
+                "facts": facts_str,
+                "sentiment": state.get("sentiment", "neutral"),
+            }
+        )
     except Exception as e:
         print(f"some error occured in generate_perspective:{e}")
         return {
             "status": "error",
             "error_from": "generate_perspective",
             "message": f"{e}",
         }
-    return {
-        **state,
-        "perspective": result,
-        "status": "success"
-    }
+    return {**state, "perspective": result, "status": "success"}
```
**Use exception chaining and correct typos in error messages (Ruff B904)**

Adopt `raise ... from e` and correct the spelling to improve debuggability and consistency. Two locations in backend/app/db/vector_store.py need the update; mirror the same improvement for the earlier client initialization exception (Line 13). Chaining with `raise ... from e` preserves the original traceback.

🪛 Ruff (0.12.2)

36-36: Within an `except` clause, raise exceptions with `raise ... from err` or `raise ... from None` to distinguish them from errors in exception handling (B904)
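For illustration, a minimal sketch of the chaining pattern the reviewer asks for; the class name, function name, and the sqlite3 stand-in dependency below are hypothetical, not the actual backend/app/db/vector_store.py code.

```python
# Hypothetical sketch of the `raise ... from e` pattern; sqlite3 is only a
# runnable stand-in for the real vector-store client.
import sqlite3


class VectorStoreError(RuntimeError):
    """Raised when the underlying client cannot be initialized."""


def init_client(path: str) -> sqlite3.Connection:
    try:
        return sqlite3.connect(path)
    except Exception as e:
        # Ruff B904: chain with `from e` so the original traceback is preserved
        # instead of being masked by the new exception.
        raise VectorStoreError(f"Failed to initialize client: {e}") from e


try:
    init_client("/nonexistent/dir/db.sqlite3")
except VectorStoreError as err:
    print(err)            # the wrapping error
    print(err.__cause__)  # the original sqlite3 error, kept by `from e`
```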