13 changes: 11 additions & 2 deletions README.md
@@ -134,6 +134,15 @@ sequenceDiagram
## Setup & Deployment

### Frontend Setup

*Setup environment variables:*
- add `.env` file in the `/frontend` directory.
- add the following environment variable in your `.env` file.
```
NEXT_PUBLIC_API_URL=http://localhost:8000
```

```bash
cd frontend
npm install
@@ -151,10 +160,10 @@ npm run dev


*Setup environment variables:*
- add .env file in `/new-backend` directory.
- add .env file in `/backend` directory.
- add the following environment variables in your .env file.
```
GROQ_API_KEY= <groq_api_key>
GROQ_API_KEY= <groq_api_key>
PINECONE_API_KEY = <your_pinecone_API_KEY>
PORT = 8000
SEARCH_KEY = <your_Google_custom_search_engine_API_key>
9 changes: 2 additions & 7 deletions backend/app/db/vector_store.py
@@ -6,7 +6,6 @@
if not PINECONE_API_KEY:
raise ValueError("PINECONE_API_KEY environment variable is required")
try:

# Initialize Pinecone client
pc = Pinecone(api_key=PINECONE_API_KEY)

@@ -25,10 +24,7 @@
name=INDEX_NAME,
dimension=DIMENSIONS,
metric=METRIC,
spec=ServerlessSpec(
cloud=CloudProvider.AWS,
region=AwsRegion.US_EAST_1
)
spec=ServerlessSpec(cloud=CloudProvider.AWS, region=AwsRegion.US_EAST_1),
)
else:
print(f"Index '{INDEX_NAME}' already exists")
@@ -37,5 +33,4 @@
# Connect to the index
index = pc.Index(INDEX_NAME)
except Exception as e:
raise RuntimeError("Error occured while "
f"connecting to the index {INDEX_NAME}:{e}")
raise RuntimeError(f"Error occured while connecting to the index {INDEX_NAME}:{e}")
💡 Verification agent

🧩 Analysis chain

Use exception chaining and fix typos in error message (Ruff B904)

Adopt raise ... from e and correct spelling to improve debuggability and consistency.

Apply this diff:

-    raise RuntimeError(f"Error occured while connecting to the index {INDEX_NAME}:{e}")
+    raise RuntimeError(
+        f"Error occurred while connecting to the index {INDEX_NAME}: {e}"
+    ) from e

Also mirror the same improvements for the earlier client initialization exception (Line 13), e.g.:

except Exception as e:
    raise RuntimeError(
        f"Error occurred while initializing Pinecone client: {e}"
    ) from e

🏁 Script executed:

#!/bin/bash
# Display the beginning of the file to locate the client initialization exception around line 13
sed -n '1,50p' backend/app/db/vector_store.py

echo "----"

# Display lines around the reported error at line 36
sed -n '30,70p' backend/app/db/vector_store.py

Length of output: 1414


Use exception chaining and correct typos in error messages

Two locations in backend/app/db/vector_store.py need updates to improve debuggability and consistency:

  • Client initialization block (around line 13)
  • Index connection block (around line 36)

Apply the following diffs:

--- a/backend/app/db/vector_store.py
@@ -9,7 +9,10 @@ PINECONE_API_KEY = os.getenv("PINECONE_API_KEY")
 try:
     # Initialize Pinecone client
     pc = Pinecone(api_key=PINECONE_API_KEY)
-
-except Exception as e:
-    raise RuntimeError(f"Error occured while intialising pinecone client:{e}")
+except Exception as e:
+    raise RuntimeError(
+        f"Error occurred while initializing Pinecone client: {e}"
+    ) from e
--- a/backend/app/db/vector_store.py
@@ -34,7 +37,10 @@ if not pc.has_index(INDEX_NAME):
 try:
     # Connect to the index
     index = pc.Index(INDEX_NAME)
-
-except Exception as e:
-    raise RuntimeError(f"Error occured while connecting to the index {INDEX_NAME}:{e}")
+except Exception as e:
+    raise RuntimeError(
+        f"Error occurred while connecting to the index {INDEX_NAME}: {e}"
+    ) from e

These changes:

  • Correct “occured” → “occurred” and “intialising” → “initializing”
  • Adopt exception chaining (raise … from e) for original traceback preservation; a standalone sketch of the effect follows
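
A standalone sketch (names here are illustrative, not repo code) of what chaining preserves:

```python
# `raise ... from e` records the original error on __cause__, so the printed
# traceback shows both exceptions, joined by "The above exception was the
# direct cause of the following exception".
INDEX_NAME = "demo-index"  # illustrative value

def connect():
    raise ConnectionError("TLS handshake failed")

try:
    try:
        connect()
    except Exception as e:
        raise RuntimeError(
            f"Error occurred while connecting to the index {INDEX_NAME}: {e}"
        ) from e
except RuntimeError as err:
    assert isinstance(err.__cause__, ConnectionError)
    print(f"{err} (caused by: {err.__cause__!r})")
```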
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

-    raise RuntimeError(f"Error occured while connecting to the index {INDEX_NAME}:{e}")
+PINECONE_API_KEY = os.getenv("PINECONE_API_KEY")
+try:
+    # Initialize Pinecone client
+    pc = Pinecone(api_key=PINECONE_API_KEY)
+except Exception as e:
+    raise RuntimeError(
+        f"Error occurred while initializing Pinecone client: {e}"
+    ) from e

Suggested change

-    raise RuntimeError(f"Error occured while connecting to the index {INDEX_NAME}:{e}")
+if not pc.has_index(INDEX_NAME):
+try:
+    # Connect to the index
+    index = pc.Index(INDEX_NAME)
+except Exception as e:
+    raise RuntimeError(
+        f"Error occurred while connecting to the index {INDEX_NAME}: {e}"
+    ) from e
🧰 Tools
🪛 Ruff (0.12.2)

36-36: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling

(B904)

🤖 Prompt for AI Agents
In backend/app/db/vector_store.py around lines 13 and 36, update the two
RuntimeError messages to correct typos and preserve original tracebacks: change
"intialising" to "initializing" in the client initialization block and "occured"
to "occurred" in the index connection block, and re-raise using exception
chaining (raise RuntimeError("...") from e) so the original exception is
attached to the new RuntimeError.

7 changes: 2 additions & 5 deletions backend/app/modules/bias_detection/check_bias.py
@@ -12,7 +12,7 @@ def check_bias(text):
try:
print(text)
print(json.dumps(text))

if not text:
raise ValueError("Missing or empty 'cleaned_text'")

@@ -30,10 +30,7 @@ def check_bias(text):
},
{
"role": "user",
"content": (
"Give bias score to the following article "
f"\n\n{text}"
),
"content": (f"Give bias score to the following article \n\n{text}"),
},
],
model="gemma2-9b-it",
1 change: 0 additions & 1 deletion backend/app/modules/chat/embed_query.py
@@ -4,7 +4,6 @@


def embed_query(query: str):

embeddings = embedder.encode(query).tolist()

return embeddings
15 changes: 4 additions & 11 deletions backend/app/modules/chat/get_rag_data.py
@@ -10,22 +10,15 @@


def search_pinecone(query: str, top_k: int = 5):

embeddings = embed_query(query)

results = index.query(
vector=embeddings,
top_k=top_k,
include_metadata=True,
namespace="default"

vector=embeddings, top_k=top_k, include_metadata=True, namespace="default"
)

matches = []
for match in results["matches"]:
matches.append({
"id": match["id"],
"score": match["score"],
"metadata": match["metadata"]
})
matches.append(
{"id": match["id"], "score": match["score"], "metadata": match["metadata"]}
)
return matches
10 changes: 6 additions & 4 deletions backend/app/modules/chat/llm_processing.py
@@ -8,8 +8,10 @@


def build_context(docs):

return "\n".join(f"{m['metadata'].get('explanation') or m['metadata'].get('reasoning', '')}"for m in docs)
return "\n".join(
f"{m['metadata'].get('explanation') or m['metadata'].get('reasoning', '')}"
for m in docs
)


def ask_llm(question, docs):
@@ -28,8 +30,8 @@ def ask_llm(question, docs):
model="gemma2-9b-it",
messages=[
{"role": "system", "content": "Use only the context to answer."},
{"role": "user", "content": prompt}
]
{"role": "user", "content": prompt},
],
)

return response.choices[0].message.content
9 changes: 6 additions & 3 deletions backend/app/modules/facts_check/web_search.py
@@ -6,14 +6,17 @@

GOOGLE_SEARCH = os.getenv("SEARCH_KEY")


def search_google(query):
results = requests.get(f"https://www.googleapis.com/customsearch/v1?key={GOOGLE_SEARCH}&cx=f637ab77b5d8b4a3c&q={query}")
results = requests.get(
f"https://www.googleapis.com/customsearch/v1?key={GOOGLE_SEARCH}&cx=f637ab77b5d8b4a3c&q={query}"
)
res = results.json()
first = {}
first["title"] = res["items"][0]["title"]
first["link"] = res["items"][0]["link"]
first["snippet"] = res["items"][0]["snippet"]

return [
first,
]
]
62 changes: 17 additions & 45 deletions backend/app/modules/langgraph_builder.py
@@ -5,8 +5,8 @@
generate_perspective,
judge,
store_and_send,
error_handler
)
error_handler,
)

from typing_extensions import TypedDict

@@ -24,58 +24,34 @@ class MyState(TypedDict):
def build_langgraph():
graph = StateGraph(MyState)

graph.add_node(
"sentiment_analysis",
sentiment.run_sentiment_sdk
)
graph.add_node(
"fact_checking",
fact_check.run_fact_check
)
graph.add_node(
"generate_perspective",
generate_perspective.generate_perspective
)
graph.add_node(
"judge_perspective",
judge.judge_perspective
)
graph.add_node(
"store_and_send",
store_and_send.store_and_send
)
graph.add_node(
"error_handler",
error_handler.error_handler
)
graph.add_node("sentiment_analysis", sentiment.run_sentiment_sdk)
graph.add_node("fact_checking", fact_check.run_fact_check)
graph.add_node("generate_perspective", generate_perspective.generate_perspective)
graph.add_node("judge_perspective", judge.judge_perspective)
graph.add_node("store_and_send", store_and_send.store_and_send)
graph.add_node("error_handler", error_handler.error_handler)

graph.set_entry_point(
"sentiment_analysis",
)
"sentiment_analysis",
)

graph.add_conditional_edges(
"sentiment_analysis",
lambda x: (
"error_handler" if x.get("status") == "error" else "fact_checking"
)
lambda x: ("error_handler" if x.get("status") == "error" else "fact_checking"),
)

graph.add_conditional_edges(
"fact_checking",
lambda x: (
"error_handler"
if x.get("status") == "error"
else "generate_perspective"
)
"error_handler" if x.get("status") == "error" else "generate_perspective"
),
)

graph.add_conditional_edges(
"generate_perspective",
lambda x: (
"error_handler"
if x.get("status") == "error"
else "judge_perspective"
)
"error_handler" if x.get("status") == "error" else "judge_perspective"
),
)

graph.add_conditional_edges(
@@ -90,15 +66,11 @@ def build_langgraph():
)
if state.get("score", 0) < 70
else "store_and_send"
)
),
)
graph.add_conditional_edges(
"store_and_send",
lambda x: (
"error_handler"
if x.get("status") == "error"
else "__end__"
)
lambda x: ("error_handler" if x.get("status") == "error" else "__end__"),
)

graph.set_finish_point("store_and_send")
11 changes: 5 additions & 6 deletions backend/app/modules/langgraph_nodes/error_handler.py
@@ -1,11 +1,10 @@


def error_handler(input):
print("Error detected!")
print(f"From: {input.get('error_from')}")
print(f"Message: {input.get('message')}")

return {"status": "stopped_due_to_error",
"from": [input.get("error_from")],
"error": [input.get("message")]
}
return {
"status": "stopped_due_to_error",
"from": [input.get("error_from")],
"error": [input.get("message")],
}
7 changes: 1 addition & 6 deletions backend/app/modules/langgraph_nodes/fact_check.py
@@ -1,4 +1,3 @@

from app.utils.fact_check_utils import run_fact_check_pipeline


@@ -25,9 +24,5 @@ def run_fact_check(state):
"status": "error",
"error_from": "fact_checking",
"message": f"{e}",
}
return {
**state,
"facts": verifications,
"status": "success"
}
return {**state, "facts": verifications, "status": "success"}
36 changes: 18 additions & 18 deletions backend/app/modules/langgraph_nodes/generate_perspective.py
@@ -13,10 +13,7 @@ class PerspectiveOutput(BaseModel):

my_llm = "llama-3.3-70b-versatile"

llm = ChatGroq(
model=my_llm,
temperature=0.7
)
llm = ChatGroq(model=my_llm, temperature=0.7)

💡 Verification agent

🧩 Analysis chain

Lower temperature for structured output and verify model ID

For schema-constrained generation, a lower temperature materially reduces parse/validation errors. Also, please confirm the Groq model ID is valid.

Apply:

-llm = ChatGroq(model=my_llm, temperature=0.7)
+llm = ChatGroq(model=my_llm, temperature=0.2)

Optionally consider making my_llm configurable via env/config for deployments.
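
A minimal sketch of that optional configuration (the GROQ_MODEL and GROQ_TEMPERATURE variable names are assumptions, not existing config):

```python
# Hypothetical sketch: read the model ID and temperature from the
# environment, falling back to the current defaults.
import os

from langchain_groq import ChatGroq

my_llm = os.getenv("GROQ_MODEL", "llama-3.3-70b-versatile")
temperature = float(os.getenv("GROQ_TEMPERATURE", "0.2"))

llm = ChatGroq(model=my_llm, temperature=temperature)
```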

To verify the model ID:


🌐 Web query:

What are the currently supported Groq model IDs for ChatGroq? Is "llama-3.3-70b-versatile" valid?

💡 Result:

Short answer: Yes — llama-3.3-70b-versatile is a valid Groq model ID and is commonly used with ChatGroq. (See Groq's model page and Chat/Playground examples.) (console.groq.com)

A representative list of models currently available for ChatGroq appears in Groq's docs and third-party integrator pages.

Notes and recommendation:

  • Model availability and exact IDs can change. To get the authoritative, up-to-date list you can call Groq's models endpoint (returns all active model IDs): GET https://api.groq.com/openai/v1/models (example curl shown in Groq docs; a Python sketch of the same lookup follows this list). (groq-preview.mintlify.app)
  • When using LangChain’s ChatGroq, pass the Groq model ID as the model parameter (e.g., model="llama-3.3-70b-versatile"). (python.langchain.com)
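
For instance, a small Python sketch of that lookup (assuming GROQ_API_KEY is set in the environment):

```python
# Sketch: list active Groq model IDs via the OpenAI-compatible models endpoint.
import os

import requests

resp = requests.get(
    "https://api.groq.com/openai/v1/models",
    headers={"Authorization": f"Bearer {os.environ['GROQ_API_KEY']}"},
    timeout=10,
)
resp.raise_for_status()
for model in resp.json()["data"]:
    print(model["id"])  # e.g. "llama-3.3-70b-versatile"
```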

If you’d like, I can (a) fetch the live model list right now from Groq’s API for you, or (b) show exact code examples for calling ChatGroq with llama-3.3-70b-versatile in your environment. Which would you prefer?


Lower temperature for structured output (model ID confirmed valid)

The Groq model ID "llama-3.3-70b-versatile" is valid and widely used. To improve schema-constrained generation reliability, please lower the temperature:

File: backend/app/modules/langgraph_nodes/generate_perspective.py
Lines: 16

- llm = ChatGroq(model=my_llm, temperature=0.7)
+ llm = ChatGroq(model=my_llm, temperature=0.2)

Optional: Make my_llm configurable via environment variables or your configuration system for easier deployments.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

-llm = ChatGroq(model=my_llm, temperature=0.7)
+llm = ChatGroq(model=my_llm, temperature=0.2)
🤖 Prompt for AI Agents
In backend/app/modules/langgraph_nodes/generate_perspective.py around line 16,
the ChatGroq LLM is instantiated with temperature=0.7 which can produce
non-deterministic, free-form output; lower the temperature (e.g., 0.0–0.2) to
improve schema-constrained/structured outputs and update the call accordingly,
and optionally make the model id and temperature configurable via environment
variables or the existing config system so deployments can adjust model and
temperature without code changes.


structured_llm = llm.with_structured_output(PerspectiveOutput)

@@ -37,24 +34,27 @@ def generate_perspective(state):
elif not facts:
raise ValueError("Missing or empty 'facts' in state")

facts_str = "\n".join([f"Claim: {f['original_claim']}\n"
"Verdict: {f['verdict']}\nExplanation: "
"{f['explanation']}" for f in state["facts"]])

result = chain.invoke({
"cleaned_article": text,
"facts": facts_str,
"sentiment": state.get("sentiment", "neutral")
})
facts_str = "\n".join(
[
f"Claim: {f['original_claim']}\n"
"Verdict: {f['verdict']}\nExplanation: "
"{f['explanation']}"
for f in state["facts"]
]
)
Comment on lines +37 to +44
⚠️ Potential issue

Critical: f-string interpolation bug in facts_str (‘verdict’ and ‘explanation’ not rendered)

Only the first segment is an f-string; the others are plain strings, so braces are emitted literally. This changes behavior.

Fix and also avoid an intermediate list:

-        facts_str = "\n".join(
-            [
-                f"Claim: {f['original_claim']}\n"
-                "Verdict: {f['verdict']}\nExplanation: "
-                "{f['explanation']}"
-                for f in state["facts"]
-            ]
-        )
+        facts_str = "\n".join(
+            (
+                f"Claim: {f['original_claim']}\n"
+                f"Verdict: {f['verdict']}\n"
+                f"Explanation: {f['explanation']}"
+            )
+            for f in facts
+        )
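
A minimal repro of the pitfall, with made-up values:

```python
# Only segments carrying the f prefix are interpolated; an adjacent plain
# string keeps its braces literally. The fact dict is made up.
fact = {"original_claim": "X", "verdict": "False", "explanation": "Y"}

broken = (
    f"Claim: {fact['original_claim']}\n"
    "Verdict: {fact['verdict']}"  # no f prefix: emitted literally
)
fixed = (
    f"Claim: {fact['original_claim']}\n"
    f"Verdict: {fact['verdict']}"
)
print(broken)  # -> "Claim: X\nVerdict: {fact['verdict']}"
print(fixed)   # -> "Claim: X\nVerdict: False"
```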

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In backend/app/modules/langgraph_nodes/generate_perspective.py around lines 37
to 44, the string building uses only the first segment as an f-string so
"{f['verdict']}" and "{f['explanation']}" are not interpolated; replace the
current list comprehension with a single generator expression that yields a
fully interpolated f-string for each fact and pass that generator directly to
"\n".join to avoid creating an intermediate list and ensure verdict and
explanation are rendered.


result = chain.invoke(
{
"cleaned_article": text,
"facts": facts_str,
"sentiment": state.get("sentiment", "neutral"),
}
)
except Exception as e:
print(f"some error occured in generate_perspective:{e}")
return {
"status": "error",
"error_from": "generate_perspective",
"message": f"{e}",
}
return {
**state,
"perspective": result,
"status": "success"
}
return {**state, "perspective": result, "status": "success"}
6 changes: 3 additions & 3 deletions backend/app/modules/langgraph_nodes/sentiment.py
@@ -25,9 +25,9 @@ def run_sentiment_sdk(state):
},
{
"role": "user",
"content": ("Analyze the sentiment of the following text:"
f"\n\n{text}"
),
"content": (
f"Analyze the sentiment of the following text:\n\n{text}"
),
},
],
model="gemma2-9b-it",
5 changes: 1 addition & 4 deletions backend/app/modules/langgraph_nodes/store_and_send.py
@@ -31,7 +31,4 @@ def store_and_send(state):
"message": f"{e}",
}
# sending to frontend
return {
**state,
"status": "success"
}
return {**state, "status": "success"}