Base Langgraph setup with node files. #99
Changes from all commits
**Graph builder** (new file, `@@ -0,0 +1,94 @@`):

```python
from langgraph.graph import StateGraph
from app.modules.langgraph_nodes import (
    sentiment,
    fact_check,
    generate_perspective,
    judge,
    store_and_send,
    error_handler,
)


def build_langgraph():
    graph = StateGraph()

    graph.add_node("sentiment_analysis", sentiment.run_sentiment)
    graph.add_node("fact_checking", fact_check.run_fact_check)
    graph.add_node("generate_perspective", generate_perspective.generate_perspective)
    graph.add_node("judge_perspective", judge.judge_perspective)
    graph.add_node("store_and_send", store_and_send.store_and_send)
    graph.add_node("error_handler", error_handler)

    graph.set_entry_point("sentiment_analysis")

    graph.set_conditional_edges(
        "sentiment_analysis",
        lambda x: "error_handler" if x.get("status") == "error" else "fact_checking",
    )

    graph.set_conditional_edges(
        "fact_checking",
        lambda x: (
            "error_handler"
            if x.get("status") == "error"
            else "generate_perspective"
        ),
    )

    graph.set_conditional_edges(
        "generate_perspective",
        lambda x: (
            "error_handler"
            if x.get("status") == "error"
            else "judge_perspective"
        ),
    )

    graph.set_conditional_edges(
        "judge_perspective",
        lambda state: (
            "error_handler"
            if state.get("status") == "error"
            else (
                "store_and_send"
                if state.get("retries", 0) >= 3
                else "generate_perspective"
            )
            if state.get("score", 0) < 70
            else "store_and_send"
        ),
    )

    graph.set_conditional_edges(
        "store_and_send",
        lambda x: "error_handler" if x.get("status") == "error" else None,
    )

    graph.set_finish_point("store_and_send")

    return graph.compile()
```
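The routing lambda attached to `judge_perspective` packs three decisions into one nested conditional expression. Unrolled as a plain function (pure Python, no LangGraph required; the name `route_after_judge` is illustrative, not part of the PR), it reads:

```python
def route_after_judge(state: dict) -> str:
    """Equivalent, unrolled form of the conditional edge after 'judge_perspective'."""
    if state.get("status") == "error":
        return "error_handler"
    if state.get("score", 0) < 70:
        # low score: retry generation unless the retry budget is exhausted
        if state.get("retries", 0) >= 3:
            return "store_and_send"
        return "generate_perspective"
    return "store_and_send"


assert route_after_judge({"status": "error"}) == "error_handler"
assert route_after_judge({"score": 85}) == "store_and_send"
assert route_after_judge({"score": 40, "retries": 1}) == "generate_perspective"
assert route_after_judge({"score": 40, "retries": 3}) == "store_and_send"
```

Because Python's conditional expression associates to the right, the error check wins first, then the score threshold, then the retry budget, which is exactly what the asserts above exercise.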
**error_handler** (new file, `@@ -0,0 +1,11 @@`):

```python
def error_handler(input):
    print("Error detected!")
    print(f"From: {input.get('error_from')}")
    print(f"Message: {input.get('message')}")

    return {
        "status": "stopped_due_to_error",
        "from": [input.get("error_from")],
        "error": [input.get("message")],
    }
```
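Taken together with the reviewer's refactor note at the bottom of this page, a variant of `error_handler` might swap `print()` for the `logging` module and rename the parameter so it no longer shadows the `input()` built-in (a sketch, not part of this PR):

```python
import logging

logger = logging.getLogger(__name__)


def error_handler(payload: dict) -> dict:
    # 'payload' instead of 'input' avoids shadowing the built-in input()
    logger.error(
        "Error detected! From: %s Message: %s",
        payload.get("error_from"),
        payload.get("message"),
    )
    return {
        "status": "stopped_due_to_error",
        "from": [payload.get("error_from")],
        "error": [payload.get("message")],
    }


result = error_handler({"error_from": "fact_checking", "message": "boom"})
assert result["from"] == ["fact_checking"]
assert result["error"] == ["boom"]
```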
**fact_check** (new file, `@@ -0,0 +1,30 @@`):

```python
# web search + fact check


def search_web():
    return []


def run_fact_check(state):
    try:
        text = state.get("cleaned_text")
        keywords = state["keywords"]

        if not text:
            raise ValueError("Missing or empty 'cleaned_text' in state")
        elif not keywords:
            raise ValueError("Missing or empty 'keywords' in state")

        results = search_web(text + " " + " ".join(keywords))
        sources = [{"snippet": r.text, "url": r.link} for r in results]
    except Exception as e:
        print(f"some error occured in fact_checking:{e}")
        return {
            "status": "error",
            "error_from": "fact_checking",
            "message": f"{e}",
        }
    return {
        **state,
        "facts": sources,
        "status": "success",
    }
```

> **Contributor, on lines +3 to +4** — Runtime-blocking: `search_web()` is defined with no parameters, but `run_fact_check` calls it with a query string, so every invocation will raise `TypeError`. Diff to align the signature with its usage and add a minimal contract:
>
> ```diff
> -def search_web():
> -    return []
> +def search_web(query: str) -> list[dict]:
> +    """
> +    Placeholder web-search. Keeps the pipeline alive until a real
> +    implementation is plugged in.
> +    """
> +    # TODO: integrate actual search provider
> +    return []
> ```

> **Contributor, on lines +10 to +15** — 🛠️ Refactor suggestion: `state["keywords"]` raises `KeyError` before the explicit validation below can run; prefer `.get()`:
>
> ```diff
> -        keywords = state["keywords"]
> +        keywords = state.get("keywords", [])
> ```
>
> After the first `raise`, the `elif` is unnecessary (Pylint R1720: remove the leading "el" from "elif"):
>
> ```diff
> -        if not text:
> -            raise ValueError(...)
> -        elif not keywords:
> +        if not text:
> +            raise ValueError(...)
> +        if not keywords:
>              raise ValueError(...)
> ```
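The signature mismatch the first comment calls runtime-blocking is easy to confirm in isolation (stub code; it mirrors only the signatures involved, not the search logic):

```python
def search_web():  # zero-argument signature, as merged in this PR
    return []


# run_fact_check calls search_web(text + " " + " ".join(keywords)),
# so the call fails before any searching can happen:
try:
    search_web("example query")
    raised = False
except TypeError:
    raised = True

assert raised  # TypeError: search_web() takes 0 positional arguments but 1 was given


# The reviewer's fix: accept the query and return an empty result list
def search_web_fixed(query: str) -> list:
    # TODO: integrate an actual search provider (placeholder)
    return []


assert search_web_fixed("example query") == []
```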
**generate_perspective** (new file, `@@ -0,0 +1,47 @@`):

```python
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate

prompt = PromptTemplate(
    input_variables=["text", "facts"],
    template="""Given the following article:
{text}

And the following verified facts:
{facts}

Generate a reasoned opposing perspective using chain-of-thought logic.
""",
)

my_llm = "groq llm"

chain = LLMChain(prompt=prompt, llm=my_llm)


def generate_perspective(state):
    try:
        retries = state.get("retries", 0)
        state["retries"] = retries + 1

        text = state["cleaned_text"]
        facts = state.get("facts")

        if not text:
            raise ValueError("Missing or empty 'cleaned_text' in state")
        elif not facts:
            raise ValueError("Missing or empty 'facts' in state")

        facts = "\n".join([f["snippet"] for f in state["facts"]])
        result = chain.run({"text": text, "facts": facts})
    except Exception as e:
        print(f"some error occured in generate_perspective:{e}")
        return {
            "status": "error",
            "error_from": "generate_perspective",
            "message": f"{e}",
        }
    return {
        **state,
        "perspective": result,
        "status": "success",
    }
```

> **Contributor, on lines +16 to +19** — Passing a plain string as the LLM will raise during graph compilation. Either inject a real LLM or keep the node disabled behind a feature flag. Example fix with LangChain's OpenAI wrapper:
>
> ```diff
> -from langchain.chains import LLMChain
> +from langchain.chains import LLMChain
> +from langchain_openai import ChatOpenAI  # or any provider
>
> -my_llm = "groq llm"
> +my_llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0.7)
> ```
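What the `PromptTemplate` produces can be previewed with plain `str.format()`, no LangChain required; note how `generate_perspective` first flattens the `facts` list of `{"snippet", "url"}` dicts (built by `fact_check`) into newline-joined snippets. Sample values below are invented:

```python
# Same template string as the PromptTemplate above
template = """Given the following article:
{text}

And the following verified facts:
{facts}

Generate a reasoned opposing perspective using chain-of-thought logic.
"""

facts_state = [
    {"snippet": "Fact A", "url": "https://example.com/a"},
    {"snippet": "Fact B", "url": "https://example.com/b"},
]

# The flattening step from generate_perspective:
facts = "\n".join(f["snippet"] for f in facts_state)

prompt_text = template.format(text="Some article text", facts=facts)

assert "Fact A\nFact B" in prompt_text
assert "Some article text" in prompt_text
```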
**judge** (new file, `@@ -0,0 +1,23 @@`):

```python
def judge_perspective(state):
    # Dummy scoring
    try:
        perspective = state.get("perspective")

        if not perspective:
            raise ValueError("Missing or empty 'perspective' in state")

        score = 85 if "reasoned" in perspective else 40
    except Exception as e:
        print(f"some error occured in judge_perspetive:{e}")
        return {
            "status": "error",
            "error_from": "judge_perspective",
            "message": f"{e}",
        }
    return {
        **state,
        "score": score,
        "status": "success",
    }


# llm based score assignment
```

> **Collaborator** — The LLM should consider factors like logic, factual alignment, coherence, and tone, and return a numeric score (0–100) which can be parsed and added to the state.
>
> **Author** — Exactly.
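The thread above converges on LLM-based scoring that returns a number in 0–100. A hypothetical helper for pulling such a score out of a free-form model reply (the name `parse_score` and the regex are assumptions, not part of the PR):

```python
import re


def parse_score(llm_reply: str) -> int:
    """Extract the first 0-100 integer from a free-form LLM reply."""
    match = re.search(r"\b(\d{1,3})\b", llm_reply)
    if not match:
        raise ValueError("No numeric score found in reply")
    score = int(match.group(1))
    if not 0 <= score <= 100:
        raise ValueError(f"Score {score} is outside the 0-100 range")
    return score


assert parse_score("Score: 85/100") == 85
assert parse_score("coherence is weak, 40 at best") == 40
```

The parsed value could then be merged into the state the same way the dummy `score` is today, leaving the routing lambda untouched.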
**sentiment** (new file, `@@ -0,0 +1,27 @@`):

```python
from transformers import pipeline

sentiment_pipeline = pipeline("sentiment-analysis")


def run_sentiment(state):
    try:
        text = state.get("cleaned_text")
        if not text:
            raise ValueError("Missing or empty 'cleaned_text' in state")

        result = sentiment_pipeline(text)[0]

        return {
            **state,
            "sentiment": result["label"],
            "sentiment_score": result["score"],
            "status": "success",
        }

    except Exception as e:
        print(f"Error in sentiment_analysis: {e}")
        return {
            "status": "error",
            "error_from": "sentiment_analysis",
            "message": str(e),
        }
```

> **Collaborator** — Instead of using the default model for sentiment analysis, consider exploring more robust models for improved sentiment analysis.
>
> **Author** — This is just a base setup; I am going to build all the modules in detail.

> **Contributor, on lines +1 to +3** — 🛠️ Refactor suggestion: eager pipeline initialisation will slow cold-starts and consumes ~1–2 GB RAM. Load it lazily instead:
>
> ```diff
> -from transformers import pipeline
> -
> -sentiment_pipeline = pipeline("sentiment-analysis")
> +from transformers import pipeline
> +
> +_sentiment_pipeline = None
> +
> +
> +def _get_pipeline():
> +    global _sentiment_pipeline
> +    if _sentiment_pipeline is None:  # lazy, thread-safe under GIL
> +        _sentiment_pipeline = pipeline("sentiment-analysis")
> +    return _sentiment_pipeline
> ```
**store_and_send** (new file, `@@ -0,0 +1,21 @@`):

```python
from utils.vector_store import save_to_vector_db


def store_and_send(state):
    # to store data in vector db
    try:
        save_to_vector_db({
            **state
        })
    except Exception as e:
        print(f"some error occured in store_and_send:{e}")
        return {
            "status": "error",
            "error_from": "store_and_send",
            "message": f"{e}",
        }
    # sending to frontend
    return {
        **state,
        "status": "success",
    }
```

> **Contributor** — 💡 Verification agent: check import path correctness. A quick repository search (`fd vector_store.py`) shows the module lives under the `app` package, so the bare `utils` import will fail; update the import to include the `app` prefix.
>
> File: `new-backend/app/modules/langgraph_nodes/store_and_send.py`
>
> ```diff
> -from utils.vector_store import save_to_vector_db
> +from app.utils.vector_store import save_to_vector_db
> ```
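Every node in this PR hand-builds the same two envelopes: `{**state, ..., "status": "success"}` on success and the `status`/`error_from`/`message` dict on failure. A possible consolidation (hypothetical helpers; the names `node_success` and `node_error` are not in the PR):

```python
def node_success(state: dict, **updates) -> dict:
    """Merge node output into the state and mark it successful."""
    return {**state, **updates, "status": "success"}


def node_error(node: str, exc: Exception) -> dict:
    """Uniform error envelope routed to error_handler by the conditional edges."""
    return {"status": "error", "error_from": node, "message": str(exc)}


s = node_success({"cleaned_text": "..."}, facts=[])
assert s["status"] == "success" and s["facts"] == []

e = node_error("fact_checking", ValueError("boom"))
assert e == {"status": "error", "error_from": "fact_checking", "message": "boom"}
```

With helpers like these, each node body shrinks to its actual work plus one `return node_success(...)` / `return node_error(...)` pair, and the envelope shape the routing lambdas depend on is defined in exactly one place.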
**Routes** (modified):

```diff
@@ -1,6 +1,7 @@
 from fastapi import APIRouter
 from pydantic import BaseModel
 from app.modules.pipeline import run_scraper_pipeline
+from app.modules.pipeline import run_langgraph_workflow
 import json

 router = APIRouter()
@@ -19,4 +20,5 @@ async def home():
 async def run_pipelines(request: URlRequest):
     article_text = run_scraper_pipeline(request.url)
     print(json.dumps(article_text, indent=2))
-    return article_text
+    data = run_langgraph_workflow(article_text)
+    return data
```

> **Contributor, on lines +23 to +24** — State payload is incompatible with the sentiment node and will raise. Map the scraper output to the keys the LangGraph workflow expects:
>
> ```diff
> -    data = run_langgraph_workflow(article_text)
> -    return data
> +    # Map scraper output to the keys expected by the LangGraph workflow
> +    langgraph_state = {
> +        "text": article_text["cleaned_text"],
> +        "keywords": article_text["keywords"],
> +    }
> +    data = run_langgraph_workflow(langgraph_state)
> +    return data
> ```
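The incompatibility flagged above is easy to reproduce without the real pipeline. This stub is hypothetical and mirrors only the validation at the top of `run_sentiment`, to show how a payload keyed anything other than `cleaned_text` short-circuits to the error path:

```python
def run_sentiment_stub(state: dict) -> dict:
    # Mirrors run_sentiment's guard: the node looks up 'cleaned_text'
    text = state.get("cleaned_text")
    if not text:
        return {
            "status": "error",
            "error_from": "sentiment_analysis",
            "message": "Missing or empty 'cleaned_text' in state",
        }
    return {**state, "status": "success"}


# A payload missing the expected key routes straight to error_handler:
bad = run_sentiment_stub({"text": "article body"})
assert bad["status"] == "error"

good = run_sentiment_stub({"cleaned_text": "article body"})
assert good["status"] == "success"
```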
> **Contributor, on `error_handler`** — 🛠️ Refactor suggestion: avoid shadowing built-ins and switch to proper logging. Using the parameter name `input` shadows Python's built-in `input()` function, which can be confusing. Additionally, `print()` statements are not suitable for production logging; prefer the project's configured logger.
🤖 Prompt for AI Agents