Walkthrough

A new asynchronous POST request to a local bias detection API has been added in the analysis loading page, storing its response in sessionStorage. The results page now retrieves and displays the bias score from sessionStorage using a new state variable. A new backend module for bias detection and a corresponding API endpoint have been introduced. Existing flows and error handling remain unchanged.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
participant User
participant LoadingPage
participant BiasAPI
participant SessionStorage
participant ResultsPage
User->>LoadingPage: Initiates analysis
LoadingPage->>BiasAPI: POST article URL to /api/bias
BiasAPI-->>LoadingPage: Returns bias score
LoadingPage->>SessionStorage: Save bias score as "BiasScore"
User->>ResultsPage: Navigates to results
ResultsPage->>SessionStorage: Retrieve "BiasScore"
ResultsPage->>User: Display bias score in UI
```

Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Actionable comments posted: 3
🔭 Outside diff range comments (1)
frontend/app/analyze/results/page.tsx (1)
50-81: Remove duplicate useEffect hook.

There are two useEffect hooks handling similar logic for loading analysis data. This creates code duplication and potential race conditions. The second useEffect (lines 50-81) contains important validation and redirect logic that should be merged with the first one.
Consolidate the useEffect hooks:
```diff
 useEffect(() => {
+    if (isRedirecting.current) return;
+
     const timer = setTimeout(() => setIsLoading(false), 1500)
     const storedData = sessionStorage.getItem("analysisResult")
     const storedBiasScore = sessionStorage.getItem("BiasScore")
+
     if(storedBiasScore) setBiasScore(JSON.parse(storedBiasScore).bias_score)
     else console.warn("No bias score found.")

-    if (storedData) setAnalysisData(JSON.parse(storedData))
-    else console.warn("No analysis result found")
+    if (storedData) {
+      const parsedData = JSON.parse(storedData);
+      const requiredFields = ['cleaned_text', 'facts', 'sentiment', 'perspective', 'score'];
+      const isDataValid = requiredFields.every(field => parsedData[field] !== undefined && parsedData[field] !== null);
+
+      if (isDataValid) {
+        setAnalysisData(parsedData);
+      } else {
+        console.warn("Incomplete analysis data. Redirecting...");
+        isRedirecting.current = true;
+        router.push("/analyze");
+      }
+    } else {
+      console.warn("No analysis result found. Redirecting...");
+      isRedirecting.current = true;
+      router.push("/analyze");
+    }
+
     return () => clearTimeout(timer)
-  }, [])
-
-
-  useEffect(() => {
-    // Remove the entire second useEffect block (lines 50-81)
+  }, [router])
```
🧹 Nitpick comments (1)
frontend/app/analyze/loading/page.tsx (1)
83-87: Consider partial failure handling strategy.

The current error handling only catches failures from the main analysis API. If the bias API fails after the main analysis succeeds, users won't receive feedback about the partial failure. Consider implementing a strategy to handle partial failures gracefully.
You might want to provide user feedback for partial failures:
```ts
// After API calls, before progress simulation
const hasMainAnalysis = !!res.data;
const hasBiasScore = !!sessionStorage.getItem("BiasScore");

if (hasMainAnalysis && !hasBiasScore) {
  console.warn("Analysis completed but bias detection unavailable");
  // Optionally show a toast notification to the user
}
```
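As one hedged way to act on that note without adding a toast dependency, a small banner component could read the same sessionStorage keys; the component name, keys, and styling below are illustrative assumptions, not code from this PR.

```tsx
import { useEffect, useState } from "react"

// Sketch only: a minimal banner that flags a partial failure instead of a
// toast library. "BiasUnavailableNotice" and the styling are illustrative.
export function BiasUnavailableNotice() {
  const [biasUnavailable, setBiasUnavailable] = useState(false)

  useEffect(() => {
    // If the main analysis landed but no bias score was stored, flag it.
    const hasAnalysis = !!sessionStorage.getItem("analysisResult")
    const hasBias = !!sessionStorage.getItem("BiasScore")
    setBiasUnavailable(hasAnalysis && !hasBias)
  }, [])

  if (!biasUnavailable) return null
  return (
    <p className="text-sm text-yellow-600">
      Bias detection was unavailable for this article.
    </p>
  )
}
```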
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- frontend/app/analyze/loading/page.tsx (1 hunks)
- frontend/app/analyze/results/page.tsx (3 hunks)
🔇 Additional comments (1)
frontend/app/analyze/results/page.tsx (1)
116-117: Verify BiasMeter and bias score display consistency.

There's a potential inconsistency between the BiasMeter component and the displayed bias score:
- BiasMeter uses `score` from `analysisData` (Line 116)
- Text displays `biasScore` from the new state variable (Line 117)

These might show different values if they come from different sources.
Please confirm which source should be used for both displays. If they should be consistent, apply this diff:
```diff
- <BiasMeter score={score} />
+ <BiasMeter score={biasScore} />
  <p className="text-sm mt-2">Bias Score: {biasScore}</p>
```

Or verify that they intentionally display different types of scores.
```ts
const bias_score = await axios.post("http://localhost:8000/api/bias",{
  url: storedUrl,
})
```
🛠️ Refactor suggestion
Improve variable naming and add type definitions.
The variable bias_score is misleading since it contains the entire API response, not just the score value. Also, use consistent camelCase naming for TypeScript.
Consider adding proper TypeScript interfaces:
```diff
+interface BiasResponse {
+ bias_score: number;
+ // Add other expected properties
+}
+
- const bias_score = await axios.post("http://localhost:8000/api/bias",{
+ const biasResponse = await axios.post<BiasResponse>("http://localhost:8000/api/bias", {
url: storedUrl,
 })
```

🤖 Prompt for AI Agents
In frontend/app/analyze/loading/page.tsx around lines 67 to 69, rename the
variable bias_score to a more descriptive name like biasResponse to reflect that
it holds the entire API response, not just the score. Change the variable name
to camelCase for consistency. Additionally, define and use a TypeScript
interface to type the expected structure of the API response and apply it to the
axios.post call to improve type safety.
```ts
const bias_score = await axios.post("http://localhost:8000/api/bias",{
  url: storedUrl,
})

// Save response to sessionStorage
sessionStorage.setItem("analysisResult", JSON.stringify(res.data))

// optional logging
console.log("Analysis result saved")
console.log(res)

sessionStorage.setItem("biasScore", JSON.stringify(bias_score.data))

console.log("Bias score saved")
console.log(bias_score)
// optional logging
```
Add proper error handling and fix configuration issues.
Several critical issues need to be addressed:
- Missing error handling: The bias API call isn't wrapped in try-catch, which could cause the entire analysis to fail if the bias endpoint is unavailable.
- Hardcoded localhost URL: Using localhost:8000 will break in production environments. This should be configurable via environment variables.
- sessionStorage key inconsistency: This code uses "biasScore" but the results page expects "BiasScore" (Line 40 in results page).
Apply this diff to fix these issues:
```diff
 try {
const res = await axios.post("https://Thunder1245-perspective-backend.hf.space/api/process", {
url: storedUrl,
})
- const bias_score = await axios.post("http://localhost:8000/api/bias",{
- url: storedUrl,
- })
// Save response to sessionStorage
sessionStorage.setItem("analysisResult", JSON.stringify(res.data))
console.log("Analysis result saved")
console.log(res)
- sessionStorage.setItem("biasScore", JSON.stringify(bias_score.data))
-
- console.log("Bias score saved")
- console.log(bias_score)
- // optional logging
+
+ // Bias API call with proper error handling
+ try {
+ const bias_score = await axios.post(process.env.NEXT_PUBLIC_BIAS_API_URL || "http://localhost:8000/api/bias", {
+ url: storedUrl,
+ })
+ sessionStorage.setItem("BiasScore", JSON.stringify(bias_score.data))
+ console.log("Bias score saved:", bias_score.data)
+ } catch (biasError) {
+ console.warn("Bias API unavailable, continuing without bias score:", biasError)
+ sessionStorage.setItem("BiasScore", JSON.stringify({ bias_score: null }))
+ }
 } catch (err) {
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In frontend/app/analyze/loading/page.tsx around lines 67 to 82, add a try-catch
block around the axios.post call to the bias API to handle potential errors
gracefully. Replace the hardcoded "http://localhost:8000/api/bias" URL with a
value read from an environment variable to support different environments. Also,
change the sessionStorage key from "biasScore" to "BiasScore" to match the
expected key used in the results page.
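For the environment-variable point, here is a hedged sketch of one way to centralize the base URL; the NEXT_PUBLIC_BIAS_API_URL name, the lib/config.ts path, and the localhost fallback are illustrative assumptions, not this PR's actual setup.

```ts
// lib/config.ts — a minimal sketch, assuming a Next.js frontend where
// browser-visible env vars must use the NEXT_PUBLIC_ prefix. The variable
// name and the localhost fallback are illustrative defaults.
export const BIAS_API_URL =
  process.env.NEXT_PUBLIC_BIAS_API_URL ?? "http://localhost:8000/api/bias"

// Usage in the loading page (sketch):
// import { BIAS_API_URL } from "@/lib/config"
// const biasResponse = await axios.post(BIAS_API_URL, { url: storedUrl })
```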
```tsx
 */
export default function AnalyzePage() {
  const [analysisData, setAnalysisData] = useState<any>(null)
  const [biasScore, setBiasScore] = useState<any>(null)
```
🛠️ Refactor suggestion
Improve type safety and add error handling for JSON parsing.
The bias score state management needs better type safety and error handling:
- Type safety: Using `any` reduces type safety benefits
- JSON parsing: No error handling if stored data is corrupted
- Nested property access: Could fail if response structure changes
Apply this diff to improve robustness:
```diff
+interface BiasData {
+ bias_score: number | null;
+}
+
- const [biasScore, setBiasScore] = useState<any>(null)
+ const [biasScore, setBiasScore] = useState<number | null>(null)
const storedBiasScore = sessionStorage.getItem("BiasScore")
- if(storedBiasScore) setBiasScore(JSON.parse(storedBiasScore).bias_score)
- else console.warn("No bias score found.")
+ if (storedBiasScore) {
+ try {
+ const biasData: BiasData = JSON.parse(storedBiasScore)
+ setBiasScore(biasData.bias_score ?? null)
+ } catch (error) {
+ console.error("Failed to parse bias score:", error)
+ setBiasScore(null)
+ }
+ } else {
+ console.warn("No bias score found.")
+ }
```

Also applies to: 40-42
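The same guards can also be factored into a small reusable helper so every sessionStorage read gets the try/catch; the safeParse name, generic signature, and BiasData shape below are hypothetical, used only for illustration.

```ts
// Sketch: a reusable wrapper around JSON.parse for sessionStorage reads.
// Returns null on missing or corrupted data instead of throwing.
function safeParse<T>(raw: string | null): T | null {
  if (!raw) return null
  try {
    return JSON.parse(raw) as T
  } catch (error) {
    console.error("Failed to parse stored value:", error)
    return null
  }
}

// Usage with the BiasData shape suggested above (assumed, not confirmed):
interface BiasData { bias_score: number | null }
const biasData = safeParse<BiasData>(sessionStorage.getItem("BiasScore"))
const biasScoreValue = biasData?.bias_score ?? null
```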
🤖 Prompt for AI Agents
In frontend/app/analyze/results/page.tsx at lines 22 and 40-42, replace the
useState<any> for biasScore with a more specific type to improve type safety.
Add try-catch blocks around JSON parsing to handle potential errors from
corrupted stored data gracefully. Also, add checks for nested property existence
before accessing them to prevent runtime errors if the response structure
changes.
Actionable comments posted: 4
🧹 Nitpick comments (3)
backend/app/modules/bias_detection/check_bias.py (3)
11-17: Add type hints and consider parameter validation.

The function signature and basic validation look good, but could be enhanced:
```diff
-def check_bias(text):
+def check_bias(text: str) -> dict:
     try:
-        print(text)
-        print(json.dumps(text))
-        if not text:
+        if not text or not isinstance(text, str):
-            raise ValueError("Missing or empty 'cleaned_text'")
+            raise ValueError("Missing, empty, or invalid text input")
```
51-57: Enhance error handling specificity.

Consider adding more specific error handling for different types of failures.
```diff
     except Exception as e:
-        print(f"Error in bias_detection: {e}")
+        import logging
+        logger = logging.getLogger(__name__)
+        logger.error(f"Error in bias_detection: {e}")
+
+        # More specific error types
+        if "api" in str(e).lower() or "groq" in str(e).lower():
+            error_source = "groq_api"
+        elif "network" in str(e).lower() or "connection" in str(e).lower():
+            error_source = "network"
+        else:
+            error_source = "bias_detection"
+
         return {
             "status": "error",
-            "error_from": "bias_detection",
+            "error_from": error_source,
             "message": str(e),
         }
```
39-42: Consider adding timeout and retry logic.

The API call should include timeout handling and potentially retry logic for production use.
```diff
 chat_completion = client.chat.completions.create(
     messages=[...],
     model="gemma2-9b-it",
     temperature=0.3,
     max_tokens=512,
+    timeout=30,  # Add timeout
 )
```

Consider wrapping in retry logic for transient failures:
```python
from tenacity import retry, stop_after_attempt, wait_exponential

@retry(stop=stop_after_attempt(3), wait=wait_exponential(multiplier=1, min=4, max=10))
def _make_api_call(messages, model, temperature, max_tokens):
    return client.chat.completions.create(...)
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- backend/app/modules/bias_detection/check_bias.py (1 hunks)
- backend/app/routes/routes.py (2 hunks)
- frontend/app/analyze/loading/page.tsx (8 hunks)
- frontend/app/analyze/page.tsx (7 hunks)
- frontend/app/analyze/results/page.tsx (3 hunks)
✅ Files skipped from review due to trivial changes (1)
- frontend/app/analyze/page.tsx
🚧 Files skipped from review as they are similar to previous changes (2)
- frontend/app/analyze/results/page.tsx
- frontend/app/analyze/loading/page.tsx
🔇 Additional comments (2)
backend/app/routes/routes.py (1)
6-6: LGTM - Clean import addition.

The import statement correctly references the new bias detection module.
backend/app/modules/bias_detection/check_bias.py (1)
1-9: LGTM - Clean module setup.

The imports and client initialization follow good practices with environment variable usage for API credentials.
```python
print(text)
print(json.dumps(text))
```
🛠️ Refactor suggestion
Remove debug print statements.
Debug print statements should be removed from production code or replaced with proper logging.
```diff
-        print(text)
-        print(json.dumps(text))
```

If logging is needed, use a proper logger:
```diff
+import logging
+logger = logging.getLogger(__name__)
+logger.debug(f"Processing text of length: {len(text)}")
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-print(text)
-print(json.dumps(text))
+import logging
+logger = logging.getLogger(__name__)
+logger.debug(f"Processing text of length: {len(text)}")
```
🤖 Prompt for AI Agents
In backend/app/modules/bias_detection/check_bias.py around lines 13 to 14,
remove the debug print statements that output the variable 'text' and its JSON
representation. If output is necessary for monitoring or debugging in
production, replace these print statements with calls to a proper logging
framework configured for the application, ensuring logs are appropriately
leveled and formatted.
| "role": "system", | ||
| "content": ( | ||
| "You are an assistant that checks " | ||
| "if given article is biased and give" | ||
| "score to each based on biasness where 0 is lowest bias and 100 is highest bias" | ||
| "Only return a number between 0 to 100 base on bias." | ||
| "only return Number No Text" | ||
| ), |
🛠️ Refactor suggestion
Improve system prompt formatting and clarity.
The system prompt has formatting and grammatical issues that could affect model performance.
"content": (
- "You are an assistant that checks "
- "if given article is biased and give"
- "score to each based on biasness where 0 is lowest bias and 100 is highest bias"
- "Only return a number between 0 to 100 base on bias."
- "only return Number No Text"
+ "You are an assistant that analyzes articles for bias. "
+ "Rate the bias level on a scale from 0 to 100, where: "
+ "0 = completely unbiased/neutral, 100 = extremely biased. "
+ "Return ONLY a single number between 0 and 100. "
+ "Do not include any explanatory text."
 ),
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| "role": "system", | |
| "content": ( | |
| "You are an assistant that checks " | |
| "if given article is biased and give" | |
| "score to each based on biasness where 0 is lowest bias and 100 is highest bias" | |
| "Only return a number between 0 to 100 base on bias." | |
| "only return Number No Text" | |
| ), | |
| "role": "system", | |
| "content": ( | |
| "You are an assistant that analyzes articles for bias. " | |
| "Rate the bias level on a scale from 0 to 100, where: " | |
| "0 = completely unbiased/neutral, 100 = extremely biased. " | |
| "Return ONLY a single number between 0 and 100. " | |
| "Do not include any explanatory text." | |
| ), |
🤖 Prompt for AI Agents
In backend/app/modules/bias_detection/check_bias.py around lines 22 to 29, the
system prompt string has formatting and grammatical errors that reduce clarity.
Rewrite the prompt to use proper spacing, punctuation, and grammar for better
readability and model understanding. Ensure the instructions are concise and
clearly state that the output should be a single number between 0 and 100
representing bias, with no additional text.
```python
bias_score = chat_completion.choices[0].message.content.strip()

return {
    "bias_score": bias_score,
    "status": "success",
}
```
🛠️ Refactor suggestion
Validate numeric response from AI model.
The response should be validated to ensure it's actually a numeric value as requested.
```diff
 bias_score = chat_completion.choices[0].message.content.strip()
+
+ # Validate that the response is numeric
+ try:
+ bias_value = float(bias_score)
+ if not (0 <= bias_value <= 100):
+ raise ValueError(f"Bias score {bias_value} outside valid range 0-100")
+ bias_score = str(int(bias_value)) # Convert to integer string
+ except ValueError as ve:
+ raise ValueError(f"Invalid numeric response from AI model: {bias_score}")
return {
"bias_score": bias_score,
"status": "success",
 }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
 bias_score = chat_completion.choices[0].message.content.strip()
+
+# Validate that the response is numeric
+try:
+    bias_value = float(bias_score)
+    if not (0 <= bias_value <= 100):
+        raise ValueError(f"Bias score {bias_value} outside valid range 0-100")
+    bias_score = str(int(bias_value))  # Convert to integer string
+except ValueError as ve:
+    raise ValueError(f"Invalid numeric response from AI model: {bias_score}")
+
 return {
     "bias_score": bias_score,
     "status": "success",
 }
```
🤖 Prompt for AI Agents
In backend/app/modules/bias_detection/check_bias.py around lines 44 to 49, the
bias_score returned from the AI model is not validated to confirm it is a
numeric value. Add validation logic to check if the bias_score string can be
converted to a numeric type (e.g., float or int). If the conversion fails,
handle the error appropriately, such as returning a failure status or raising an
exception, to ensure only valid numeric bias scores are processed.
```python
@router.post("/bias")
async def bias_detection(request: URlRequest):
    content = run_scraper_pipeline(request.url)
    bias_score = check_bias(content)
    print(bias_score)
    return bias_score
```
🛠️ Refactor suggestion
Address async/sync mismatch and improve logging.
The endpoint implementation has several areas for improvement:
- Blocking operations in async function: Both `run_scraper_pipeline()` and `check_bias()` appear to be synchronous functions called within an async endpoint, which can block the event loop.
- Console logging: Using `print()` for logging is not ideal for production applications.
Consider these improvements:
```diff
 @router.post("/bias")
async def bias_detection(request: URlRequest):
- content = run_scraper_pipeline(request.url)
- bias_score = check_bias(content)
- print(bias_score)
- return bias_score
+ import logging
+ logger = logging.getLogger(__name__)
+
+ content = await asyncio.to_thread(run_scraper_pipeline, request.url)
+ bias_result = await asyncio.to_thread(check_bias, content)
+ logger.info(f"Bias detection result: {bias_result}")
+    return bias_result
```

Also add the import at the top:
```diff
+import asyncio
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
+import asyncio
+
 @router.post("/bias")
 async def bias_detection(request: URlRequest):
-    content = run_scraper_pipeline(request.url)
-    bias_score = check_bias(content)
-    print(bias_score)
-    return bias_score
+    import logging
+    logger = logging.getLogger(__name__)
+
+    content = await asyncio.to_thread(run_scraper_pipeline, request.url)
+    bias_result = await asyncio.to_thread(check_bias, content)
+    logger.info(f"Bias detection result: {bias_result}")
+    return bias_result
```
🤖 Prompt for AI Agents
In backend/app/routes/routes.py around lines 19 to 25, the async endpoint calls
synchronous functions run_scraper_pipeline() and check_bias(), which can block
the event loop. Refactor these calls to run in a thread pool executor or convert
them to async if possible to avoid blocking. Replace the print statement with
proper logging using the logging module, and add the necessary import for
logging at the top of the file.
…ckend with asyncio
Description:
I have added a bias-checking endpoint in the backend to check the bias of the article.

Tasks done:
- The frontend now shows a real bias score instead of a hard-coded value.
Workflow Overview
- The frontend sends the article URL to the POST /process and POST /bias endpoints in parallel.
- The /bias endpoint returns a bias score (0–100).
- The frontend navigates to /results and renders the bias score.

Workflow:
```mermaid
sequenceDiagram
    participant U as User
    participant F as Frontend
    participant P as "/process"
    participant B as "/bias (Gemma2-9B)"
    participant R as "/results"
    U->>F: Enters URL
    F->>P: POST URL
    par Parallel calls
        F->>B: POST URL
    and
        F->>P: POST URL
    end
    B-->>F: Bias score (0–100)
    F->>R: Render bias score
```
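To make the parallel-call step concrete, here is a minimal sketch of how the frontend might fire both requests concurrently; the endpoint paths and storage keys follow the description above, but this is an illustration under those assumptions, not the PR's actual code.

```ts
import axios from "axios"

// Sketch of the parallel-call step from the diagram. Promise.allSettled keeps
// a /bias failure from discarding the /process result. Base URLs are relative
// placeholders; the real app would resolve them from configuration.
async function analyzeArticle(articleUrl: string) {
  const [processResult, biasResult] = await Promise.allSettled([
    axios.post("/api/process", { url: articleUrl }),
    axios.post("/api/bias", { url: articleUrl }),
  ])

  if (processResult.status === "fulfilled") {
    sessionStorage.setItem("analysisResult", JSON.stringify(processResult.value.data))
  }
  if (biasResult.status === "fulfilled") {
    sessionStorage.setItem("BiasScore", JSON.stringify(biasResult.value.data))
  }
}
```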
Summary by CodeRabbit
New Features
Bug Fixes
Chores