Bias checking endpoint.#112

Merged
ManavSarkar merged 3 commits into main from bias-api-endpoint on Aug 10, 2025

Conversation

ParagGhatage (Collaborator) commented Aug 1, 2025

Description:

I have added a bias-checking endpoint in the backend to check the bias of an article.

Tasks done:

  • Added an endpoint to handle check-bias requests.
  • Connected it to the frontend.
  • Optimized it to return only a number as the output.

The frontend now shows a real bias score instead of a hard-coded value.

Workflow Overview

  1. User enters a URL.
  2. Frontend sends the URL in parallel to:
    • POST /process
    • POST /bias
  3. /bias endpoint:
    • Calls the Groq API (using the gemma2-9b-it model)
    • Analyzes the article’s bias
    • Returns a single numeric score between 0 and 100
  4. Frontend navigates to /results and renders the bias score.
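
For illustration, here is a minimal TypeScript sketch of the parallel calls described above. This is not the PR's actual code: the relative endpoint paths, the BiasResponse shape, and the runAnalysis helper are assumptions for the sketch; only the sessionStorage keys follow this PR.

// Hypothetical helper sketching the parallel /process and /bias calls.
import axios from "axios";

// Assumed response shape: /bias returns a single 0-100 score.
interface BiasResponse {
  bias_score: number | string;
  status: string;
}

async function runAnalysis(url: string): Promise<void> {
  // Fire both requests in parallel; allSettled lets the main
  // analysis succeed even if the bias endpoint is unavailable.
  const [processResult, biasResult] = await Promise.allSettled([
    axios.post("/api/process", { url }),
    axios.post<BiasResponse>("/api/bias", { url }),
  ]);

  if (processResult.status === "fulfilled") {
    sessionStorage.setItem("analysisResult", JSON.stringify(processResult.value.data));
  }

  if (biasResult.status === "fulfilled") {
    sessionStorage.setItem("BiasScore", JSON.stringify(biasResult.value.data));
  } else {
    console.warn("Bias API unavailable; continuing without a bias score.");
  }
}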

Workflow:

sequenceDiagram
    participant U as User
    participant F as Frontend
    participant P as "/process"
    participant B as "/bias (gemma2-9b-it)"
    participant R as "/results"

    U->>F: Enters URL
    par Parallel calls
        F->>B: POST URL
    and
        F->>P: POST URL
    end
    B-->>F: Bias score (0–100)
    F->>R: Render bias score

Summary by CodeRabbit

  • New Features

    • Added integration with a bias detection API to analyze articles for bias.
    • Bias scores are now displayed on the results page alongside existing analysis data.
  • Bug Fixes

    • Improved handling and display of bias scores by retrieving and validating the correct value from session storage.
  • Chores

    • Applied stylistic and formatting improvements across several frontend files for better readability.

coderabbitai bot commented Aug 1, 2025

Walkthrough

A new asynchronous POST request to a local bias detection API has been added in the analysis loading page, storing its response in sessionStorage. The results page now retrieves and displays the bias score from sessionStorage using a new state variable. A new backend module for bias detection and a corresponding API endpoint have been introduced. Existing flows and error handling remain unchanged.
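
As a sketch of that retrieval path: the "BiasScore" key and bias_score field match this PR, but the parse and range guards below are illustrative, not the actual page code.

// Illustrative read of the stored bias score with parse and range guards.
const stored = sessionStorage.getItem("BiasScore");
let biasScore: number | null = null;

if (stored) {
  try {
    const parsed = JSON.parse(stored);
    const value = Number(parsed.bias_score); // the backend may return the score as a string
    biasScore = Number.isFinite(value) && value >= 0 && value <= 100 ? value : null;
  } catch {
    console.error("Corrupted BiasScore entry in sessionStorage");
  }
}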

Changes

Cohort / File(s) | Change Summary
Bias API Integration in Loading Page
frontend/app/analyze/loading/page.tsx
Adds async POST to local bias detection API, saves response in sessionStorage as "BiasScore", and logs relevant info.
Bias Score Retrieval and Display in Results Page
frontend/app/analyze/results/page.tsx
Introduces biasScore state, retrieves/parses bias score from sessionStorage, validates analysis data, and updates UI.
New Bias Detection Backend Module
backend/app/modules/bias_detection/check_bias.py
Adds check_bias function using Groq API to score article bias from 0 to 100, returning numeric bias score or error.
New Bias Detection API Endpoint & Async Pipeline Calls
backend/app/routes/routes.py
Adds POST /bias endpoint that scrapes article content from URL, calls check_bias, and returns bias score response; updates /process endpoint to use async calls.
Stylistic Updates in Analyze Page
frontend/app/analyze/page.tsx
Applies formatting and stylistic improvements without changing logic or behavior.

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant LoadingPage
    participant BiasAPI
    participant SessionStorage
    participant ResultsPage

    User->>LoadingPage: Initiates analysis
    LoadingPage->>BiasAPI: POST article URL to /api/bias
    BiasAPI-->>LoadingPage: Returns bias score
    LoadingPage->>SessionStorage: Save bias score as "BiasScore"
    User->>ResultsPage: Navigates to results
    ResultsPage->>SessionStorage: Retrieve "BiasScore"
    ResultsPage->>User: Display bias score in UI

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Poem

A bunny hops through lines anew,
Saving bias scores for you.
From loading page to results in store,
Session magic opens the door.
With every hop, the data's right—
Bias revealed, clear in sight!
🐇✨


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 53a1c3b and db0520c.

📒 Files selected for processing (3)
  • backend/app/routes/routes.py (2 hunks)
  • frontend/app/analyze/loading/page.tsx (8 hunks)
  • frontend/app/analyze/results/page.tsx (3 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
  • frontend/app/analyze/loading/page.tsx
  • frontend/app/analyze/results/page.tsx
  • backend/app/routes/routes.py

coderabbitai bot left a comment

Actionable comments posted: 3

🔭 Outside diff range comments (1)
frontend/app/analyze/results/page.tsx (1)

50-81: Remove duplicate useEffect hook.

There are two useEffect hooks handling similar logic for loading analysis data. This creates code duplication and potential race conditions. The second useEffect (lines 50-81) contains important validation and redirect logic that should be merged with the first one.

Consolidate the useEffect hooks:

  useEffect(() => {
+   if (isRedirecting.current) return;
+   
    const timer = setTimeout(() => setIsLoading(false), 1500)
    const storedData = sessionStorage.getItem("analysisResult")
    const storedBiasScore = sessionStorage.getItem("BiasScore")
+   
    if(storedBiasScore) setBiasScore(JSON.parse(storedBiasScore).bias_score)
    else console.warn("No bias score found.")

-   if (storedData) setAnalysisData(JSON.parse(storedData))
-   else console.warn("No analysis result found")
+   if (storedData) {
+     const parsedData = JSON.parse(storedData);
+     const requiredFields = ['cleaned_text', 'facts', 'sentiment', 'perspective', 'score'];
+     const isDataValid = requiredFields.every(field => parsedData[field] !== undefined && parsedData[field] !== null);
+
+     if (isDataValid) {
+       setAnalysisData(parsedData);
+     } else {
+       console.warn("Incomplete analysis data. Redirecting...");
+       isRedirecting.current = true;
+       router.push("/analyze");
+     }
+   } else {
+     console.warn("No analysis result found. Redirecting...");
+     isRedirecting.current = true;
+     router.push("/analyze");
+   }
+   
    return () => clearTimeout(timer)
- }, [])
-
-
- useEffect(() => {
-   // Remove the entire second useEffect block (lines 50-81)
+ }, [router])
🧹 Nitpick comments (1)
frontend/app/analyze/loading/page.tsx (1)

83-87: Consider partial failure handling strategy.

The current error handling only catches failures from the main analysis API. If the bias API fails after the main analysis succeeds, users won't receive feedback about the partial failure. Consider implementing a strategy to handle partial failures gracefully.

You might want to provide user feedback for partial failures:

// After API calls, before progress simulation
const hasMainAnalysis = !!res.data;
const hasBiasScore = !!sessionStorage.getItem("BiasScore");

if (hasMainAnalysis && !hasBiasScore) {
  console.warn("Analysis completed but bias detection unavailable");
  // Optionally show a toast notification to the user
}
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 60754b3 and 3feff7b.

📒 Files selected for processing (2)
  • frontend/app/analyze/loading/page.tsx (1 hunks)
  • frontend/app/analyze/results/page.tsx (3 hunks)
🔇 Additional comments (1)
frontend/app/analyze/results/page.tsx (1)

116-117: Verify BiasMeter and bias score display consistency.

There's a potential inconsistency between the BiasMeter component and the displayed bias score:

  • BiasMeter uses score from analysisData (Line 116)
  • Text displays biasScore from the new state variable (Line 117)

These might show different values if they come from different sources.

Please confirm which source should be used for both displays. If they should be consistent, apply this diff:

- <BiasMeter score={score} />
+ <BiasMeter score={biasScore} />
  <p className="text-sm mt-2">Bias Score: {biasScore}</p>

Or verify that they intentionally display different types of scores.

Comment on lines 67 to 69
const bias_score = await axios.post("http://localhost:8000/api/bias", {
  url: storedUrl,
})

🛠️ Refactor suggestion

Improve variable naming and add type definitions.

The variable bias_score is misleading since it contains the entire API response, not just the score value. Also, use consistent camelCase naming for TypeScript.

Consider adding proper TypeScript interfaces:

+interface BiasResponse {
+  bias_score: number;
+  // Add other expected properties
+}
+
-        const bias_score = await axios.post("http://localhost:8000/api/bias",{
+        const biasResponse = await axios.post<BiasResponse>("http://localhost:8000/api/bias", {
           url: storedUrl,
         })

Comment on lines 67 to 82
const bias_score = await axios.post("http://localhost:8000/api/bias", {
  url: storedUrl,
})

// Save response to sessionStorage
sessionStorage.setItem("analysisResult", JSON.stringify(res.data))

// optional logging
console.log("Analysis result saved")
console.log(res)

sessionStorage.setItem("biasScore", JSON.stringify(bias_score.data))

console.log("Bias score saved")
console.log(bias_score)
// optional logging


⚠️ Potential issue

Add proper error handling and fix configuration issues.

Several critical issues need to be addressed:

  1. Missing error handling: The bias API call isn't wrapped in try-catch, which could cause the entire analysis to fail if the bias endpoint is unavailable.

  2. Hardcoded localhost URL: Using localhost:8000 will break in production environments. This should be configurable via environment variables.

  3. sessionStorage key inconsistency: This code uses "biasScore" but the results page expects "BiasScore" (Line 40 in results page).

Apply this diff to fix these issues:

      try {
        const res = await axios.post("https://Thunder1245-perspective-backend.hf.space/api/process", {
          url: storedUrl,
        })
-        const bias_score = await axios.post("http://localhost:8000/api/bias",{
-          url: storedUrl,
-        })

        // Save response to sessionStorage
        sessionStorage.setItem("analysisResult", JSON.stringify(res.data))

        console.log("Analysis result saved")
        console.log(res)

-        sessionStorage.setItem("biasScore", JSON.stringify(bias_score.data))
-
-        console.log("Bias score saved")
-        console.log(bias_score)
-        // optional logging
+        
+        // Bias API call with proper error handling
+        try {
+          const bias_score = await axios.post(process.env.NEXT_PUBLIC_BIAS_API_URL || "http://localhost:8000/api/bias", {
+            url: storedUrl,
+          })
+          sessionStorage.setItem("BiasScore", JSON.stringify(bias_score.data))
+          console.log("Bias score saved:", bias_score.data)
+        } catch (biasError) {
+          console.warn("Bias API unavailable, continuing without bias score:", biasError)
+          sessionStorage.setItem("BiasScore", JSON.stringify({ bias_score: null }))
+        }
        
      } catch (err) {

Committable suggestion skipped: line range outside the PR's diff.


*/
export default function AnalyzePage() {
  const [analysisData, setAnalysisData] = useState<any>(null)
  const [biasScore, setBiasScore] = useState<any>(null)

🛠️ Refactor suggestion

Improve type safety and add error handling for JSON parsing.

The bias score state management needs better type safety and error handling:

  1. Type safety: Using any type reduces type safety benefits
  2. JSON parsing: No error handling if stored data is corrupted
  3. Nested property access: Could fail if response structure changes

Apply this diff to improve robustness:

+interface BiasData {
+  bias_score: number | null;
+}
+
- const [biasScore, setBiasScore] = useState<any>(null)
+ const [biasScore, setBiasScore] = useState<number | null>(null)

  const storedBiasScore = sessionStorage.getItem("BiasScore")
- if(storedBiasScore) setBiasScore(JSON.parse(storedBiasScore).bias_score)
- else console.warn("No bias score found.")
+ if (storedBiasScore) {
+   try {
+     const biasData: BiasData = JSON.parse(storedBiasScore)
+     setBiasScore(biasData.bias_score ?? null)
+   } catch (error) {
+     console.error("Failed to parse bias score:", error)
+     setBiasScore(null)
+   }
+ } else {
+   console.warn("No bias score found.")
+ }

Also applies to: 40-42


coderabbitai bot left a comment

Actionable comments posted: 4

🧹 Nitpick comments (3)
backend/app/modules/bias_detection/check_bias.py (3)

11-17: Add type hints and consider parameter validation.

The function signature and basic validation look good, but could be enhanced:

-def check_bias(text):
+def check_bias(text: str) -> dict:
     try:
-        print(text)
-        print(json.dumps(text))
 
-        if not text:
-            raise ValueError("Missing or empty 'cleaned_text'")
+        if not text or not isinstance(text, str):
+            raise ValueError("Missing, empty, or invalid text input")

51-57: Enhance error handling specificity.

Consider adding more specific error handling for different types of failures.

     except Exception as e:
-        print(f"Error in bias_detection: {e}")
+        import logging
+        logger = logging.getLogger(__name__)
+        logger.error(f"Error in bias_detection: {e}")
+        
+        # More specific error types
+        if "api" in str(e).lower() or "groq" in str(e).lower():
+            error_source = "groq_api"
+        elif "network" in str(e).lower() or "connection" in str(e).lower():
+            error_source = "network"
+        else:
+            error_source = "bias_detection"
+            
         return {
             "status": "error",
-            "error_from": "bias_detection",
+            "error_from": error_source,
             "message": str(e),
         }

39-42: Consider adding timeout and retry logic.

The API call should include timeout handling and potentially retry logic for production use.

         chat_completion = client.chat.completions.create(
             messages=[...],
             model="gemma2-9b-it",
             temperature=0.3,
             max_tokens=512,
+            timeout=30,  # Add timeout
         )

Consider wrapping in retry logic for transient failures:

from tenacity import retry, stop_after_attempt, wait_exponential

@retry(stop=stop_after_attempt(3), wait=wait_exponential(multiplier=1, min=4, max=10))
def _make_api_call(messages, model, temperature, max_tokens):
    return client.chat.completions.create(...)
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3feff7b and 53a1c3b.

📒 Files selected for processing (5)
  • backend/app/modules/bias_detection/check_bias.py (1 hunks)
  • backend/app/routes/routes.py (2 hunks)
  • frontend/app/analyze/loading/page.tsx (8 hunks)
  • frontend/app/analyze/page.tsx (7 hunks)
  • frontend/app/analyze/results/page.tsx (3 hunks)
✅ Files skipped from review due to trivial changes (1)
  • frontend/app/analyze/page.tsx
🚧 Files skipped from review as they are similar to previous changes (2)
  • frontend/app/analyze/results/page.tsx
  • frontend/app/analyze/loading/page.tsx
🔇 Additional comments (2)
backend/app/routes/routes.py (1)

6-6: LGTM - Clean import addition.

The import statement correctly references the new bias detection module.

backend/app/modules/bias_detection/check_bias.py (1)

1-9: LGTM - Clean module setup.

The imports and client initialization follow good practices with environment variable usage for API credentials.

Comment on lines +13 to +14
print(text)
print(json.dumps(text))

🛠️ Refactor suggestion

Remove debug print statements.

Debug print statements should be removed from production code or replaced with proper logging.

-        print(text)
-        print(json.dumps(text))

If logging is needed, use a proper logger:

+        import logging
+        logger = logging.getLogger(__name__)
+        logger.debug(f"Processing text of length: {len(text)}")

Comment on lines +22 to +29
"role": "system",
"content": (
"You are an assistant that checks "
"if given article is biased and give"
"score to each based on biasness where 0 is lowest bias and 100 is highest bias"
"Only return a number between 0 to 100 base on bias."
"only return Number No Text"
),

🛠️ Refactor suggestion

Improve system prompt formatting and clarity.

The system prompt has formatting and grammatical issues that could affect model performance.

                     "content": (
-                        "You are an assistant that checks  "
-                        "if given article is biased and give"
-                        "score to each based on biasness where 0 is lowest bias and 100 is highest bias"
-                        "Only return a number between 0 to 100 base on bias."
-                        "only return Number No Text"
+                        "You are an assistant that analyzes articles for bias. "
+                        "Rate the bias level on a scale from 0 to 100, where: "
+                        "0 = completely unbiased/neutral, 100 = extremely biased. "
+                        "Return ONLY a single number between 0 and 100. "
+                        "Do not include any explanatory text."
                     ),

Comment on lines +44 to +49
        bias_score = chat_completion.choices[0].message.content.strip()

        return {
            "bias_score": bias_score,
            "status": "success",
        }

🛠️ Refactor suggestion

Validate numeric response from AI model.

The response should be validated to ensure it's actually a numeric value as requested.

         bias_score = chat_completion.choices[0].message.content.strip()
+        
+        # Validate that the response is numeric
+        try:
+            bias_value = float(bias_score)
+            if not (0 <= bias_value <= 100):
+                raise ValueError(f"Bias score {bias_value} outside valid range 0-100")
+            bias_score = str(int(bias_value))  # Convert to integer string
+        except ValueError as ve:
+            raise ValueError(f"Invalid numeric response from AI model: {bias_score}")

         return {
             "bias_score": bias_score,
             "status": "success",
         }

Comment on lines 19 to 25
@router.post("/bias")
async def bias_detection(request: URlRequest):
    content = run_scraper_pipeline(request.url)
    bias_score = check_bias(content)
    print(bias_score)
    return bias_score


🛠️ Refactor suggestion

Address async/sync mismatch and improve logging.

The endpoint implementation has several areas for improvement:

  1. Blocking operations in async function: Both run_scraper_pipeline() and check_bias() appear to be synchronous functions called within an async endpoint, which can block the event loop.

  2. Console logging: Using print() for logging is not ideal for production applications.

Consider these improvements:

 @router.post("/bias")
 async def bias_detection(request: URlRequest):
-    content = run_scraper_pipeline(request.url)
-    bias_score = check_bias(content)
-    print(bias_score)
-    return bias_score
+    import logging
+    logger = logging.getLogger(__name__)
+    
+    content = await asyncio.to_thread(run_scraper_pipeline, request.url)
+    bias_result = await asyncio.to_thread(check_bias, content)
+    logger.info(f"Bias detection result: {bias_result}")
+    return bias_result

Also add the import at the top:

+import asyncio

ManavSarkar merged commit 38799a8 into main on Aug 10, 2025. 1 check passed.