57 changes: 57 additions & 0 deletions backend/app/modules/bias_detection/check_bias.py
@@ -0,0 +1,57 @@
import os
from groq import Groq
from dotenv import load_dotenv
import json

load_dotenv()

client = Groq(api_key=os.getenv("GROQ_API_KEY"))


def check_bias(text):
    try:
        print(text)
        print(json.dumps(text))
Comment on lines +13 to +14 (Contributor):
🛠️ Refactor suggestion

Remove debug print statements.

Debug print statements should be removed from production code or replaced with proper logging.

-        print(text)
-        print(json.dumps(text))

If logging is needed, use a proper logger:

+        import logging
+        logger = logging.getLogger(__name__)
+        logger.debug(f"Processing text of length: {len(text)}")


        if not text:
            raise ValueError("Missing or empty 'cleaned_text'")

        chat_completion = client.chat.completions.create(
            messages=[
                {
                    "role": "system",
                    "content": (
                        "You are an assistant that checks "
                        "if given article is biased and give"
                        "score to each based on biasness where 0 is lowest bias and 100 is highest bias"
                        "Only return a number between 0 to 100 base on bias."
                        "only return Number No Text"
                    ),
Comment on lines +22 to +29 (Contributor):
🛠️ Refactor suggestion

Improve system prompt formatting and clarity.

The system prompt has formatting and grammatical issues that could affect model performance.

                     "content": (
-                        "You are an assistant that checks  "
-                        "if given article is biased and give"
-                        "score to each based on biasness where 0 is lowest bias and 100 is highest bias"
-                        "Only return a number between 0 to 100 base on bias."
-                        "only return Number No Text"
+                        "You are an assistant that analyzes articles for bias. "
+                        "Rate the bias level on a scale from 0 to 100, where: "
+                        "0 = completely unbiased/neutral, 100 = extremely biased. "
+                        "Return ONLY a single number between 0 and 100. "
+                        "Do not include any explanatory text."
                     ),

                },
                {
                    "role": "user",
                    "content": (
                        "Give bias score to the following article "
                        f"\n\n{text}"
                    ),
                },
            ],
            model="gemma2-9b-it",
            temperature=0.3,
            max_tokens=512,
        )

        bias_score = chat_completion.choices[0].message.content.strip()

        return {
            "bias_score": bias_score,
            "status": "success",
        }
Comment on lines +44 to +49 (Contributor):
🛠️ Refactor suggestion

Validate numeric response from AI model.

The response should be validated to ensure it's actually a numeric value as requested.

         bias_score = chat_completion.choices[0].message.content.strip()
+        
+        # Validate that the response is numeric
+        try:
+            bias_value = float(bias_score)
+            if not (0 <= bias_value <= 100):
+                raise ValueError(f"Bias score {bias_value} outside valid range 0-100")
+            bias_score = str(int(bias_value))  # Convert to integer string
+        except ValueError as ve:
+            raise ValueError(f"Invalid numeric response from AI model: {bias_score}")

         return {
             "bias_score": bias_score,
             "status": "success",
         }
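One subtlety in the quoted suggestion: the out-of-range `ValueError` is raised inside the same `try` block whose `except ValueError` clause catches it, so a range failure gets re-reported as "Invalid numeric response". A sketch that keeps the two failure modes distinct (the helper name `parse_bias_score` is illustrative, not from the PR):

```python
def parse_bias_score(raw: str) -> str:
    """Normalize a model reply like ' 42 ' to an integer string, or raise."""
    try:
        value = float(raw.strip())
    except ValueError:
        # Parsing failure: the reply was not numeric at all
        raise ValueError(f"Invalid numeric response from AI model: {raw!r}")
    # The range check sits outside the try block, so its ValueError
    # is not swallowed and re-labeled as a parse error
    if not 0 <= value <= 100:
        raise ValueError(f"Bias score {value} outside valid range 0-100")
    return str(int(value))

print(parse_bias_score(" 42 "))  # → 42
```

The caller in `check_bias` would then do `bias_score = parse_bias_score(chat_completion.choices[0].message.content)` before building the success payload.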


    except Exception as e:
        print(f"Error in bias_detection: {e}")
        return {
            "status": "error",
            "error_from": "bias_detection",
            "message": str(e),
        }
15 changes: 12 additions & 3 deletions backend/app/routes/routes.py
@@ -2,11 +2,12 @@
 from pydantic import BaseModel
 from app.modules.pipeline import run_scraper_pipeline
 from app.modules.pipeline import run_langgraph_workflow
+from app.modules.bias_detection.check_bias import check_bias
 import asyncio
 import json

 router = APIRouter()


 class URlRequest(BaseModel):
     url: str

@@ -15,10 +16,18 @@ class URlRequest(BaseModel):
 async def home():
     return {"message": "Perspective API is live!"}

+@router.post("/bias")
+async def bias_detection(request: URlRequest):
+    content = await asyncio.to_thread(run_scraper_pipeline,(request.url))
+    bias_score = await asyncio.to_thread(check_bias,(content))
+    print(bias_score)
+    return bias_score
+
+
+
 @router.post("/process")
 async def run_pipelines(request: URlRequest):
-    article_text = run_scraper_pipeline(request.url)
+    article_text = await asyncio.to_thread(run_scraper_pipeline,(request.url))
     print(json.dumps(article_text, indent=2))
-    data = run_langgraph_workflow(article_text)
+    data = await asyncio.to_thread(run_langgraph_workflow,(article_text))
     return data
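Both handlers now offload the blocking pipeline calls through `asyncio.to_thread`, which keeps the FastAPI event loop free to serve other requests while the scraper runs. A self-contained sketch of the pattern (`blocking_scrape` is a stand-in for `run_scraper_pipeline`, not the real pipeline):

```python
import asyncio
import time


def blocking_scrape(url: str) -> str:
    # Stand-in for run_scraper_pipeline: a slow, blocking call
    time.sleep(0.1)
    return f"content from {url}"


async def handler(url: str) -> str:
    # to_thread runs the blocking function in a worker thread,
    # so the event loop can keep servicing other coroutines meanwhile
    return await asyncio.to_thread(blocking_scrape, url)


result = asyncio.run(handler("https://example.com"))
print(result)  # → content from https://example.com
```

This is the same reason the new `/bias` route awaits both the scraper and `check_bias` via `asyncio.to_thread` instead of calling them directly inside the `async def` handler.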
185 changes: 111 additions & 74 deletions frontend/app/analyze/loading/page.tsx
@@ -1,12 +1,20 @@
-"use client"
+"use client";

-import { useEffect, useState } from "react"
-import { useRouter } from "next/navigation"
-import { Card } from "@/components/ui/card"
-import { Badge } from "@/components/ui/badge"
-import { Globe, Brain, Shield, CheckCircle, Database, Sparkles, Zap } from "lucide-react"
-import ThemeToggle from "@/components/theme-toggle"
-import axios from "axios"
+import { useEffect, useState } from "react";
+import { useRouter } from "next/navigation";
+import { Card } from "@/components/ui/card";
+import { Badge } from "@/components/ui/badge";
+import {
+  Globe,
+  Brain,
+  Shield,
+  CheckCircle,
+  Database,
+  Sparkles,
+  Zap,
+} from "lucide-react";
+import ThemeToggle from "@/components/theme-toggle";
+import axios from "axios";

/**
* Displays a multi-step animated loading and progress interface for the article analysis workflow.
@@ -16,10 +24,10 @@ import axios from "axios"
* @remark This component manages its own navigation and redirects based on session state.
*/
 export default function LoadingPage() {
-  const [currentStep, setCurrentStep] = useState(0)
-  const [progress, setProgress] = useState(0)
-  const [articleUrl, setArticleUrl] = useState("")
-  const router = useRouter()
+  const [currentStep, setCurrentStep] = useState(0);
+  const [progress, setProgress] = useState(0);
+  const [articleUrl, setArticleUrl] = useState("");
+  const router = useRouter();

   const steps = [
     {
@@ -52,67 +60,88 @@ export default function LoadingPage() {
       description: "Creating balanced alternative viewpoints",
       color: "from-pink-500 to-rose-500",
     },
-  ]
+  ];

   useEffect(() => {
-    const runAnalysis = async () => {
-      const storedUrl = sessionStorage.getItem("articleUrl")
-      if (storedUrl) {
-        setArticleUrl(storedUrl)
-
-        try {
-          const res = await axios.post("https://Thunder1245-perspective-backend.hf.space/api/process", {
-            url: storedUrl,
-          })
-
-          // Save response to sessionStorage
-          sessionStorage.setItem("analysisResult", JSON.stringify(res.data))
-
-          // optional logging
-          console.log("Analysis result saved")
-          console.log(res)
-        } catch (err) {
-          console.error("Failed to process article:", err)
-          router.push("/analyze") // fallback in case of error
-          return
-        }
-
-        // Progress and step simulation
-        const stepInterval = setInterval(() => {
-          setCurrentStep((prev) => {
-            if (prev < steps.length - 1) {
-              return prev + 1
-            } else {
-              clearInterval(stepInterval)
-              setTimeout(() => {
-                router.push("/analyze/results")
-              }, 2000)
-              return prev
-            }
-          })
-        }, 2000)
-
-        const progressInterval = setInterval(() => {
-          setProgress((prev) => {
-            if (prev < 100) {
-              return prev + 1
-            }
-            return prev
-          })
-        }, 100)
-
-        return () => {
-          clearInterval(stepInterval)
-          clearInterval(progressInterval)
-        }
-      } else {
-        router.push("/analyze")
-      }
-    }
-
-    runAnalysis()
-  }, [router])
+    const runAnalysis = async () => {
+      const storedUrl = sessionStorage.getItem("articleUrl");
+      if (storedUrl) {
+        setArticleUrl(storedUrl);
+
+        try {
+          const [processRes, biasRes] = await Promise.all([
+            axios.post(
+              "https://Thunder1245-perspective-backend.hf.space/api/process",
+              {
+                url: storedUrl,
+              }
+            ),
+            axios.post(
+              "https://Thunder1245-perspective-backend.hf.space/api/bias",
+              {
+                url: storedUrl,
+              }
+            ),
+          ]);

+          sessionStorage.setItem("BiasScore", JSON.stringify(biasRes.data));

+          console.log("Bias score saved");
+          console.log(biasRes);

+          // Save response to sessionStorage
+          sessionStorage.setItem(
+            "analysisResult",
+            JSON.stringify(processRes.data)
+          );

+          console.log("Analysis result saved");
+          console.log(processRes);

+          // optional logging
+        } catch (err) {
+          console.error("Failed to process article:", err);
+          router.push("/analyze"); // fallback in case of error
+          return;
+        }

+        // Progress and step simulation
+        const stepInterval = setInterval(() => {
+          setCurrentStep((prev) => {
+            if (prev < steps.length - 1) {
+              return prev + 1;
+            } else {
+              clearInterval(stepInterval);
+              setTimeout(() => {
+                router.push("/analyze/results");
+              }, 2000);
+              return prev;
+            }
+          });
+        }, 2000);

+        const progressInterval = setInterval(() => {
+          setProgress((prev) => {
+            if (prev < 100) {
+              return prev + 1;
+            }
+            return prev;
+          });
+        }, 100);

+        return () => {
+          clearInterval(stepInterval);
+          clearInterval(progressInterval);
+        };
+      } else {
+        router.push("/analyze");
+      }
+    };

+    runAnalysis();
+  }, [router]);

return (
<div className="min-h-screen bg-gradient-to-br from-slate-50 via-blue-50/30 to-indigo-100/50 dark:from-slate-900 dark:via-slate-900/80 dark:to-indigo-950/50 transition-colors duration-300 overflow-hidden">
@@ -162,16 +191,22 @@ export default function LoadingPage() {

{/* Article URL Display */}
<div className="mb-8 md:mb-12 p-3 md:p-4 bg-white/50 dark:bg-slate-800/50 rounded-lg backdrop-blur-sm">
<p className="text-slate-600 dark:text-slate-300 text-xs md:text-sm mb-2">Processing:</p>
<p className="text-blue-600 dark:text-blue-400 font-medium truncate text-sm md:text-base">{articleUrl}</p>
<p className="text-slate-600 dark:text-slate-300 text-xs md:text-sm mb-2">
Processing:
</p>
<p className="text-blue-600 dark:text-blue-400 font-medium truncate text-sm md:text-base">
{articleUrl}
</p>
</div>

{/* Progress Bar */}
<div className="mb-12 md:mb-16">
<div className="w-full bg-slate-200 dark:bg-slate-700 rounded-full h-2 md:h-3 mb-3 md:mb-4 overflow-hidden">
<div
className="h-full bg-gradient-to-r from-blue-600 via-indigo-600 to-purple-600 rounded-full transition-all duration-300 ease-out relative"
style={{ width: `${Math.min(progress, (currentStep + 1) * 20)}%` }}
style={{
width: `${Math.min(progress, (currentStep + 1) * 20)}%`,
}}
>
<div className="absolute inset-0 bg-gradient-to-r from-white/20 to-transparent animate-pulse"></div>
</div>
@@ -190,8 +225,8 @@
                     index === currentStep
                       ? "bg-white dark:bg-slate-800 shadow-2xl scale-105 ring-2 ring-blue-500/50"
                       : index < currentStep
-                        ? "bg-white/80 dark:bg-slate-800/80 shadow-lg opacity-75"
-                        : "bg-white/40 dark:bg-slate-800/40 shadow-md opacity-50"
+                      ? "bg-white/80 dark:bg-slate-800/80 shadow-lg opacity-75"
+                      : "bg-white/40 dark:bg-slate-800/40 shadow-md opacity-50"
                   }`}
                 >
                   <div className="flex items-center space-x-3 md:space-x-4">
@@ -200,8 +235,8 @@
                       index === currentStep
                         ? `bg-gradient-to-br ${step.color} animate-pulse shadow-lg`
                         : index < currentStep
-                          ? "bg-gradient-to-br from-emerald-500 to-teal-500 shadow-md"
-                          : "bg-slate-200 dark:bg-slate-700"
+                        ? "bg-gradient-to-br from-emerald-500 to-teal-500 shadow-md"
+                        : "bg-slate-200 dark:bg-slate-700"
                     }`}
                   >
                     {index < currentStep ? (
@@ -221,13 +256,15 @@
                         index === currentStep
                           ? "text-blue-600 dark:text-blue-400"
                           : index < currentStep
-                            ? "text-emerald-600 dark:text-emerald-400"
-                            : "text-slate-500 dark:text-slate-400"
+                          ? "text-emerald-600 dark:text-emerald-400"
+                          : "text-slate-500 dark:text-slate-400"
                       }`}
                     >
                       {step.title}
                     </h3>
-                    <p className="text-slate-600 dark:text-slate-300 text-xs md:text-sm">{step.description}</p>
+                    <p className="text-slate-600 dark:text-slate-300 text-xs md:text-sm">
+                      {step.description}
+                    </p>
                   </div>
                   {index === currentStep && (
                     <div className="flex space-x-1">
@@ -262,5 +299,5 @@
         </div>
       </main>
     </div>
-  )
+  );
 }