
Conversation


@PrithvijitBose PrithvijitBose commented Aug 10, 2025

🌟 Feature Implementation: Dynamic FAQ Handler

Problem Solved

  • Eliminates manual, repetitive responses to foundational organizational questions
  • Replaces static FAQ databases with dynamic, real-time information retrieval
  • Provides automated public-facing representation for the organization

Key Features Implemented

  • Smart Query Routing: the ReAct Supervisor identifies organizational queries and routes them to the FAQ Handler
  • Dynamic Web Search: real-time information retrieval from the organization's public web presence
  • Automated Synthesis: converts search results into comprehensive, user-friendly responses
  • Source Attribution: provides links so users can explore topics further

Testing Results

Successfully tested with query: "How does this organization work?"

  • ✅ Correct supervisor routing to FAQ Handler
  • ✅ Dynamic search query refinement
  • ✅ Successful web search execution
  • ✅ Comprehensive response generation with sources

Technical Implementation

  • Integrated with existing DevRel Agent architecture
  • Uses ReAct Supervisor for intelligent routing
  • Implements web search via refined query processing
  • Maintains consistency with existing tool patterns

Ready for review and production deployment.

Closes #97

bandicam.2025-08-10.19-47-44-389.mp4

Summary by CodeRabbit

  • New Features

    • FAQ responses are now generated dynamically using web search and AI synthesis, providing more comprehensive and up-to-date answers.
    • Enhanced Discord integration with improved periodic cleanup of expired verification tokens, supporting environments with or without certain dependencies.
  • Bug Fixes

    • Improved error handling, validation, and logging in the ReAct supervisor, resulting in more robust and reliable agent interactions.
  • Chores

    • Updated project dependencies to remove unnecessary packages and adjust version constraints for better compatibility.
    • Minor whitespace and formatting cleanups for improved code consistency.
  • Refactor

    • Internal changes to support the new dynamic FAQ process and improve maintainability without affecting user-facing functionality.


coderabbitai bot commented Aug 10, 2025

Walkthrough

This update refactors the FAQ handler to use a dynamic, multi-step process leveraging web search and an LLM for organizational queries. It changes tool node parameters, updates the workflow, and introduces robust error handling and validation throughout relevant modules. Discord integration and dependency management are also adjusted for compatibility.

Changes

Cohort / File(s) / Change Summary

  • Dynamic FAQ Handler Refactor
    Files: backend/app/agents/devrel/nodes/handlers/faq.py, backend/app/agents/devrel/tool_wrappers.py, backend/app/agents/devrel/agent.py
    Summary: Replaces static FAQ retrieval with dynamic web search and LLM synthesis. Updates function signatures and tool node parameters to use search_tool and llm instead of a static FAQ tool. Adds helper functions for query refinement, response synthesis, formatting, and error handling.
  • ReAct Supervisor Robustness
    Files: backend/app/agents/devrel/nodes/react_supervisor.py
    Summary: Adds state validation, error handling, iteration limits, improved prompt construction, multi-line parsing, and logging. Refactors tool result management and adds helper functions for state and response management.
  • Discord Integration Adjustments
    Files: backend/integrations/discord/bot.py, backend/integrations/discord/cogs.py, backend/main.py
    Summary: Cleans up whitespace and indentation. Adds conditional import and fallback for tasks in Discord cogs. Comments out Discord cog extension loading in main application, expanding exception handling in the commented code.
  • Dependency Management
    Files: pyproject.toml
    Summary: Removes py-cord dependency and lowers the minimum required version of discord-py.
  • Minor Cleanups
    Files: backend/app/core/handler/faq_handler.py, backend/app/agents/devrel/nodes/handlers/web_search.py
    Summary: Removes unused imports and trailing whitespace. No functional changes.

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant FAQHandler
    participant LLM
    participant WebSearch

    User->>FAQHandler: Submit organizational query
    FAQHandler->>LLM: Refine query for organization-specific search
    LLM-->>FAQHandler: Refined query
    FAQHandler->>WebSearch: Perform web search with refined query
    WebSearch-->>FAQHandler: Search results
    FAQHandler->>LLM: Synthesize answer from search results
    LLM-->>FAQHandler: Synthesized answer
    FAQHandler->>User: Respond with answer and source links
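In code, the flow above is roughly the following. This is a minimal sketch for orientation only: the helper name dynamic_faq_process and the search_tool.search(...) call are assumptions, while the llm.ainvoke([HumanMessage(...)]) pattern and the title/content/url result keys match what the PR's handlers use.

from langchain_core.messages import HumanMessage

async def dynamic_faq_process(question: str, search_tool, llm, org_name: str = "Devr.AI") -> dict:
    # 1. Refine the user's question into an organization-specific search query
    refine_prompt = f"Rewrite this as a concise web search query about {org_name}: {question}"
    refined_query = (await llm.ainvoke([HumanMessage(content=refine_prompt)])).content.strip()

    # 2. Run the web search with the refined query (method name assumed)
    results = await search_tool.search(refined_query)

    # 3. Synthesize a user-facing answer from the top results
    sources = "\n".join(
        f"- {r.get('title', 'N/A')}: {r.get('content', '')} ({r.get('url', '')})"
        for r in results[:5]
    )
    answer_prompt = (
        f"Answer the question about {org_name} using only these sources.\n"
        f"Question: {question}\nSources:\n{sources}"
    )
    answer = (await llm.ainvoke([HumanMessage(content=answer_prompt)])).content

    # 4. Return the answer plus source links for attribution
    return {
        "type": "faq",
        "response": answer,
        "sources": [r.get("url") for r in results[:5]],
        "source": "dynamic_web_search",
    }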

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Assessment against linked issues

Objective (with explanation)
  • Implement FAQ Handler that dynamically searches the web for org queries and synthesizes results (FAQ Handler leverages web search, LLM, and provides source links) [#97]
  • FAQ Handler replaces static FAQ tool with dynamic, AI-augmented pipeline for organizational queries [#97]

Assessment against linked issues: Out-of-scope changes

Code Change (file_path) / Explanation
  • Discord cog extension loading is commented out and exception handling expanded (backend/main.py): not related to the FAQ Handler feature; addresses Discord extension loading robustness.
  • Removal of py-cord and lowering of the minimum discord-py version (pyproject.toml): dependency management changes unrelated to the FAQ Handler objectives.
  • Conditional import and fallback for tasks in Discord cogs (backend/integrations/discord/cogs.py): a compatibility improvement for Discord integration, not related to the FAQ Handler feature.

Possibly related PRs

Suggested labels

enhancement

Suggested reviewers

  • smokeyScraper

Poem

🐰
A question hops in, so fluffy and bright,
The FAQ Handler leaps—now powered by AI might!
It searches the web, with wisdom to bring,
Synthesizing answers, with sources that sing.
No more static burrows, just dynamic delight—
For every curious bunny, the info is right!


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 15

🔭 Outside diff range comments (3)
backend/app/core/handler/faq_handler.py (1)

12-27: Static FAQ map likely obsolete; align with new dynamic FAQ flow

This handler still returns canned answers, which diverges from the new dynamic, search-backed FAQ process. Consider deprecating or delegating to the dynamic pipeline to avoid dual behavior.

Options:

  • Deprecate this class (e.g., rename to LegacyFAQHandler) and route all FAQ events through the dynamic node.
  • Or, adapt this handler to invoke the new dynamic FAQ function and reserve the map only as a last-resort fallback. I can draft a migration patch if helpful.
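For the second option, a rough delegation sketch (assuming _dynamic_faq_process returns a user-facing string; the class and method names here are illustrative, not the existing handler's API):

from app.agents.devrel.nodes.handlers.faq import _dynamic_faq_process

class LegacyFAQHandler:
    """Thin wrapper that prefers the dynamic pipeline and keeps the static map as a fallback."""

    def __init__(self, search_tool, llm, faq_map=None):
        self.search_tool = search_tool
        self.llm = llm
        self.faq_map = faq_map or {}

    async def handle(self, question: str) -> str:
        try:
            # Prefer the new dynamic, search-backed flow
            return await _dynamic_faq_process(question, self.search_tool, self.llm, org_name="Devr.AI")
        except Exception:
            # Last-resort canned answer from the static map
            return self.faq_map.get(question.lower(), "Sorry, I don't have an answer for that yet.")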
backend/app/agents/devrel/agent.py (1)

31-31: Remove unused FAQTool import and instantiation

The FAQTool is no longer used after the dynamic FAQ refactor—drop both its import and instantiation to keep the agent lean.

• In backend/app/agents/devrel/agent.py

  • Remove the import on line 9
  • Remove the self.faq_tool = FAQTool() on line 31

Suggested diff:

--- a/backend/app/agents/devrel/agent.py
+++ b/backend/app/agents/devrel/agent.py
@@
-from .tools.faq_tool import FAQTool
@@ class DevRelAgent(BaseAgent):
-        self.search_tool = DuckDuckGoSearchTool()
-        self.faq_tool = FAQTool()
-        self.github_toolkit = GitHubToolkit()
+        self.search_tool = DuckDuckGoSearchTool()
+        self.github_toolkit = GitHubToolkit()
backend/app/agents/devrel/tool_wrappers.py (1)

11-17: Add error handling parity for web_search_tool_node

Mirror the GitHub tool’s try/except so failures don’t derail the graph and errors are surfaced consistently.

 async def web_search_tool_node(state: AgentState, search_tool, llm) -> Dict[str, Any]:
     """Execute web search tool and add result to ReAct context"""
     logger.info(f"Executing web search tool for session {state.session_id}")
-
-    handler_result = await handle_web_search_node(state, search_tool, llm)
-    tool_result = handler_result.get("task_result", {})
-    return add_tool_result(state, "web_search", tool_result)
+    try:
+        handler_result = await handle_web_search_node(state, search_tool, llm)
+        tool_result = handler_result.get("task_result", {})
+    except Exception as e:
+        logger.error(f"Error in web search: {str(e)}")
+        tool_result = {
+            "type": "web_search",
+            "error": str(e),
+            "status": "error",
+        }
+    return add_tool_result(state, "web_search", tool_result)
🧹 Nitpick comments (11)
backend/integrations/discord/bot.py (1)

91-96: Avoid per-call construction of priority map

Minor perf/readability: make the priority map a class-level constant instead of recreating it on every message.

Here’s a small refactor:

 class DiscordBot(commands.Bot):
     """Discord bot with LangGraph agent integration"""
+    PRIORITY_MAP = {
+        "high": QueuePriority.HIGH,
+        "medium": QueuePriority.MEDIUM,
+        "low": QueuePriority.LOW,
+    }
@@
-            priority_map = {"high": QueuePriority.HIGH,
-                            "medium": QueuePriority.MEDIUM,
-                            "low": QueuePriority.LOW
-                            }
-            priority = priority_map.get(triage_result.get("priority"), QueuePriority.MEDIUM)
+            priority = self.PRIORITY_MAP.get(triage_result.get("priority"), QueuePriority.MEDIUM)
backend/main.py (1)

49-57: Disabling cog loading disables slash commands and cleanup tasks

With the extension loading commented out, none of the DevRelCommands (slash commands) nor the token cleanup task will be registered/run. If this is intentional for a phased rollout, consider gating via an env flag so prod keeps functionality.

  • Confirm there is an alternate path that loads the cogs at runtime; otherwise, users will lose /verify_github, /reset, etc.
  • If intentional, note the operational impact (no periodic cleanup) and whether there’s a separate TTL-based cleanup on the backend.

I can add an env-driven toggle (e.g., DISCORD_LOAD_COGS=true) that conditionally loads the extension and logs clearly when disabled.
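A sketch of that toggle (the env var name comes from the suggestion above; the extension path and default value are assumptions):

import os
import logging

logger = logging.getLogger(__name__)

async def maybe_load_discord_cogs(bot) -> None:
    # Load cogs unless explicitly disabled via DISCORD_LOAD_COGS
    if os.getenv("DISCORD_LOAD_COGS", "true").lower() in ("1", "true", "yes"):
        await bot.load_extension("integrations.discord.cogs")  # extension path assumed
        logger.info("Discord cogs loaded: slash commands and token cleanup active")
    else:
        logger.warning("DISCORD_LOAD_COGS disabled: /verify_github, /reset and periodic cleanup will not run")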

backend/integrations/discord/cogs.py (2)

51-60: Prefer logger over print for task lifecycle messages

Use the module logger for consistency and observability.

-            print("--> Running token cleanup task...")
+            logger.info("--> Running token cleanup task...")
@@
-            print("--> Token cleanup task finished.")
+            logger.info("--> Token cleanup task finished.")

66-78: Manual cleanup loop: switch prints to logger and handle graceful cancel

Logging improvement; also log on cancel to aid debugging.

-        while True:
+        while True:
             try:
                 await asyncio.sleep(300)  # 5 minutes
-                print("--> Running manual token cleanup task...")
+                logger.info("--> Running manual token cleanup task...")
                 await cleanup_expired_tokens()
-                print("--> Manual token cleanup task finished.")
+                logger.info("--> Manual token cleanup task finished.")
             except asyncio.CancelledError:
-                break
+                logger.info("Manual token cleanup task cancelled.")
+                break
backend/app/agents/devrel/nodes/handlers/web_search.py (1)

61-69: Trim overly long snippets and normalize whitespace for cleaner responses

Long raw snippets can overwhelm users. Normalize whitespace, fall back to snippet when content is missing, and truncate to a sane length.

-    for i, result in enumerate(results[:5]):
-        title = result.get('title', 'N/A')
-        snippet = result.get('content', 'N/A')
-        url = result.get('url', '#')
-        response_parts.append(f"{i+1}. {title}: {snippet}")
-        response_parts.append(f"   (Source: {url})")
+    for i, result in enumerate(results[:5]):
+        title = result.get('title', 'N/A')
+        snippet = result.get('content') or result.get('snippet') or 'N/A'
+        # Collapse whitespace/newlines and truncate long snippets
+        snippet = ' '.join(str(snippet).split())
+        if len(snippet) > 240:
+            snippet = snippet[:237] + "..."
+        url = result.get('url', '#')
+        response_parts.append(f"{i+1}. {title}: {snippet}")
+        response_parts.append(f"   (Source: {url})")
backend/app/core/handler/faq_handler.py (1)

44-44: Use lazy logging formatting to avoid unnecessary string building

Minor improvement: use logger formatting instead of concatenation.

-        logger.info("Question: " + question + ", Response: " + response)
+        logger.info("Question: %s, Response: %s", question, response)
backend/app/agents/devrel/nodes/handlers/faq.py (2)

90-93: Normalize refined query output and strip wrapping quotes

LLMs sometimes wrap output in quotes. Strip them to avoid degrading search quality.

-    refined_query = response.content.strip()
+    refined_query = (response.content or "").strip()
+    # Remove unnecessary wrapping quotes/backticks
+    if (refined_query.startswith('"') and refined_query.endswith('"')) or \
+       (refined_query.startswith("'") and refined_query.endswith("'")) or \
+       (refined_query.startswith("`") and refined_query.endswith("`")):
+        refined_query = refined_query[1:-1].strip()
     logger.info(f"Refined query: {refined_query}")

40-66: Optional: prefer domain-scoped search to official sources when available

To bias results toward official properties, consider injecting site: filters (e.g., website/docs/blog/GitHub org) based on configuration.

Example approach:

  • Add org domains to settings (e.g., settings.org_domains = ["devr.ai", "blog.devr.ai", "github.com/devr-ai"])
  • Append ‘site:devr.ai OR site:blog.devr.ai OR site:github.com/devr-ai’ to the refined query before calling search.
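A sketch of that scoping step (settings.org_domains is an assumed config field; the domains are the examples above):

def scope_query_to_org(refined_query: str, org_domains: list) -> str:
    # Bias results toward official properties by appending site: filters
    if not org_domains:
        return refined_query
    site_filter = " OR ".join(f"site:{domain}" for domain in org_domains)
    return f"{refined_query} ({site_filter})"

# Example:
# scope_query_to_org("how does Devr.AI work", ["devr.ai", "blog.devr.ai", "github.com/devr-ai"])
# -> "how does Devr.AI work (site:devr.ai OR site:blog.devr.ai OR site:github.com/devr-ai)"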
backend/app/agents/devrel/nodes/react_supervisor.py (3)

12-16: Consider using a set for VALID_ACTIONS and confirm the action list is exhaustive for the new FAQ flow

Membership checks are frequent; a set communicates intent and provides O(1) membership. Also verify no new actions (e.g., intermediate FAQ steps) are missing.

Would you like me to convert VALID_ACTIONS to a set and add tests asserting routing only permits declared actions?
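Such a test could look roughly like this (a sketch only: it assumes the parser falls back to a declared action for unrecognized input, and the "ACTION: ..." response format is illustrative):

from app.agents.devrel.nodes import react_supervisor

def test_valid_actions_is_a_set():
    assert isinstance(react_supervisor.VALID_ACTIONS, (set, frozenset))

def test_parser_only_returns_declared_actions():
    # An action name the supervisor does not declare should not leak through
    decision = react_supervisor._parse_supervisor_decision("ACTION: launch_rockets")
    assert decision["action"] in react_supervisor.VALID_ACTIONS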


137-142: Iteration guard off-by-one; align with node check

react_supervisor_node stops at >= MAX_ITERATIONS. Router uses >, allowing one extra routing step. Use >= for consistency.

-        if iteration_count > MAX_ITERATIONS:
+        if iteration_count >= MAX_ITERATIONS:
             logger.warning(f"Max iterations reached for session {state.session_id}")
             return "complete"

198-205: Log full traceback when latest message extraction fails

Add exc_info for faster triage.

-    except Exception as e:
-        logger.error(f"Error getting latest message: {e}")
+    except Exception as e:
+        logger.error(f"Error getting latest message: {e}", exc_info=True)
         return ""
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a4ff6eb and 6281bba.

⛔ Files ignored due to path filters (1)
  • poetry.lock is excluded by !**/*.lock
📒 Files selected for processing (10)
  • backend/app/agents/devrel/agent.py (1 hunks)
  • backend/app/agents/devrel/nodes/handlers/faq.py (1 hunks)
  • backend/app/agents/devrel/nodes/handlers/web_search.py (2 hunks)
  • backend/app/agents/devrel/nodes/react_supervisor.py (1 hunks)
  • backend/app/agents/devrel/tool_wrappers.py (1 hunks)
  • backend/app/core/handler/faq_handler.py (1 hunks)
  • backend/integrations/discord/bot.py (5 hunks)
  • backend/integrations/discord/cogs.py (5 hunks)
  • backend/main.py (3 hunks)
  • pyproject.toml (1 hunks)
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-06-28T23:14:21.089Z
Learnt from: smokeyScraper
PR: AOSSIE-Org/Devr.AI#87
File: backend/app/api/v1/health.py:1-12
Timestamp: 2025-06-28T23:14:21.089Z
Learning: In the Devr.AI project, the application is designed to run from the backend directory, making import paths like `from main import DevRAIApplication` correct for the current setup. Import path adjustments for deployment will be configured later during the deployment process.

Applied to files:

  • backend/main.py
🧬 Code Graph Analysis (2)
backend/app/agents/devrel/agent.py (1)
backend/app/agents/devrel/tool_wrappers.py (1)
  • faq_handler_tool_node (19-26)
backend/app/core/handler/faq_handler.py (1)
backend/app/core/events/enums.py (1)
  • EventType (10-32)
🔇 Additional comments (8)
backend/app/agents/devrel/nodes/handlers/web_search.py (1)

54-54: Whitespace cleanup — no functional change

No behavioral difference. Safe to merge.

Also applies to: 64-64

backend/app/core/handler/faq_handler.py (1)

4-4: Remove unused import — LGTM

Import cleanup is correct and reduces noise.

backend/app/agents/devrel/agent.py (1)

46-47: Correctly passes search_tool and llm to FAQ node

Signature matches the refactor; routing remains intact.

backend/app/agents/devrel/tool_wrappers.py (1)

19-26: Signature and internal call updated — LGTM

Swapping to (search_tool, llm) aligns with the dynamic FAQ implementation.

backend/app/agents/devrel/nodes/handlers/faq.py (2)

8-11: Dynamic, search-backed FAQ entrypoint — LGTM

Clean integration with search + LLM. Good logging and return schema.


101-104: Verified search result schema

Both search tool implementations normalize results to include the keys the FAQ handler expects:

  • In backend/app/agents/devrel/tools/search_tool/ddg.py (lines 31–33), raw DDG fields are mapped to
    "title", "content" (from body), and "url" (from href).
  • In backend/app/agents/devrel/tools/search_tool/tavilly.py (lines 34–36), Tavily results are mapped to
    "title", "content", and "url".

The FAQ handler’s calls to result.get('title'), result.get('content'), and result.get('url') align with these mappings. No changes required.

backend/app/agents/devrel/nodes/react_supervisor.py (2)

51-53: Confirm llm interface is a ChatModel; otherwise accept string input

ainvoke with [HumanMessage] assumes a chat model. If llm is a text LLM, this will fail. Either constrain llm type or add a fallback path.

Would you like me to add a small adapter to accept both string and message-list inputs and annotate llm as a ChatModel?

# Example adapter (outside review range, for reference):
async def _ainvoke_chat_or_text(llm, prompt: str):
    try:
        return await llm.ainvoke([HumanMessage(content=prompt)])
    except Exception:
        return await llm.ainvoke(prompt)

247-268: Standardized error/completion responses look good

Consistent shape and action defaulting to complete is appropriate.

Comment on lines 12 to 17
latest_message = ""
if state.messages:
latest_message = state.messages[-1].get("content", "")
elif state.context.get("original_message"):
latest_message = state.context["original_message"]


🛠️ Refactor suggestion

Guard against empty user message

If no message is present, return a helpful prompt instead of calling the pipeline with an empty query.

     latest_message = ""
     if state.messages:
         latest_message = state.messages[-1].get("content", "")
     elif state.context.get("original_message"):
         latest_message = state.context["original_message"]
 
+    if not (latest_message or "").strip():
+        logger.info("No message found for FAQ; returning guidance")
+        return {
+            "task_result": {
+                "type": "faq",
+                "response": "Ask me anything about our organization (e.g., 'How does this organization work?' or 'What projects do you maintain?').",
+                "source": "dynamic_web_search"
+            },
+            "current_task": "faq_handled"
+        }
📝 Committable suggestion


Suggested change:

    latest_message = ""
    if state.messages:
        latest_message = state.messages[-1].get("content", "")
    elif state.context.get("original_message"):
        latest_message = state.context["original_message"]

    if not (latest_message or "").strip():
        logger.info("No message found for FAQ; returning guidance")
        return {
            "task_result": {
                "type": "faq",
                "response": "Ask me anything about our organization (e.g., 'How does this organization work?' or 'What projects do you maintain?').",
                "source": "dynamic_web_search"
            },
            "current_task": "faq_handled"
        }
🤖 Prompt for AI Agents
In backend/app/agents/devrel/nodes/handlers/faq.py around lines 12 to 17, the
code assigns latest_message from state messages or context but does not handle
the case when latest_message is empty. Add a guard to check if latest_message is
empty or None, and if so, return a helpful prompt message instead of proceeding
to call the pipeline with an empty query. This prevents unnecessary processing
and improves user experience.

Comment on lines +18 to 20
    # Dynamic FAQ processing (replaces static faq_tool.get_response)
    faq_response = await _dynamic_faq_process(latest_message, search_tool, llm, org_name="Devr.AI")


🛠️ Refactor suggestion

Avoid hard-coded org name; make it configurable

Hard-coding "Devr.AI" limits reuse. Derive from state/context or settings with a sensible fallback.

-from app.agents.state import AgentState
+from app.agents.state import AgentState
+from app.core.config import settings
@@
-    # Dynamic FAQ processing (replaces static faq_tool.get_response)
-    faq_response = await _dynamic_faq_process(latest_message, search_tool, llm, org_name="Devr.AI")
+    # Dynamic FAQ processing (replaces static faq_tool.get_response)
+    org_name = (state.context.get("org_name") if getattr(state, "context", None) else None) \
+        or getattr(settings, "org_name", None) \
+        or "Devr.AI"
+    faq_response = await _dynamic_faq_process(latest_message, search_tool, llm, org_name=org_name)
📝 Committable suggestion


Suggested change:

from app.agents.state import AgentState
from app.core.config import settings

    # Dynamic FAQ processing (replaces static faq_tool.get_response)
    org_name = (state.context.get("org_name") if getattr(state, "context", None) else None) \
        or getattr(settings, "org_name", None) \
        or "Devr.AI"
    faq_response = await _dynamic_faq_process(latest_message, search_tool, llm, org_name=org_name)
🤖 Prompt for AI Agents
In backend/app/agents/devrel/nodes/handlers/faq.py around lines 18 to 20, the
org_name parameter is hard-coded as "Devr.AI" in the call to
_dynamic_faq_process. Modify the code to obtain the org_name dynamically from
the current state, context, or configuration settings, and provide a sensible
default fallback if none is available, instead of using a fixed string.

Comment on lines +98 to +105
    # Prepare search results context
    results_context = ""
    for i, result in enumerate(search_results[:5]):  # Top 5 results
        title = result.get('title', 'N/A')
        content = result.get('content', 'N/A')
        url = result.get('url', 'N/A')
        results_context += f"\nResult {i+1}:\nTitle: {title}\nContent: {content}\nURL: {url}\n"


🛠️ Refactor suggestion

Bound prompt size and sanitize result content to prevent token bloat

Cap per-result content length, collapse whitespace, and fall back to snippet when content is missing.

-    results_context = ""
-    for i, result in enumerate(search_results[:5]):  # Top 5 results
-        title = result.get('title', 'N/A')
-        content = result.get('content', 'N/A')
-        url = result.get('url', 'N/A')
-        results_context += f"\nResult {i+1}:\nTitle: {title}\nContent: {content}\nURL: {url}\n"
+    results_context = ""
+    for i, result in enumerate(search_results[:5]):  # Top 5 results
+        title = result.get('title', 'N/A')
+        content = result.get('content') or result.get('snippet') or 'N/A'
+        # Normalize whitespace and trim overly long blobs
+        content = ' '.join(str(content).split())
+        if len(content) > 800:
+            content = content[:797] + "..."
+        url = result.get('url', 'N/A')
+        results_context += (
+            f"\nResult {i+1}:\nTitle: {title}\nContent: {content}\nURL: {url}\n"
+        )
📝 Committable suggestion


Suggested change:

    # Prepare search results context
    results_context = ""
    for i, result in enumerate(search_results[:5]):  # Top 5 results
        title = result.get('title', 'N/A')
        content = result.get('content') or result.get('snippet') or 'N/A'
        # Normalize whitespace and trim overly long blobs
        content = ' '.join(str(content).split())
        if len(content) > 800:
            content = content[:797] + "..."
        url = result.get('url', 'N/A')
        results_context += (
            f"\nResult {i+1}:\nTitle: {title}\nContent: {content}\nURL: {url}\n"
        )
🤖 Prompt for AI Agents
In backend/app/agents/devrel/nodes/handlers/faq.py around lines 98 to 105, the
current code appends full content from search results which can cause token
bloat in prompts. To fix this, limit the length of the content string to a
reasonable maximum, sanitize it by collapsing multiple whitespace characters
into single spaces, and if the content is missing or empty, use the 'snippet'
field as a fallback. This will keep the prompt size bounded and cleaner.

Comment on lines +20 to +24
    # Validate state first
    if not _validate_state(state):
        logger.error(f"Invalid state for session {getattr(state, 'session_id', 'unknown')}")
        return _create_error_response(state, "Invalid state")


🛠️ Refactor suggestion

Good early state validation; strengthen _validate_state to enforce dict context

Early exit is solid. Ensure _validate_state rejects non-dict context to prevent attribute errors downstream.

Apply the _validate_state diff in the dedicated comment below (Lines 228-246).
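A strengthened _validate_state along those lines might look like this (a sketch only; the dedicated comment for lines 228-246 may check additional fields):

def _validate_state(state) -> bool:
    """Reject states that would cause attribute errors downstream."""
    if state is None:
        return False
    # session_id is needed for logging and routing
    if not getattr(state, "session_id", None):
        return False
    # context must be a dict so .get() calls downstream cannot raise
    if not isinstance(getattr(state, "context", None), dict):
        return False
    return True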

🤖 Prompt for AI Agents
In backend/app/agents/devrel/nodes/react_supervisor.py around lines 20 to 24,
the early state validation is good but the _validate_state function should be
enhanced to explicitly check that the state context is a dictionary. Modify
_validate_state to return False if the context is not a dict, preventing attribute
errors later. Implement this change as indicated in the related comment for
lines 228 to 246.

Comment on lines +31 to +49
        tool_results = state.context.get("tool_results", [])
        iteration_count = state.context.get("iteration_count", 0)

        prompt = REACT_SUPERVISOR_PROMPT.format(
            latest_message=latest_message,
            platform=state.platform,
            interaction_count=state.interaction_count,
            iteration_count=iteration_count,
            conversation_history=conversation_history,
            tool_results=json.dumps(tool_results, indent=2) if tool_results else "No previous tool results"
        )
        # Safety check for max iterations
        if iteration_count >= MAX_ITERATIONS:
            logger.warning(f"Max iterations ({MAX_ITERATIONS}) reached for session {state.session_id}")
            return _create_completion_response(state, "Maximum iterations reached")

        response = await llm.ainvoke([HumanMessage(content=prompt)])
        decision = _parse_supervisor_decision(response.content)
        logger.debug(f"Current iteration: {iteration_count}")
        logger.debug(f"Latest message length: {len(latest_message)}")

        logger.info(f"ReAct Supervisor decision: {decision['action']}")
        prompt = REACT_SUPERVISOR_PROMPT.format(
            latest_message=latest_message,
            platform=getattr(state, 'platform', 'unknown'),
            interaction_count=getattr(state, 'interaction_count', 0),
            iteration_count=iteration_count,
            conversation_history=conversation_history,
            tool_results=json.dumps(tool_results, indent=2) if tool_results else "No previous tool results"
        )

⚠️ Potential issue

Harden tool_results serialization and guard token growth

json.dumps can fail on non-serializable objects; also passing the full list can bloat the prompt. Slice and use default=str.

Apply this diff to sanitize and bound tool_results for the prompt:

-        tool_results = state.context.get("tool_results", [])
+        tool_results = state.context.get("tool_results", [])
+        # Use only the last 10 entries to control token usage
+        sanitized_tool_results = tool_results[-10:] if isinstance(tool_results, list) else tool_results

-        prompt = REACT_SUPERVISOR_PROMPT.format(
+        prompt = REACT_SUPERVISOR_PROMPT.format(
             latest_message=latest_message,
             platform=getattr(state, 'platform', 'unknown'),
             interaction_count=getattr(state, 'interaction_count', 0),
             iteration_count=iteration_count,
             conversation_history=conversation_history,
-            tool_results=json.dumps(tool_results, indent=2) if tool_results else "No previous tool results"
+            tool_results=json.dumps(sanitized_tool_results, indent=2, ensure_ascii=False, default=str)
+            if sanitized_tool_results else "No previous tool results"
         )
📝 Committable suggestion


Suggested change:

        tool_results = state.context.get("tool_results", [])
        # Use only the last 10 entries to control token usage
        sanitized_tool_results = tool_results[-10:] if isinstance(tool_results, list) else tool_results
        iteration_count = state.context.get("iteration_count", 0)

        # Safety check for max iterations
        if iteration_count >= MAX_ITERATIONS:
            logger.warning(f"Max iterations ({MAX_ITERATIONS}) reached for session {state.session_id}")
            return _create_completion_response(state, "Maximum iterations reached")

        logger.debug(f"Current iteration: {iteration_count}")
        logger.debug(f"Latest message length: {len(latest_message)}")

        prompt = REACT_SUPERVISOR_PROMPT.format(
            latest_message=latest_message,
            platform=getattr(state, 'platform', 'unknown'),
            interaction_count=getattr(state, 'interaction_count', 0),
            iteration_count=iteration_count,
            conversation_history=conversation_history,
            tool_results=json.dumps(
                sanitized_tool_results, indent=2, ensure_ascii=False, default=str
            ) if sanitized_tool_results else "No previous tool results"
        )
🤖 Prompt for AI Agents
In backend/app/agents/devrel/nodes/react_supervisor.py around lines 31 to 49,
the current code directly serializes the entire tool_results list with
json.dumps, which can fail if tool_results contains non-serializable objects and
can cause prompt bloat. To fix this, limit the number of tool_results items
included by slicing the list to a reasonable length, and use json.dumps with the
default=str parameter to safely serialize any non-serializable objects. This
will harden serialization and prevent excessive token growth in the prompt.

Comment on lines +23 to 26
    # Updated to use search_tool and llm instead of faq_tool for dynamic FAQ
    handler_result = await handle_faq_node(state, search_tool, llm)
    tool_result = handler_result.get("task_result", {})
    return add_tool_result(state, "faq_handler", tool_result)

🛠️ Refactor suggestion

Harden faq_handler_tool_node with error handling

Same rationale as web_search and GitHub tool — contain node failures.

-    # Updated to use search_tool and llm instead of faq_tool for dynamic FAQ
-    handler_result = await handle_faq_node(state, search_tool, llm)
-    tool_result = handler_result.get("task_result", {})
-    return add_tool_result(state, "faq_handler", tool_result)
+    # Updated to use search_tool and llm instead of faq_tool for dynamic FAQ
+    try:
+        handler_result = await handle_faq_node(state, search_tool, llm)
+        tool_result = handler_result.get("task_result", {})
+    except Exception as e:
+        logger.error(f"Error in FAQ handler: {str(e)}")
+        tool_result = {
+            "type": "faq",
+            "error": str(e),
+            "status": "error",
+        }
+    return add_tool_result(state, "faq_handler", tool_result)
📝 Committable suggestion


Suggested change:

    # Updated to use search_tool and llm instead of faq_tool for dynamic FAQ
    try:
        handler_result = await handle_faq_node(state, search_tool, llm)
        tool_result = handler_result.get("task_result", {})
    except Exception as e:
        logger.error(f"Error in FAQ handler: {str(e)}")
        tool_result = {
            "type": "faq",
            "error": str(e),
            "status": "error",
        }
    return add_tool_result(state, "faq_handler", tool_result)
🤖 Prompt for AI Agents
In backend/app/agents/devrel/tool_wrappers.py around lines 23 to 26, the
faq_handler_tool_node lacks error handling which can cause unhandled exceptions.
Wrap the call to handle_faq_node and subsequent processing in a try-except block
to catch any exceptions. On exception, log the error and return a safe fallback
result to ensure the node failure is contained and does not propagate.

Comment on lines 69 to +74
    async def _handle_devrel_message(self, message, triage_result: Dict[str, Any]):
        """This now handles both new requests and follow-ups in threads."""
        try:
            user_id = str(message.author.id)
            thread_id = await self._get_or_create_thread(message, user_id)


🛠️ Refactor suggestion

Consider guarding per-user thread creation to prevent races

If two messages from the same user arrive near-simultaneously, both could attempt to create a thread before active_threads is updated. A lightweight per-user asyncio.Lock around _get_or_create_thread would prevent duplicate threads.

If helpful, I can provide a focused patch adding a per-user lock map and wrapping the method body with async with lock.
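The pattern would be roughly the following (a sketch only; attribute names are illustrative and the existing _get_or_create_thread is assumed unchanged):

import asyncio
from collections import defaultdict

class ThreadLockingMixin:
    """Serializes thread creation per user so near-simultaneous messages cannot race."""

    def __init__(self):
        self._thread_locks = defaultdict(asyncio.Lock)

    async def _get_or_create_thread_locked(self, message, user_id: str):
        async with self._thread_locks[user_id]:
            # Only one coroutine per user reaches the create path at a time
            return await self._get_or_create_thread(message, user_id)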

🤖 Prompt for AI Agents
In backend/integrations/discord/bot.py around lines 69 to 74, there is a risk of
race conditions when two messages from the same user trigger simultaneous thread
creation. To fix this, implement a per-user asyncio.Lock map and wrap the call
to _get_or_create_thread inside an async with block using the lock for that
user. This will ensure only one thread creation attempt per user at a time,
preventing duplicate threads.

Comment on lines +30 to 33
        # Set up cleanup method based on availability
        if TASKS_AVAILABLE:
            self._setup_tasks_cleanup()


⚠️ Potential issue

Undefined method _setup_tasks_cleanup causes runtime error

init calls self._setup_tasks_cleanup(), but no such method exists. This will raise at cog load.

Apply one of:

  • Remove the call and rely on on_ready to start cleanup logic.
  • Or implement _setup_tasks_cleanup to wire the tasks.loop.

Minimal fix:

     def __init__(self, bot: DiscordBot, queue_manager: AsyncQueueManager):
         self.bot = bot
         self.queue = queue_manager
         self._cleanup_task = None
-
-        # Set up cleanup method based on availability
-        if TASKS_AVAILABLE:
-            self._setup_tasks_cleanup()
+        # Cleanup is started in on_ready (tasks loop or manual fallback)
📝 Committable suggestion


Suggested change:

    def __init__(self, bot: DiscordBot, queue_manager: AsyncQueueManager):
        self.bot = bot
        self.queue = queue_manager
        self._cleanup_task = None
        # Cleanup is started in on_ready (tasks loop or manual fallback)
🤖 Prompt for AI Agents
In backend/integrations/discord/cogs.py around lines 30 to 33, the __init__
method calls self._setup_tasks_cleanup(), but this method is not defined,
causing a runtime error. To fix this, either remove the call to
self._setup_tasks_cleanup() and rely on the on_ready event to start the cleanup
logic, or implement the _setup_tasks_cleanup method to properly initialize and
wire the tasks.loop for cleanup. Choose one approach and update the code
accordingly to prevent the runtime error.

Comment on lines +36 to 43
        if TASKS_AVAILABLE:
            if hasattr(self, 'cleanup_expired_tokens') and not self.cleanup_expired_tokens.is_running():
                print("--> Starting the token cleanup task...")
                self.cleanup_expired_tokens.start()
            elif not self._cleanup_task:
                self._cleanup_task = asyncio.create_task(self._manual_cleanup_loop())
                print("--> Starting manual token cleanup task...")


⚠️ Potential issue

Manual cleanup fallback never starts when tasks are unavailable

The manual loop is nested under the TASKS_AVAILABLE branch, so it won’t run when tasks isn’t available. Move the manual path to the else branch.

Suggested fix:

 @commands.Cog.listener()
 async def on_ready(self):
-    if TASKS_AVAILABLE:
-        if hasattr(self, 'cleanup_expired_tokens') and not self.cleanup_expired_tokens.is_running():
-            print("--> Starting the token cleanup task...")
-            self.cleanup_expired_tokens.start()
-        elif not self._cleanup_task:
-            self._cleanup_task = asyncio.create_task(self._manual_cleanup_loop())
-            print("--> Starting manual token cleanup task...")
+    if TASKS_AVAILABLE:
+        if not self.cleanup_expired_tokens.is_running():
+            logger.info("--> Starting the token cleanup task...")
+            self.cleanup_expired_tokens.start()
+    else:
+        if not self._cleanup_task:
+            self._cleanup_task = asyncio.create_task(self._manual_cleanup_loop())
+            logger.info("--> Starting manual token cleanup task...")
📝 Committable suggestion


Suggested change:

    @commands.Cog.listener()
    async def on_ready(self):
        if TASKS_AVAILABLE:
            if not self.cleanup_expired_tokens.is_running():
                logger.info("--> Starting the token cleanup task...")
                self.cleanup_expired_tokens.start()
        else:
            if not self._cleanup_task:
                self._cleanup_task = asyncio.create_task(self._manual_cleanup_loop())
                logger.info("--> Starting manual token cleanup task...")
🤖 Prompt for AI Agents
In backend/integrations/discord/cogs.py around lines 36 to 43, the manual token
cleanup fallback is incorrectly nested inside the TASKS_AVAILABLE condition,
preventing it from running when tasks are unavailable. To fix this, move the
manual cleanup loop code to the else branch of the TASKS_AVAILABLE check so it
runs only when tasks are not available.

"uvicorn (>=0.35.0,<0.36.0)",
"ddgs (>=9.0.2,<10.0.0)",
"discord-py (>=2.5.2,<3.0.0)",
"discord-py (>=2.4.0,<3.0.0)",

💡 Verification agent

🧩 Analysis chain

Lowering discord-py minimum version: verify API compatibility before merging

You’ve lowered the minimum to >=2.4.0. Please verify that all used APIs exist in 2.4.x (notably: app_commands, tasks.loop, Message.create_thread, and Message.interaction_metadata). If any are missing, keep the previous floor or add guards.

Run the quick checks below:

Optionally confirm with docs:


🏁 Script executed:

#!/usr/bin/env bash
# 1) Find usages that are sensitive to discord.py version
rg -n "app_commands|tasks\.loop|create_thread|interaction_metadata" backend/integrations/discord

# 2) Double-check dependency pin normalization (Poetry accepts both forms, but ensure resolution works in your env)
rg -n '"discord-py' pyproject.toml

Length of output: 1261


🌐 Web query:

In discord.py 2.4.x, are the following available: 
- discord.app_commands
- discord.ext.tasks.loop
- discord.Message.create_thread
- discord.Message.interaction_metadata

💡 Result:

Here is the availability of each feature in discord.py 2.4.x:

  • discord.app_commands:
    Available. The discord.app_commands namespace, which implements application commands (slash commands), is present in discord.py 2.0 and later, including 2.4.x[1][6][7]. The documentation references features like discord.app_commands.AppCommandContext as "New in version 2.4"[1].

  • discord.ext.tasks.loop:
    Available. The discord.ext.tasks.loop decorator and the background task feature are present and supported since 1.3, with improvements ("New in version 2.4") documented as well[5][10].

  • discord.Message.create_thread:
    Not Available as a method on Message.
    As of discord.py 2.4.x, there is no create_thread() method directly on the discord.Message object. Threads in discord.py are typically created via the TextChannel.create_thread method, not on the message itself. While you can create a thread from a message, the method to do so is attached to the channel, not the message object[3].
    In other libraries (e.g., Pycord), there might be thread attributes or methods, but in discord.py 2.4.x, you do not call message.create_thread().

  • discord.Message.interaction_metadata:
    Not Available (as of discord.py 2.4.x).
    The Discord API provides an "interaction metadata" object for messages that originate from interactions[4]. In discord.py 2.4.x, there is no documented .interaction_metadata attribute on discord.Message—that is, you cannot access this information directly via a message attribute as you can in some other libraries (e.g., Pycord 2.6+)[3]. Related information about interactions is accessible via other means, but not as a message.interaction_metadata property or field.


Summary Table

Feature | discord.py 2.4.x Support | Notes
discord.app_commands | Yes | Available and featured for app (slash) commands
discord.ext.tasks.loop | Yes | Supported; new features in 2.4.x
discord.Message.create_thread | No | Not a Message method; use TextChannel.create_thread() instead
discord.Message.interaction_metadata | No | Not available as message.interaction_metadata in discord.py 2.4.x

Citations:


Adjust thread creation & interaction metadata for Discord.py 2.4.x

While discord.app_commands and discord.ext.tasks.loop work in 2.4.x, both
Message.create_thread() and Message.interaction_metadata do not exist until later versions. You’ll need to either bump the minimum floor back (e.g. >= 2.6.0) or update the code:

• In backend/integrations/discord/bot.py line 50:
Replace

if message.interaction_metadata is not None:

with a check using the raw interaction payload or guard behind a try/except if you only support newer versions.

• In backend/integrations/discord/bot.py lines 121–122:
Change

thread = await message.create_thread(name=thread_name, auto_archive_duration=60)

to use the channel’s API, e.g.:

thread = await message.channel.create_thread(
    name=thread_name,
    auto_archive_duration=60,
    type=discord.ChannelType.public_thread,
    message=message
)

Adjust your version floor or code accordingly before merging.

🤖 Prompt for AI Agents
In pyproject.toml at line 27, the discord-py version is set to >=2.4.0,<3.0.0,
but Message.create_thread() and Message.interaction_metadata require at least
version 2.6.0. To fix this, either raise the minimum discord-py version to
>=2.6.0 or update backend/integrations/discord/bot.py: at line 50, replace the
check for message.interaction_metadata with a safer alternative using the raw
interaction payload or a try/except block; at lines 121-122, replace
message.create_thread() with message.channel.create_thread(), passing the
appropriate parameters including name, auto_archive_duration,
type=discord.ChannelType.public_thread, and message=message.



Development

Successfully merging this pull request may close these issues.

FEATURE REQUEST: Implement FAQ Handler with Web Search for Organizational Queries
