Dynamic faq handler #124
Conversation
Walkthrough

This update refactors the FAQ handler to use a dynamic, multi-step process leveraging web search and an LLM for organizational queries. It changes tool node parameters, updates the workflow, and introduces robust error handling and validation throughout relevant modules. Discord integration and dependency management are also adjusted for compatibility.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant FAQHandler
    participant LLM
    participant WebSearch
    User->>FAQHandler: Submit organizational query
    FAQHandler->>LLM: Refine query for organization-specific search
    LLM-->>FAQHandler: Refined query
    FAQHandler->>WebSearch: Perform web search with refined query
    WebSearch-->>FAQHandler: Search results
    FAQHandler->>LLM: Synthesize answer from search results
    LLM-->>FAQHandler: Synthesized answer
    FAQHandler->>User: Respond with answer and source links
```
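In Python terms, the flow above could be sketched roughly as follows. This is an illustrative outline only; helper names such as `search_tool.search` and the prompt wording are assumptions, not the PR's exact API.

```python
from langchain_core.messages import HumanMessage


async def dynamic_faq_process(question: str, search_tool, llm, org_name: str) -> dict:
    """Illustrative outline of the refine -> search -> synthesize flow."""
    # 1) Refine the user question into an organization-specific search query.
    refine_prompt = f"Rewrite this question as a web search query about {org_name}: {question}"
    refined_query = (await llm.ainvoke([HumanMessage(content=refine_prompt)])).content.strip()

    # 2) Run the web search with the refined query (search_tool.search is an assumed method).
    results = await search_tool.search(refined_query)

    # 3) Synthesize an answer from the top results, keeping source URLs for attribution.
    context = "\n".join(
        f"- {r.get('title', 'N/A')}: {r.get('content', '')} ({r.get('url', '')})"
        for r in results[:5]
    )
    answer_prompt = (
        f"Using only the search results below, answer the question '{question}' "
        f"and cite the source links.\n{context}"
    )
    answer = (await llm.ainvoke([HumanMessage(content=answer_prompt)])).content
    return {"type": "faq", "response": answer, "source": "dynamic_web_search"}
```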
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Actionable comments posted: 15
🔭 Outside diff range comments (3)
backend/app/core/handler/faq_handler.py (1)
12-27: Static FAQ map likely obsolete; align with new dynamic FAQ flow

This handler still returns canned answers, which diverges from the new dynamic, search-backed FAQ process. Consider deprecating or delegating to the dynamic pipeline to avoid dual behavior.
Options:
- Deprecate this class (e.g., rename to LegacyFAQHandler) and route all FAQ events through the dynamic node.
- Or, adapt this handler to invoke the new dynamic FAQ function and reserve the map only as a last-resort fallback (a rough sketch follows below). I can draft a migration patch if helpful.
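For the second option, a rough sketch of the delegation, assuming the dynamic flow is exposed as the async `handle_faq_node(state, search_tool, llm)` referenced elsewhere in this review and that the legacy answers live in a module-level `FAQ_MAP` (both names are assumptions):

```python
# Hypothetical delegation sketch; handle_faq_node and FAQ_MAP names are assumptions.
import logging

from app.agents.devrel.nodes.handlers.faq import handle_faq_node

logger = logging.getLogger(__name__)

# Legacy canned answers, kept only as a last-resort fallback.
FAQ_MAP = {
    "how do i contribute": "See CONTRIBUTING.md in the repository root.",
}


class FAQHandler:
    def __init__(self, search_tool, llm):
        self.search_tool = search_tool
        self.llm = llm

    async def handle(self, state) -> dict:
        try:
            # Primary path: the dynamic, search-backed FAQ flow.
            result = await handle_faq_node(state, self.search_tool, self.llm)
            return result.get("task_result", {})
        except Exception as exc:
            logger.warning("Dynamic FAQ failed, using static fallback: %s", exc)
            question = ""
            if getattr(state, "messages", None):
                question = state.messages[-1].get("content", "").lower().strip()
            return {
                "type": "faq",
                "response": FAQ_MAP.get(question, "Sorry, I couldn't find an answer to that."),
                "source": "static_fallback",
            }
```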
backend/app/agents/devrel/agent.py (1)
31-31: Remove unused FAQTool import and instantiation

The FAQTool is no longer used after the dynamic FAQ refactor—drop both its import and instantiation to keep the agent lean.

• In backend/app/agents/devrel/agent.py:
- Remove the import on line 9
- Remove the self.faq_tool = FAQTool() on line 31

Suggested diff:

```diff
--- a/backend/app/agents/devrel/agent.py
+++ b/backend/app/agents/devrel/agent.py
@@
-from .tools.faq_tool import FAQTool
@@ class DevRelAgent(BaseAgent):
-        self.search_tool = DuckDuckGoSearchTool()
-        self.faq_tool = FAQTool()
-        self.github_toolkit = GitHubToolkit()
+        self.search_tool = DuckDuckGoSearchTool()
+        self.github_toolkit = GitHubToolkit()
```

backend/app/agents/devrel/tool_wrappers.py (1)
11-17: Add error handling parity for web_search_tool_node

Mirror the GitHub tool's try/except so failures don't derail the graph and errors are surfaced consistently.

```diff
 async def web_search_tool_node(state: AgentState, search_tool, llm) -> Dict[str, Any]:
     """Execute web search tool and add result to ReAct context"""
     logger.info(f"Executing web search tool for session {state.session_id}")
-
-    handler_result = await handle_web_search_node(state, search_tool, llm)
-    tool_result = handler_result.get("task_result", {})
-    return add_tool_result(state, "web_search", tool_result)
+    try:
+        handler_result = await handle_web_search_node(state, search_tool, llm)
+        tool_result = handler_result.get("task_result", {})
+    except Exception as e:
+        logger.error(f"Error in web search: {str(e)}")
+        tool_result = {
+            "type": "web_search",
+            "error": str(e),
+            "status": "error",
+        }
+    return add_tool_result(state, "web_search", tool_result)
```
🧹 Nitpick comments (11)
backend/integrations/discord/bot.py (1)
91-96: Avoid per-call construction of priority map

Minor perf/readability: make the priority map a class-level constant instead of recreating it on every message.

Here's a small refactor:

```diff
 class DiscordBot(commands.Bot):
     """Discord bot with LangGraph agent integration"""
+    PRIORITY_MAP = {
+        "high": QueuePriority.HIGH,
+        "medium": QueuePriority.MEDIUM,
+        "low": QueuePriority.LOW,
+    }
@@
-        priority_map = {"high": QueuePriority.HIGH,
-                        "medium": QueuePriority.MEDIUM,
-                        "low": QueuePriority.LOW
-                        }
-        priority = priority_map.get(triage_result.get("priority"), QueuePriority.MEDIUM)
+        priority = self.PRIORITY_MAP.get(triage_result.get("priority"), QueuePriority.MEDIUM)
```

backend/main.py (1)
49-57: Disabling cog loading disables slash commands and cleanup tasks

With the extension loading commented out, neither the DevRelCommands (slash commands) nor the token cleanup task will be registered or run. If this is intentional for a phased rollout, consider gating via an env flag so prod keeps functionality.
- Confirm there is an alternate path that loads the cogs at runtime; otherwise, users will lose /verify_github, /reset, etc.
- If intentional, note the operational impact (no periodic cleanup) and whether there’s a separate TTL-based cleanup on the backend.
I can add an env-driven toggle (e.g., DISCORD_LOAD_COGS=true) that conditionally loads the extension and logs clearly when disabled.
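A possible shape for that toggle, as a sketch; the DISCORD_LOAD_COGS variable name and the extension path are assumptions:

```python
import logging
import os

logger = logging.getLogger(__name__)


async def maybe_load_discord_cogs(bot) -> None:
    """Load the Discord cogs extension only when explicitly enabled via env."""
    if os.getenv("DISCORD_LOAD_COGS", "false").lower() in ("1", "true", "yes"):
        # discord.py 2.x: load_extension is a coroutine.
        await bot.load_extension("integrations.discord.cogs")
        logger.info("Discord cogs loaded (DISCORD_LOAD_COGS enabled)")
    else:
        logger.warning(
            "Discord cogs disabled; slash commands and the token cleanup task will not run"
        )
```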
backend/integrations/discord/cogs.py (2)
51-60: Prefer logger over print for task lifecycle messages

Use the module logger for consistency and observability.

```diff
-        print("--> Running token cleanup task...")
+        logger.info("--> Running token cleanup task...")
@@
-        print("--> Token cleanup task finished.")
+        logger.info("--> Token cleanup task finished.")
```
66-78: Manual cleanup loop: switch prints to logger and handle graceful cancel

Logging improvement; also log on cancel to aid debugging.

```diff
-        while True:
+        while True:
             try:
                 await asyncio.sleep(300)  # 5 minutes
-                print("--> Running manual token cleanup task...")
+                logger.info("--> Running manual token cleanup task...")
                 await cleanup_expired_tokens()
-                print("--> Manual token cleanup task finished.")
+                logger.info("--> Manual token cleanup task finished.")
             except asyncio.CancelledError:
-                break
+                logger.info("Manual token cleanup task cancelled.")
+                break
```
61-69: Trim overly long snippets and normalize whitespace for cleaner responses

Long raw snippets can overwhelm users. Normalize whitespace, fall back to snippet when content is missing, and truncate to a sane length.

```diff
-    for i, result in enumerate(results[:5]):
-        title = result.get('title', 'N/A')
-        snippet = result.get('content', 'N/A')
-        url = result.get('url', '#')
-        response_parts.append(f"{i+1}. {title}: {snippet}")
-        response_parts.append(f" (Source: {url})")
+    for i, result in enumerate(results[:5]):
+        title = result.get('title', 'N/A')
+        snippet = result.get('content') or result.get('snippet') or 'N/A'
+        # Collapse whitespace/newlines and truncate long snippets
+        snippet = ' '.join(str(snippet).split())
+        if len(snippet) > 240:
+            snippet = snippet[:237] + "..."
+        url = result.get('url', '#')
+        response_parts.append(f"{i+1}. {title}: {snippet}")
+        response_parts.append(f" (Source: {url})")
```
44-44: Use lazy logging formatting to avoid unnecessary string building

Minor improvement: use logger formatting instead of concatenation.

```diff
-    logger.info("Question: " + question + ", Response: " + response)
+    logger.info("Question: %s, Response: %s", question, response)
```
90-93: Normalize refined query output and strip wrapping quotes

LLMs sometimes wrap output in quotes. Strip them to avoid degrading search quality.

```diff
-    refined_query = response.content.strip()
+    refined_query = (response.content or "").strip()
+    # Remove unnecessary wrapping quotes/backticks
+    if (refined_query.startswith('"') and refined_query.endswith('"')) or \
+       (refined_query.startswith("'") and refined_query.endswith("'")) or \
+       (refined_query.startswith("`") and refined_query.endswith("`")):
+        refined_query = refined_query[1:-1].strip()
     logger.info(f"Refined query: {refined_query}")
```
40-66: Optional: prefer domain-scoped search to official sources when available

To bias results toward official properties, consider injecting site: filters (e.g., website/docs/blog/GitHub org) based on configuration.

Example approach (a rough sketch follows the list):
- Add org domains to settings (e.g., settings.org_domains = ["devr.ai", "blog.devr.ai", "github.com/devr-ai"])
- Append ‘site:devr.ai OR site:blog.devr.ai OR site:github.com/devr-ai’ to the refined query before calling search.
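A minimal sketch of that query scoping, assuming the domain list comes from configuration (the org_domains name is an assumption):

```python
# Hypothetical: bias the refined query toward official org properties via site: filters.
def scope_query_to_org(refined_query: str, org_domains: list[str]) -> str:
    """Append a site: filter clause so results favor the organization's own domains."""
    if not org_domains:
        return refined_query
    site_clause = " OR ".join(f"site:{domain}" for domain in org_domains)
    return f"{refined_query} ({site_clause})"


# Example usage:
# scope_query_to_org("Devr.AI contribution guide", ["devr.ai", "github.com/AOSSIE-Org"])
# -> 'Devr.AI contribution guide (site:devr.ai OR site:github.com/AOSSIE-Org)'
```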
backend/app/agents/devrel/nodes/react_supervisor.py (3)
12-16: Consider using a set for VALID_ACTIONS and confirm the action list is exhaustive for the new FAQ flow

Membership checks are frequent; a set communicates intent and provides O(1) membership. Also verify no new actions (e.g., intermediate FAQ steps) are missing.
Would you like me to convert VALID_ACTIONS to a set and add tests asserting routing only permits declared actions?
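For illustration, the conversion might look like this; the concrete action names are assumptions drawn from the flow described in this review:

```python
# Hypothetical: a frozenset gives O(1) membership checks and signals a fixed action vocabulary.
VALID_ACTIONS: frozenset = frozenset({
    "web_search",
    "faq_handler",
    "github_toolkit",
    "complete",
})


def is_valid_action(action: str) -> bool:
    """Routing should only ever follow declared actions."""
    return action in VALID_ACTIONS
```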
137-142: Iteration guard off-by-one; align with node check

react_supervisor_node stops at >= MAX_ITERATIONS. Router uses >, allowing one extra routing step. Use >= for consistency.

```diff
-    if iteration_count > MAX_ITERATIONS:
+    if iteration_count >= MAX_ITERATIONS:
         logger.warning(f"Max iterations reached for session {state.session_id}")
         return "complete"
```
198-205: Log full traceback when latest message extraction fails

Add exc_info for faster triage.

```diff
-    except Exception as e:
-        logger.error(f"Error getting latest message: {e}")
+    except Exception as e:
+        logger.error(f"Error getting latest message: {e}", exc_info=True)
         return ""
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
poetry.lock is excluded by !**/*.lock
📒 Files selected for processing (10)
- backend/app/agents/devrel/agent.py (1 hunks)
- backend/app/agents/devrel/nodes/handlers/faq.py (1 hunks)
- backend/app/agents/devrel/nodes/handlers/web_search.py (2 hunks)
- backend/app/agents/devrel/nodes/react_supervisor.py (1 hunks)
- backend/app/agents/devrel/tool_wrappers.py (1 hunks)
- backend/app/core/handler/faq_handler.py (1 hunks)
- backend/integrations/discord/bot.py (5 hunks)
- backend/integrations/discord/cogs.py (5 hunks)
- backend/main.py (3 hunks)
- pyproject.toml (1 hunks)
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-06-28T23:14:21.089Z
Learnt from: smokeyScraper
PR: AOSSIE-Org/Devr.AI#87
File: backend/app/api/v1/health.py:1-12
Timestamp: 2025-06-28T23:14:21.089Z
Learning: In the Devr.AI project, the application is designed to run from the backend directory, making import paths like `from main import DevRAIApplication` correct for the current setup. Import path adjustments for deployment will be configured later during the deployment process.
Applied to files:
backend/main.py
🧬 Code Graph Analysis (2)
backend/app/agents/devrel/agent.py (1)
backend/app/agents/devrel/tool_wrappers.py (1)
faq_handler_tool_node(19-26)
backend/app/core/handler/faq_handler.py (1)
backend/app/core/events/enums.py (1)
EventType(10-32)
🔇 Additional comments (8)
backend/app/agents/devrel/nodes/handlers/web_search.py (1)
54-54: Whitespace cleanup — no functional change

No behavioral difference. Safe to merge.
Also applies to: 64-64
backend/app/core/handler/faq_handler.py (1)
4-4: Remove unused import — LGTM

Import cleanup is correct and reduces noise.
backend/app/agents/devrel/agent.py (1)
46-47: Correctly passes search_tool and llm to FAQ node

Signature matches the refactor; routing remains intact.
backend/app/agents/devrel/tool_wrappers.py (1)
19-26: Signature and internal call updated — LGTM

Swapping to (search_tool, llm) aligns with the dynamic FAQ implementation.
backend/app/agents/devrel/nodes/handlers/faq.py (2)
8-11: Dynamic, search-backed FAQ entrypoint — LGTM

Clean integration with search + LLM. Good logging and return schema.
101-104: Verified search result schema

Both search tool implementations normalize results to include the keys the FAQ handler expects:

- In backend/app/agents/devrel/tools/search_tool/ddg.py (lines 31–33), raw DDG fields are mapped to "title", "content" (from body), and "url" (from href).
- In backend/app/agents/devrel/tools/search_tool/tavilly.py (lines 34–36), Tavily results are mapped to "title", "content", and "url".

The FAQ handler's calls to result.get('title'), result.get('content'), and result.get('url') align with these mappings. No changes required.
51-53: Confirm llm interface is a ChatModel; otherwise accept string input

ainvoke with [HumanMessage] assumes a chat model. If llm is a text LLM, this will fail. Either constrain llm type or add a fallback path.
Would you like me to add a small adapter to accept both string and message-list inputs and annotate llm as a ChatModel?
```python
# Example adapter (outside review range, for reference):
async def _ainvoke_chat_or_text(llm, prompt: str):
    try:
        return await llm.ainvoke([HumanMessage(content=prompt)])
    except Exception:
        return await llm.ainvoke(prompt)
```
247-268: Standardized error/completion responses look good

Consistent shape and action defaulting to complete is appropriate.
```python
    latest_message = ""
    if state.messages:
        latest_message = state.messages[-1].get("content", "")
    elif state.context.get("original_message"):
        latest_message = state.context["original_message"]
```
🛠️ Refactor suggestion
Guard against empty user message
If no message is present, return a helpful prompt instead of calling the pipeline with an empty query.
```diff
     latest_message = ""
     if state.messages:
         latest_message = state.messages[-1].get("content", "")
     elif state.context.get("original_message"):
         latest_message = state.context["original_message"]
+    if not (latest_message or "").strip():
+        logger.info("No message found for FAQ; returning guidance")
+        return {
+            "task_result": {
+                "type": "faq",
+                "response": "Ask me anything about our organization (e.g., 'How does this organization work?' or 'What projects do you maintain?').",
+                "source": "dynamic_web_search"
+            },
+            "current_task": "faq_handled"
+        }
```
🤖 Prompt for AI Agents
In backend/app/agents/devrel/nodes/handlers/faq.py around lines 12 to 17, the
code assigns latest_message from state messages or context but does not handle
the case when latest_message is empty. Add a guard to check if latest_message is
empty or None, and if so, return a helpful prompt message instead of proceeding
to call the pipeline with an empty query. This prevents unnecessary processing
and improves user experience.
```python
    # Dynamic FAQ processing (replaces static faq_tool.get_response)
    faq_response = await _dynamic_faq_process(latest_message, search_tool, llm, org_name="Devr.AI")
```
🛠️ Refactor suggestion
Avoid hard-coded org name; make it configurable
Hard-coding "Devr.AI" limits reuse. Derive from state/context or settings with a sensible fallback.
```diff
-from app.agents.state import AgentState
+from app.agents.state import AgentState
+from app.core.config import settings
@@
-    # Dynamic FAQ processing (replaces static faq_tool.get_response)
-    faq_response = await _dynamic_faq_process(latest_message, search_tool, llm, org_name="Devr.AI")
+    # Dynamic FAQ processing (replaces static faq_tool.get_response)
+    org_name = (state.context.get("org_name") if getattr(state, "context", None) else None) \
+        or getattr(settings, "org_name", None) \
+        or "Devr.AI"
+    faq_response = await _dynamic_faq_process(latest_message, search_tool, llm, org_name=org_name)
```
🤖 Prompt for AI Agents
In backend/app/agents/devrel/nodes/handlers/faq.py around lines 18 to 20, the
org_name parameter is hard-coded as "Devr.AI" in the call to
_dynamic_faq_process. Modify the code to obtain the org_name dynamically from
the current state, context, or configuration settings, and provide a sensible
default fallback if none is available, instead of using a fixed string.
```python
    # Prepare search results context
    results_context = ""
    for i, result in enumerate(search_results[:5]):  # Top 5 results
        title = result.get('title', 'N/A')
        content = result.get('content', 'N/A')
        url = result.get('url', 'N/A')
        results_context += f"\nResult {i+1}:\nTitle: {title}\nContent: {content}\nURL: {url}\n"
```
🛠️ Refactor suggestion
Bound prompt size and sanitize result content to prevent token bloat
Cap per-result content length, collapse whitespace, and fall back to snippet when content is missing.
```diff
-    results_context = ""
-    for i, result in enumerate(search_results[:5]):  # Top 5 results
-        title = result.get('title', 'N/A')
-        content = result.get('content', 'N/A')
-        url = result.get('url', 'N/A')
-        results_context += f"\nResult {i+1}:\nTitle: {title}\nContent: {content}\nURL: {url}\n"
+    results_context = ""
+    for i, result in enumerate(search_results[:5]):  # Top 5 results
+        title = result.get('title', 'N/A')
+        content = result.get('content') or result.get('snippet') or 'N/A'
+        # Normalize whitespace and trim overly long blobs
+        content = ' '.join(str(content).split())
+        if len(content) > 800:
+            content = content[:797] + "..."
+        url = result.get('url', 'N/A')
+        results_context += (
+            f"\nResult {i+1}:\nTitle: {title}\nContent: {content}\nURL: {url}\n"
+        )
```
🤖 Prompt for AI Agents
In backend/app/agents/devrel/nodes/handlers/faq.py around lines 98 to 105, the
current code appends full content from search results which can cause token
bloat in prompts. To fix this, limit the length of the content string to a
reasonable maximum, sanitize it by collapsing multiple whitespace characters
into single spaces, and if the content is missing or empty, use the 'snippet'
field as a fallback. This will keep the prompt size bounded and cleaner.
```python
    # Validate state first
    if not _validate_state(state):
        logger.error(f"Invalid state for session {getattr(state, 'session_id', 'unknown')}")
        return _create_error_response(state, "Invalid state")
```
🛠️ Refactor suggestion
Good early state validation; strengthen _validate_state to enforce dict context
Early exit is solid. Ensure _validate_state rejects non-dict context to prevent attribute errors downstream.
Apply the _validate_state diff in the dedicated comment below (Lines 228-246).
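For reference, a stricter check could take roughly this shape (a sketch only; the diff referenced above remains authoritative, and attribute names follow the usage shown elsewhere in this review):

```python
# Hypothetical sketch of a stricter _validate_state.
def _validate_state(state) -> bool:
    """Reject states that are missing a session id or whose context is not a dict."""
    if state is None or not getattr(state, "session_id", None):
        return False
    context = getattr(state, "context", None)
    if not isinstance(context, dict):
        # A non-dict context would break downstream .get() calls.
        return False
    return True
```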
🤖 Prompt for AI Agents
In backend/app/agents/devrel/nodes/react_supervisor.py around lines 20 to 24,
the early state validation is good but the _validate_state function should be
enhanced to explicitly check that the state context is a dictionary. Modify
_validate_state to return False if the state's context is not a dict, preventing attribute
errors later. Implement this change as indicated in the related comment for
lines 228 to 246.
```diff
     tool_results = state.context.get("tool_results", [])
     iteration_count = state.context.get("iteration_count", 0)
 
-    prompt = REACT_SUPERVISOR_PROMPT.format(
-        latest_message=latest_message,
-        platform=state.platform,
-        interaction_count=state.interaction_count,
-        iteration_count=iteration_count,
-        conversation_history=conversation_history,
-        tool_results=json.dumps(tool_results, indent=2) if tool_results else "No previous tool results"
-    )
+    # Safety check for max iterations
+    if iteration_count >= MAX_ITERATIONS:
+        logger.warning(f"Max iterations ({MAX_ITERATIONS}) reached for session {state.session_id}")
+        return _create_completion_response(state, "Maximum iterations reached")
 
-    response = await llm.ainvoke([HumanMessage(content=prompt)])
-    decision = _parse_supervisor_decision(response.content)
+    logger.debug(f"Current iteration: {iteration_count}")
+    logger.debug(f"Latest message length: {len(latest_message)}")
 
-    logger.info(f"ReAct Supervisor decision: {decision['action']}")
+    prompt = REACT_SUPERVISOR_PROMPT.format(
+        latest_message=latest_message,
+        platform=getattr(state, 'platform', 'unknown'),
+        interaction_count=getattr(state, 'interaction_count', 0),
+        iteration_count=iteration_count,
+        conversation_history=conversation_history,
+        tool_results=json.dumps(tool_results, indent=2) if tool_results else "No previous tool results"
+    )
```
Harden tool_results serialization and guard token growth
json.dumps can fail on non-serializable objects; also passing the full list can bloat the prompt. Slice and use default=str.
Apply this diff to sanitize and bound tool_results for the prompt:
```diff
-    tool_results = state.context.get("tool_results", [])
+    tool_results = state.context.get("tool_results", [])
+    # Use only the last 10 entries to control token usage
+    sanitized_tool_results = tool_results[-10:] if isinstance(tool_results, list) else tool_results
 
-    prompt = REACT_SUPERVISOR_PROMPT.format(
+    prompt = REACT_SUPERVISOR_PROMPT.format(
         latest_message=latest_message,
         platform=getattr(state, 'platform', 'unknown'),
         interaction_count=getattr(state, 'interaction_count', 0),
         iteration_count=iteration_count,
         conversation_history=conversation_history,
-        tool_results=json.dumps(tool_results, indent=2) if tool_results else "No previous tool results"
+        tool_results=json.dumps(sanitized_tool_results, indent=2, ensure_ascii=False, default=str)
+        if sanitized_tool_results else "No previous tool results"
     )
```
🤖 Prompt for AI Agents
In backend/app/agents/devrel/nodes/react_supervisor.py around lines 31 to 49,
the current code directly serializes the entire tool_results list with
json.dumps, which can fail if tool_results contains non-serializable objects and
can cause prompt bloat. To fix this, limit the number of tool_results items
included by slicing the list to a reasonable length, and use json.dumps with the
default=str parameter to safely serialize any non-serializable objects. This
will harden serialization and prevent excessive token growth in the prompt.
```python
    # Updated to use search_tool and llm instead of faq_tool for dynamic FAQ
    handler_result = await handle_faq_node(state, search_tool, llm)
    tool_result = handler_result.get("task_result", {})
    return add_tool_result(state, "faq_handler", tool_result)
```
🛠️ Refactor suggestion
Harden faq_handler_tool_node with error handling
Same rationale as web_search and GitHub tool — contain node failures.
```diff
-    # Updated to use search_tool and llm instead of faq_tool for dynamic FAQ
-    handler_result = await handle_faq_node(state, search_tool, llm)
-    tool_result = handler_result.get("task_result", {})
-    return add_tool_result(state, "faq_handler", tool_result)
+    # Updated to use search_tool and llm instead of faq_tool for dynamic FAQ
+    try:
+        handler_result = await handle_faq_node(state, search_tool, llm)
+        tool_result = handler_result.get("task_result", {})
+    except Exception as e:
+        logger.error(f"Error in FAQ handler: {str(e)}")
+        tool_result = {
+            "type": "faq",
+            "error": str(e),
+            "status": "error",
+        }
+    return add_tool_result(state, "faq_handler", tool_result)
```
🤖 Prompt for AI Agents
In backend/app/agents/devrel/tool_wrappers.py around lines 23 to 26, the
faq_handler_tool_node lacks error handling which can cause unhandled exceptions.
Wrap the call to handle_faq_node and subsequent processing in a try-except block
to catch any exceptions. On exception, log the error and return a safe fallback
result to ensure the node failure is contained and does not propagate.
```python
    async def _handle_devrel_message(self, message, triage_result: Dict[str, Any]):
        """This now handles both new requests and follow-ups in threads."""
        try:
            user_id = str(message.author.id)
            thread_id = await self._get_or_create_thread(message, user_id)
```
🛠️ Refactor suggestion
Consider guarding per-user thread creation to prevent races
If two messages from the same user arrive near-simultaneously, both could attempt to create a thread before active_threads is updated. A lightweight per-user asyncio.Lock around _get_or_create_thread would prevent duplicate threads.
If helpful, I can provide a focused patch adding a per-user lock map and wrapping the method body with async with lock.
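A minimal sketch of such a lock map, assuming it would live on the bot instance and wrap the existing _get_or_create_thread call (names here are illustrative):

```python
import asyncio
from collections import defaultdict

# Hypothetical per-user lock map; on the real bot this would be an instance attribute.
_thread_locks: dict = defaultdict(asyncio.Lock)


async def get_or_create_thread_serialized(bot, message, user_id: str):
    """Serialize thread creation per user so near-simultaneous messages from the
    same user cannot both create a thread before active_threads is updated."""
    async with _thread_locks[user_id]:
        return await bot._get_or_create_thread(message, user_id)
```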
🤖 Prompt for AI Agents
In backend/integrations/discord/bot.py around lines 69 to 74, there is a risk of
race conditions when two messages from the same user trigger simultaneous thread
creation. To fix this, implement a per-user asyncio.Lock map and wrap the call
to _get_or_create_thread inside an async with block using the lock for that
user. This will ensure only one thread creation attempt per user at a time,
preventing duplicate threads.
```python
        # Set up cleanup method based on availability
        if TASKS_AVAILABLE:
            self._setup_tasks_cleanup()
```
Undefined method _setup_tasks_cleanup causes runtime error
__init__ calls self._setup_tasks_cleanup(), but no such method exists. This will raise at cog load.
Apply one of:
- Remove the call and rely on on_ready to start cleanup logic.
- Or implement _setup_tasks_cleanup to wire the tasks.loop.
Minimal fix:
```diff
     def __init__(self, bot: DiscordBot, queue_manager: AsyncQueueManager):
         self.bot = bot
         self.queue = queue_manager
         self._cleanup_task = None
-
-        # Set up cleanup method based on availability
-        if TASKS_AVAILABLE:
-            self._setup_tasks_cleanup()
+        # Cleanup is started in on_ready (tasks loop or manual fallback)
```
🤖 Prompt for AI Agents
In backend/integrations/discord/cogs.py around lines 30 to 33, the __init__
method calls self._setup_tasks_cleanup(), but this method is not defined,
causing a runtime error. To fix this, either remove the call to
self._setup_tasks_cleanup() and rely on the on_ready event to start the cleanup
logic, or implement the _setup_tasks_cleanup method to properly initialize and
wire the tasks.loop for cleanup. Choose one approach and update the code
accordingly to prevent the runtime error.
```python
        if TASKS_AVAILABLE:
            if hasattr(self, 'cleanup_expired_tokens') and not self.cleanup_expired_tokens.is_running():
                print("--> Starting the token cleanup task...")
                self.cleanup_expired_tokens.start()
            elif not self._cleanup_task:
                self._cleanup_task = asyncio.create_task(self._manual_cleanup_loop())
                print("--> Starting manual token cleanup task...")
```
Manual cleanup fallback never starts when tasks are unavailable
The manual loop is nested under the TASKS_AVAILABLE branch, so it won’t run when tasks isn’t available. Move the manual path to the else branch.
Suggested fix:
```diff
     @commands.Cog.listener()
     async def on_ready(self):
-        if TASKS_AVAILABLE:
-            if hasattr(self, 'cleanup_expired_tokens') and not self.cleanup_expired_tokens.is_running():
-                print("--> Starting the token cleanup task...")
-                self.cleanup_expired_tokens.start()
-            elif not self._cleanup_task:
-                self._cleanup_task = asyncio.create_task(self._manual_cleanup_loop())
-                print("--> Starting manual token cleanup task...")
+        if TASKS_AVAILABLE:
+            if not self.cleanup_expired_tokens.is_running():
+                logger.info("--> Starting the token cleanup task...")
+                self.cleanup_expired_tokens.start()
+        else:
+            if not self._cleanup_task:
+                self._cleanup_task = asyncio.create_task(self._manual_cleanup_loop())
+                logger.info("--> Starting manual token cleanup task...")
```
🤖 Prompt for AI Agents
In backend/integrations/discord/cogs.py around lines 36 to 43, the manual token
cleanup fallback is incorrectly nested inside the TASKS_AVAILABLE condition,
preventing it from running when tasks are unavailable. To fix this, move the
manual cleanup loop code to the else branch of the TASKS_AVAILABLE check so it
runs only when tasks are not available.
| "uvicorn (>=0.35.0,<0.36.0)", | ||
| "ddgs (>=9.0.2,<10.0.0)", | ||
| "discord-py (>=2.5.2,<3.0.0)", | ||
| "discord-py (>=2.4.0,<3.0.0)", |
💡 Verification agent
🧩 Analysis chain
Lowering discord-py minimum version: verify API compatibility before merging
You’ve lowered the minimum to >=2.4.0. Please verify that all used APIs exist in 2.4.x (notably: app_commands, tasks.loop, Message.create_thread, and Message.interaction_metadata). If any are missing, keep the previous floor or add guards.
Run the quick checks below:
Optionally confirm with docs:
🏁 Script executed:
```bash
#!/usr/bin/env bash
# 1) Find usages that are sensitive to discord.py version
rg -n "app_commands|tasks\.loop|create_thread|interaction_metadata" backend/integrations/discord

# 2) Double-check dependency pin normalization (Poetry accepts both forms, but ensure resolution works in your env)
rg -n '"discord-py' pyproject.toml
```

Length of output: 1261
🌐 Web query:
In discord.py 2.4.x, are the following available:
- discord.app_commands
- discord.ext.tasks.loop
- discord.Message.create_thread
- discord.Message.interaction_metadata
💡 Result:
Here is the availability of each feature in discord.py 2.4.x:
- discord.app_commands:
  Available. The discord.app_commands namespace, which implements application commands (slash commands), is present in discord.py 2.0 and later, including 2.4.x[1][6][7]. The documentation references features like discord.app_commands.AppCommandContext as "New in version 2.4"[1].
- discord.ext.tasks.loop:
  Available. The discord.ext.tasks.loop decorator and the background task feature are present and supported since 1.3, with improvements ("New in version 2.4") documented as well[5][10].
- discord.Message.create_thread:
  Not available as a method on Message.
  As of discord.py 2.4.x, there is no create_thread() method directly on the discord.Message object. Threads in discord.py are typically created via the TextChannel.create_thread method, not on the message itself. While you can create a thread from a message, the method to do so is attached to the channel, not the message object[3].
  In other libraries (e.g., Pycord), there might be thread attributes or methods, but in discord.py 2.4.x, you do not call message.create_thread().
- discord.Message.interaction_metadata:
  Not available (as of discord.py 2.4.x).
  The Discord API provides an "interaction metadata" object for messages that originate from interactions[4]. In discord.py 2.4.x, there is no documented .interaction_metadata attribute on discord.Message—that is, you cannot access this information directly via a message attribute as you can in some other libraries (e.g., Pycord 2.6+)[3]. Related information about interactions is accessible via other means, but not as a message.interaction_metadata property or field.
Summary Table
| Feature | discord.py 2.4.x Support | Notes |
|---|---|---|
| discord.app_commands | Yes | Available and featured for app (slash) commands |
| discord.ext.tasks.loop | Yes | Supported; new features in 2.4.x |
| discord.Message.create_thread | No | Not a Message method; use TextChannel.create_thread() instead |
| discord.Message.interaction_metadata | No | Not available as message.interaction_metadata in discord.py 2.4.x |
Citations:
- 1: https://discordpy.readthedocs.io/en/latest/interactions/api.html
- 2: https://www.youtube.com/watch?v=PgN9U1wBTAg
- 3: https://docs.pycord.dev/en/v2.6.0/api/models.html
- 4: https://discord.com/developers/docs/resources/message
- 5: https://docs.discord4py.dev/en/developer/ext/tasks/
- 6: https://www.pythondiscord.com/pages/guides/python-guides/app-commands/
- 7: app_commands: suppressing "the application did not respond" Rapptz/discord.py#8457
- 8: https://docs.nextcord.dev/en/stable/api.html
- 9: https://github.com/Rapptz/discord.py
- 10: https://discordpy.readthedocs.io/en/stable/ext/tasks/
Adjust thread creation & interaction metadata for Discord.py 2.4.x
While discord.app_commands and discord.ext.tasks.loop work in 2.4.x, both
Message.create_thread() and Message.interaction_metadata do not exist until later versions. You’ll need to either bump the minimum floor back (e.g. >= 2.6.0) or update the code:
• In backend/integrations/discord/bot.py line 50:
Replace if message.interaction_metadata is not None: with a check using the raw interaction payload, or guard it behind a try/except if you only support newer versions.
• In backend/integrations/discord/bot.py lines 121–122:
Change thread = await message.create_thread(name=thread_name, auto_archive_duration=60) to use the channel's API, e.g.:
```python
thread = await message.channel.create_thread(
    name=thread_name,
    auto_archive_duration=60,
    type=discord.ChannelType.public_thread,
    message=message
)
```

Adjust your version floor or code accordingly before merging.
🤖 Prompt for AI Agents
In pyproject.toml at line 27, the discord-py version is set to >=2.4.0,<3.0.0,
but Message.create_thread() and Message.interaction_metadata require at least
version 2.6.0. To fix this, either raise the minimum discord-py version to
>=2.6.0 or update backend/integrations/discord/bot.py: at line 50, replace the
check for message.interaction_metadata with a safer alternative using the raw
interaction payload or a try/except block; at lines 121-122, replace
message.create_thread() with message.channel.create_thread(), passing the
appropriate parameters including name, auto_archive_duration,
type=discord.ChannelType.public_thread, and message=message.
🌟 Feature Implementation: Dynamic FAQ Handler
Problem Solved
Key Features Implemented
✅ Smart Query Routing: ReAct Supervisor intelligently identifies organizational queries
✅ Dynamic Web Search: Real-time information retrieval from organization's public presence
✅ Automated Synthesis: Converts search results into comprehensive, user-friendly responses
✅ Source Attribution: Provides links for users to explore topics further
Testing Results
Successfully tested with query: "How does this organization work?"
Technical Implementation
Ready for review and production deployment.
Closes #97
bandicam.2025-08-10.19-47-44-389.mp4
Summary by CodeRabbit
New Features
Bug Fixes
Chores
Refactor