
feat: debugging assistant #326

Open · wants to merge 9 commits into base: development
Conversation

maciejmajek (Member) commented Nov 28, 2024

Purpose

This PR adds a new ROS 2 debugging assistant with an interactive Streamlit interface. ROS 2 is a complex distributed robotics middleware with many moving parts (nodes, topics, services, actions, etc.) that can be challenging to debug and monitor. This assistant provides an intuitive chat interface for:

  • Inspecting ROS 2 system state
  • Debugging communication issues
  • Monitoring topics, services, and actions
  • Querying node information and parameters
  • Executing common ROS 2 CLI commands safely
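One plausible way to launch the assistant locally, given the file added in this PR (the exact entry point and invocation are an assumption, not documented here):

streamlit run src/rai/rai/tools/debugging_assistant.py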

Proposed Changes

  • Adds new Streamlit-based debugging assistant UI (debugging_assistant.py)
  • Implements ROS 2 CLI tool wrappers for common debugging commands:
      • ros2_topic - Topic inspection and monitoring
      • ros2_service - Service inspection and calls
      • ros2_node - Node information and listing
      • ros2_action - Action server inspection
      • ros2_interface - Interface inspection
      • ros2_param - Parameter management
  • Adds Streamlit integration utilities for real-time UI updates and streaming responses
  • Updates model initialization to support streaming capabilities
  • Implements command safety checks to prevent shell injection
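As an illustration of the safety-check approach, here is a minimal sketch. The names FORBIDDEN_CHARACTERS, run_command, and the Popen/Timer pattern are taken from the review discussion below; the PR's actual implementation may differ in detail.

import shlex
from subprocess import PIPE, Popen
from threading import Timer

# Shell operators and metacharacters rejected before execution (assumed list).
FORBIDDEN_CHARACTERS = [";", "&", "|", ">", "<", "$", "`", "\\"]

def run_command(cmd: str, timeout_sec: int = 5) -> str:
    # Reject anything that could smuggle in a second command.
    for ch in FORBIDDEN_CHARACTERS:
        if ch in cmd:
            raise ValueError(f"Forbidden character {ch!r} in command: {cmd}")
    # Tokenize safely and run without shell=True.
    proc = Popen(shlex.split(cmd), stdout=PIPE, stderr=PIPE)
    timer = Timer(timeout_sec, proc.kill)  # kill the process if it overruns
    try:
        timer.start()
        stdout, stderr = proc.communicate()
    finally:
        timer.cancel()
    return str({"stdout": stdout, "stderr": stderr})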

Issues

  • Links to relevant issues

Testing

  • How was it tested, what were the results?

Summary by CodeRabbit

Release Notes

  • New Features

    • Introduced a Streamlit integration for dynamic UI updates based on language model interactions.
    • Launched a debugging assistant application for ROS 2 queries, featuring a conversational agent and interactive chat interface.
  • Enhancements

    • Improved command execution safety for ROS 2 commands with structured output handling.
    • Expanded the model initialization function to accept additional parameters for enhanced flexibility.
  • Bug Fixes

    • Ensured robust error handling for command inputs in the ROS 2 command execution functions.

@maciejmajek
This comment was marked as resolved.

maciejmajek (Member, Author) commented:

@CodeRabbit full review

coderabbitai bot commented Nov 28, 2024

Walkthrough

This pull request introduces several new files and functionalities that integrate Streamlit with a callback handler for dynamic UI updates based on language model interactions. It includes a debugging assistant for ROS 2 queries, a set of functions for executing ROS 2 commands safely, and modifications to a model initialization function to accept additional parameters. The changes enhance user interaction and provide structured command execution with safety checks.

Changes

src/rai/rai/agents/integrations/streamlit.py
  • Added get_streamlit_cb function for creating a Streamlit callback handler.
  • Introduced StreamHandler class with methods for handling LLM token updates and tool interactions.
  • Added add_streamlit_context decorator function and streamlit_invoke utility function.

src/rai/rai/tools/debugging_assistant.py
  • Added initialize_graph function to set up a conversational agent for ROS 2 queries in a Streamlit app.

src/rai/rai/tools/ros/debugging.py
  • Added functions for executing ROS 2 commands (run_command, run_with_timeout, and various command-specific functions).
  • Implemented safety checks for command execution.

src/rai/rai/utils/model_initialization.py
  • Updated get_llm_model function signature to accept additional keyword arguments for model instantiation.
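
A hypothetical end-to-end sketch of how these pieces fit together, inferred from the walkthrough (module paths and call signatures are assumptions, not verified against the diff):

import streamlit as st
from rai.agents.integrations.streamlit import get_streamlit_cb, streamlit_invoke
from rai.tools.debugging_assistant import initialize_graph

# Build the conversational agent once per session.
if "graph" not in st.session_state:
    st.session_state.graph = initialize_graph()
if "messages" not in st.session_state:
    st.session_state.messages = []

# Bind a callback handler to a UI container and stream the response into it.
st_callback = get_streamlit_cb(st.container())
streamlit_invoke(st.session_state.graph, st.session_state.messages, [st_callback])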

Possibly related PRs

  • feat: implement configuration gui #304: The changes in this PR involve the implementation of a configuration GUI using Streamlit, which is directly related to the new Streamlit integration functionality introduced in the main PR. Both PRs focus on enhancing user interaction through Streamlit, making them closely connected.

Suggested reviewers

  • boczekbartek


coderabbitai bot left a comment

Actionable comments posted: 11

🧹 Outside diff range and nitpick comments (11)
src/rai/rai/tools/debugging_assistant.py (2)

30-38: Consider improving configurability and error handling.

The function could benefit from the following improvements:

  1. Move hardcoded values ("complex_model" and system prompt) to configuration
  2. Add error handling for model initialization
  3. Enhance the system prompt with more specific ROS 2 debugging context

Consider applying this refactor:

+from typing import Any, Optional
+from rai.config import get_config  # Assuming config module exists
+
+def initialize_graph(
+    model_type: str = "complex_model",
+    system_prompt: Optional[str] = None
+) -> Any:
+    try:
+        config = get_config()
+        model_type = config.get("model_type", model_type)
+        default_prompt = """You are a helpful assistant specialized in ROS 2 debugging.
+            You can help with:
+            - Inspecting nodes, topics, services, and actions
+            - Analyzing message types and interfaces
+            - Troubleshooting communication issues
+            - Monitoring system state
+            Please provide clear explanations and relevant commands."""
+        system_prompt = config.get("system_prompt", system_prompt or default_prompt)
+        
         llm = get_llm_model(model_type=model_type, streaming=True)
         agent = create_conversational_agent(
             llm,
             [ros2_topic, ros2_interface, ros2_node, ros2_service, ros2_action],
-            system_prompt="You are a helpful assistant that can answer questions about ROS 2.",
+            system_prompt=system_prompt,
         )
         return agent
+    except Exception as e:
+        st.error(f"Failed to initialize debugging assistant: {str(e)}")
+        raise

54-64: Consider adding rate limiting for message processing.

The message processing loop could benefit from rate limiting to prevent UI lag with large message histories.

Consider implementing message batching or pagination:

+import time
+
+MAX_MESSAGES_PER_BATCH = 20
+
 prompt = st.chat_input()
-for msg in st.session_state.messages:
+# Display messages in batches
+start_idx = max(0, len(st.session_state.messages) - MAX_MESSAGES_PER_BATCH)
+for msg in st.session_state.messages[start_idx:]:
     if isinstance(msg, AIMessage):
         if msg.content:
             st.chat_message("assistant").write(msg.content)
     elif isinstance(msg, HumanMessage):
         st.chat_message("user").write(msg.content)
     elif isinstance(msg, ToolMessage):
         with st.sidebar.expander(f"Tool: {msg.name}", expanded=False):
             st.code(msg.content, language="json")
+    time.sleep(0.01)  # Prevent UI freezing
+
+if start_idx > 0:
+    st.info(f"Showing last {MAX_MESSAGES_PER_BATCH} messages. {start_idx} older messages are hidden.")
src/rai/rai/utils/model_initialization.py (1)

97-97: Consider documenting supported kwargs.

Since this function is a core utility, it would be helpful to document commonly used kwargs (e.g., streaming, callbacks) in the function's docstring to guide users.

Example docstring addition:

 def get_llm_model(
     model_type: Literal["simple_model", "complex_model"], 
     vendor: str = None, 
     **kwargs
 ):
+    """Get an LLM model instance based on the specified type and vendor.
+    
+    Args:
+        model_type: Type of model to instantiate ("simple_model" or "complex_model")
+        vendor: Optional vendor override. If None, uses the configured default
+        **kwargs: Additional arguments passed to the model constructor, such as:
+            - streaming: Enable streaming responses
+            - callbacks: List of callback handlers for model events
+    """
src/rai/rai/tools/ros/debugging.py (5)

18-18: Ensure compatibility with Python versions below 3.8

The typing.Literal type hint was introduced in Python 3.8. If your codebase needs to support earlier Python versions, consider importing Literal from the typing_extensions package instead.
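
A common compatibility shim for this (a sketch; assumes typing_extensions is declared as a dependency for older interpreters):

try:
    from typing import Literal  # Python 3.8+
except ImportError:
    from typing_extensions import Literal  # fallback for Python < 3.8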


25-35: Simplify run_with_timeout using subprocess.run with a timeout

The current implementation uses Popen with Timer to enforce a timeout, which can be complex and may not handle child processes correctly. Python's subprocess.run function supports a timeout parameter that raises a TimeoutExpired exception if the process exceeds the specified time.

Consider refactoring run_with_timeout to use subprocess.run with the timeout parameter for cleaner and more reliable timeout handling. Additionally, handle the TimeoutExpired exception to provide meaningful feedback.

Here's a suggested change:

+import subprocess
+from subprocess import PIPE, TimeoutExpired
+from typing import List
...
-def run_with_timeout(cmd: str, timeout_sec: int):
-    command = shlex.split(cmd)
-    proc = Popen(command, stdout=PIPE, stderr=PIPE)
-    timer = Timer(timeout_sec, proc.kill)
-    try:
-        timer.start()
-        stdout, stderr = proc.communicate()
-        return stdout, stderr
-    finally:
-        timer.cancel()
+def run_with_timeout(cmd_list: List[str], timeout_sec: int):
+    try:
+        result = subprocess.run(
+            cmd_list, stdout=PIPE, stderr=PIPE, timeout=timeout_sec
+        )
+        return result.stdout, result.stderr
+    except TimeoutExpired:
+        return b'', f'Command timed out after {timeout_sec} seconds'.encode()

54-54: Return structured data instead of string

Currently, run_command returns str(output), which serializes the dictionary to a string that's not easily parsed. Consider returning the dictionary itself or converting it to JSON for better usability downstream.

Suggestion:

-    return str(output)
+    return output

Or, to return a JSON-formatted string:

+    import json
...
-    return str(output)
+    return json.dumps(output)

15-15: Remove unused import shlex

After refactoring run_with_timeout and run_command, the shlex module may no longer be necessary. If it's not used elsewhere, consider removing it to clean up the imports.


25-55: Add unit tests for command execution functions

To ensure run_with_timeout and run_command behave as expected, including handling of timeouts and exceptions, consider adding unit tests covering various scenarios and edge cases.
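
For example, a minimal pytest sketch (the module path and the refactored run_with_timeout(cmd_list, timeout_sec) signature are assumptions based on the suggestion above):

from rai.tools.ros.debugging import run_with_timeout

def test_fast_command_completes():
    # A quick command should return its output well within the timeout.
    stdout, stderr = run_with_timeout(["echo", "hello"], timeout_sec=5)
    assert b"hello" in stdout

def test_slow_command_times_out():
    # A command that outlives the timeout should report the timeout on stderr.
    stdout, stderr = run_with_timeout(["sleep", "10"], timeout_sec=1)
    assert b"timed out" in stderr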

src/rai/rai/agents/integrations/streamlit.py (3)

36-107: Consider defining StreamHandler at the module level for better maintainability.

Defining the StreamHandler class inside the get_streamlit_cb function may reduce code readability and reusability. It's advisable to define classes at the module level to improve organization and allow for easier testing and reuse.
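
A minimal sketch of the suggested module-level structure (the container-injection pattern is an assumption; the PR's actual class also handles tool interactions):

from langchain_core.callbacks import BaseCallbackHandler
from streamlit.delta_generator import DeltaGenerator

class StreamHandler(BaseCallbackHandler):
    """Streams LLM tokens into a Streamlit container as they arrive."""

    def __init__(self, container: DeltaGenerator):
        self.container = container
        self.text = ""

    def on_llm_new_token(self, token: str, **kwargs) -> None:
        # Append each token and re-render the accumulated text.
        self.text += token
        self.container.markdown(self.text)

def get_streamlit_cb(container: DeltaGenerator) -> StreamHandler:
    return StreamHandler(container)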


110-111: Use standard naming conventions for TypeVar.

The TypeVar name fn_return_type should follow the convention of using uppercase letters. Consider renaming it to T for consistency with Python's typing standards.
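
i.e., something like:

from typing import TypeVar

T = TypeVar("T")  # rather than: fn_return_type = TypeVar("fn_return_type")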


159-162: Add type annotations to streamlit_invoke function parameters.

For improved readability and type safety, consider adding type annotations to the streamlit_invoke function parameters: graph, messages, and callables. This will enhance code clarity and help with static analysis tools.
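
A possible annotated signature (types inferred from usage; the exact graph type is an assumption):

from typing import Callable, List
from langchain_core.messages import BaseMessage
from langgraph.graph.state import CompiledStateGraph

def streamlit_invoke(
    graph: CompiledStateGraph,
    messages: List[BaseMessage],
    callables: List[Callable],
) -> None:
    ...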

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR, between commits 53b457f and b6c949b.

📒 Files selected for processing (4)
  • src/rai/rai/agents/integrations/streamlit.py (1 hunks)
  • src/rai/rai/tools/debugging_assistant.py (1 hunks)
  • src/rai/rai/tools/ros/debugging.py (1 hunks)
  • src/rai/rai/utils/model_initialization.py (2 hunks)
🔇 Additional comments (5)
src/rai/rai/tools/debugging_assistant.py (2)

1-28: LGTM! Well-organized imports and proper licensing.

The code follows good practices with clear organization of imports and includes proper licensing information.


20-26: Verify security measures in ROS 2 tool implementations.

The imported ROS 2 tools should be checked for proper command sanitization and execution safety.

✅ Verification successful

The previous output showed command execution patterns but we need to verify the security measures more thoroughly by checking for forbidden characters and command argument handling.


Security measures are properly implemented in ROS 2 tools

The code implements several important security measures:

  • Uses a list of FORBIDDEN_CHARACTERS to block shell operators and dangerous characters
  • Validates commands before execution using explicit character checks
  • Utilizes shlex.split() for safe command tokenization
  • Implements timeout mechanism for command execution
  • Uses subprocess.Popen with specific streams (PIPE) rather than shell=True
  • Restricts commands to specific literal types, preventing arbitrary command injection
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check security measures in ROS 2 tool implementations

# Look for command execution patterns in the tools
echo "Checking for command execution patterns..."
rg -A 5 "subprocess|shell|exec|run|popen" src/rai/rai/tools/ros/debugging.py

# Look for input sanitization
echo "Checking for input sanitization..."
rg -A 5 "sanitize|clean|escape|validate" src/rai/rai/tools/ros/debugging.py

Length of output: 3430


Script:

#!/bin/bash
# Let's check for security-related constants and validation logic
echo "Checking for security constants and validation..."
rg -A 10 "FORBIDDEN|forbidden|unsafe|safe" src/rai/rai/tools/ros/debugging.py

# Let's also check how the command arguments are handled
echo "Checking command argument handling..."
ast-grep --pattern 'def run_command($$$) { $$$ }'

Length of output: 1260

src/rai/rai/utils/model_initialization.py (2)

97-97: LGTM! Good enhancement to function flexibility.

The addition of **kwargs maintains backward compatibility while allowing for additional configuration options needed by the debugging assistant.
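
For example, the debugging assistant can now request a streaming model without any other signature change (this call appears in debugging_assistant.py):

llm = get_llm_model(model_type="complex_model", streaming=True)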


113-113: Verify consistent kwargs usage across the codebase.

The kwargs propagation to model instantiation is implemented consistently across all vendors. However, let's verify that the calling code uses consistent parameter names.

Also applies to: 120-120, 125-125

✅ Verification successful

kwargs usage is consistent across the codebase

The verification shows that kwargs are handled consistently:

  • In model_initialization.py, kwargs are properly propagated to ChatOpenAI and ChatOllama instances
  • Only one call to get_llm_model with kwargs is found in debugging_assistant.py, using standard parameter streaming=True
  • Direct instantiations in tests and docs use consistent parameter names (model, model_id, base_url) matching the vendor-specific requirements
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for consistent parameter usage in get_llm_model calls
# Look for any calls to get_llm_model with kwargs to ensure parameter consistency

# Search for get_llm_model calls with named parameters
rg -A 2 "get_llm_model\([^)]*," --type python

# Search for ChatOpenAI, ChatBedrock, and ChatOllama instantiations to cross-reference parameter names
echo "=== Direct Model Instantiations ==="
rg -A 2 "Chat(OpenAI|Bedrock|Ollama)\(" --type python

Length of output: 246


Script:

#!/bin/bash
# Let's try without file type filtering since it's not supported
# Search for get_llm_model calls with named parameters
rg -A 2 "get_llm_model\([^)]*,"

# Search for Chat model instantiations
echo "=== Direct Model Instantiations ==="
rg -A 2 "Chat(OpenAI|Bedrock|Ollama)\("

# Also search for the model initialization function definition
echo "=== Model Initialization Function ==="
rg -A 10 "def get_llm_model"

Length of output: 3302

src/rai/rai/agents/integrations/streamlit.py (1)

23-23: Ensure code licensing and attribution compliance.

The comment indicates that the code has been sourced from an external repository (linked in the source comment). Please ensure that the code's license is compatible with the project's Apache License 2.0 and that proper attribution is provided as required by the original license.

Inline review threads (resolved):
  • src/rai/rai/tools/debugging_assistant.py: 2 threads (outdated)
  • src/rai/rai/tools/ros/debugging.py: 5 threads (outdated)
  • src/rai/rai/agents/integrations/streamlit.py: 3 threads
Labels: none yet · Projects: none yet · 1 participant