feat: debugging assistant #326
base: development
Conversation
@CodeRabbit full review
Walkthrough

This pull request introduces several new files and functionalities that integrate Streamlit with a callback handler for dynamic UI updates based on language model interactions. It includes a debugging assistant for ROS 2 queries, a set of functions for executing ROS 2 commands safely, and modifications to a model initialization function to accept additional parameters. The changes enhance user interaction and provide structured command execution with safety checks.
Actionable comments posted: 11
🧹 Outside diff range and nitpick comments (11)
src/rai/rai/tools/debugging_assistant.py (2)
30-38: Consider improving configurability and error handling.

The function could benefit from the following improvements:
- Move hardcoded values ("complex_model" and system prompt) to configuration
- Add error handling for model initialization
- Enhance the system prompt with more specific ROS 2 debugging context
Consider applying this refactor:
```diff
+from typing import Optional
+from rai.config import get_config  # Assuming config module exists
+
+def initialize_graph(
+    model_type: str = "complex_model",
+    system_prompt: Optional[str] = None
+) -> Any:
+    try:
+        config = get_config()
+        model_type = config.get("model_type", model_type)
+        default_prompt = """You are a helpful assistant specialized in ROS 2 debugging.
+        You can help with:
+        - Inspecting nodes, topics, services, and actions
+        - Analyzing message types and interfaces
+        - Troubleshooting communication issues
+        - Monitoring system state
+        Please provide clear explanations and relevant commands."""
+        system_prompt = config.get("system_prompt", system_prompt or default_prompt)
+
+        llm = get_llm_model(model_type=model_type, streaming=True)
         agent = create_conversational_agent(
             llm,
             [ros2_topic, ros2_interface, ros2_node, ros2_service, ros2_action],
-            system_prompt="You are a helpful assistant that can answer questions about ROS 2.",
+            system_prompt=system_prompt,
         )
         return agent
+    except Exception as e:
+        st.error(f"Failed to initialize debugging assistant: {str(e)}")
+        raise
```
54-64: Consider adding rate limiting for message processing.

The message processing loop could benefit from rate limiting to prevent UI lag with large message histories.
Consider implementing message batching or pagination:
```diff
+import time
+
+MAX_MESSAGES_PER_BATCH = 20
+
 prompt = st.chat_input()

-for msg in st.session_state.messages:
+# Display messages in batches
+start_idx = max(0, len(st.session_state.messages) - MAX_MESSAGES_PER_BATCH)
+for msg in st.session_state.messages[start_idx:]:
     if isinstance(msg, AIMessage):
         if msg.content:
             st.chat_message("assistant").write(msg.content)
     elif isinstance(msg, HumanMessage):
         st.chat_message("user").write(msg.content)
     elif isinstance(msg, ToolMessage):
         with st.sidebar.expander(f"Tool: {msg.name}", expanded=False):
             st.code(msg.content, language="json")
+    time.sleep(0.01)  # Prevent UI freezing
+
+if start_idx > 0:
+    st.info(f"Showing last {MAX_MESSAGES_PER_BATCH} messages. {start_idx} older messages are hidden.")
```

src/rai/rai/utils/model_initialization.py (1)
97-97: Consider documenting supported kwargs.

Since this function is a core utility, it would be helpful to document commonly used kwargs (e.g., streaming, callbacks) in the function's docstring to guide users.
Example docstring addition:
```diff
 def get_llm_model(
     model_type: Literal["simple_model", "complex_model"], vendor: str = None, **kwargs
 ):
+    """Get an LLM model instance based on the specified type and vendor.
+
+    Args:
+        model_type: Type of model to instantiate ("simple_model" or "complex_model")
+        vendor: Optional vendor override. If None, uses the configured default
+        **kwargs: Additional arguments passed to the model constructor, such as:
+            - streaming: Enable streaming responses
+            - callbacks: List of callback handlers for model events
+    """
```

src/rai/rai/tools/ros/debugging.py (5)
18-18: Ensure compatibility with Python versions below 3.8

The `typing.Literal` type hint was introduced in Python 3.8. If your codebase needs to support earlier Python versions, consider importing `Literal` from the `typing_extensions` package instead.
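For reference, a version-guarded import along these lines is the usual pattern (a sketch, not code from the PR; the `Ros2Command` alias is hypothetical):

```python
import sys

# Use the stdlib Literal on Python >= 3.8; fall back to the
# typing_extensions backport on older interpreters.
if sys.version_info >= (3, 8):
    from typing import Literal
else:
    from typing_extensions import Literal

# Hypothetical command whitelist, in the spirit of the PR's Literal usage.
Ros2Command = Literal["topic", "service", "node", "action", "interface"]
```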
25-35: Simplify `run_with_timeout` using `subprocess.run` with a timeout

The current implementation uses `Popen` with `Timer` to enforce a timeout, which can be complex and may not handle child processes correctly. Python's `subprocess.run` function supports a `timeout` parameter that raises a `TimeoutExpired` exception if the process exceeds the specified time.

Consider refactoring `run_with_timeout` to use `subprocess.run` with the `timeout` parameter for cleaner and more reliable timeout handling. Additionally, handle the `TimeoutExpired` exception to provide meaningful feedback.

Here's a suggested change:

```diff
+import subprocess
+from subprocess import PIPE, TimeoutExpired
 ...
-def run_with_timeout(cmd: str, timeout_sec: int):
-    command = shlex.split(cmd)
-    proc = Popen(command, stdout=PIPE, stderr=PIPE)
-    timer = Timer(timeout_sec, proc.kill)
-    try:
-        timer.start()
-        stdout, stderr = proc.communicate()
-        return stdout, stderr
-    finally:
-        timer.cancel()
+def run_with_timeout(cmd_list: List[str], timeout_sec: int):
+    try:
+        result = subprocess.run(
+            cmd_list, stdout=PIPE, stderr=PIPE, timeout=timeout_sec
+        )
+        return result.stdout, result.stderr
+    except TimeoutExpired:
+        return b'', f'Command timed out after {timeout_sec} seconds'.encode()
```
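As a standalone illustration of that suggestion, a `subprocess.run`-based helper behaves as follows (hypothetical sketch, not the PR's code; `sys.executable` is used so the demo is portable):

```python
import sys
import subprocess
from subprocess import PIPE, TimeoutExpired

def run_with_timeout(cmd_list, timeout_sec):
    """Run a command list; on timeout, report via stderr instead of raising."""
    try:
        result = subprocess.run(cmd_list, stdout=PIPE, stderr=PIPE, timeout=timeout_sec)
        return result.stdout, result.stderr
    except TimeoutExpired:
        # subprocess.run kills the child before re-raising TimeoutExpired.
        return b"", f"Command timed out after {timeout_sec} seconds".encode()

# A fast command completes normally.
out, _ = run_with_timeout([sys.executable, "-c", "print('ok')"], timeout_sec=10)
# A slow command is killed and the timeout is reported via stderr.
_, err = run_with_timeout([sys.executable, "-c", "import time; time.sleep(30)"], timeout_sec=1)
```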
54-54: Return structured data instead of string

Currently, `run_command` returns `str(output)`, which serializes the dictionary to a string that's not easily parsed. Consider returning the dictionary itself or converting it to JSON for better usability downstream.

Suggestion:

```diff
-    return str(output)
+    return output
```

Or, to return a JSON-formatted string:

```diff
+import json
 ...
-    return str(output)
+    return json.dumps(output)
```
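To see why this matters downstream, compare the two serializations (the dictionary values are made up for illustration):

```python
import json

# A result dictionary of the shape run_command might build (values illustrative).
output = {"stdout": "/talker\n/listener", "stderr": ""}

as_repr = str(output)         # Python repr: single quotes, not valid JSON
as_json = json.dumps(output)  # round-trippable JSON string
parsed = json.loads(as_json)  # consumers can parse this back into a dict
```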
15-15: Remove unused import `shlex`

After refactoring `run_with_timeout` and `run_command`, the `shlex` module may no longer be necessary. If it's not used elsewhere, consider removing it to clean up the imports.
25-55: Add unit tests for command execution functions

To ensure `run_with_timeout` and `run_command` behave as expected, including handling of timeouts and exceptions, consider adding unit tests covering various scenarios and edge cases.

src/rai/rai/agents/integrations/streamlit.py (3)
36-107: Consider defining `StreamHandler` at the module level for better maintainability.

Defining the `StreamHandler` class inside the `get_streamlit_cb` function may reduce code readability and reusability. It's advisable to define classes at the module level to improve organization and allow for easier testing and reuse.
110-111: Use standard naming conventions for `TypeVar`.

The `TypeVar` name `fn_return_type` should follow the convention of using uppercase letters. Consider renaming it to `T` for consistency with Python's typing standards.
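The conventional spelling looks like this (generic sketch, unrelated to the PR's actual callback code):

```python
from typing import Callable, TypeVar

T = TypeVar("T")  # single uppercase letter, per PEP 484 convention

def passthrough(fn: Callable[..., T]) -> Callable[..., T]:
    """Identity decorator; the TypeVar ties the wrapper's return type to fn's."""
    def wrapper(*args, **kwargs) -> T:
        return fn(*args, **kwargs)
    return wrapper

@passthrough
def double(x: int) -> int:
    return x * 2
```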
159-162: Add type annotations to `streamlit_invoke` function parameters.

For improved readability and type safety, consider adding type annotations to the `streamlit_invoke` function parameters: `graph`, `messages`, and `callables`. This will enhance code clarity and help with static analysis tools.
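An annotated signature in that spirit might look like the following; the graph API (`invoke` taking a messages dict and a callbacks config) is an assumption based on typical LangGraph usage, not the PR's actual code:

```python
from typing import Any, Callable, Dict, List

def streamlit_invoke(
    graph: Any,                 # compiled graph; typed loosely here
    messages: List[Any],        # conversation history forwarded to the graph
    callables: List[Callable],  # callback handlers (e.g. Streamlit stream handlers)
) -> Dict[str, Any]:
    """Invoke the graph with the message history and the given callbacks attached."""
    return graph.invoke({"messages": messages}, config={"callbacks": callables})
```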
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
📒 Files selected for processing (4)

- src/rai/rai/agents/integrations/streamlit.py (1 hunks)
- src/rai/rai/tools/debugging_assistant.py (1 hunks)
- src/rai/rai/tools/ros/debugging.py (1 hunks)
- src/rai/rai/utils/model_initialization.py (2 hunks)
🔇 Additional comments (5)
src/rai/rai/tools/debugging_assistant.py (2)
1-28: LGTM! Well-organized imports and proper licensing.

The code follows good practices with clear organization of imports and includes proper licensing information.
20-26: Verify security measures in ROS 2 tool implementations.

The imported ROS 2 tools should be checked for proper command sanitization and execution safety.
✅ Verification successful
The previous output showed command execution patterns but we need to verify the security measures more thoroughly by checking for forbidden characters and command argument handling.
Security measures are properly implemented in ROS 2 tools

The code implements several important security measures:

- Uses a list of `FORBIDDEN_CHARACTERS` to block shell operators and dangerous characters
- Validates commands before execution using explicit character checks
- Utilizes `shlex.split()` for safe command tokenization
- Implements timeout mechanism for command execution
- Uses `subprocess.Popen` with specific streams (PIPE) rather than shell=True
- Restricts commands to specific literal types, preventing arbitrary command injection
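A condensed sketch of that validation pattern (the character list and helper name are illustrative, not copied from the PR):

```python
import shlex

# Shell metacharacters that could chain, substitute, or redirect commands.
FORBIDDEN_CHARACTERS = ["&", "|", ";", ">", "<", "`", "$", "(", ")"]

def validate_command(cmd: str) -> list:
    """Reject commands containing shell metacharacters, then tokenize safely."""
    for ch in FORBIDDEN_CHARACTERS:
        if ch in cmd:
            raise ValueError(f"Forbidden character {ch!r} in command: {cmd}")
    return shlex.split(cmd)  # tokenized list, never passed through a shell

tokens = validate_command("ros2 topic list")
```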
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```shell
#!/bin/bash
# Description: Check security measures in ROS 2 tool implementations

# Look for command execution patterns in the tools
echo "Checking for command execution patterns..."
rg -A 5 "subprocess|shell|exec|run|popen" src/rai/rai/tools/ros/debugging.py

# Look for input sanitization
echo "Checking for input sanitization..."
rg -A 5 "sanitize|clean|escape|validate" src/rai/rai/tools/ros/debugging.py
```

Length of output: 3430
Script:

```shell
#!/bin/bash
# Let's check for security-related constants and validation logic
echo "Checking for security constants and validation..."
rg -A 10 "FORBIDDEN|forbidden|unsafe|safe" src/rai/rai/tools/ros/debugging.py

# Let's also check how the command arguments are handled
echo "Checking command argument handling..."
ast-grep --pattern 'def run_command($$$) { $$$ }'
```

Length of output: 1260
src/rai/rai/utils/model_initialization.py (2)
97-97: LGTM! Good enhancement to function flexibility.

The addition of `**kwargs` maintains backward compatibility while allowing for additional configuration options needed by the debugging assistant.
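The backward-compatibility point reduces to the following sketch (`FakeChatModel` stands in for a vendor class such as ChatOpenAI; this is not the PR's code):

```python
class FakeChatModel:
    """Stand-in for a vendor chat model class (e.g. ChatOpenAI)."""
    def __init__(self, model: str, **kwargs):
        self.model = model
        self.options = kwargs

def get_llm_model(model_type: str, vendor: str = None, **kwargs):
    """Existing call sites pass only model_type; new ones may add options."""
    return FakeChatModel(model=model_type, **kwargs)

legacy = get_llm_model("complex_model")                     # old call site, unchanged
streaming = get_llm_model("complex_model", streaming=True)  # new option flows through
```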
113-113: Verify consistent kwargs usage across the codebase.

The kwargs propagation to model instantiation is implemented consistently across all vendors. However, let's verify that the calling code uses consistent parameter names.

Also applies to: 120-120, 125-125
✅ Verification successful
kwargs usage is consistent across the codebase

The verification shows that kwargs are handled consistently:

- In `model_initialization.py`, kwargs are properly propagated to `ChatOpenAI` and `ChatOllama` instances
- Only one call to `get_llm_model` with kwargs is found in `debugging_assistant.py`, using standard parameter `streaming=True`
- Direct instantiations in tests and docs use consistent parameter names (`model`, `model_id`, `base_url`) matching the vendor-specific requirements
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:

```shell
#!/bin/bash
# Description: Check for consistent parameter usage in get_llm_model calls
# Look for any calls to get_llm_model with kwargs to ensure parameter consistency

# Search for get_llm_model calls with named parameters
rg -A 2 "get_llm_model\([^)]*," --type python

# Search for ChatOpenAI, ChatBedrock, and ChatOllama instantiations to cross-reference parameter names
echo "=== Direct Model Instantiations ==="
rg -A 2 "Chat(OpenAI|Bedrock|Ollama)\(" --type python
```

Length of output: 246
Script:

```shell
#!/bin/bash
# Let's try without file type filtering since it's not supported

# Search for get_llm_model calls with named parameters
rg -A 2 "get_llm_model\([^)]*,"

# Search for Chat model instantiations
echo "=== Direct Model Instantiations ==="
rg -A 2 "Chat(OpenAI|Bedrock|Ollama)\("

# Also search for the model initialization function definition
echo "=== Model Initialization Function ==="
rg -A 10 "def get_llm_model"
```

Length of output: 3302
src/rai/rai/agents/integrations/streamlit.py (1)

23-23: Ensure code licensing and attribution compliance.

The comment indicates that code has been sourced from this repository. Please ensure that the code's license is compatible with the project's Apache License 2.0 and that proper attribution is provided as required by the original license.
Purpose
This PR adds a new ROS 2 debugging assistant with an interactive Streamlit interface. ROS 2 is a complex distributed robotics middleware with many moving parts (nodes, topics, services, actions, etc.) that can be challenging to debug and monitor. This assistant provides an intuitive chat interface for:
Proposed Changes
Issues
Testing
Summary by CodeRabbit
Release Notes
New Features
Enhancements
Bug Fixes