Apply & update ruff in pre-commit #100
Conversation
Pre-merge checks and finishing touches
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 8
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (9)
agents-core/vision_agents/core/utils/video_forwarder.py (1)
69-79: Fix: avoid CancelledError in done-callback and correct exc_info usage
- Calling task.exception() before checking task.cancelled() can raise CancelledError.
- logging.error(..., exc_info=exc) won’t include the task’s traceback; pass a tuple instead.
     def _task_done(self, task: asyncio.Task) -> None:
         """Callback to remove completed tasks from the set."""
         self._tasks.discard(task)
-        exc = task.exception()
-        if exc:
-            logger.error(
-                "%s: Task failed with exception: %s", self.name, exc, exc_info=exc
-            )
-
-        if task.cancelled():
-            return
+        # Handle cancellation first to avoid raising CancelledError here
+        if task.cancelled():
+            return
+        exc = task.exception()
+        if exc:
+            logger.error(
+                "%s: Task failed with exception: %s",
+                self.name,
+                exc,
+                exc_info=(type(exc), exc, exc.__traceback__),
+            )

agents-core/vision_agents/core/edge/types.py (1)
111-126: Fix audio dtype handling and format metadata
- from_bytes ignores format and always uses int16.
- resample coerces to int16 but preserves self.format, creating mismatched metadata.
Honor format in from_bytes and set format="s16" after resample.
     @classmethod
     def from_bytes(
-        cls, audio_bytes: bytes, sample_rate: int = 16000, format: str = "s16"
+        cls, audio_bytes: bytes, sample_rate: int = 16000, format: str = "s16"
     ) -> "PcmData":
@@
-        audio_array = np.frombuffer(audio_bytes, dtype=np.int16)
-        return cls(samples=audio_array, sample_rate=sample_rate, format=format)
+        if format == "s16":
+            dtype = np.int16
+        elif format == "f32":
+            dtype = np.float32
+        else:
+            logger.warning("Unknown audio format '%s' in PcmData.from_bytes; defaulting to s16", format)
+            dtype = np.int16
+        audio_array = np.frombuffer(audio_bytes, dtype=dtype)
+        return cls(samples=audio_array, sample_rate=sample_rate, format=format)
@@
         return PcmData(
             samples=resampled_samples,
             sample_rate=target_sample_rate,
-            format=self.format,
+            format="s16",
             pts=self.pts,
             dts=self.dts,
             time_base=self.time_base,
         )

Also applies to: 169-176
tests/test_function_calling.py (1)
6-6: Avoid mocking in tests
Tests must not use unittest.mock. Replace patch/Mock with lightweight fakes or fixtures.
I can help refactor these tests to use small fake client classes stubbing just the used methods (e.g., responses.create/messages.create/send_message_stream). As per coding guidelines
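A minimal sketch of what such a fake might look like, assuming an OpenAI-style `responses.create` call that returns an object with an `output_text` attribute; the `FakeResponses`/`FakeClient` names are hypothetical and not part of the repo:

```python
# Hypothetical fake client for tests, replacing unittest.mock.
# The call shape (responses.create returning an object with .output_text)
# is an assumption for illustration only.
class FakeResponses:
    def __init__(self, canned_text: str = "ok"):
        self.canned_text = canned_text
        self.calls: list[dict] = []

    async def create(self, **kwargs):
        # Record the arguments so tests can assert on what the LLM wrapper sent.
        self.calls.append(kwargs)
        return type("FakeResponse", (), {"output_text": self.canned_text})()


class FakeClient:
    def __init__(self, canned_text: str = "ok"):
        self.responses = FakeResponses(canned_text)
```

A test can then construct the LLM under test with `FakeClient()` injected and assert on `fake.responses.calls` instead of patching.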
plugins/gemini/tests/test_realtime_function_calling.py (1)
48-48: Remove @pytest.mark.asyncio decorators.
Per coding guidelines: "Do not use @pytest.mark.asyncio; async support is automatic" for test files. Pytest's async support will handle these test methods without the explicit decorator.
As per coding guidelines.
Apply this diff to remove the decorators:
-    @pytest.mark.asyncio
     async def test_convert_tools_to_provider_format(self):

     @pytest.mark.integration
-    @pytest.mark.asyncio
     async def test_live_function_calling_basic(self, realtime_instance):

     @pytest.mark.integration
-    @pytest.mark.asyncio
     async def test_live_function_calling_error_handling(self, realtime_instance):

     @pytest.mark.integration
-    @pytest.mark.asyncio
     async def test_live_function_calling_multiple_functions(self, realtime_instance):

-    @pytest.mark.asyncio
     async def test_create_config_with_tools(self):

-    @pytest.mark.asyncio
     async def test_create_config_without_tools(self):

Also applies to: 102-102, 153-153, 201-201, 257-257, 277-277
tests/test_stt_base.py (1)
59-59: Remove @pytest.mark.asyncio decorators.
Per coding guidelines: "Do not use @pytest.mark.asyncio; async support is automatic" for test files.
As per coding guidelines.
Apply this diff to remove the decorators from all async test methods:
-@pytest.mark.asyncio
 async def test_validate_pcm_data_valid(mock_stt, valid_pcm_data):

(Repeat for all other @pytest.mark.asyncio-decorated async test functions in this file.)

Also applies to: 65-65, 71-71, 80-80, 89-89, 98-98, 127-127, 155-155, 179-179, 189-189, 221-221, 234-234
agents-core/vision_agents/core/agents/conversation.py (1)
33-36: Don’t overwrite provided timestamp; use timezone-aware UTC.
__post_init__ always resets timestamp and makes it naive. Respect a provided value and default to UTC.
Apply:
     def __post_init__(self):
-        self.id = self.id or str(uuid.uuid4())
-        self.timestamp = datetime.datetime.now()
+        self.id = self.id or str(uuid.uuid4())
+        if self.timestamp is None:
+            self.timestamp = datetime.datetime.now(datetime.timezone.utc)

agents-core/vision_agents/core/vad/events.py (1)
11-16: Type mismatch: Optional annotation needed for audio_data.
audio_data defaults to None but is typed PcmData. Make it optional to satisfy type checkers and ruff.

-    audio_data: PcmData = None
+    audio_data: Optional[PcmData] = None

tests/test_events.py (1)
26-28: Remove @pytest.mark.asyncio; async support is automatic per guidelines.
Project rules: tests shouldn’t use @pytest.mark.asyncio. Please drop it from all tests in this file (and suite); keep @pytest.mark.integration where needed.
Example (apply similarly everywhere):
-@pytest.mark.asyncio
 async def test_register_invalid_event_raises_value_error():
     ...

As per coding guidelines
plugins/gemini/vision_agents/plugins/gemini/gemini_llm.py (1)
212-218: Fix runtime bug: typing.List is invalid in isinstance checks.
Use built-in list, not typing.List, to avoid TypeError.

-        if not isinstance(gemini_input, List):
+        if not isinstance(gemini_input, list):
             gemini_input = [gemini_input]
♻️ Duplicate comments (1)
examples/other_examples/plugins_examples/video_moderation/main.py (1)
88-102: Code duplication noted.
The moderate function is identical to the one in audio_moderation/main.py. This concern has been flagged in the audio_moderation file review with a suggestion to extract the shared logic.
🧹 Nitpick comments (18)
examples/other_examples/plugins_examples/audio_moderation/main.py (1)
44-58: Consider extracting the shared moderate function.
The moderate function is duplicated identically in video_moderation/main.py (lines 88-102). Consider extracting it to a shared utility module to reduce duplication and improve maintainability.
For example, create a shared module:
# examples/other_examples/plugins_examples/common/moderation_utils.py
def moderate(client: Stream, text: str, user_name: str) -> CheckResponse:
    """Moderate a transcript using Stream Moderation.

    This helper is synchronous on purpose so it can be executed in a
    background thread with ``asyncio.to_thread`` from async code without
    blocking the event loop.
    """
    return client.moderation.check(
        config_key="custom:python-ai-test",
        entity_creator_id=user_name,
        entity_id=str(uuid.uuid4()),
        entity_type="transcript",
        moderation_payload=ModerationPayload(texts=[text]),
    ).data

Then import in both files:

from examples.other_examples.plugins_examples.common.moderation_utils import moderate

agents-core/vision_agents/core/utils/utils.py (1)
90-113: Consider moving logger initialization to module level.
The error handling addition is excellent, but initializing the logger inside the function on every call is inefficient. Module-level logger initialization is the standard pattern.
Apply this diff to move the logger to module level:
+logger = logging.getLogger(__name__)
+
+
 def frame_to_png_bytes(frame) -> bytes:
     """
     Convert a video frame to PNG bytes.

     Args:
         frame: Video frame object that can be converted to an image

     Returns:
         PNG bytes of the frame, or empty bytes if conversion fails
     """
-    logger = logging.getLogger(__name__)
     try:

Additionally, confirm that returning empty bytes on failure is the desired behavior rather than propagating the exception to the caller.
agents-core/vision_agents/core/vad/vad.py (1)
293-343: Unify naming and confirm event schema compatibility
- The param is named user but upstream calls pass participant; consider renaming for consistency.
- Verify that VADAudioEvent doesn’t require sample_rate/audio_format/channels; Silero’s version includes them.
Example rename for clarity:
-    async def _flush_speech_buffer(
-        self, user: Optional[Union[Dict[str, Any], Participant]] = None
-    ) -> None:
+    async def _flush_speech_buffer(
+        self, participant: Optional[Union[Dict[str, Any], Participant]] = None
+    ) -> None:
@@
-            user_metadata=user,
+            user_metadata=participant,
@@
-            user_metadata=user,
+            user_metadata=participant,

agents-core/vision_agents/core/mcp/mcp_server_remote.py (1)
51-56: Prefer lazy logging formatting over f-strings (minor perf/log hygiene)
Use parameterized logging to avoid eager interpolation and align with common logging best practices.
- self.logger.info(f"Connecting to remote MCP server at {self.url}") + self.logger.info("Connecting to remote MCP server at %s", self.url) @@ - self.logger.info( - f"Successfully connected to remote MCP server at {self.url} (session: {session_id})" - ) + self.logger.info( + "Successfully connected to remote MCP server at %s (session: %s)", + self.url, + session_id, + ) @@ - self.logger.info( - f"Successfully connected to remote MCP server at {self.url} (session ID unavailable: {e})" - ) + self.logger.info( + "Successfully connected to remote MCP server at %s (session ID unavailable: %s)", + self.url, + e, + ) @@ - self.logger.info( - f"Successfully connected to remote MCP server at {self.url}" - ) + self.logger.info( + "Successfully connected to remote MCP server at %s", self.url + )Also applies to: 82-92
examples/other_examples/07_function_calling_example/gemini_example.py (1)
16-16: Optional: use keyword for model to match other examples
Minor readability/consistency nit.
-    llm = gemini.LLM("gemini-2.0-flash")
+    llm = gemini.LLM(model="gemini-2.0-flash")

agents-core/vision_agents/core/utils/video_forwarder.py (1)
96-106: Optional: align type annotations for frames
recv() returns av.VideoFrame; queue is typed as LatestNQueue[Frame] and next_frame returns av.VideoFrame. Consider consistently using av.VideoFrame for clarity.
Also applies to: 119-125
plugins/xai/vision_agents/plugins/xai/llm.py (3)
147-151: Include plugin_name in CompletedEvent for consistency
Streaming path uses plugin_name="xai", but this branch omits it. Align both for uniform event consumers.
-        self.events.send(
-            LLMResponseCompletedEvent(
-                original=llm_response.original, text=llm_response.text
-            )
-        )
+        self.events.send(
+            LLMResponseCompletedEvent(
+                plugin_name="xai",
+                original=llm_response.original,
+                text=llm_response.text,
+            )
+        )
97-103: Docstring should follow Google style
Add Args and Returns to match repo guidelines.
     async def create_response(
         self, *args: Any, **kwargs: Any
     ) -> LLMResponseEvent[Response]:
-        """
-        create_response gives you full support/access to the native xAI chat.sample() and chat.stream() methods
-        this method wraps the xAI method and ensures we broadcast an event which the agent class hooks into
-        """
+        """
+        Create a response using xAI chat.sample() or chat.stream().
+
+        Args:
+            *args: Positional arguments forwarded to the underlying xAI SDK.
+            **kwargs: Keyword args such as:
+                - input (str): User message text.
+                - instructions (str | None): System instructions to prepend.
+                - model (str): Model name (defaults to self.model).
+                - stream (bool): Whether to stream the response (default: True).
+
+        Returns:
+            LLMResponseEvent[Response]: Normalized response with original xAI payload and text.
+        """
121-124: Initialize llm_response before branching
Minor clarity: define llm_response before the stream flag branch to avoid shadowing.
-        # Get response based on streaming preference
-        if stream:
+        # Get response based on streaming preference
+        llm_response: Optional[LLMResponseEvent[Response]] = None
+        if stream:
-            # Handle streaming response
-            llm_response: Optional[LLMResponseEvent[Response]] = None
+            # Handle streaming response

plugins/deepgram/vision_agents/plugins/deepgram/stt.py (1)
216-219: Signature reflow looks good; consider optional auto-resample
Formatting LGTM. Consider resampling when pcm_data.sample_rate != self.sample_rate using pcm_data.resample(...) to reduce transcription drift.
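A rough sketch of that guard, assuming PcmData.resample(target_sample_rate) behaves as described in the edge/types.py comment above; the function and the send_to_deepgram callable are illustrative, not the plugin's actual API:

```python
from typing import Callable

# Sketch only: normalize incoming PCM to the STT's expected rate before
# handing the bytes to Deepgram.
def prepare_audio(pcm_data, target_rate: int, send_to_deepgram: Callable[[bytes], None]) -> None:
    if pcm_data.sample_rate != target_rate:
        # Resample once up front so the websocket always receives target_rate audio.
        pcm_data = pcm_data.resample(target_rate)
    send_to_deepgram(pcm_data.samples.tobytes())
```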
plugins/silero/vision_agents/plugins/silero/vad.py (2)
447-460: Unify AudioFormat reference
Use the already imported AudioFormat for consistency.
-                vad.events.VADAudioEvent(
+                vad.events.VADAudioEvent(
                     session_id=self.session_id,
                     plugin_name=self.provider_name,
                     audio_data=speech_data.tobytes(),
                     sample_rate=self.sample_rate,
-                    audio_format=vad.events.AudioFormat.PCM_S16,
+                    audio_format=AudioFormat.PCM_S16,
                     channels=1,
                     duration_ms=duration_ms,
                     speech_probability=avg_speech_prob,
                     frame_count=len(speech_data) // self.frame_size,
                     user_metadata=user,
                 )
581-596: Minor cleanup: remove local numpy import and reuse module-level np
The block above imports numpy inside the function. Use the module-level import to avoid redundant imports.
-                import numpy as np
-
                 current_samples = np.frombuffer(
                     self.speech_buffer, dtype=np.int16
                 ).copy()

agents-core/vision_agents/core/events/manager.py (4)
492-494: Starting background task requires a running loop; add a safe fallback
get_running_loop() raises if called outside a running loop (e.g., instantiated in sync code). Prefer a safer fallback.
-        loop = asyncio.get_running_loop()
-        self._processing_task = loop.create_task(self._process_events_loop())
+        try:
+            loop = asyncio.get_running_loop()
+            self._processing_task = loop.create_task(self._process_events_loop())
+        except RuntimeError:
+            # No running loop; delay start until first send()/wait() call
+            logger.debug("No running event loop; deferring event processing task start")

Optionally, call _start_processing_task() from send() and wait() to ensure it starts when a loop is available.
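If the deferred-start route is taken, send() could lazily start the loop; a sketch assuming _start_processing_task() is idempotent and an internal _event_queue exists (both assumptions for illustration):

```python
import asyncio

def send(self, event) -> None:
    # Hypothetical sketch: (re)start the background task whenever an event is
    # sent from within a running loop.
    if self._processing_task is None or self._processing_task.done():
        try:
            asyncio.get_running_loop()
        except RuntimeError:
            pass  # still no loop; the event just waits in the queue
        else:
            self._start_processing_task()
    self._event_queue.put_nowait(event)
```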
196-199: Cancel-and-forget leaves pending task warnings; add cleanup
After cancel(), consider ensuring the task is awaited or cleaned up to avoid “Task was destroyed” warnings.
-        if em._processing_task and not em._processing_task.done():
-            em._processing_task.cancel()
+        if em._processing_task and not em._processing_task.done():
+            em._processing_task.cancel()
+            em._processing_task.add_done_callback(lambda _t: None)

Or make merge async and await the cancelled task with contextlib.suppress(asyncio.CancelledError).
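The async-merge variant could look roughly like this (a sketch; the merge signature is assumed):

```python
import asyncio
import contextlib

async def merge(self, em) -> None:
    # Hypothetical async variant: await the cancelled task so no
    # "Task was destroyed but it is pending" warning is emitted.
    if em._processing_task and not em._processing_task.done():
        em._processing_task.cancel()
        with contextlib.suppress(asyncio.CancelledError):
            await em._processing_task
    # ... continue merging handlers/state from `em` as before ...
```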
349-350: Typo in error message
“seperated” → “separated”.
- "Multiple seperated events per handler are not supported, use Union instead" + "Multiple separated events per handler are not supported, use Union instead"
408-416: Use truncated event strings in logs to reduce noise
Reuse _truncate_event_for_logging for dict/protobuf unknown events to keep logs compact.
- logger.info(f"Event not registered {_truncate_event_for_logging(event)}") + logger.info(f"Event not registered {_truncate_event_for_logging(event)}")And similarly for the protobuf not registered case above.
agents-core/vision_agents/core/agents/conversation.py (1)
161-170: Comments contradict behavior: ID is assigned by Message, not add_message.
Message.__post_init__ sets id, while InMemoryConversation.add_message doesn’t. Update comments for accuracy.
Apply:
-            id=None,  # Will be assigned during add
+            id=None,  # __post_init__ assigns a UUID if not provided

-        # The message now has an ID assigned by the add_message flow
+        # The message now has an ID from Message.__post_init__ (or provider)

-        # Message IDs and user_ids are always set by add_message
+        # Message ID set by Message.__post_init__; user_id set above
102-110: Remove duplicate import; types already imported at module top.
Redundant from google.genai import types inside method. Use the existing module import.

-        if tools_spec:
-            from google.genai import types
+        if tools_spec:
             conv_tools = self._convert_tools_to_provider_format(tools_spec)
             cfg = kwargs.get("config")
             if not isinstance(cfg, types.GenerateContentConfig):
                 cfg = types.GenerateContentConfig()
        proto_list = getattr(self.payload, "publish_options", [])
        return (
            [PublishOption.from_proto(item) for item in proto_list]
            if proto_list
            else None
        )
Revert returning None for empty repeated fields
This rewrite now emits None whenever the repeated protobuf field is empty, whereas the prior list-comprehension returned an empty list. Downstream code that iterates (e.g. for option in event.publish_options) will now raise instead of running zero iterations. Please restore the behaviour to always return a list (possibly empty) here, and audit the other repeated-field helpers changed in the same PR to keep their outputs iterable.
Apply this diff:
- proto_list = getattr(self.payload, "publish_options", [])
- return (
- [PublishOption.from_proto(item) for item in proto_list]
- if proto_list
- else None
- )
+ proto_list = getattr(self.payload, "publish_options", [])
+    return [PublishOption.from_proto(item) for item in proto_list]
🤖 Prompt for AI Agents
In agents-core/vision_agents/core/edge/sfu_events.py around lines 700 to 705,
the helper now returns None for empty repeated protobuf fields which breaks
iteration; change it to always return a list (use an empty list when proto_list
is falsy) so callers can iterate safely, and scan other repeated-field helpers
changed in the same PR to ensure they also return lists (not None) when empty.
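One generic pattern that would keep those helpers consistent (a sketch; the helper name and placement are hypothetical):

```python
from typing import Any, Callable, List, TypeVar

T = TypeVar("T")

def repeated_field(payload: Any, name: str, converter: Callable[[Any], T]) -> List[T]:
    # Always return a list (possibly empty) so callers can iterate safely.
    return [converter(item) for item in getattr(payload, name, [])]
```

Each property could then be `return repeated_field(self.payload, "publish_options", PublishOption.from_proto)`.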
    async def close(self):
        pass
Enforce subclass implementation for close()
A no-op close() can hide resource leaks. Raise NotImplementedError or document a no-op explicitly.
Apply:
async def close(self):
- pass
+        raise NotImplementedError("Connection.close() must be implemented by subclasses")
🤖 Prompt for AI Agents
In agents-core/vision_agents/core/edge/types.py around lines 34 to 35, the
current async close() is a no-op which can mask resource leaks; change it to
enforce subclass implementation by raising NotImplementedError (or, if
intentional, add an explicit docstring stating it's a no-op and safe to leave
unimplemented). Update the method to raise NotImplementedError so subclasses
must implement cleanup, and add a short docstring explaining the required
behavior and expected cleanup responsibilities.
        if hasattr(event, "pcm_data") and hasattr(event.pcm_data, "samples"):
            # Replace the full array with a summary
            samples = event.pcm_data.samples
            array_summary = f"array([{samples[0]}, {samples[1]}, ..., {samples[-1]}], dtype={samples.dtype}, size={len(samples)})"
            event_str = event_str.replace(str(samples), array_summary)
Fix IndexError risk in audio logging when samples length < 3
Accessing samples[0], samples[1], samples[-1] can fail for small arrays. Guard for short/empty arrays.
- if hasattr(event, "pcm_data") and hasattr(event.pcm_data, "samples"):
- # Replace the full array with a summary
- samples = event.pcm_data.samples
- array_summary = f"array([{samples[0]}, {samples[1]}, ..., {samples[-1]}], dtype={samples.dtype}, size={len(samples)})"
- event_str = event_str.replace(str(samples), array_summary)
+ if hasattr(event, "pcm_data") and hasattr(event.pcm_data, "samples"):
+ samples = event.pcm_data.samples
+ # Build a safe summary even for very short arrays
+ try:
+ n = len(samples) # works for numpy arrays and sequences
+ except Exception:
+ n = 0
+ if n >= 3:
+ head = f"{samples[0]}, {samples[1]}, ..., {samples[-1]}"
+ elif n == 2:
+ head = f"{samples[0]}, {samples[1]}"
+ elif n == 1:
+ head = f"{samples[0]}"
+ else:
+ head = ""
+ dtype = getattr(samples, "dtype", type(samples).__name__)
+ array_summary = f"array([{head}], dtype={dtype}, size={n})"
+ event_str = event_str.replace(str(samples), array_summary)
🤖 Prompt for AI Agents
In agents-core/vision_agents/core/events/manager.py around lines 35 to 40, the
code indexes samples[0], samples[1], and samples[-1] which will raise IndexError
for empty or very short arrays; update the logic to first check samples length
and handle cases: if len==0 produce "array([], dtype=..., size=0)"; if len==1
produce "array([v0], dtype=..., size=1)"; if len==2 produce "array([v0, v1],
dtype=..., size=2)"; otherwise use the existing summary format using safe
accesses for first, second, and last elements and the dtype/size metadata.
Ensure you never access out-of-range indices and that the summary is used to
replace the full samples string.
    def __init__(self):
        super().__init__("example-server")

    async def get_tools(self):
        """Return available tools."""
Fix wrong MCPBaseServer initialization and missing abstract methods
- super().__init__("example-server") passes a str where MCPBaseServer expects a session_timeout: float.
- ExampleMCPServer doesn’t implement abstract connect()/disconnect(), so instantiation will fail.
- Method should be list_tools to align with MCPBaseServer, not get_tools.
class ExampleMCPServer(MCPBaseServer):
"""Example MCP server that provides weather information."""
def __init__(self):
- super().__init__("example-server")
+ super().__init__(session_timeout=300.0)
+
+ async def connect(self) -> None:
+ """Connect to the (example) MCP server."""
+ self._is_connected = True
+
+ async def disconnect(self) -> None:
+ """Disconnect from the (example) MCP server."""
+ self._is_connected = False
- async def get_tools(self):
+ async def list_tools(self):
"""Return available tools."""
return [
{
"name": "get_weather",
"description": "Get current weather for a location",
"parameters": {
"type": "object",
"properties": {
"location": {"type": "string", "description": "City name"}
},
"required": ["location"],
},
}
]

Additionally, consider aligning call_tool’s return shape with your MCP types if consumers expect a structured result.
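For the call_tool shape, something along these lines might be what consumers expect (purely illustrative; the result type and field names are assumptions, not the project's MCP types):

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class ToolResult:
    name: str
    content: Any
    is_error: bool = False

async def call_tool(self, name: str, arguments: dict) -> ToolResult:
    # Hypothetical structured return for the example server's single tool.
    if name == "get_weather":
        location = arguments.get("location", "unknown")
        return ToolResult(name=name, content=f"Sunny in {location}")
    return ToolResult(name=name, content=f"Unknown tool: {name}", is_error=True)
```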
🤖 Prompt for AI Agents
In examples/other_examples/plugins_examples/mcp/main.py around lines 32-36, the
class initializer and methods are incorrect: replace
super().__init__("example-server") with a call that passes a numeric
session_timeout (e.g., super().__init__(session_timeout=30.0) or another
appropriate float) so MCPBaseServer gets the expected type; implement the
abstract async connect(self) and async disconnect(self) methods (provide minimal
working implementations that establish/tear down whatever session/resources your
example needs); rename get_tools() to list_tools() so the method signature
matches MCPBaseServer and return the expected tool list type; and review
call_tool’s return value to ensure it matches the MCP types/shape consumers
expect (adjust to structured result if required).
                    self.events.send(
                        LLMResponseChunkEvent(
                            plugin_name="antrhopic",
                            content_index=delta_event.index,
                            item_id="",
                            output_index=0,
                            sequence_number=0,
                            delta=delta_event.delta.text,
                        )
                    )
Critical: Typo remains unfixed.
Line 363 still contains plugin_name="antrhopic" instead of "anthropic". The AI summary indicates this typo was fixed, but it persists in the code. This will cause event routing issues as the plugin name won't match the expected value.
Apply this diff to fix the typo:
self.events.send(
LLMResponseChunkEvent(
- plugin_name="antrhopic",
+ plugin_name="anthropic",
content_index=delta_event.index,
item_id="",
output_index=0,
sequence_number=0,
delta=delta_event.delta.text,
)
)
🤖 Prompt for AI Agents
In plugins/anthropic/vision_agents/plugins/anthropic/anthropic_llm.py around
lines 361 to 370, the LLMResponseChunkEvent is created with
plugin_name="antrhopic" (typo); change the value to plugin_name="anthropic" so
the plugin name matches expected routing; update the string only and keep the
rest of the event fields unchanged.
| """Event emitted when an LLM encounters an error.""" | ||
| type: str = field(default='plugin.llm.error', init=False) | ||
|
|
||
| type: str = field(default="plugin.llm.error", init=False) |
Restore LLM error event type string.
LLMErrorEvent.type now points to "plugin.llm.error", dropping the openai namespace. Our event dispatcher indexes handlers by the exact type literal (see EventManager.subscribe), so this change silently decouples existing subscribers expecting "plugin.openai.llm_error". Please revert to the original literal.
- type: str = field(default="plugin.llm.error", init=False)
+ type: str = field(default="plugin.openai.llm_error", init=False)📝 Committable suggestion
🤖 Prompt for AI Agents
In plugins/openai/vision_agents/plugins/openai/events.py around line 19, the
LLMErrorEvent.type field was changed to "plugin.llm.error" which breaks existing
subscribers expecting the original "plugin.openai.llm_error"; revert the default
value of the type field back to the original literal "plugin.openai.llm_error"
so the event dispatcher matches existing handlers exactly.
| """Test LLM function calling functionality.""" | ||
|
|
||
| @pytest.mark.asyncio |
Remove @pytest.mark.asyncio — async support is automatic
Drop these markers across the file.
Example edit:
-@pytest.mark.asyncio
 async def test_llm_function_registration(self):

Repeat for all listed lines.
As per coding guidelines
Also applies to: 145-145, 168-168, 238-238, 310-310, 355-355, 397-397, 421-421, 483-483, 515-515
🤖 Prompt for AI Agents
In tests/test_function_calling.py around lines 131, 145, 168, 238, 310, 355,
397, 421, 483, and 515, remove the explicit @pytest.mark.asyncio decorators from
those test functions because async support is automatic; for each listed line
delete the decorator line above the async test definition so the async test
remains as async def test_... without the marker, and run the test suite to
confirm no regressions.
@pytest.mark.skip(reason="Conversation class has not fully been wired into Agent yet")
@pytest.mark.asyncio
Remove @pytest.mark.asyncio — async support is automatic
Drop these markers to comply with the test style rules.
Apply:
-@pytest.mark.asyncio
 async def test_agent_conversation_updates_with_realtime():

Repeat similarly for the other occurrences on the listed lines.
As per coding guidelines
Also applies to: 270-270, 295-295, 301-301, 321-321, 369-369
🤖 Prompt for AI Agents
In tests/test_realtime_base.py around lines 93, 270, 295, 301, 321, and 369, the
tests include explicit @pytest.mark.asyncio markers which are unnecessary
because async support is enabled automatically; remove those decorators on each
listed line (and any duplicate occurrences) so the tests rely on built-in async
support and adhere to the project test style guidelines.
I can rebase & update & resolve conflicts when necessary 🙌🏻
I found the repo thanks to José Ramalho reaching out to me over LinkedIn. I was looking to inspect the repo and test the tool, but I realized the pre-commit hooks are outdated and not enforced in CI.

I applied only ruff fixes and updated the pre-commit hook, but it still created a huge diff. If this gets merged, I can create 2 more PRs for pre-commit-hooks and local, then add a CI workflow to enforce the pre-commit checks - similar to #1 or #2.

Because this PR only focuses on the ruff fixes and update, and CodeRabbit was generating a misleading summary, I deleted the automated AI summary from the PR description.