[AI-194] Openrouter #117
Conversation
Walkthrough

Adds a new OpenRouter LLM plugin with docs, example, tests, and packaging; refactors the OpenAI LLM to centralize conversation helpers; and updates the development lint check to run Ruff with `--fix`.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant Caller
    participant OpenAILLM
    participant OpenRouterLLM
    participant RemoteAPI
    rect rgb(240,248,255)
    Note over OpenAILLM,OpenRouterLLM: Conversation setup helpers (new)
    end
    Caller->>OpenAILLM: create_response(kwargs)
    OpenAILLM->>OpenAILLM: create_conversation() (ensure)
    OpenAILLM->>OpenAILLM: add_conversation_history(kwargs)
    alt Using OpenRouterLLM
        Caller->>OpenRouterLLM: create_response(kwargs)
        OpenRouterLLM->>OpenRouterLLM: create_conversation() (no-op)
        OpenRouterLLM->>OpenRouterLLM: add_conversation_history(kwargs) (normalize/store)
        OpenRouterLLM->>RemoteAPI: POST request with updated kwargs
        RemoteAPI-->>OpenRouterLLM: response
        OpenRouterLLM-->>Caller: return response
    else Using OpenAILLM (legacy backend)
        OpenAILLM->>RemoteAPI: POST request with conversation id in kwargs
        RemoteAPI-->>OpenAILLM: response/events
        OpenAILLM-->>Caller: return/stream response
    end
```
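Concretely, the OpenRouter branch of the diagram comes down to two small overrides of the OpenAI LLM hooks. A minimal sketch, assuming the plugin import path and the hook names referenced elsewhere in this review:

```python
# Sketch only: mirrors the overrides discussed in this PR; exact signatures are assumptions.
from vision_agents.plugins import openai


class OpenRouterLLM(openai.LLM):
    async def create_conversation(self):
        # OpenRouter has no provider-side conversation storage, so this is a no-op.
        pass

    def add_conversation_history(self, kwargs: dict) -> None:
        # Prepend locally stored messages so the remote model sees prior turns.
        new_messages = kwargs["input"]
        if not isinstance(new_messages, list):
            new_messages = [{"content": new_messages, "role": "user", "type": "message"}]
        if getattr(self, "_conversation", None):
            old_messages = [m.original for m in self._conversation.messages]
            kwargs["input"] = old_messages + new_messages
```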
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes

Possibly related PRs
Suggested reviewers
Poem
Pre-merge checks and finishing touches

❌ Failed checks (2 warnings, 1 inconclusive)
✅ Passed checks (2 passed)
✨ Finishing touches
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
dev.py (1)
11-12: Fix `run()` return typing and docstring to match behavior; keep to Google style.

When `check=False` and the command fails, the function returns `CalledProcessError`, but the return type only declares `CompletedProcess`. Also, add Args/Returns/Raises per Google style. Apply this diff:

```diff
-from typing import Optional
+from typing import Optional, Mapping
@@
-def run(
-    command: str, env: Optional[dict] = None, check: bool = True
-) -> subprocess.CompletedProcess:
-    """Run a shell command with automatic argument parsing."""
+def run(
+    command: str,
+    env: Optional[Mapping[str, str]] = None,
+    check: bool = True,
+) -> subprocess.CompletedProcess | subprocess.CalledProcessError:
+    """Run a shell command with automatic argument parsing.
+
+    Args:
+        command: Command string to execute (parsed with shlex, no shell).
+        env: Extra environment variables to merge into the current environment.
+        check: If True, exit the process on non-zero return code; otherwise return the error.
+
+    Returns:
+        A CompletedProcess on success, or a CalledProcessError if `check` is False and the command fails.
+
+    Raises:
+        SystemExit: If `check` is True and the command exits with a non-zero status.
+    """
```

Also applies to: 16-38
🧹 Nitpick comments (12)
dev.py (2)
103-106: Make ruff auto-fix opt-in for `check()`; full checks should be non-mutating by default.

`ruff check . --fix` rewrites files during a "full check" run. That's surprising and risky if contributors expect a read-only verification step. Recommend gating fixes behind a `--fix` flag. Apply this diff:

```diff
 @cli.command()
-def check():
-    """Run full check: ruff, mypy, and unit tests."""
-    click.echo("Running full development check...")
+@click.option("--fix/--no-fix", default=False, help="Apply auto-fixes during ruff lint.")
+def check(fix: bool = False):
+    """Run full check: ruff, mypy, and unit tests.
+
+    Args:
+        fix: If True, run ruff with --fix to apply auto-fixes.
+    """
+    click.echo("Running full development check...")
@@
-    run("uv run ruff check . --fix")
+    run(f"uv run ruff check .{' --fix' if fix else ''}")
```
70-74: Use `ruff format` for the formatter; the current command runs the linter.

The function is named/echoed as "format" but executes `ruff check --fix`. Switch to the dedicated formatter for clarity. Apply this diff:

```diff
 def format():
-    """Run ruff formatting with auto-fix."""
+    """Run ruff formatter."""
     click.echo("Running ruff format...")
-    run("uv run ruff check --fix")
+    run("uv run ruff format .")
```

plugins/openrouter/README.md (1)
34-34: Explicit API key guidance and inline example.

Spell out the exact env var name and show direct-constructor usage so users don't hunt around. Apply:

````diff
-This plugin uses the OpenAI-compatible API provided by OpenRouter. You'll need to set your OpenRouter API key as an environment variable or pass it directly to the LLM.
+This plugin uses the OpenAI-compatible API provided by OpenRouter. Set your API key as OPENROUTER_API_KEY or pass it directly:
+
+```bash
+export OPENROUTER_API_KEY=sk-or-...
+```
+
+```python
+llm = openrouter.LLM(model="anthropic/claude-haiku-4.5", api_key="sk-or-...")
+```
````

plugins/openrouter/example/pyproject.toml (1)
6-15: Pin examples via a lockfile; keep pyproject lean.

Examples pull the latest of many plugins; reproducibility will vary across machines. Consider adding a uv lock (uv pip compile) and keeping pyproject minimal.
plugins/openrouter/pyproject.toml (2)
26-28: Avoid shipping tests/examples in the wheel.

`packages = ["."]` will include tests and the example folder. Limit to the actual package path. Apply:

```diff
-[tool.hatch.build.targets.wheel]
-packages = ["."]
+[tool.hatch.build.targets.wheel]
+# Only ship the library code
+packages = ["vision_agents"]
+
+# Optionally exclude non-package dirs defensively
+[tool.hatch.build.targets.wheel.force-include]
+# none
```
12-15: Transitive dependency is fine but consider an explicit lower bound.

vision-agents-plugins-openai is required. Add a minimal compatible version once CI proves it, to prevent accidental breaking upgrades.

plugins/openrouter/example/openrouter_example.py (1)

18-24: Docstring: tighten to Google style.

Add Args/Returns for clarity and to match repo guidelines. Apply:

```diff
-async def start_agent() -> None:
-    """Example agent using OpenRouter LLM.
-
-    This example demonstrates how to use the OpenRouter plugin with a Vision Agent.
-    OpenRouter provides access to multiple LLM providers through a unified API.
-
-    Set OPENROUTER_API_KEY environment variable before running.
-    """
+async def start_agent() -> None:
+    """Run an example agent using the OpenRouter LLM.
+
+    Args:
+        None
+
+    Returns:
+        None
+
+    Notes:
+        Set OPENROUTER_API_KEY before running. OpenRouter routes to multiple LLM providers.
+    """
```

plugins/openrouter/tests/test_openrouter_llm.py (1)
90-107: Name nit: use snake_case for locals.

`streamingWorks` -> `streaming_works`. Apply:

```diff
-        streamingWorks = False
+        streaming_works = False
@@
-            nonlocal streamingWorks
-            streamingWorks = True
+            nonlocal streaming_works
+            streaming_works = True
@@
-        assert streamingWorks, "Streaming should have generated chunk events"
+        assert streaming_works, "Streaming should have generated chunk events"
```

plugins/openrouter/vision_agents/plugins/openrouter/openrouter_llm.py (4)
34-41: Fail fast if API key is missing (clear UX).

Without a key, requests will 401 later. Raise early with a helpful message. Apply:

```diff
         if api_key is None:
             api_key = os.environ.get("OPENROUTER_API_KEY")
+        if not api_key:
+            raise ValueError(
+                "OpenRouter API key not found. Set OPENROUTER_API_KEY or pass api_key."
+            )
```
43-46: Add a short docstring and spell "don't".

Minor polish; also clarifies why we no-op. Apply:

```diff
-    async def create_conversation(self):
-        # Do nothing, dont call super
+    async def create_conversation(self):
+        """OpenRouter does not use provider-side conversation IDs."""
+        # Do nothing, don't call super()
         pass
```
47-61: Conversation growth can explode token usage; cap and include assistant turns.

- Each call prepends the entire history, which will grow unbounded.
- Ensure assistant replies are also stored (if the base class doesn't already) so the model sees both sides.
- Optionally set OpenRouter headers for better routing/quotas.

Apply:

```diff
-    def add_conversation_history(self, kwargs):
+    def add_conversation_history(self, kwargs):  # Use the manual storage
         # ensure the AI remembers the past conversation
         # TODO: there are additional formats to support here.
         new_messages = kwargs["input"]
         if not isinstance(new_messages, list):
             new_messages = [dict(content=new_messages, role="user", type="message")]
-        if hasattr(self, '_conversation') and self._conversation:
-            old_messages = [m.original for m in self._conversation.messages]
-            kwargs["input"] = old_messages + new_messages
+        if hasattr(self, "_conversation") and self._conversation:
+            # Optional: cap history to last N messages to control token cost
+            HISTORY_LIMIT = 40
+            old_messages = [m.original for m in self._conversation.messages][-HISTORY_LIMIT:]
+            kwargs["input"] = old_messages + new_messages

         # Add messages to conversation
         normalized_messages = self._normalize_message(new_messages)
         for msg in normalized_messages:
             self._conversation.messages.append(msg)
```

If assistant replies aren't appended elsewhere, consider appending them after create_response completes. I can sketch that hook if needed. Based on learnings.
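One possible shape for that follow-up hook, as a sketch only: it assumes the response object exposes its final text (attribute name is an assumption), and would live on OpenRouterLLM.

```python
# Hypothetical hook (method of OpenRouterLLM): persist the assistant reply after the
# call completes so the next turn's history includes both sides of the exchange.
async def create_response(self, *args, **kwargs):
    response = await super().create_response(*args, **kwargs)
    text = getattr(response, "text", None)  # attribute name assumed
    if getattr(self, "_conversation", None) and text:
        for msg in self._normalize_message(
            [{"content": text, "role": "assistant", "type": "message"}]
        ):
            self._conversation.messages.append(msg)
    return response
```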
18-25: Optional: set OpenRouter identifying headers.

OpenRouter recommends HTTP-Referer and X-Title for better routing/quota accounting.

Follow-up (in the OpenAILLM init path) after `super().__init__()`:

```python
# Example:
if hasattr(self, "client"):
    self.client.default_headers = {
        **getattr(self.client, "default_headers", {}) or {},
        "HTTP-Referer": os.getenv("OPENROUTER_REFERRER", "https://visionagents.ai"),
        "X-Title": os.getenv("OPENROUTER_TITLE", "Vision Agents"),
    }
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (9)
- dev.py (1 hunks)
- plugins/openai/vision_agents/plugins/openai/openai_llm.py (2 hunks)
- plugins/openrouter/README.md (1 hunks)
- plugins/openrouter/example/openrouter_example.py (1 hunks)
- plugins/openrouter/example/pyproject.toml (1 hunks)
- plugins/openrouter/pyproject.toml (1 hunks)
- plugins/openrouter/tests/test_openrouter_llm.py (1 hunks)
- plugins/openrouter/vision_agents/plugins/openrouter/__init__.py (1 hunks)
- plugins/openrouter/vision_agents/plugins/openrouter/openrouter_llm.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (.cursor/rules/python.mdc)
**/*.py: Do not modify sys.path in Python code
Docstrings must follow the Google style guide
Files:
- plugins/openrouter/vision_agents/plugins/openrouter/__init__.py
- plugins/openai/vision_agents/plugins/openai/openai_llm.py
- plugins/openrouter/tests/test_openrouter_llm.py
- plugins/openrouter/example/openrouter_example.py
- dev.py
- plugins/openrouter/vision_agents/plugins/openrouter/openrouter_llm.py
🧬 Code graph analysis (5)
plugins/openrouter/vision_agents/plugins/openrouter/__init__.py (2)
- plugins/openrouter/vision_agents/plugins/openrouter/openrouter_llm.py (1)
  - OpenRouterLLM (8-60)
- agents-core/vision_agents/core/llm/llm.py (1)
  - LLM (36-363)

plugins/openai/vision_agents/plugins/openai/openai_llm.py (2)
- plugins/openrouter/vision_agents/plugins/openrouter/openrouter_llm.py (2)
  - create_conversation (43-45)
  - add_conversation_history (47-60)
- plugins/getstream/vision_agents/plugins/getstream/stream_edge_transport.py (1)
  - create_conversation (224-231)

plugins/openrouter/tests/test_openrouter_llm.py (4)
- agents-core/vision_agents/core/agents/conversation.py (1)
  - InMemoryConversation (230-237)
- agents-core/vision_agents/core/llm/events.py (1)
  - LLMResponseChunkEvent (90-105)
- plugins/openai/vision_agents/plugins/openai/openai_llm.py (3)
  - _normalize_message (351-369)
  - simple_response (80-108)
  - create_response (118-214)
- agents-core/vision_agents/core/events/manager.py (1)
  - wait (470-484)

plugins/openrouter/example/openrouter_example.py (2)
- agents-core/vision_agents/core/edge/types.py (1)
  - User (15-18)
- agents-core/vision_agents/core/agents/agents.py (2)
  - Agent (56-1050)
  - finish (410-430)

plugins/openrouter/vision_agents/plugins/openrouter/openrouter_llm.py (2)
- agents-core/vision_agents/core/llm/llm.py (1)
  - LLM (36-363)
- plugins/openai/vision_agents/plugins/openai/openai_llm.py (4)
  - OpenAILLM (31-518)
  - create_conversation (110-112)
  - add_conversation_history (114-116)
  - _normalize_message (351-369)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: unit / Ruff & mypy
- GitHub Check: unit / Test "not integration"
- GitHub Check: unit / Ruff & mypy
🔇 Additional comments (4)
plugins/openrouter/example/pyproject.toml (1)
17-26: Workspace sources look good. Editable sources match the repo layout and should speed local dev.
plugins/openrouter/vision_agents/plugins/openrouter/__init__.py (1)
2-4: Clean public surface. Alias export is clear and predictable (openrouter.LLM). No nits.
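For context, the export being praised presumably amounts to something like this (module layout assumed, not shown in the diff excerpt):

```python
# plugins/openrouter/vision_agents/plugins/openrouter/__init__.py -- assumed shape
from .openrouter_llm import OpenRouterLLM

# Short alias so callers can write `openrouter.LLM(...)`.
LLM = OpenRouterLLM

__all__ = ["LLM", "OpenRouterLLM"]
```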
plugins/openrouter/tests/test_openrouter_llm.py (1)
60-69: Asyncio mode already enabled

The project's pytest.ini sets `asyncio_mode = auto`, so async fixtures will run correctly without adding module-level markers. No changes needed.

plugins/openrouter/example/openrouter_example.py (1)
43-46: Ignore finish() inside the join context. The `AgentSessionContextManager.__exit__` method schedules only `agent.close()`, not `agent.finish()`, so the explicit `await agent.finish()` call inside the `with` block is required.

Likely an incorrect or invalid review comment.
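A rough sketch of the pattern in question; the join/finish call shapes are assumptions based on the example under review, not a verified API:

```python
# Sketch: the context manager's __exit__ only schedules agent.close(),
# so the session must be awaited explicitly via finish().
async def run(agent, call):
    with await agent.join(call):   # assumed to return AgentSessionContextManager
        await agent.finish()       # required: waits for the session to complete
```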
```python
# create the conversation if needed and add the required args
await self.create_conversation()
self.add_conversation_history(kwargs)
```
Restore input before injecting history
create_response now calls add_conversation_history before guaranteeing kwargs["input"] exists. The OpenRouter override of add_conversation_history immediately reads kwargs["input"], so any call like await llm.create_response("hi") (no explicit input=) now crashes with a KeyError. Please populate kwargs["input"] from args (current logic at Lines 146-151) before invoking add_conversation_history, or make the helper resilient to missing input. Moving the input-defaulting block above this call fixes the regression.
if "stream" not in kwargs:
kwargs["stream"] = True
+ if "input" not in kwargs:
+ input_content = args[0] if args else "Hello"
+ kwargs["input"] = input_content
+
# create the conversation if needed and add the required args
await self.create_conversation()
self.add_conversation_history(kwargs)
-
- # Set up input parameter for OpenAI Responses API
- if "input" not in kwargs:
- input_content = args[0] if args else "Hello"
- kwargs["input"] = input_content📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-# create the conversation if needed and add the required args
-await self.create_conversation()
-self.add_conversation_history(kwargs)
+if "stream" not in kwargs:
+    kwargs["stream"] = True
+if "input" not in kwargs:
+    input_content = args[0] if args else "Hello"
+    kwargs["input"] = input_content
+# create the conversation if needed and add the required args
+await self.create_conversation()
+self.add_conversation_history(kwargs)
```
🤖 Prompt for AI Agents
In plugins/openai/vision_agents/plugins/openai/openai_llm.py around lines 130 to
133, the call to self.add_conversation_history(kwargs) happens before
kwargs["input"] is guaranteed, causing a KeyError in overrides that read
kwargs["input"] immediately; move the existing input-defaulting logic (the block
that copies positional args into kwargs["input"] found at lines ~146-151) to
before await self.create_conversation() and
self.add_conversation_history(kwargs), or alternatively update
add_conversation_history to safely handle a missing "input" key; prefer the
first approach: populate kwargs["input"] from args before calling
add_conversation_history so the history helper always sees an input.
```python
logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s [call_id=%(call_id)s] %(name)s: %(message)s")
logger = logging.getLogger(__name__)
```
Formatter references call_id but most records won’t have it → KeyError.
With this format, any log without record.call_id will crash formatting. Add a filter to set a default.
Apply:
```diff
-import logging
+import logging
@@
-logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s [call_id=%(call_id)s] %(name)s: %(message)s")
+logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s [call_id=%(call_id)s] %(name)s: %(message)s")
+class _CallIdFallback(logging.Filter):
+    def filter(self, record: logging.LogRecord) -> bool:
+        if not hasattr(record, "call_id"):
+            record.call_id = "-"
+        return True
+logging.getLogger().addFilter(_CallIdFallback())
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s [call_id=%(call_id)s] %(name)s: %(message)s")
-logger = logging.getLogger(__name__)
+import logging
+
+logging.basicConfig(
+    level=logging.INFO,
+    format="%(asctime)s %(levelname)s [call_id=%(call_id)s] %(name)s: %(message)s"
+)
+
+class _CallIdFallback(logging.Filter):
+    def filter(self, record: logging.LogRecord) -> bool:
+        if not hasattr(record, "call_id"):
+            record.call_id = "-"
+        return True
+
+logging.getLogger().addFilter(_CallIdFallback())
+
+logger = logging.getLogger(__name__)
```
🤖 Prompt for AI Agents
In plugins/openrouter/example/openrouter_example.py around lines 13 to 15, the
logging formatter references record.call_id which will raise KeyError for
records without that attribute; add a logging.Filter (or small callable) that
ensures every LogRecord has a call_id attribute (e.g., set record.call_id =
getattr(record, "call_id", "-")) and attach that filter to the root logger or
the handler used by basicConfig before setting the formatter so logs without
call_id do not crash formatting.
```
    api_key: OpenRouter API key. If not provided, uses OPENROUTER_API_KEY env var.
    base_url: OpenRouter API base URL.
    model: Model to use. Defaults to openai/gpt-4o.
    **kwargs: Additional arguments passed to OpenAI LLM.
    """
```
Docstring default model is incorrect.
Says default is openai/gpt-4o, but the code sets "openrouter/andromeda-alpha".
Apply:
```diff
-    model: Model to use. Defaults to openai/gpt-4o.
+    model: Model to use. Defaults to openrouter/andromeda-alpha.
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
 api_key: OpenRouter API key. If not provided, uses OPENROUTER_API_KEY env var.
 base_url: OpenRouter API base URL.
-model: Model to use. Defaults to openai/gpt-4o.
+model: Model to use. Defaults to openrouter/andromeda-alpha.
 **kwargs: Additional arguments passed to OpenAI LLM.
 """
```
🤖 Prompt for AI Agents
In plugins/openrouter/vision_agents/plugins/openrouter/openrouter_llm.py around
lines 29–33 the docstring claims the default model is "openai/gpt-4o" but the
code sets the default to "openrouter/andromeda-alpha"; update the docstring to
reflect the actual default model ("openrouter/andromeda-alpha") or, if the
intended default is "openai/gpt-4o", change the default in the code to that
value; ensure the docstring and any examples/tests remain consistent with the
chosen default.
Actionable comments posted: 0
🧹 Nitpick comments (8)
plugins/openrouter/tests/test_openrouter_llm.py (8)
14-16: Mark module for asyncio to avoid runner config coupling

If pytest-asyncio isn't set to auto in this repo, these async tests will error. Add a module-level marker for safety.

```diff
 load_dotenv()
-
+# Run all tests in this module with an event loop
+pytestmark = pytest.mark.asyncio
```

Please confirm whether pytest-asyncio is configured in auto mode; if yes, this change is optional.
36-59: Avoid testing private API directly (or clearly mark it as such)

`LLM._normalize_message` is private; refactors can break this test. Prefer exercising normalization via public surfaces (e.g., `add_conversation_history` through `create_response` with `stream=False` and mocking network), or explicitly name tests to indicate internal coupling.

Example options (see the sketch after this list):

- Use the public path: call `llm.add_conversation_history({"input": ...})` on a test double conversation.
- Or rename tests to `test__normalize_message__basic`/`with_image` to acknowledge internals.
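For the first option, a test along these lines would exercise the same normalization through the public helper; the fixture wiring and message shape are assumptions based on the fixture shown later in this review:

```python
async def test_history_via_public_helper(self, llm: LLM):
    # Go through add_conversation_history instead of the private _normalize_message.
    kwargs = {"input": "remember that my favorite color is blue"}
    llm.add_conversation_history(kwargs)
    # The helper should leave a usable input and record the turn in the
    # conversation seeded by the fixture (InMemoryConversation).
    assert kwargs["input"]
    assert len(llm._conversation.messages) >= 1
```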
80-88: Force non-streaming to stabilize native API assertions

`create_response` defaults to `stream=True`; asserting `response.original.id` is more reliable with non-streaming.

```diff
-        response = await llm.create_response(
-            input="say hi", instructions="You are a helpful assistant."
-        )
+        response = await llm.create_response(
+            input="say hi",
+            instructions="You are a helpful assistant.",
+            stream=False,
+        )
```

Please confirm OpenRouter returns an OpenAI-compatible id for non-streaming responses in your client version.
90-107: Tighten streaming assertion and use snake_case

Use PEP8 naming and assert at least one non-empty delta chunk to prove streaming occurred.

```diff
-        streamingWorks = False
+        streaming_works = False
+        deltas = 0

         @llm.events.subscribe
         async def passed(event: LLMResponseChunkEvent):
-            nonlocal streamingWorks
-            streamingWorks = True
+            nonlocal streaming_works, deltas
+            if event.delta:
+                deltas += 1
+            streaming_works = True
@@
-        assert streamingWorks, "Streaming should have generated chunk events"
+        assert streaming_works, "Streaming should have generated chunk events"
+        assert deltas > 0, "Expected at least one non-empty text delta"
```
109-121: Reduce flakiness in memory tests that depend on model behavior

Live LLM answers vary. Consider marking these as flaky (rerun) or xfail when model confidence drops, or strengthen prompts to elicit numeric-only answers.
Options:
- If pytest-rerunfailures is available:
@pytest.mark.flaky(reruns=2, reruns_delay=1)
- Or:
pytest.xfail("Memory reasoning can be non-deterministic across models")

Confirm whether pytest-rerunfailures is part of dev dependencies; if not, I can open a PR to add it.
Also applies to: 124-136
139-156: Optional: Convert permanent skip to xfail with condition

Since instruction-following is intermittently successful, consider `xfail` with a condition rather than an unconditional skip, to still exercise the path locally when the model cooperates.

```diff
-        pytest.skip("instruction following doesnt always work")
+        pytest.xfail("Instruction following is not stable across providers/models")
```
44-59: Add a true multimodal integration test to cover the Qwen VL objective

The normalization test builds an image message but doesn't hit the API. To align with AI-194 ("Qwen VL demo"), add an integration test that sends text+image to a VL-capable model when OPENROUTER_MODEL points to Qwen VL.
Example (guarded by model capability):
```python
@pytest.mark.integration
async def test_multimodal_integration(self, llm: LLM):
    if "qwen" not in llm.model and "vl" not in llm.model.lower():
        pytest.skip("Model not VL-capable")
    img_url = "https://upload.wikimedia.org/wikipedia/commons/d/d5/2023_06_08_Raccoon1.jpg"
    resp = await llm.create_response(
        input=[
            {"type": "input_text", "text": "What animal is this?"},
            {"type": "input_image", "image_url": img_url},
        ],
        stream=False,
    )
    self.assert_response_successful(resp)
```
61-68: Make model configurable; fix fixture docstring mismatch

Docstring still references the wrong model and the fixture hardcodes an invalid ID. Read `OPENROUTER_MODEL` with a valid default and update the docstring:

```diff
     async def llm(self) -> LLM:
-        """Fixture for OpenRouter LLM with z-ai/glm-4.6 model."""
+        """Fixture for OpenRouter LLM.
+
+        Returns:
+            LLM: Configured OpenRouter LLM instance. Uses OPENROUTER_MODEL if set.
+        """
         if not os.environ.get("OPENROUTER_API_KEY"):
             pytest.skip("OPENROUTER_API_KEY environment variable not set")
-
-        llm = LLM(model="anthropic/claude-haiku-4.5")
+        model = os.getenv("OPENROUTER_MODEL", "anthropic/claude-3.5-sonnet")
+        llm = LLM(model=model)
         llm._conversation = InMemoryConversation("be friendly", [])
         return llm
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (2)
- plugins/openrouter/README.md (1 hunks)
- plugins/openrouter/tests/test_openrouter_llm.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- plugins/openrouter/README.md
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (.cursor/rules/python.mdc)
**/*.py: Do not modify sys.path in Python code
Docstrings must follow the Google style guide
Files:
plugins/openrouter/tests/test_openrouter_llm.py
🧬 Code graph analysis (1)
plugins/openrouter/tests/test_openrouter_llm.py (5)
- agents-core/vision_agents/core/agents/conversation.py (1)
  - InMemoryConversation (230-237)
- agents-core/vision_agents/core/llm/events.py (1)
  - LLMResponseChunkEvent (90-105)
- agents-core/vision_agents/core/llm/llm.py (2)
  - LLM (36-363)
  - _set_instructions (164-168)
- plugins/openai/vision_agents/plugins/openai/openai_llm.py (3)
  - _normalize_message (351-369)
  - simple_response (80-108)
  - create_response (118-214)
- agents-core/vision_agents/core/events/manager.py (1)
  - wait (470-484)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: unit / Test "not integration"
- GitHub Check: unit / Ruff & mypy
- GitHub Check: unit / Test "not integration"
- GitHub Check: unit / Ruff & mypy
OpenRouter support. Uses the OpenAI SDK but disables the use of OpenAI's conversation storage.
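In practice that looks roughly like the following; this is a sketch, with the model name chosen for illustration and `simple_response` being the helper referenced in the tests above:

```python
import asyncio

from vision_agents.plugins import openrouter


async def main() -> None:
    # The plugin reuses the OpenAI SDK against OpenRouter's OpenAI-compatible endpoint,
    # keeping conversation history locally instead of in OpenAI's conversation storage.
    llm = openrouter.LLM(model="anthropic/claude-haiku-4.5")
    response = await llm.simple_response("Say hi in one short sentence.")
    print(response)


if __name__ == "__main__":
    asyncio.run(main())
```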
Summary by CodeRabbit
New Features
Documentation
Tests
Chores