
Conversation

@nk-ag nk-ag commented Aug 16, 2025

No description provided.

coderabbitai bot commented Aug 16, 2025

Note

Other AI code review bot(s) detected

CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Warning

Rate limit exceeded

@nk-ag has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 20 minutes and 22 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

📥 Commits

Reviewing files that changed from the base of the PR and between f23a7fa and 2d3f7fb.

📒 Files selected for processing (2)
  • .github/workflows/test-state-manager.yml (1 hunks)
  • state-manager/tests/unit/controller/test_trigger_graph.py (1 hunks)
📝 Walkthrough

Summary by CodeRabbit

  • New Features
    • Added graph triggering endpoint to start executions and return a run_id.
    • Introduced run_id across states and responses for grouping executions.
    • New listing endpoints: current states, states by run_id, registered nodes, and graph templates.
  • Documentation
    • Added tests README with structure, guidelines, and run instructions.
  • Tests
    • Extensive unit tests for controllers and a full workflow integration test.
  • Chores
    • Added testing/config tooling (PyTest config, pytest-asyncio, pytest-cov) and httpx dependency.
    • Minor editor settings update.

Walkthrough

Adds run_id propagation and a trigger_graph API; replaces bulk DB updates with in-memory mutation + await state.save() for executed/errored flows; introduces listing controllers/routes and state-list response models; adds pytest config, many unit tests and an integration test, plus test/runtime dependencies.

Changes

Cohort / File(s) Summary
Controllers — creation & trigger
state-manager/app/controller/create_states.py, state-manager/app/routes.py
New trigger_graph API that generates a run_id, builds a CreateRequestModel with run_id and calls create_states; create_states now attaches run_id to created State records and responses; route POST /graph/{graph_name}/trigger added.
Controllers — execution & errored
state-manager/app/controller/executed_state.py, state-manager/app/controller/errored_state.py
Switched from bulk DB .set(...) updates to in-memory mutation of State objects followed by await state.save(); executed_state creates child State records for additional outputs and propagates run_id; state_id types use PydanticObjectId.
Controllers — listings & queries
state-manager/app/controller/get_current_states.py, state-manager/app/controller/get_states_by_run_id.py, state-manager/app/controller/list_graph_templates.py, state-manager/app/controller/list_registered_nodes.py
New async controllers to list current states, states by run_id, graph templates, and registered nodes; they perform Beanie queries and structured logging.
Tasks
state-manager/app/tasks/create_next_state.py
Propagates run_id=state.run_id when constructing next State objects.
Models — DB & API
state-manager/app/models/db/state.py, state-manager/app/models/create_models.py, state-manager/app/models/list_models.py, state-manager/app/models/state_list_models.py
Added required run_id: str on DB State and request/response state models; introduced TriggerGraphRequestModel/TriggerGraphResponseModel; added list response models and state-list view models.
Routes & serialization
state-manager/app/routes.py
Added routes: POST /graph/{graph_name}/trigger, GET /nodes/, GET /graphs/, GET /states/, GET /states/run/{run_id}; handlers serialize states (stringify parents/ids) and pass PydanticObjectId for state_id endpoints.
Testing — unit & integration
state-manager/tests/unit/controller/*, state-manager/tests/integration/peinding_test_full_workflow_integration.py, state-manager/tests/README.md, state-manager/tests/**/__init__.py
Added many unit tests for controllers (create/enqueue/executed/errored/get_*/upsert/trigger), an integration full-workflow test, tests README, and test package initializers.
Tooling & config
state-manager/pyproject.toml, state-manager/pytest.ini, .vscode/settings.json
Added dependencies (httpx, pytest, pytest-asyncio, pytest-cov), pytest.ini for discovery/markers/asyncio_mode, and VSCode pytest settings.

Sequence Diagram(s)

```mermaid
sequenceDiagram
  participant Client
  participant Routes
  participant Controller as trigger_graph
  participant Creator as create_states
  participant DB as State DB

  Client->>Routes: POST /v0/namespace/{ns}/graph/{graph}/trigger
  Routes->>Controller: trigger_graph(ns, graph, TriggerGraphRequest)
  Controller->>Controller: generate run_id (UUID)
  Controller->>Creator: create_states(ns, graph, CreateRequest(run_id, states))
  Creator->>DB: insert initial State docs (with run_id)
  DB-->>Creator: inserted states
  Creator-->>Controller: CreateResponse(states, status)
  Controller-->>Routes: TriggerGraphResponse(run_id, status, states)
  Routes-->>Client: 200 OK
```
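The trigger flow above can be sketched in plain Python. This is a simplified stand-in rather than the service's code: the real endpoint runs under FastAPI with Beanie-backed persistence, and only `trigger_graph`, `create_states`, and `run_id` are names taken from this PR; the dataclass and dict shapes are illustrative.

```python
import uuid
from dataclasses import dataclass, field

@dataclass
class RequestState:
    # Illustrative shape for one requested state (identifier + inputs).
    identifier: str
    inputs: dict = field(default_factory=dict)

def create_states(namespace: str, graph_name: str, run_id: str,
                  states: list[RequestState]) -> list[dict]:
    # Stand-in for the create_states controller: every created state
    # carries the run_id it was triggered under.
    return [{"identifier": s.identifier, "run_id": run_id, "status": "CREATED"}
            for s in states]

def trigger_graph(namespace: str, graph_name: str,
                  states: list[RequestState]) -> dict:
    # One run_id (UUID) per trigger, shared by all states of this execution.
    run_id = str(uuid.uuid4())
    created = create_states(namespace, graph_name, run_id, states)
    return {"run_id": run_id, "status": "created", "states": created}
```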
```mermaid
sequenceDiagram
  participant Client
  participant Routes
  participant Exec as executed_state
  participant Task as create_next_state
  participant DB as State DB

  Client->>Routes: POST /states/{state_id}/executed
  Routes->>Exec: executed_state(state_id, outputs)
  Exec->>DB: State.find_one(state_id)
  DB-->>Exec: state
  Exec->>Exec: mutate state.status/outputs/parents
  Exec->>DB: await state.save()
  alt multiple outputs
    Exec->>DB: create & save child State(s) with run_id
    Exec->>Task: schedule create_next_state(child)
  end
  Exec->>Task: schedule create_next_state(updated state)
  Exec-->>Routes: ExecutedResponseModel
  Routes-->>Client: 200 OK
```
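The save pattern in this flow — load the document, mutate it in memory, then persist once with `await state.save()` — can be reduced to a minimal async sketch. The `State` class here is a fake document used only for illustration (a real Beanie `save()` writes back to MongoDB); the field names mirror the PR's summary.

```python
import asyncio

class State:
    # Fake document standing in for the Beanie State model.
    def __init__(self, id: str, identifier: str, run_id: str):
        self.id = id
        self.identifier = identifier
        self.run_id = run_id
        self.status = "QUEUED"
        self.outputs: dict = {}
        self.parents: dict = {}
        self.save_calls = 0  # lets us check save() ran exactly once

    async def save(self) -> None:
        # A real Beanie document would write itself back to MongoDB here.
        self.save_calls += 1

async def executed_state(state: State, outputs: dict) -> State:
    # Mutate the loaded object in memory, then persist with one save(),
    # instead of issuing a bulk .set(...) update against the collection.
    state.status = "EXECUTED"
    state.outputs = outputs
    state.parents = {**state.parents, state.identifier: state.id}
    await state.save()
    return state

state = asyncio.run(executed_state(State("abc123", "node1", "run-1"), {"out": "1"}))
```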

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs

Suggested reviewers

  • NiveditJain

Poem

I nibble run_ids, fresh and bright,
I save my states into the night.
I stitch parents, spawn the next in line,
With tiny hops, the graphs align.
Carrot cheers — a new run's fine! 🥕🐇

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

🪧 Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>, please review it.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.

Support

Need help? Create a ticket on our support page for assistance with any issues or questions.

CodeRabbit Commands (Invoked using PR/Issue comments)

Type @coderabbitai help to get the list of available commands.

Other keywords and placeholders

  • Add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.
  • Add @coderabbitai summary to generate the high-level summary at a specific location in the PR description.
  • Add @coderabbitai anywhere in the PR title to generate the title automatically.

CodeRabbit Configuration File (.coderabbit.yaml)

  • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository.
  • Please see the configuration documentation for more information.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json

Status, Documentation and Community

  • Visit our Status Page to check the current availability of CodeRabbit.
  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.


@gemini-code-assist gemini-code-assist bot left a comment


Summary of Changes

Hello @nk-ag, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the state-manager service by introducing a comprehensive suite of unit and integration tests. The primary goal is to ensure the robustness, correctness, and reliability of state transitions and data handling within the manager. Alongside the extensive testing efforts, several critical bug fixes and improvements have been implemented to address existing issues and refine the state management workflow.

Highlights

  • Introduced Extensive Testing Framework: The project now includes a robust testing setup with pytest, pytest-asyncio, and pytest-cov, configured via pytest.ini and documented in a new tests/README.md.
  • Comprehensive Unit Tests: New unit tests have been added for all major controller functions (create_states, enqueue_states, executed_state, errored_state, get_graph_template, get_secrets, upsert_graph_template), significantly improving code reliability and maintainability.
  • Full Workflow Integration Tests: A new integration test (test_full_workflow_integration.py) validates the end-to-end process of node registration, graph template creation, state creation, enqueuing, and execution, ensuring system-level functionality and correct interaction between components.
  • Bug Fix in executed_state Controller: An issue in the executed_state function was fixed where the create_next_state background task was not receiving the most up-to-date state object, which could lead to incorrect subsequent state processing.
  • Improved create_next_state Task Logic: The create_next_state task now gracefully handles scenarios where a node template has no subsequent nodes by marking the state as SUCCESS instead of raising an exception. Additionally, the logic for propagating parent states during new state creation has been corrected.
  • Pydantic Model Configuration Update: The State model now includes ConfigDict(arbitrary_types_allowed=True) to correctly handle ObjectId types from MongoDB, resolving potential Pydantic validation errors.
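On the last highlight: `ConfigDict(arbitrary_types_allowed=True)` is the Pydantic v2 switch that lets a model carry fields of types Pydantic has no schema for (such as bson's ObjectId). A minimal sketch, using a stand-in class instead of the real ObjectId so it runs without bson installed:

```python
from pydantic import BaseModel, ConfigDict

class FakeObjectId:
    # Stand-in for bson.ObjectId: a plain class Pydantic cannot
    # generate a schema for on its own.
    def __init__(self, value: str):
        self.value = value

class StateView(BaseModel):
    # Without arbitrary_types_allowed=True, declaring a field of an
    # unknown type raises a schema-generation error at class definition.
    model_config = ConfigDict(arbitrary_types_allowed=True)
    parent_id: FakeObjectId

doc = StateView(parent_id=FakeObjectId("64f0c0ffee"))
```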
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e., a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a comprehensive set of unit and integration tests for the state manager, which is a significant improvement for the project's reliability. Along with the tests, several important bug fixes have been implemented, particularly in the state transition logic within create_next_state and executed_state. My review focuses on improving the efficiency of some database operations, enhancing the structure and clarity of the newly added tests to follow best practices, and correcting some issues within the test mocks and assertions to ensure they are robust and accurate.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 38

🔭 Outside diff range comments (6)
state-manager/tests/unit/controller/test_create_states.py (1)

236-298: Multiple-states test is on-point; assert payload size to insert_many is great

Verifying insert_many receives two items is valuable. Optional: also assert that State.find().to_list() is called once to read back the inserted documents.

state-manager/app/controller/executed_state.py (1)

24-31: Inconsistency: zero-outputs branch still passes stale state; refresh for parity

The zero-outputs branch schedules create_next_state with the pre-update in-memory state. For consistency (and to avoid subtle bugs resolving parents/outputs), refresh the state after the update as you do in the non-empty outputs branch.

Apply:

```diff
@@
-        if len(body.outputs) == 0:
-            await State.find_one(State.id == state_id).set(
-                {"status": StateStatusEnum.EXECUTED, "outputs": {}, "parents": {**state.parents, state.identifier: ObjectId(state.id)}}
-            )
-
-            background_tasks.add_task(create_next_state, state)
+        if len(body.outputs) == 0:
+            await State.find_one(State.id == state_id).set(
+                {"status": StateStatusEnum.EXECUTED, "outputs": {}, "parents": {**state.parents, state.identifier: ObjectId(state.id)}}
+            )
+            refreshed_state = await State.find_one(State.id == state_id)
+            background_tasks.add_task(create_next_state, refreshed_state)
```
state-manager/app/tasks/create_next_state.py (4)

61-69: Non-string inputs and dead condition: this will raise at runtime for numbers/objects

  • temporary_input may be non-string (e.g., JSON schema type number/object). Calling split on non-strings will raise AttributeError.
  • len(splits) == 0 is never true; str.split always returns at least one element.

Guard non-string inputs and use a more explicit check for placeholders.

Apply this diff to make the handling safe:

```diff
-            for field_name, _ in next_node_input_model.model_fields.items():
-                temporary_input = next_node_template.inputs[field_name]
-                splits = temporary_input.split("${{")
-
-                if len(splits) == 0:
-                    next_node_input_data[field_name] = temporary_input
-                    continue
+            for field_name, _ in next_node_input_model.model_fields.items():
+                if field_name not in next_node_template.inputs:
+                    raise Exception(f"Missing required input '{field_name}' for node '{next_node_template.identifier}'")
+                temporary_input = next_node_template.inputs[field_name]
+                # If it's not a string (e.g., number/object/array), pass through as-is
+                if not isinstance(temporary_input, str):
+                    next_node_input_data[field_name] = temporary_input
+                    continue
+                if "${{" not in temporary_input:
+                    next_node_input_data[field_name] = temporary_input
+                    continue
```

69-103: Placeholder parsing is incompatible with template syntax and will mis-parse '{{node1.output1}}'

  • The parser assumes ${{...}} format and three-part tokens like node.outputs.field.
  • Integration tests (and likely user input) use double braces without “outputs”: "{{node1.output1}}".
  • As written, strings like "{{node1.output1}}" will hit the "}}" branch, produce parts of length 2, and raise "Invalid input placeholder format".

Support both syntaxes or standardize on one. Also replace the manual split logic with a regex-based substitution to handle multiple placeholders and suffix/prefix text robustly.

Proposed refactor for this block using regex; this supports:

  • ${{ nodeId.outputs.field }}
  • {{ nodeId.outputs.field }}
  • {{ nodeId.field }} (legacy, maps to outputs.field)

Note: add `import re` at the top of the file.
```diff
-                constructed_string = ""
-                for split in splits:
-                    if "}}" in split:
-                        placeholder_content = split.split("}}")[0]
-                        parts = [p.strip() for p in placeholder_content.split('.')]
-
-                        if len(parts) != 3 or parts[1] != 'outputs':
-                            raise Exception(f"Invalid input placeholder format: '{placeholder_content}' for field {field_name}")
-
-                        input_identifier = parts[0]
-                        input_field = parts[2]
-
-                        parent_id = parents.get(input_identifier)
-
-                        if not parent_id:
-                            raise Exception(f"Parent identifier '{input_identifier}' not found in state parents.")
-
-                        if parent_id not in cache_states:
-                            dependent_state = await State.get(ObjectId(parent_id))
-                            if not dependent_state:
-                                raise Exception(f"Dependent state {input_identifier} not found")
-                            cache_states[parent_id] = dependent_state
-                        else:
-                            dependent_state = cache_states[parent_id]
-
-                        if input_field not in dependent_state.outputs:
-                            raise Exception(f"Input field {input_field} not found in dependent state {input_identifier}")
-
-                        constructed_string += dependent_state.outputs[input_field] + split.split("}}")[1]
-
-                    else:
-                        constructed_string += split
-
-                next_node_input_data[field_name] = constructed_string
+                pattern = r'\{\{\s*([^{}\s]+)(?:\.outputs)?\.([^\s{}]+)\s*\}\}|\$\{\{\s*([^{}\s]+)\.outputs\.([^\s{}]+)\s*\}\}'
+
+                async def _replace_placeholder(m):
+                    # Support both {{id.field}} and {{id.outputs.field}} and ${{id.outputs.field}}
+                    input_identifier = m.group(1) or m.group(3)
+                    input_field = m.group(2) or m.group(4)
+
+                    parent_id = parents.get(input_identifier)
+                    if not parent_id:
+                        raise Exception(f"Parent identifier '{input_identifier}' not found in state parents.")
+
+                    # Fetch and cache the dependent state
+                    if parent_id not in cache_states:
+                        dependent_state = await State.get(parent_id)
+                        if not dependent_state:
+                            raise Exception(f"Dependent state {input_identifier} not found")
+                        cache_states[parent_id] = dependent_state
+                    else:
+                        dependent_state = cache_states[parent_id]
+
+                    if input_field not in dependent_state.outputs:
+                        raise Exception(f"Input field '{input_field}' not found in dependent state '{input_identifier}'")
+
+                    return str(dependent_state.outputs[input_field])
+
+                # Perform async placeholder replacement
+                result = temporary_input
+                for m in list(re.finditer(pattern, temporary_input)):
+                    replacement = await _replace_placeholder(m)
+                    result = result.replace(m.group(0), replacement)
+                next_node_input_data[field_name] = result
```

Also add at the top of the file:

```python
import re
```
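The suggested regex substitution can be exercised in isolation. The sketch below is synchronous and resolves placeholders against a plain dict rather than awaited State lookups; it follows the two brace syntaxes discussed above, but `resolve_placeholders` and `parent_outputs` are illustrative names, not code from the PR.

```python
import re

# Matches {{ id.field }}, {{ id.outputs.field }}, and ${{ id.outputs.field }}.
PATTERN = re.compile(
    r"\{\{\s*([^{}\s.]+)(?:\.outputs)?\.([^\s{}]+)\s*\}\}"
    r"|\$\{\{\s*([^{}\s.]+)\.outputs\.([^\s{}]+)\s*\}\}"
)

def resolve_placeholders(template: str, parent_outputs: dict) -> str:
    def replace(m: re.Match) -> str:
        node = m.group(1) or m.group(3)
        out_field = m.group(2) or m.group(4)
        if node not in parent_outputs:
            raise KeyError(f"Parent '{node}' not found")
        if out_field not in parent_outputs[node]:
            raise KeyError(f"Output '{out_field}' not found on '{node}'")
        # Cast to str so numeric outputs concatenate safely.
        return str(parent_outputs[node][out_field])
    return PATTERN.sub(replace, template)
```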

81-89: Avoid double-wrapping ObjectId and prefer a consistent retrieval method

parent_id is already an ObjectId (you constructed one in parents). Wrapping again with ObjectId(parent_id) can raise a TypeError. Use the id as-is.

Apply this diff:

```diff
-                        if parent_id not in cache_states:
-                            dependent_state = await State.get(ObjectId(parent_id))
+                        if parent_id not in cache_states:
+                            dependent_state = await State.get(parent_id)
```

94-97: Type safety: concatenating non-string outputs will raise

dependent_state.outputs[input_field] can be a number or other type. Ensure conversion to string before concatenation.

Apply this diff:

```diff
-                        constructed_string += dependent_state.outputs[input_field] + split.split("}}")[1]
+                        constructed_string += str(dependent_state.outputs[input_field]) + split.split("}}")[1]
```

Alternatively, with the regex approach in the previous comment, all replacements are already cast via str().

📜 Review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 10b8357 and 1c58eb8.

⛔ Files ignored due to path filters (1)
  • state-manager/uv.lock is excluded by !**/*.lock
📒 Files selected for processing (18)
  • .vscode/settings.json (1 hunks)
  • state-manager/app/controller/executed_state.py (1 hunks)
  • state-manager/app/models/db/state.py (1 hunks)
  • state-manager/app/tasks/create_next_state.py (3 hunks)
  • state-manager/pyproject.toml (2 hunks)
  • state-manager/pytest.ini (1 hunks)
  • state-manager/tests/README.md (1 hunks)
  • state-manager/tests/__init__.py (1 hunks)
  • state-manager/tests/integration/test_full_workflow_integration.py (1 hunks)
  • state-manager/tests/unit/__init__.py (1 hunks)
  • state-manager/tests/unit/controller/__init__.py (1 hunks)
  • state-manager/tests/unit/controller/test_create_states.py (1 hunks)
  • state-manager/tests/unit/controller/test_enqueue_states.py (1 hunks)
  • state-manager/tests/unit/controller/test_errored_state.py (1 hunks)
  • state-manager/tests/unit/controller/test_executed_state.py (1 hunks)
  • state-manager/tests/unit/controller/test_get_graph_template.py (1 hunks)
  • state-manager/tests/unit/controller/test_get_secrets.py (1 hunks)
  • state-manager/tests/unit/controller/test_upsert_graph_template.py (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (11)
state-manager/app/controller/executed_state.py (3)
state-manager/app/models/db/state.py (1)
  • State (8-19)
state-manager/app/tasks/create_next_state.py (1)
  • create_next_state (14-126)
state-manager/app/routes.py (1)
  • executed_state_route (84-94)
state-manager/tests/unit/controller/test_upsert_graph_template.py (6)
state-manager/app/models/graph_models.py (2)
  • UpsertGraphTemplateRequest (8-10)
  • UpsertGraphTemplateResponse (13-19)
state-manager/app/models/graph_template_validation_status.py (1)
  • GraphTemplateValidationStatus (4-8)
state-manager/app/models/node_template_model.py (1)
  • NodeTemplate (5-10)
state-manager/app/models/db/graph_template_model.py (1)
  • set_secrets (67-69)
state-manager/app/controller/upsert_graph_template.py (1)
  • upsert_graph_template (12-62)
state-manager/app/tasks/verify_graph.py (1)
  • verify_graph (222-250)
state-manager/tests/unit/controller/test_get_secrets.py (4)
state-manager/app/models/secrets_response.py (1)
  • SecretsResponseModel (5-6)
state-manager/app/controller/get_secrets.py (1)
  • get_secrets (10-56)
state-manager/app/routes.py (1)
  • get_secrets_route (181-191)
state-manager/app/models/db/graph_template_model.py (2)
  • validate_secrets (39-49)
  • set_secrets (67-69)
state-manager/tests/unit/controller/test_errored_state.py (4)
state-manager/app/controller/errored_state.py (1)
  • errored_state (11-34)
state-manager/app/models/errored_models.py (1)
  • ErroredRequestModel (5-6)
state-manager/app/models/state_status_enum.py (1)
  • StateStatusEnum (4-14)
state-manager/app/routes.py (1)
  • errored_state_route (104-114)
state-manager/tests/unit/controller/test_get_graph_template.py (5)
state-manager/app/models/graph_template_validation_status.py (1)
  • GraphTemplateValidationStatus (4-8)
state-manager/app/models/node_template_model.py (1)
  • NodeTemplate (5-10)
state-manager/app/models/db/graph_template_model.py (1)
  • GraphTemplate (13-81)
state-manager/app/models/graph_models.py (1)
  • UpsertGraphTemplateResponse (13-19)
state-manager/app/controller/get_graph_template.py (1)
  • get_graph_template (9-48)
state-manager/tests/unit/controller/test_create_states.py (6)
state-manager/app/controller/create_states.py (2)
  • create_states (23-64)
  • get_node_template (16-20)
state-manager/app/models/create_models.py (2)
  • CreateRequestModel (21-22)
  • RequestStateModel (7-9)
state-manager/app/models/state_status_enum.py (1)
  • StateStatusEnum (4-14)
state-manager/app/models/db/graph_template_model.py (2)
  • GraphTemplate (13-81)
  • get_node_by_identifier (30-35)
state-manager/app/models/node_template_model.py (1)
  • NodeTemplate (5-10)
state-manager/app/models/graph_template_validation_status.py (1)
  • GraphTemplateValidationStatus (4-8)
state-manager/tests/unit/controller/test_enqueue_states.py (5)
state-manager/app/controller/enqueue_states.py (1)
  • enqueue_states (13-54)
state-manager/app/models/enqueue_request.py (1)
  • EnqueueRequestModel (4-6)
state-manager/app/models/state_status_enum.py (1)
  • StateStatusEnum (4-14)
state-manager/app/routes.py (1)
  • enqueue_state (44-54)
state-manager/app/models/enqueue_response.py (1)
  • EnqueueResponseModel (14-19)
state-manager/tests/integration/test_full_workflow_integration.py (7)
state-manager/app/models/register_nodes_request.py (2)
  • RegisterNodesRequestModel (12-14)
  • NodeRegistrationModel (5-9)
state-manager/app/models/graph_models.py (1)
  • UpsertGraphTemplateRequest (8-10)
state-manager/app/models/node_template_model.py (1)
  • NodeTemplate (5-10)
state-manager/app/models/create_models.py (2)
  • CreateRequestModel (21-22)
  • RequestStateModel (7-9)
state-manager/app/models/executed_models.py (1)
  • ExecutedRequestModel (5-6)
state-manager/app/models/enqueue_request.py (1)
  • EnqueueRequestModel (4-6)
state-manager/app/models/state_status_enum.py (1)
  • StateStatusEnum (4-14)
state-manager/tests/unit/controller/test_executed_state.py (3)
state-manager/app/controller/executed_state.py (1)
  • executed_state (12-61)
state-manager/app/models/executed_models.py (1)
  • ExecutedRequestModel (5-6)
state-manager/app/models/state_status_enum.py (1)
  • StateStatusEnum (4-14)
state-manager/app/models/db/state.py (2)
state-manager/app/models/db/base.py (1)
  • BaseDatabaseModel (7-15)
state-manager/app/models/enqueue_response.py (1)
  • StateModel (6-11)
state-manager/app/tasks/create_next_state.py (1)
state-manager/app/models/state_status_enum.py (1)
  • StateStatusEnum (4-14)
🪛 LanguageTool
state-manager/tests/README.md

[grammar] ~1-~1: Use correct spacing
Context: # State Manager Tests This directory contains comprehensive un...

(QB_NEW_EN_OTHER_ERROR_IDS_5)


[grammar] ~3-~3: Use correct spacing
Context: ...tests for the state-manager application. ## Test Structure ``` tests/ ├── unit/ │ ...

(QB_NEW_EN_OTHER_ERROR_IDS_5)


[grammar] ~5-~5: Use correct spacing
Context: ...-manager application. ## Test Structure tests/ ├── unit/ │ └── controller/ │ ├── test_create_states.py │ ├── test_enqueue_states.py │ ├── test_executed_state.py │ ├── test_errored_state.py │ ├── test_get_graph_template.py │ ├── test_get_secrets.py │ ├── test_register_nodes.py │ └── test_upsert_graph_template.py └── README.md ## Test Coverage The unit tests cover all ...

(QB_NEW_EN_OTHER_ERROR_IDS_5)


[grammar] ~22-~22: Use correct spacing
Context: ...e.py └── README.md ``` ## Test Coverage The unit tests cover all controller func...

(QB_NEW_EN_OTHER_ERROR_IDS_5)


[grammar] ~24-~24: Use hyphens correctly
Context: ...er all controller functions in the state-manager: ### 1. create_states.py - ✅ ...

(QB_NEW_EN_OTHER_ERROR_IDS_29)


[grammar] ~24-~24: Use correct spacing
Context: ...ntroller functions in the state-manager: ### 1. create_states.py - ✅ Successful sta...

(QB_NEW_EN_OTHER_ERROR_IDS_5)


[grammar] ~26-~26: There might be a mistake here.
Context: ...functions in the state-manager: ### 1. create_states.py - ✅ Successful state creation - ✅ Graph te...

(QB_NEW_EN)


[grammar] ~27-~27: There might be a mistake here.
Context: ...states.py` - ✅ Successful state creation - ✅ Graph template not found scenarios - ✅...

(QB_NEW_EN)


[grammar] ~28-~28: There might be a mistake here.
Context: ...n - ✅ Graph template not found scenarios - ✅ Node template not found scenarios - ✅ ...

(QB_NEW_EN)


[grammar] ~29-~29: There might be a mistake here.
Context: ...os - ✅ Node template not found scenarios - ✅ Database error handling - ✅ Multiple s...

(QB_NEW_EN)


[grammar] ~30-~30: There might be a mistake here.
Context: ...nd scenarios - ✅ Database error handling - ✅ Multiple states creation ### 2. `enqu...

(QB_NEW_EN)


[grammar] ~31-~31: Use correct spacing
Context: ...or handling - ✅ Multiple states creation ### 2. enqueue_states.py - ✅ Successful st...

(QB_NEW_EN_OTHER_ERROR_IDS_5)


[grammar] ~33-~33: There might be a mistake here.
Context: ...ng - ✅ Multiple states creation ### 2. enqueue_states.py - ✅ Successful state enqueuing - ✅ No stat...

(QB_NEW_EN)


[grammar] ~34-~34: There might be a mistake here.
Context: ...tates.py` - ✅ Successful state enqueuing - ✅ No states found scenarios - ✅ Multiple...

(QB_NEW_EN)


[grammar] ~35-~35: There might be a mistake here.
Context: ... enqueuing - ✅ No states found scenarios - ✅ Multiple states enqueuing - ✅ Database...

(QB_NEW_EN)


[grammar] ~36-~36: There might be a mistake here.
Context: ... scenarios - ✅ Multiple states enqueuing - ✅ Database error handling - ✅ Different ...

(QB_NEW_EN)


[grammar] ~37-~37: There might be a mistake here.
Context: ...es enqueuing - ✅ Database error handling - ✅ Different batch sizes ### 3. `execute...

(QB_NEW_EN)


[grammar] ~38-~38: Use correct spacing
Context: ...error handling - ✅ Different batch sizes ### 3. executed_state.py - ✅ Successful st...

(QB_NEW_EN_OTHER_ERROR_IDS_5)


[grammar] ~40-~40: There might be a mistake here.
Context: ...dling - ✅ Different batch sizes ### 3. executed_state.py - ✅ Successful state execution with single...

(QB_NEW_EN)


[grammar] ~41-~41: There might be a mistake here.
Context: ...ssful state execution with single output - ✅ Multiple outputs handling - ✅ State no...

(QB_NEW_EN)


[grammar] ~42-~42: There might be a mistake here.
Context: ...gle output - ✅ Multiple outputs handling - ✅ State not found scenarios - ✅ Invalid ...

(QB_NEW_EN)


[grammar] ~43-~43: There might be a mistake here.
Context: ...s handling - ✅ State not found scenarios - ✅ Invalid state status scenarios - ✅ Emp...

(QB_NEW_EN)


[grammar] ~44-~44: There might be a mistake here.
Context: ...arios - ✅ Invalid state status scenarios - ✅ Empty outputs handling - ✅ Database er...

(QB_NEW_EN)


[grammar] ~45-~45: There might be a mistake here.
Context: ...tus scenarios - ✅ Empty outputs handling - ✅ Database error handling ### 4. `error...

(QB_NEW_EN)


[grammar] ~46-~46: Use correct spacing
Context: ...uts handling - ✅ Database error handling ### 4. errored_state.py - ✅ Successful err...

(QB_NEW_EN_OTHER_ERROR_IDS_5)


[grammar] ~48-~48: There might be a mistake here.
Context: ...ing - ✅ Database error handling ### 4. errored_state.py - ✅ Successful error marking for queued st...

(QB_NEW_EN)


[grammar] ~49-~49: There might be a mistake here.
Context: ...ccessful error marking for queued states - ✅ Successful error marking for executed ...

(QB_NEW_EN)


[grammar] ~50-~50: There might be a mistake here.
Context: ...essful error marking for executed states - ✅ State not found scenarios - ✅ Invalid ...

(QB_NEW_EN)


[grammar] ~51-~51: There might be a mistake here.
Context: ...ted states - ✅ State not found scenarios - ✅ Invalid state status scenarios - ✅ Dif...

(QB_NEW_EN)


[grammar] ~52-~52: There might be a mistake here.
Context: ...arios - ✅ Invalid state status scenarios - ✅ Different error messages - ✅ Database ...

(QB_NEW_EN)


[grammar] ~53-~53: There might be a mistake here.
Context: ...s scenarios - ✅ Different error messages - ✅ Database error handling ### 5. `get_g...

(QB_NEW_EN)


[grammar] ~54-~54: Use correct spacing
Context: ...ror messages - ✅ Database error handling ### 5. get_graph_template.py - ✅ Successfu...

(QB_NEW_EN_OTHER_ERROR_IDS_5)


[grammar] ~56-~56: There might be a mistake here.
Context: ...ges - ✅ Database error handling ### 5. get_graph_template.py - ✅ Successful template retrieval - ✅ Temp...

(QB_NEW_EN)


[grammar] ~57-~57: There might be a mistake here.
Context: ...te.py` - ✅ Successful template retrieval - ✅ Template not found scenarios - ✅ Valid...

(QB_NEW_EN)


[grammar] ~58-~58: There might be a mistake here.
Context: ...trieval - ✅ Template not found scenarios - ✅ Validation errors handling - ✅ Pending...

(QB_NEW_EN)


[grammar] ~59-~59: There might be a mistake here.
Context: ...scenarios - ✅ Validation errors handling - ✅ Pending validation scenarios - ✅ Empty...

(QB_NEW_EN)


[grammar] ~60-~60: There might be a mistake here.
Context: ...andling - ✅ Pending validation scenarios - ✅ Empty nodes handling - ✅ Complex secre...

(QB_NEW_EN)


[grammar] ~61-~61: There might be a mistake here.
Context: ...ation scenarios - ✅ Empty nodes handling - ✅ Complex secrets structure - ✅ Database...

(QB_NEW_EN)


[grammar] ~62-~62: There might be a mistake here.
Context: ...s handling - ✅ Complex secrets structure - ✅ Database error handling ### 6. `get_s...

(QB_NEW_EN)


[grammar] ~63-~63: Use correct spacing
Context: ...ts structure - ✅ Database error handling ### 6. get_secrets.py - ✅ Successful secre...

(QB_NEW_EN_OTHER_ERROR_IDS_5)


[grammar] ~65-~65: There might be a mistake here.
Context: ...ure - ✅ Database error handling ### 6. get_secrets.py - ✅ Successful secrets retrieval - ✅ State...

(QB_NEW_EN)


[grammar] ~66-~66: There might be a mistake here.
Context: ...ets.py` - ✅ Successful secrets retrieval - ✅ State not found scenarios - ✅ Namespac...

(QB_NEW_EN)


[grammar] ~67-~67: There might be a mistake here.
Context: ... retrieval - ✅ State not found scenarios - ✅ Namespace mismatch scenarios - ✅ Graph...

(QB_NEW_EN)


[grammar] ~68-~68: There might be a mistake here.
Context: ...enarios - ✅ Namespace mismatch scenarios - ✅ Graph template not found scenarios - ✅...

(QB_NEW_EN)


[grammar] ~69-~69: There might be a mistake here.
Context: ...s - ✅ Graph template not found scenarios - ✅ Empty secrets handling - ✅ Complex sec...

(QB_NEW_EN)


[grammar] ~70-~70: There might be a mistake here.
Context: ...und scenarios - ✅ Empty secrets handling - ✅ Complex secrets structure - ✅ Nested s...

(QB_NEW_EN)


[grammar] ~71-~71: There might be a mistake here.
Context: ...s handling - ✅ Complex secrets structure - ✅ Nested secrets handling - ✅ Database e...

(QB_NEW_EN)


[grammar] ~72-~72: There might be a mistake here.
Context: ...ts structure - ✅ Nested secrets handling - ✅ Database error handling ### 7. `regis...

(QB_NEW_EN)


[grammar] ~73-~73: Use correct spacing
Context: ...ets handling - ✅ Database error handling ### 7. register_nodes.py - ✅ New node regi...

(QB_NEW_EN_OTHER_ERROR_IDS_5)


[grammar] ~75-~75: There might be a mistake here.
Context: ...ing - ✅ Database error handling ### 7. register_nodes.py - ✅ New node registration - ✅ Existing nod...

(QB_NEW_EN)


[grammar] ~76-~76: There might be a mistake here.
Context: ...ster_nodes.py` - ✅ New node registration - ✅ Existing node updates - ✅ Multiple nod...

(QB_NEW_EN)


[grammar] ~77-~77: There might be a mistake here.
Context: ...e registration - ✅ Existing node updates - ✅ Multiple nodes registration - ✅ Empty ...

(QB_NEW_EN)


[grammar] ~78-~78: Make sure to use plural and singular nouns correctly
Context: ... - ✅ Existing node updates - ✅ Multiple nodes registration - ✅ Empty secrets handling...

(QB_NEW_EN_OTHER_ERROR_IDS_10)


[grammar] ~78-~78: There might be a mistake here.
Context: ... updates - ✅ Multiple nodes registration - ✅ Empty secrets handling - ✅ Complex sch...

(QB_NEW_EN)


[grammar] ~79-~79: There might be a mistake here.
Context: ... registration - ✅ Empty secrets handling - ✅ Complex schema handling - ✅ Database e...

(QB_NEW_EN)


[grammar] ~80-~80: There might be a mistake here.
Context: ...ets handling - ✅ Complex schema handling - ✅ Database error handling ### 8. `upser...

(QB_NEW_EN)


[grammar] ~81-~81: Use correct spacing
Context: ...ema handling - ✅ Database error handling ### 8. upsert_graph_template.py - ✅ Existi...

(QB_NEW_EN_OTHER_ERROR_IDS_5)


[grammar] ~83-~83: There might be a mistake here.
Context: ...ing - ✅ Database error handling ### 8. upsert_graph_template.py - ✅ Existing template updates - ✅ New temp...

(QB_NEW_EN)


[grammar] ~84-~84: There might be a mistake here.
Context: ...mplate.py` - ✅ Existing template updates - ✅ New template creation - ✅ Empty nodes ...

(QB_NEW_EN)


[grammar] ~85-~85: There might be a mistake here.
Context: ...mplate updates - ✅ New template creation - ✅ Empty nodes handling - ✅ Complex node ...

(QB_NEW_EN)


[grammar] ~86-~86: Make sure to use plural and singular nouns correctly
Context: ...tes - ✅ New template creation - ✅ Empty nodes handling - ✅ Complex node structures - ...

(QB_NEW_EN_OTHER_ERROR_IDS_10)


[grammar] ~86-~86: There might be a mistake here.
Context: ...mplate creation - ✅ Empty nodes handling - ✅ Complex node structures - ✅ Validation...

(QB_NEW_EN)


[grammar] ~87-~87: There might be a mistake here.
Context: ...des handling - ✅ Complex node structures - ✅ Validation errors handling - ✅ Databas...

(QB_NEW_EN)


[grammar] ~88-~88: Make sure to use plural and singular nouns correctly
Context: ... Complex node structures - ✅ Validation errors handling - ✅ Database error handling #...

(QB_NEW_EN_OTHER_ERROR_IDS_10)


[grammar] ~88-~88: There might be a mistake here.
Context: ...tructures - ✅ Validation errors handling - ✅ Database error handling ## Running Te...

(QB_NEW_EN)


[grammar] ~89-~89: Use correct spacing
Context: ...ors handling - ✅ Database error handling ## Running Tests ### Prerequisites Make s...

(QB_NEW_EN_OTHER_ERROR_IDS_5)


[grammar] ~91-~91: Use correct spacing
Context: ...atabase error handling ## Running Tests ### Prerequisites Make sure you have the de...

(QB_NEW_EN_OTHER_ERROR_IDS_5)


[grammar] ~93-~93: Use correct spacing
Context: ...ing ## Running Tests ### Prerequisites Make sure you have the development depen...

(QB_NEW_EN_OTHER_ERROR_IDS_5)


[grammar] ~95-~95: Use correct spacing
Context: ... the development dependencies installed: bash uv sync --group dev ### Run All Tests bash pytest ### R...

(QB_NEW_EN_OTHER_ERROR_IDS_5)


[grammar] ~101-~101: Use correct spacing
Context: ... sync --group dev ### Run All Tests bash pytest ### Run Unit Tests Only bash pytest test...

(QB_NEW_EN_OTHER_ERROR_IDS_5)


[grammar] ~107-~107: Use correct spacing
Context: ...bash pytest ### Run Unit Tests Only bash pytest tests/unit/ ### Run Specific Test File bash pytest t...

(QB_NEW_EN_OTHER_ERROR_IDS_5)


[grammar] ~113-~113: Use correct spacing
Context: ...ts/unit/ ### Run Specific Test File bash pytest tests/unit/controller/test_create_states.py ### Run Tests with Coverage bash pytest ...

(QB_NEW_EN_OTHER_ERROR_IDS_5)


[grammar] ~119-~119: Use correct spacing
Context: ...ates.py ### Run Tests with Coverage bash pytest --cov=app tests/ ### Run Tests with Verbose Output bash p...

(QB_NEW_EN_OTHER_ERROR_IDS_5)


[grammar] ~125-~125: Use correct spacing
Context: .../ ### Run Tests with Verbose Output bash pytest -v ``` ### Run Tests and Generate HTML Coverage Rep...

(QB_NEW_EN_OTHER_ERROR_IDS_5)


[grammar] ~131-~131: Use correct spacing
Context: ... Tests and Generate HTML Coverage Report bash pytest --cov=app --cov-report=html tests/ ## Test Patterns ### Async Testing All con...

(QB_NEW_EN_OTHER_ERROR_IDS_5)


[grammar] ~137-~137: Use correct spacing
Context: ...report=html tests/ ``` ## Test Patterns ### Async Testing All controller functions a...

(QB_NEW_EN_OTHER_ERROR_IDS_5)


[grammar] ~139-~139: There might be a mistake here.
Context: ...``` ## Test Patterns ### Async Testing All controller functions are async, so t...

(QB_NEW_EN)


[grammar] ~140-~140: Use correct spacing
Context: ...st-asyncioand theasync defpattern. ### Mocking Tests useunittest.mock` to moc...

(QB_NEW_EN_OTHER_ERROR_IDS_5)


[grammar] ~142-~142: There might be a mistake here.
Context: ...nd the async def pattern. ### Mocking Tests use unittest.mock to mock: - Dat...

(QB_NEW_EN)


[grammar] ~143-~143: There might be a mistake here.
Context: ...cking Tests use unittest.mock to mock: - Database operations (Beanie ODM) - Exter...

(QB_NEW_EN)


[grammar] ~144-~144: There might be a mistake here.
Context: ...mock: - Database operations (Beanie ODM) - External dependencies - Background tasks...

(QB_NEW_EN)


[grammar] ~145-~145: There might be a mistake here.
Context: ...ons (Beanie ODM) - External dependencies - Background tasks - Logging ### Fixtures...

(QB_NEW_EN)


[grammar] ~146-~146: There might be a mistake here.
Context: ...External dependencies - Background tasks - Logging ### Fixtures Common test fixtur...

(QB_NEW_EN)


[grammar] ~147-~147: Use correct spacing
Context: ...ependencies - Background tasks - Logging ### Fixtures Common test fixtures are define...

(QB_NEW_EN_OTHER_ERROR_IDS_5)


[grammar] ~149-~149: There might be a mistake here.
Context: ...Background tasks - Logging ### Fixtures Common test fixtures are defined for: - ...

(QB_NEW_EN)


[grammar] ~150-~150: There might be a mistake here.
Context: ...es Common test fixtures are defined for: - Mock request IDs - Mock namespaces - Moc...

(QB_NEW_EN)


[grammar] ~151-~151: There might be a mistake here.
Context: ...ures are defined for: - Mock request IDs - Mock namespaces - Mock data structures -...

(QB_NEW_EN)


[grammar] ~152-~152: There might be a mistake here.
Context: ...or: - Mock request IDs - Mock namespaces - Mock data structures - Mock database obj...

(QB_NEW_EN)


[grammar] ~153-~153: There might be a mistake here.
Context: ...- Mock namespaces - Mock data structures - Mock database objects ### Error Handlin...

(QB_NEW_EN)


[grammar] ~154-~154: Use correct spacing
Context: ... data structures - Mock database objects ### Error Handling Tests cover both success ...

(QB_NEW_EN_OTHER_ERROR_IDS_5)


[grammar] ~159-~159: There might be a mistake here.
Context: ...tions (404, 400, etc.) - Database errors - Validation errors - Business logic error...

(QB_NEW_EN)


[grammar] ~160-~160: There might be a mistake here.
Context: ....) - Database errors - Validation errors - Business logic errors ## Adding New Tes...

(QB_NEW_EN)


[grammar] ~161-~161: Use correct spacing
Context: ...alidation errors - Business logic errors ## Adding New Tests When adding new tests:...

(QB_NEW_EN_OTHER_ERROR_IDS_5)


[grammar] ~163-~163: Use correct spacing
Context: ...siness logic errors ## Adding New Tests When adding new tests: 1. Follow the ex...

(QB_NEW_EN_OTHER_ERROR_IDS_5)


[grammar] ~165-~165: Use correct spacing
Context: ...Adding New Tests When adding new tests: 1. Follow the existing naming convention: `...

(QB_NEW_EN_OTHER_ERROR_IDS_5)


[grammar] ~167-~167: There might be a mistake here.
Context: ... Follow the existing naming convention: test_*.py 2. Use descriptive test method names 3. Inc...

(QB_NEW_EN)


[grammar] ~168-~168: There might be a mistake here.
Context: ...py` 2. Use descriptive test method names 3. Include both success and error scenarios...

(QB_NEW_EN_OTHER)


[grammar] ~169-~169: There might be a mistake here.
Context: ...Include both success and error scenarios 4. Mock external dependencies 5. Use fixtur...

(QB_NEW_EN_OTHER)


[grammar] ~170-~170: There might be a mistake here.
Context: ... scenarios 4. Mock external dependencies 5. Use fixtures for common test data 6. Add...

(QB_NEW_EN_OTHER)


[grammar] ~171-~171: There might be a mistake here.
Context: ...ies 5. Use fixtures for common test data 6. Add proper docstrings explaining test pu...

(QB_NEW_EN_OTHER)


[grammar] ~172-~172: There might be a mistake here.
Context: ...roper docstrings explaining test purpose ## Test Quality Standards - Each test shou...

(QB_NEW_EN_OTHER)


[grammar] ~174-~174: Use correct spacing
Context: ... test purpose ## Test Quality Standards - Each test should be independent - Tests ...

(QB_NEW_EN_OTHER_ERROR_IDS_5)


[grammar] ~176-~176: There might be a mistake here.
Context: ...dards - Each test should be independent - Tests should be fast and reliable - Use ...

(QB_NEW_EN_OTHER)


[grammar] ~177-~177: There might be a mistake here.
Context: ...dent - Tests should be fast and reliable - Use meaningful assertions - Mock externa...

(QB_NEW_EN_OTHER)


[grammar] ~178-~178: There might be a mistake here.
Context: ...and reliable - Use meaningful assertions - Mock external dependencies - Test both h...

(QB_NEW_EN_OTHER)


[grammar] ~179-~179: There might be a mistake here.
Context: ... assertions - Mock external dependencies - Test both happy path and error scenarios...

(QB_NEW_EN_OTHER)


[grammar] ~180-~180: There might be a mistake here.
Context: ...Test both happy path and error scenarios - Include edge cases and boundary conditio...

(QB_NEW_EN_OTHER)


[grammar] ~181-~181: There might be a mistake here.
Context: ...clude edge cases and boundary conditions

(QB_NEW_EN_OTHER)

🪛 markdownlint-cli2 (0.17.2)
state-manager/tests/README.md

7-7: Fenced code blocks should have a language specified

(MD040, fenced-code-language)


26-26: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below

(MD022, blanks-around-headings)


27-27: Lists should be surrounded by blank lines

(MD032, blanks-around-lists)


33-33: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below

(MD022, blanks-around-headings)


34-34: Lists should be surrounded by blank lines

(MD032, blanks-around-lists)


40-40: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below

(MD022, blanks-around-headings)


41-41: Lists should be surrounded by blank lines

(MD032, blanks-around-lists)


48-48: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below

(MD022, blanks-around-headings)


49-49: Lists should be surrounded by blank lines

(MD032, blanks-around-lists)


56-56: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below

(MD022, blanks-around-headings)


57-57: Lists should be surrounded by blank lines

(MD032, blanks-around-lists)


65-65: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below

(MD022, blanks-around-headings)


66-66: Lists should be surrounded by blank lines

(MD032, blanks-around-lists)


75-75: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below

(MD022, blanks-around-headings)


76-76: Lists should be surrounded by blank lines

(MD032, blanks-around-lists)


83-83: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below

(MD022, blanks-around-headings)


84-84: Lists should be surrounded by blank lines

(MD032, blanks-around-lists)


139-139: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below

(MD022, blanks-around-headings)


142-142: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below

(MD022, blanks-around-headings)


144-144: Lists should be surrounded by blank lines

(MD032, blanks-around-lists)


149-149: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below

(MD022, blanks-around-headings)


151-151: Lists should be surrounded by blank lines

(MD032, blanks-around-lists)


156-156: Headings should be surrounded by blank lines
Expected: 1; Actual: 0; Below

(MD022, blanks-around-headings)


158-158: Lists should be surrounded by blank lines

(MD032, blanks-around-lists)

🔇 Additional comments (16)
state-manager/pyproject.toml (1)

22-23: [dependency-groups] is not part of PEP 621; use optional-dependencies for broader build-backend portability
The [dependency-groups] table in state-manager/pyproject.toml (lines 19–23) comes from PEP 735 and is understood by tools such as uv and PDM, but most PEP 517 build backends (e.g. poetry-core, hatchling, flit) ignore it. If you need broader compatibility, migrate your dev dependencies to the standard [project.optional-dependencies] table:

[project.optional-dependencies]
dev = [
  "ruff>=0.12.5",
  "pytest>=8.0.0",
  "pytest-asyncio>=0.24.0",
  "pytest-cov>=6.2.1",
]

• If you’re exclusively using PDM (with build-backend = "pdm.backend") or uv/Rye, you can continue with dependency-groups.
• Otherwise, switch to optional-dependencies to ensure every PEP 517 backend picks up your dev tooling.

Let me know if you’d like me to open a follow-up PR to migrate this.
[optional_refactors_recommended]

.vscode/settings.json (1)

15-20: VSCode pytest configuration LGTM

Settings align with the new pytest suite and pytest.ini. This should enable smooth test discovery in the “state-manager” folder.

state-manager/tests/unit/controller/test_get_secrets.py (1)

41-74: LGTM: happy-path test is clear and verifies core behavior

Good use of AsyncMock and patching module-local symbols; assertions cover both response and DB calls.

state-manager/tests/unit/controller/test_get_graph_template.py (1)

53-81: LGTM: exercises success path and response shape thoroughly

Covers status, errors, secrets presence flags, and timestamps; mocking strategy is appropriate.

state-manager/tests/unit/controller/test_errored_state.py (1)

11-13: Good coverage and structure

Fixtures are clean, and the suite covers success, invalid state, not found, already executed, and error propagation paths. Once the mocking issues are fixed, this will provide solid coverage of control flow and HTTPException semantics.

state-manager/tests/unit/controller/test_enqueue_states.py (1)

39-47: LGTM: realistic async chaining for .find().limit().to_list() and update path

The mocking pattern for the selection and update phases closely mirrors the Beanie query chain and makes the tests resilient. Good use of side_effect to separate read vs update paths.
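For readers unfamiliar with the pattern, the read path can be sketched roughly as follows (a minimal sketch, assuming a Beanie-style `State.find(...).limit(n).to_list()` chain; the names `mock_state_class` and `select_states` are illustrative, not the project's actual code):

```python
import asyncio
from unittest.mock import AsyncMock, MagicMock

# Mock a chained query: .find() and .limit() return the query object,
# while the terminal .to_list() is awaitable and yields the fixtures.
mock_states = [MagicMock(id=i) for i in range(3)]

mock_query = MagicMock()
mock_query.limit.return_value = mock_query                 # chaining
mock_query.to_list = AsyncMock(return_value=mock_states)   # awaitable terminal call

mock_state_class = MagicMock()
mock_state_class.find.return_value = mock_query

async def select_states(batch_size: int):
    # Mirrors the controller's read path under test
    return await mock_state_class.find().limit(batch_size).to_list()

print(len(asyncio.run(select_states(3))))
```

Only the terminal call needs `AsyncMock`; the intermediate `.find()`/`.limit()` calls are synchronous in Beanie's builder API, so plain `MagicMock` chaining suffices.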

state-manager/tests/unit/controller/test_create_states.py (1)

16-51: get_node_template tests are crisp

Positive/negative cases are clear and aligned with controller behavior. No changes needed.

state-manager/tests/unit/controller/test_upsert_graph_template.py (2)

156-167: LGTM: create-new flow mocks are consistent with controller expectations

Using insert AsyncMock and ensuring set_secrets returns self is correct. Good assertion of background_tasks scheduling and secrets booleanization.


5-7: No typo found in upsert_graph_template controller
I searched the upsert_graph_template function in state-manager/app/controller/upsert_graph_template.py and all logger calls reference x_exosphere_request_id correctly. There are no occurrences of the proposed misspelling (x_exosphate_request_id), so the NameError won’t occur. You can disregard this suggestion.

Likely an incorrect or invalid review comment.

state-manager/app/controller/executed_state.py (1)

34-37: Good: refresh state after update before scheduling next steps

Reloading the state to pass a DB-hydrated object to create_next_state prevents stale parents/outputs from being used.

state-manager/app/tasks/create_next_state.py (2)

38-43: Terminal-node handling: marking SUCCESS is correct

Returning early when there are no next nodes is the right behavior and avoids unnecessary work. This aligns with the model’s SUCCESS terminal status.


46-47: Parents merge: ensure state.id type compatibility

parents = state.parents | {state.identifier: ObjectId(state.id)} assumes state.id is a hex string. If State.id is already an ObjectId, ObjectId(state.id) may fail. Normalize with ObjectId(str(state.id)), or leave the value untouched when it is already an ObjectId.

Apply this diff for safety:

-        parents = state.parents | {state.identifier: ObjectId(state.id)}
+        normalized_id = state.id if isinstance(state.id, ObjectId) else ObjectId(str(state.id))
+        parents = state.parents | {state.identifier: normalized_id}

If you prefer not to add a temporary variable:

-        parents = state.parents | {state.identifier: ObjectId(state.id)}
+        parents = state.parents | {state.identifier: state.id if isinstance(state.id, ObjectId) else ObjectId(str(state.id))}
state-manager/tests/unit/controller/test_executed_state.py (4)

49-61: Patch order and argument order are correct

The patch decorators and fixture injection ordering are correct; no changes needed here.


137-145: Correct expectations for multi-output path: find_one call count

Given the controller code, find_one is expected to be called 3 times, not 4. The rest of the assertions look reasonable.

Apply this diff:

-        assert mock_state_class.find_one.call_count == 4
+        assert mock_state_class.find_one.call_count == 3

202-244: Empty outputs path assertions look correct

The test setup and assertions for empty outputs accurately reflect the controller path: outputs set to {}, scheduling create_next_state with the original state.


246-270: DB error propagation test is solid

This test correctly verifies that exceptions from the DB layer bubble up and are not swallowed.

"beanie>=2.0.0",
"cryptography>=45.0.5",
"fastapi>=0.116.1",
"httpx>=0.28.1",

💡 Verification agent

❓ Verification inconclusive

Confirm httpx is required at runtime; otherwise move it to dev dependencies

If httpx is only used by tests/integration clients and not in application code, it should live under the dev group instead of production deps.

Run this script to confirm whether httpx is imported or used in runtime code (excluding tests). If no matches are found, it’s safe to move to dev:


🏁 Script executed:

#!/bin/bash
# Search for httpx usage outside tests
rg -n -C2 -g '!state-manager/tests/**' --type=py -P '\bimport\s+httpx\b|\bhttpx\.' || true

Length of output: 91


To catch all import patterns and direct usages, let’s broaden the search:


🏁 Script executed:

#!/bin/bash
# Broad search for any httpx usage (imports or attribute calls) outside of tests
rg -n --type=py -C2 -g '!state-manager/tests/**' -P '\b(?:import\s+httpx|from\s+httpx\b|httpx\.)' || true

Length of output: 106


Move httpx into dev-dependencies

A search for import httpx, from httpx, or httpx. across all non-test .py files returned no hits—httpx is only used for tests/integration. Please relocate it from your production deps to dev-dependencies in state-manager/pyproject.toml:

Suggested change:

 [tool.poetry.dependencies]
-httpx>=0.28.1

 [tool.poetry.dev-dependencies]
+httpx>=0.28.1

– File: state-manager/pyproject.toml, line 11

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-"httpx>=0.28.1",
+[tool.poetry.dependencies]
+# …other production dependencies…
+[tool.poetry.dev-dependencies]
+httpx>=0.28.1
🤖 Prompt for AI Agents
In state-manager/pyproject.toml around line 11, the httpx dependency is listed
as a production dependency but is only used in tests; remove the "httpx>=0.28.1"
entry from the main dependencies section and add the same version specifier
under the dev-dependencies section (e.g., [tool.poetry.dev-dependencies] or
[project.optional-dependencies] depending on project layout). After moving it,
update the lockfile (poetry lock or pip-tools regenerate) and run the test suite
to verify no import breakages.

@NiveditJain
Member

@nk-ag ruff checks are also failing

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

♻️ Duplicate comments (6)
state-manager/tests/integration/test_full_workflow_integration.py (6)

26-28: Avoid mutating sys.path at runtime; import packages normally

This sys.path insertion is a test smell and can mask packaging/config issues. Import the app as a package by configuring PYTHONPATH/pytest.ini, and drop the path hack. Also remove the now-unused sys import.

Apply this diff:

-import sys
 import os
-# Add the state-manager app to the path
-sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '../../')))

If CI needs help resolving imports after this change, ensure tests run from repo root and app/ is importable (e.g., via editable install or PYTHONPATH). I can provide a minimal pytest.ini if helpful.
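One such minimal configuration might look like this (a sketch only; the paths and options below are assumptions about the repo layout, not verified settings):

```ini
# state-manager/pytest.ini — hypothetical sketch
[pytest]
testpaths = tests
pythonpath = .
asyncio_mode = auto
```

Note that `pythonpath` requires pytest 7+, and `asyncio_mode = auto` assumes pytest-asyncio is installed (which this PR adds to the dev group).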

Also applies to: 16-16


43-48: Skip gracefully when service is down; make base URL configurable and set timeouts

Avoid CI flakes by probing the service with a short timeout and skipping if unreachable. Also use a configurable base URL and explicit client timeouts.

Apply this diff:

     @pytest.fixture
     async def state_manager_client(self):
-        """Create an HTTP client for the state-manager."""
-        async with httpx.AsyncClient(base_url="http://localhost:8000") as client:
-            yield client
+        """Create an HTTP client for the state-manager."""
+        base_url = os.getenv("STATE_MANAGER_BASE_URL", "http://localhost:8000")
+        timeout = httpx.Timeout(5.0, connect=2.0, read=5.0, write=5.0)
+        async with httpx.AsyncClient(base_url=base_url, timeout=timeout) as client:
+            try:
+                # Probe service availability to avoid hard failures in CI
+                resp = await client.get("/health", timeout=timeout)
+                resp.raise_for_status()
+            except httpx.RequestError as exc:
+                pytest.skip(f"State-manager unavailable at {base_url}: {exc}")
+            except httpx.HTTPStatusError as exc:
+                pytest.skip(f"State-manager unhealthy at {base_url}: {exc}")
+            yield client

Adjust the health path if your service uses a different probe endpoint.


110-113: Placeholder syntax likely won’t parse; align with parser’s expected format

Your parser expects the ${{ nodeId.outputs.field }} form. The current "{{node1.output1}}" form will not resolve.

Apply this diff:

-                    "input1": "{{node1.output1}}",
-                    "input2": "{{node1.output2}}"
+                    "input1": "${{ node1.outputs.output1 }}",
+                    "input2": "${{ node1.outputs.output2 }}"

280-281: Assert response payload string, not Enum object

The HTTP JSON contains strings. Compare against the Enum’s value (or a plain string) to avoid relying on Enum-string equality semantics.

Apply this diff:

-        assert response_data["status"] == StateStatusEnum.EXECUTED
+        assert response_data["status"] == StateStatusEnum.EXECUTED.value
+        # Alternatively: assert response_data["status"] == "EXECUTED"
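Whether the original assertion happens to pass depends on whether the enum subclasses `str`; comparing against `.value` is robust either way. A quick illustration (these class definitions are hypothetical, not the project's actual model):

```python
from enum import Enum

class StateStatusEnum(str, Enum):   # hypothetical str-backed enum
    EXECUTED = "EXECUTED"

class PlainStatus(Enum):            # plain Enum for contrast
    EXECUTED = "EXECUTED"

assert StateStatusEnum.EXECUTED == "EXECUTED"    # str subclass compares equal
assert PlainStatus.EXECUTED != "EXECUTED"        # plain Enum does not
assert PlainStatus.EXECUTED.value == "EXECUTED"  # .value always works
```

If the project's enum mixes in `str`, the existing assertion is not wrong, but `.value` keeps the test independent of that implementation detail.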

299-339: Tests call other tests; refactor to helpers to enforce a single, ordered integration flow

Calling test_* from test_* is a pytest anti-pattern and also causes fixture issues (e.g., state_id fixture is not defined for standalone test execution). Make the step functions private helpers and keep only one collected test, test_full_workflow_happy_path.

Apply these diffs to rename step tests to helpers and update call sites:

-    async def test_register_nodes(self, state_manager_client, test_namespace: str, 
+    async def _register_nodes(self, state_manager_client, test_namespace: str, 
                                 test_api_key: str, test_runtime_name: str, 
                                 sample_node_registration: NodeRegistrationModel):
-    async def test_upsert_graph_template(self, state_manager_client, test_namespace: str,
+    async def _upsert_graph_template(self, state_manager_client, test_namespace: str,
                                        test_api_key: str, test_graph_name: str,
                                        sample_graph_nodes: List[NodeTemplate]):
-    async def test_get_graph_template(self, state_manager_client, test_namespace: str,
+    async def _get_graph_template(self, state_manager_client, test_namespace: str,
                                     test_api_key: str, test_graph_name: str):
-    async def test_create_states(self, state_manager_client, test_namespace: str,
+    async def _create_states(self, state_manager_client, test_namespace: str,
                                test_api_key: str, test_graph_name: str):
-    async def test_queued_state(self, state_manager_client, test_namespace: str,
+    async def _queued_state(self, state_manager_client, test_namespace: str,
                                 test_api_key: str):
-    async def test_execute_state(self, state_manager_client, test_namespace: str,
-                               test_api_key: str, state_id: str):
+    async def _execute_state(self, state_manager_client, test_namespace: str,
+                             test_api_key: str, state_id: str):
-    async def test_get_secrets(self, state_manager_client, test_namespace: str,
-                             test_api_key: str, state_id: str):
+    async def _get_secrets(self, state_manager_client, test_namespace: str,
+                           test_api_key: str, state_id: str):

And update the orchestrator to call the helpers:

-        await self.test_register_nodes(
+        await self._register_nodes(
             state_manager_client, test_namespace, test_api_key, 
             test_runtime_name, sample_node_registration
         )
@@
-        await self.test_upsert_graph_template(
+        await self._upsert_graph_template(
             state_manager_client, test_namespace, test_api_key,
             test_graph_name, sample_graph_nodes
         )
@@
-        await self.test_get_graph_template(
+        await self._get_graph_template(
             state_manager_client, test_namespace, test_api_key, test_graph_name
         )
@@
-        state_id = await self.test_create_states(
+        state_id = await self._create_states(
             state_manager_client, test_namespace, test_api_key, test_graph_name
         )
@@
-        await self.test_get_secrets(
+        await self._get_secrets(
             state_manager_client, test_namespace, test_api_key, state_id
         )
@@
-        await self.test_queued_state(
+        await self._queued_state(
             state_manager_client, test_namespace, test_api_key
         )
@@
-        await self.test_execute_state(
+        await self._execute_state(
             state_manager_client, test_namespace, test_api_key, state_id
         )

This ensures a single, deterministic integration test and avoids brittle inter-test dependencies.


345-348: Prefer logging or assertions over print in tests

Printing in tests creates noisy output and hides intent. Use logging if you need visibility, or assertions to validate expected outcomes.

Example:

import logging
logger = logging.getLogger(__name__)
logger.info("Full workflow completed for namespace=%s, graph=%s, state_id=%s, runtime=%s",
            test_namespace, test_graph_name, state_id, test_runtime_name)
📜 Review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 1c58eb8 and 2ae95e3.

📒 Files selected for processing (1)
  • state-manager/tests/integration/test_full_workflow_integration.py (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
state-manager/tests/integration/test_full_workflow_integration.py (7)
state-manager/app/models/register_nodes_request.py (2)
  • RegisterNodesRequestModel (12-14)
  • NodeRegistrationModel (5-9)
state-manager/app/models/graph_models.py (1)
  • UpsertGraphTemplateRequest (8-10)
state-manager/app/models/node_template_model.py (1)
  • NodeTemplate (5-10)
state-manager/app/models/create_models.py (2)
  • CreateRequestModel (21-22)
  • RequestStateModel (7-9)
state-manager/app/models/executed_models.py (1)
  • ExecutedRequestModel (5-6)
state-manager/app/models/enqueue_request.py (1)
  • EnqueueRequestModel (4-6)
state-manager/app/models/state_status_enum.py (1)
  • StateStatusEnum (4-14)
🪛 GitHub Actions: Ruff check on changed files only
state-manager/tests/integration/test_full_workflow_integration.py

[error] 19-19: Ruff: F401 'asyncio' imported but unused.


[error] 21-21: Ruff: F401 'json' imported but unused.


[error] 22-22: Ruff: F401 'Dict' imported but unused.


[error] 22-22: Ruff: F401 'Any' imported but unused.


[error] 23-23: Ruff: F401 'datetime' imported but unused.


[error] 24-24: Ruff: F401 'uuid' imported but unused.

🔇 Additional comments (3)
state-manager/tests/integration/test_full_workflow_integration.py (3)

55-58: LGTM: API key fixture now matches docstring and is configurable

Using TEST_API_KEY with a sensible default is the right trade-off for local runs and CI.


16-25: False-Positive on uuid Import
Ruff’s F401 here is a false-positive—uuid is in fact used on lines 52, 62, and 67. Please:

• Remove the other unused imports.
• Re-run Ruff (CI) to confirm uuid is now detected as used.
• If the warning still appears, annotate the import with # noqa: F401.


287-289: No change needed—/state/{state_id}/secrets is the defined route
The test is correctly calling the singular /state/{state_id}/secrets endpoint, which matches the router in state-manager/app/routes.py. Although other state routes use /states, the get_secrets_route is explicitly defined under /state/{state_id}/secrets, so the proposed diff would break the test. If you’d like to standardize pluralization, update the route definition itself (and tests) to /states/{state_id}/secrets; otherwise, leave the test as is.

Likely an incorrect or invalid review comment.

@NiveditJain
Member

@nk-ag merge main into this branch and complete the CI/CD for running tests.

@nk-ag nk-ag changed the title Unit & Integration tests for State Manager with bug fixes Unit & Integration tests for State Manager Aug 16, 2025
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 37

🔭 Outside diff range comments (8)
state-manager/app/tasks/create_next_state.py (4)

65-67: Unreachable branch: len(splits) is never 0 after split

temporary_input.split("${{") returns at least one element; the no-placeholder case is len==1. This condition never fires.

Apply this diff:

-                if len(splits) == 0:
+                if len(splits) == 1:
                     next_node_input_data[field_name] = temporary_input
                     continue
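
For reference, str.split with a non-empty separator always returns at least one element, so the no-placeholder case is len == 1. A standalone sketch (plain Python, independent of the production code):

```python
# str.split with a non-empty separator always yields at least one element:
# the original string itself when the separator is absent.
splits = "plain input with no placeholder".split("${{")
assert len(splits) == 1  # never 0

# With one placeholder present, the split produces two parts.
splits = "prefix ${{node.output}} suffix".split("${{")
assert len(splits) == 2
```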

81-88: Avoid double-wrapping ObjectId values

parents may already store ObjectId instances. Wrapping an ObjectId again is unnecessary and can be error-prone across libraries. Guard the conversion.

Apply this diff:

-                        parent_id = parents.get(input_identifier)
+                        parent_id = parents.get(input_identifier)
@@
-                        if parent_id not in cache_states:
-                            dependent_state = await State.get(ObjectId(parent_id))
+                        if parent_id not in cache_states:
+                            parent_oid = parent_id if isinstance(parent_id, ObjectId) else ObjectId(parent_id)
+                            dependent_state = await State.get(parent_oid)

94-98: Type safety when interpolating outputs in strings

dependent_state.outputs[input_field] may be non-string; concatenation will error. Cast to str.

Apply this diff:

-                        constructed_string += dependent_state.outputs[input_field] + split.split("}}")[1]
+                        constructed_string += str(dependent_state.outputs[input_field]) + split.split("}}")[1]
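
A minimal standalone sketch (with a hypothetical outputs dict, not the real State document) of why the cast matters:

```python
outputs = {"count": 42}  # hypothetical non-string output value

try:
    line = "value: " + outputs["count"]  # str + int raises TypeError
except TypeError:
    line = "value: " + str(outputs["count"])  # casting makes the concatenation safe

assert line == "value: 42"
```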

123-127: Log failures with context and stack trace

On exceptions, we silently mark ERRORED and return. Add structured logging to aid troubleshooting.

Apply this diff:

     except Exception as e:
-        state.status = StateStatusEnum.ERRORED
-        state.error = str(e)
-        await state.save()
-        return
+        state.status = StateStatusEnum.ERRORED
+        state.error = str(e)
+        await state.save()
+        try:
+            from app.singletons.logs_manager import LogsManager
+            logger = LogsManager().get_logger()
+            logger.error(
+                "Error creating next state",
+                state_id=str(state.id),
+                graph_name=state.graph_name,
+                namespace=state.namespace_name,
+                error=e,
+                exc_info=True,
+            )
+        except Exception:
+            # Best-effort logging; never mask original failure handling
+            pass
+        return

Note: this adds a localized import to avoid broad module import churn; we can lift it to the module top if preferred.

state-manager/tests/unit/controller/test_create_states.py (1)

1-299: Run Ruff and remove unused imports across the codebase

Ruff reports a number of F401 (“imported but unused”) errors. Please remove the following unused imports:

  • app/controller/get_current_states.py
    • Remove from beanie import PydanticObjectId
  • app/controller/get_states_by_run_id.py
    • Remove from beanie import PydanticObjectId
  • app/controller/list_graph_templates.py
    • Remove from beanie import PydanticObjectId
  • app/controller/list_registered_nodes.py
    • Remove from beanie import PydanticObjectId
  • app/models/state_list_models.py
    • Remove from bson import ObjectId
  • tests/integration/test_full_workflow_integration.py
    • Remove unused imports: asyncio, json, Dict, Any, datetime
  • tests/unit/controller/test_create_states.py
    • Remove unused imports: State, GraphTemplate, GraphTemplateValidationStatus
  • tests/unit/controller/test_trigger_graph.py
    • Remove unused import: AsyncMock

After cleaning up these imports, rerun

ruff check state-manager --select F401

to confirm no unused‐import diagnostics remain.

state-manager/app/controller/create_states.py (2)

78-85: Avoid wrapping ObjectId values; use inserted_ids as-is

inserted_ids are already ObjectId; wrapping can raise or produce unexpected ids. Also prefer snake_case for variables.

-        newStates = await State.find(
-            In(State.id, [ObjectId(id) for id in inserted_states.inserted_ids])
-        ).to_list()
+        new_states = await State.find(
+            In(State.id, list(inserted_states.inserted_ids))
+        ).to_list()

91-93: Use logger.exception and bare raise in create_states

Same rationale as trigger_graph.

-    except Exception as e:
-        logger.error(f"Error creating states for namespace {namespace_name}", x_exosphere_request_id=x_exosphere_request_id)
-        raise e
+    except Exception:
+        logger.exception(f"Error creating states for namespace {namespace_name}", x_exosphere_request_id=x_exosphere_request_id)
+        raise
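
A standalone sketch of the logger.exception plus bare raise pattern (names here are illustrative, not the actual controller):

```python
import logging

logger = logging.getLogger("create_states_demo")

def create_states_demo():
    try:
        raise ValueError("boom")
    except Exception:
        # logger.exception logs at ERROR level and appends the active traceback;
        # a bare `raise` re-raises the original exception with its traceback intact.
        logger.exception("Error creating states")
        raise

try:
    create_states_demo()
except ValueError as exc:
    caught = str(exc)

assert caught == "boom"
```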
state-manager/app/routes.py (1)

151-151: Ruff/flake8: trailing whitespace

There appears to be trailing whitespace after the closing parenthesis on Line 151. Please remove it to satisfy ruff (W291).

♻️ Duplicate comments (8)
state-manager/app/models/db/state.py (1)

1-5: Replace arbitrary_types_allowed with Beanie’s PydanticObjectId; tighten validation

Relying on arbitrary_types_allowed=True to accept ObjectId relaxes validation globally. Prefer Beanie’s PydanticObjectId for the parents field. This also aligns with prior feedback on this file.

Apply:

 from bson import ObjectId
 from .base import BaseDatabaseModel
 from ..state_status_enum import StateStatusEnum
-from pydantic import Field, ConfigDict
+from pydantic import Field, ConfigDict
+from beanie import PydanticObjectId
 from typing import Any, Optional
@@
-class State(BaseDatabaseModel):
-    model_config = ConfigDict(arbitrary_types_allowed=True)
+class State(BaseDatabaseModel):
@@
-    run_id: str = Field(..., description="Unique run ID for grouping states from the same graph execution")
+    run_id: str = Field(..., min_length=1, description="Unique run ID for grouping states from the same graph execution")
@@
-    parents: dict[str, ObjectId] = Field(default_factory=dict, description="Parents of the state")
+    parents: dict[str, PydanticObjectId] = Field(default_factory=dict, description="Parents of the state")

Optional: If no other fields need arbitrary types, remove ConfigDict(arbitrary_types_allowed=True) entirely (as above) for stricter validation.

Also applies to: 8-10, 15-20

state-manager/tests/unit/controller/test_errored_state.py (3)

77-92: Rename misleading test and fix update-chain mocking (same issue as above)

The test name implies EXECUTED, but the status is set to QUEUED. Also needs update-chain mocking. This mirrors prior review feedback.

-    async def test_errored_state_success_executed(
+    async def test_errored_state_success_from_queued_alt(
@@
-        """Test successful error marking of executed state"""
+        """Test successful error marking when state is QUEUED (alternate path)"""
@@
-        mock_state_executed.save = AsyncMock() 
-
-        mock_state_executed.status = StateStatusEnum.QUEUED
-        mock_state_class.find_one = AsyncMock(return_value=mock_state_executed)
+        mock_state_executed.save = AsyncMock()
+        mock_state_executed.status = StateStatusEnum.QUEUED
+        update_query = MagicMock()
+        update_query.set = AsyncMock()
+        mock_state_class.find_one = MagicMock()
+        mock_state_class.find_one.side_effect = [mock_state_executed, update_query]
@@
-        assert result.status == StateStatusEnum.ERRORED
-        assert mock_state_class.find_one.call_count == 2  # Called twice: once for finding, once for updating
+        assert result.status == StateStatusEnum.ERRORED
+        assert mock_state_class.find_one.call_count == 2
+        update_query.set.assert_awaited_once()

Also applies to: 101-104


221-224: Use AsyncMock for awaited find_one in DB error propagation test

find_one is awaited in production. Use AsyncMock so the mock call is awaitable and the exception surfaces on await, the same way it would at runtime.
-        mock_state_class.find_one = MagicMock(side_effect=Exception("Database error"))
+        mock_state_class.find_one = AsyncMock(side_effect=Exception("Database error"))
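
A quick standalone illustration, using only the standard library, of why AsyncMock fits an awaited call:

```python
import asyncio
from unittest.mock import AsyncMock, MagicMock

async def call_find_one(find_one):
    # Mirrors production code: the result of find_one(...) is awaited.
    return await find_one()

# AsyncMock is awaitable; its side_effect is raised when the call is awaited,
# so the exception propagates just as it would from the real driver.
good_mock = AsyncMock(side_effect=Exception("Database error"))
try:
    asyncio.run(call_find_one(good_mock))
    caught = None
except Exception as exc:
    caught = str(exc)

# A plain MagicMock's return value is not awaitable, so any path that expects
# a normal awaited result from find_one breaks with TypeError instead.
bad_mock = MagicMock(return_value={"_id": "abc"})
try:
    asyncio.run(call_find_one(bad_mock))
    not_awaitable = False
except TypeError:
    not_awaitable = True
```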

245-256: Fix update-chain mocking and assert update payload (controller doesn’t mutate in-memory doc)

The controller updates via .set(...); it does not mutate the fetched document, so mock_state_queued.error won’t change. Mock the update chain and assert the payload.

@@
-        mock_state_queued.save = AsyncMock()
-
-        mock_state_queued.status = StateStatusEnum.QUEUED
-        mock_state_class.find_one = AsyncMock(return_value=mock_state_queued)
+        mock_state_queued.save = AsyncMock()
+        mock_state_queued.status = StateStatusEnum.QUEUED
+        update_query = MagicMock()
+        update_query.set = AsyncMock()
+        mock_state_class.find_one = MagicMock()
+        mock_state_class.find_one.side_effect = [mock_state_queued, update_query]
@@
-        assert result.status == StateStatusEnum.ERRORED
-        assert mock_state_class.find_one.call_count == 2  # Called twice: once for finding, once for updating
-        assert mock_state_queued.error == "Different error message"
+        assert result.status == StateStatusEnum.ERRORED
+        assert mock_state_class.find_one.call_count == 2
+        update_query.set.assert_awaited_once()
+        args, kwargs = update_query.set.call_args
+        assert args[0]["status"] == StateStatusEnum.ERRORED
+        assert args[0]["error"] == "Different error message"

Also applies to: 264-267
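
The chained update path can be exercised in isolation; a minimal sketch (illustrative names, not the real State model) of mocking a sync find_one that returns a query whose set(...) is awaited:

```python
import asyncio
from unittest.mock import AsyncMock, MagicMock

# Stand-in for State.find_one(...): a synchronous call returning an update
# query object whose .set(...) coroutine is then awaited.
update_query = MagicMock()
update_query.set = AsyncMock()
find_one = MagicMock(return_value=update_query)

async def mark_errored(message):
    # Mirrors the controller's update path: build the query, await .set().
    await find_one().set({"status": "ERRORED", "error": message})

asyncio.run(mark_errored("Different error message"))

update_query.set.assert_awaited_once()
payload = update_query.set.call_args[0][0]
assert payload["error"] == "Different error message"
```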

state-manager/tests/unit/controller/test_executed_state.py (2)

64-87: Fix mocking flow and assertions for single-output path (matches controller semantics)

The controller calls State.find_one three times: load, update (set), and reload (new_state). The background task is scheduled with new_state, not the original state. The current mock setup and expectations don't reflect that and will be brittle or fail.

Apply this diff:

-        # Mock State.find_one() for finding the state
-        # Mock State.find_one().set() for updating the state
-        mock_update_query = MagicMock()
-        mock_update_query.set = AsyncMock()
-
-        mock_state.save = AsyncMock()
-
-        mock_state.status = StateStatusEnum.QUEUED 
-        mock_state_class.find_one = AsyncMock(return_value=mock_state)   
+        # Mock update query and the reloaded state
+        mock_update_query = MagicMock()
+        mock_update_query.set = AsyncMock()
+        new_state = MagicMock()
+
+        mock_state.save = AsyncMock()
+        mock_state.status = StateStatusEnum.QUEUED
+        # find_one: (1) load state, (2) update query, (3) reload new_state
+        mock_state_class.find_one = AsyncMock(side_effect=[mock_state, mock_update_query, new_state])
@@
-        assert mock_state_class.find_one.call_count == 1  # Called once for finding
-        mock_background_tasks.add_task.assert_called_once_with(mock_create_next_state, mock_state)
+        assert mock_state_class.find_one.call_count == 3
+        mock_update_query.set.assert_called_once()
+        mock_background_tasks.add_task.assert_called_once()
+        args, _ = mock_background_tasks.add_task.call_args
+        assert args[0] is mock_create_next_state
+        assert args[1] is new_state

110-143: Correct side effects and call counts for multi-output path

The loop does not call find_one; only two new State() instances are created for the extra outputs. find_one should be called three times total (load, set, reload new_state). Current side effects and assertions don’t match this and will fail.

Apply this diff:

-        # Configure State.find_one to return different values based on call
-        # First call returns the state object, second call returns a query object with set method
-        # Additional calls in the loop also return query objects with set method
-        mock_state_class.find_one = AsyncMock(return_value=mock_state)
-        mock_state_class.find_one.side_effect = [mock_state, mock_update_query, mock_update_query, mock_update_query]
+        # Configure State.find_one: load, update query, reload new_state
+        new_state_after_update = MagicMock()
+        mock_state_class.find_one = AsyncMock(side_effect=[mock_state, mock_update_query, new_state_after_update])
@@
-        # State.find_one should be called multiple times: once for finding, once for updating main state, and twice in the loop
-        assert mock_state_class.find_one.call_count == 4
+        # find_one: load, update query, reload only
+        assert mock_state_class.find_one.call_count == 3
+        mock_update_query.set.assert_called_once()
state-manager/tests/unit/controller/test_upsert_graph_template.py (2)

92-117: Do not AsyncMock set_secrets; it’s sync and returns self (adjust assertions accordingly)

set_secrets is synchronous per GraphTemplate model and should return the template to allow chaining. Also, update is awaited; use the async assertion.

-        mock_existing_template.set_secrets = AsyncMock()
-        mock_existing_template.update = AsyncMock()
+        # set_secrets is sync and must return self for chaining
+        mock_existing_template.set_secrets = MagicMock(return_value=mock_existing_template)
+        mock_existing_template.update = AsyncMock()
@@
-        mock_existing_template.set_secrets.assert_called_once_with(mock_upsert_request.secrets)
-        mock_existing_template.update.assert_called_once()
+        mock_existing_template.set_secrets.assert_called_once_with(mock_upsert_request.secrets)
+        mock_existing_template.update.assert_awaited_once()

197-226: Align with controller behavior: set_secrets sync and update awaited

For consistency with the earlier test, ensure set_secrets is MagicMock and update is AsyncMock with async assertion.

-        mock_existing_template.update = AsyncMock()
+        mock_existing_template.set_secrets = MagicMock(return_value=mock_existing_template)
+        mock_existing_template.update = AsyncMock()
📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between 2ae95e3 and 6d7d76f.

📒 Files selected for processing (16)
  • state-manager/app/controller/create_states.py (4 hunks)
  • state-manager/app/controller/get_current_states.py (1 hunks)
  • state-manager/app/controller/get_states_by_run_id.py (1 hunks)
  • state-manager/app/controller/list_graph_templates.py (1 hunks)
  • state-manager/app/controller/list_registered_nodes.py (1 hunks)
  • state-manager/app/models/create_models.py (1 hunks)
  • state-manager/app/models/db/state.py (1 hunks)
  • state-manager/app/models/list_models.py (1 hunks)
  • state-manager/app/models/state_list_models.py (1 hunks)
  • state-manager/app/routes.py (4 hunks)
  • state-manager/app/tasks/create_next_state.py (3 hunks)
  • state-manager/tests/unit/controller/test_create_states.py (1 hunks)
  • state-manager/tests/unit/controller/test_errored_state.py (1 hunks)
  • state-manager/tests/unit/controller/test_executed_state.py (1 hunks)
  • state-manager/tests/unit/controller/test_trigger_graph.py (1 hunks)
  • state-manager/tests/unit/controller/test_upsert_graph_template.py (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (16)
state-manager/app/controller/get_states_by_run_id.py (5)
state-manager/app/models/db/state.py (1)
  • State (8-20)
state-manager/app/controller/enqueue_states.py (1)
  • enqueue_states (13-54)
state-manager/app/controller/executed_state.py (1)
  • executed_state (12-62)
state-manager/app/controller/errored_state.py (1)
  • errored_state (11-34)
state-manager/app/controller/register_nodes.py (1)
  • register_nodes (11-71)
state-manager/app/controller/list_graph_templates.py (6)
state-manager/app/models/db/graph_template_model.py (1)
  • GraphTemplate (13-81)
state-manager/app/controller/get_graph_template.py (1)
  • get_graph_template (9-48)
state-manager/app/controller/upsert_graph_template.py (1)
  • upsert_graph_template (12-62)
state-manager/app/tasks/verify_graph.py (2)
  • verify_graph (234-265)
  • get_database_nodes (76-91)
state-manager/app/controller/register_nodes.py (1)
  • register_nodes (11-71)
state-manager/app/controller/get_secrets.py (1)
  • get_secrets (9-55)
state-manager/tests/unit/controller/test_trigger_graph.py (3)
state-manager/app/controller/create_states.py (2)
  • create_states (51-93)
  • trigger_graph (24-48)
state-manager/app/models/create_models.py (3)
  • TriggerGraphRequestModel (32-33)
  • RequestStateModel (7-9)
  • ResponseStateModel (12-19)
state-manager/app/models/state_status_enum.py (1)
  • StateStatusEnum (4-14)
state-manager/app/controller/list_registered_nodes.py (5)
state-manager/app/models/db/registered_node.py (1)
  • RegisteredNode (6-13)
state-manager/app/controller/register_nodes.py (1)
  • register_nodes (11-71)
state-manager/app/tasks/verify_graph.py (1)
  • get_database_nodes (76-91)
state-manager/app/models/register_nodes_response.py (2)
  • RegisterNodesResponseModel (12-14)
  • RegisteredNodeModel (5-9)
state-manager/app/models/register_nodes_request.py (1)
  • RegisterNodesRequestModel (12-14)
state-manager/app/models/list_models.py (2)
state-manager/app/models/db/registered_node.py (1)
  • RegisteredNode (6-13)
state-manager/app/models/db/graph_template_model.py (1)
  • GraphTemplate (13-81)
state-manager/app/models/state_list_models.py (4)
state-manager/app/models/state_status_enum.py (1)
  • StateStatusEnum (4-14)
state-manager/app/models/enqueue_response.py (2)
  • StateModel (6-11)
  • EnqueueResponseModel (14-19)
state-manager/app/models/executed_models.py (1)
  • ExecutedResponseModel (9-10)
state-manager/app/models/errored_models.py (1)
  • ErroredResponseModel (9-10)
state-manager/app/controller/get_current_states.py (5)
state-manager/app/models/db/state.py (1)
  • State (8-20)
state-manager/app/controller/enqueue_states.py (1)
  • enqueue_states (13-54)
state-manager/app/controller/executed_state.py (1)
  • executed_state (12-62)
state-manager/app/controller/register_nodes.py (1)
  • register_nodes (11-71)
state-manager/app/controller/errored_state.py (1)
  • errored_state (11-34)
state-manager/tests/unit/controller/test_executed_state.py (3)
state-manager/app/controller/executed_state.py (1)
  • executed_state (12-61)
state-manager/app/models/executed_models.py (1)
  • ExecutedRequestModel (5-6)
state-manager/app/models/state_status_enum.py (1)
  • StateStatusEnum (4-14)
state-manager/tests/unit/controller/test_create_states.py (7)
state-manager/app/controller/create_states.py (2)
  • create_states (51-93)
  • get_node_template (17-21)
state-manager/app/models/create_models.py (2)
  • CreateRequestModel (22-24)
  • RequestStateModel (7-9)
state-manager/app/models/state_status_enum.py (1)
  • StateStatusEnum (4-14)
state-manager/app/models/db/state.py (1)
  • State (8-20)
state-manager/app/models/db/graph_template_model.py (2)
  • GraphTemplate (13-81)
  • get_node_by_identifier (30-35)
state-manager/app/models/node_template_model.py (1)
  • NodeTemplate (5-10)
state-manager/app/models/graph_template_validation_status.py (1)
  • GraphTemplateValidationStatus (4-8)
state-manager/app/models/create_models.py (2)
state-manager/app/models/state_status_enum.py (1)
  • StateStatusEnum (4-14)
state-manager/app/models/graph_models.py (1)
  • UpsertGraphTemplateResponse (13-19)
state-manager/app/models/db/state.py (2)
state-manager/app/models/db/base.py (1)
  • BaseDatabaseModel (7-15)
state-manager/app/models/enqueue_response.py (1)
  • StateModel (6-11)
state-manager/app/routes.py (11)
state-manager/app/models/create_models.py (3)
  • CreateRequestModel (22-24)
  • TriggerGraphRequestModel (32-33)
  • TriggerGraphResponseModel (36-39)
state-manager/app/controller/create_states.py (2)
  • create_states (51-93)
  • trigger_graph (24-48)
state-manager/app/models/list_models.py (2)
  • ListRegisteredNodesResponse (12-16)
  • ListGraphTemplatesResponse (19-23)
state-manager/app/controller/list_registered_nodes.py (1)
  • list_registered_nodes (11-38)
state-manager/app/controller/list_graph_templates.py (1)
  • list_graph_templates (11-38)
state-manager/app/models/state_list_models.py (2)
  • StatesByRunIdResponse (29-34)
  • CurrentStatesResponse (37-42)
state-manager/app/controller/get_states_by_run_id.py (1)
  • get_states_by_run_id (11-40)
state-manager/app/controller/get_current_states.py (1)
  • get_current_states (11-38)
state-manager/app/utils/check_secret.py (1)
  • check_api_key (15-19)
state-manager/app/models/db/graph_template_model.py (1)
  • get_secrets (71-74)
state-manager/app/controller/get_secrets.py (1)
  • get_secrets (10-56)
state-manager/tests/unit/controller/test_errored_state.py (3)
state-manager/app/controller/errored_state.py (1)
  • errored_state (11-34)
state-manager/app/models/errored_models.py (1)
  • ErroredRequestModel (5-6)
state-manager/app/models/state_status_enum.py (1)
  • StateStatusEnum (4-14)
state-manager/tests/unit/controller/test_upsert_graph_template.py (5)
state-manager/app/models/graph_models.py (2)
  • UpsertGraphTemplateRequest (8-10)
  • UpsertGraphTemplateResponse (13-19)
state-manager/app/models/graph_template_validation_status.py (1)
  • GraphTemplateValidationStatus (4-8)
state-manager/app/models/node_template_model.py (1)
  • NodeTemplate (5-10)
state-manager/app/models/db/graph_template_model.py (2)
  • set_secrets (67-69)
  • GraphTemplate (13-81)
state-manager/app/controller/upsert_graph_template.py (1)
  • upsert_graph_template (12-62)
state-manager/app/controller/create_states.py (3)
state-manager/app/models/create_models.py (5)
  • CreateRequestModel (22-24)
  • CreateResponseModel (27-29)
  • ResponseStateModel (12-19)
  • TriggerGraphRequestModel (32-33)
  • TriggerGraphResponseModel (36-39)
state-manager/app/models/state_status_enum.py (1)
  • StateStatusEnum (4-14)
state-manager/app/models/db/state.py (1)
  • State (8-20)
state-manager/app/tasks/create_next_state.py (2)
state-manager/app/models/state_status_enum.py (1)
  • StateStatusEnum (4-14)
state-manager/app/tasks/verify_graph.py (1)
  • verify_node_identifiers (29-55)
🔇 Additional comments (4)
state-manager/app/controller/get_current_states.py (1)

11-12: Optional: Rename namespace to namespace_name for consistency

To match other controller signatures (e.g., create_states, get_secrets, etc.), consider renaming the namespace parameter in get_current_states:

  • In state-manager/app/controller/get_current_states.py, update the signature:
    - async def get_current_states(namespace: str, request_id: str) -> List[State]:
    + async def get_current_states(namespace_name: str, request_id: str) -> List[State]:
  • In state-manager/app/routes.py (around line 286), the call site already passes namespace_name, so no changes to its invocation are required—but ensure the updated signature aligns.

After renaming, run your full build/test suite (e.g., pytest, type checks) to confirm there are no breakages. You may also update any inline docstrings or references inside the function for completeness.

state-manager/app/controller/get_states_by_run_id.py (1)

11-12: Align parameter naming for consistency
Rename the namespace parameter to namespace_name in the controller to match its usage elsewhere:

• In state-manager/app/controller/get_states_by_run_id.py (around line 11):

- async def get_states_by_run_id(namespace: str, run_id: str, request_id: str) -> List[State]:
+ async def get_states_by_run_id(namespace_name: str, run_id: str, request_id: str) -> List[State]:


state-manager/app/routes.py (2)

12-14: Imports for graph trigger and create flow look correct

The new request/response models and controller wiring are appropriate and consistent with the referenced modules.


32-39: List endpoints imports are aligned with controllers

Models and controllers for nodes/graph templates/current states/run-id states are correctly imported and names are consistent.

Comment on lines 86 to 89
return CreateResponseModel(
status=StateStatusEnum.CREATED,
states=[ResponseStateModel(state_id=str(state.id), identifier=state.identifier, node_name=state.node_name, graph_name=state.graph_name, inputs=state.inputs, created_at=state.created_at) for state in newStates]
states=[ResponseStateModel(state_id=str(state.id), identifier=state.identifier, node_name=state.node_name, graph_name=state.graph_name, run_id=state.run_id, inputs=state.inputs, created_at=state.created_at) for state in newStates]
)

🧹 Nitpick (assertive)

Maintain variable naming consistency (snake_case)

Use new_states in the response builder.

-        return CreateResponseModel(
-            status=StateStatusEnum.CREATED,
-            states=[ResponseStateModel(state_id=str(state.id), identifier=state.identifier, node_name=state.node_name, graph_name=state.graph_name, run_id=state.run_id, inputs=state.inputs, created_at=state.created_at) for state in newStates]
-        )
+        return CreateResponseModel(
+            status=StateStatusEnum.CREATED,
+            states=[ResponseStateModel(
+                state_id=str(state.id),
+                identifier=state.identifier,
+                node_name=state.node_name,
+                graph_name=state.graph_name,
+                run_id=state.run_id,
+                inputs=state.inputs,
+                created_at=state.created_at
+            ) for state in new_states]
+        )
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
return CreateResponseModel(
status=StateStatusEnum.CREATED,
states=[ResponseStateModel(state_id=str(state.id), identifier=state.identifier, node_name=state.node_name, graph_name=state.graph_name, inputs=state.inputs, created_at=state.created_at) for state in newStates]
states=[ResponseStateModel(state_id=str(state.id), identifier=state.identifier, node_name=state.node_name, graph_name=state.graph_name, run_id=state.run_id, inputs=state.inputs, created_at=state.created_at) for state in newStates]
)
return CreateResponseModel(
status=StateStatusEnum.CREATED,
states=[ResponseStateModel(
state_id=str(state.id),
identifier=state.identifier,
node_name=state.node_name,
graph_name=state.graph_name,
run_id=state.run_id,
inputs=state.inputs,
created_at=state.created_at
) for state in new_states]
)
🤖 Prompt for AI Agents
In state-manager/app/controller/create_states.py around lines 86 to 89, the
response builder is using camelCase variable newStates; change it to the
snake_case new_states (i.e., replace newStates with new_states in the list
comprehension) so the code uses the existing variable name consistently; update
any other occurrences in this return expression to new_states to avoid NameError
and maintain naming consistency.

Comment on lines 221 to 241
# Configure State.find_one to return different values based on call
# First call returns the state object, second call returns a query object with set method
mock_state_class.find_one = AsyncMock()
mock_state_class.find_one.side_effect = [mock_state, mock_update_query]

# Act
result = await executed_state(
mock_namespace,
mock_state_id,
executed_request,
mock_request_id,
mock_background_tasks
)

# Assert
assert result.status == StateStatusEnum.EXECUTED
mock_update_query.set.assert_called_once()
# Should set outputs to empty dict
call_args = mock_update_query.set.call_args[0][0]
assert call_args["outputs"] == {}
mock_background_tasks.add_task.assert_called_once_with(mock_create_next_state, mock_state)

🧹 Nitpick (assertive)

Optional: assert find_one call count in empty-outputs path

For completeness and to catch regressions, assert the two expected calls (load + update query).

Apply this diff:

         mock_update_query.set.assert_called_once()
         # Should set outputs to empty dict
         call_args = mock_update_query.set.call_args[0][0]
         assert call_args["outputs"] == {}
         mock_background_tasks.add_task.assert_called_once_with(mock_create_next_state, mock_state)
+        assert mock_state_class.find_one.call_count == 2
📝 Committable suggestion


Suggested change
# Configure State.find_one to return different values based on call
# First call returns the state object, second call returns a query object with set method
mock_state_class.find_one = AsyncMock()
mock_state_class.find_one.side_effect = [mock_state, mock_update_query]
# Act
result = await executed_state(
mock_namespace,
mock_state_id,
executed_request,
mock_request_id,
mock_background_tasks
)
# Assert
assert result.status == StateStatusEnum.EXECUTED
mock_update_query.set.assert_called_once()
# Should set outputs to empty dict
call_args = mock_update_query.set.call_args[0][0]
assert call_args["outputs"] == {}
mock_background_tasks.add_task.assert_called_once_with(mock_create_next_state, mock_state)
# Configure State.find_one to return different values based on call
# First call returns the state object, second call returns a query object with set method
mock_state_class.find_one = AsyncMock()
mock_state_class.find_one.side_effect = [mock_state, mock_update_query]
# Act
result = await executed_state(
mock_namespace,
mock_state_id,
executed_request,
mock_request_id,
mock_background_tasks
)
# Assert
assert result.status == StateStatusEnum.EXECUTED
mock_update_query.set.assert_called_once()
# Should set outputs to empty dict
call_args = mock_update_query.set.call_args[0][0]
assert call_args["outputs"] == {}
mock_background_tasks.add_task.assert_called_once_with(mock_create_next_state, mock_state)
assert mock_state_class.find_one.call_count == 2
🤖 Prompt for AI Agents
In state-manager/tests/unit/controller/test_executed_state.py around lines 221
to 241, the test exercises two calls to State.find_one (one to load the state
and one to produce the update query) but doesn't assert that both calls
occurred; add an assertion to verify find_one was called twice to prevent
regressions — for example, assert mock_state_class.find_one.call_count == 2 or
use assert_has_calls with the expected call signatures after the Act section and
before the rest of the assertions so the test explicitly validates both the load
and update-query calls.

Comment on lines +35 to +36
with patch('app.controller.create_states.create_states') as mock_create_states:
mock_response = MagicMock()

🛠️ Refactor suggestion

Patch async dependency with AsyncMock for reliability

Explicitly patch create_states as AsyncMock to avoid awaitability issues and improve clarity.

Apply this diff:

-    with patch('app.controller.create_states.create_states') as mock_create_states:
+    with patch('app.controller.create_states.create_states', new_callable=AsyncMock) as mock_create_states:

Repeat the same change in the error test:

-    with patch('app.controller.create_states.create_states') as mock_create_states:
+    with patch('app.controller.create_states.create_states', new_callable=AsyncMock) as mock_create_states:
📝 Committable suggestion


Suggested change
with patch('app.controller.create_states.create_states') as mock_create_states:
mock_response = MagicMock()
with patch('app.controller.create_states.create_states', new_callable=AsyncMock) as mock_create_states:
mock_response = MagicMock()
🤖 Prompt for AI Agents
In state-manager/tests/unit/controller/test_trigger_graph.py around lines 35 to
36, the test patches create_states with a regular MagicMock which can cause
awaitability issues; replace the patch so create_states is patched as an
AsyncMock (e.g., use patch(..., new_callable=AsyncMock) or assign AsyncMock() to
the mock) and ensure the mocked return value is awaited-compatible; repeat the
same replacement in the corresponding error test so both tests patch
create_states as an AsyncMock.
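The failure mode is easy to reproduce outside the test suite; a minimal sketch (the `caller` helper here is illustrative, not project code):

```python
import asyncio
from unittest.mock import AsyncMock, MagicMock

async def caller(fn):
    # The production code awaits the dependency, so whatever mock
    # stands in for it must return an awaitable when called.
    return await fn()

# AsyncMock() produces a coroutine when called, so awaiting it works.
result = asyncio.run(caller(AsyncMock(return_value=42)))
assert result == 42

# A plain MagicMock returns another MagicMock, which is not awaitable.
try:
    asyncio.run(caller(MagicMock(return_value=42)))
except TypeError:
    pass  # "object MagicMock can't be used in 'await' expression"
```

Patching with `new_callable=AsyncMock` makes the intent explicit and avoids relying on `patch` auto-speccing the right mock type.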

Comment on lines +63 to +69
# Verify the result
assert result.run_id is not None
assert result.status == StateStatusEnum.CREATED
assert len(result.states) == 2
assert result.states[0].identifier == "test_node_1"
assert result.states[1].identifier == "test_node_2"


🧹 Nitpick (assertive)

Strengthen assertion: verify run_id propagation to states

Ensure the run_id generated by trigger_graph is propagated to each returned state from create_states.

Apply this diff:

         assert len(result.states) == 2
         assert result.states[0].identifier == "test_node_1"
         assert result.states[1].identifier == "test_node_2"
+        # Ensure run_id is propagated to all states
+        assert all(s.run_id == result.run_id for s in result.states)
📝 Committable suggestion


Suggested change
# Verify the result
assert result.run_id is not None
assert result.status == StateStatusEnum.CREATED
assert len(result.states) == 2
assert result.states[0].identifier == "test_node_1"
assert result.states[1].identifier == "test_node_2"
# Verify the result
assert result.run_id is not None
assert result.status == StateStatusEnum.CREATED
assert len(result.states) == 2
assert result.states[0].identifier == "test_node_1"
assert result.states[1].identifier == "test_node_2"
# Ensure run_id is propagated to all states
assert all(s.run_id == result.run_id for s in result.states)
🤖 Prompt for AI Agents
In state-manager/tests/unit/controller/test_trigger_graph.py around lines 63 to
69, the test currently asserts result.run_id and state identifiers but doesn't
verify that the generated run_id was propagated to each created state; add
assertions that each state object's run_id equals result.run_id (e.g., for every
state in result.states assert state.run_id == result.run_id) to ensure run_id
propagation from trigger_graph to create_states.

@@ -0,0 +1,286 @@
from time import sleep

🧹 Nitpick (assertive)

Remove sleeps from unit tests; BackgroundTasks aren’t executed here

sleep introduces flakiness and slows the suite to no effect: BackgroundTasks.add_task only schedules the task, and in these tests it is mocked anyway.

-from time import sleep
@@
-        sleep(1) # wait for the background task to complete
@@
-        sleep(1) # wait for the background task to complete
@@
-        sleep(1) # wait for the background task to complete

Also applies to: 105-106, 238-239, 282-283

🤖 Prompt for AI Agents
In state-manager/tests/unit/controller/test_upsert_graph_template.py around line
1 (and also at lines 105-106, 238-239, 282-283), remove the time.sleep calls
used in the unit tests because BackgroundTasks.add_task is not executed in these
tests and the sleeps only slow and make tests flaky; instead, delete the sleep
statements and, if you need to assert that add_task was scheduled, replace the
implicit wait with explicit assertions on the mock BackgroundTasks object (e.g.,
assert_called or assert_called_with) or inspect the called tasks directly so the
test deterministically verifies scheduling without sleeping.
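The deterministic alternative can be sketched with a mocked task registry (the `notify` controller and `send_email` task below are hypothetical, chosen only to mirror the shape of the code under test):

```python
from unittest.mock import MagicMock

def send_email(payload):
    # Hypothetical background task; must never run inside the unit test.
    raise RuntimeError("should not execute during the test")

def notify(task_registry, payload):
    # Stand-in for a controller that schedules work instead of running it,
    # like BackgroundTasks.add_task in the real code.
    task_registry.add_task(send_email, payload)

mock_tasks = MagicMock()
notify(mock_tasks, {"to": "user@example.com"})

# Assert the scheduling call directly; no sleep, no flakiness.
mock_tasks.add_task.assert_called_once_with(send_email, {"to": "user@example.com"})
```

Because the assertion inspects the recorded call rather than waiting for side effects, the test passes or fails immediately and identically on every run.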

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 25

🔭 Outside diff range comments (4)
state-manager/tests/unit/controller/test_create_states.py (1)

296-303: Strengthen multiple-states assertions (result contents and calls).

Assert the response contains both states and identifiers, plus that get_node_by_identifier was invoked twice with the expected IDs.

         # Assert
         assert result.status == StateStatusEnum.CREATED
         assert mock_graph_template_class.find_one.called
         mock_state_class.insert_many.assert_called_once()
         # Verify that insert_many was called with 2 states
         call_args = mock_state_class.insert_many.call_args[0][0]
         assert len(call_args) == 2
+        assert len(result.states) == 2
+        assert sorted(s.identifier for s in result.states) == ["node1", "node2"]
+        assert all(s.run_id == "test_run_id" for s in result.states)
+        # Ensure node resolution attempted for both identifiers
+        assert mock_graph_template.get_node_by_identifier.call_count == 2
+        got_ids = [call.args[0] for call in mock_graph_template.get_node_by_identifier.call_args_list]
+        assert sorted(got_ids) == ["node1", "node2"]
state-manager/app/controller/errored_state.py (2)

20-25: Clarify and simplify status validation logic

Current flow first allows QUEUED/EXECUTED, then immediately rejects EXECUTED. This works but is harder to read, and the “not queued or executed” message is slightly confusing given EXECUTED is blocked anyway.

Consider flipping the checks to make intent explicit:

  • If EXECUTED: raise “already executed”
  • Else if not QUEUED: raise “not queued”

Apply this diff:

-        if state.status != StateStatusEnum.QUEUED and state.status != StateStatusEnum.EXECUTED:
-            raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail="State is not queued or executed")
-        
-        if state.status == StateStatusEnum.EXECUTED:
-            raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail="State is already executed")
+        if state.status == StateStatusEnum.EXECUTED:
+            raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail="State is already executed")
+        elif state.status != StateStatusEnum.QUEUED:
+            raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail="State is not queued")

32-34: Preserve original traceback: use raise, not raise e

On Python 3, raise e re-raises the exception but inserts an extra traceback frame at the re-raise site (on Python 2 it dropped the original traceback entirely). Prefer a bare raise to re-raise the active exception unchanged.

-        raise e
+        raise
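A standalone sketch, unrelated to the controller code, of what each form leaves in the traceback:

```python
import traceback

def inner():
    raise ValueError("boom")  # original failure site

def reraise_with_e():
    try:
        inner()
    except ValueError as e:
        raise e  # adds an extra traceback entry pointing at this line

def bare_reraise():
    try:
        inner()
    except ValueError:
        raise  # re-raises the active exception unchanged

def frame_names(fn):
    try:
        fn()
    except ValueError as exc:
        return [f.name for f in traceback.extract_tb(exc.__traceback__)]

# Both keep the original raise site on Python 3...
assert frame_names(bare_reraise)[-1] == "inner"
assert frame_names(reraise_with_e)[-1] == "inner"
# ...but `raise e` inserts an extra entry for the re-raise line.
assert len(frame_names(reraise_with_e)) > len(frame_names(bare_reraise))
```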
state-manager/app/controller/executed_state.py (1)

61-63: Preserve original traceback: use raise, not raise e

Bare raise retains the full stack trace for better debugging.

-        raise e
+        raise
♻️ Duplicate comments (10)
state-manager/tests/unit/controller/test_upsert_graph_template.py (5)

1-1: Remove sleeps; they add flakiness and aren’t needed in unit tests

BackgroundTasks aren’t executed here; asserting scheduling is sufficient. Also fixes ruff failures due to unused imports.

-from time import sleep
@@
-        sleep(1) # wait for the background task to complete
@@
-        sleep(1) # wait for the background task to complete
@@
-        sleep(1) # wait for the background task to complete

Also applies to: 106-106, 239-239, 283-283


116-121: Assert that async update was awaited, not just called

Since update is an AsyncMock, assert_awaited_once() is the correct assertion.

-        mock_existing_template.update.assert_called_once()
+        mock_existing_template.update.assert_awaited_once()

113-114: Avoid strict equality assertion on updated_at

updated_at often changes on update. This assertion is brittle and can fail if the controller updates the timestamp.

-        assert result.created_at == mock_existing_template.created_at
-        assert result.updated_at == mock_existing_template.updated_at
+        assert result.created_at == mock_existing_template.created_at
+        # Avoid strict equality on updated_at; it may change on update

108-112: Be explicit about expected status after upsert

The controller sets validation_status to PENDING on update in DB but returns the in-memory object. If the controller is later changed to reload after update, this test will start failing. Either:

  • assert that update() was awaited with a payload setting validation_status=PENDING; or
  • mutate the mock in-place in the update side-effect so result reflects the updated status.

Example mutation approach:

def _mutate_status(*_a, **_k):
    mock_existing_template.validation_status = GraphTemplateValidationStatus.PENDING

mock_existing_template.update = AsyncMock(side_effect=_mutate_status)

Or verify the intent without relying on in-memory mutation:

mock_existing_template.update.assert_awaited()  # already suggested above

270-271: Fix contradictory comment and assert update awaited in the validation-errors path; remove sleep

The assertion expects prior errors to remain, but the comment says they should be reset. Either update the controller to reload after update (then assert []), or fix the comment to reflect current behavior. Also assert update was awaited.

-        mock_existing_template.update = AsyncMock()
+        mock_existing_template.update = AsyncMock()
@@
-        sleep(1) # wait for the background task to complete
+        mock_existing_template.update.assert_awaited_once()
@@
-        assert result.validation_errors == ["Previous error 1", "Previous error 2"]  # Should be reset to empty
+        # Existing behavior: response reflects in-memory object; errors remain until background validation updates them
+        assert result.validation_errors == ["Previous error 1", "Previous error 2"]

Also applies to: 283-287

state-manager/tests/unit/controller/test_create_states.py (3)

10-13: Fix ruff F401 by removing unused imports (blocking CI).

These imports aren’t referenced in this module and are tripping ruff checks.

Apply this diff:

-from app.models.db.state import State
-from app.models.db.graph_template_model import GraphTemplate
 from app.models.node_template_model import NodeTemplate
-from app.models.graph_template_validation_status import GraphTemplateValidationStatus

146-152: Also assert run_id, graph_name, state_id, and created_at in response.

Run ID propagation was previously requested; add it here along with a couple of high-value assertions.

         assert result.status == StateStatusEnum.CREATED
         assert len(result.states) == 1
         assert result.states[0].identifier == "test_identifier"
         assert result.states[0].node_name == "test_node"
         assert result.states[0].inputs == {"key": "value"}
+        assert result.states[0].run_id == "test_run_id"
+        assert result.states[0].graph_name == "test_graph"
+        # state_id type may vary; compare string forms for robustness
+        assert str(result.states[0].state_id) == str(mock_state.id)
+        assert result.states[0].created_at == mock_state.created_at

153-157: Optionally strengthen assertions on find_one filters and State constructor args.

  • Validate that GraphTemplate.find_one received both name and namespace filters.
  • Validate State(...) construction captured expected fields from request + template.
         # Verify find_one was called (with any arguments)
-        assert mock_graph_template_class.find_one.called
+        assert mock_graph_template_class.find_one.called
+        # Check that filters include graph name and namespace (string-based, non-brittle)
+        args, _ = mock_graph_template_class.find_one.call_args
+        assert any("test_graph" in str(arg) for arg in args)
+        assert any("test_namespace" in str(arg) for arg in args)
         mock_state_class.insert_many.assert_called_once()
         mock_state_class.find.assert_called_once()
+
+        # Optional: verify State(...) was constructed with expected kwargs
+        ctor_calls = mock_state_class.call_args_list
+        assert ctor_calls, "Expected State(...) to be constructed at least once"
+        ctor_kwargs = ctor_calls[0].kwargs
+        assert ctor_kwargs["identifier"] == "test_identifier"
+        assert ctor_kwargs["node_name"] == "test_node"
+        assert ctor_kwargs["namespace_name"] == "test_namespace"
+        assert ctor_kwargs["graph_name"] == "test_graph"
+        assert ctor_kwargs["run_id"] == "test_run_id"
+        assert ctor_kwargs["inputs"] == {"key": "value"}
state-manager/tests/unit/controller/test_errored_state.py (2)

78-88: Misleading test name/docstring vs. setup

The test name says “executed” but you set status=QUEUED below. Rename the test and docstring to reflect actual behavior.

-    async def test_errored_state_success_executed(
+    async def test_errored_state_success_from_queued_alt(
@@
-        """Test successful error marking of executed state"""
+        """Test successful error marking when state is QUEUED (alternate path)"""

225-226: Fix: AsyncMock is required for awaited find_one (DB error path)

find_one is awaited; using MagicMock will raise “object is not awaitable.” Use AsyncMock with side_effect.

-        mock_state_class.find_one = MagicMock(side_effect=Exception("Database error"))
+        mock_state_class.find_one = AsyncMock(side_effect=Exception("Database error"))
📜 Review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 6d7d76f and bb9e3f1.

📒 Files selected for processing (7)
  • state-manager/app/controller/errored_state.py (1 hunks)
  • state-manager/app/controller/executed_state.py (2 hunks)
  • state-manager/tests/integration/peinding_test_full_workflow_integration.py (1 hunks)
  • state-manager/tests/unit/controller/test_create_states.py (1 hunks)
  • state-manager/tests/unit/controller/test_errored_state.py (1 hunks)
  • state-manager/tests/unit/controller/test_executed_state.py (1 hunks)
  • state-manager/tests/unit/controller/test_upsert_graph_template.py (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (7)
state-manager/app/controller/errored_state.py (2)
state-manager/app/models/state_status_enum.py (1)
  • StateStatusEnum (4-14)
state-manager/app/routes.py (1)
  • errored_state_route (104-114)
state-manager/tests/unit/controller/test_errored_state.py (4)
state-manager/app/controller/errored_state.py (1)
  • errored_state (11-34)
state-manager/app/models/errored_models.py (1)
  • ErroredRequestModel (5-6)
state-manager/app/models/state_status_enum.py (1)
  • StateStatusEnum (4-14)
state-manager/app/routes.py (1)
  • errored_state_route (104-114)
state-manager/tests/unit/controller/test_upsert_graph_template.py (6)
state-manager/app/models/graph_models.py (2)
  • UpsertGraphTemplateRequest (8-10)
  • UpsertGraphTemplateResponse (13-19)
state-manager/app/models/graph_template_validation_status.py (1)
  • GraphTemplateValidationStatus (4-8)
state-manager/app/models/node_template_model.py (1)
  • NodeTemplate (5-10)
state-manager/app/models/db/graph_template_model.py (2)
  • set_secrets (67-69)
  • GraphTemplate (13-81)
state-manager/app/controller/upsert_graph_template.py (1)
  • upsert_graph_template (12-62)
state-manager/app/tasks/verify_graph.py (1)
  • verify_graph (234-265)
state-manager/tests/unit/controller/test_executed_state.py (3)
state-manager/app/controller/executed_state.py (1)
  • executed_state (12-63)
state-manager/app/models/executed_models.py (1)
  • ExecutedRequestModel (5-6)
state-manager/app/models/state_status_enum.py (1)
  • StateStatusEnum (4-14)
state-manager/tests/integration/peinding_test_full_workflow_integration.py (7)
state-manager/app/models/register_nodes_request.py (2)
  • RegisterNodesRequestModel (12-14)
  • NodeRegistrationModel (5-9)
state-manager/app/models/graph_models.py (1)
  • UpsertGraphTemplateRequest (8-10)
state-manager/app/models/node_template_model.py (1)
  • NodeTemplate (5-10)
state-manager/app/models/create_models.py (2)
  • CreateRequestModel (22-24)
  • RequestStateModel (7-9)
state-manager/app/models/executed_models.py (1)
  • ExecutedRequestModel (5-6)
state-manager/app/models/enqueue_request.py (1)
  • EnqueueRequestModel (4-6)
state-manager/app/models/state_status_enum.py (1)
  • StateStatusEnum (4-14)
state-manager/tests/unit/controller/test_create_states.py (7)
state-manager/app/controller/create_states.py (3)
  • create_states (51-93)
  • get_node_template (17-21)
  • create_states (23-64)
state-manager/app/models/create_models.py (2)
  • CreateRequestModel (22-24)
  • RequestStateModel (7-9)
state-manager/app/models/state_status_enum.py (1)
  • StateStatusEnum (4-14)
state-manager/app/models/db/state.py (1)
  • State (8-20)
state-manager/app/models/db/graph_template_model.py (2)
  • GraphTemplate (13-81)
  • get_node_by_identifier (30-35)
state-manager/app/models/node_template_model.py (1)
  • NodeTemplate (5-10)
state-manager/app/models/graph_template_validation_status.py (1)
  • GraphTemplateValidationStatus (4-8)
state-manager/app/controller/executed_state.py (5)
state-manager/app/models/state_status_enum.py (1)
  • StateStatusEnum (4-14)
state-manager/app/tasks/create_next_state.py (2)
  • create_next_state (14-127)
  • create_next_state (16-155)
state-manager/app/models/db/state.py (1)
  • State (8-18)
state-manager/app/routes.py (1)
  • executed_state_route (84-94)
state-manager/app/controller/create_states.py (1)
  • create_states (23-64)
🔇 Additional comments (8)
state-manager/tests/unit/controller/test_upsert_graph_template.py (1)

93-96: Good: set_secrets mocked as sync and returning self

Using MagicMock for set_secrets and AsyncMock for update aligns with the real API and avoids coroutine chaining issues.

state-manager/tests/unit/controller/test_create_states.py (2)

19-38: LGTM: Solid coverage for get_node_template success and not-found paths.

Happy path and 404 behavior are verified, including the exact identifier call.


110-137: Correct mocking of Beanie’s find chain (sync find with awaited to_list).

Using MagicMock for State.find and AsyncMock for to_list is the correct approach and avoids awaiting find itself.
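The pattern generalizes: mock the synchronous link in the chain with MagicMock and only the awaited terminal call with AsyncMock. A minimal sketch (the `load_states` helper and filter dict are illustrative, not Beanie's API):

```python
import asyncio
from unittest.mock import AsyncMock, MagicMock

async def load_states(collection):
    # Mirrors the controller shape: find(...) is synchronous and
    # returns a query object; only to_list() is awaited.
    return await collection.find({"graph_name": "g"}).to_list()

mock_query = MagicMock()
mock_query.to_list = AsyncMock(return_value=[{"id": 1}, {"id": 2}])

mock_collection = MagicMock()
mock_collection.find = MagicMock(return_value=mock_query)

states = asyncio.run(load_states(mock_collection))
assert states == [{"id": 1}, {"id": 2}]
mock_collection.find.assert_called_once_with({"graph_name": "g"})
```

Had `find` been an AsyncMock, `collection.find(...)` would return a coroutine with no `.to_list` attribute to chain on, which is exactly the bug this mocking layout avoids.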

state-manager/app/controller/errored_state.py (1)

26-28: LGTM: in-memory mutation + save is the right direction

Switching to updating the loaded document (status/error) and calling save() reduces round-trips and aligns with executed_state. Looks good.

state-manager/app/controller/executed_state.py (3)

24-29: LGTM: zero-output path refactor to mutate + save

Updating the in-memory state (status/outputs/parents) and saving is cleaner than a separate update query + reload. Nice improvement.


33-39: LGTM: non-zero-output path refactor to mutate + save

Same here—this reduces DB ops and correctly passes the updated state downstream.


41-57: No action needed: run_id is defined on State and passed in all constructor calls.

I’ve confirmed that in state-manager/app/models/db/state.py the State model declares:

run_id: str = Field(..., description="Unique run ID for grouping states from the same graph execution")

and that every State(...) constructor call, including the one in executed_state.py, passes a run_id=… argument.

state-manager/tests/unit/controller/test_executed_state.py (1)

236-240: Strengthen assertions for empty outputs

Validate persistence and scheduling.

         assert result.status == StateStatusEnum.EXECUTED
         assert mock_state.outputs == {}
-        mock_background_tasks.add_task.assert_called_once_with(mock_create_next_state, mock_state)
+        mock_state.save.assert_awaited_once()
+        mock_background_tasks.add_task.assert_called_once_with(mock_create_next_state, mock_state)

Likely an incorrect or invalid review comment.

Comment on lines +1 to +14
"""
Integration tests for the complete state-manager workflow.

These tests cover the full happy path:
1. Register nodes with the state-manager
2. Create a graph template with the registered nodes
3. Create states for the graph
4. Execute states and verify the workflow

Prerequisites:
- A running MongoDB instance
- A running Redis instance (if used by the system)
- The state-manager service running on localhost:8000
"""

🧹 Nitpick (assertive)

File naming and CI behavior

  • Typo in filename: “peinding” → “pending” (or simply test_full_workflow_integration.py).
  • Consider skipping these tests by default in CI unless an env flag is set, as they depend on running services. You can do:
pytestmark = pytest.mark.skipif(
    os.getenv("RUN_INTEGRATION_TESTS") != "1",
    reason="Integration tests require running services; set RUN_INTEGRATION_TESTS=1 to enable."
)

This is in addition to the integration marker already present.

🤖 Prompt for AI Agents
state-manager/tests/integration/peinding_test_full_workflow_integration.py lines
1-14: the filename contains a typo ("peinding") and the test file runs
unconditionally which can break CI; rename the file to
state-manager/tests/integration/test_full_workflow_integration.py (or
state-manager/tests/integration/pending_test_full_workflow_integration.py) and
modify the test module to skip by default unless an environment flag is set by
adding a pytest skipif marker that checks os.getenv("RUN_INTEGRATION_TESTS") ==
"1" (ensure you import os and pytest), keeping the existing integration marker
intact.

Comment on lines +41 to +48
"""Integration tests for the complete state-manager workflow."""

@pytest.fixture
async def state_manager_client(self):
"""Create an HTTP client for the state-manager."""
async with httpx.AsyncClient(base_url="http://localhost:8000") as client:
yield client


🛠️ Refactor suggestion

⚠️ Potential issue

Pytest fixtures as class methods with self will not work

Fixtures should be module-level functions (no self). As written, pytest won’t inject these fixtures and tests will error at collection or runtime.

Refactor by moving fixtures out of the class and remove self parameters. Example (outside this class):

# Module-level fixtures (no self)
@pytest.fixture
async def state_manager_client():
    async with httpx.AsyncClient(base_url="http://localhost:8000") as client:
        yield client

@pytest.fixture
def test_namespace() -> str:
    return f"test-namespace-{uuid.uuid4().hex[:8]}"

# ... similarly for test_api_key, test_graph_name, test_runtime_name,
#     sample_node_registration, sample_graph_nodes

Then, keep the test methods in the class (or make them module-level functions) consuming these fixtures by name.

🤖 Prompt for AI Agents
In state-manager/tests/integration/peinding_test_full_workflow_integration.py
around lines 41 to 48, pytest fixtures are currently defined as class methods
(accepting self) which prevents pytest from injecting them; move each fixture
(state_manager_client, test_namespace, test_api_key, test_graph_name,
test_runtime_name, sample_node_registration, sample_graph_nodes, etc.) out of
the test class to module-level functions, remove any self parameters, and keep
or convert test functions to consume these fixtures by name (no self) so pytest
can collect and inject them properly.

Comment on lines 193 to 227
async def test_create_states(self, state_manager_client, test_namespace: str,
test_api_key: str, test_graph_name: str):
"""Test creating states for a graph."""

# Prepare the request
request_data = CreateRequestModel(
states=[
RequestStateModel(
identifier="node1",
inputs={
"input1": "test_value",
"input2": 42
}
)
]
)

# Make the request
response = await state_manager_client.post(
f"/v0/namespace/{test_namespace}/graph/{test_graph_name}/states/create",
json=request_data.model_dump(),
headers={"X-API-Key": test_api_key}
)

# Verify the response
assert response.status_code == 200
response_data = response.json()
assert "status" in response_data
assert "states" in response_data
assert len(response_data["states"]) == 1

# Store the state ID for later tests
state_id = response_data["states"][0]["state_id"]
return state_id


🧹 Nitpick (assertive)

🛠️ Refactor suggestion

Missing state_id fixture; test methods depend on each other

  • test_execute_state and test_get_secrets require a state_id fixture that doesn’t exist.
  • test_full_workflow_happy_path calls other test methods (anti-pattern) which will also run independently and fail.

Refactor:

  • Create a proper state_id fixture that creates a state via the API and yields the ID.
  • Convert helper flows to non-test helpers (e.g., _create_states, _execute_state) and call them from the single end-to-end test.

I can draft the refactor with a single test_full_workflow_happy_path using helper coroutines and proper fixtures if you want.

🤖 Prompt for AI Agents
In state-manager/tests/integration/peinding_test_full_workflow_integration.py
around lines 193 to 227, the test currently returns a state_id and other tests
call test methods directly (anti-pattern) causing dependent failures; refactor
by adding a proper async pytest fixture named state_id that posts to the
/states/create endpoint, asserts 200, extracts and yields the created state_id,
change test_create_states into a non-test helper coroutine named _create_states
(or similar) that performs the POST and returns the id, update
test_full_workflow_happy_path to call these helper coroutines rather than other
test functions, and update test_execute_state and test_get_secrets to accept the
new state_id fixture instead of relying on calling other tests.

Comment on lines +255 to +281
async def test_execute_state(self, state_manager_client, test_namespace: str,
                             test_api_key: str, state_id: str):
    """Test executing a state."""

    # Prepare the request
    request_data = ExecutedRequestModel(
        outputs=[
            {
                "output1": "executed_value",
                "output2": 100
            }
        ]
    )

    # Make the request
    response = await state_manager_client.post(
        f"/v0/namespace/{test_namespace}/states/{state_id}/executed",
        json=request_data.model_dump(),
        headers={"X-API-Key": test_api_key}
    )

    # Verify the response
    assert response.status_code == 200
    response_data = response.json()
    assert "status" in response_data
    assert response_data["status"] == StateStatusEnum.EXECUTED


⚠️ Potential issue

Enum vs JSON string: compare to .value

The API returns a JSON string; comparing it to the Enum object will fail.

-        assert "status" in response_data
-        assert response_data["status"] == StateStatusEnum.EXECUTED
+        assert "status" in response_data
+        assert response_data["status"] == StateStatusEnum.EXECUTED.value
🤖 Prompt for AI Agents
In state-manager/tests/integration/peinding_test_full_workflow_integration.py
around lines 255 to 281, the test compares the JSON response string to the Enum
object which will fail; change the assertion to compare response_data["status"]
to StateStatusEnum.EXECUTED.value (or otherwise convert the response string to
the Enum) so the test compares like-for-like JSON string values.

Comment on lines +77 to +105
@patch('app.controller.upsert_graph_template.GraphTemplate')
@patch('app.controller.upsert_graph_template.verify_graph')
async def test_upsert_graph_template_update_existing(
    self,
    mock_verify_graph,
    mock_graph_template_class,
    mock_namespace,
    mock_graph_name,
    mock_upsert_request,
    mock_existing_template,
    mock_background_tasks,
    mock_request_id
):
    """Test successful update of existing graph template"""
    # Arrange

    mock_existing_template.update = AsyncMock()
    mock_existing_template.set_secrets = MagicMock(return_value=mock_existing_template)
    mock_graph_template_class.find_one = AsyncMock(return_value=mock_existing_template)

    # Act
    result = await upsert_graph_template(
        mock_namespace,
        mock_graph_name,
        mock_upsert_request,
        mock_request_id,
        mock_background_tasks
    )


🧹 Nitpick (assertive)

Optional: assert that find_one was awaited and background task scheduled without sleeps

These additional assertions improve coverage and remove the need for sleeps.

mock_graph_template_class.find_one.assert_awaited_once()
mock_background_tasks.add_task.assert_called_once_with(mock_verify_graph, mock_existing_template)
🤖 Prompt for AI Agents
In state-manager/tests/unit/controller/test_upsert_graph_template.py around
lines 77 to 105, add assertions to ensure GraphTemplate.find_one was awaited and
that the background task was scheduled: assert that
mock_graph_template_class.find_one.assert_awaited_once() is called and that
mock_background_tasks.add_task.assert_called_once_with(mock_verify_graph,
mock_existing_template); place these after the call to upsert_graph_template so
the test verifies the awaited DB lookup and the scheduled background
verification task (this also removes the need for sleeps).

Comment on lines +123 to +171
@patch('app.controller.upsert_graph_template.GraphTemplate')
@patch('app.controller.upsert_graph_template.verify_graph')
async def test_upsert_graph_template_create_new(
    self,
    mock_verify_graph,
    mock_graph_template_class,
    mock_namespace,
    mock_graph_name,
    mock_upsert_request,
    mock_background_tasks,
    mock_request_id
):
    """Test successful creation of new graph template"""
    # Arrange
    mock_graph_template_class.find_one = AsyncMock(return_value=None)  # Template doesn't exist

    mock_new_template = MagicMock()
    mock_new_template.nodes = mock_upsert_request.nodes
    mock_new_template.validation_status = GraphTemplateValidationStatus.PENDING
    mock_new_template.validation_errors = []
    mock_new_template.secrets = mock_upsert_request.secrets
    mock_new_template.created_at = datetime(2023, 1, 1, 12, 0, 0)
    mock_new_template.updated_at = datetime(2023, 1, 1, 12, 0, 0)
    mock_new_template.get_secrets.return_value = mock_upsert_request.secrets
    mock_new_template.set_secrets.return_value = mock_new_template

    mock_graph_template_class.insert = AsyncMock(return_value=mock_new_template)

    # Act
    result = await upsert_graph_template(
        mock_namespace,
        mock_graph_name,
        mock_upsert_request,
        mock_request_id,
        mock_background_tasks
    )

    # Assert
    assert result.nodes == mock_upsert_request.nodes
    assert result.validation_status == GraphTemplateValidationStatus.PENDING
    assert result.validation_errors == []
    assert result.secrets == {"api_key": True, "database_url": True}

    # Verify new template was created
    mock_graph_template_class.insert.assert_called_once()

    # Verify background task was added
    mock_background_tasks.add_task.assert_called_once_with(mock_verify_graph, mock_new_template)


🧹 Nitpick (assertive)

Optional: also assert background task scheduling without sleeps

You’re already asserting add_task in other tests; mirroring that here adds consistency.

mock_background_tasks.add_task.assert_called_once_with(mock_verify_graph, mock_new_template)
🤖 Prompt for AI Agents
In state-manager/tests/unit/controller/test_upsert_graph_template.py around
lines 123 to 171, add an assertion to verify the background task was scheduled
when creating a new template by asserting mock_background_tasks.add_task was
called once with the verify_graph function and the created template; add the
line: mock_background_tasks.add_task.assert_called_once_with(mock_verify_graph,
mock_new_template) after the existing insert assertion.

Comment on lines +160 to +171
# Assert
assert result.nodes == mock_upsert_request.nodes
assert result.validation_status == GraphTemplateValidationStatus.PENDING
assert result.validation_errors == []
assert result.secrets == {"api_key": True, "database_url": True}

# Verify new template was created
mock_graph_template_class.insert.assert_called_once()

# Verify background task was added
mock_background_tasks.add_task.assert_called_once_with(mock_verify_graph, mock_new_template)


🛠️ Refactor suggestion

Assert async insert was awaited, not just called

insert is an AsyncMock; use assert_awaited_once().

-        mock_graph_template_class.insert.assert_called_once()
+        mock_graph_template_class.insert.assert_awaited_once()
🤖 Prompt for AI Agents
In state-manager/tests/unit/controller/test_upsert_graph_template.py around
lines 160 to 171, the test currently uses
mock_graph_template_class.insert.assert_called_once() but insert is an AsyncMock
and should be awaited-asserted; replace that assertion with
mock_graph_template_class.insert.assert_awaited_once() (or
assert_awaited_once_with(...) if checking args) so the test verifies the
coroutine was awaited rather than only called.

Comment on lines +166 to +171
# Verify new template was created
mock_graph_template_class.insert.assert_called_once()

# Verify background task was added
mock_background_tasks.add_task.assert_called_once_with(mock_verify_graph, mock_new_template)


🧹 Nitpick (assertive)

Optional: also assert that find_one was awaited

This strengthens the behavior verification on the create path.

mock_graph_template_class.find_one.assert_awaited_once()
🤖 Prompt for AI Agents
In state-manager/tests/unit/controller/test_upsert_graph_template.py around
lines 166 to 171, the test verifies insert and background task scheduling but
doesn't assert that find_one was awaited; add an assertion that the mocked
find_one coroutine was awaited exactly once (e.g., call
mock_graph_template_class.find_one.assert_awaited_once()) immediately after the
existing assertions to strengthen the create-path behavior verification.

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🔭 Outside diff range comments (5)
state-manager/app/tasks/create_next_state.py (3)

95-133: Fix templating: avoid TypeError for non-string outputs and remove unreachable branch

  • The len(splits) == 0 branch is unreachable; split(...) always returns at least one element.
  • Concatenating dependent_state.outputs[input_field] with strings will raise TypeError when the output field is non-string.

Refactor to short-circuit when there’s no placeholder and cast substituted values to str.

-                splits = temporary_input.split("${{")
-                    
-                if len(splits) == 0:
-                    next_node_input_data[field_name] = temporary_input
-                    continue
-
-                constructed_string = ""
-                for split in splits:
-                    if "}}" in split:
-                        placeholder_content = split.split("}}")[0]
-                        parts = [p.strip() for p in placeholder_content.split('.')]
-                            
-                        if len(parts) != 3 or parts[1] != 'outputs':
-                            raise Exception(f"Invalid input placeholder format: '{placeholder_content}' for field {field_name}")
-                            
-                        input_identifier = parts[0]
-                        input_field = parts[2]                        
-
-                        parent_id = parents.get(input_identifier)
-                            
-                        if not parent_id:
-                            raise Exception(f"Parent identifier '{input_identifier}' not found in state parents.")
-
-                        if parent_id not in cache_states:
-                            dependent_state = await State.get(parent_id)
-                            if not dependent_state:
-                                raise Exception(f"Dependent state {input_identifier} not found")
-                            cache_states[parent_id] = dependent_state
-                        else:
-                            dependent_state = cache_states[parent_id]
-                            
-                        if input_field not in dependent_state.outputs:
-                            raise Exception(f"Input field {input_field} not found in dependent state {input_identifier}")
-                            
-                        constructed_string += dependent_state.outputs[input_field] + split.split("}}")[1]
-
-                    else:
-                        constructed_string += split
-                    
-                next_node_input_data[field_name] = constructed_string
+                # Fast path: no templating present
+                if "${{" not in temporary_input:
+                    next_node_input_data[field_name] = temporary_input
+                    continue
+
+                constructed_parts: list[str] = []
+                for segment in temporary_input.split("${{"):
+                    if "}}" in segment:
+                        placeholder_content, suffix = segment.split("}}", 1)
+                        parts = [p.strip() for p in placeholder_content.split(".")]
+                        if len(parts) != 3 or parts[1] != "outputs":
+                            raise Exception(f"Invalid input placeholder format: '{placeholder_content}' for field {field_name}")
+
+                        input_identifier = parts[0]
+                        input_field = parts[2]
+
+                        parent_id = parents.get(input_identifier)
+                        if not parent_id:
+                            raise Exception(f"Parent identifier '{input_identifier}' not found in state parents.")
+
+                        if parent_id not in cache_states:
+                            dependent_state = await State.get(parent_id)
+                            if not dependent_state:
+                                raise Exception(f"Dependent state {input_identifier} not found")
+                            cache_states[parent_id] = dependent_state
+                        else:
+                            dependent_state = cache_states[parent_id]
+
+                        if input_field not in dependent_state.outputs:
+                            raise Exception(f"Input field {input_field} not found in dependent state {input_identifier}")
+
+                        value = dependent_state.outputs[input_field]
+                        constructed_parts.append(str(value) + suffix)
+                    else:
+                        constructed_parts.append(segment)
+
+                next_node_input_data[field_name] = "".join(constructed_parts)

57-79: Guard against duplicate next-state creation under concurrent executions

Two parents satisfying a unite at roughly the same time can race and each create the same next state. Add an idempotency check before insert (and consider a unique index keyed by run_id + identifier + parents entry) to prevent duplicates.

Place this guard right before instantiating new_state:

+            # Idempotency guard: avoid creating duplicate next states under concurrent executions
+            existing = await State.find_one(
+                State.namespace_name == next_node_template.namespace,
+                State.graph_name == state.graph_name,
+                State.identifier == next_node_template.identifier,
+                State.run_id == state.run_id,
+                {f"parents.{state.identifier}": state.id},
+            )
+            if existing:
+                continue

Longer-term, consider a unique index on (namespace_name, graph_name, run_id, identifier, parents.<current_identifier>) if your MongoDB version and data shape permit it; otherwise, persist an explicit deterministic idempotency key on the document and make it unique.
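A minimal sketch of the explicit idempotency-key option (the helper name and the exact field set are assumptions for illustration; the real code would persist this value on the State document under a unique index):

```python
import hashlib

def idempotency_key(namespace: str, graph: str, run_id: str,
                    identifier: str, parent_state_id: str) -> str:
    # Hash the fields that uniquely describe the next state; with a unique
    # index on this value, concurrent inserts of the same logical state
    # collide at the database instead of creating duplicates.
    raw = "|".join((namespace, graph, run_id, identifier, parent_state_id))
    return hashlib.sha256(raw.encode("utf-8")).hexdigest()
```

Because the key is derived deterministically from the document's own fields, both racing writers compute the same value, so the second insert fails fast and can be treated as "already created".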


152-156: Log the exception with stack trace when marking state ERRORED

Currently the error is only stored on the document; add a structured log with stack trace to aid debugging.

-    except Exception as e:
-        state.status = StateStatusEnum.ERRORED
-        state.error = str(e)
-        await state.save()
-        return
+    except Exception as e:
+        logger.error("Error creating next state(s)", state_id=str(state.id or ""), exc_info=True)
+        state.status = StateStatusEnum.ERRORED
+        state.error = str(e)
+        await state.save()
+        return
state-manager/app/controller/executed_state.py (2)

62-64: Log with stack trace for easier debugging

Swap error=e for exc_info=True (or use logger.exception) to capture the traceback.

-    except Exception as e:
-        logger.error(f"Error executing state {state_id} for namespace {namespace_name}", x_exosphere_request_id=x_exosphere_request_id, error=e)
-        raise e
+    except Exception as e:
+        logger.error(
+            f"Error executing state {state_id} for namespace {namespace_name}",
+            x_exosphere_request_id=x_exosphere_request_id,
+            exc_info=True,
+        )
+        raise e

24-41: Minor duplication: consolidate common mutations across branches

Both branches set outputs/status/parents and save, then schedule create_next_state. Consider extracting a small helper or reordering to reduce duplication and the chance of one branch drifting.

♻️ Duplicate comments (1)
state-manager/app/models/db/state.py (1)

18-18: Good: parents typed as PydanticObjectId

This addresses the earlier ask to avoid arbitrary types and ensures proper validation/serialization.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between bb9e3f1 and 104f58a.

⛔ Files ignored due to path filters (1)
  • state-manager/uv.lock is excluded by !**/*.lock
📒 Files selected for processing (3)
  • state-manager/app/controller/executed_state.py (2 hunks)
  • state-manager/app/models/db/state.py (1 hunks)
  • state-manager/app/tasks/create_next_state.py (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (3)
state-manager/app/models/db/state.py (2)
state-manager/app/models/enqueue_response.py (1)
  • StateModel (6-11)
state-manager/app/models/create_models.py (1)
  • RequestStateModel (7-9)
state-manager/app/controller/executed_state.py (3)
state-manager/app/models/state_status_enum.py (1)
  • StateStatusEnum (4-14)
state-manager/app/tasks/create_next_state.py (1)
  • create_next_state (16-156)
state-manager/app/routes.py (1)
  • executed_state_route (84-94)
state-manager/app/tasks/create_next_state.py (1)
state-manager/app/models/enqueue_response.py (1)
  • StateModel (6-11)
🔇 Additional comments (2)
state-manager/app/tasks/create_next_state.py (1)

139-145: LGTM: run_id propagation ensures correct grouping of next states

Propagating run_id here is necessary for end-to-end grouping and aligns with the State model change.

state-manager/app/controller/executed_state.py (1)

25-29: Unify parents mapping to avoid mixed types
We found two places updating parents with different forms of state.id:

  • Line 27 in executed_state.py:
    state.parents = {**state.parents, state.identifier: ObjectId(state.id)}
  • Lines 47–55 in the same file:
    parents={ **state.parents, state.identifier: state.id }

Please standardize on one representation (e.g. always wrap state.id with ObjectId) so that all entries in parents share the same type. [optional_refactors_recommended]
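One hedged sketch of how the unification could look: funnel every value, old entries and the new one alike, through a single wrapper (in the real code `wrap` would be `ObjectId`; `str` stands in here so the sketch runs without bson):

```python
def merged_parents(parents: dict, identifier: str, state_id, wrap=str) -> dict:
    # Apply the same wrapper to every value so the mapping never mixes
    # types; both call sites in executed_state.py would go through this.
    merged = {**parents, identifier: state_id}
    return {key: wrap(value) for key, value in merged.items()}
```

Centralizing the merge in one helper also means a future type change (say, to PydanticObjectId) happens in exactly one place.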

namespace_name: str = Field(..., description="Name of the namespace of the state")
identifier: str = Field(..., description="Identifier of the node for which state is created")
graph_name: str = Field(..., description="Name of the graph template for this state")
run_id: str = Field(..., description="Unique run ID for grouping states from the same graph execution")

🧹 Nitpick (assertive)

🛠️ Refactor suggestion

Add index and minimal validation for run_id; consider compound (namespace_name, run_id) index

run_id is now a primary grouping key. Index it and enforce non-empty values to avoid slow scans and accidental empties.

Apply this diff to index run_id and add a basic constraint:

-from beanie import PydanticObjectId
+from beanie import PydanticObjectId, Indexed
@@
-    run_id: str = Field(..., description="Unique run ID for grouping states from the same graph execution")
+    run_id: Indexed(str) = Field(..., min_length=1, description="Unique run ID for grouping states from the same graph execution")

Additionally, add Beanie indexes for common lookups (compound index helps listing by namespace/run):

# Place inside the State model
class Settings:
    indexes = [
        [("namespace_name", 1), ("run_id", 1)],
        # Optional: speed up common graph traversals
        [("namespace_name", 1), ("graph_name", 1), ("identifier", 1), ("status", 1)],
    ]
🤖 Prompt for AI Agents
state-manager/app/models/db/state.py around line 13: run_id is declared without
an index or validation which can lead to slow scans and empty values; update the
Field to enforce non-empty values (e.g., min_length=1 or equivalent constraint)
and mark run_id as indexed, then add a Settings inner class on the State model
that defines Beanie indexes including a compound index [("namespace_name", 1),
("run_id", 1)] and the optional traversal index [("namespace_name", 1),
("graph_name", 1), ("identifier", 1), ("status", 1)] so common lookups are fast.

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 10

🔭 Outside diff range comments (4)
state-manager/app/controller/executed_state.py (1)

63-65: Nit: fix log message phrasing for clarity

The log reads awkwardly (“Error errored state ...”). Consider clarifying.

-        logger.error(f"Error executing state {state_id} for namespace {namespace_name}", x_exosphere_request_id=x_exosphere_request_id, error=e)
+        logger.error(
+            f"Error marking executed state {state_id} for namespace {namespace_name}",
+            x_exosphere_request_id=x_exosphere_request_id,
+            error=e,
+        )
state-manager/app/controller/errored_state.py (2)

20-25: Redundant precondition: simplify status checks

You can compress the status validation into a single guard and a single explicit “already executed” branch for readability.

-        if state.status != StateStatusEnum.QUEUED and state.status != StateStatusEnum.EXECUTED:
-            raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail="State is not queued or executed")
-        
-        if state.status == StateStatusEnum.EXECUTED:
+        if state.status == StateStatusEnum.EXECUTED:
             raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail="State is already executed")
+        if state.status != StateStatusEnum.QUEUED:
+            raise HTTPException(status_code=status.HTTP_400_BAD_REQUEST, detail="State is not queued or executed")

32-34: Nit: improve error log phrasing

The message “Error errored state ...” is awkward; rephrase for clarity.

-        logger.error(f"Error errored state {state_id} for namespace {namespace_name}", x_exosphere_request_id=x_exosphere_request_id, error=e)
+        logger.error(
+            f"Error marking state {state_id} as ERRORED for namespace {namespace_name}",
+            x_exosphere_request_id=x_exosphere_request_id,
+            error=e,
+        )
state-manager/app/controller/create_states.py (1)

91-93: Preserve traceback on errors in create_states

Same issue as above: log with exception and re-raise without shadowing.

-    except Exception as e:
-        logger.error(f"Error creating states for namespace {namespace_name}", x_exosphere_request_id=x_exosphere_request_id)
-        raise e
+    except Exception:
+        logger.exception(f"Error creating states for namespace {namespace_name}", x_exosphere_request_id=x_exosphere_request_id)
+        raise
♻️ Duplicate comments (30)
state-manager/tests/unit/controller/test_enqueue_states.py (4)

80-84: Assert awaited update and validate payload set to QUEUED

Strengthen the verification by asserting the AsyncMock was awaited and that the update payload contains the expected status.

         assert mock_state_class.find.call_count == 2  # Called twice: once for finding, once for updating
         mock_query.limit.assert_called_once_with(10)
-        mock_update_query.set.assert_called_once()
+        mock_update_query.set.assert_awaited_once()
+        args, kwargs = mock_update_query.set.call_args
+        assert args[0]["status"] == StateStatusEnum.QUEUED

109-114: No candidates path: also assert only one find call

Ensure we don't attempt an update pass when nothing was found.

         assert result.status == StateStatusEnum.QUEUED
         assert len(result.states) == 0
+        assert mock_state_class.find.call_count == 1

214-216: Batch-size path: also assert only one find call

Catch unnecessary update queries in “no results” scenarios.

         assert result.count == 0
         mock_query.limit.assert_called_once_with(5)
+        assert mock_state_class.find.call_count == 1

158-163: Multiple states: assert one update cycle and awaited set

Mirror the success-path assertions to catch regressions and validate the update payload.

         assert result.states[0].node_name == "node1"
         assert result.states[1].node_name == "node2"
+        assert mock_state_class.find.call_count == 2  # one for selection, one for update
+        mock_update_query.set.assert_awaited_once()
+        args, kwargs = mock_update_query.set.call_args
+        assert args[0]["status"] == StateStatusEnum.QUEUED
state-manager/tests/unit/controller/test_executed_state.py (5)

66-74: Remove dead mocks and duplicate assignments (ruff cleanup)

mock_update_query is unused and save is assigned twice. Simplify the setup.

-        # Mock State.find_one() for finding the state
-        # Mock State.find_one().set() for updating the state
-        mock_update_query = MagicMock()
-        mock_update_query.set = AsyncMock()
-
-        mock_state.save = AsyncMock()
-
-        mock_state.status = StateStatusEnum.QUEUED 
-        mock_state.save = AsyncMock()
-        mock_state_class.find_one = AsyncMock(return_value=mock_state)   
+        mock_state.status = StateStatusEnum.QUEUED
+        mock_state.save = AsyncMock()
+        mock_state_class.find_one = AsyncMock(return_value=mock_state)

84-88: Assert persistence and scheduling on single-output path

Verify the mutated state was saved and the background task scheduled.

         assert result.status == StateStatusEnum.EXECUTED
         assert mock_state_class.find_one.call_count == 1  # Called once for finding
-        mock_background_tasks.add_task.assert_called_once_with(mock_create_next_state, mock_state)
+        mock_state.save.assert_awaited_once()
+        mock_background_tasks.add_task.assert_called_once_with(mock_create_next_state, mock_state)

217-226: Empty outputs path: remove unused mocks and assert save

Clean up the unused update mock and verify persistence.

-        # Mock State.find_one() for finding the state
-        # Mock State.find_one().set() for updating the state
-        mock_update_query = MagicMock()
-        mock_update_query.set = AsyncMock()
-        
-        # Configure State.find_one to return different values based on call
-        # First call returns the state object, second call returns a query object with set method
-        mock_state_class.find_one = AsyncMock(return_value=mock_state)
-        mock_state.save = AsyncMock()
+        mock_state_class.find_one = AsyncMock(return_value=mock_state)
+        mock_state.save = AsyncMock()

111-126: Multiple outputs: remove unused mocks and use distinct new_state instances

Avoid unused update-query mocks and create separate instances per new state so you can assert both saves.

-        # Mock State.find_one() for finding the state
-        # Mock State.find_one().set() for updating the state
-        mock_update_query = MagicMock()
-        mock_update_query.set = AsyncMock()
-        
-        # Configure State.find_one to return different values based on call
-        # First call returns the state object, second call returns a query object with set method
-        # Additional calls in the loop also return query objects with set method
-        mock_state_class.find_one = AsyncMock(return_value=mock_state)
-        mock_state.save = AsyncMock()
-        
-        # Mock State.save() for new states
-        mock_new_state = MagicMock()
-        mock_new_state.save = AsyncMock()
-        mock_state_class.return_value = mock_new_state
+        mock_state_class.find_one = AsyncMock(return_value=mock_state)
+        mock_state.save = AsyncMock()
+        # Mock State(...) instances for the 2 extra outputs
+        mock_new_state1 = MagicMock()
+        mock_new_state1.save = AsyncMock()
+        mock_new_state2 = MagicMock()
+        mock_new_state2.save = AsyncMock()
+        mock_state_class.side_effect = [mock_new_state1, mock_new_state2]

137-144: Add assertions for new_state saves and fix misleading comment

Ensure both new states are saved and correct the find_one call-count comment.

         assert result.status == StateStatusEnum.EXECUTED
         # Should create 2 additional states (3 outputs total, 1 for main state, 2 new states)
         assert mock_state_class.call_count == 2
         # Should add 3 background tasks (1 for main state + 2 for new states)
         assert mock_background_tasks.add_task.call_count == 3
-        # State.find_one should be called multiple times: once for finding, once for updating main state, and twice in the loop
-        assert mock_state_class.find_one.call_count == 1
+        # State.find_one is called only once (initial load; mutate in-memory + save)
+        assert mock_state_class.find_one.call_count == 1
+        mock_new_state1.save.assert_awaited_once()
+        mock_new_state2.save.assert_awaited_once()
state-manager/tests/unit/controller/test_create_states.py (3)

10-13: Fix Ruff F401: remove unused imports (CI blocker)

Ruff is failing on unused imports for State, GraphTemplate, and GraphTemplateValidationStatus. Remove them to unblock CI.

-from app.models.db.state import State
-from app.models.db.graph_template_model import GraphTemplate
 from app.models.node_template_model import NodeTemplate
-from app.models.graph_template_validation_status import GraphTemplateValidationStatus

146-152: Assert run_id propagation in response

Given run_id is part of the response model, assert it to catch regressions.

         assert result.states[0].node_name == "test_node"
         assert result.states[0].inputs == {"key": "value"}
+        assert result.states[0].run_id == "test_run_id"

195-198: Avoid shadowing fixture name: rename local mock

Rename the local variable to reduce confusion when scanning tests.

-        mock_graph_template = MagicMock()
-        mock_graph_template.get_node_by_identifier.return_value = None
-        mock_graph_template_class.find_one = AsyncMock(return_value=mock_graph_template)
+        graph_template_instance = MagicMock()
+        graph_template_instance.get_node_by_identifier.return_value = None
+        mock_graph_template_class.find_one = AsyncMock(return_value=graph_template_instance)
state-manager/tests/unit/controller/test_errored_state.py (6)

225-225: Use AsyncMock for awaited find_one() in DB error case

find_one is awaited by the controller; MagicMock is not awaitable and will raise. Use AsyncMock with side_effect.

-        mock_state_class.find_one = MagicMock(side_effect=Exception("Database error"))
+        mock_state_class.find_one = AsyncMock(side_effect=Exception("Database error"))
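The difference is observable with stdlib mocks alone. A minimal sketch, assuming nothing about the project beyond "the controller awaits find_one" (load is a hypothetical stand-in for that controller):

```python
import asyncio
from unittest.mock import AsyncMock, MagicMock

async def load(find_one):
    # stands in for a controller doing: state = await State.find_one(...)
    return await find_one()

# A plain MagicMock call returns a non-awaitable value, so the await itself fails.
try:
    asyncio.run(load(MagicMock(return_value={"id": 1})))
    magic_outcome = "ok"
except TypeError:
    magic_outcome = "TypeError"

# AsyncMock returns a coroutine; side_effect is raised when that coroutine is
# awaited, which is the path the controller's error handling actually exercises.
db_mock = AsyncMock(side_effect=Exception("Database error"))
try:
    asyncio.run(load(db_mock))
    async_outcome = "ok"
except Exception as exc:
    async_outcome = str(exc)

print(magic_outcome, async_outcome)  # TypeError Database error
```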

72-75: Strengthen assertions: verify mutation persisted

Assert that save was awaited and fields were updated as expected.

         # Assert
         assert result.status == StateStatusEnum.ERRORED
         assert mock_state_class.find_one.call_count == 1  # Called once for finding
+        mock_state_queued.save.assert_awaited_once()
+        assert mock_state_queued.status == StateStatusEnum.ERRORED
+        assert mock_state_queued.error == mock_errored_request.error

77-88: Rename misleading test: it exercises QUEUED path, not EXECUTED

The name/docs imply the EXECUTED path, but the test sets status to QUEUED. Rename to reflect intent.

-    async def test_errored_state_success_executed(
+    async def test_errored_state_success_from_queued_alt(
@@
-        """Test successful error marking of executed state"""
+        """Test successful error marking when state is QUEUED (alternate path)"""

103-106: Strengthen assertions: verify save + field updates

Ensure the in-memory mutate + save path actually persisted and fields were set.

         # Assert
         assert result.status == StateStatusEnum.ERRORED
         assert mock_state_class.find_one.call_count == 1  # Called once for finding
+        mock_state_executed.save.assert_awaited_once()
+        assert mock_state_executed.status == StateStatusEnum.ERRORED
+        assert mock_state_executed.error == mock_errored_request.error

58-63: Remove duplicate save mock; keep a single AsyncMock()

save is assigned twice; keep one to avoid confusion.

-        mock_state_queued.save = AsyncMock()     
-        
         mock_state_queued.status = StateStatusEnum.QUEUED
-        mock_state_queued.save = AsyncMock()
+        mock_state_queued.save = AsyncMock()
         mock_state_class.find_one = AsyncMock(return_value=mock_state_queued)

89-95: Remove duplicate save mock; keep a single AsyncMock()

Duplicate assignment to save adds noise.

-        mock_state_executed.save = AsyncMock() 
-
         mock_state_executed.status = StateStatusEnum.QUEUED
-        mock_state_executed.save = AsyncMock()
+        mock_state_executed.save = AsyncMock()
         mock_state_class.find_one = AsyncMock(return_value=mock_state_executed)
state-manager/app/models/state_list_models.py (1)

5-5: Prefer builtin generics (list) and drop unused typing.List import

Use list[...] for consistency across the codebase.

-from typing import List, Optional, Any
+from typing import Optional, Any
@@
-    states: List[StateListItem] = Field(..., description="List of states")
+    states: list[StateListItem] = Field(..., description="List of states")
@@
-    states: List[StateListItem] = Field(..., description="List of states")
-    run_ids: List[str] = Field(..., description="List of unique run IDs")
+    states: list[StateListItem] = Field(..., description="List of states")
+    run_ids: list[str] = Field(..., description="List of unique run IDs")

Also applies to: 34-35, 41-42

state-manager/app/controller/create_states.py (2)

46-49: Preserve traceback: use logger.exception and bare raise

logger.exception captures the stack trace, and a bare raise preserves the original traceback. The current code loses it with raise e.

-    except Exception as e:
-        logger.error(f"Error triggering graph {graph_name} for namespace {namespace_name}", x_exosphere_request_id=x_exosphere_request_id)
-        raise e
+    except Exception:
+        logger.exception(f"Error triggering graph {graph_name} for namespace {namespace_name}", x_exosphere_request_id=x_exosphere_request_id)
+        raise
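The traceback-preserving behavior can be demonstrated with stdlib logging alone (a sketch; the project's structured logger, which also takes x_exosphere_request_id, exposes the same exception/error split):

```python
import logging
import traceback

logging.basicConfig(level=logging.ERROR)
logger = logging.getLogger("create_states")  # stdlib stand-in for the project logger

def query_db():
    raise RuntimeError("connection refused")

def trigger_graph():
    try:
        query_db()
    except Exception:
        # logger.exception = logger.error + the active traceback attached
        logger.exception("Error triggering graph")
        raise  # bare raise re-raises with the original traceback intact

try:
    trigger_graph()
except RuntimeError as exc:
    frames = [f.name for f in traceback.extract_tb(exc.__traceback__)]

print(frames)  # the originating query_db frame survives the re-raise
```

With `raise e` instead of the bare `raise`, the exception still propagates, but `logger.exception` vs `logger.error` is the part that decides whether the stack trace lands in the logs.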

82-89: Use snake_case for variables and be consistent in response builder

Rename newStates to new_states and reference it consistently.

-        newStates = await State.find(
+        new_states = await State.find(
             In(State.id, [PydanticObjectId(id) for id in inserted_states.inserted_ids])
         ).to_list()
         
         return CreateResponseModel(
             status=StateStatusEnum.CREATED,
-            states=[ResponseStateModel(state_id=str(state.id), identifier=state.identifier, node_name=state.node_name, graph_name=state.graph_name, run_id=state.run_id, inputs=state.inputs, created_at=state.created_at) for state in newStates]
+            states=[ResponseStateModel(
+                state_id=str(state.id),
+                identifier=state.identifier,
+                node_name=state.node_name,
+                graph_name=state.graph_name,
+                run_id=state.run_id,
+                inputs=state.inputs,
+                created_at=state.created_at
+            ) for state in new_states]
         )
state-manager/app/routes.py (7)

72-82: Remove redundant API key if/else; dependency already raises on invalid keys

check_api_key raises HTTPException on invalid keys. Drop the branch and avoid unused-variable warnings by discarding the param.

-async def trigger_graph_route(namespace_name: str, graph_name: str, body: TriggerGraphRequestModel, request: Request, api_key: str = Depends(check_api_key)):
+async def trigger_graph_route(namespace_name: str, graph_name: str, body: TriggerGraphRequestModel, request: Request, _: str = Depends(check_api_key)):
@@
-    if api_key:
-        logger.info(f"API key is valid for namespace {namespace_name}", x_exosphere_request_id=x_exosphere_request_id)
-    else:
-        logger.error(f"API key is invalid for namespace {namespace_name}", x_exosphere_request_id=x_exosphere_request_id)
-        raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED, detail="Invalid API key")
+    logger.info(f"API key validated for namespace {namespace_name}", x_exosphere_request_id=x_exosphere_request_id)

229-237: Simplify API key handling in list_registered_nodes_route

Redundant if/else; the dependency enforces 401. Also discard the dependency value.

-async def list_registered_nodes_route(namespace_name: str, request: Request, api_key: str = Depends(check_api_key)):
+async def list_registered_nodes_route(namespace_name: str, request: Request, _: str = Depends(check_api_key)):
@@
-    if api_key:
-        logger.info(f"API key is valid for namespace {namespace_name}", x_exosphere_request_id=x_exosphere_request_id)
-    else:
-        logger.error(f"API key is invalid for namespace {namespace_name}", x_exosphere_request_id=x_exosphere_request_id)
-        raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED, detail="Invalid API key")
+    logger.info(f"API key validated for namespace {namespace_name}", x_exosphere_request_id=x_exosphere_request_id)

253-260: Simplify API key handling in list_graph_templates_route

Same cleanup as other routes; rely on dependency to raise and unify logging.

-async def list_graph_templates_route(namespace_name: str, request: Request, api_key: str = Depends(check_api_key)):
+async def list_graph_templates_route(namespace_name: str, request: Request, _: str = Depends(check_api_key)):
@@
-    if api_key:
-        logger.info(f"API key is valid for namespace {namespace_name}", x_exosphere_request_id=x_exosphere_request_id)
-    else:
-        logger.error(f"API key is invalid for namespace {namespace_name}", x_exosphere_request_id=x_exosphere_request_id)
-        raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED, detail="Invalid API key")
+    logger.info(f"API key validated for namespace {namespace_name}", x_exosphere_request_id=x_exosphere_request_id)

277-285: Simplify API key handling in get_current_states_route

Drop redundant branch and discard the dependency return value to avoid linter warnings.

-async def get_current_states_route(namespace_name: str, request: Request, api_key: str = Depends(check_api_key)):
+async def get_current_states_route(namespace_name: str, request: Request, _: str = Depends(check_api_key)):
@@
-    if api_key:
-        logger.info(f"API key is valid for namespace {namespace_name}", x_exosphere_request_id=x_exosphere_request_id)
-    else:
-        logger.error(f"API key is invalid for namespace {namespace_name}", x_exosphere_request_id=x_exosphere_request_id)
-        raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED, detail="Invalid API key")
+    logger.info(f"API key validated for namespace {namespace_name}", x_exosphere_request_id=x_exosphere_request_id)

289-317: Harden serialization: guard parents None, avoid None in run_ids, make output deterministic

Parents may be None, and run_id can be None; CurrentStatesResponse.run_ids expects list[str]. Also sort run_ids for stable responses.

-    # Convert states to response format
-    state_items = []
-    run_ids = set()
+    # Convert states to response format
+    state_items: list[dict] = []
+    run_ids: set[str] = set()
@@
-    for state in states:
-        # Convert ObjectId parents to strings
-        parents_dict = {k: str(v) for k, v in state.parents.items()}
+    for state in states:
+        # Convert ObjectId parents to strings; guard None/empty
+        parents = getattr(state, "parents", {}) or {}
+        parents_dict = {k: str(v) for k, v in parents.items()}
@@
-        state_items.append({
+        state_items.append({
             "id": str(state.id),
             "node_name": state.node_name,
             "namespace_name": state.namespace_name,
             "identifier": state.identifier,
             "graph_name": state.graph_name,
             "run_id": state.run_id,
             "status": state.status,
             "inputs": state.inputs,
             "outputs": state.outputs,
             "error": state.error,
             "parents": parents_dict,
             "created_at": state.created_at,
             "updated_at": state.updated_at
         })
-        run_ids.add(state.run_id)
+        if state.run_id:
+            run_ids.add(str(state.run_id))
@@
-    return CurrentStatesResponse(
+    return CurrentStatesResponse(
         namespace=namespace_name,
         count=len(states),
         states=state_items,
-        run_ids=list(run_ids)
+        run_ids=sorted(run_ids)
     )

331-336: Simplify API key handling in get_states_by_run_id_route

Same as others: rely on dependency to raise; discard the value.

-async def get_states_by_run_id_route(namespace_name: str, run_id: str, request: Request, api_key: str = Depends(check_api_key)):
+async def get_states_by_run_id_route(namespace_name: str, run_id: str, request: Request, _: str = Depends(check_api_key)):
@@
-    if api_key:
-        logger.info(f"API key is valid for namespace {namespace_name}", x_exosphere_request_id=x_exosphere_request_id)
-    else:
-        logger.error(f"API key is invalid for namespace {namespace_name}", x_exosphere_request_id=x_exosphere_request_id)
-        raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED, detail="Invalid API key")
+    logger.info(f"API key validated for namespace {namespace_name}", x_exosphere_request_id=x_exosphere_request_id)

342-360: Guard parents when None in run-id filtered listing

Mirror the parents None guard used in current-states endpoint to prevent attribute errors.

-    for state in states:
-        # Convert ObjectId parents to strings
-        parents_dict = {k: str(v) for k, v in state.parents.items()}
+    for state in states:
+        # Convert ObjectId parents to strings; guard None/empty
+        parents = getattr(state, "parents", {}) or {}
+        parents_dict = {k: str(v) for k, v in parents.items()}
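The guard's effect on the three possible shapes of parents can be checked in isolation (StateStub is a hypothetical stand-in for a State document; only the parents field matters here):

```python
class StateStub:
    """Hypothetical stand-in for a State document."""
    def __init__(self, parents):
        self.parents = parents

converted = []
for state in (StateStub(None), StateStub({}), StateStub({"node_1": 42})):
    # guard None/empty before iterating
    parents = getattr(state, "parents", {}) or {}
    converted.append({k: str(v) for k, v in parents.items()})

print(converted)  # [{}, {}, {'node_1': '42'}]
```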
state-manager/tests/unit/controller/test_get_secrets.py (2)

72-74: Use assert_awaited_* assertions with AsyncMock

These are AsyncMocks; prefer await-aware assertions to ensure the coroutines were actually awaited.

Apply this diff:

-        mock_state_class.get.assert_called_once_with(mock_state_id)
-        mock_graph_template_class.find_one.assert_called_once()
+        mock_state_class.get.assert_awaited_once_with(mock_state_id)
+        mock_graph_template_class.find_one.assert_awaited_once()
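The distinction is easy to see with stdlib mocks: a call that is made but never awaited still satisfies the assert_called_* family, while the awaited variants catch it (a minimal sketch, not tied to the project's fixtures):

```python
import asyncio
from unittest.mock import AsyncMock

m = AsyncMock(return_value="secret")

async def use():
    return await m("state-id")

asyncio.run(use())
m.assert_called_once_with("state-id")   # passes: records the call
m.assert_awaited_once_with("state-id")  # passes: records the await too

# A call that is never awaited still satisfies assert_called_* ...
n = AsyncMock()
coro = n()          # called, not awaited
n.assert_called_once()
try:
    n.assert_awaited_once()  # ...but fails the await-aware check
    verdict = "awaited"
except AssertionError:
    verdict = "not awaited"
coro.close()        # silence the "coroutine was never awaited" warning

print(verdict)  # not awaited
```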

1-1: Add coverage for invalid state_id input

Earlier feedback requested an invalid ObjectId test. Given the current implementation doesn’t explicitly coerce to ObjectId, emulate the failure at State.get by raising from the AsyncMock.

You can add:

import pytest
from unittest.mock import AsyncMock, patch

@patch('app.controller.get_secrets.State')
async def test_get_secrets_invalid_state_id(self, mock_state_class, mock_namespace, mock_request_id):
    invalid_state_id = "not-a-valid-objectid"
    mock_state_class.get = AsyncMock(side_effect=Exception("InvalidId"))
    with pytest.raises(Exception):
        await get_secrets(
            mock_namespace,
            invalid_state_id,
            mock_request_id
        )

If preferred, I can wire this into the suite and parameterize with specific InvalidId exceptions if your environment exposes bson.errors.InvalidId.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 104f58a and 009818f.

📒 Files selected for processing (10)
  • state-manager/app/controller/create_states.py (4 hunks)
  • state-manager/app/controller/errored_state.py (2 hunks)
  • state-manager/app/controller/executed_state.py (4 hunks)
  • state-manager/app/models/state_list_models.py (1 hunks)
  • state-manager/app/routes.py (7 hunks)
  • state-manager/tests/unit/controller/test_create_states.py (1 hunks)
  • state-manager/tests/unit/controller/test_enqueue_states.py (1 hunks)
  • state-manager/tests/unit/controller/test_errored_state.py (1 hunks)
  • state-manager/tests/unit/controller/test_executed_state.py (1 hunks)
  • state-manager/tests/unit/controller/test_get_secrets.py (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (10)
state-manager/tests/unit/controller/test_get_secrets.py (2)
state-manager/app/models/secrets_response.py (1)
  • SecretsResponseModel (5-6)
state-manager/app/controller/get_secrets.py (1)
  • get_secrets (9-55)
state-manager/tests/unit/controller/test_create_states.py (7)
state-manager/app/controller/create_states.py (2)
  • create_states (51-93)
  • get_node_template (17-21)
state-manager/app/models/create_models.py (2)
  • CreateRequestModel (22-24)
  • RequestStateModel (7-9)
state-manager/app/models/state_status_enum.py (1)
  • StateStatusEnum (4-14)
state-manager/app/models/db/state.py (1)
  • State (8-18)
state-manager/app/models/db/graph_template_model.py (2)
  • GraphTemplate (13-81)
  • get_node_by_identifier (30-35)
state-manager/app/models/node_template_model.py (1)
  • NodeTemplate (9-15)
state-manager/app/models/graph_template_validation_status.py (1)
  • GraphTemplateValidationStatus (4-8)
state-manager/tests/unit/controller/test_errored_state.py (3)
state-manager/app/controller/errored_state.py (1)
  • errored_state (11-34)
state-manager/app/models/errored_models.py (1)
  • ErroredRequestModel (5-6)
state-manager/app/models/state_status_enum.py (1)
  • StateStatusEnum (4-14)
state-manager/tests/unit/controller/test_enqueue_states.py (4)
state-manager/app/controller/enqueue_states.py (1)
  • enqueue_states (13-54)
state-manager/app/models/enqueue_request.py (1)
  • EnqueueRequestModel (4-6)
state-manager/app/models/state_status_enum.py (1)
  • StateStatusEnum (4-14)
state-manager/app/models/enqueue_response.py (2)
  • EnqueueResponseModel (14-19)
  • StateModel (6-11)
state-manager/tests/unit/controller/test_executed_state.py (4)
state-manager/app/controller/executed_state.py (1)
  • executed_state (13-65)
state-manager/app/models/executed_models.py (1)
  • ExecutedRequestModel (5-6)
state-manager/app/models/state_status_enum.py (1)
  • StateStatusEnum (4-14)
state-manager/app/tasks/create_next_state.py (1)
  • create_next_state (16-155)
state-manager/app/controller/create_states.py (5)
state-manager/app/models/create_models.py (5)
  • CreateRequestModel (22-24)
  • CreateResponseModel (27-29)
  • ResponseStateModel (12-19)
  • TriggerGraphRequestModel (32-33)
  • TriggerGraphResponseModel (36-39)
state-manager/app/models/state_status_enum.py (1)
  • StateStatusEnum (4-14)
state-manager/app/models/db/state.py (1)
  • State (8-18)
state-manager/app/models/db/graph_template_model.py (1)
  • GraphTemplate (13-81)
state-manager/app/models/node_template_model.py (1)
  • NodeTemplate (9-15)
state-manager/app/models/state_list_models.py (7)
state-manager/app/models/state_status_enum.py (1)
  • StateStatusEnum (4-14)
state-manager/app/models/create_models.py (3)
  • ResponseStateModel (12-18)
  • CreateResponseModel (25-27)
  • CreateRequestModel (21-22)
state-manager/app/models/enqueue_response.py (2)
  • StateModel (6-11)
  • EnqueueResponseModel (14-19)
state-manager/app/models/db/state.py (1)
  • State (8-18)
state-manager/app/models/executed_models.py (2)
  • ExecutedResponseModel (9-10)
  • ExecutedRequestModel (5-6)
state-manager/app/models/register_nodes_response.py (1)
  • RegisterNodesResponseModel (12-14)
state-manager/app/models/errored_models.py (1)
  • ErroredResponseModel (9-10)
state-manager/app/routes.py (10)
state-manager/app/models/create_models.py (4)
  • CreateRequestModel (22-24)
  • CreateResponseModel (27-29)
  • TriggerGraphRequestModel (32-33)
  • TriggerGraphResponseModel (36-39)
state-manager/app/controller/create_states.py (2)
  • create_states (51-93)
  • trigger_graph (24-48)
state-manager/app/models/list_models.py (2)
  • ListRegisteredNodesResponse (12-16)
  • ListGraphTemplatesResponse (19-23)
state-manager/app/controller/list_registered_nodes.py (1)
  • list_registered_nodes (11-38)
state-manager/app/controller/list_graph_templates.py (1)
  • list_graph_templates (11-38)
state-manager/app/models/state_list_models.py (2)
  • StatesByRunIdResponse (29-34)
  • CurrentStatesResponse (37-42)
state-manager/app/controller/get_states_by_run_id.py (1)
  • get_states_by_run_id (11-40)
state-manager/app/controller/get_current_states.py (1)
  • get_current_states (11-38)
state-manager/app/utils/check_secret.py (1)
  • check_api_key (15-19)
state-manager/app/controller/get_secrets.py (1)
  • get_secrets (9-55)
state-manager/app/controller/executed_state.py (5)
state-manager/app/models/executed_models.py (2)
  • ExecutedRequestModel (5-6)
  • ExecutedResponseModel (9-10)
state-manager/app/models/state_status_enum.py (1)
  • StateStatusEnum (4-14)
state-manager/app/tasks/create_next_state.py (1)
  • create_next_state (16-156)
state-manager/app/models/db/state.py (1)
  • State (8-18)
python-sdk/exospherehost/runtime.py (1)
  • _notify_executed (197-214)
state-manager/app/controller/errored_state.py (5)
state-manager/app/models/db/state.py (1)
  • State (8-18)
state-manager/app/models/state_status_enum.py (1)
  • StateStatusEnum (4-14)
state-manager/app/models/errored_models.py (2)
  • ErroredRequestModel (5-6)
  • ErroredResponseModel (9-10)
python-sdk/exospherehost/runtime.py (2)
  • _notify_errored (216-233)
  • _get_errored_endpoint (108-112)
state-manager/app/controller/enqueue_states.py (1)
  • enqueue_states (13-54)
🪛 GitHub Actions: Ruff check on changed files only
state-manager/tests/unit/controller/test_create_states.py

[error] 10-10: Ruff check failed. Unused import 'State' detected (F401). Remove unused import.


[error] 11-11: Ruff check failed. Unused import 'GraphTemplate' detected (F401). Remove unused import.


[error] 13-13: Ruff check failed. Unused import 'GraphTemplateValidationStatus' detected (F401). Remove unused import.

🔇 Additional comments (7)
state-manager/tests/unit/controller/test_get_secrets.py (7)

53-71: Happy-path coverage looks solid

Patching, arrangement, and response assertions are correct. Once the await-aware assertions are applied, this test will be robust.


97-120: Namespace mismatch path correctly validated

Accurate error propagation and message assertion.


121-146: GraphTemplate-not-found path covered correctly

Behavior and message match the controller logic.


149-176: Empty secrets behavior validated

Correctly asserts an empty dict is returned.


178-218: Complex secrets preserved end-to-end

Good coverage for a realistic secrets set.


219-239: DB error propagation test is correct

Asserts re-raise with original message; matches controller logic.


1-4: Ruff lint issues fixed — all checks passing
All Ruff errors were auto-fixed and there are no remaining lint failures in the repo or in state-manager/tests/unit/controller/test_get_secrets.py. CI’s lint step should now succeed.

Comment on lines 25 to +33
if len(body.outputs) == 0:
await State.find_one(State.id == state_id).set(
{"status": StateStatusEnum.EXECUTED, "outputs": {}, "parents": {**state.parents, state.identifier: state.id}}
)
state.status = StateStatusEnum.EXECUTED
state.outputs = {}
state.parents = {**state.parents, state.identifier: state.id}
await state.save()

background_tasks.add_task(create_next_state, state)

else:

else:
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

🧹 Nitpick (assertive)

Prefer truthy check for outputs and reduce branching duplication

Use a truthy check for readability and consider factoring the common mutation code (parents update + save + scheduling) to avoid duplication.

-        if len(body.outputs) == 0:
+        if not body.outputs:
             state.status = StateStatusEnum.EXECUTED
             state.outputs = {}
             state.parents = {**state.parents, state.identifier: state.id}
             await state.save()

             background_tasks.add_task(create_next_state, state)

-        else:            
+        else:
             state.outputs = body.outputs[0]
             state.status = StateStatusEnum.EXECUTED
             state.parents = {**state.parents, state.identifier: state.id}
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
if len(body.outputs) == 0:
await State.find_one(State.id == state_id).set(
{"status": StateStatusEnum.EXECUTED, "outputs": {}, "parents": {**state.parents, state.identifier: state.id}}
)
state.status = StateStatusEnum.EXECUTED
state.outputs = {}
state.parents = {**state.parents, state.identifier: state.id}
await state.save()
background_tasks.add_task(create_next_state, state)
else:
else:
if not body.outputs:
state.status = StateStatusEnum.EXECUTED
state.outputs = {}
state.parents = {**state.parents, state.identifier: state.id}
await state.save()
background_tasks.add_task(create_next_state, state)
else:
state.outputs = body.outputs[0]
state.status = StateStatusEnum.EXECUTED
state.parents = {**state.parents, state.identifier: state.id}
🤖 Prompt for AI Agents
In state-manager/app/controller/executed_state.py around lines 25-33, replace
the len(body.outputs) == 0 check with a truthy check (if not body.outputs) and
reduce duplicated mutations: set state.outputs = body.outputs or {} and set
state.status = StateStatusEnum.EXECUTED only when outputs are empty, then always
update state.parents = {**state.parents, state.identifier: state.id} and await
state.save(); finally, keep background_tasks.add_task(create_next_state, state)
inside the branch that runs when outputs are empty so scheduling remains
conditional while parents/save logic is factored out to avoid duplication.

Comment on lines +14 to +26
id: PydanticObjectId = Field(..., description="State ID")
node_name: str = Field(..., description="Name of the node")
namespace_name: str = Field(..., description="Namespace name")
identifier: str = Field(..., description="Node identifier")
graph_name: str = Field(..., description="Graph name")
run_id: str = Field(..., description="Run ID")
status: StateStatusEnum = Field(..., description="State status")
inputs: dict[str, Any] = Field(..., description="State inputs")
outputs: dict[str, Any] = Field(..., description="State outputs")
error: Optional[str] = Field(None, description="Error message")
parents: dict[str, PydanticObjectId] = Field(default_factory=dict, description="Parent state IDs")
created_at: datetime = Field(..., description="Creation timestamp")
updated_at: datetime = Field(..., description="Last update timestamp")

⚠️ Potential issue

Fix type mismatch with route serialization: use str for id and parents values

Routes build response dicts with id as str and parents values as str. The model currently expects PydanticObjectId, causing unnecessary coercion and potential validation headaches. Align the model with the serialized payload.

 class StateListItem(BaseModel):
     """Model for a single state in a list"""
-    id: PydanticObjectId = Field(..., description="State ID")
+    id: str = Field(..., description="State ID")
@@
-    parents: dict[str, PydanticObjectId] = Field(default_factory=dict, description="Parent state IDs")
+    parents: dict[str, str] = Field(default_factory=dict, description="Parent state IDs")
📝 Committable suggestion


Suggested change
id: PydanticObjectId = Field(..., description="State ID")
node_name: str = Field(..., description="Name of the node")
namespace_name: str = Field(..., description="Namespace name")
identifier: str = Field(..., description="Node identifier")
graph_name: str = Field(..., description="Graph name")
run_id: str = Field(..., description="Run ID")
status: StateStatusEnum = Field(..., description="State status")
inputs: dict[str, Any] = Field(..., description="State inputs")
outputs: dict[str, Any] = Field(..., description="State outputs")
error: Optional[str] = Field(None, description="Error message")
parents: dict[str, PydanticObjectId] = Field(default_factory=dict, description="Parent state IDs")
created_at: datetime = Field(..., description="Creation timestamp")
updated_at: datetime = Field(..., description="Last update timestamp")
class StateListItem(BaseModel):
"""Model for a single state in a list"""
id: str = Field(..., description="State ID")
node_name: str = Field(..., description="Name of the node")
namespace_name: str = Field(..., description="Namespace name")
identifier: str = Field(..., description="Node identifier")
graph_name: str = Field(..., description="Graph name")
run_id: str = Field(..., description="Run ID")
status: StateStatusEnum = Field(..., description="State status")
inputs: dict[str, Any] = Field(..., description="State inputs")
outputs: dict[str, Any] = Field(..., description="State outputs")
error: Optional[str] = Field(None, description="Error message")
parents: dict[str, str] = Field(default_factory=dict, description="Parent state IDs")
created_at: datetime = Field(..., description="Creation timestamp")
updated_at: datetime = Field(..., description="Last update timestamp")
🤖 Prompt for AI Agents
In state-manager/app/models/state_list_models.py around lines 14 to 26, the
model uses PydanticObjectId for id and parents values but route responses
serialize those as strings; change id: PydanticObjectId to id: str and change
parents: dict[str, PydanticObjectId] to parents: dict[str, str] (or Optional
entries if needed) so the model types match the serialized payload, update any
Field descriptions accordingly, and run tests to ensure no further
validation/coercion errors.

Comment on lines +65 to +83
@router.post(
"/graph/{graph_name}/trigger",
response_model=TriggerGraphResponseModel,
status_code=status.HTTP_200_OK,
response_description="Graph triggered successfully with new run ID",
tags=["graph"]
)
async def trigger_graph_route(namespace_name: str, graph_name: str, body: TriggerGraphRequestModel, request: Request, api_key: str = Depends(check_api_key)):

x_exosphere_request_id = getattr(request.state, "x_exosphere_request_id", str(uuid4()))

if api_key:
logger.info(f"API key is valid for namespace {namespace_name}", x_exosphere_request_id=x_exosphere_request_id)
else:
logger.error(f"API key is invalid for namespace {namespace_name}", x_exosphere_request_id=x_exosphere_request_id)
raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED, detail="Invalid API key")

return await trigger_graph(namespace_name, graph_name, body, x_exosphere_request_id)


💡 Verification agent

🧩 Analysis chain

Fix ruff failures before merging: trailing whitespace and unused variables

  • Unused api_key vars after simplifying checks can trip lints; discard them as _.
  • Remove trailing whitespace (e.g., blank line near Line 218) and any W293/W291 cases across the module.

Run this quick sweep to locate offenders:

Also applies to: 222-244, 246-268, 270-319, 321-367


🏁 Script executed:

#!/bin/bash
set -euo pipefail
# trailing whitespace
rg -nP '\s+$' -g 'state-manager/**'
# unused api_key parameters
rg -n -C2 -g 'state-manager/**' -P 'def\s+\w+\(.*api_key:\s*str\s*=\s*Depends\(check_api_key\)\)'

Length of output: 25524


Remove trailing whitespace in state-manager/app/routes.py

The api_key parameter is used in each route’s auth check—don’t rename it to _. Instead, fix the trailing-whitespace errors reported by ruff (W291/W293) at these locations:

  • state-manager/app/routes.py
    • Line 151: remove space after )
    • Line 218
    • Line 287
    • Line 291
    • Line 295
    • Line 312
    • Line 338
    • Line 341
    • Line 345
    • Line 361

After removing these stray spaces, rerun your linter to ensure no W291/W293 failures remain.

🤖 Prompt for AI Agents
In state-manager/app/routes.py around the referenced locations (lines 151, 218,
287, 291, 295, 312, 338, 341, 345, 361), remove any trailing whitespace
characters at the ends of those lines (no code renaming — keep the api_key
parameter as-is), save the file, and rerun the linter to confirm W291/W293
warnings are resolved.

Comment on lines +246 to +268

@router.get(
"/graphs/",
response_model=ListGraphTemplatesResponse,
status_code=status.HTTP_200_OK,
response_description="Graph templates listed successfully",
tags=["graph"]
)
async def list_graph_templates_route(namespace_name: str, request: Request, api_key: str = Depends(check_api_key)):
x_exosphere_request_id = getattr(request.state, "x_exosphere_request_id", str(uuid4()))

if api_key:
logger.info(f"API key is valid for namespace {namespace_name}", x_exosphere_request_id=x_exosphere_request_id)
else:
logger.error(f"API key is invalid for namespace {namespace_name}", x_exosphere_request_id=x_exosphere_request_id)
raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED, detail="Invalid API key")

templates = await list_graph_templates(namespace_name, x_exosphere_request_id)
return ListGraphTemplatesResponse(
namespace=namespace_name,
count=len(templates),
templates=templates
)

🧹 Nitpick (assertive)

Avoid leaking secrets in GraphTemplate list responses

ListGraphTemplatesResponse currently exposes full GraphTemplate objects, which include an encrypted secrets field. Prefer returning a summary model without secrets, or explicitly exclude secrets in the response mapping.

I can propose a GraphTemplateSummary model and corresponding mapping if you’d like to proceed.

🤖 Prompt for AI Agents
In state-manager/app/routes.py around lines 246 to 268, the route returns full
GraphTemplate objects (including encrypted secrets) via
ListGraphTemplatesResponse; remove secrets from the API output by mapping
templates to a safe summary model or filtering out the secrets field before
constructing the response. Create or use a GraphTemplateSummary (id, name,
created_at, etc.) that omits secrets, map each template -> summary (do not
include the encrypted secrets property), adjust ListGraphTemplatesResponse to
contain summaries and correct count, and ensure unit/typing changes reflect the
new response model.
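A minimal sketch of the field-filtering approach, using hypothetical field names (id, name, created_at — the real summary should mirror the actual GraphTemplate schema):

```python
def to_summary(template: dict) -> dict:
    """Return an API-safe view of a graph template, dropping the encrypted secrets."""
    allowed = ("id", "name", "validation_status", "created_at", "updated_at")
    return {key: template[key] for key in allowed if key in template}
```

The same idea applies whether the filtering happens via a dedicated Pydantic summary model or an explicit mapping before constructing the response.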

Comment on lines +1 to +5
import pytest
from unittest.mock import AsyncMock, MagicMock, patch
from fastapi import HTTPException
from beanie import PydanticObjectId
from datetime import datetime

💡 Verification agent

🧩 Analysis chain

Verify async test plugin configuration (pytest-asyncio/anyio)

These tests are async; ensure the plugin is installed and configured so they run under CI.


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "Searching for pytest async plugin configuration..."
rg -nP '\bpytest\.mark\.(asyncio|anyio)\b' -g '!**/.venv/**' || true
rg -nP '^\s*\[pytest\]|\[tool\.pytest\.ini_options\]' -n -C2 -g '!**/.venv/**' pyproject.toml pytest.ini setup.cfg || true
rg -nP 'pytest-asyncio|pytest-anyio' -n -C2 -g '!**/.venv/**' pyproject.toml requirements*.txt poetry.lock || true

Length of output: 1186


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "Checking for pytest.mark.asyncio in test_create_states.py..."
rg -nP '@pytest\.mark\.asyncio' -n state-manager/tests/unit/controller/test_create_states.py || echo "No @pytest.mark.asyncio found"

Length of output: 314


Ensure pytest-asyncio/anyio is installed and async tests are marked or auto-configured

We’ve confirmed that

  • state-manager/tests/unit/controller/test_create_states.py defines async def tests but has no @pytest.mark.asyncio decorator, and
  • there’s no pytest-asyncio/pytest-anyio plugin in any pytest config or dependency file.

To address this before CI runs:

• Add the async test plugin to your dev dependencies (e.g. in requirements-dev.txt, pyproject.toml, etc.):

pytest-asyncio>=0.20.0
# or anyio, which ships its own pytest plugin
anyio>=3.0.0

• Either decorate each async test:

 import pytest

+@pytest.mark.asyncio
 async def test_my_async_case():
     …

• Or configure automatic mode so you don’t need to decorate every test. Create/update a pytest.ini (or under [tool.pytest.ini_options] in pyproject.toml):

[pytest]
asyncio_mode = auto

• Ensure your CI installs those dev dependencies so async tests will be awaited and executed properly.

🤖 Prompt for AI Agents
In state-manager/tests/unit/controller/test_create_states.py lines 1-5, the file
defines async tests but lacks pytest-asyncio/anyio configuration or
@pytest.mark.asyncio decorators; add pytest-asyncio or pytest-anyio to dev
dependencies (requirements-dev.txt or pyproject.toml) and either annotate each
async test with the appropriate @pytest.mark.asyncio/@pytest.mark.anyio or
enable automatic mode by adding a pytest.ini (or tool.pytest.ini_options) with
asyncio_mode = auto, and ensure CI installs the dev dependencies so async tests
run under the plugin.
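If the project keeps its configuration in pyproject.toml rather than pytest.ini, the equivalent automatic-mode setting (assuming pytest-asyncio is the chosen plugin) is:

```toml
[tool.pytest.ini_options]
asyncio_mode = "auto"
```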

Comment on lines +153 to +157
# Verify find_one was called (with any arguments)
assert mock_graph_template_class.find_one.called
mock_state_class.insert_many.assert_called_once()
mock_state_class.find.assert_called_once()


🧹 Nitpick (assertive)

Optional: assert GraphTemplate.find_one filter correctness

Strengthen the test by asserting that find_one was called with the expected filters (graph name + namespace).

You can inspect mock_graph_template_class.find_one.call_args and assert the query predicates include both the graph name and namespace.

🤖 Prompt for AI Agents
In state-manager/tests/unit/controller/test_create_states.py around lines 153 to
157, the test currently only checks that mock_graph_template_class.find_one was
called; add an assertion that it was called with the expected filter containing
both the graph name and namespace. Inspect
mock_graph_template_class.find_one.call_args and assert the first positional
argument (the query dict) includes the expected keys/values (e.g.,
query.get("name") == expected_graph_name and query.get("namespace") ==
expected_namespace) or compare to the exact dict used by the code under test.
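A sketch of the call_args inspection, assuming the controller passes a plain dict filter to find_one (adjust the shape if it uses Beanie query expressions instead):

```python
import asyncio
from unittest.mock import AsyncMock

mock_find_one = AsyncMock(return_value=None)

# Stand-in for the code under test issuing the lookup:
asyncio.run(mock_find_one({"name": "test_graph", "namespace": "test_namespace"}))

query = mock_find_one.call_args[0][0]  # first positional argument
assert query.get("name") == "test_graph"
assert query.get("namespace") == "test_namespace"
```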

Comment on lines +253 to +258
mock_state_queued.save = AsyncMock()

mock_state_queued.status = StateStatusEnum.QUEUED
mock_state_queued.set = AsyncMock()
mock_state_class.find_one = AsyncMock(return_value=mock_state_queued)


🛠️ Refactor suggestion

Remove dead .set mock; assert save and field updates instead

The controller mutates in-memory state and calls save(); it doesn’t call .set(). Remove the unused mock and verify persistence + updates.

         mock_state_queued.save = AsyncMock()
 
         mock_state_queued.status = StateStatusEnum.QUEUED
-        mock_state_queued.set = AsyncMock()
         mock_state_class.find_one = AsyncMock(return_value=mock_state_queued)
@@
         # Assert
         assert result.status == StateStatusEnum.ERRORED
         assert mock_state_class.find_one.call_count == 1  # Called once for finding
-        assert mock_state_queued.error == "Different error message"
+        mock_state_queued.save.assert_awaited_once()
+        assert mock_state_queued.status == StateStatusEnum.ERRORED
+        assert mock_state_queued.error == "Different error message"

Also applies to: 267-271

🤖 Prompt for AI Agents
In state-manager/tests/unit/controller/test_errored_state.py around lines
253-258 (and similarly for lines 267-271), remove the unused mock assignment
mock_state_queued.set = AsyncMock() because the controller does not call .set();
update the test to assert that mock_state_queued.save was awaited and assert the
expected in-memory field changes on mock_state_queued (e.g., status and any
other mutated fields) rather than asserting a .set() call, ensuring the test
verifies persistence via save() and the correct field updates.

Comment on lines +236 to +240
# Assert
assert result.status == StateStatusEnum.EXECUTED
assert mock_state.outputs == {}
mock_background_tasks.add_task.assert_called_once_with(mock_create_next_state, mock_state)


🧹 Nitpick (assertive)

Empty outputs path: also assert save and parents update

Verify the save was awaited and the parents map was updated with the self-parent entry.

         assert result.status == StateStatusEnum.EXECUTED
         assert mock_state.outputs == {}
         mock_background_tasks.add_task.assert_called_once_with(mock_create_next_state, mock_state)
+        mock_state.save.assert_awaited_once()
+        assert mock_state.parents.get(mock_state.identifier) == mock_state.id
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
         # Assert
         assert result.status == StateStatusEnum.EXECUTED
         assert mock_state.outputs == {}
         mock_background_tasks.add_task.assert_called_once_with(mock_create_next_state, mock_state)
+        mock_state.save.assert_awaited_once()
+        assert mock_state.parents.get(mock_state.identifier) == mock_state.id
🤖 Prompt for AI Agents
In state-manager/tests/unit/controller/test_executed_state.py around lines 236
to 240, the test currently asserts status, outputs, and that background task was
scheduled but misses verifying that the state's save coroutine was awaited and
that the state's parents map was updated with a self-parent entry; update the
test to assert the mocked save method was awaited (e.g., assert awaitable save
called/awaited) and add an assertion that mock_state.parents contains the
self-parent mapping (self id -> parent info) matching the expected self-parent
entry after execution.

Comment on lines +21 to +23
def mock_state_id(self):
return PydanticObjectId()


🧹 Nitpick (assertive)

Align state_id fixture type with the function signature (str)

get_secrets(namespace_name: str, state_id: str, ...) expects a string, but the fixture returns a PydanticObjectId. Returning a string avoids subtle mismatches and future brittleness if the underlying model enforcement changes.

Apply this diff:

-    def mock_state_id(self):
-        return PydanticObjectId()
+    def mock_state_id(self):
+        return str(PydanticObjectId())
📝 Committable suggestion


Suggested change
     def mock_state_id(self):
-        return PydanticObjectId()
+        return str(PydanticObjectId())
🤖 Prompt for AI Agents
In state-manager/tests/unit/controller/test_get_secrets.py around lines 21 to
23, the mock_state_id fixture currently returns a PydanticObjectId but
get_secrets(namespace_name: str, state_id: str, ...) expects a string; change
the fixture to return a string (e.g., str(PydanticObjectId()) or a fixed
UUID-like string) so the fixture type matches the function signature and avoid
future brittleness; update any imports only if necessary and ensure tests still
pass.

Comment on lines +85 to +96
mock_state_class.get = AsyncMock(return_value=None)

# Act & Assert
with pytest.raises(ValueError) as exc_info:
await get_secrets(
mock_namespace,
mock_state_id,
mock_request_id
)

assert str(exc_info.value) == f"State {mock_state_id} not found"


🧹 Nitpick (assertive)

Good negative-path assertion for missing state

Clear failure mode and precise message check. Consider (optional) also asserting GraphTemplate.find_one is not called in this path to guard against unnecessary DB work.

🤖 Prompt for AI Agents
In state-manager/tests/unit/controller/test_get_secrets.py around lines 85 to
96, the negative-path test correctly asserts a ValueError for a missing state
but should also guard against unnecessary DB work; after asserting the exception
message, add an assertion that GraphTemplate.find_one was not awaited/called
(use the mock used for GraphTemplate in the test, e.g. assert_not_awaited() or
assert_not_called() as appropriate) to ensure the function returns early without
invoking the GraphTemplate lookup.
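The guard is a one-liner with unittest.mock; a minimal standalone illustration:

```python
from unittest.mock import AsyncMock

mock_find_one = AsyncMock()

# Early-return path: the code under test never reached the template lookup.
mock_find_one.assert_not_called()   # no call was recorded
mock_find_one.assert_not_awaited()  # and nothing was awaited
```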

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 14

🔭 Outside diff range comments (1)
state-manager/tests/unit/controller/test_create_states.py (1)

295-302: Strengthen multi-state assertions and verify constructor invocations

Ensure both response entries are present and that the State constructor was called with correct kwargs for each state.

@@
         # Assert
         assert result.status == StateStatusEnum.CREATED
+        assert len(result.states) == 2
+        assert {s.identifier for s in result.states} == {"node1", "node2"}
         assert mock_graph_template_class.find_one.called
         mock_state_class.insert_many.assert_called_once()
         # Verify that insert_many was called with 2 states
         call_args = mock_state_class.insert_many.call_args[0][0]
         assert len(call_args) == 2
+        # Ensure the read-back query was executed
+        mock_state_class.find.assert_called_once()
+        # Validate the constructor calls for both states
+        expected_calls = [
+            call(
+                identifier="node1",
+                node_name="test_node",
+                namespace_name=mock_namespace,
+                graph_name=mock_graph_name,
+                status=StateStatusEnum.CREATED,
+                inputs={"input1": "value1"},
+                outputs={},
+                error=None,
+            ),
+            call(
+                identifier="node2",
+                node_name="test_node",
+                namespace_name=mock_namespace,
+                graph_name=mock_graph_name,
+                status=StateStatusEnum.CREATED,
+                inputs={"input2": "value2"},
+                outputs={},
+                error=None,
+            ),
+        ]
+        mock_state_class.assert_has_calls(expected_calls, any_order=True)
+        assert mock_state_class.call_count == 2
♻️ Duplicate comments (29)
state-manager/tests/unit/controller/test_get_graph_template.py (7)

82-82: Use assert_awaited_once for AsyncMock

Strengthen async assertion semantics by asserting awaited, not just called.

-        mock_graph_template_class.find_one.assert_called_once()
+        mock_graph_template_class.find_one.assert_awaited_once()

52-53: Remove unused get_secrets stub from fixture

Controller never calls get_secrets; this is dead setup.

-        template.get_secrets.return_value = {"secret1": "encrypted_value1", "secret2": "encrypted_value2"}

131-131: Remove unused get_secrets stub

Dead setup; controller maps template.secrets keys directly.

-        template.get_secrets.return_value = {"secret1": "encrypted_value1"}

171-171: Remove unused get_secrets stub

Not used by the code path under test.

-        template.get_secrets.return_value = {}

204-204: Remove unused get_secrets stub

Not used; avoid confusing future readers.

-        template.get_secrets.return_value = {}

270-274: Remove unused get_secrets stubs

Controller never calls get_secrets; keys from template.secrets are sufficient.

-        template.get_secrets.return_value = {
-            "api_key": "encrypted_api_key",
-            "database_url": "encrypted_db_url",
-            "aws_credentials": "encrypted_aws_creds"
-        }

12-12: Optional: add a case for ONGOING validation status

Enum exposes ONGOING; mirror the PENDING case to prevent regressions.

+    @patch('app.controller.get_graph_template.GraphTemplate')
+    async def test_get_graph_template_with_ongoing_validation(
+        self,
+        mock_graph_template_class,
+        mock_namespace,
+        mock_graph_name,
+        mock_request_id
+    ):
+        """Test retrieval of graph template with ongoing validation"""
+        template = MagicMock()
+        template.nodes = [NodeTemplate(
+            identifier="node1",
+            node_name="Test Node",
+            namespace="test_namespace",
+            inputs={},
+            next_nodes=[],
+            unites=None
+        )]
+        template.validation_status = GraphTemplateValidationStatus.ONGOING
+        template.validation_errors = []
+        template.secrets = {}
+        template.created_at = datetime(2023, 1, 1, 12, 0, 0)
+        template.updated_at = datetime(2023, 1, 2, 12, 0, 0)
+        mock_graph_template_class.find_one = AsyncMock(return_value=template)
+
+        result = await get_graph_template(
+            mock_namespace,
+            mock_graph_name,
+            mock_request_id
+        )
+
+        assert result.validation_status == GraphTemplateValidationStatus.ONGOING
+        assert result.validation_errors == []
+        assert result.secrets == {}

I can push this test if you want.

state-manager/app/controller/get_current_states.py (1)

35-37: Log structured error like other controllers (don’t stringify exception)

Pass the exception object as error=e and include namespace as a structured field.

-        logger.error(f"Error fetching current states for namespace {namespace}: {str(e)}", x_exosphere_request_id=request_id)
+        logger.error(
+            "Error fetching current states for namespace",
+            namespace=namespace,
+            error=e,
+            x_exosphere_request_id=request_id,
+        )
state-manager/app/controller/get_states_by_run_id.py (1)

37-39: Use structured error logging; avoid interpolating exception text

Keep consistent with other controllers: pass the exception as a structured field and avoid formatting it into the message.

Apply:

-        logger.error(f"Error fetching states for run ID {run_id} in namespace {namespace}: {str(e)}", x_exosphere_request_id=request_id)
-        raise
+        logger.error(f"Error fetching states for run ID {run_id} in namespace {namespace}", x_exosphere_request_id=request_id, error=e)
+        raise
state-manager/tests/unit/controller/test_upsert_graph_template.py (9)

1-1: Remove time.sleep; it adds flakiness and isn’t needed in unit tests.

BackgroundTasks.add_task is only scheduled (not executed) here, so sleeps do nothing except slow tests. Remove the import.

Apply:

-from time import sleep
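Because the test passes a mock for BackgroundTasks, add_task only records the call and nothing ever runs, so the sleep can never observe anything. A minimal illustration:

```python
from unittest.mock import MagicMock

background_tasks = MagicMock()

# The controller "schedules" work; with a mock, nothing ever executes,
# so waiting for the task to complete is waiting for nothing.
background_tasks.add_task(print, "verify_graph")
background_tasks.add_task.assert_called_once_with(print, "verify_graph")
```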

108-109: Delete unnecessary sleep; assert scheduling instead.

Sleep isn’t needed; assert the background task scheduling explicitly.

Apply:

-        sleep(1) # wait for the background task to complete

119-124: Assert awaited calls for async mocks and verify DB lookup awaited.

update and find_one are AsyncMocks; assert_awaited_once() is the correct assertion method.

Apply:

         # Verify template was updated
         mock_existing_template.set_secrets.assert_called_once_with(mock_upsert_request.secrets)
-        mock_existing_template.update.assert_called_once()
+        mock_existing_template.update.assert_awaited_once()
         
         # Verify background task was added
         mock_background_tasks.add_task.assert_called_once_with(mock_verify_graph, mock_existing_template)
+        # Verify DB lookup awaited
+        mock_graph_template_class.find_one.assert_awaited_once()

162-171: Assert awaited insert and awaited find_one.

insert/find_one are coroutines; use assert_awaited_once(). This also improves coverage.

Apply:

-        # Verify new template was created
-        mock_graph_template_class.insert.assert_called_once()
+        # Verify new template was created
+        mock_graph_template_class.insert.assert_awaited_once()
+        # Verify DB lookup awaited
+        mock_graph_template_class.find_one.assert_awaited_once()

230-247: Verify interactions in empty-nodes path and remove sleep.

Add assertions for set_secrets with {}, awaited update, and scheduled background task. Remove the sleep.

Apply:

-        
-        sleep(1) # wait for the background task to complete
         # Assert
         assert result.nodes == []
         assert result.validation_status == GraphTemplateValidationStatus.VALID
         assert result.validation_errors == []
         assert result.secrets == {}
+
+        # Verify interactions
+        mock_existing_template.set_secrets.assert_called_once_with({})
+        mock_existing_template.update.assert_awaited_once()
+        mock_background_tasks.add_task.assert_called_once_with(mock_verify_graph, mock_existing_template)

241-241: Remove unnecessary sleep here as well.

Apply:

-        sleep(1) # wait for the background task to complete

248-275: Prefer mutating the in-memory mock on update or reload in controller.

The controller writes PENDING/[] via update but returns the stale in-memory object. To assert DB-updated values in the response, either:

  • adjust controller to reload the object after update; or
  • mutate the mock in-place via update’s side_effect.

For the test-only approach, set update side_effect to update validation fields.

Apply:

-        mock_existing_template.update = AsyncMock()
+        mock_existing_template.update = AsyncMock(side_effect=lambda *a, **k: (
+            setattr(mock_existing_template, "validation_status", GraphTemplateValidationStatus.PENDING),
+            setattr(mock_existing_template, "validation_errors", [])
+        ))

285-289: Fix contradictory comment vs assertion; align with intended behavior.

The comment says “Should be reset to empty” but the assertion expects previous errors. Pick one behavior and make test consistent. If you apply the side_effect above (or reload in controller), assert PENDING and []:

Apply:

-        assert result.validation_status == GraphTemplateValidationStatus.INVALID
-        assert result.validation_errors == ["Previous error 1", "Previous error 2"]  # Should be reset to empty
+        assert result.validation_status == GraphTemplateValidationStatus.PENDING
+        assert result.validation_errors == []

If you keep current controller behavior (no reload), remove the misleading comment instead:

-        assert result.validation_errors == ["Previous error 1", "Previous error 2"]  # Should be reset to empty
+        assert result.validation_errors == ["Previous error 1", "Previous error 2"]

285-285: Remove sleep here too; it’s not doing anything.

Apply:

-        sleep(1) # wait for the background task to complete
state-manager/tests/unit/controller/test_create_states.py (1)

194-197: Rename local to avoid shadowing the fixture named ‘mock_graph_template’

This reduces confusion when scanning tests.

-        mock_graph_template = MagicMock()
-        mock_graph_template.get_node_by_identifier.return_value = None
-        mock_graph_template_class.find_one = AsyncMock(return_value=mock_graph_template)
+        graph_template_instance = MagicMock()
+        graph_template_instance.get_node_by_identifier.return_value = None
+        mock_graph_template_class.find_one = AsyncMock(return_value=graph_template_instance)
state-manager/app/controller/list_graph_templates.py (2)

35-37: Use structured error logging and include stack trace

Align with logging used in other controllers (e.g., get_graph_template). Prefer structured fields and exc_info=True.

-        logger.error(f"Error listing graph templates for namespace {namespace}: {str(e)}", x_exosphere_request_id=request_id)
+        logger.error(
+            "Error listing graph templates",
+            namespace=namespace,
+            x_exosphere_request_id=request_id,
+            error=e,
+            exc_info=True,
+        )

10-20: Optional: add deterministic sort and pagination

Unbounded list responses can grow without limit. Add a deterministic sort and simple pagination to keep responses predictable and scalable.

-async def list_graph_templates(namespace: str, request_id: str) -> List[GraphTemplate]:
+async def list_graph_templates(
+    namespace: str,
+    request_id: str,
+    *,
+    limit: int = 100,
+    offset: int = 0,
+) -> List[GraphTemplate]:
     """
     List all graph templates for a given namespace
     
     Args:
-        namespace: The namespace to list graph templates for
-        request_id: Request ID for logging
+        namespace: The namespace to list graph templates for
+        request_id: Request ID for logging
+        limit: Max number of items to return (default 100)
+        offset: Number of items to skip for pagination (default 0)
         
     Returns:
         List of graph templates
     """
@@
-        templates = await GraphTemplate.find(
-            GraphTemplate.namespace == namespace
-        ).to_list()
+        templates = await (
+            GraphTemplate
+            .find(GraphTemplate.namespace == namespace)
+            .sort(GraphTemplate.name)  # or .sort("-created_at")
+            .skip(offset)
+            .limit(limit)
+            .to_list()
+        )

Also applies to: 27-30

state-manager/app/controller/list_registered_nodes.py (2)

35-37: Use structured error logging and include stack trace

Match the logging pattern used in other controllers; include structured fields and exc_info=True.

-        logger.error(f"Error listing registered nodes for namespace {namespace}: {str(e)}", x_exosphere_request_id=request_id)
+        logger.error(
+            "Error listing registered nodes",
+            namespace=namespace,
+            x_exosphere_request_id=request_id,
+            error=e,
+            exc_info=True,
+        )

10-20: Optional: add deterministic sort and pagination

Bound the result set and make ordering predictable.

-async def list_registered_nodes(namespace: str, request_id: str) -> List[RegisteredNode]:
+async def list_registered_nodes(
+    namespace: str,
+    request_id: str,
+    *,
+    limit: int = 100,
+    offset: int = 0,
+) -> List[RegisteredNode]:
     """
     List all registered nodes for a given namespace
     
     Args:
-        namespace: The namespace to list nodes for
-        request_id: Request ID for logging
+        namespace: The namespace to list nodes for
+        request_id: Request ID for logging
+        limit: Max number of items to return (default 100)
+        offset: Number of items to skip for pagination (default 0)
         
     Returns:
         List of registered nodes
     """
@@
-        nodes = await RegisteredNode.find(
-            RegisteredNode.namespace == namespace
-        ).to_list()
+        nodes = await (
+            RegisteredNode
+            .find(RegisteredNode.namespace == namespace)
+            .sort(RegisteredNode.name)
+            .skip(offset)
+            .limit(limit)
+            .to_list()
+        )

Also applies to: 27-30

state-manager/tests/integration/peinding_test_full_workflow_integration.py (6)

33-35: Gate integration tests behind an environment flag to avoid breaking CI

These tests require running services. Skip by default unless explicitly enabled.

-pytestmark = pytest.mark.integration
+pytestmark = [
+    pytest.mark.integration,
+    pytest.mark.skipif(
+        os.getenv("RUN_INTEGRATION_TESTS") != "1",
+        reason="Integration tests require running services; set RUN_INTEGRATION_TESTS=1 to enable.",
+    ),
+]

40-116: Pytest fixtures defined as class methods with self will not work

Fixtures must be module-level (no self) for pytest to inject them. As written, fixture discovery/injection will fail.

Apply this diff to remove the class-scoped fixtures:

 class TestFullWorkflowIntegration:
     """Integration tests for the complete state-manager workflow."""
-    
-    @pytest.fixture
-    async def state_manager_client(self):
-        """Create an HTTP client for the state-manager."""
-        async with httpx.AsyncClient(base_url="http://localhost:8000") as client:
-            yield client
-    
-    @pytest.fixture
-    def test_namespace(self) -> str:
-        """Generate a unique test namespace."""
-        return f"test-namespace-{uuid.uuid4().hex[:8]}"
-    
-    @pytest.fixture
-    def test_api_key(self) -> str:
-        """Get the test API key from environment."""
-        return os.environ.get("TEST_API_KEY", "API-KEY")
-    
-    @pytest.fixture
-    def test_graph_name(self) -> str:
-        """Generate a unique test graph name."""
-        return f"test-graph-{uuid.uuid4().hex[:8]}"
-    
-    @pytest.fixture
-    def test_runtime_name(self) -> str:
-        """Generate a unique test runtime name."""
-        return f"test-runtime-{uuid.uuid4().hex[:8]}"
-    
-    @pytest.fixture
-    def sample_node_registration(self) -> NodeRegistrationModel:
-        """Create a sample node registration for testing."""
-        return NodeRegistrationModel(
-            name="TestNode",
-            inputs_schema={
-                "type": "object",
-                "properties": {
-                    "input1": {"type": "string"},
-                    "input2": {"type": "number"}
-                },
-                "required": ["input1", "input2"]
-            },
-            outputs_schema={
-                "type": "object",
-                "properties": {
-                    "output1": {"type": "string"},
-                    "output2": {"type": "number"}
-                }
-            },
-            secrets=["test_secret"]
-        )
-    
-    @pytest.fixture
-    def sample_graph_nodes(self, test_namespace: str) -> List[NodeTemplate]:
-        """Create sample graph nodes for testing."""
-        return [
-            NodeTemplate(
-                node_name="TestNode",
-                namespace=test_namespace,
-                identifier="node1",
-                inputs={
-                    "input1": "test_value",
-                    "input2": 42
-                },
-                next_nodes=["node2"],
-                unites=None
-            ),
-            NodeTemplate(
-                node_name="TestNode",
-                namespace=test_namespace,
-                identifier="node2",
-                inputs={
-                    "input1": "{{node1.output1}}",
-                    "input2": "{{node1.output2}}"
-                },
-                next_nodes=[],
-                unites=None
-            )
-        ]

Then add equivalent module-level fixtures outside the class:

# Module-level fixtures
@pytest.fixture
async def state_manager_client():
    async with httpx.AsyncClient(base_url="http://localhost:8000") as client:
        yield client

@pytest.fixture
def test_namespace() -> str:
    return f"test-namespace-{uuid.uuid4().hex[:8]}"

@pytest.fixture
def test_api_key() -> str:
    return os.environ.get("TEST_API_KEY", "API-KEY")

@pytest.fixture
def test_graph_name() -> str:
    return f"test-graph-{uuid.uuid4().hex[:8]}"

@pytest.fixture
def test_runtime_name() -> str:
    return f"test-runtime-{uuid.uuid4().hex[:8]}"

@pytest.fixture
def sample_node_registration() -> NodeRegistrationModel:
    return NodeRegistrationModel(
        name="TestNode",
        inputs_schema={"type": "object", "properties": {"input1": {"type": "string"}, "input2": {"type": "number"}}, "required": ["input1", "input2"]},
        outputs_schema={"type": "object", "properties": {"output1": {"type": "string"}, "output2": {"type": "number"}}},
        secrets=["test_secret"],
    )

@pytest.fixture
def sample_graph_nodes(test_namespace: str) -> List[NodeTemplate]:
    return [
        NodeTemplate(
            node_name="TestNode",
            namespace=test_namespace,
            identifier="node1",
            inputs={"input1": "test_value", "input2": 42},
            next_nodes=["node2"],
            unites=None,
        ),
        NodeTemplate(
            node_name="TestNode",
            namespace=test_namespace,
            identifier="node2",
            inputs={"input1": "{{node1.output1}}", "input2": "{{node1.output2}}"},
            next_nodes=[],
            unites=None,
        ),
    ]

255-257: Missing state_id fixture; these tests will fail when run independently

test_execute_state (and test_get_secrets) require a state_id fixture that is not defined. test_create_states returns a value but is not a fixture and won’t run before these tests.

Add a proper module-level fixture that creates states and yields the state_id:

@pytest.fixture
async def state_id(
    state_manager_client,
    test_namespace: str,
    test_api_key: str,
    test_graph_name: str,
    sample_graph_nodes: List[NodeTemplate],
    sample_node_registration: NodeRegistrationModel,
    test_runtime_name: str,
) -> str:
    # Ensure prerequisites: register node and upsert graph
    await state_manager_client.put(
        f"/v0/namespace/{test_namespace}/nodes/",
        json=RegisterNodesRequestModel(runtime_name=test_runtime_name, nodes=[sample_node_registration]).model_dump(),
        headers={"X-API-Key": test_api_key},
    )
    await state_manager_client.put(
        f"/v0/namespace/{test_namespace}/graph/{test_graph_name}",
        json=UpsertGraphTemplateRequest(secrets={"test_secret": "secret_value"}, nodes=sample_graph_nodes).model_dump(),
        headers={"X-API-Key": test_api_key},
    )
    # Create states
    resp = await state_manager_client.post(
        f"/v0/namespace/{test_namespace}/graph/{test_graph_name}/states/create",
        json=CreateRequestModel(run_id=str(uuid.uuid4()), states=[RequestStateModel(identifier="node1", inputs={"input1": "test_value", "input2": 42})]).model_dump(),
        headers={"X-API-Key": test_api_key},
    )
    assert resp.status_code == 200, resp.text
    return resp.json()["states"][0]["state_id"]

279-281: Compare enum to its JSON string value

API returns a string; compare to StateStatusEnum.EXECUTED.value.

-        assert response_data["status"] == StateStatusEnum.EXECUTED
+        assert response_data["status"] == StateStatusEnum.EXECUTED.value
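
The pitfall can be reproduced with a minimal stand-in enum (hypothetical `Status`; this assumes `StateStatusEnum` is a plain `Enum` rather than a `str` mixin):

```python
from enum import Enum

# Hypothetical stand-in for StateStatusEnum (plain Enum, no str mixin)
class Status(Enum):
    EXECUTED = "EXECUTED"

# A plain Enum member never compares equal to its raw value...
assert Status.EXECUTED != "EXECUTED"
# ...so a JSON response field must be compared against .value
assert Status.EXECUTED.value == "EXECUTED"
```

If `StateStatusEnum` instead subclasses both `str` and `Enum`, the direct comparison would pass; comparing against `.value` is correct in either case.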

299-339: Don’t call other test methods; refactor into helpers or a single E2E test

Calling test_* from another test is an anti-pattern and breaks independence/order. Use helpers or fixtures instead.

Example refactor sketch:

async def _register_nodes(...): ...
async def _upsert_graph_template(...): ...
async def _create_state_and_get_id(...): ...
async def _execute_state(...): ...
async def _get_secrets(...): ...

async def test_full_workflow_happy_path(state_manager_client, test_namespace, test_api_key, test_graph_name, test_runtime_name, sample_node_registration, sample_graph_nodes):
    await _register_nodes(...)
    await _upsert_graph_template(...)
    state_id = await _create_state_and_get_id(...)
    await _get_secrets(...)
    await _execute_state(...)

I can provide a concrete patch if you want it in this PR.


345-348: Avoid prints in tests

Use logging or rely on assertions; printing adds noise to CI logs.

-        print(f"✅ Full workflow completed successfully for namespace: {test_namespace}")
-        print(f"   - Graph: {test_graph_name}")
-        print(f"   - State ID: {state_id}")
-        print(f"   - Runtime: {test_runtime_name}")
+        # Consider logging if needed, or rely on assertions above
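
If the diagnostic output is still wanted, a module-level logger keeps it off stdout and under pytest's log capture (a sketch; the function name and message format are illustrative):

```python
import logging

logger = logging.getLogger(__name__)

def report_workflow(namespace: str, graph: str, state_id: str) -> None:
    # Routed through the logging config instead of printed to stdout
    logger.info(
        "full workflow completed: namespace=%s graph=%s state_id=%s",
        namespace, graph, state_id,
    )
```
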
📜 Review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 009818f and f23a7fa.

📒 Files selected for processing (8)
  • state-manager/app/controller/get_current_states.py (1 hunks)
  • state-manager/app/controller/get_states_by_run_id.py (1 hunks)
  • state-manager/app/controller/list_graph_templates.py (1 hunks)
  • state-manager/app/controller/list_registered_nodes.py (1 hunks)
  • state-manager/tests/integration/peinding_test_full_workflow_integration.py (1 hunks)
  • state-manager/tests/unit/controller/test_create_states.py (1 hunks)
  • state-manager/tests/unit/controller/test_get_graph_template.py (1 hunks)
  • state-manager/tests/unit/controller/test_upsert_graph_template.py (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (7)
state-manager/app/controller/list_graph_templates.py (3)
state-manager/app/controller/get_graph_template.py (1)
  • get_graph_template (9-48)
state-manager/app/controller/upsert_graph_template.py (1)
  • upsert_graph_template (12-62)
state-manager/app/controller/get_secrets.py (1)
  • get_secrets (9-55)
state-manager/app/controller/list_registered_nodes.py (5)
state-manager/app/controller/register_nodes.py (1)
  • register_nodes (11-71)
state-manager/app/models/register_nodes_response.py (2)
  • RegisterNodesResponseModel (12-14)
  • RegisteredNodeModel (5-9)
state-manager/app/models/db/registered_node.py (1)
  • RegisteredNode (6-13)
state-manager/app/models/register_nodes_request.py (1)
  • RegisterNodesRequestModel (12-14)
python-sdk/exospherehost/runtime.py (1)
  • _register (126-158)
state-manager/app/controller/get_states_by_run_id.py (5)
state-manager/app/controller/enqueue_states.py (1)
  • enqueue_states (13-54)
state-manager/app/controller/executed_state.py (1)
  • executed_state (12-62)
state-manager/app/controller/create_states.py (1)
  • create_states (23-64)
state-manager/app/controller/errored_state.py (1)
  • errored_state (11-34)
state-manager/app/controller/register_nodes.py (1)
  • register_nodes (11-71)
state-manager/app/controller/get_current_states.py (2)
state-manager/app/controller/enqueue_states.py (1)
  • enqueue_states (13-54)
state-manager/app/controller/create_states.py (1)
  • create_states (23-64)
state-manager/tests/unit/controller/test_create_states.py (1)
state-manager/app/controller/create_states.py (1)
  • create_states (23-64)
state-manager/tests/unit/controller/test_get_graph_template.py (3)
state-manager/app/models/db/graph_template_model.py (1)
  • GraphTemplate (13-81)
state-manager/app/models/graph_models.py (2)
  • UpsertGraphTemplateRequest (8-10)
  • UpsertGraphTemplateResponse (13-19)
state-manager/app/controller/get_graph_template.py (1)
  • get_graph_template (9-48)
state-manager/tests/unit/controller/test_upsert_graph_template.py (6)
state-manager/app/controller/upsert_graph_template.py (1)
  • upsert_graph_template (12-62)
state-manager/app/routes.py (1)
  • upsert_graph_template (124-133)
state-manager/app/models/graph_models.py (2)
  • UpsertGraphTemplateRequest (8-10)
  • UpsertGraphTemplateResponse (13-19)
state-manager/app/tasks/verify_graph.py (1)
  • verify_graph (234-265)
state-manager/app/models/db/graph_template_model.py (1)
  • GraphTemplate (13-81)
state-manager/app/controller/get_graph_template.py (1)
  • get_graph_template (9-48)
🔇 Additional comments (17)
state-manager/app/controller/get_current_states.py (1)

11-13: Clarify intent of get_current_states or rename to get_states_by_namespace

The current implementation unconditionally returns all State documents in the given namespace, but the name get_current_states implies some notion of “current” (e.g. latest per identifier, only in-flight/non-terminal states, etc.). Please either:

• Update the query to match your intended “current” semantics (for example,
– Filter by State.status to exclude terminal/completed states,
– Group by identifier and return only the most recent document per identifier, or
– Add any other business-logic filters that define “current”)
• Or, if no such filtering is desired, rename the function to get_states_by_namespace to accurately reflect that it returns all states.

File: state-manager/app/controller/get_current_states.py
Lines: 11–13

[optional_refactors_recommended]

state-manager/app/controller/get_states_by_run_id.py (3)

4-8: Imports are clean; prior F401 is resolved

Unused import from earlier review has been removed. This should unblock ruff on this file.


24-26: Good, structured info logs with request correlation

Consistent use of x_exosphere_request_id and informative messages. This matches logging in sibling controllers.

Also applies to: 33-35


28-31: Add deterministic ordering and a compound index on State

The State model inherits created_at from BaseDatabaseModel, so you can reliably sort by it. I didn’t find any existing indexes on (namespace_name, run_id), so add one for optimal query performance.

• In state-manager/app/controller/get_states_by_run_id.py, update the query to:

         states = await State.find(
             State.run_id == run_id,
             State.namespace_name == namespace
-        ).to_list()
+        ).sort(-State.created_at).to_list()

• In state-manager/app/models/db/state.py, define a compound index:

class State(BaseDatabaseModel):
    # existing fields…
    node_name: str
    namespace_name: str
    identifier: str
    run_id: str

    class Settings:
        indexes = [
            [("namespace_name", 1), ("run_id", 1)]
        ]

This ensures consistent ordering and efficient filtering at scale.
[optional_refactors_recommended]

state-manager/tests/unit/controller/test_upsert_graph_template.py (2)

95-98: Good fix: set_secrets should be sync; update is async.

Using MagicMock for set_secrets (returning self) and AsyncMock for update aligns with the model’s contract and prevents coroutine chaining bugs.


79-81: Patch targets and argument ordering look correct.

verify_graph is the inner decorator, so (mock_verify_graph, mock_graph_template_class) ordering is right.

state-manager/tests/unit/controller/test_create_states.py (5)

16-36: LGTM: solid coverage for get_node_template success path

You validate both the returned NodeTemplate and that get_node_by_identifier was called with the expected identifier. Clean and sufficient.


37-49: LGTM: correct 404 handling for missing node template

Asserting both status_code and detail ensures contract fidelity for API error semantics.


133-136: Nice fix: correct mocking pattern for State.find()

Using MagicMock for find() and AsyncMock for .to_list() matches Beanie’s API and avoids awaiting a sync method.


1-5: Ensure pytest-asyncio/anyio is installed and configured for async tests
I didn’t find a pytest config (pytest.ini, setup.cfg, or pyproject.toml) or any declared dev dependency for pytest-asyncio/anyio. Because test_create_states.py contains async tests without @pytest.mark.asyncio, they’ll be skipped unless a plugin is installed and set to asyncio_mode=auto. Please verify and address the following:

  • No pytest config file detected at the repo root (pytest.ini, setup.cfg, or pyproject.toml).
  • No reference to pytest-asyncio or pytest-anyio in any requirements or lock files.
  • Other tests (e.g. test_trigger_graph.py) use @pytest.mark.asyncio, implying you rely on pytest-asyncio.
  • CI must install dev dependencies so the plugin is available during test runs.

Suggested config snippet to add (e.g., in pytest.ini at the project root):

[pytest]
asyncio_mode = auto

And include in your dev dependencies:

pytest-asyncio>=0.21.0

Alternatively, decorate each async test with @pytest.mark.asyncio.
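
A sketch of the explicit-marker alternative (hypothetical test body; assumes pytest-asyncio is installed):

```python
import asyncio

import pytest

@pytest.mark.asyncio
async def test_create_states_returns_state_ids():
    # Stand-in for the awaited controller call; the real test would
    # await create_states(...) and inspect the response model.
    result = await asyncio.sleep(0, result=["state-1"])
    assert result == ["state-1"]
```

With `asyncio_mode = auto` the marker is unnecessary; with the default `strict` mode, unmarked async tests are skipped with a warning.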


145-151: Add an assertion for run_id propagation

The ResponseStateModel includes a run_id field and the test setup assigns test_run_id to each state. To catch regressions, add an assertion for run_id:

• File: state-manager/tests/unit/controller/test_create_states.py
Lines: 145–151

         assert result.states[0].inputs == {"key": "value"}
+        assert result.states[0].run_id == "test_run_id"

[optional_refactors_recommended]

state-manager/app/controller/list_graph_templates.py (1)

10-10: Nit: Parameter naming consistency across controllers
Most controller methods in state-manager/app/controller use (namespace_name: str, x_exosphere_request_id: str), but here we have (namespace: str, request_id: str). For consistency, consider renaming as follows (and update all call sites accordingly):

  • state-manager/app/controller/list_graph_templates.py: change signature from
    async def list_graph_templates(namespace: str, request_id: str) -> List[GraphTemplate]:
    to
    async def list_graph_templates(namespace_name: str, x_exosphere_request_id: str) -> List[GraphTemplate]:
state-manager/app/controller/list_registered_nodes.py (1)

10-10: Rename namespace to namespace_name for consistency

Most controller handlers (e.g. get_secrets, register_nodes, etc.) use namespace_name as the first parameter. Update this signature and all its usages accordingly:

- async def list_registered_nodes(namespace: str, request_id: str) -> List[RegisteredNode]:
+ async def list_registered_nodes(namespace_name: str, request_id: str) -> List[RegisteredNode]:

• Update any in-body references from namespace → namespace_name
• (Optional) Consider standardizing the other three handlers still using namespace (list_graph_templates, get_states_by_run_id, get_current_states) to namespace_name as well.
[optional_refactors_recommended]

state-manager/tests/integration/peinding_test_full_workflow_integration.py (4)

162-171: Verify expected status code for upsert (PUT) endpoint

PUT can return 200 (updated) or 201 (created). Ensure the route sets 201 on create; otherwise this assertion will fail.

If the route returns 200, adjust the assertion:

-        assert response.status_code == 201
+        assert response.status_code in (200, 201)

1-3: Rename integration test file to correct typo in filename

The file at state-manager/tests/integration/peinding_test_full_workflow_integration.py contains a typo (“peinding”). Please rename it to one of the following to improve discoverability and avoid confusion:

  • state-manager/tests/integration/pending_test_full_workflow_integration.py
  • state-manager/tests/integration/test_full_workflow_integration.py

[optional_refactors_recommended]


23-31: Fix ruff E402: imports not at top due to sys.path manipulation

You can either suppress the E402 warnings on these imports or remove the manual path hack and rely on proper packaging:

• Suppress E402 locally by adding # noqa: E402 to each import
• Or remove the sys.path.insert(…) and configure your tests to see the package (e.g., install in editable mode with pip install -e ., set PYTHONPATH, or adjust your test runner’s root)

Diff example:

 # Add the state-manager app to the path
 sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '../../')))

-from app.models.register_nodes_request import RegisterNodesRequestModel, NodeRegistrationModel
-from app.models.graph_models         import UpsertGraphTemplateRequest, NodeTemplate
-from app.models.create_models        import CreateRequestModel, RequestStateModel
-from app.models.executed_models      import ExecutedRequestModel
-from app.models.enqueue_request      import EnqueueRequestModel
-from app.models.state_status_enum    import StateStatusEnum
+from app.models.register_nodes_request import RegisterNodesRequestModel, NodeRegistrationModel  # noqa: E402
+from app.models.graph_models          import UpsertGraphTemplateRequest, NodeTemplate         # noqa: E402
+from app.models.create_models         import CreateRequestModel, RequestStateModel           # noqa: E402
+from app.models.executed_models       import ExecutedRequestModel                          # noqa: E402
+from app.models.enqueue_request       import EnqueueRequestModel                           # noqa: E402
+from app.models.state_status_enum     import StateStatusEnum                               # noqa: E402

Please verify that your test suite still runs correctly after either change.


16-22: Async pytest plugin and asyncio_mode are already configured

  • state-manager/pyproject.toml includes pytest-asyncio>=0.24.0
  • state-manager/pytest.ini (line 9) sets asyncio_mode = auto

No further changes are needed for async test support.

Comment on lines +10 to +20
async def get_current_states(namespace: str, request_id: str) -> List[State]:
"""
Get all current states in a namespace

Args:
namespace: The namespace to search in
request_id: Request ID for logging

Returns:
List of all states in the namespace
"""

🛠️ Refactor suggestion

Align parameter names with other controllers (namespace_name, x_exosphere_request_id)

Other controllers use namespace_name and x_exosphere_request_id. Rename for consistency and easier grepping across the codebase.

-async def get_current_states(namespace: str, request_id: str) -> List[State]:
+async def get_current_states(namespace_name: str, x_exosphere_request_id: str) -> List[State]:
@@
-        namespace: The namespace to search in
-        request_id: Request ID for logging
+        namespace_name: The namespace to search in
+        x_exosphere_request_id: Request ID for logging
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
async def get_current_states(namespace: str, request_id: str) -> List[State]:
    """
    Get all current states in a namespace

    Args:
        namespace: The namespace to search in
        request_id: Request ID for logging

    Returns:
        List of all states in the namespace
    """
async def get_current_states(namespace_name: str, x_exosphere_request_id: str) -> List[State]:
    """
    Get all current states in a namespace

    Args:
        namespace_name: The namespace to search in
        x_exosphere_request_id: Request ID for logging

    Returns:
        List of all states in the namespace
    """
🤖 Prompt for AI Agents
In state-manager/app/controller/get_current_states.py around lines 10 to 20, the
function parameter names namespace and request_id should be renamed to
namespace_name and x_exosphere_request_id to match other controllers; update the
async def signature, docstring parameter names, any type hints and all internal
references to use namespace_name and x_exosphere_request_id, and then update any
callers/imports across the codebase to pass the new parameter names (or adjust
keyword usages) so the function and its usages stay consistent.

Comment on lines +24 to +31
logger.info(f"Fetching current states for namespace: {namespace}", x_exosphere_request_id=request_id)

# Find all states in the namespace
states = await State.find(
State.namespace_name == namespace
).to_list()

logger.info(f"Found {len(states)} states for namespace: {namespace}", x_exosphere_request_id=request_id)

🧹 Nitpick (assertive)

Use structured logging for info logs (avoid f-strings) and align field names

Pass contextual fields as structured args to align with other controllers and improve log parsing/correlation. Also update variable names per the previous comment.

-        logger.info(f"Fetching current states for namespace: {namespace}", x_exosphere_request_id=request_id)
+        logger.info(
+            "Fetching current states for namespace",
+            namespace=namespace_name,
+            x_exosphere_request_id=x_exosphere_request_id,
+        )
@@
-        states = await State.find(
-            State.namespace_name == namespace
-        ).to_list()
+        states = await State.find(
+            State.namespace_name == namespace_name
+        ).to_list()
@@
-        logger.info(f"Found {len(states)} states for namespace: {namespace}", x_exosphere_request_id=request_id)
+        logger.info(
+            "Found states for namespace",
+            namespace=namespace_name,
+            count=len(states),
+            x_exosphere_request_id=x_exosphere_request_id,
+        )

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In state-manager/app/controller/get_current_states.py around lines 24 to 31,
replace the f-string info logs with structured logging by passing contextual
fields as keyword args (e.g. namespace_name, x_exosphere_request_id,
state_count) instead of embedding values in the message; update local variable
names to match the earlier naming convention (use namespace_name if the previous
comment renamed namespace) and log the count of states via a field (state_count
or state_total) rather than interpolating it into the message.

Comment on lines +27 to +29
states = await State.find(
State.namespace_name == namespace
).to_list()

🧹 Nitpick (assertive)

Potentially unbounded query — consider pagination/limits

This will load all states for a namespace into memory. For large namespaces, this can be slow and memory-heavy. Consider adding optional limit/offset (or a created_after cursor) and a deterministic sort (e.g., by created_at).

Example API shape (outside this hunk):

  • Add optional params: limit: int | None = None, offset: int = 0, sort_desc: bool = True
  • Apply: query = State.find(State.namespace_name == namespace_name); query = query.sort(-State.created_at if sort_desc else State.created_at); if limit is not None: query = query.skip(offset).limit(limit)
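
The sort/skip/limit semantics described above can be sketched with a pure-Python stand-in (the `paginate` helper and its parameters are hypothetical, mirroring the suggested API shape rather than the Beanie query chain itself):

```python
from typing import Any, Dict, List, Optional

def paginate(
    states: List[Dict[str, Any]],
    limit: Optional[int] = None,
    offset: int = 0,
    sort_desc: bool = True,
) -> List[Dict[str, Any]]:
    """In-memory mirror of query.sort(...).skip(offset).limit(limit)."""
    ordered = sorted(states, key=lambda s: s["created_at"], reverse=sort_desc)
    window = ordered[offset:]
    return window if limit is None else window[:limit]

states = [{"id": i, "created_at": i} for i in range(5)]
# Newest first, skip one, take two -> created_at values 3 and 2
page = paginate(states, limit=2, offset=1)
```

In the controller the same shape would become `State.find(...).sort(-State.created_at).skip(offset).limit(limit).to_list()`, with `limit` capped server-side to protect the database.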

💡 Verification agent

🧩 Analysis chain

Ensure there’s an index on State.namespace_name

Filtering by namespace_name without an index can cause full scans. Verify an index exists on this field (via Beanie Indexed or model Settings.indexes). If missing, add one.

If no index is present, consider (in State model):

  • Field-level: namespace_name: Indexed(str)
  • Or Settings: indexes = ['namespace_name']

---

🏁 Script executed:

```shell
#!/bin/bash
# Verify indexing on namespace_name in the State model
set -euo pipefail

echo "Locating State model definition..."
rg -n -C3 -g 'state-manager/app/models/db/state.py' -P '^\s*class\s+State\b'

echo -e "\nSearching for Beanie index hints (Indexed, indexes, IndexModel)..."
rg -n -C3 -g 'state-manager/app/models/db/state.py' -P 'Indexed|indexes|IndexModel|index_models|Settings'

echo -e "\nChecking occurrences of namespace_name field..."
rg -n -C2 -g 'state-manager/app/models/db/state.py' -P '\bnamespace_name\b'
```

Length of output: 1034


Add a database index on namespace_name in the State model

We didn’t find any existing indexes on namespace_name, so all queries against that field will trigger collection scans. Please add an index to improve query performance.

• File: state-manager/app/models/db/state.py
• Location: around the definition of namespace_name (line ~10)

Suggested changes (choose one):

Option A – Field-level index

-    namespace_name: str = Field(..., description="Name of the namespace of the state")
+    namespace_name: str = Field(..., index=True, description="Name of the namespace of the state")

Option B – Model-level index

+from beanie import IndexModel
+from pymongo import ASCENDING

 class State(BaseDatabaseModel):
     node_name: str = Field(..., description="Name of the node of the state")
     namespace_name: str = Field(..., description="Name of the namespace of the state")
     identifier: str = Field(..., description="Identifier of the node for which state is created")

+    class Settings:
+        indexes = [
+            IndexModel([("namespace_name", ASCENDING)], name="idx_state_namespace")
+        ]

Ensure you import IndexModel and ASCENDING as shown.

📝 Committable suggestion


Suggested change
        states = await State.find(
            State.namespace_name == namespace
        ).to_list()
# In state-manager/app/models/db/state.py
class State(BaseDatabaseModel):
    namespace_name: str = Field(
        ...,
        index=True,
        description="Name of the namespace of the state"
    )
🤖 Prompt for AI Agents
In state-manager/app/models/db/state.py around line ~10, the State model lacks
an index on namespace_name causing collection scans; add an index either by
marking the field with index=True (Field(..., index=True)) or by adding a
model-level index: import IndexModel and ASCENDING from pymongo and add an
IndexModel([("namespace_name", ASCENDING)]) to the model's Meta/indexes list;
ensure the new imports are added at the top of the file.

Comment on lines +10 to +21
async def get_states_by_run_id(namespace: str, run_id: str, request_id: str) -> List[State]:
"""
Get all states for a given run ID in a namespace

Args:
namespace: The namespace to search in
run_id: The run ID to filter by
request_id: Request ID for logging

Returns:
List of states for the given run ID
"""

🧹 Nitpick (assertive)

Align parameter naming with the rest of controllers (namespace_name)

Other controllers consistently use namespace_name. Keeping this consistent reduces cognitive load and avoids mix-ups with namespace fields elsewhere.

Apply:

-async def get_states_by_run_id(namespace: str, run_id: str, request_id: str) -> List[State]:
+async def get_states_by_run_id(namespace_name: str, run_id: str, request_id: str) -> List[State]:
@@
-    Get all states for a given run ID in a namespace
+    Get all states for a given run ID in a namespace
@@
-        namespace: The namespace to search in
+        namespace_name: The namespace to search in
@@
-        logger.info(f"Fetching states for run ID: {run_id} in namespace: {namespace}", x_exosphere_request_id=request_id)
+        logger.info(f"Fetching states for run ID: {run_id} in namespace: {namespace_name}", x_exosphere_request_id=request_id)

And below in the query, replace namespace with namespace_name.

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In state-manager/app/controller/get_states_by_run_id.py around lines 10 to 21,
the function parameter is named `namespace` but other controllers use
`namespace_name`; rename the parameter to `namespace_name` (and update its
occurrences in the docstring and function signature) and update the query code
below to replace any `namespace` references with `namespace_name` so naming is
consistent across controllers.

Comment on lines +27 to +29
templates = await GraphTemplate.find(
GraphTemplate.namespace == namespace
).to_list()

💡 Verification agent

🧩 Analysis chain

Ensure DB index on namespace filter

This query will be frequent; ensure GraphTemplate.namespace is indexed to keep the list operation efficient.


🏁 Script executed:

#!/bin/bash
set -e
# Locate the GraphTemplate class definition
rg -n "class GraphTemplate" .

Length of output: 250


🏁 Script executed:

#!/bin/bash
set -e
# Search for namespace field usage and index definitions in GraphTemplate model
rg -n "namespace" state-manager/app/models/db/graph_template_model.py
rg -n "__table_args__" state-manager/app/models/db/graph_template_model.py
rg -n "indexes" state-manager/app/models/db/graph_template_model.py
rg -n "Index" state-manager/app/models/db/graph_template_model.py
# Preview the beginning of the model file to inspect field and meta definitions
sed -n '1,200p' state-manager/app/models/db/graph_template_model.py

Length of output: 331


Add single‐field index on namespace for efficient filtering
The model currently defines only a composite index on (name, namespace), which won’t be used for queries filtering solely by namespace. Since list_graph_templates does frequent GraphTemplate.find(GraphTemplate.namespace == namespace) calls, please add a dedicated index on namespace in the model.

• File: state-manager/app/models/db/graph_template_model.py
• Location: the __table_args__ block beneath the field definitions

Example diff:

 class GraphTemplate(BaseDatabaseModel):
     ...
     __table_args__ = (
-        Index("name", "namespace", unique=True, name="unique_name_namespace"),
+        Index("name", "namespace", unique=True, name="unique_name_namespace"),
+        Index("namespace", name="idx_graph_template_namespace"),
     )
📝 Committable suggestion


Suggested change
        templates = await GraphTemplate.find(
            GraphTemplate.namespace == namespace
        ).to_list()
# In state-manager/app/models/db/graph_template_model.py
class GraphTemplate(BaseDatabaseModel):
    ...
    __table_args__ = (
        Index("name", "namespace", unique=True, name="unique_name_namespace"),
        Index("namespace", name="idx_graph_template_namespace"),
    )
🤖 Prompt for AI Agents
In state-manager/app/controller/list_graph_templates.py around lines 27–29 the
code queries GraphTemplate by namespace only; update the model in
state-manager/app/models/db/graph_template_model.py inside the __table_args__
block to add a dedicated single-field index on the namespace column (e.g., add
an Index for namespace alongside the existing composite index), so queries
filtering solely by namespace will use the index; place the new Index entry with
the other table args and run any required migrations to apply the index to the
database.

template.get_secrets.return_value = {"secret1": "encrypted_value1", "secret2": "encrypted_value2"}
return template

@patch('app.controller.get_graph_template.GraphTemplate')

🧹 Nitpick (assertive)

Add autospec=True to patch for stronger mocks

Autospeccing helps catch attribute typos (e.g., misspelling find_one) at test time.

-    @patch('app.controller.get_graph_template.GraphTemplate')
+    @patch('app.controller.get_graph_template.GraphTemplate', autospec=True)

Apply to all patched tests in this module.

Also applies to: 84-84, 147-147, 187-187, 220-220, 242-242

🤖 Prompt for AI Agents
state-manager/tests/unit/controller/test_get_graph_template.py lines 55, 84,
147, 187, 220, 242: the test module uses unittest.mock.patch without autospec
which allows attribute typos to slip through; update each @patch decorator (the
ones shown at these lines) to include autospec=True (e.g.,
@patch('app.controller.get_graph_template.GraphTemplate', autospec=True)) so the
mocks enforce the real object's signature and attributes.
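
What autospeccing buys can be shown with `create_autospec` on a hypothetical `Template` class standing in for the real document model: a misspelled attribute fails immediately instead of silently returning a child mock.

```python
from unittest.mock import create_autospec

class Template:
    """Hypothetical stand-in for the patched document class."""
    @classmethod
    def find_one(cls, *filters):
        ...

mock_cls = create_autospec(Template)
mock_cls.find_one("ns")          # matches the real attribute: allowed

try:
    mock_cls.fnd_one("ns")       # typo: autospec raises AttributeError
    caught = False
except AttributeError:
    caught = True
```

`@patch(..., autospec=True)` gives the same protection inside the decorated tests.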

Comment on lines +75 to +81
# Assert
assert result.validation_status == GraphTemplateValidationStatus.VALID
assert result.validation_errors == []
assert result.secrets == {"secret1": True, "secret2": True}
assert result.created_at == mock_graph_template.created_at
assert result.updated_at == mock_graph_template.updated_at


🧹 Nitpick (assertive)

Optionally assert nodes content in the success path

Lightweight check that node identities are preserved.

         assert result.validation_errors == []
         assert result.secrets == {"secret1": True, "secret2": True}
         assert result.created_at == mock_graph_template.created_at
         assert result.updated_at == mock_graph_template.updated_at
+        assert [n.identifier for n in result.nodes] == ["node1", "node2"]
📝 Committable suggestion


Suggested change
        # Assert
        assert result.validation_status == GraphTemplateValidationStatus.VALID
        assert result.validation_errors == []
        assert result.secrets == {"secret1": True, "secret2": True}
        assert result.created_at == mock_graph_template.created_at
        assert result.updated_at == mock_graph_template.updated_at
        # Assert
        assert result.validation_status == GraphTemplateValidationStatus.VALID
        assert result.validation_errors == []
        assert result.secrets == {"secret1": True, "secret2": True}
        assert result.created_at == mock_graph_template.created_at
        assert result.updated_at == mock_graph_template.updated_at
        assert [n.identifier for n in result.nodes] == ["node1", "node2"]
🤖 Prompt for AI Agents
In state-manager/tests/unit/controller/test_get_graph_template.py around lines
75 to 81, add a lightweight assertion to the success path that verifies node
identities/content are preserved: extract the nodes from result (or
result.template) and assert they match the expected node ids/content from
mock_graph_template (or a small expected list/dict), e.g. check lengths and that
each expected node id is present and any key fields equal; keep it minimal and
non-flaky.

Comment on lines +84 to +106
    @patch('app.controller.get_graph_template.GraphTemplate')
    async def test_get_graph_template_not_found(
        self,
        mock_graph_template_class,
        mock_namespace,
        mock_graph_name,
        mock_request_id
    ):
        """Test when graph template is not found"""
        # Arrange
        mock_graph_template_class.find_one = AsyncMock(return_value=None)

        # Act & Assert
        with pytest.raises(HTTPException) as exc_info:
            await get_graph_template(
                mock_namespace,
                mock_graph_name,
                mock_request_id
            )

        assert exc_info.value.status_code == status.HTTP_404_NOT_FOUND
        assert exc_info.value.detail == f"Graph template {mock_graph_name} not found in namespace {mock_namespace}"


🧹 Nitpick (assertive)

Also assert the async call was awaited in the Not Found path

Ensures the async method was actually awaited even in error paths.

         with pytest.raises(HTTPException) as exc_info:
             await get_graph_template(
                 mock_namespace,
                 mock_graph_name,
                 mock_request_id
             )
 
         assert exc_info.value.status_code == status.HTTP_404_NOT_FOUND
         assert exc_info.value.detail == f"Graph template {mock_graph_name} not found in namespace {mock_namespace}"
+        mock_graph_template_class.find_one.assert_awaited_once()
🤖 Prompt for AI Agents
In state-manager/tests/unit/controller/test_get_graph_template.py around lines
84 to 106, the test for the Not Found path does not verify that the async
find_one was actually awaited; after the with pytest.raises block, add an
assertion that mock_graph_template_class.find_one was awaited with the expected
arguments (e.g., mock_namespace and mock_graph_name) — use the AsyncMock
assertion (assert_awaited or assert_awaited_with) to ensure the async call was
executed even when the function raises the HTTPException.
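A minimal sketch of the await assertion on an error path (toy coroutine and names, not the real controller):

```python
import asyncio
from unittest.mock import AsyncMock

find_one = AsyncMock(return_value=None)  # stands in for GraphTemplate.find_one

async def get_template(namespace: str, name: str):
    # mirrors the Not Found path: await the lookup, then raise
    doc = await find_one(namespace, name)
    if doc is None:
        raise LookupError(f"{name} not found in {namespace}")
    return doc

raised = False
try:
    asyncio.run(get_template("ns", "graph"))
except LookupError:
    raised = True

# passes only if the coroutine mock was actually awaited, even though we raised
find_one.assert_awaited_once_with("ns", "graph")
```

If the controller ever returned early without awaiting the lookup, assert_awaited_once_with would fail even though the HTTPException assertion still passed.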

@@ -0,0 +1,289 @@
from time import sleep
import pytest

💡 Verification agent

🧩 Analysis chain

Ensure async tests run under pytest (mark or config).

If the repo isn’t using pytest-asyncio’s asyncio_mode=auto, these async tests won’t run. Either add module-level pytestmark or verify config.

You can add a module-level mark:

+pytestmark = pytest.mark.asyncio

Or verify configuration with:


🏁 Script executed:

#!/bin/bash
# Check pytest-asyncio mode and usage
rg -nC2 -i 'asyncio_mode' pytest.ini pyproject.toml tox.ini || true
rg -nP --type=py '@pytest\.mark\.asyncio' state-manager/tests || true

Length of output: 448


Ensure async tests are configured for pytest-asyncio

We didn’t find any pytest.ini/pyproject.toml/tox.ini entries setting asyncio_mode, and this new test_upsert_graph_template.py doesn’t use @pytest.mark.asyncio. Without either a global config or per-test marker, its async tests won’t be executed.

You can fix this in one of two ways:

• Add a module-level mark in
state-manager/tests/unit/controller/test_upsert_graph_template.py

 import pytest
+pytestmark = pytest.mark.asyncio

• Or add a pytest config file (e.g. pytest.ini) at the repo root:

[pytest]
asyncio_mode = auto
🤖 Prompt for AI Agents
In state-manager/tests/unit/controller/test_upsert_graph_template.py around line
2, the test module contains async tests but lacks pytest-asyncio configuration
or the @pytest.mark.asyncio marker, so async tests won't run; fix by either
adding a module-level marker at the top of this file (e.g., add "import pytest"
then "@pytest.mark.asyncio" before async test functions or use "pytestmark =
pytest.mark.asyncio" after imports) or add a repo-level pytest config
(create/update pytest.ini or pyproject.toml) with "asyncio_mode = auto" under
[pytest] so pytest executes async tests automatically.

Comment on lines +110 to +117
# Assert
assert result.nodes == mock_upsert_request.nodes
assert result.validation_status == GraphTemplateValidationStatus.VALID
assert result.validation_errors == []
assert result.secrets == {"api_key": True, "database_url": True}
assert result.created_at == mock_existing_template.created_at
assert result.updated_at == mock_existing_template.updated_at


🧹 Nitpick (assertive)

Avoid strict updated_at equality; controller may bump timestamps.

Asserting updated_at equals the pre-update value is brittle. Either drop it or relax the expectation.

Apply:

         assert result.created_at == mock_existing_template.created_at
-        assert result.updated_at == mock_existing_template.updated_at
🤖 Prompt for AI Agents
In state-manager/tests/unit/controller/test_upsert_graph_template.py around
lines 110 to 117, the test asserts updated_at equals the pre-update value which
is brittle because the controller may bump timestamps; change the assertion to
relax it (for example, assert that result.updated_at is not None and
result.updated_at >= mock_existing_template.updated_at) or remove the updated_at
equality assertion entirely so the test no longer expects exact timestamp
equality.
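One way to write the relaxed check (illustrative, with plain datetimes on SimpleNamespace objects instead of the real template models):

```python
from datetime import datetime, timedelta, timezone
from types import SimpleNamespace

existing = SimpleNamespace(updated_at=datetime.now(timezone.utc))
# simulate the controller bumping the timestamp during the upsert
result = SimpleNamespace(updated_at=existing.updated_at + timedelta(seconds=1))

# relaxed expectation: present and not older than the pre-update value
assert result.updated_at is not None
assert result.updated_at >= existing.updated_at
```

This stays green whether or not the controller rewrites updated_at, while still catching a missing or regressed timestamp.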

@NiveditJain (Member) left a comment

Pending discussions:

async def trigger_graph(namespace_name: str, graph_name: str, body: TriggerGraphRequestModel, x_exosphere_request_id: str) -> TriggerGraphResponseModel:
    try:
        # Generate a new run ID for this graph execution
        run_id = str(uuid.uuid4())
        logger.info(f"Triggering graph {graph_name} with run_id {run_id}", x_exosphere_request_id=x_exosphere_request_id)

        # Create a CreateRequestModel with the generated run_id
        create_request = CreateRequestModel(
            run_id=run_id,
            states=body.states
        )

        # Call the existing create_states function
        create_response = await create_states(namespace_name, graph_name, create_request, x_exosphere_request_id)

        # Return the trigger response with the generated run_id
        return TriggerGraphResponseModel(
            run_id=run_id,
            status=create_response.status,
            states=create_response.states
        )

    except Exception as e:
        logger.error(f"Error triggering graph {graph_name} for namespace {namespace_name}", x_exosphere_request_id=x_exosphere_request_id)
        raise e

Should this create any state, or should it only create the root node of the tree?

NIT (will fix in an upcoming PR for trigger): we have added a lot of APIs, and currently some of them accept a parameter request_id; however, our standard is x_exosphere_request_id, so this will need to change.

@NiveditJain merged commit fe7bae3 into exospherehost:main Aug 17, 2025
4 checks passed