Conversation
Note: Other AI code review bot(s) detected. CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Summary by CodeRabbit

📝 Walkthrough

Adds a new Pydantic model.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Runner
    participant CreateNextState
    participant DB
    Runner->>CreateNextState: create_next_state(state)
    loop for each next_node_template
        CreateNextState->>CreateNextState: read next_node_template.unites
        alt unites present
            CreateNextState->>CreateNextState: for each depend in unites
            CreateNextState->>DB: query pending_count for depend.identifier (NE status SUCCESS, parents filter)
            alt pending_count > 0
                CreateNextState-->>Runner: skip this next node (unites unmet)
            else
                CreateNextState->>DB: create next state
            end
        else no unites
            CreateNextState->>DB: create next state
        end
    end
```
```mermaid
sequenceDiagram
    participant Verifier
    participant Topology
    participant Logger
    participant DB
    Verifier->>Topology: compute dependency_graph
    Verifier->>Verifier: verify_inputs
    Verifier->>Verifier: verify_unites(nodes, dependency_graph, errors)
    alt exception thrown
        Verifier->>Logger: log exception
        Verifier->>DB: mark graph INVALID, append general validation error, save
    else
        alt errors exist or dependency_graph is None
            Verifier->>DB: mark graph INVALID and save
        else
            Verifier->>DB: mark graph VALID and save
        end
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Summary of Changes
Hello @NiveditJain, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request introduces a new dependency mechanism for nodes within the state management system. It allows defining explicit dependencies for nodes, ensuring that a node only executes once all its specified predecessors have completed their tasks. This enhances the control flow and reliability of graph execution by preventing premature node activation.
Highlights
- Introduced Node Dependencies: A new Depend model and a depends field in NodeTemplate allow nodes to declare explicit dependencies on other nodes by their identifiers.
- Enforced Dependency Resolution: The create_next_state task now includes logic to check the status of dependent nodes. A node will only be considered for execution if all its declared dependencies have successfully completed or are not in a pending state.
- Validated Graph Dependencies: A new verify_dependencies function was added to the graph verification process. This function ensures that all declared dependencies refer to existing nodes within the graph and are consistent with the graph's topological structure, preventing invalid dependency configurations.
Code Review
This pull request introduces a depends field to nodes, allowing them to wait for other nodes to complete before executing. The changes touch the data model, the state creation logic, and the graph verification logic. My review identified two critical bugs: one that would cause a crash due to an uninitialized variable in the state creation task, and another in the graph verification logic that uses an incorrect method to check for dependency existence. I've also suggested a refactoring to improve code clarity. Addressing these issues is crucial for the stability and correctness of the new feature.
Actionable comments posted: 3
🔭 Outside diff range comments (3)
state-manager/app/tasks/create_next_state.py (1)
90-93: Tighten placeholder parsing and type safety
- len(splits) == 0 is unreachable; split() always returns at least one element.
- Outputs may not be strings; concatenation will raise TypeError.
Refactor the parsing to handle the “no placeholder” short path and cast outputs to str.
```diff
-            splits = temporary_input.split("${{")
-
-            if len(splits) == 0:
-                next_node_input_data[field_name] = temporary_input
-                continue
+            splits = temporary_input.split("${{")
+            if len(splits) == 1:
+                next_node_input_data[field_name] = temporary_input
+                continue
@@
-            constructed_string += dependent_state.outputs[input_field] + split.split("}}")[1]
+            constructed_string += str(dependent_state.outputs[input_field]) + split.split("}}")[1]
```

Also applies to: 94-127
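Both points are easy to verify in plain Python, independent of the task code:

```python
# str.split never returns an empty list, so a `len(splits) == 0`
# guard after splitting is dead code: the "no placeholder" case
# yields exactly one element.
text = "no placeholder here"
splits = text.split("${{")
split_count = len(splits)      # 1, never 0

# Non-string outputs must be coerced before concatenation,
# otherwise `str + int` raises TypeError.
value = 42                     # e.g. a non-string node output
joined = "prefix-" + str(value)
```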
state-manager/app/tasks/verify_graph.py (2)
150-160: Potential KeyError when building dependency graph
`dependency_graph[next_node]` may not be initialized when referenced. Initialize the map up-front or use setdefault.

```diff
-async def build_dependencies_graph(graph_nodes: list[NodeTemplate]):
-    dependency_graph = {}
-    for node in graph_nodes:
-        dependency_graph[node.identifier] = set()
-        if node.next_nodes is None:
-            continue
-        for next_node in node.next_nodes:
-            dependency_graph[next_node].add(node.identifier)
-            dependency_graph[next_node] = dependency_graph[next_node] | dependency_graph[node.identifier]
-    return dependency_graph
+async def build_dependencies_graph(graph_nodes: list[NodeTemplate]):
+    dependency_graph = {node.identifier: set() for node in graph_nodes}
+    for node in graph_nodes:
+        if not node.next_nodes:
+            continue
+        for next_node in node.next_nodes:
+            dependency_graph.setdefault(next_node, set())
+            dependency_graph[next_node].add(node.identifier)
+            dependency_graph[next_node] |= dependency_graph[node.identifier]
+    return dependency_graph
```
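The initialization concern can be sketched outside the codebase. The dicts below are hypothetical stand-ins for `NodeTemplate`, and the single pass assumes the node list is topologically ordered:

```python
# Build an ancestor map: for each node, the set of nodes that must run
# before it. Pre-seeding every identifier (and using setdefault for
# forward references) avoids the KeyError the review points out.
nodes = [
    {"identifier": "a", "next_nodes": ["b"]},
    {"identifier": "b", "next_nodes": ["c"]},
    {"identifier": "c", "next_nodes": []},
]

def build_ancestors(nodes):
    ancestors = {n["identifier"]: set() for n in nodes}
    for n in nodes:
        for nxt in n["next_nodes"]:
            ancestors.setdefault(nxt, set())     # tolerate forward references
            ancestors[nxt].add(n["identifier"])  # direct parent
            ancestors[nxt] |= ancestors[n["identifier"]]  # transitive ancestors
    return ancestors
```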
192-210: Avoid mutable default in dfs_visit
`current_path: list[str] = []` shares state across calls. Use a None default and initialize inside the function.

```diff
-    def dfs_visit(current_node: str, parent_node: str | None = None, current_path: list[str] = []):
+    def dfs_visit(current_node: str, parent_node: str | None = None, current_path: list[str] | None = None):
+        if current_path is None:
+            current_path = []
```
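A minimal demonstration of the underlying Python pitfall:

```python
def visit(node, path=[]):          # buggy: the default list is created once
    path.append(node)
    return path

first = visit("a")
second = visit("b")                # same list object leaks between calls

def visit_fixed(node, path=None):  # idiomatic fix: None sentinel
    if path is None:
        path = []
    path.append(node)
    return path

fixed_first = visit_fixed("a")
fixed_second = visit_fixed("b")    # fresh list per call
```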
♻️ Duplicate comments (2)
state-manager/app/tasks/create_next_state.py (1)
54-76: Uninitialized variable and dependency gating bug (self-dependency blocks creation)
- depends_satisfied is used on Line 76 without being initialized when there are no dependencies, causing UnboundLocalError.
- More importantly, when a next node depends on the current node, the query counts the current (in-progress) state as pending, so the dependent node is never created in this invocation. Since you mark the current state SUCCESS after the loop, no subsequent call will create the dependent node either unless another mechanism re-invokes this task.
Initialize depends_satisfied to True and exclude the current state id in the “self” dependency query (or treat self as satisfied).
Refactor and de-duplicate queries:
```diff
-            pending_count = 0
-            if next_node_template.depends is not None and len(next_node_template.depends) > 0:
-                for depend in next_node_template.depends:
-                    if depend.identifier == state.identifier:
-                        pending_count = await State.find(
-                            State.identifier == depend.identifier,
-                            State.namespace_name == state.namespace_name,
-                            State.graph_name == state.graph_name,
-                            NotIn(State.status, [StateStatusEnum.SUCCESS, StateStatusEnum.EXECUTED])
-                        ).count()
-                    else:
-                        pending_count = await State.find(
-                            State.identifier == depend.identifier,
-                            State.namespace_name == state.namespace_name,
-                            State.graph_name == state.graph_name,
-                            NE(State.status, StateStatusEnum.SUCCESS)
-                        ).count()
-                    if pending_count > 0:
-                        depends_satisfied = False
-                        break
-
-            if not depends_satisfied:
-                continue
+            depends_satisfied = True
+            if next_node_template.depends:
+                for depend in next_node_template.depends:
+                    base_query = State.find(
+                        State.identifier == depend.identifier,
+                        State.namespace_name == state.namespace_name,
+                        State.graph_name == state.graph_name,
+                    )
+                    if depend.identifier == state.identifier:
+                        # Exclude the current state record so a self-dependency doesn't block creation
+                        pending_count = await base_query.find(
+                            NE(State.id, state.id),
+                            NotIn(State.status, [StateStatusEnum.SUCCESS, StateStatusEnum.EXECUTED]),
+                        ).count()
+                    else:
+                        pending_count = await base_query.find(
+                            NE(State.status, StateStatusEnum.SUCCESS),
+                        ).count()
+                    if pending_count > 0:
+                        depends_satisfied = False
+                        break
+
+            if not depends_satisfied:
+                continue
```

state-manager/app/tasks/verify_graph.py (1)
223-235: Dependency existence check is incorrect; compare identifiers, not objects
`if depend.identifier not in graph_nodes:` compares a string identifier against NodeTemplate objects, so membership never matches as intended. Build a set of identifiers and short-circuit the dependency-graph membership check if the dependency doesn't exist.

```diff
-async def verify_dependencies(graph_nodes: list[NodeTemplate], dependency_graph: dict | None, errors: list[str]):
+async def verify_dependencies(graph_nodes: list[NodeTemplate], dependency_graph: dict | None, errors: list[str]):
     if dependency_graph is None:
         return
-
-    for node in graph_nodes:
+    node_identifiers = {n.identifier for n in graph_nodes}
+    for node in graph_nodes:
         if node.depends is None or len(node.depends) == 0:
             continue
         for depend in node.depends:
-            if depend.identifier not in graph_nodes:
+            if depend.identifier not in node_identifiers:
                 errors.append(f"Node {node.identifier} depends on {depend.identifier} which does not exist")
-
-            if depend.identifier not in dependency_graph[node.identifier]:
+                continue
+            if depend.identifier not in dependency_graph[node.identifier]:
                 errors.append(f"Node {node.identifier} depends on {depend.identifier} which is not a dependency of {node.identifier}")
```
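A small stand-alone sketch of why the original membership test misfires (the `Node` class here is a hypothetical stand-in for `NodeTemplate`):

```python
class Node:
    """Hypothetical stand-in for NodeTemplate."""
    def __init__(self, identifier):
        self.identifier = identifier

graph_nodes = [Node("a"), Node("b")]

# Membership against the objects compares a str to Node instances,
# so it reports "a" as missing even though a node with that
# identifier exists:
missing_via_objects = "a" not in graph_nodes          # True (wrong answer)

# Comparing identifiers gives the intended result:
node_identifiers = {n.identifier for n in graph_nodes}
missing_via_identifiers = "a" not in node_identifiers  # False (correct)
```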
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (3)
- state-manager/app/models/node_template_model.py (1 hunks)
- state-manager/app/tasks/create_next_state.py (2 hunks)
- state-manager/app/tasks/verify_graph.py (2 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (2)
state-manager/app/tasks/verify_graph.py (3)
- state-manager/app/models/node_template_model.py (1): NodeTemplate (9-15)
- state-manager/app/models/graph_template_validation_status.py (1): GraphTemplateValidationStatus (4-8)
- state-manager/app/controller/upsert_graph_template.py (1): upsert_graph_template (12-62)

state-manager/app/tasks/create_next_state.py (2)
- state-manager/app/models/db/state.py (1): State (8-18)
- state-manager/app/models/state_status_enum.py (1): StateStatusEnum (4-14)
🪛 GitHub Actions: Check spells before PR merge
state-manager/app/tasks/create_next_state.py
[warning] 9-9: codespell: potential misspelling 'NotIn' on line 9. Suggestions: noting, not in, notion.
[warning] 62-62: codespell: potential misspelling 'NotIn' on line 62. Suggestions: noting, not in, notion.
🔇 Additional comments (1)
state-manager/app/tasks/create_next_state.py (1)
9-9: Codespell false positive on 'NotIn'

The pipeline warnings are from codespell misreading the Beanie operator name. Add "NotIn" to the ignore list for the codespell job to avoid blocking merges.

Example options:
- Add a `.codespellignore` file with a line `NotIn`
- Or configure the action with `ignore-words-list: NotIn`
Actionable comments posted: 2
🔭 Outside diff range comments (1)
state-manager/app/tasks/create_next_state.py (1)
1-13: Move `create_model` import above logger initialization and silence Codespell "NotIn" false positive

To fix the pipeline "Check spells" failure and keep imports organized:
- Add a per-file Codespell directive to ignore the Beanie operator name `NotIn`.
- Relocate the `from json_schema_to_pydantic import create_model` import so it appears with the other imports, immediately above the logger initialization.

Apply this diff to state-manager/app/tasks/create_next_state.py:

```diff
+# codespell:ignore-words-list=NotIn
 import asyncio
 import time
 from app.models.db.state import State
 from app.models.db.graph_template_model import GraphTemplate
 from app.models.graph_template_validation_status import GraphTemplateValidationStatus
 from app.models.db.registered_node import RegisteredNode
 from app.models.state_status_enum import StateStatusEnum
 from beanie.operators import NE, NotIn
 from app.singletons.logs_manager import LogsManager
+from json_schema_to_pydantic import create_model

 logger = LogsManager().get_logger()
```
♻️ Duplicate comments (2)
state-manager/app/tasks/verify_graph.py (1)
223-233: Harden dependency validation: avoid KeyError and check for unknown dependencies
`verify_dependencies` currently indexes `dependency_graph[node.identifier]` without ensuring the key exists, which can raise a KeyError if topology validation didn't visit that node (e.g., disconnected graphs). It also doesn't verify that `depend.identifier` actually exists among graph nodes, so an unknown dependency is silently treated as satisfied. This mirrors a previously raised concern; let's fix both.

Apply this diff to make the function robust and accurate:

```diff
 async def verify_dependencies(graph_nodes: list[NodeTemplate], dependency_graph: dict | None, errors: list[str]):
     if dependency_graph is None:
         return
-
-    for node in graph_nodes:
-        if node.depends is None or len(node.depends) == 0:
-            continue
-        for depend in node.depends:
-            if depend.identifier not in dependency_graph[node.identifier]:
-                errors.append(f"Node {node.identifier} depends on {depend.identifier} which is not a dependency of {node.identifier}")
-
+    # Build a set of valid identifiers to detect unknown dependencies quickly
+    node_identifiers = {n.identifier for n in graph_nodes}
+
+    for node in graph_nodes:
+        if not node.depends:
+            continue
+
+        # If the node is not in the dependency graph (e.g., disconnected), skip to avoid KeyError.
+        # Topology verification already adds connectivity errors.
+        if node.identifier not in dependency_graph:
+            continue
+
+        for depend in node.depends:
+            if depend.identifier not in node_identifiers:
+                errors.append(f"Node {node.identifier} depends on {depend.identifier} which does not exist")
+                continue
+
+            if depend.identifier not in dependency_graph.get(node.identifier, []):
+                errors.append(f"Node {node.identifier} depends on {depend.identifier} which is not a dependency of {node.identifier}")
+
```

state-manager/app/tasks/create_next_state.py (1)
59-75: Reduce duplication by building the query once and applying a status filter

Both branches differ only by the status predicate. Build a base query and apply `status_filter`. This was raised earlier; here's the concrete refactor.

Apply this diff:

```diff
-                for depend in next_node_template.depends:
-                    if depend.identifier == state.identifier:
-                        pending_count = await State.find(
-                            State.identifier == depend.identifier,
-                            State.namespace_name == state.namespace_name,
-                            State.graph_name == state.graph_name,
-                            NotIn(State.status, [StateStatusEnum.SUCCESS, StateStatusEnum.EXECUTED])
-                        ).count()
-                    else:
-                        pending_count = await State.find(
-                            State.identifier == depend.identifier,
-                            State.namespace_name == state.namespace_name,
-                            State.graph_name == state.graph_name,
-                            NE(State.status, StateStatusEnum.SUCCESS)
-                        ).count()
+                for depend in next_node_template.depends:
+                    base_query = State.find(
+                        State.identifier == depend.identifier,
+                        State.namespace_name == state.namespace_name,
+                        State.graph_name == state.graph_name,
+                    )
+                    status_filter = (
+                        NotIn(State.status, [StateStatusEnum.SUCCESS, StateStatusEnum.EXECUTED])
+                        if depend.identifier == state.identifier
+                        else NE(State.status, StateStatusEnum.SUCCESS)
+                    )
+                    pending_count = await base_query.find(status_filter).count()
```
📒 Files selected for processing (2)
- state-manager/app/tasks/create_next_state.py (2 hunks)
- state-manager/app/tasks/verify_graph.py (2 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (2)
state-manager/app/tasks/create_next_state.py (2)
- state-manager/app/models/db/state.py (1): State (8-18)
- state-manager/app/models/state_status_enum.py (1): StateStatusEnum (4-14)

state-manager/app/tasks/verify_graph.py (3)
- state-manager/app/models/node_template_model.py (1): NodeTemplate (9-15)
- state-manager/app/models/graph_template_validation_status.py (1): GraphTemplateValidationStatus (4-8)
- state-manager/app/controller/upsert_graph_template.py (1): upsert_graph_template (12-62)
🪛 GitHub Actions: Ruff check on changed files only
state-manager/app/tasks/create_next_state.py
[error] 14-14: Ruff: E402 Module level import not at top of file.
🪛 GitHub Actions: Check spells before PR merge
state-manager/app/tasks/create_next_state.py
[error] 9-9: codespell: potential misspelling 'NotIn' at line 9. Suggestions: noting, not in, notion.
[error] 66-66: codespell: potential misspelling 'NotIn' at line 66. Suggestions: noting, not in, notion.
🔇 Additional comments (1)
state-manager/app/tasks/create_next_state.py (1)
57-82: Dependency gating logic looks correct; prevents premature next-node creation

The added gating ensures a next node is only created when all declared dependencies are satisfied. Special-casing the current node to accept EXECUTED (besides SUCCESS) avoids deadlock since you mark the current state SUCCESS at the end. Good change.
Consider adding a quick unit/integration test asserting that:
- a next node with depends=[current] is not created until current reaches EXECUTED,
- a next node with depends=[other] waits for that “other” to be SUCCESS.
I can help scaffold this if desired.
Actionable comments posted: 1
🔭 Outside diff range comments (3)
state-manager/app/tasks/create_next_state.py (3)
95-98: Dead condition: len(splits) can't be 0

str.split() always returns at least one element, so this branch never executes. Use a direct substring check.

```diff
-            if len(splits) == 0:
+            if "${{" not in temporary_input:
                 next_node_input_data[field_name] = temporary_input
                 continue
```
127-127: Coerce `dependent_state.outputs` values to strings before concatenation

`dependent_state.outputs[input_field]` can be non-string (e.g., int, bool), leading to a TypeError. Wrap it in `str()` to ensure safe concatenation.

File: state-manager/app/tasks/create_next_state.py, line 127:

```diff
-                constructed_string += dependent_state.outputs[input_field] + split.split("}}")[1]
+                constructed_string += str(dependent_state.outputs[input_field]) + split.split("}}")[1]
```
91-93: Ensure required template inputs exist before access

Add a guard to raise a clear error if a required field is missing from `next_node_template.inputs`, preventing a stray KeyError at runtime (file: state-manager/app/tasks/create_next_state.py, lines 91-93):

```diff
 for field_name, _ in next_node_input_model.model_fields.items():
-    temporary_input = next_node_template.inputs[field_name]
+    if field_name not in next_node_template.inputs:
+        raise KeyError(f"Missing input '{field_name}' for node '{next_node_template.identifier}'")
+    temporary_input = next_node_template.inputs[field_name]
```
♻️ Duplicate comments (2)
state-manager/app/tasks/create_next_state.py (2)
57-81: Initialize depends_satisfied at the start of each next-node iteration — fixed

You addressed the earlier UnboundLocalError risk by setting depends_satisfied = True before the depends check. Good catch and fix.
57-81: Isolate dependency checks to a single graph execution/run (if you keep the query-based approach)

If you prefer to stick with a query approach in future, include a run/execution identifier in both the dependency query and the State model to prevent cross-run interference. This was flagged earlier and still applies.
Run this script to confirm whether the State model already has an execution/run identifier:
```bash
#!/bin/bash
# Check if a run/execution identifier exists on the State model or elsewhere
echo "Searching for execution/run identifiers..."
rg -n -C2 --type=py '\b(execution_id|run_id|flow_id|session_id)\b'
echo
echo "Showing State model to verify presence of execution_id/run_id:"
state_files=$(rg -lP 'class\s+State\b' --type=py || true)
for f in $state_files; do
  echo "---- $f ----"
  sed -n '1,200p' "$f"
done
```
📒 Files selected for processing (1)
- state-manager/app/tasks/create_next_state.py (2 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
state-manager/app/tasks/create_next_state.py (3)
- state-manager/app/models/db/state.py (1): State (8-18)
- state-manager/app/models/state_status_enum.py (1): StateStatusEnum (4-14)
- state-manager/app/tasks/verify_graph.py (1): build_dependencies_graph (150-159)
🔇 Additional comments (1)
state-manager/app/tasks/create_next_state.py (1)
9-14: Ruff E402 resolved; imports are now correctly placed before executable code

Import ordering looks good, and logger is initialized after imports. Thanks for fixing the earlier linter issue.
nk-ag
left a comment
What external dependency are we addressing here that is not captured by inputs?
@nk-ag suggest better keywords; essentially what I mean to do is: a state is created when all sub-paths of its parent are done.
Actionable comments posted: 3
♻️ Duplicate comments (2)
state-manager/app/models/node_template_model.py (1)
14-15: Prefer default_factory lists and consistent modern type hints; fix description

Using Optional[List[…]] with None defaults forces None checks everywhere. Use empty lists by default and modern builtins. Also clarify the "unites" description.

```diff
-from typing import Any, Optional, List
+from typing import Any
@@
-    next_nodes: Optional[List[str]] = Field(None, description="Next nodes to execute")
-    unites: Optional[List[Unites]] = Field(None, description="Unites of the node")
+    next_nodes: list[str] = Field(default_factory=list, description="Next nodes to execute")
+    unites: list[Unites] = Field(
+        default_factory=list,
+        description="Identifiers that must all complete before this node runs",
+    )
```

If changing defaults is risky now, at least fix the description to something like "Identifiers that must all complete before this node runs."
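The benefit of `default_factory` can be shown with the stdlib `dataclasses` analogue (the model name below is hypothetical, standing in for the Pydantic model):

```python
from dataclasses import dataclass, field

@dataclass
class NodeTemplateSketch:              # hypothetical stand-in for the Pydantic model
    identifier: str
    # A fresh list per instance: callers never need `is not None` guards,
    # and instances don't share mutable state.
    next_nodes: list = field(default_factory=list)

a = NodeTemplateSketch("a")
b = NodeTemplateSketch("b")
a.next_nodes.append("c")               # only mutates a's list
```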
state-manager/app/tasks/verify_graph.py (1)
222-233: Dependency verification misses non-existent identifiers and can misreport

You validate only that each unite is in the path list but don't first verify the target exists in the graph. Also, narrower typing of dependency_graph improves readability.

```diff
-async def verify_dependencies(graph_nodes: list[NodeTemplate], dependency_graph: dict | None, errors: list[str]):
+async def verify_dependencies(
+    graph_nodes: list[NodeTemplate],
+    dependency_graph: dict[str, list[str]] | None,
+    errors: list[str],
+):
     if dependency_graph is None:
         return
-
-    for node in graph_nodes:
-        if node.unites is None or len(node.unites) == 0:
+
+    node_identifiers = {n.identifier for n in graph_nodes}
+    for node in graph_nodes:
+        if not node.unites:
             continue
-        for depend in node.unites:
-            if depend.identifier not in dependency_graph[node.identifier]:
+        for depend in node.unites:
+            if depend.identifier not in node_identifiers:
+                errors.append(f"Node {node.identifier} depends on {depend.identifier} which does not exist")
+                continue
+            # dependency_graph is a list of ancestors for node.identifier
+            if depend.identifier not in dependency_graph.get(node.identifier, []):
                 errors.append(f"Node {node.identifier} depends on {depend.identifier} which is not a dependency of {node.identifier}")
```
📒 Files selected for processing (3)
- state-manager/app/models/node_template_model.py (1 hunks)
- state-manager/app/tasks/create_next_state.py (2 hunks)
- state-manager/app/tasks/verify_graph.py (2 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (3)
state-manager/app/models/node_template_model.py (1)
- state-manager/app/models/graph_models.py (2): UpsertGraphTemplateRequest (8-10), UpsertGraphTemplateResponse (13-19)

state-manager/app/tasks/verify_graph.py (3)
- state-manager/app/models/node_template_model.py (1): NodeTemplate (9-15)
- state-manager/app/models/graph_template_validation_status.py (1): GraphTemplateValidationStatus (4-8)
- state-manager/app/controller/upsert_graph_template.py (1): upsert_graph_template (12-62)

state-manager/app/tasks/create_next_state.py (2)
- state-manager/app/models/db/state.py (1): State (8-18)
- state-manager/app/controller/enqueue_states.py (1): enqueue_states (13-54)
🔇 Additional comments (2)
state-manager/app/tasks/create_next_state.py (2)
9-15: LGTM: import ordering and logger initialization

Import moved above logger (fixes E402) and logger initialized once. Good.
68-75: Clarify terminal-status semantics to treat non-success terminals as "done"

To avoid a failed sibling blocking the join forever, only consider a dependency pending if its status is not in the terminal set {SUCCESS, ERRORED, CANCELLED, TIMEDOUT}. You can also switch from `.count()` to `.find_one()` for a simple existence check:

```diff
-from beanie.operators import NE
+from beanie.operators import NotIn

 # …

-            pending_count = await State.find(
+            pending_exists = await State.find(
                 State.identifier == depend.identifier,
                 State.namespace_name == state.namespace_name,
                 State.graph_name == state.graph_name,
-                NE(State.status, StateStatusEnum.SUCCESS),
-                {f"parents.{depend.identifier}": parents[depend.identifier]}
-            ).count()
-            if pending_count > 0:
+                NotIn(
+                    State.status,
+                    [
+                        StateStatusEnum.SUCCESS,
+                        StateStatusEnum.ERRORED,
+                        StateStatusEnum.CANCELLED,
+                        StateStatusEnum.TIMEDOUT,
+                    ],
+                ),
+                {f"parents.{depend.identifier}": parents[depend.identifier]},
+            ).find_one() is not None
+            if pending_exists:
```

Please confirm:
- That "done" should include all terminal statuses (ERRORED, CANCELLED, TIMEDOUT, in addition to SUCCESS).
- That `beanie.operators.NotIn` is available in your environment (or adjust to the correct exclusion operator if not).
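The terminal-status rule can be sketched independently of Beanie; the `Status` enum below is a hypothetical stand-in for `StateStatusEnum`:

```python
from enum import Enum

class Status(Enum):                    # hypothetical stand-in for StateStatusEnum
    CREATED = "created"
    QUEUED = "queued"
    SUCCESS = "success"
    ERRORED = "errored"
    CANCELLED = "cancelled"
    TIMEDOUT = "timedout"

# Terminal statuses count as "done"; only non-terminal ones block the join,
# so a failed sibling no longer holds the dependent node forever.
TERMINAL = {Status.SUCCESS, Status.ERRORED, Status.CANCELLED, Status.TIMEDOUT}

def is_pending(status: Status) -> bool:
    return status not in TERMINAL
```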
Actionable comments posted: 1
♻️ Duplicate comments (1)
state-manager/app/tasks/verify_graph.py (1)
246-250: Short-circuit verify_unites when earlier validation produced errors

Running unites validation when topology/inputs have already added errors risks noisy messages and potential KeyError if the dependency_graph is partial. Keep it within the same guard as verify_inputs.

```diff
-    if dependency_graph is not None and len(errors) == 0:
-        await verify_inputs(graph_template.nodes, database_nodes, dependency_graph, errors)
-
-    await verify_unites(graph_template.nodes, dependency_graph, errors)
+    if dependency_graph is not None and not errors:
+        await verify_inputs(graph_template.nodes, database_nodes, dependency_graph, errors)
+        await verify_unites(graph_template.nodes, dependency_graph, errors)
```
📒 Files selected for processing (1)
- state-manager/app/tasks/verify_graph.py (2 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
state-manager/app/tasks/verify_graph.py (3)
- state-manager/app/models/node_template_model.py (1): NodeTemplate (9-15)
- state-manager/app/models/graph_template_validation_status.py (1): GraphTemplateValidationStatus (4-8)
- state-manager/app/controller/upsert_graph_template.py (1): upsert_graph_template (12-62)
Adding code to execute something once all nodes are done