
Conversation

@leon062112
Contributor

PR Category

Feature Enhancement

Description

Copilot AI review requested due to automatic review settings January 15, 2026 14:32

paddle-bot bot commented Jan 15, 2026

Thanks for your contribution!


Copilot AI left a comment


Pull request overview

This pull request refactors the graph decomposer logic by extracting it from run_decomposer_for_single_model into a new standalone wrapper module. The wrapper provides a cleaner separation of concerns and better modularity.

Changes:

  • Created a new local_graph_decomposer_wrapper.py module that encapsulates the decorator config building and subprocess execution logic
  • Simplified run_decomposer_for_single_model by delegating to the new wrapper module
  • Maintained the same functionality while improving code organization

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 5 comments.

Files reviewed:

  • graph_net/local_graph_decomposer_wrapper.py: New wrapper module that builds the decorator config and executes the graph decomposition subprocess
  • graph_net/subgraph_decompose_and_evaluation_step.py: Refactored to use the new wrapper module instead of building the decorator config inline

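To make the delegation concrete, here is a rough sketch of what the caller side could look like after the refactor. The wrapper's command-line flag names (--framework, --model-name, --model-path, --output-dir, --split-positions-json), the leading parameters of run_decomposer_for_single_model, and the log handling are all assumptions inferred from the args attributes used in the wrapper's main() shown later in this review, not the PR's actual interface.

# Hypothetical caller-side sketch only; flag names and log handling are assumptions.
import json
import subprocess
import sys
from typing import List


def run_decomposer_for_single_model(
    framework: str,
    model_name: str,
    model_path: str,
    split_positions: List[int],
    output_dir: str,
    log_path: str,
) -> bool:
    """Decomposes a single model by delegating to the wrapper module."""
    cmd = [
        sys.executable,
        "-m",
        "graph_net.local_graph_decomposer_wrapper",
        "--framework", framework,
        "--model-name", model_name,
        "--model-path", model_path,
        "--output-dir", output_dir,
        "--split-positions-json", json.dumps(split_positions),
    ]
    # Capture the subprocess output in the per-model log file.
    with open(log_path, "w") as log_file:
        result = subprocess.run(cmd, stdout=log_file, stderr=subprocess.STDOUT, text=True)
    return result.returncode == 0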

    return decorator_config


def main():

Copilot AI Jan 15, 2026


The main function lacks a docstring explaining its purpose and behavior. Adding documentation would improve code clarity and maintainability.

Suggested change
-def main():
+def main():
+    """Entry point for running a model with the local graph decomposer.
+
+    This function expects command-line arguments to have been parsed into the
+    module-level ``args`` variable. It:
+
+    1. Parses and validates the JSON string of tensor split positions.
+    2. Builds a decorator configuration for the specified framework and model.
+    3. Base64-encodes the configuration and passes it to the framework-specific
+       ``run_model`` module as a decorator configuration argument.
+    4. Executes the model in a subprocess and exits the current process with
+       the subprocess's return code.
+    """

Comment on lines +9 to +14


def convert_json_to_b64_string(config) -> str:
return base64.b64encode(json.dumps(config).encode()).decode()



Copilot AI Jan 15, 2026


The convert_json_to_b64_string function is duplicated from subgraph_decompose_and_evaluation_step.py. This creates code duplication and maintainability issues. Consider importing this function from the existing module or creating a shared utility module for common encoding/decoding functions.

Suggested change
-def convert_json_to_b64_string(config) -> str:
-    return base64.b64encode(json.dumps(config).encode()).decode()
+from graph_net.subgraph_decompose_and_evaluation_step import convert_json_to_b64_string
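Alternatively, the encoding/decoding helpers could live in a small shared module that both files import. The module name below (graph_net/config_codec.py) and the decode helper are hypothetical, sketched only to illustrate that option.

# Hypothetical shared module, e.g. graph_net/config_codec.py (name is an assumption).
import base64
import json


def convert_json_to_b64_string(config) -> str:
    """Serialize a config dict to JSON and base64-encode it for CLI transport."""
    return base64.b64encode(json.dumps(config).encode()).decode()


def convert_b64_string_to_json(config_b64: str) -> dict:
    """Inverse helper: decode the base64 string and parse the JSON back into a dict."""
    return json.loads(base64.b64decode(config_b64.encode()).decode())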

    split_positions: List[int],
    output_dir: str,
    log_path: str,
) -> bool:

Copilot AI Jan 15, 2026


The docstring for the run_decomposer_for_single_model function was removed during refactoring. The original docstring "Decomposes a single model." should be retained to maintain documentation consistency.

Suggested change
-) -> bool:
+) -> bool:
+    """Decomposes a single model."""

Comment on lines +53 to +79
def main():
    split_positions = json.loads(args.split_positions_json)
    if not isinstance(split_positions, list) or not all(
        isinstance(x, int) for x in split_positions
    ):
        raise ValueError(f"Invalid split positions: {split_positions}")

    decorator_config = build_decorator_config(
        framework=args.framework,
        model_name=args.model_name,
        output_dir=args.output_dir,
        split_positions=split_positions,
    )
    decorator_config_b64 = convert_json_to_b64_string(decorator_config)

    cmd = [
        sys.executable,
        "-m",
        f"graph_net.{args.framework}.run_model",
        "--model-path",
        args.model_path,
        "--decorator-config",
        decorator_config_b64,
    ]

    result = subprocess.run(cmd, text=True)
    sys.exit(result.returncode)

Copilot AI Jan 15, 2026


The main function references the args variable before it is defined in this module. The args variable is only assigned on line 92 within the if __name__ == "__main__" block, and main is called on line 93, so args exists only as a module-level global when the file is run as a script; importing the module and calling main directly would raise a NameError.

The main function should accept args as a parameter.

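A minimal sketch of that fix, assuming the argument names implied by the attributes main() reads (the actual argparse definitions are not shown in this diff):

import argparse


def main(args):
    # Existing body unchanged; it now reads args from the parameter instead of
    # relying on a module-level variable defined in the __main__ block.
    ...


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--framework", required=True)
    parser.add_argument("--model-name", required=True)
    parser.add_argument("--model-path", required=True)
    parser.add_argument("--output-dir", required=True)
    parser.add_argument("--split-positions-json", required=True)
    main(parser.parse_args())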
    model_name: str,
    output_dir: str,
    split_positions: List[int],
) -> dict:

Copilot AI Jan 15, 2026


The build_decorator_config function lacks a docstring explaining its purpose, parameters, and return value. Adding documentation would improve code maintainability.

Suggested change
-) -> dict:
+) -> dict:
+    """
+    Build the decorator configuration for running a model with the graph decomposer.
+
+    The returned dictionary is encoded and passed to the framework-specific
+    `run_model` entry point to configure how the graph extraction and optional
+    post-processing should be performed.
+
+    :param framework: Name of the ML framework (e.g. ``"paddle"`` or ``"torch"``),
+        used to select framework-specific extractor paths and optional post-processors.
+    :param model_name: Logical name of the model to include in the decorator config.
+    :param output_dir: Directory where the graph decomposer should write its outputs.
+    :param split_positions: List of layer or block indices at which the model
+        computation should be split during graph decomposition.
+    :return: A nested dictionary describing the decorator, custom extractor, and,
+        for some frameworks (e.g. ``"paddle"``), additional post-extraction processing.
+    """

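For orientation, the kind of nested structure such a function might return could look roughly like the sketch below; every key and value here is an assumption for illustration, not the PR's actual schema.

# Purely illustrative shape; key names and values are assumptions.
decorator_config = {
    "decorator": {
        "model_name": "example_model",
        "output_dir": "/tmp/decompose_out",
        "split_positions": [4, 8, 12],
    },
    "custom_extractor": {
        "framework": "paddle",  # selects framework-specific extractor paths
    },
    # For some frameworks (e.g. paddle) an extra post-extraction processing
    # section may be added, per the docstring above.
}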