
Debug Mode for workflow developers #314

Merged
chenyushuo merged 6 commits into agentscope-ai:main from pan-x-c:feature/workflow_debug
Oct 13, 2025

Conversation

pan-x-c (Collaborator) commented Oct 10, 2025

Description

This PR adds a debug mode, which allows workflow developers to quickly test and verify their workflows.

Checklist

Please check the following items before code is ready to be reviewed.

  • Code has passed all tests
  • Docstrings have been added/updated in Google Style
  • Documentation has been updated
  • Code is ready for review

@gemini-code-assist (Contributor)

Summary of Changes

Hello @pan-x-c, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a comprehensive debug mode aimed at enhancing the developer experience for workflow creation and verification. It provides a new command-line interface for debugging specific modules, integrates a performance profiling tool, and establishes a dedicated environment for running and inspecting inference models and workflows. The changes streamline the process of identifying issues and optimizing performance during workflow development.

Highlights

  • Debug Mode Introduction: Introduced a new 'debug' command to the CLI, allowing workflow developers to quickly test and verify their workflows for either 'inference_model' or 'workflow' modules.
  • Performance Profiling: Integrated viztracer as a development dependency and within the new DebugWorkflowRunner to provide performance profiling capabilities for workflow debugging, generating HTML output files.
  • Debug Inference Model Management: Added functionality to create and retrieve debug inference models in a dedicated Ray namespace (TRINITY_DEBUG_NAMESPACE), ensuring isolated and controlled environments for debugging.
  • Refactored Environment Variable Handling: Centralized the collection of environment variables from the configuration into a new get_envs method within the Config class, improving maintainability and consistency.
  • New Debug Workflow Runner: Implemented a DebugWorkflowRunner class that extends the existing WorkflowRunner to specifically handle debugging scenarios, including reading single tasks and tracing their execution.
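
The profiling highlight above can be illustrated with a small sketch. The PR integrates viztracer, which writes an HTML trace; since viztracer may not be installed everywhere, this sketch uses Python's stdlib cProfile to show the same wrap-the-run pattern. The `run_task` and `debug_run` names here are hypothetical stand-ins, not functions from this repository.

```python
import cProfile
import io
import pstats

def run_task(task: dict) -> str:
    # Hypothetical stand-in for executing a single workflow task.
    return task["prompt"].upper()

def debug_run(task: dict) -> tuple[str, str]:
    """Run one task under a profiler and return (result, report).

    Mirrors the general pattern of a debug runner wrapping task
    execution with a tracer; the PR itself uses viztracer and
    saves an HTML output file instead of a text report.
    """
    profiler = cProfile.Profile()
    profiler.enable()
    try:
        result = run_task(task)
    finally:
        # Always stop profiling, even if the task raises.
        profiler.disable()
    buf = io.StringIO()
    pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats()
    return result, buf.getvalue()
```

With viztracer the equivalent would be a `VizTracer(output_file="result.html")` context around the task call; the try/finally shape stays the same.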

@gemini-code-assist (bot) left a comment

Code Review

This pull request introduces a debug mode for workflow developers, which is a great addition for improving the development and testing experience. The implementation looks solid, with a new debug command, supporting logic for creating and managing debug models, and a DebugWorkflowRunner. I've added a few suggestions to improve the robustness and clarity of the tests and type hints. Specifically, I've pointed out the use of a fixed time.sleep in tests, which can be flaky, and a minor type hint inconsistency.

pan-x-c (Collaborator, Author) commented Oct 10, 2025

/unittest-diff

@github-actions

Summary

Tests 📝: 69 | Passed ✅: 67 | Failed ❌: 2 | Skipped ⏭️: 0 | Other ❓: 0 | Flaky 🍂: 0 | Duration ⏱️: 913ms

Failed Tests

Failed Tests ❌ Fail Message
❌ tests/cli/launcher_test.py::TestLauncherMain::test_main_run_in_dlc The test failed in the call phase
❌ tests/cli/launcher_test.py::TestLauncherMain::test_multi_stage_run The test failed in the call phase

Tests

Test Name | Duration
tests/cli/launcher_test.py::TestLauncherMain::test_debug_mode 34ms
tests/cli/launcher_test.py::TestLauncherMain::test_main_run_command 6ms
tests/cli/launcher_test.py::TestLauncherMain::test_main_run_in_dlc 1ms
tests/cli/launcher_test.py::TestLauncherMain::test_main_studio_command 1ms
tests/cli/launcher_test.py::TestLauncherMain::test_multi_stage_run 1ms
tests/common/config_test.py::TestConfig::test_all_examples_are_valid 34ms
tests/common/config_test.py::TestConfig::test_config_flatten 1ms
tests/common/config_test.py::TestConfig::test_continue_from_checkpoint_is_valid 1ms
tests/common/config_test.py::TestConfig::test_load_default_config 3ms
tests/common/experience_test.py::TestEID::test_eid_properties 1ms
tests/common/experience_test.py::TestExperience::test_action_mask_and_logprobs_type 1ms
tests/common/experience_test.py::TestExperience::test_assertions 1ms
tests/common/experience_test.py::TestExperience::test_dpo_experience 1ms
tests/common/experience_test.py::TestExperience::test_gather 1ms
tests/common/experience_test.py::TestExperience::test_hf_datasets_conversion 1ms
tests/common/experience_test.py::TestExperience::test_multi_turn_experience 1ms
tests/common/experience_test.py::TestExperience::test_serialize_deserialize 1ms
tests/common/experience_test.py::TestExperience::test_single_turn_experience 1ms
tests/common/experience_test.py::TestExperience::test_to_dict 1ms
tests/common/experience_test.py::TestExperienceConversion::test_batch_conversion 1ms
tests/common/experience_test.py::TestExperienceConversion::test_dpo_experience_batch_conversion 1ms
tests/common/experience_test.py::TestExperienceConversion::test_experience_model_experience_conversion 1ms
tests/common/experience_test.py::TestExperienceConversion::test_gather_experiences_with_custom_fields 1ms
tests/common/experience_test.py::TestExperienceConversion::test_multiturn_experience_batch_converstion 1ms
tests/common/vllm_test.py::ModelWrapperTest_0::test_generate 55ms
tests/common/vllm_test.py::ModelWrapperTest_1::test_generate 36ms
tests/common/vllm_test.py::ModelWrapperTest_2::test_generate 48ms
tests/common/vllm_test.py::TestModelLen_0::test_model_len 20ms
tests/common/vllm_test.py::TestModelLen_1::test_model_len 21ms
tests/common/vllm_test.py::TestAPIServer::test_api 24ms
tests/common/vllm_test.py::TestAsyncAPIServer::test_api_async 24ms
tests/common/vllm_test.py::TestTokenizer::test_action_mask 1ms
tests/common/vllm_test.py::TestTokenizer::test_action_mask_with_tools 1ms
tests/common/vllm_test.py::TestAPIServerToolCall_0_deepseek_r1::test_api_tool_calls 22ms
tests/common/vllm_test.py::TestAPIServerToolCall_1::test_api_tool_calls 20ms
tests/explorer/explorer_test.py::TestExplorerCountdownEval::test_explorer 65ms
tests/explorer/explorer_test.py::TestExplorerCountdownNoEval::test_explorer 58ms
tests/explorer/explorer_test.py::TestExplorerGSM8k::test_explorer 206ms
tests/explorer/explorer_test.py::ServeTest::test_serve 70ms
tests/explorer/scheduler_test.py::SchedulerTest::test_async_workflow 5ms
tests/explorer/scheduler_test.py::SchedulerTest::test_concurrent_operations 5ms
tests/explorer/scheduler_test.py::SchedulerTest::test_get_results 23ms
tests/explorer/scheduler_test.py::SchedulerTest::test_multi_step_execution 5ms
tests/explorer/scheduler_test.py::SchedulerTest::test_non_repeatable_workflow 6ms
tests/explorer/scheduler_test.py::SchedulerTest::test_scheduler_all_methods 15ms
tests/explorer/scheduler_test.py::SchedulerTest::test_scheduler_restart_after_stop 9ms
tests/explorer/scheduler_test.py::SchedulerTest::test_split_tasks 8ms
tests/explorer/scheduler_test.py::SchedulerTest::test_stepwise_experience_eid 5ms
tests/explorer/scheduler_test.py::SchedulerTest::test_wait_all 8ms
tests/explorer/scheduler_test.py::SchedulerTest::test_wait_all_timeout_with_multi_batch 14ms
tests/explorer/step_wise_workflow_test.py::WorkflowTest::test_reward_propagation_workflow_0 1ms
tests/explorer/step_wise_workflow_test.py::WorkflowTest::test_reward_propagation_workflow_1 1ms
tests/explorer/step_wise_workflow_test.py::WorkflowTest::test_step_wise_reward_workflow_0 1ms
tests/explorer/step_wise_workflow_test.py::WorkflowTest::test_step_wise_reward_workflow_1 1ms
tests/explorer/step_wise_workflow_test.py::WorkflowTest::test_workflows_raise_error 1ms
tests/explorer/step_wise_workflow_test.py::WorkflowTest::test_workflows_stop_at_max_env_steps 1ms
tests/explorer/workflow_test.py::WorkflowTest::test_gsm8k_workflow 1ms
tests/explorer/workflow_test.py::WorkflowTest::test_math_boxed_workflow 1ms
tests/explorer/workflow_test.py::WorkflowTest::test_math_complex_workflow 1ms
tests/explorer/workflow_test.py::WorkflowTest::test_math_eval_workflow 1ms
tests/explorer/workflow_test.py::WorkflowTest::test_math_fraction_workflow 1ms
tests/explorer/workflow_test.py::WorkflowTest::test_math_workflow 1ms
tests/explorer/workflow_test.py::WorkflowTest::test_rm_gallery_workflow 1ms
tests/explorer/workflow_test.py::WorkflowTest::test_workflow_repeatable_0 1ms
tests/explorer/workflow_test.py::WorkflowTest::test_workflow_repeatable_1 1ms
tests/explorer/workflow_test.py::WorkflowTest::test_workflow_resettable_0 1ms
tests/explorer/workflow_test.py::WorkflowTest::test_workflow_resettable_1 1ms
tests/explorer/workflow_test.py::MultiTurnWorkflowTest_0::test_multi_turn_workflow 18ms
tests/explorer/workflow_test.py::MultiTurnWorkflowTest_1::test_multi_turn_workflow 19ms

Github Test Reporter by CTRF 💚

pan-x-c (Collaborator, Author) commented Oct 10, 2025

/unittest-module-cli

@github-actions

Summary

Tests 📝: 5 | Passed ✅: 5 | Failed ❌: 0 | Skipped ⏭️: 0 | Other ❓: 0 | Flaky 🍂: 0 | Duration ⏱️: 49ms

Tests

Test Name | Duration
tests/cli/launcher_test.py::TestLauncherMain::test_debug_mode 33ms
tests/cli/launcher_test.py::TestLauncherMain::test_main_run_command 6ms
tests/cli/launcher_test.py::TestLauncherMain::test_main_run_in_dlc 1ms
tests/cli/launcher_test.py::TestLauncherMain::test_main_studio_command 1ms
tests/cli/launcher_test.py::TestLauncherMain::test_multi_stage_run 2ms


@pan-x-c pan-x-c requested a review from Copilot October 10, 2025 11:07

Copilot AI left a comment


Pull Request Overview

This PR adds a debug mode for workflow developers to quickly test and verify their workflows without running the full training pipeline. The debug mode includes two components: starting inference models in a debug namespace and running workflows with performance profiling.

Key changes:

  • Added DebugWorkflowRunner class for debugging workflows with viztracer profiling
  • Implemented debug inference model creation and retrieval functions
  • Added CLI commands for debugging both inference models and workflows
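
The runner described above extends the existing WorkflowRunner to read a single task and trace its execution. The sketch below shows that subclass-and-override pattern under deliberately simplified, assumed interfaces; `WorkflowRunner`, `execute`, `read_single_task`, and the task format here are illustrative, not the repository's actual API.

```python
from typing import Any, Dict, List

class WorkflowRunner:
    """Simplified stand-in for the existing runner."""

    def __init__(self, tasks: List[Dict[str, Any]]):
        self.tasks = tasks

    def execute(self, task: Dict[str, Any]) -> Dict[str, Any]:
        # Pretend each task produces a result payload.
        return {"task_id": task["id"], "output": task["prompt"].upper()}

class DebugWorkflowRunner(WorkflowRunner):
    """Debug variant: runs one task and records a trace of steps."""

    def __init__(self, tasks: List[Dict[str, Any]]):
        super().__init__(tasks)
        self.trace: List[str] = []

    def read_single_task(self) -> Dict[str, Any]:
        # Debug mode only needs a single task to verify the workflow.
        return self.tasks[0]

    def debug_run(self) -> Dict[str, Any]:
        task = self.read_single_task()
        self.trace.append(f"read task {task['id']}")
        result = self.execute(task)
        self.trace.append(f"executed task {task['id']}")
        return result
```

Keeping the debug logic in a subclass means the production runner stays untouched while the debug path can add tracing or profiling around the same `execute` call.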

Reviewed Changes

Copilot reviewed 9 out of 9 changed files in this pull request and generated 4 comments.

Show a summary per file

  • trinity/explorer/workflow_runner.py: Added DebugWorkflowRunner class for workflow debugging with viztracer integration
  • trinity/common/models/__init__.py: Added debug inference model functions and named actor support
  • trinity/common/constants.py: Added DEBUG_NAMESPACE_ENV_VAR constant
  • trinity/common/config.py: Added get_envs() method to extract environment variables from config
  • trinity/cli/launcher.py: Added debug command with inference_model and workflow module support
  • tests/cli/launcher_test.py: Added test coverage for debug mode functionality
  • pyproject.toml: Added viztracer dependency for performance profiling
  • docs/sphinx_doc/source_zh/tutorial/develop_workflow.md: Added Chinese documentation for debug mode
  • docs/sphinx_doc/source/tutorial/develop_workflow.md: Added English documentation for debug mode
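
The get_envs() refactor mentioned above centralizes environment-variable collection in the config object. The sketch below shows the general shape of such a method; the `Config` and `ModelConfig` fields here are assumptions for illustration, not the repository's real config schema.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class ModelConfig:
    """Illustrative sub-config; the real one has many more fields."""
    api_key: Optional[str] = None

@dataclass
class Config:
    """Illustrative top-level config."""
    log_level: str = "INFO"
    model: ModelConfig = field(default_factory=ModelConfig)

    def get_envs(self) -> Dict[str, str]:
        # Collect env vars from config fields in one place, so every
        # call site (launcher, runner, workers) sees the same set.
        envs = {"LOG_LEVEL": self.log_level}
        if self.model.api_key:
            envs["API_KEY"] = self.model.api_key
        return envs
```

The benefit is the one named in the highlight: call sites no longer assemble env dicts by hand, so adding a config-driven variable means touching a single method.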
Comments suppressed due to low confidence (1)

tests/cli/launcher_test.py:1

  • The main block has been removed but the function definition for debug_inference_model_process was added outside the test class. This function should be moved inside the test class or kept as a module-level helper with proper organization.
import multiprocessing


pan-x-c (Collaborator, Author) commented Oct 10, 2025

/unittest-module-trainer

@github-actions

Summary

Tests 📝: 20 | Passed ✅: 18 | Failed ❌: 0 | Skipped ⏭️: 2 | Other ❓: 0 | Flaky 🍂: 0 | Duration ⏱️: 2.0s

Skipped

Tests Status
tests/trainer/trainer_test.py::TestMultiModalGRPO::test_trainer skipped ⏭️
tests/trainer/trainer_test.py::TestMultiModalSFT::test_trainer skipped ⏭️

Tests

Test Name | Status | Duration
tests/trainer/trainer_test.py::TestTrainerCountdown_0_fsdp::test_trainer 142ms
tests/trainer/trainer_test.py::TestTrainerCountdown_1_megatron::test_trainer 317ms
tests/trainer/trainer_test.py::TestStepAheadAsyncRL::test_trainer 60ms
tests/trainer/trainer_test.py::TestTrainerGSM8K_0_fsdp::test_trainer 54ms
tests/trainer/trainer_test.py::TestTrainerGSM8K_1_fsdp2::test_trainer 58ms
tests/trainer/trainer_test.py::TestTrainerGSM8K_2_fsdp::test_trainer 54ms
tests/trainer/trainer_test.py::TestTrainerGSM8K_3_fsdp2::test_trainer 63ms
tests/trainer/trainer_test.py::TestTrainerSFTWarmupGSM8K::test_trainer 101ms
tests/trainer/trainer_test.py::TestTrainerDPO::test_trainer 40ms
tests/trainer/trainer_test.py::TestTrainerSFT::test_trainer 36ms
tests/trainer/trainer_test.py::TestTrainerToolsSFT::test_trainer_tools 37ms
tests/trainer/trainer_test.py::TestFullyAsyncMode_0_fsdp::test_fully_async_mode 83ms
tests/trainer/trainer_test.py::TestFullyAsyncMode_1_fsdp::test_fully_async_mode 84ms
tests/trainer/trainer_test.py::TestFullyAsyncMode_2_megatron::test_fully_async_mode 175ms
tests/trainer/trainer_test.py::TestTrainerCheckpointSave_0_fsdp::test_trainer 105ms
tests/trainer/trainer_test.py::TestTrainerCheckpointSave_1_megatron::test_trainer 350ms
tests/trainer/trainer_test.py::TestTrainerMIX::test_trainer 54ms
tests/trainer/trainer_test.py::TestMultiModalGRPO::test_trainer ⏭️ 1ms
tests/trainer/trainer_test.py::TestMultiModalSFT::test_trainer ⏭️ 1ms
tests/trainer/trainer_test.py::TestTrainerLoRA::test_trainer 164ms


chenyushuo merged commit 332f37a into agentscope-ai:main on Oct 13, 2025
2 checks passed