veRL-SGLang slower than expected (GH200) #1208

@EduardDurech

Description

Running veRL-SGLang on a GH200 (aarch64) cluster: I got the installation working, and both standalone SGLang and veRL-vLLM run fine. However, something seems to be off with veRL-SGLang, as it is significantly slower (though it uses much less memory). I also tested torch_memory_saver standalone and it works. If there is anything you would like me to debug, I can test it on our end.
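
A minimal standalone torch_memory_saver check of the kind mentioned above can look like the sketch below. It assumes the library's documented region/pause/resume API; depending on the version, the CUDA allocator hook may need to be preloaded (e.g. via LD_PRELOAD) for pause to actually release memory.

# Sketch: standalone torch_memory_saver sanity check (assumes the documented
# region()/pause()/resume() API of the library's singleton).
import torch
import torch_memory_saver

saver = torch_memory_saver.torch_memory_saver  # library-provided singleton

with saver.region():
    # ~1 GB allocation that the saver is allowed to pause
    x = torch.full((1_000_000_000,), 7, dtype=torch.uint8, device="cuda")

saver.pause()   # physical GPU memory for x should drop (verify with nvidia-smi)
saver.resume()  # memory is re-mapped; tensor contents are not guaranteed to survive
print(x.shape, x.device)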

Versions

flash_attn                        2.7.3
flash_attn_3                      3.0.0b1
flashinfer-python                 0.2.2.post1
sgl-kernel                        0.0.9.post2
sglang                            0.4.5.post3
torch_memory_saver                0.0.5
verl                              0.3.0.post1
vllm                              0.8.3

FA3, flashinfer, sgl-kernel, sglang, torch_memory_saver, verl, and vllm were built from source.

veRL Trainer

The vLLM runs use the equivalent command with actor_rollout_ref.rollout.name=vllm instead of sglang.

python3 -m verl.trainer.main_ppo \
    algorithm.adv_estimator=grpo \
    data.train_files=$(pwd)/data/train.parquet \
    data.val_files=$(pwd)/data/test.parquet \
    data.train_batch_size=1024 \
    data.max_prompt_length=1024 \
    data.max_response_length=1024 \
    data.filter_overlong_prompts=True \
    data.truncation='error' \
    actor_rollout_ref.model.path=Qwen/Qwen2.5-0.5B-Instruct \
    actor_rollout_ref.actor.optim.lr=1e-6 \
    actor_rollout_ref.model.use_remove_padding=True \
    actor_rollout_ref.actor.ppo_mini_batch_size=256 \
    actor_rollout_ref.actor.ppo_micro_batch_size_per_gpu=40 \
    actor_rollout_ref.actor.use_kl_loss=True \
    actor_rollout_ref.actor.kl_loss_coef=0.001 \
    actor_rollout_ref.actor.kl_loss_type=low_var_kl \
    actor_rollout_ref.model.enable_gradient_checkpointing=True \
    actor_rollout_ref.actor.fsdp_config.param_offload=False \
    actor_rollout_ref.actor.fsdp_config.optimizer_offload=False \
    actor_rollout_ref.rollout.log_prob_micro_batch_size_per_gpu=40 \
    actor_rollout_ref.rollout.tensor_model_parallel_size=2 \
    actor_rollout_ref.rollout.name=sglang \
    actor_rollout_ref.rollout.gpu_memory_utilization=0.6 \
    actor_rollout_ref.rollout.n=5 \
    actor_rollout_ref.rollout.enforce_eager=False \
    actor_rollout_ref.rollout.free_cache_engine=False \
    actor_rollout_ref.ref.log_prob_micro_batch_size_per_gpu=40 \
    actor_rollout_ref.ref.fsdp_config.param_offload=True \
    algorithm.kl_ctrl.kl_coef=0.001 \
    trainer.critic_warmup=0 \
    trainer.logger=['console','wandb'] \
    trainer.project_name='verl_grpo_example_gsm8k' \
    trainer.experiment_name='grpo_GSM8k_qwen0.5_test' \
    trainer.n_gpus_per_node=4 \
    trainer.nnodes=1 \
    trainer.save_freq=-1 \
    trainer.test_freq=5 \
    trainer.total_epochs=15 \
    "$@"

Blue: SGLang, Pink: FA2-vLLM, Green: FA3-vLLM (I don't know whether veRL is actually exploiting FA3; see the import check after the graphs below).

[Image: SGLang vs. vLLM comparison plot]

More graphs

[Image] [Image] [Image] [Image]
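
Regarding the FA3 question above: a quick check like the one below only confirms which kernel packages are importable in the container. The module names are assumed to mirror the pip package names in the version list (the FA3 import name may instead be flash_attn_interface), and importability does not prove the rollout engine actually dispatches to FA3, which vLLM reports in its startup log.

# Sketch: list which attention/kernel packages resolve in this environment.
import importlib.util

for mod in ("flash_attn", "flash_attn_3", "flash_attn_interface",
            "flashinfer", "sgl_kernel", "vllm", "sglang"):
    spec = importlib.util.find_spec(mod)
    print(f"{mod:20s} {'available' if spec else 'not importable'}")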

Log
++ pwd
++ pwd
+ python3 -m verl.trainer.main_ppo algorithm.adv_estimator=grpo data.train_files=/workspace/verl/data/train.parquet data.val_files=/workspace/verl/data/test.parquet data.train_batch_size=1024 data.max_prompt_length=1024 data.max_response_length=1024 data.filter_overlong_prompts=True data.truncation=error actor_rollout_ref.model.path=Qwen/Qwen2.5-0.5B-Instruct actor_rollout_ref.actor.optim.lr=1e-6 actor_rollout_ref.model.use_remove_padding=True actor_rollout_ref.actor.ppo_mini_batch_size=256 actor_rollout_ref.actor.ppo_micro_batch_size_per_gpu=40 actor_rollout_ref.actor.use_kl_loss=True actor_rollout_ref.actor.kl_loss_coef=0.001 actor_rollout_ref.actor.kl_loss_type=low_var_kl actor_rollout_ref.model.enable_gradient_checkpointing=True actor_rollout_ref.actor.fsdp_config.param_offload=False actor_rollout_ref.actor.fsdp_config.optimizer_offload=False actor_rollout_ref.rollout.log_prob_micro_batch_size_per_gpu=40 actor_rollout_ref.rollout.tensor_model_parallel_size=2 actor_rollout_ref.rollout.name=sglang actor_rollout_ref.rollout.gpu_memory_utilization=0.6 actor_rollout_ref.rollout.n=5 actor_rollout_ref.rollout.enforce_eager=False actor_rollout_ref.rollout.free_cache_engine=False actor_rollout_ref.ref.log_prob_micro_batch_size_per_gpu=40 actor_rollout_ref.ref.fsdp_config.param_offload=True algorithm.kl_ctrl.kl_coef=0.001 trainer.critic_warmup=0 'trainer.logger=[console,wandb]' trainer.project_name=verl_grpo_example_gsm8k trainer.experiment_name=grpo_GSM8k_qwen0.5_test trainer.n_gpus_per_node=4 trainer.nnodes=1 trainer.save_freq=-1 trainer.test_freq=5 trainer.total_epochs=15
2025-04-23 01:44:20,852 INFO worker.py:1852 -- Started a local Ray instance.
(TaskRunner pid=16292) {'actor_rollout_ref': {'actor': {'checkpoint': {'contents': ['model',
(TaskRunner pid=16292)                                                              'hf_model',
(TaskRunner pid=16292)                                                              'optimizer',
(TaskRunner pid=16292)                                                              'extra']},
(TaskRunner pid=16292)                                  'clip_ratio': 0.2,
(TaskRunner pid=16292)                                  'entropy_coeff': 0.001,
(TaskRunner pid=16292)                                  'fsdp_config': {'fsdp_size': -1,
(TaskRunner pid=16292)                                                  'optimizer_offload': False,
(TaskRunner pid=16292)                                                  'param_offload': False,
(TaskRunner pid=16292)                                                  'wrap_policy': {'min_num_params': 0}},
(TaskRunner pid=16292)                                  'grad_clip': 1.0,
(TaskRunner pid=16292)                                  'kl_loss_coef': 0.001,
(TaskRunner pid=16292)                                  'kl_loss_type': 'low_var_kl',
(TaskRunner pid=16292)                                  'optim': {'lr': 1e-06,
(TaskRunner pid=16292)                                            'lr_warmup_steps': -1,
(TaskRunner pid=16292)                                            'lr_warmup_steps_ratio': 0.0,
(TaskRunner pid=16292)                                            'min_lr_ratio': None,
(TaskRunner pid=16292)                                            'total_training_steps': -1,
(TaskRunner pid=16292)                                            'warmup_style': 'constant'},
(TaskRunner pid=16292)                                  'ppo_epochs': 1,
(TaskRunner pid=16292)                                  'ppo_max_token_len_per_gpu': 16384,
(TaskRunner pid=16292)                                  'ppo_micro_batch_size': None,
(TaskRunner pid=16292)                                  'ppo_micro_batch_size_per_gpu': 40,
(TaskRunner pid=16292)                                  'ppo_mini_batch_size': 256,
(TaskRunner pid=16292)                                  'shuffle': False,
(TaskRunner pid=16292)                                  'strategy': 'fsdp',
(TaskRunner pid=16292)                                  'ulysses_sequence_parallel_size': 1,
(TaskRunner pid=16292)                                  'use_dynamic_bsz': False,
(TaskRunner pid=16292)                                  'use_kl_loss': True,
(TaskRunner pid=16292)                                  'use_torch_compile': True},
(TaskRunner pid=16292)                        'hybrid_engine': True,
(TaskRunner pid=16292)                        'model': {'enable_gradient_checkpointing': True,
(TaskRunner pid=16292)                                  'external_lib': None,
(TaskRunner pid=16292)                                  'override_config': {},
(TaskRunner pid=16292)                                  'path': 'Qwen/Qwen2.5-0.5B-Instruct',
(TaskRunner pid=16292)                                  'use_remove_padding': True},
(TaskRunner pid=16292)                        'ref': {'fsdp_config': {'param_offload': True,
(TaskRunner pid=16292)                                                'wrap_policy': {'min_num_params': 0}},
(TaskRunner pid=16292)                                'log_prob_max_token_len_per_gpu': 16384,
(TaskRunner pid=16292)                                'log_prob_micro_batch_size': None,
(TaskRunner pid=16292)                                'log_prob_micro_batch_size_per_gpu': 40,
(TaskRunner pid=16292)                                'log_prob_use_dynamic_bsz': False,
(TaskRunner pid=16292)                                'ulysses_sequence_parallel_size': 1},
(TaskRunner pid=16292)                        'rollout': {'disable_log_stats': True,
(TaskRunner pid=16292)                                    'do_sample': True,
(TaskRunner pid=16292)                                    'dtype': 'bfloat16',
(TaskRunner pid=16292)                                    'enable_chunked_prefill': True,
(TaskRunner pid=16292)                                    'enforce_eager': False,
(TaskRunner pid=16292)                                    'free_cache_engine': False,
(TaskRunner pid=16292)                                    'gpu_memory_utilization': 0.6,
(TaskRunner pid=16292)                                    'ignore_eos': False,
(TaskRunner pid=16292)                                    'load_format': 'dummy_dtensor',
(TaskRunner pid=16292)                                    'log_prob_max_token_len_per_gpu': 16384,
(TaskRunner pid=16292)                                    'log_prob_micro_batch_size': None,
(TaskRunner pid=16292)                                    'log_prob_micro_batch_size_per_gpu': 40,
(TaskRunner pid=16292)                                    'log_prob_use_dynamic_bsz': False,
(TaskRunner pid=16292)                                    'max_model_len': None,
(TaskRunner pid=16292)                                    'max_num_batched_tokens': 8192,
(TaskRunner pid=16292)                                    'max_num_seqs': 1024,
(TaskRunner pid=16292)                                    'n': 5,
(TaskRunner pid=16292)                                    'name': 'sglang',
(TaskRunner pid=16292)                                    'prompt_length': 1024,
(TaskRunner pid=16292)                                    'response_length': 1024,
(TaskRunner pid=16292)                                    'temperature': 1.0,
(TaskRunner pid=16292)                                    'tensor_model_parallel_size': 2,
(TaskRunner pid=16292)                                    'top_k': -1,
(TaskRunner pid=16292)                                    'top_p': 1,
(TaskRunner pid=16292)                                    'use_fire_sampling': False,
(TaskRunner pid=16292)                                    'val_kwargs': {'do_sample': False,
(TaskRunner pid=16292)                                                   'n': 1,
(TaskRunner pid=16292)                                                   'temperature': 0,
(TaskRunner pid=16292)                                                   'top_k': -1,
(TaskRunner pid=16292)                                                   'top_p': 1.0}}},
(TaskRunner pid=16292)  'algorithm': {'adv_estimator': 'grpo',
(TaskRunner pid=16292)                'gamma': 1.0,
(TaskRunner pid=16292)                'kl_ctrl': {'kl_coef': 0.001, 'type': 'fixed'},
(TaskRunner pid=16292)                'kl_penalty': 'kl',
(TaskRunner pid=16292)                'lam': 1.0},
(TaskRunner pid=16292)  'critic': {'checkpoint': {'contents': ['model',
(TaskRunner pid=16292)                                         'hf_model',
(TaskRunner pid=16292)                                         'optimizer',
(TaskRunner pid=16292)                                         'extra']},
(TaskRunner pid=16292)             'cliprange_value': 0.5,
(TaskRunner pid=16292)             'forward_max_token_len_per_gpu': 32768,
(TaskRunner pid=16292)             'forward_micro_batch_size': None,
(TaskRunner pid=16292)             'forward_micro_batch_size_per_gpu': None,
(TaskRunner pid=16292)             'grad_clip': 1.0,
(TaskRunner pid=16292)             'model': {'enable_gradient_checkpointing': True,
(TaskRunner pid=16292)                       'external_lib': None,
(TaskRunner pid=16292)                       'fsdp_config': {'fsdp_size': -1,
(TaskRunner pid=16292)                                       'optimizer_offload': False,
(TaskRunner pid=16292)                                       'param_offload': False,
(TaskRunner pid=16292)                                       'wrap_policy': {'min_num_params': 0}},
(TaskRunner pid=16292)                       'override_config': {},
(TaskRunner pid=16292)                       'path': '~/models/deepseek-llm-7b-chat',
(TaskRunner pid=16292)                       'tokenizer_path': 'Qwen/Qwen2.5-0.5B-Instruct',
(TaskRunner pid=16292)                       'use_remove_padding': False},
(TaskRunner pid=16292)             'optim': {'lr': 1e-05,
(TaskRunner pid=16292)                       'lr_warmup_steps_ratio': 0.0,
(TaskRunner pid=16292)                       'min_lr_ratio': None,
(TaskRunner pid=16292)                       'total_training_steps': -1,
(TaskRunner pid=16292)                       'warmup_style': 'constant'},
(TaskRunner pid=16292)             'ppo_epochs': 1,
(TaskRunner pid=16292)             'ppo_max_token_len_per_gpu': 32768,
(TaskRunner pid=16292)             'ppo_micro_batch_size': None,
(TaskRunner pid=16292)             'ppo_micro_batch_size_per_gpu': None,
(TaskRunner pid=16292)             'ppo_mini_batch_size': 256,
(TaskRunner pid=16292)             'shuffle': False,
(TaskRunner pid=16292)             'strategy': 'fsdp',
(TaskRunner pid=16292)             'ulysses_sequence_parallel_size': 1,
(TaskRunner pid=16292)             'use_dynamic_bsz': False},
(TaskRunner pid=16292)  'custom_reward_function': {'name': 'compute_score', 'path': None},
(TaskRunner pid=16292)  'data': {'filter_overlong_prompts': True,
(TaskRunner pid=16292)           'image_key': 'images',
(TaskRunner pid=16292)           'max_prompt_length': 1024,
(TaskRunner pid=16292)           'max_response_length': 1024,
(TaskRunner pid=16292)           'prompt_key': 'prompt',
(TaskRunner pid=16292)           'return_raw_chat': False,
(TaskRunner pid=16292)           'return_raw_input_ids': False,
(TaskRunner pid=16292)           'shuffle': True,
(TaskRunner pid=16292)           'tokenizer': None,
(TaskRunner pid=16292)           'train_batch_size': 1024,
(TaskRunner pid=16292)           'train_files': '/workspace/verl/data/train.parquet',
(TaskRunner pid=16292)           'truncation': 'error',
(TaskRunner pid=16292)           'val_batch_size': None,
(TaskRunner pid=16292)           'val_files': '/workspace/verl/data/test.parquet'},
(TaskRunner pid=16292)  'reward_model': {'enable': False,
(TaskRunner pid=16292)                   'forward_max_token_len_per_gpu': 32768,
(TaskRunner pid=16292)                   'max_length': None,
(TaskRunner pid=16292)                   'micro_batch_size': None,
(TaskRunner pid=16292)                   'micro_batch_size_per_gpu': None,
(TaskRunner pid=16292)                   'model': {'external_lib': None,
(TaskRunner pid=16292)                             'fsdp_config': {'fsdp_size': -1,
(TaskRunner pid=16292)                                             'param_offload': False,
(TaskRunner pid=16292)                                             'wrap_policy': {'min_num_params': 0}},
(TaskRunner pid=16292)                             'input_tokenizer': 'Qwen/Qwen2.5-0.5B-Instruct',
(TaskRunner pid=16292)                             'path': '~/models/FsfairX-LLaMA3-RM-v0.1',
(TaskRunner pid=16292)                             'use_remove_padding': False},
(TaskRunner pid=16292)                   'reward_manager': 'naive',
(TaskRunner pid=16292)                   'strategy': 'fsdp',
(TaskRunner pid=16292)                   'ulysses_sequence_parallel_size': 1,
(TaskRunner pid=16292)                   'use_dynamic_bsz': False},
(TaskRunner pid=16292)  'trainer': {'balance_batch': True,
(TaskRunner pid=16292)              'critic_warmup': 0,
(TaskRunner pid=16292)              'default_hdfs_dir': None,
(TaskRunner pid=16292)              'default_local_dir': 'checkpoints/verl_grpo_example_gsm8k/grpo_GSM8k_qwen0.5_test',
(TaskRunner pid=16292)              'del_local_ckpt_after_load': False,
(TaskRunner pid=16292)              'experiment_name': 'grpo_GSM8k_qwen0.5_test',
(TaskRunner pid=16292)              'logger': ['console', 'wandb'],
(TaskRunner pid=16292)              'max_actor_ckpt_to_keep': None,
(TaskRunner pid=16292)              'max_critic_ckpt_to_keep': None,
(TaskRunner pid=16292)              'n_gpus_per_node': 4,
(TaskRunner pid=16292)              'nnodes': 1,
(TaskRunner pid=16292)              'project_name': 'verl_grpo_example_gsm8k',
(TaskRunner pid=16292)              'resume_from_path': None,
(TaskRunner pid=16292)              'resume_mode': 'auto',
(TaskRunner pid=16292)              'save_freq': -1,
(TaskRunner pid=16292)              'test_freq': 5,
(TaskRunner pid=16292)              'total_epochs': 15,
(TaskRunner pid=16292)              'total_training_steps': None,
(TaskRunner pid=16292)              'val_generations_to_log_to_wandb': 0}}
(TaskRunner pid=16292) [validate_config] All configuration checks passed successfully!
(TaskRunner pid=16292) dataset len: 7473
(TaskRunner pid=16292) filter dataset len: 7473
(TaskRunner pid=16292) dataset len: 1319
(TaskRunner pid=16292) DeprecationWarning: `ray.state.available_resources_per_node` is a private attribute and access will be removed in a future Ray version.
(TaskRunner pid=16292) filter dataset len: 1319
(TaskRunner pid=16292) Size of train dataloader: 7
(TaskRunner pid=16292) Total training steps: 105
(WorkerDict pid=17885) You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`.
(WorkerDict pid=17885) Monkey patch _flash_attention_forward in transformers.integrations.flash_attention
(WorkerDict pid=17885) [rank3]:[W423 01:45:07.460585879 ProcessGroupNCCL.cpp:4571] [PG ID 0 PG GUID 0 Rank 3]  using GPU 0 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. Specify device_ids in barrier() to force use of a particular device, or call init_process_group() with a device_id.
(WorkerDict pid=17565) Model config after override: Qwen2Config {
(WorkerDict pid=17565)   "architectures": [
(WorkerDict pid=17565)     "Qwen2ForCausalLM"
(WorkerDict pid=17565)   ],
(WorkerDict pid=17565)   "attention_dropout": 0.0,
(WorkerDict pid=17565)   "eos_token_id": 151645,
(WorkerDict pid=17565)   "hidden_act": "silu",
(WorkerDict pid=17565)   "hidden_size": 896,
(WorkerDict pid=17565)   "initializer_range": 0.02,
(WorkerDict pid=17565)   "intermediate_size": 4864,
(WorkerDict pid=17565)   "max_position_embeddings": 32768,
(WorkerDict pid=17565)   "max_window_layers": 21,
(WorkerDict pid=17565)   "model_type": "qwen2",
(WorkerDict pid=17565)   "num_attention_heads": 14,
(WorkerDict pid=17565)   "num_hidden_layers": 24,
(WorkerDict pid=17565)   "num_key_value_heads": 2,
(WorkerDict pid=17565)   "pad_token_id": 151643,
(WorkerDict pid=17565)   "rms_norm_eps": 1e-06,
(WorkerDict pid=17565)   "rope_scaling": null,
(WorkerDict pid=17565)   "rope_theta": 1000000.0,
(WorkerDict pid=17565)   "sliding_window": 32768,
(WorkerDict pid=17565)   "tie_word_embeddings": true,
(WorkerDict pid=17565)   "torch_dtype": "bfloat16",
(WorkerDict pid=17565)   "transformers_version": "4.51.0",
(WorkerDict pid=17565)   "use_cache": true,
(WorkerDict pid=17565)   "use_sliding_window": false,
(WorkerDict pid=17565)   "vocab_size": 151936
(WorkerDict pid=17565) }
(WorkerDict pid=17565) 
(WorkerDict pid=17565) Qwen2ForCausalLM contains 494.03M parameters
(WorkerDict pid=17565) wrap_policy: functools.partial(<function _or_policy at 0x40305686b7e0>, policies=[functools.partial(<function transformer_auto_wrap_policy at 0x40305686b6a0>, transformer_layer_cls={<class 'transformers.models.qwen2.modeling_qwen2.Qwen2DecoderLayer'>})])
(WorkerDict pid=17884) Monkey patch _flash_attention_forward in transformers.integrations.flash_attention [repeated 3x across cluster] (Ray deduplicates logs by default. Set RAY_DEDUP_LOGS=0 to disable log deduplication, or see https://docs.ray.io/en/master/ray-observability/user-guides/configure-logging.html#log-deduplication for more options.)
(WorkerDict pid=17565) Actor use_remove_padding=True
(WorkerDict pid=17565) Flash Attention 2.0 only supports torch.float16 and torch.bfloat16 dtypes, but the current dype in Qwen2ForCausalLM is torch.float32. You should run training or inference using Automatic Mixed-Precision via the `with torch.autocast(device_type='torch_device'):` decorator, or load the model with the `torch_dtype` argument. Example: `model = AutoModel.from_pretrained("openai/whisper-tiny", attn_implementation="flash_attention_2", torch_dtype=torch.float16)`
(WorkerDict pid=17884) You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`. [repeated 3x across cluster]
(WorkerDict pid=17884) [rank2]:[W423 01:45:09.109160841 ProcessGroupNCCL.cpp:4571] [PG ID 0 PG GUID 0 Rank 2]  using GPU 0 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect. Specify device_ids in barrier() to force use of a particular device, or call init_process_group() with a device_id. [repeated 3x across cluster]
(WorkerDict pid=17565) Model config after override: Qwen2Config {
(WorkerDict pid=17565)   "architectures": [
(WorkerDict pid=17565)     "Qwen2ForCausalLM"
(WorkerDict pid=17565)   ],
(WorkerDict pid=17565)   "attention_dropout": 0.0,
(WorkerDict pid=17565)   "eos_token_id": 151645,
(WorkerDict pid=17565)   "hidden_act": "silu",
(WorkerDict pid=17565)   "hidden_size": 896,
(WorkerDict pid=17565)   "initializer_range": 0.02,
(WorkerDict pid=17565)   "intermediate_size": 4864,
(WorkerDict pid=17565)   "max_position_embeddings": 32768,
(WorkerDict pid=17565)   "max_window_layers": 21,
(WorkerDict pid=17565)   "model_type": "qwen2",
(WorkerDict pid=17565)   "num_attention_heads": 14,
(WorkerDict pid=17565)   "num_hidden_layers": 24,
(WorkerDict pid=17565)   "num_key_value_heads": 2,
(WorkerDict pid=17565)   "pad_token_id": 151643,
(WorkerDict pid=17565)   "rms_norm_eps": 1e-06,
(WorkerDict pid=17565)   "rope_scaling": null,
(WorkerDict pid=17565)   "rope_theta": 1000000.0,
(WorkerDict pid=17565)   "sliding_window": 32768,
(WorkerDict pid=17565)   "tie_word_embeddings": true,
(WorkerDict pid=17565)   "torch_dtype": "bfloat16",
(WorkerDict pid=17565)   "transformers_version": "4.51.0",
(WorkerDict pid=17565)   "use_cache": true,
(WorkerDict pid=17565)   "use_sliding_window": false,
(WorkerDict pid=17565)   "vocab_size": 151936
(WorkerDict pid=17565) }
(WorkerDict pid=17565) 
(WorkerDict pid=17565) Qwen2ForCausalLM contains 494.03M parameters
(WorkerDict pid=17565) Total steps: 105, num_warmup_steps: 0
(WorkerDict pid=17885) wrap_policy: functools.partial(<function _or_policy at 0x4030615cb7e0>, policies=[functools.partial(<function transformer_auto_wrap_policy at 0x4030615cb6a0>, transformer_layer_cls={<class 'transformers.models.qwen2.modeling_qwen2.Qwen2DecoderLayer'>})]) [repeated 7x across cluster]
(WorkerDict pid=17883) Monkey patch _flash_attention_forward in transformers.integrations.flash_attention [repeated 4x across cluster]
(WorkerDict pid=17565) Before building sglang rollout, memory allocated (GB): 0.46010828018188477, memory reserved (GB): 2.166015625
Loading safetensors checkpoint shards:   0% Completed | 0/1 [00:00<?, ?it/s]
(WorkerDict pid=17883) Flash Attention 2.0 only supports torch.float16 and torch.bfloat16 dtypes, but the current dype in Qwen2ForCausalLM is torch.float32. You should run training or inference using Automatic Mixed-Precision via the `with torch.autocast(device_type='torch_device'):` decorator, or load the model with the `torch_dtype` argument. Example: `model = AutoModel.from_pretrained("openai/whisper-tiny", attn_implementation="flash_attention_2", torch_dtype=torch.float16)` [repeated 3x across cluster]
Loading safetensors checkpoint shards: 100% Completed | 1/1 [00:00<00:00,  6.04it/s]
Loading safetensors checkpoint shards: 100% Completed | 1/1 [00:00<00:00,  6.02it/s]
(WorkerDict pid=17565) 
Capturing batches (avail_mem=34.33 GB):   0%|          | 0/35 [00:00<?, ?it/s]
Capturing batches (avail_mem=33.84 GB):   3%|| 1/35 [00:01<00:47,  1.39s/it]
Capturing batches (avail_mem=33.66 GB):   6%|| 2/35 [00:01<00:30,  1.10it/s]
Capturing batches (avail_mem=33.49 GB):   9%|| 3/35 [00:02<00:25,  1.24it/s]
Capturing batches (avail_mem=33.33 GB):  11%|█▏        | 4/35 [00:03<00:21,  1.43it/s]
Capturing batches (avail_mem=33.17 GB):  14%|█▍        | 5/35 [00:04<00:23,  1.27it/s]
Capturing batches (avail_mem=33.01 GB):  17%|█▋        | 6/35 [00:04<00:20,  1.44it/s]
Capturing batches (avail_mem=32.86 GB):  20%|██        | 7/35 [00:05<00:20,  1.34it/s]
Capturing batches (avail_mem=32.72 GB):  23%|██▎       | 8/35 [00:06<00:21,  1.26it/s]
Capturing batches (avail_mem=32.58 GB):  26%|██▌       | 9/35 [00:07<00:21,  1.22it/s]
Capturing batches (avail_mem=32.45 GB):  29%|██▊       | 10/35 [00:07<00:19,  1.28it/s]
Capturing batches (avail_mem=32.32 GB):  31%|███▏      | 11/35 [00:08<00:16,  1.44it/s]
Loading safetensors checkpoint shards:   0% Completed | 0/1 [00:00<?, ?it/s]
Loading safetensors checkpoint shards: 100% Completed | 1/1 [00:00<00:00,  6.60it/s]
Loading safetensors checkpoint shards: 100% Completed | 1/1 [00:00<00:00,  6.58it/s]
(WorkerDict pid=17884) 
Capturing batches (avail_mem=32.20 GB):  34%|███▍      | 12/35 [00:08<00:14,  1.57it/s]
Capturing batches (avail_mem=32.08 GB):  37%|███▋      | 13/35 [00:09<00:14,  1.52it/s]
Capturing batches (avail_mem=33.99 GB):   0%|          | 0/35 [00:00<?, ?it/s]
Capturing batches (avail_mem=31.97 GB):  40%|████      | 14/35 [00:10<00:12,  1.63it/s]
Capturing batches (avail_mem=31.49 GB):  57%|█████▋    | 20/35 [00:13<00:07,  1.99it/s] [repeated 11x across cluster]
Capturing batches (avail_mem=31.11 GB):  86%|████████▌ | 30/35 [00:18<00:02,  2.01it/s] [repeated 21x across cluster]
Capturing batches (avail_mem=31.07 GB):  91%|█████████▏| 32/35 [00:19<00:01,  2.05it/s]
Capturing batches (avail_mem=31.06 GB):  94%|█████████▍| 33/35 [00:19<00:00,  2.07it/s]
Capturing batches (avail_mem=31.05 GB):  97%|█████████▋| 34/35 [00:20<00:00,  2.08it/s]
Capturing batches (avail_mem=31.05 GB): 100%|██████████| 35/35 [00:20<00:00,  1.70it/s]
(WorkerDict pid=17883) kwargs: {'n': 5, 'max_new_tokens': 1024, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'repetition_penalty': 1.0, 'temperature': 1.0, 'top_k': -1, 'top_p': 1, 'ignore_eos': False}
(WorkerDict pid=17885) Actor use_remove_padding=True [repeated 7x across cluster]
(WorkerDict pid=17885) Total steps: 105, num_warmup_steps: 0 [repeated 3x across cluster]
(WorkerDict pid=17565) /usr/local/lib/python3.12/dist-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py:690: FutureWarning: FSDP.state_dict_type() and FSDP.set_state_dict_type() are being deprecated. Please use APIs, get_state_dict() and set_state_dict(), which can support different parallelisms, FSDP1, FSDP2, DDP. API doc: https://pytorch.org/docs/stable/distributed.checkpoint.html#torch.distributed.checkpoint.state_dict.get_state_dict .Tutorial: https://pytorch.org/tutorials/recipes/distributed_checkpoint_recipe.html .
(WorkerDict pid=17565)   warnings.warn(
(WorkerDict pid=17565) After building sglang rollout, memory allocated (GB): 0.46010828018188477, memory reserved (GB): 2.166015625
(WorkerDict pid=17565) After building sharding manager, memory allocated (GB): 0.46010828018188477, memory reserved (GB): 2.166015625
Capturing batches (avail_mem=30.90 GB):  77%|███████▋  | 27/35 [00:13<00:03,  2.02it/s] [repeated 12x across cluster]
Capturing batches (avail_mem=30.76 GB):  91%|█████████▏| 32/35 [00:16<00:01,  2.03it/s]
Capturing batches (avail_mem=30.76 GB):  94%|█████████▍| 33/35 [00:16<00:00,  2.03it/s]
Capturing batches (avail_mem=30.75 GB):  97%|█████████▋| 34/35 [00:17<00:00,  2.03it/s]
(WorkerDict pid=17883) /usr/local/lib/python3.12/dist-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py:690: FutureWarning: FSDP.state_dict_type() and FSDP.set_state_dict_type() are being deprecated. Please use APIs, get_state_dict() and set_state_dict(), which can support different parallelisms, FSDP1, FSDP2, DDP. API doc: https://pytorch.org/docs/stable/distributed.checkpoint.html#torch.distributed.checkpoint.state_dict.get_state_dict .Tutorial: https://pytorch.org/tutorials/recipes/distributed_checkpoint_recipe.html .
(WorkerDict pid=17883)   warnings.warn(
Capturing batches (avail_mem=30.75 GB): 100%|██████████| 35/35 [00:17<00:00,  1.96it/s]
(WorkerDict pid=17885) /usr/local/lib/python3.12/dist-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py:690: FutureWarning: FSDP.state_dict_type() and FSDP.set_state_dict_type() are being deprecated. Please use APIs, get_state_dict() and set_state_dict(), which can support different parallelisms, FSDP1, FSDP2, DDP. API doc: https://pytorch.org/docs/stable/distributed.checkpoint.html#torch.distributed.checkpoint.state_dict.get_state_dict .Tutorial: https://pytorch.org/tutorials/recipes/distributed_checkpoint_recipe.html .
(WorkerDict pid=17885)   warnings.warn(
Capturing batches (avail_mem=30.77 GB):  89%|████████▊ | 31/35 [00:15<00:01,  2.02it/s] [repeated 4x across cluster]
(WorkerDict pid=17885) kwargs: {'n': 5, 'max_new_tokens': 1024, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'repetition_penalty': 1.0, 'temperature': 1.0, 'top_k': -1, 'top_p': 1, 'ignore_eos': False} [repeated 2x across cluster]
(TaskRunner pid=16292) wandb: Currently logged in as: <user> to https://api.wandb.ai. Use `wandb login --relogin` to force relogin
(TaskRunner pid=16292) wandb: Tracking run with wandb version 0.19.10
(TaskRunner pid=16292) wandb: Run data is saved locally in /workspace/verl/wandb/run-20250423_014617-n98y2dl8
(TaskRunner pid=16292) wandb: Run `wandb offline` to turn off syncing.
(TaskRunner pid=16292) wandb: Syncing run grpo_GSM8k_qwen0.5_test
(TaskRunner pid=16292) wandb: ⭐️ View project at https://wandb.ai/<user>/verl_grpo_example_gsm8k
(TaskRunner pid=16292) wandb: 🚀 View run at https://wandb.ai/<user>/verl_grpo_example_gsm8k/runs/n98y2dl8
(TaskRunner pid=16292) Using LocalLogger is deprecated. The constructor API will change 
(TaskRunner pid=16292) Checkpoint tracker file does not exist: %s /workspace/verl/checkpoints/verl_grpo_example_gsm8k/grpo_GSM8k_qwen0.5_test/latest_checkpointed_iteration.txt
(TaskRunner pid=16292) Training from scratch
(TaskRunner pid=16292) test_gen_batch meta info: {'eos_token_id': 151645, 'pad_token_id': 151643, 'recompute_log_prob': False, 'do_sample': False, 'validate': True}
(WorkerDict pid=17565) /usr/local/lib/python3.12/dist-packages/sglang/srt/entrypoints/verl_engine.py:160: RuntimeWarning: coroutine 'TokenizerManager.flush_cache' was never awaited
(WorkerDict pid=17565)   self._engine.tokenizer_manager.flush_cache()
(WorkerDict pid=17565) RuntimeWarning: Enable tracemalloc to get the object allocation traceback
(WorkerDict pid=17884) /usr/local/lib/python3.12/dist-packages/torch/distributed/fsdp/fully_sharded_data_parallel.py:690: FutureWarning: FSDP.state_dict_type() and FSDP.set_state_dict_type() are being deprecated. Please use APIs, get_state_dict() and set_state_dict(), which can support different parallelisms, FSDP1, FSDP2, DDP. API doc: https://pytorch.org/docs/stable/distributed.checkpoint.html#torch.distributed.checkpoint.state_dict.get_state_dict .Tutorial: https://pytorch.org/tutorials/recipes/distributed_checkpoint_recipe.html .
(WorkerDict pid=17884)   warnings.warn(
(WorkerDict pid=17884) self.sampling_params={'n': 1, 'max_new_tokens': 1024, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'repetition_penalty': 1.0, 'temperature': 0, 'top_k': -1, 'top_p': 1, 'ignore_eos': False}
(WorkerDict pid=17884) kwargs: {'n': 5, 'max_new_tokens': 1024, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'repetition_penalty': 1.0, 'temperature': 1.0, 'top_k': -1, 'top_p': 1, 'ignore_eos': False}
(WorkerDict pid=17884) /usr/local/lib/python3.12/dist-packages/sglang/srt/utils.py:888: UserWarning: The given NumPy array is not writable, and PyTorch does not support non-writable tensors. This means writing to this tensor will result in undefined behavior. You may want to copy the array to protect its data or make it writable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at /opt/pytorch/pytorch/torch/csrc/utils/tensor_numpy.cpp:203.)
(WorkerDict pid=17884)   tensor_data = torch.ByteTensor(
(WorkerDict pid=17884) /usr/local/lib/python3.12/dist-packages/sglang/srt/entrypoints/verl_engine.py:160: RuntimeWarning: coroutine 'TokenizerManager.flush_cache' was never awaited
(WorkerDict pid=17884)   self._engine.tokenizer_manager.flush_cache()
(WorkerDict pid=17884) RuntimeWarning: Enable tracemalloc to get the object allocation traceback
(TaskRunner pid=16292) validation generation end
(WorkerDict pid=17883) self.sampling_params={'n': 1, 'max_new_tokens': 1024, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'repetition_penalty': 1.0, 'temperature': 0, 'top_k': -1, 'top_p': 1, 'ignore_eos': False} [repeated 3x across cluster]
(TaskRunner pid=16292) [prompt] system
(TaskRunner pid=16292) You are Qwen, created by Alibaba Cloud. You are a helpful assistant.
(TaskRunner pid=16292) user
(TaskRunner pid=16292) Janet’s ducks lay 16 eggs per day. She eats three for breakfast every morning and bakes muffins for her friends every day with four. She sells the remainder at the farmers' market daily for $2 per fresh duck egg. How much in dollars does she make every day at the farmers' market? Let's think step by step and output the final answer after "####".
(TaskRunner pid=16292) assistant
(TaskRunner pid=16292) 
(TaskRunner pid=16292) [response] To determine how much Janet makes at the farmers' market every day, we need to follow these steps:
(TaskRunner pid=16292) 
(TaskRunner pid=16292) 1. **Calculate the total number of eggs laid by the ducks in a day:**
(TaskRunner pid=16292)    - Janet's ducks lay 16 eggs per day.
(TaskRunner pid=16292) 
(TaskRunner pid=16292) 2. **Calculate the total number of eggs Janet eats in a day:**
(TaskRunner pid=16292)    - Janet eats 3 eggs for breakfast.
(TaskRunner pid=16292)    - She eats 4 muffins for baking.
(TaskRunner pid=16292)    - Therefore, the total number of eggs she eats in a day is:
(TaskRunner pid=16292)      \[
(TaskRunner pid=16292)      3 \text{ (breakfast)} + 4 \text{ (baking)} = 7 \text{ eggs}
(TaskRunner pid=16292)      \]
(TaskRunner pid=16292) 
(TaskRunner pid=16292) 3. **Calculate the number of eggs Janet sells at the farmers' market in a day:**
(TaskRunner pid=16292)    - She sells the remainder of the eggs at the farmers' market.
(TaskRunner pid=16292)    - The total number of eggs laid in a day is 16.
(TaskRunner pid=16292)    - Subtract the number of eggs she eats from the total:
(TaskRunner pid=16292)      \[
(TaskRunner pid=16292)      16 \text{ (total eggs)} - 7 \text{ (eggs eaten)} = 9 \text{ eggs}
(TaskRunner pid=16292)      \]
(TaskRunner pid=16292) 
(TaskRunner pid=16292) 4. **Calculate the total revenue from selling the eggs at the farmers' market:**
(TaskRunner pid=16292)    - Each egg is sold for $2.
(TaskRunner pid=16292)    - The number of eggs sold is 9.
(TaskRunner pid=16292)    - Therefore, the total revenue is:
(TaskRunner pid=16292)      \[
(TaskRunner pid=16292)      9 \text{ eggs} \times 2 \text{ dollars/egg} = 18 \text{ dollars}
(TaskRunner pid=16292)      \]
(TaskRunner pid=16292) 
(TaskRunner pid=16292) Thus, Janet makes \(\boxed{18}\) dollars every day at the farmers' market.
(TaskRunner pid=16292) [ground_truth] 18
(TaskRunner pid=16292) [score] 0.0
Training Progress:   0%|          | 0/105 [00:00<?, ?it/s]
(TaskRunner pid=16292) ("Initial validation metrics: {'val/test_score/openai/gsm8k': "
(TaskRunner pid=16292)  '0.000758150113722517}')
(TaskRunner pid=16292) step:0 - val/test_score/openai/gsm8k:0.001
Training Progress:   1%|          | 1/105 [01:42<2:57:35, 102.46s/it]
(WorkerDict pid=17565) /usr/local/lib/python3.12/dist-packages/sglang/srt/utils.py:888: UserWarning: The given NumPy array is not writable, and PyTorch does not support non-writable tensors. This means writing to this tensor will result in undefined behavior. You may want to copy the array to protect its data or make it writable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at /opt/pytorch/pytorch/torch/csrc/utils/tensor_numpy.cpp:203.)
(WorkerDict pid=17565)   tensor_data = torch.ByteTensor(
(TaskRunner pid=16292) step:1 - global_seqlen/min:540297.000 - global_seqlen/max:558462.000 - global_seqlen/minmax_diff:18165.000 - global_seqlen/balanced_min:552118.000 - global_seqlen/balanced_max:552119.000 - global_seqlen/mean:552118.250 - actor/kl_loss:0.001 - actor/kl_coef:0.001 - actor/entropy_loss:0.564 - actor/pg_loss:0.006 - actor/pg_clipfrac:0.000 - actor/ppo_kl:0.000 - actor/grad_norm:0.085 - perf/mfu/actor:0.948 - perf/max_memory_allocated_gb:25.921 - perf/max_memory_reserved_gb:61.611 - perf/cpu_memory_used_gb:333.392 - actor/lr:0.000 - critic/score/mean:0.009 - critic/score/max:1.000 - critic/score/min:0.000 - critic/rewards/mean:0.009 - critic/rewards/max:1.000 - critic/rewards/min:0.000 - critic/advantages/mean:-0.001 - critic/advantages/max:1.789 - critic/advantages/min:-1.095 - critic/returns/mean:-0.001 - critic/returns/max:1.789 - critic/returns/min:-1.095 - response_length/mean:326.915 - response_length/max:1024.000 - response_length/min:3.000 - response_length/clip_ratio:0.007 - prompt_length/mean:104.428 - prompt_length/max:215.000 - prompt_length/min:65.000 - prompt_length/clip_ratio:0.000 - timing_s/gen:74.396 - timing_s/old_log_prob:6.685 - timing_s/ref:3.780 - timing_s/adv:1.153 - timing_s/update_actor:15.487 - timing_s/step:101.599 - timing_per_token_ms/adv:0.001 - timing_per_token_ms/ref:0.002 - timing_per_token_ms/update_actor:0.007 - timing_per_token_ms/gen:0.044 - perf/total_num_tokens:2208473.000 - perf/time_per_step:101.599 - perf/throughput:5434.299
(WorkerDict pid=17885) self.sampling_params={'n': 5, 'max_new_tokens': 1024, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'repetition_penalty': 1.0, 'temperature': 1.0, 'top_k': -1, 'top_p': 1, 'ignore_eos': False} [repeated 4x across cluster]
(WorkerDict pid=17565) /usr/local/lib/python3.12/dist-packages/sglang/srt/entrypoints/verl_engine.py:160: RuntimeWarning: coroutine 'TokenizerManager.flush_cache' was never awaited
(WorkerDict pid=17565)   self._engine.tokenizer_manager.flush_cache()
(WorkerDict pid=17565) RuntimeWarning: Enable tracemalloc to get the object allocation traceback
Training Progress:   2%|| 2/105 [03:19<2:49:58, 99.02s/it] 
(WorkerDict pid=17884) /usr/local/lib/python3.12/dist-packages/sglang/srt/entrypoints/verl_engine.py:160: RuntimeWarning: coroutine 'TokenizerManager.flush_cache' was never awaited
(WorkerDict pid=17884)   self._engine.tokenizer_manager.flush_cache()
(WorkerDict pid=17884) RuntimeWarning: Enable tracemalloc to get the object allocation traceback
(TaskRunner pid=16292) step:2 - global_seqlen/min:538646.000 - global_seqlen/max:556412.000 - global_seqlen/minmax_diff:17766.000 - global_seqlen/balanced_min:547190.000 - global_seqlen/balanced_max:547191.000 - global_seqlen/mean:547190.500 - actor/kl_loss:0.001 - actor/kl_coef:0.001 - actor/entropy_loss:0.554 - actor/pg_loss:0.001 - actor/pg_clipfrac:0.000 - actor/ppo_kl:0.000 - actor/grad_norm:0.069 - perf/mfu/actor:1.018 - perf/max_memory_allocated_gb:25.921 - perf/max_memory_reserved_gb:61.611 - perf/cpu_memory_used_gb:360.092 - actor/lr:0.000 - critic/score/mean:0.011 - critic/score/max:1.000 - critic/score/min:0.000 - critic/rewards/mean:0.011 - critic/rewards/max:1.000 - critic/rewards/min:0.000 - critic/advantages/mean:-0.003 - critic/advantages/max:1.789 - critic/advantages/min:-0.730 - critic/returns/mean:-0.003 - critic/returns/max:1.789 - critic/returns/min:-0.730 - response_length/mean:324.958 - response_length/max:1024.000 - response_length/min:9.000 - response_length/clip_ratio:0.004 - prompt_length/mean:102.534 - prompt_length/max:256.000 - prompt_length/min:63.000 - prompt_length/clip_ratio:0.000 - timing_s/gen:74.020 - timing_s/old_log_prob:3.566 - timing_s/ref:3.420 - timing_s/adv:1.167 - timing_s/update_actor:14.302 - timing_s/step:96.551 - timing_per_token_ms/adv:0.001 - timing_per_token_ms/ref:0.002 - timing_per_token_ms/update_actor:0.007 - timing_per_token_ms/gen:0.044 - perf/total_num_tokens:2188762.000 - perf/time_per_step:96.551 - perf/throughput:5667.355
(WorkerDict pid=17565) self.sampling_params={'n': 5, 'max_new_tokens': 1024, 'presence_penalty': 0.0, 'frequency_penalty': 0.0, 'repetition_penalty': 1.0, 'temperature': 1.0, 'top_k': -1, 'top_p': 1, 'ignore_eos': False} [repeated 4x across cluster]
