
Loading saved HF weights errors with "Cannot find model module" #97

@tyler-griggs


What's the issue?

After saving HF model weights at intermediate steps, loading them back in a new training run (or with `vllm serve`) gives the error:

```
ValueError: Cannot find model module. 'FSDPQwen2ForCausalLM' is not a registered model in the Transformers library (only relevant if the model is meant to be in Transformers) and 'AutoModel' is not present in the model config's 'auto_map' (relevant if the model is custom).
```
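For reference, a minimal repro sketch outside of training, using vLLM's offline `LLM` API (the export path is a placeholder). vLLM resolves the model class from the `architectures` field in `config.json`, so the FSDP-prefixed name trips the registry lookup:

```python
from vllm import LLM

# Raises: ValueError: Cannot find model module. 'FSDPQwen2ForCausalLM' ...
llm = LLM(model="/path/to/exported_hf_model")  # placeholder path
```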

In the HF model config, the architecture is:

"architectures": [
  "FSDPQwen2ForCausalLM"
],

So it seems like the HF model is still FSDP-wrapped (or labeled as FSDP-wrapped).

Simply overriding the architecture from FSDPQwen2ForCausalLM to Qwen2ForCausalLM seems to work, so it might just be incorrect architecture labeling. A sketch of that override is below.
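A hedged one-off fix-up, assuming the exported HF directory is at a placeholder path; it just strips the `FSDP` prefix from each `architectures` entry in the saved `config.json`:

```python
import json
from pathlib import Path

# Placeholder path to the exported HF model directory.
config_path = Path("/path/to/exported_hf_model/config.json")
config = json.loads(config_path.read_text())

# e.g. ["FSDPQwen2ForCausalLM"] -> ["Qwen2ForCausalLM"]
config["architectures"] = [
    arch.removeprefix("FSDP") for arch in config["architectures"]
]

config_path.write_text(json.dumps(config, indent=2))
```

After this rewrite, the directory loads as a plain Qwen2ForCausalLM checkpoint.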

Script used

```bash
uv run --isolated --frozen --extra vllm -m skyrl_train.entrypoints.main_base \
  trainer.algorithm.advantage_estimator="grpo" \
  data.train_data="['${DATA_DIR}/train.parquet']" \
  data.val_data="['${DATA_DIR}/validation.parquet']" \
  trainer.policy.model.path="Qwen/Qwen2.5-3B-Instruct" \
  trainer.placement.colocate_all=true \
  trainer.strategy=fsdp2 \
  trainer.policy.optimizer_config.max_grad_norm=0.5 \
  trainer.placement.policy_num_gpus_per_node=4 \
  trainer.placement.ref_num_gpus_per_node=4 \
  generator.model_dtype=bfloat16 \
  generator.num_inference_engines=4 \
  generator.inference_engine_tensor_parallel_size=1 \
  trainer.epochs=2 \
  trainer.update_epochs_per_batch=1 \
  trainer.train_batch_size=256 \
  trainer.policy_mini_batch_size=256 \
  trainer.micro_forward_batch_size_per_gpu=4 \
  trainer.micro_train_batch_size_per_gpu=4 \
  trainer.max_prompt_length=1024 \
  generator.max_input_length=4096 \
  generator.sampling_params.max_generate_length=500 \
  trainer.policy.optimizer_config.lr=1e-6 \
  trainer.policy.optimizer_config.num_warmup_steps=10 \
  trainer.algorithm.use_kl_loss=true \
  trainer.ckpt_interval=100000 \
  generator.backend=vllm \
  generator.run_engines_locally=true \
  generator.weight_sync_backend=nccl \
  generator.async_engine=true \
  generator.batched=false \
  generator.n_samples_per_prompt=8 \
  generator.gpu_memory_utilization=0.7 \
  generator.max_turns=4 \
  generator.sampling_params.temperature=1.0 \
  generator.sampling_params.top_p=1.0 \
  +generator.sampling_params.stop='["</search>", "</answer>"]' \
  +generator.eval_sampling_params.stop='["</search>", "</answer>"]' \
  generator.use_conversation_multi_turn=true \
  generator.zero_reward_on_non_stop=true \
  trainer.logger="wandb" \
  trainer.project_name="search-llm" \
  trainer.run_name="qwen2.5-3b_em" \
  trainer.resume_mode=null \
  trainer.ckpt_path="$BASE_EXPORT_PATH/ckpt" \
  trainer.eval_batch_size=1024 \
  trainer.eval_before_train=true \
  trainer.eval_interval=20 \
  trainer.dump_eval_results=true \
  trainer.hf_save_interval=20 \
  trainer.export_path="$BASE_EXPORT_PATH"
```
