
Pass base_dir to model files can be loaded for auto-tp/meta-tensor. #4348

Merged
awan-10 merged 1 commit into master from
amawa/fix-auto-tp-load-ckpt
Sep 15, 2023
Conversation

awan-10 (Contributor) commented Sep 15, 2023

This PR enables the auto-tp and meta-tensor features to work with all models, including Llama2.

Usage example with https://github.com/microsoft/DeepSpeedExamples/tree/master/inference/huggingface/text-generation

Run Command:
deepspeed --num_gpus=2 inference-test.py --model meta-llama/Llama-2-7b-hf --use_meta_tensor --checkpoint_path=/home/user/.cache/huggingface/hub/models--meta-llama--Llama-2-7b-hf/snapshots/6fdf2e60f86ff2481f2241aaee459f85b5b0bbb9/
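For context, the meta-tensor path in the example script builds a small JSON manifest of checkpoint shards and hands it (together with the checkpoint directory) to DeepSpeed, which is where the forwarded base_dir matters: shard names stay relative and the loader joins them with base_dir. The sketch below is an illustrative approximation under those assumptions; the function name `write_checkpoints_json`, the manifest keys, and the `"Megatron"` type string are hypothetical, not DeepSpeed's exact code.

```python
# Hypothetical sketch: build a checkpoints.json manifest from a checkpoint
# directory (the base_dir this PR forwards). Names and layout are illustrative.
import glob
import json
import os


def write_checkpoints_json(base_dir: str, out_path: str = "checkpoints.json") -> dict:
    """List the *.bin shards under base_dir and record them in a JSON manifest."""
    shards = sorted(glob.glob(os.path.join(base_dir, "*.bin")))
    manifest = {
        "type": "Megatron",  # placeholder model type, assumption
        "checkpoints": [os.path.basename(p) for p in shards],
        "version": 1.0,
        # Recording base_dir lets the loader resolve the relative shard
        # names above back to real files -- the role of this PR's fix.
        "base_dir": base_dir,
    }
    with open(out_path, "w") as f:
        json.dump(manifest, f)
    return manifest
```

With a manifest shaped like this, only base_dir needs to change when the snapshot directory moves, which is why passing it through for auto-tp/meta-tensor loading matters.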

awan-10 (Contributor, Author) commented Sep 15, 2023

FYI. @lekurile @molly-smith @mrwyattii

This is connected to the recently merged #4147, which fixed AutoTP with meta tensor support.

@awan-10 awan-10 added this pull request to the merge queue Sep 15, 2023
Merged via the queue into master with commit b9d719a Sep 15, 2023
CurryRice233 added a commit to CurryRice233/DeepSpeed that referenced this pull request Sep 28, 2023
* origin/master:
  Allow multiple inference engines in single script (deepspeedai#4384)
  adds triton flash attention2 kernel (deepspeedai#4337)
  Fix llama meta tensor loading in AutoTP and kernel injected inference (deepspeedai#3608)
  Fix min torch version (deepspeedai#4375)
  Fix multinode runner to properly append to PDSH_SSH_ARGS_APPEND (deepspeedai#4373)
  add the missing method (deepspeedai#4363)
  Openfold fix (deepspeedai#4368)
  deepspeed4science japanese blog (deepspeedai#4369)
  deepspeed4science chinese blog (deepspeedai#4366)
  Enable workflow dispatch on Torch 1.10 CI tests (deepspeedai#4361)
  Update conda env to have max pydantic version (deepspeedai#4362)
  add deepspeed4science blog link (deepspeedai#4364)
  added check to avoid undefined behavior when the input_id length is greater than max_tokens (deepspeedai#4349)
  Add the policy to run llama model from the official repo (deepspeedai#4313)
  fix deepspeed4science links (deepspeedai#4358)
  DeepSpeed4Science (deepspeedai#4357)
  Support InternLM (deepspeedai#4137)
  Pass base_dir to model files can be loaded for auto-tp/meta-tensor. (deepspeedai#4348)

4 participants