
Conversation

Contributor

@waltforme waltforme commented Feb 28, 2025

This PR aims to capture the time spent loading weights more accurately.

As we know, there are two major phases when loading a model: (A) downloading the model if it is not already in the cache, and (B) loading the weights.

Due to the lazy nature of the generators that yield tensors one by one, the downloading phase does not actually run until the load_weights method of the corresponding model begins to iterate over the generator.

So, in order to accurately capture the time of loading weights, we need to start our stopwatch after downloading is finished and before the first tensor is yielded. That's the general idea of this PR.
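
For illustration, here is a minimal sketch of the idea (not the actual vLLM code; _download_if_not_cached and _read_tensors below are hypothetical stand-ins for phases (A) and (B)):

import logging
import time
from typing import Iterator, Tuple

import torch

logger = logging.getLogger(__name__)


def _download_if_not_cached(model_id: str) -> list:
    # Hypothetical stand-in for phase (A): fetch the model if it is
    # not already in the local cache.
    return ["shard-00001.safetensors"]


def _read_tensors(files: list) -> Iterator[Tuple[str, torch.Tensor]]:
    # Hypothetical stand-in for phase (B): read tensors from the
    # downloaded files, one by one.
    for f in files:
        yield f, torch.zeros(1)


def weights_iterator(model_id: str) -> Iterator[Tuple[str, torch.Tensor]]:
    # Because this is a generator, nothing below runs until the model's
    # load_weights() starts iterating -- so the stopwatch starts here,
    # after the download has finished and before the first tensor is
    # yielded.
    files = _download_if_not_cached(model_id)
    start = time.perf_counter()
    yield from _read_tensors(files)
    logger.info("Loading weights took %.2f seconds",
                time.perf_counter() - start)

Structured this way, the "Loading weights" timing excludes the download, which can get its own log line instead.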

@github-actions

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, they only run fastcheck CI, which runs a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@mergify mergify bot added the v1 label Feb 28, 2025
Signed-off-by: Jun Duan <jun.duan.phd@outlook.com>
Member

@ywang96 ywang96 left a comment

Thanks for the PR!

I gave this PR a try with meta-llama/Llama-3.2-1B and this is what it looks like on V0.

INFO 03-01 07:56:47 [weight_utils.py:273] Time spent downloading weights for meta-llama/Llama-3.2-1B: 14.584169 seconds
INFO 03-01 07:56:47 [weight_utils.py:307] No model.safetensors.index.json found in remote.
Loading safetensors checkpoint shards:   0% Completed | 0/1 [00:00<?, ?it/s]
Loading safetensors checkpoint shards: 100% Completed | 1/1 [00:05<00:00,  5.31s/it]

INFO 03-01 07:56:52 [loader.py:423] Loading weights took 5.38 seconds
INFO 03-01 07:56:53 [model_runner.py:1117] Loading model weights took 2.3185 GB and 20.256478 seconds
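
(For what it's worth, the numbers line up: 14.58 seconds of downloading plus 5.38 seconds of loading weights accounts for most of the 20.26-second total.)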

I left a minor comment - please take a look!

     time_after_load = time.perf_counter()
     self.model_memory_usage = m.consumed_memory
-    logger.info("Loading model weights took %.4f GB and %.6f seconds",
+    logger.info("Loading model took %.4f GB and %.6f seconds",
Member

@ywang96 ywang96 Mar 1, 2025

Let's update this on V0 as well?

I also think logger.info("Model loading took %.4f GB and %.6f seconds", ...) sounds more natural and less confusing!

Contributor Author

Good point, thanks!
Latest push tweaks the wording as suggested, and contains the same change for V0.
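
For context, the measurement around this log line looks roughly like the following. This is a simplified sketch, assuming a CUDA device; device_memory_profiler is a hypothetical stand-in for vLLM's memory profiler, not its real API:

import logging
import time
from contextlib import contextmanager

import torch

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)


@contextmanager
def device_memory_profiler():
    # Hypothetical stand-in: report how much device memory the wrapped
    # block consumed (assumes a CUDA device is available).
    class _Result:
        consumed_memory = 0

    result = _Result()
    before = torch.cuda.memory_allocated()
    yield result
    result.consumed_memory = torch.cuda.memory_allocated() - before


time_before_load = time.perf_counter()
with device_memory_profiler() as m:
    model = ...  # build the model and load its weights here
time_after_load = time.perf_counter()
logger.info("Model loading took %.4f GB and %.6f seconds",
            m.consumed_memory / float(2**30),
            time_after_load - time_before_load)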

Signed-off-by: Jun Duan <jun.duan.phd@outlook.com>
Member

@ywang96 ywang96 left a comment

LGTM! Thanks for this QoL change!

@ywang96 ywang96 added the ready label (ONLY add when PR is ready to merge/full CI is needed) Mar 1, 2025
@ywang96 ywang96 merged commit 82fbeae into vllm-project:main Mar 2, 2025
47 of 49 checks passed
@waltforme waltforme deleted the loading-weights branch March 2, 2025 08:05
Akshat-Tripathi pushed a commit to krai/vllm that referenced this pull request Mar 3, 2025
lulmer pushed a commit to lulmer/vllm that referenced this pull request Apr 7, 2025
…4063)

Signed-off-by: Jun Duan <jun.duan.phd@outlook.com>
Signed-off-by: Louis Ulmer <ulmerlouis@gmail.com>
shreyankg pushed a commit to shreyankg/vllm that referenced this pull request May 3, 2025