Conversation

@JArnoldAMD (Contributor) commented Dec 12, 2024

The llm_engine logs its VllmConfig during initialization. A recent change to the logging (#10999) has resulted in this log entry containing Python object references rather than the actual details of the config, e.g.:

VllmConfig(model_config=<vllm.config.ModelConfig object at 0x7f123b55db50>, cache_config=<vllm.config.CacheConfig object at 0x7f1239087920>

This commit logs the VllmConfig using %s so that the string representation of the VllmConfig is logged, e.g.:

model='amd/Mixtral-8x22B-Instruct-v0.1-FP8-KV', speculative_config=None, ...
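
For background, Python's printf-style logging distinguishes %r, which formats the argument with repr(), from %s, which formats it with str(). Below is a minimal sketch of that difference using hypothetical stand-in classes (VllmConfigSketch and a bare ModelConfig, not vLLM's actual config types): a dataclass-generated __repr__ embeds the default reprs of nested objects, while an explicit __str__ can produce the readable summary shown above.

```python
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO, format="%(message)s")
logger = logging.getLogger(__name__)


class ModelConfig:
    """Stand-in for a nested config with no custom __repr__: by default
    it formats as '<...ModelConfig object at 0x...>'."""

    def __init__(self, model: str) -> None:
        self.model = model


@dataclass
class VllmConfigSketch:
    """Hypothetical stand-in for VllmConfig. The dataclass-generated
    __repr__ embeds the nested objects' default reprs, while the
    explicit __str__ below produces a readable summary."""

    model_config: ModelConfig

    def __str__(self) -> str:
        return f"model={self.model_config.model!r}, speculative_config=None"


config = VllmConfigSketch(ModelConfig("amd/Mixtral-8x22B-Instruct-v0.1-FP8-KV"))

# %r -> repr(config): object references, as in the broken log entry
logger.info("config: %r", config)
# e.g. config: VllmConfigSketch(model_config=<__main__.ModelConfig object at 0x7f...>)

# %s -> str(config): the readable representation this PR restores
logger.info("config: %s", config)
# e.g. config: model='amd/Mixtral-8x22B-Instruct-v0.1-FP8-KV', speculative_config=None
```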

@github-actions

👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small and essential subset of tests to catch errors quickly. You can run additional CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can do one of these:

  • Add the ready label to the PR
  • Enable auto-merge

🚀

@youkaichao (Member) left a comment

thanks for fixing it!

@youkaichao merged commit 9f3974a into vllm-project:main Dec 12, 2024
24 of 25 checks passed
gshtras pushed a commit to ROCm/vllm that referenced this pull request Dec 12, 2024
gshtras added a commit to ROCm/vllm that referenced this pull request Dec 12, 2024 (Co-authored-by: Jeremy Arnold <103538711+JArnoldAMD@users.noreply.github.com>)
sleepwalker2017 pushed a commit to sleepwalker2017/vllm that referenced this pull request Dec 13, 2024