Convert formatting to use ruff instead of yapf + isort (#26247)
Conversation
Forward fixes some of the issues skipped by vllm-project#26247 Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
Forward fixes the last of the issues skipped by vllm-project#26247 Signed-off-by: Harry Mellor <19981378+hmellor@users.noreply.github.com>
Hi @hmellor, has anyone run into an issue with the new ruff definitions? Edit: it seems this is a known issue. Appending "# noqa: E501" seems to overcome it temporarily. Would love to know how to solve it going forward.
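For reference, ruff's standard per-line suppression of the line-length rule looks like the sketch below; the code line itself is an illustrative placeholder, not vLLM code:

```python
# Illustrative placeholder, not vLLM code: "# noqa: E501" tells ruff to skip
# the line-length check (E501) for this one line only.
very_long_message = "this string is deliberately written out long enough that it would normally trip ruff's default 88-character line-length limit"  # noqa: E501
```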
I think you may have a faulty configuration. This is not a known issue and I don't like using … In your Slack message you mentioned that the …
Having just found your PR, I think what you have done is reasonable: https://github.com/vllm-project/vllm/pull/24520/files#diff-9eeca590fd99f15621897e559dba39b3ec4e7c2c65ec3c3229711689e008b5f4R435 Sorry for the false alarm!
This would do it:

```python
scheduler_config = self.vllm_config.scheduler_config
cache_hit_threshold = (
    request.cache_hit_threshold
    if request.cache_hit_threshold is not None
    else scheduler_config.global_cache_hit_threshold
)
```
Hmm, nice idea. If we're adding another assignment, one can argue that we might as well assign the global value and override it when the request provides one:

```python
cache_hit_threshold = \
    self.vllm_config.scheduler_config.global_cache_hit_threshold
# Cache hit threshold in request overrides global setting
if request.cache_hit_threshold is not None:
    cache_hit_threshold = request.cache_hit_threshold
```

I guess it's a matter of preference. The way you suggested looks more Pythonic to me, so I'll go with that. Thanks!
Fucking hell, how to create conflicts on open PRs in one step.
Some disruption was inevitable. However: …
Closes #17657
This is a massive change that would be impossible to merge during the week: any PRs merged while this one was waiting on CI would cause merge conflicts that would be time-consuming to resolve here.
Therefore, the plan for getting this PR merged quickly is as follows:
1. Run `pre-commit run -a` and note all the things `ruff` couldn't automatically fix.
2. Add `per-file-ignores` for those files to `pyproject.toml` (see the example entry below).
3. Run `pre-commit run -a` again; `ruff` and `ruff format` should now pass immediately.

Then, these ignores will be systematically removed in smaller, more manageable PRs afterwards.
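For illustration, a temporary per-file ignore in pyproject.toml can look like the sketch below, assuming a recent ruff version where lint settings live under [tool.ruff.lint]; the file path and rule code are hypothetical placeholders, not the actual entries added by this PR:

```toml
# Hypothetical sketch of a temporary per-file ignore; the path and rule code
# are placeholders rather than real entries from this PR.
[tool.ruff.lint.per-file-ignores]
"vllm/example_module.py" = ["E501"]  # skip line-length checks here until a follow-up PR cleans this file up
```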