
Conversation

@tlrmchlsmth (Member) commented Aug 4, 2025

Purpose

The test is currently failing with:

```
[2025-08-03T23:44:26Z] (EngineCore_0 pid=14405) ERROR 08-03 16:44:26 [core.py:683] ValueError: Failed to find a kernel that can implement the WNA16 linear layer. Reasons:
[2025-08-03T23:44:26Z] (EngineCore_0 pid=14405) ERROR 08-03 16:44:26 [core.py:683] MacheteLinearKernel requires capability 90, current compute capability is 89
[2025-08-03T23:44:26Z] (EngineCore_0 pid=14405) ERROR 08-03 16:44:26 [core.py:683]  AllSparkLinearKernel cannot implement due to: For Ampere GPU, AllSpark does not support group_size = 128. Only group_size = -1 are supported.
[2025-08-03T23:44:26Z] (EngineCore_0 pid=14405) ERROR 08-03 16:44:26 [core.py:683]  MarlinLinearKernel cannot implement due to: Weight output_size_per_partition = 6840 is not divisible by min_thread_n = 64. Consider reducing tensor_parallel_size or running with --quantization gptq.
[2025-08-03T23:44:26Z] (EngineCore_0 pid=14405) ERROR 08-03 16:44:26 [core.py:683]  Dynamic4bitLinearKernel cannot implement due to: Only CPU is supported
[2025-08-03T23:44:26Z] (EngineCore_0 pid=14405) ERROR 08-03 16:44:26 [core.py:683]  BitBLASLinearKernel cannot implement due to: bitblas is not installed. Please install bitblas by running `pip install bitblas>=0.1.0`
[2025-08-03T23:44:26Z] (EngineCore_0 pid=14405) ERROR 08-03 16:44:26 [core.py:683]  ConchLinearKernel cannot implement due to: conch-triton-kernels is not installed, please install it via `pip install conch-triton-kernels` and try again!
[2025-08-03T23:44:26Z] (EngineCore_0 pid=14405) ERROR 08-03 16:44:26 [core.py:683]  ExllamaLinearKernel cannot implement due to: Exllama only supports float16 activations
```
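In other words, vLLM walks an ordered list of candidate kernels, asks each whether it can implement the quantized linear layer, collects every rejection reason, and raises a ValueError only when no candidate qualifies. Below is a minimal sketch of that selection pattern, restricted to the two rejections most relevant to this failure; the class and method names are illustrative, not vLLM's actual API.

```python
# Minimal sketch of the kernel-selection pattern implied by the log above.
# Class and method names are illustrative, not vLLM's actual API.

class MacheteLike:
    name = "MacheteLinearKernel"
    MIN_CAPABILITY = 90  # Hopper and newer; the CI GPU reports 89 (Ada)

    def can_implement(self, cfg):
        cap = cfg["compute_capability"]
        if cap < self.MIN_CAPABILITY:
            return False, (f"requires capability {self.MIN_CAPABILITY}, "
                           f"current compute capability is {cap}")
        return True, None


class MarlinLike:
    name = "MarlinLinearKernel"
    MIN_THREAD_N = 64

    def can_implement(self, cfg):
        n = cfg["output_size_per_partition"]
        if n % self.MIN_THREAD_N:  # 6840 % 64 == 56, so Marlin is rejected
            return False, (f"Weight output_size_per_partition = {n} is not "
                           f"divisible by min_thread_n = {self.MIN_THREAD_N}")
        return True, None


def choose_kernel(candidates, cfg):
    """Return the first kernel that can implement the layer, else raise."""
    reasons = []
    for cand in candidates:
        ok, reason = cand.can_implement(cfg)
        if ok:
            return cand
        reasons.append(f"{cand.name} cannot implement due to: {reason}")
    raise ValueError("Failed to find a kernel that can implement the "
                     "WNA16 linear layer. Reasons:\n  " + "\n  ".join(reasons))


# The failing CI configuration from the log: capability-89 GPU, 6840-wide partition.
cfg = {"compute_capability": 89, "output_size_per_partition": 6840}
choose_kernel([MacheteLike(), MarlinLike()], cfg)  # raises ValueError
```

On this capability-89 machine the Machete and Marlin paths are both closed (6840 % 64 == 56), and the remaining candidates fail for their own reasons: missing optional packages, CPU-only support, or fp16-only activations.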

This PR switches the test to a model that should be more broadly supported.

Test history is here: https://buildkite.com/organizations/vllm/analytics/suites/ci-1/tests/7b01abb7-8064-8f64-9362-7eceb5b9280e?period=7days

Test Plan

Test Result

(Optional) Documentation Update

…tor_hashes

Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com>
@tlrmchlsmth requested a review from mgoin, August 4, 2025 02:10
github-actions bot commented Aug 4, 2025

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small, essential subset of tests to catch errors quickly. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

mergify bot added the v1 label, Aug 4, 2025
@gemini-code-assist (bot, Contributor) left a comment


Code Review

This pull request addresses a CI failure in test_shared_storage_connector_hashes by switching the model used in the test. The original model, using w4a16 quantization, was causing a 'kernel not found' error on CI machines with compute capability 89. The change to a w8a8 quantized model is a direct and appropriate fix, as this quantization scheme is more broadly supported and should resolve the CI issue. The change is minimal, well-justified, and appears correct. I have no further comments.
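For anyone wanting to reproduce this class of failure locally, loading a w4a16-quantized model through vLLM's offline LLM API exercises the same kernel-selection path as the test. A quick repro sketch; the model name below is a placeholder, not the actual model in the diff:

```python
# Local repro sketch: loading a w4a16-quantized model through vLLM's offline
# API goes through the same kernel-selection path that failed in CI.
# "org/some-model-W4A16" is a placeholder, not the actual test model.
from vllm import LLM

llm = LLM(model="org/some-model-W4A16")
# On a GPU where no WNA16 kernel qualifies (e.g. compute capability 89 with
# the weight shapes from the log), this raises the ValueError shown above.
```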

@tlrmchlsmth (Member, Author) commented:

Note: the model was changed to the wNa16 version in #21973.

Looking at the history, this test was passing in the CI for a while but then started failing.

Did we change the hardware used for the test, or is something truly broken?
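One quick way to check what hardware a CI runner is actually using is to print the device's compute capability with the standard PyTorch API; vLLM encodes the (major, minor) pair as a two-digit number, so (8, 9) corresponds to the 89 in the log.

```python
# Print the compute capability that drives vLLM's kernel selection.
# (8, 9) is an Ada-generation GPU (e.g. NVIDIA L4), matching "89" in the log.
import torch

major, minor = torch.cuda.get_device_capability()
print(f"{torch.cuda.get_device_name()}: compute capability {major}{minor}")
```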

@DarkLight1337 (Member) commented:

I am not aware of any hardware changes

@DarkLight1337 (Member) commented:

cc @khluu @seemethere do you know anything about this?

@DarkLight1337 enabled auto-merge (squash), August 4, 2025 02:40
github-actions bot added the ready label (ONLY add when PR is ready to merge/full CI is needed), Aug 4, 2025
vllm-bot merged commit 8ecb3e9 into main, Aug 4, 2025 (24 of 28 checks passed).
vllm-bot deleted the fix-test_shared_storage_connector_hashes_2 branch, August 4, 2025 05:19.
npanpaliya pushed a commit to odh-on-pz/vllm-upstream that referenced this pull request Aug 6, 2025
…tor_hashes (vllm-project#22163)

Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com>
jinzhen-lin pushed a commit to jinzhen-lin/vllm that referenced this pull request Aug 9, 2025
…tor_hashes (vllm-project#22163)

Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com>
Signed-off-by: Jinzhen Lin <linjinzhen@hotmail.com>
noamgat pushed a commit to noamgat/vllm that referenced this pull request Aug 9, 2025
…tor_hashes (vllm-project#22163)

Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com>
Signed-off-by: Noam Gat <noamgat@gmail.com>
paulpak58 pushed a commit to paulpak58/vllm that referenced this pull request Aug 13, 2025
…tor_hashes (vllm-project#22163)

Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com>
Signed-off-by: Paul Pak <paulpak58@gmail.com>
diegocastanibm pushed a commit to diegocastanibm/vllm that referenced this pull request Aug 15, 2025
…tor_hashes (vllm-project#22163)

Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com>
Signed-off-by: Diego-Castan <diego.castan@ibm.com>
epwalsh pushed a commit to epwalsh/vllm that referenced this pull request Aug 28, 2025
…tor_hashes (vllm-project#22163)

Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com>
zhewenl pushed a commit to zhewenl/vllm that referenced this pull request Aug 28, 2025
…tor_hashes (vllm-project#22163)

Signed-off-by: Tyler Michael Smith <tyler@neuralmagic.com>

Labels

ready (ONLY add when PR is ready to merge/full CI is needed), v1

4 participants