[ROCm][Quantization] extend AMD Quark to support mixed-precision quantized model #24239
Conversation
Code Review
This pull request extends Quark to support mixed-precision models, specifically for {MXFP4, FP8} schemes. The changes involve updating quantization configuration logic to handle mixed-precision setups and adding new tests to validate model accuracies. My review identified two high-severity issues. First, in the new test file, environment variables are not handled safely, which could lead to test state leakage. I've recommended using pytest.monkeypatch for robust cleanup. Second, in the Quark configuration logic, a fragile substring check is used for matching layer names, which could result in applying incorrect quantization schemes. I've suggested a more robust pattern matching approach to ensure correctness. Addressing these issues will improve the reliability and correctness of the new mixed-precision quantization feature.
```python
def test_mixed_precision_model_accuracies(config: EvaluationConfig, task: str):
    os.environ["VLLM_QUARK_EMU_MEM_OPT"] = "1"

    results = lm_eval.simple_evaluate(model="vllm",
                                      model_args=config.get_model_args(),
                                      tasks=task,
                                      batch_size="auto")

    rtol = 0.05

    EXPECTED_VALUE = config.excepted_value
    measured_value = results["results"][task]["acc,none"]
    assert (measured_value - rtol < EXPECTED_VALUE
            and measured_value + rtol > EXPECTED_VALUE
            ), f"Expected: {EXPECTED_VALUE} | Measured: {measured_value}"

    del os.environ["VLLM_QUARK_EMU_MEM_OPT"]
```
Setting and deleting an environment variable directly using os.environ can lead to state leakage between tests if an exception occurs before the del statement. This can cause subsequent tests to fail or behave unexpectedly. It's safer to use pytest's monkeypatch fixture, which is already used in this file, to manage environment variables. monkeypatch ensures that the environment is restored to its original state after the test function completes, regardless of whether it passes or fails.
Suggested change (the original snippet is quoted above; the replacement uses the monkeypatch fixture):

```python
def test_mixed_precision_model_accuracies(config: EvaluationConfig, task: str, monkeypatch):
    monkeypatch.setenv("VLLM_QUARK_EMU_MEM_OPT", "1")
    results = lm_eval.simple_evaluate(model="vllm",
                                      model_args=config.get_model_args(),
                                      tasks=task,
                                      batch_size="auto")
    rtol = 0.05
    EXPECTED_VALUE = config.excepted_value
    measured_value = results["results"][task]["acc,none"]
    assert (measured_value - rtol < EXPECTED_VALUE
            and measured_value + rtol > EXPECTED_VALUE
            ), f"Expected: {EXPECTED_VALUE} | Measured: {measured_value}"
```
Updated, along with some refactoring.
```python
layer_quant_configs = list()
for name_pattern in layer_quant_config:
    if fnmatch.fnmatch(layer_name, name_pattern):
        return layer_quant_config[name_pattern]
    if layer_name in name_pattern:
        layer_quant_configs.append(
            layer_quant_config[name_pattern])
        return layer_quant_configs[0]
```
The matching logic if layer_name in name_pattern: is likely incorrect and can lead to bugs. It performs a substring check, which can cause unintended matches (e.g., attention.q_proj would incorrectly match a pattern for cross_attention.q_proj). This is likely not the intended behavior for matching layer configurations and could lead to applying the wrong quantization scheme to a layer.
Given that fnmatch is used elsewhere in this file for pattern matching, it seems the intention is to support glob patterns. If name_pattern can be a comma-separated list of patterns, the logic should be updated to split the string and apply fnmatch to each part. This ensures accurate matching of layer configurations and prevents applying the wrong quantization scheme.
The current implementation also unnecessarily creates a list layer_quant_configs to immediately return its first element. This can be simplified by returning directly upon finding a match.
Suggested change:

```python
for name_pattern in layer_quant_config:
    patterns = [p.strip() for p in name_pattern.split(',')]
    for p in patterns:
        if fnmatch.fnmatch(layer_name, p):
            return layer_quant_config[name_pattern]
```
This code snippet suggested by gemini-code-assist is problematic, because a name_pattern looks like model.layers.0.block_sparse_moe.experts.0.w1, for example. So name_pattern.split(',') doesn't make sense, and the subsequent fnmatch.fnmatch is also irrelevant.
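For illustration only, a minimal sketch of this point with a hypothetical config (the keys and values below are placeholders, not taken from an actual Quark config):

```python
# Hypothetical AMP config: keys are fully specified layer names, not glob patterns.
layer_quant_config = {
    "model.layers.0.block_sparse_moe.experts.0.w1": {"scheme": "mxfp4"},
    "model.layers.0.self_attn.q_proj": {"scheme": "fp8"},
}

def lookup(layer_name: str):
    # With fully enumerated keys, a plain containment check is enough;
    # splitting a key on "," or glob-matching it would never apply here.
    for name_pattern, config in layer_quant_config.items():
        if layer_name in name_pattern:
            return config
    return None

print(lookup("model.layers.0.self_attn.q_proj"))  # -> {"scheme": "fp8"}
```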
BowenBao left a comment
Thanks, great start!
```python
    dict[str, Any], self.quant_config.get("layer_quant_config"))
layer_quant_configs = list()
for name_pattern in layer_quant_config:
    if fnmatch.fnmatch(layer_name, name_pattern):
```
Is this change necessary? Also layer_quant_configs seem unused: appends the first matched config and immediately returns it.
Updated, as also suggested in #24239 (comment).
```python
) -> tuple[torch.Tensor, None]:
    assert block_shape is None
    if not current_platform.supports_mx():
        VLLM_QUARK_EMU_MEM_OPT = (os.environ.get("VLLM_QUARK_EMU_MEM_OPT",
```
In general for env flags it is better to add to vllm/vllm/envs.py with comments on its effect.
Can you keep this change local? In particular we want to move away from simulation to triton kernels as we move forward. cc @fxmarty-amd
Totally agree on that.
The reason why VLLM_QUARK_EMU_MEM_OPT is not added to vllm/vllm/envs.py is that it's better to keep it as a local and temporary environment variable, just to make things work at this moment. After non-emulation kernels such as the Triton or AITER implementations are integrated, we can remove it entirely.
@xuebwang-amd this variable that I added previously has been removed per @mgoin's request, in order to avoid adding a new unnecessary env variable to vLLM, especially given that we have a decently fast mxfp4 dequantization kernel.
Please avoid adding this environment variable; keep it local for testing if needed.
I appreciate your previous effort on this emulation approach; it played a role beyond local testing, and the functionality continues along the same lines here.
Actually, it does go to mx.qdq_mxfp4, defined in https://github.com/vllm-project/vllm/blob/8de261b04a0a0e916d3d25d528d0f2ddeede2a6b/vllm/model_executor/layers/quantization/utils/mxfp4_utils.py#L94C5-L94C25, when VLLM_QUARK_EMU_MEM_OPT=1 is enabled.
The real motivation for this environment variable is to force the emulation path regardless of the platform's MX support, because the non-emulation kernels haven't been integrated into this flow yet.
Therefore, the solution here is to remove the if-else statement:

```python
if not current_platform.supports_mx():
    A = quant_dequant_mxfp4(A)
else:
    raise NotImplementedError()
```

and always use A = quant_dequant_mxfp4(A).
```diff
  layer_quant_set = set(layer_quant_names)

- if not kv_cache_set.issubset(layer_quant_set):
+ if not (kv_cache_set.issubset(layer_quant_set) or \
```
Could you explain what the goal of these changes around the kv cache is?
For AMP models, are kv caches still uniformly quantized the same way across all layers?
Yes, currently mixed precision is not applied along the KV-cache dimension; it is uniform across all KV layers.
The changes here aim to correctly verify whether a kv-cache pattern such as {'*v_proj', '*k_proj'} can match, in other words be found in, at least one of the layer_quant_set keys (i.e., layer names).
This is essential in AMP scenarios where layer_quant_names are specified one by one, rather than expressed in a fuzzy matching way.
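A rough sketch of the relaxed check being described, assuming kv_cache_set holds glob patterns such as '*k_proj' and layer_quant_names holds fully enumerated layer names (the helper name and structure are illustrative, not the PR's actual code):

```python
import fnmatch

def kv_cache_covered(kv_cache_set: set[str], layer_quant_names: list[str]) -> bool:
    layer_quant_set = set(layer_quant_names)
    # Original check: every kv-cache pattern must literally appear as a layer name.
    if kv_cache_set.issubset(layer_quant_set):
        return True
    # Relaxed check: each pattern (e.g. "*k_proj") only has to match at least
    # one explicitly enumerated layer name.
    return all(
        any(fnmatch.fnmatch(name, pattern) for name in layer_quant_set)
        for pattern in kv_cache_set)
```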
```python
@pytest.fixture(scope="function", autouse=True)
def use_v0_only(monkeypatch):
    """
    This module relies on V0 internals, so set VLLM_USE_V1=0.
    """
    monkeypatch.setenv('VLLM_USE_V1', '0')
```
Let's avoid using v0
For test purposes, especially accuracy tests, using V0 is safe. Even for later hardware-metric tests, using V0 is still safer while remaining valuable for demonstrations.
vllm v0 is deprecated: #18571
V1 is reported to have issues, as you can see. Since mixed-precision quantization does not depend on the V0/V1 engine, it's safe to use V0.
use_v0_only has been removed, as the V0 backend was deprecated very recently in #25351. Thanks @fxmarty-amd
```python
try:
    huggingface_hub.list_repo_refs(
        "amd/Llama-2-70b-chat-hf-WMXFP4FP8-AMXFP4FP8-AMP-KVFP8")
    HF_HUB_AMD_ORG_ACCESS = True
except huggingface_hub.errors.RepositoryNotFoundError:
    HF_HUB_AMD_ORG_ACCESS = False
```
Let's use public models
These models are in the process of being published.
Do you have an ETA for when we can expect these models to be published?
AMD colleagues are speeding up the process; hopefully they can make it happen sometime next week.
@xuebwang-amd I meant that for unit testing you can probably use small models, just for integration test purposes, as e.g. in vllm/tests/kernels/moe/test_mxfp4_moe.py, lines 51 to 55 at 58c360d:

```python
@pytest.mark.parametrize('model_case', [
    ModelCase("fxmarty/qwen_1.5-moe-a2.7b-mxfp4", tp=1),
    ModelCase("fxmarty/deepseek_r1_3_layers_mxfp4", tp=8),
    ModelCase("fxmarty/Llama-4-Scout-17B-16E-Instruct-2-layers-mxfp4", tp=1)
])
```
@fxmarty-amd your motivation here is to reduce CI time cost, which is good. We can consider picking one public model for the CI test. @gshtras @SageMoore
> Do you have an ETA for when we can expect these models to be published?

They are now published.
```python
    reason="Read access to huggingface.co/amd is required for this test.")
def test_mixed_precision_model_accuracies(model_name: str,
                                          accuracy_numbers: dict, monkeypatch):
    monkeypatch.setenv("VLLM_QUARK_EMU_MEM_OPT", "1")
```
This environment variable has no effect - it has been removed from vllm.
Then we need to remove the if-else statement in _mxfp4_quantize, as commented above in #24239 (comment).
> As examples, we provide some ready-to-use quantized mixed-precision models to show the usage in vLLM and the accuracy benefits. They are:
>
> - amd/Llama-2-70b-chat-hf-WMXFP4FP8-AMXFP4FP8-AMP-KVFP8
> - amd/Mixtral-8x7B-Instruct-v0.1-WMXFP4FP8-AMXFP4FP8-AMP-KVFP8
> - amd/Qwen3-8B-WMXFP4FP8-AMXFP4FP8-AMP-KVFP8
Make these public + add link
They're going to be published.
Can you provide:

> One can check the detailed layerwise MXFP8/FP8 configuration in the …
```diff
-     A = quant_dequant_mxfp4(A)
- else:
-     raise NotImplementedError()
+ A = quant_dequant_mxfp4(A)
```
Does this mean that before the PR MI350 would get an exception, and now this method is being called unconditionally?
Yes! It was an oversight in a previous PR; we should be able to run simulation on CDNA4 until kernels are integrated.
see #22355
This PR focuses on the accuracy benefits of mixed precision via emulated QDQ at this moment, so the if-else on the platform is removed.
Demonstrating the hardware-metric benefits with real kernels integrated is the next step.
fxmarty-amd left a comment
@xuebwang-amd if I understand correctly, this PR is now mostly about adding documentation right?
```python
for name_pattern, config in layer_quant_config.items():
    if layer_name in name_pattern:
```
Do we make sure somewhere that e.g. q_proj from the checkpoint/Transformers gets correctly mapped to qkv_proj in vllm (https://github.com/ROCm/vllm/blob/eb9d4de9eb7649bdf36b2d0e4832fcaab8465153/vllm/model_executor/models/llama.py#L150) prior to doing the check layer_name in name_pattern?
Good question.
The Quark model/config is largely decoupled from vLLM's model implementation. q_proj, k_proj, and v_proj are merged in vLLM, while they are kept separate in the Quark quantized model and its configs. In Quark's AMP, q_proj, k_proj, and v_proj are required to have the same bitwidth, i.e., the same quantization scheme, so the alignment is achieved.
Therefore, Quark's layerwise quant config is matched against q_proj, k_proj, and v_proj individually.
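For illustration only (hypothetical layer names and scheme labels, not the exact Quark config schema): an AMP config enumerates q_proj, k_proj, and v_proj separately but with an identical scheme, so the merged qkv_proj in vLLM resolves to a single consistent scheme.

```python
# Hypothetical excerpt of a layerwise AMP assignment; q/k/v share one scheme.
layer_quant_config = {
    "model.layers.0.self_attn.q_proj": {"scheme": "mxfp4"},
    "model.layers.0.self_attn.k_proj": {"scheme": "mxfp4"},
    "model.layers.0.self_attn.v_proj": {"scheme": "mxfp4"},
    "model.layers.0.mlp.down_proj": {"scheme": "fp8"},
}
```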
```diff
  layer_quant_set = set(layer_quant_names)

- if not kv_cache_set.issubset(layer_quant_set):
+ if not (kv_cache_set.issubset(layer_quant_set) or \
```
nit: I'd avoid writing the nested loops inline for readability purposes.
```python
for name_pattern in layer_quant_config:
    if fnmatch.fnmatch(layer_name, name_pattern):
```
I am confused. What is this PR changing here?
gemini-code-assist had similar questions. Please see my comments above, e.g., #24239 (comment).
Thanks for referencing our previous discussion, @xuebwang-amd. I'd like to clarify the change in behavior introduced by replacing fnmatch.fnmatch(layer_name, name_pattern) with layer_name in name_pattern.
Key difference:

- `fnmatch.fnmatch(layer_name, name_pattern)`: performs shell-style wildcard matching. For example, if `name_pattern` is `"model.layers.*.attn.q_proj"`, it would match `"model.layers.0.attn.q_proj"`, `"model.layers.1.attn.q_proj"`, etc.
- `layer_name in name_pattern`: performs a simple substring check. For `"model.layers.0.attn.q_proj" in "model.layers.*.attn.q_proj"`, it would return `False`, because `"model.layers.0.attn.q_proj"` is not a substring of `"model.layers.*.attn.q_proj"`.
This change fundamentally alters how layer names are matched against the layer_quant_config. If the intention is to support wildcard patterns for layer quantization (which is a common practice for applying the same quantization scheme to multiple layers), then the current change would likely break existing configurations or limit the flexibility of defining quantization patterns.
Unless there's a specific reason to remove wildcard matching, I recommend reverting to fnmatch.fnmatch to maintain the expected pattern matching behavior. The iteration over layer_quant_config.items() is correct for accessing the config value, but the matching logic should be adjusted.
Suggested change:

```python
for name_pattern, config in layer_quant_config.items():
    if fnmatch.fnmatch(layer_name, name_pattern):
        return config
```
The Gemini code bot is not useful here.
@xuebwang-amd I don't understand why this PR introduces handling different from e.g.
https://github.com/amd/Quark/blob/release/0.9/quark/torch/export/main_export/quant_config_parser.py#L67-L70
and e.g.
https://github.com/amd/Quark/blob/release/0.9/quark/torch/quantization/model_transformation.py#L80-L84
Why would the handling in vLLM be different from what we have in Quark, e.g. when reloading models through the Transformers library? I think that is not a good thing. Maybe existing models rely on fnmatch.fnmatch and things would break now.
There have been lots of discussions about this in the PR.
To emphasize: this is for AMP, in which layers are specified one by one, so the substring check layer_name in name_pattern works as a strict match, while fnmatch.fnmatch doesn't fit here.
I can fully understand your concern here. Please find my explanations above, e.g.:
#24239 (comment)
#24239 (comment)
#24239 (comment)
To ensure there is no breakage of or conflict with existing PTQ model matching, I added a non-mixed-precision (PTQ, public) model as a reference to demonstrate pipeline compatibility in tests/quantization/test_mixed_precision.py: https://github.com/xuebwang-amd/vllm/blob/db3cc7eba1609370e34b35f51c7a5fa3111bb868/tests/quantization/test_mixed_precision.py#L45
Conclusion: there are no conflicts or breakage with the precise substring-containment matching rule.
We can support both glob-style wildcard patterns and precise substring containment for layer_quant_config matching. @BowenBao
I apologize for coming to this discussion late, but I also have some concerns here. It looks like you would like to add substring matching to this check, so that layer_name will match layer_name_0, layer_name_1, etc. Before your change, the code would only do substring matching when the * character was appended to the end of the substring, so you would have to have layer_name*. The concern I have is that you are turning on substring matching by default, meaning that layer_name_1 will match layer_name_12 even if that's not the caller's intention. Would it make more sense to leave the code as is and just append the * in the quant config?
I'm not familiar with how quantization configs are specified, but this does seem like it's introducing a footgun?
Thank you @SageMoore, that's a good suggestion and should work as well.
However, our current approach can be more efficient; let me break it down into two aspects:
- Simple substring matching is generally more efficient than fnmatch, which involves pattern matching. That's why substring matching is the default; note this is for a single match check.
- From a model-level perspective, that per-match saving aggregates across an Auto Mixed Precision (AMP) model whose layers are explicitly enumerated.
Thanks @SageMoore. Regarding the concern over incorrect matches, I think it is fine, as it's only dangerous when a name ends with a layer index with no characters following it, while the name patterns we expect in this case look more like xxx.layers.1.yyy or xxx.experts.12.zzz.
@xuebwang-amd feel free to add a couple of examples here or in a comment for illustration.
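A couple of hypothetical examples of the point above, for illustration only:

```python
# Expected AMP name shapes embed the layer index mid-string (xxx.layers.1.yyy),
# so substring containment cannot confuse index 1 with index 12:
assert "model.layers.1.self_attn.q_proj" not in "model.layers.12.self_attn.q_proj"

# The risky case only arises when a name ends at the bare index:
assert "experts.1" in "experts.12"  # would be a false positive if such names occurred
```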
Not the case. This PR aims to support layerwise mixed-precision quantization with Quark and to demonstrate the resulting accuracy gains.

Hello @SageMoore, @gshtras,
cc @BowenBao, @fxmarty-amd
```python
    huggingface_hub.list_repo_refs(
        "amd/Qwen3-8B-WMXFP4FP8-AMXFP4FP8-AMP-KVFP8")
    HF_HUB_AMD_ORG_ACCESS = True
except huggingface_hub.errors.RepositoryNotFoundError:
    HF_HUB_AMD_ORG_ACCESS = False
```
Since the models are now public, could we remove this part?
Thanks, it's removed.
Documentation preview: https://vllm--24239.org.readthedocs.build/en/24239/
BowenBao left a comment
Looks good from Quark side.
CI test https://buildkite.com/vllm/ci/builds/36689/steps/canvas?jid=019a2d89-7117-4eba-a593-770cdfaa5212 failed:
CI test failed:
Should be fixed on main, let me rebase.
Purpose
This PR aims to support inference of layerwise mixed-precision quantized models, extending beyond models quantized with a single scheme such as MXFP4 or FP8 (a.k.a. PTQ models).
Here, the layerwise mixed-precision configuration for a given model is searched for, and the model is then quantized with amd-quark. Specifically, this PR focuses on the mixed {MXFP4, FP8} scheme, where FP8 denotes the FP8 per-tensor scheme.
With a mixed-precision quantized model, one can achieve an optimal balance between accuracy and hardware metrics.
To demonstrate the benefits of mixed-precision models in this PR, we report model accuracies on several commonly used tasks, using only the Quark emulation kernel for MXFP4 and the Triton kernel for FP8.
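As a minimal usage sketch (the checkpoint is one of the AMP models listed earlier in this PR; the arguments shown are standard vLLM options, and the exact values are illustrative, not prescriptive):

```python
from vllm import LLM, SamplingParams

# Load a mixed-precision (MXFP4 + FP8) Quark checkpoint; vLLM picks up the
# layerwise quantization config shipped with the model.
llm = LLM(model="amd/Qwen3-8B-WMXFP4FP8-AMXFP4FP8-AMP-KVFP8",
          quantization="quark",
          kv_cache_dtype="fp8")

outputs = llm.generate(["The capital of France is"],
                       SamplingParams(temperature=0.0, max_tokens=16))
print(outputs[0].outputs[0].text)
```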
Test Plan
Test on
Test Result
List of TODO items