1 parent eb277cc · commit e9a2ab2
.github/scripts/torchao_model_releases/quantize_and_upload.py
````diff
@@ -589,7 +589,7 @@ def _untie_weights_and_save_locally(model_id):
 
 
 ```Shell
-python -m executorch.examples.models.qwen3.convert_weights $(hf download pytorch/Qwen3-4B-INT8-INT4) pytorch_model_converted.bin
+python -m executorch.examples.models.qwen3.convert_weights $(hf download {quantized_model}) pytorch_model_converted.bin
 ```
 
 Once we have the checkpoint, we export it to ExecuTorch with a max_seq_length/max_context_length of 1024 to the XNNPACK backend as follows.
````
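The change replaces the hardcoded model id with a `{quantized_model}` placeholder, which suggests the command lives inside a template that `quantize_and_upload.py` fills in per release. As a minimal sketch of that substitution (the helper name and structure here are assumptions; only the command template comes from the diff):

```python
# Sketch: render the weight-conversion instructions for a model card,
# substituting the released model id. render_convert_command is a
# hypothetical helper, not the actual function in quantize_and_upload.py.

def render_convert_command(quantized_model: str) -> str:
    """Build the ExecuTorch weight-conversion snippet for a model card."""
    return (
        "python -m executorch.examples.models.qwen3.convert_weights "
        # $(hf download ...) expands to the local snapshot path at run time.
        f"$(hf download {quantized_model}) pytorch_model_converted.bin"
    )

print(render_convert_command("pytorch/Qwen3-4B-INT8-INT4"))
```

With the placeholder, the same template serves every released quantized model instead of pinning the instructions to `pytorch/Qwen3-4B-INT8-INT4`.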