
Conversation

@suquark (Contributor) commented on Apr 16, 2023

This is a temporary stash of all the mess. I'm keeping it as a PR for easy lookup.

@suquark (Contributor, Author) commented on Apr 16, 2023

Closing it: no longer needed in the short term.

@suquark suquark closed this Apr 16, 2023
@zhuohan123 zhuohan123 deleted the prefix_stash_siyuan branch June 18, 2023 07:30
tianyil1 pushed a commit to tianyil1/vllm that referenced this pull request Jun 5, 2024
dtrifiro pushed a commit to dtrifiro/vllm that referenced this pull request Jun 5, 2024
yukavio pushed a commit to yukavio/vllm that referenced this pull request Jul 3, 2024
SUMMARY
* update `TORCH_CUDA_ARCH_LIST` to match `magic_wand`
* update "test vllm" action to run tests serially
* add helper script to find *.py tests, run them serially, and output JUnit-formatted XML

TEST
working through changes manually on debug instance

---------

Co-authored-by: andy-neuma <andy@neuralmagic.com>
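
A minimal sketch of the kind of helper script described in the summary above: discover *.py test files, run each one serially in its own pytest process, and write JUnit-formatted XML. This is not the actual script from the commit; the `tests/` and `test-reports/` paths are assumptions made for illustration, and pytest is assumed to be installed.

```python
#!/usr/bin/env python3
"""Hypothetical sketch: find *.py tests, run them serially, write JUnit XML.

The tests/ and test-reports/ paths are illustrative assumptions, not taken
from the commit.
"""
import pathlib
import subprocess
import sys

TEST_DIR = pathlib.Path("tests")
REPORT_DIR = pathlib.Path("test-reports")

def main() -> int:
    REPORT_DIR.mkdir(exist_ok=True)
    failed = 0
    # One pytest process per test file, run one after another (serially).
    for test_file in sorted(TEST_DIR.rglob("test_*.py")):
        xml_path = REPORT_DIR / f"{test_file.stem}.xml"
        result = subprocess.run(
            [sys.executable, "-m", "pytest", str(test_file), f"--junitxml={xml_path}"]
        )
        if result.returncode != 0:
            failed += 1
    return 1 if failed else 0

if __name__ == "__main__":
    raise SystemExit(main())
```
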
bigPYJ1151 pushed a commit to bigPYJ1151/vllm that referenced this pull request Jul 30, 2024
@alixiaodi alixiaodi mentioned this pull request Aug 2, 2024
Bounty-hunter pushed a commit to Bounty-hunter/vllm that referenced this pull request Sep 25, 2025
jikunshang pushed a commit to jikunshang/vllm that referenced this pull request Nov 27, 2025
* refactor torch compile for fp8 and int

Signed-off-by: Zhu, Zufang <zufang.zhu@intel.com>

* refine for pre-commit

Signed-off-by: Zhu, Zufang <zufang.zhu@intel.com>

* fix

Signed-off-by: Zhu, Zufang <zufang.zhu@intel.com>

* update

Signed-off-by: Zhu, Zufang <zufang.zhu@intel.com>

---------

Signed-off-by: Zhu, Zufang <zufang.zhu@intel.com>
AndreasKaratzas pushed a commit to AndreasKaratzas/vllm that referenced this pull request Dec 1, 2025
- Add comprehensive labels.yml with all priority, type, status,
  platform, hardware, component, model, and test labels
- Add AMD issue templates (800-860 series) for bug, ci-failure,
  documentation, feature, infrastructure, performance, and usage
- Add amd_project_automation.yml workflow to auto-assign issues:
  - amd-ci label -> AMD CI project (vllm-project#39)
  - amd label (w/o amd-ci) -> AMD project (vllm-project#38)
- Update issue_autolabel.yml to add 'amd' label when rocm detected
- Add sync_labels.yml workflow to sync labels from labels.yml
- Update config.yml with AMD ROCm discussions link

Label routing:
- CI failures use 'amd-ci' label -> routes to project vllm-project#39
- All other AMD issues use 'amd' label -> routes to project vllm-project#38
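
The label-to-project routing described above amounts to a simple precedence rule. A hypothetical Python sketch of that rule follows; the real logic lives in the amd_project_automation.yml workflow, not in Python.

```python
def route_to_project(labels: set[str]) -> int | None:
    """Return the project an issue should be auto-assigned to, by number."""
    if "amd-ci" in labels:
        return 39  # AMD CI project (vllm-project#39)
    if "amd" in labels:
        return 38  # AMD project (vllm-project#38)
    return None    # not AMD-related; no auto-assignment

# Examples matching the routing rules above.
assert route_to_project({"amd-ci", "amd"}) == 39   # CI failures take precedence
assert route_to_project({"amd", "bug"}) == 38
assert route_to_project({"performance"}) is None
```
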
AndreasKaratzas pushed a commit to AndreasKaratzas/vllm that referenced this pull request Dec 1, 2025
Changes:
- Replace labels.yml with labels-amd.yml (AMD-specific only)
  - Pruned to only AMD labels that won't conflict with main
  - Uses skip-delete to preserve existing upstream labels

- Update sync_labels.yml:
  - Manual trigger only (workflow_dispatch) - safe for upstream
  - Restricted to AMD team members
  - Added dry-run option
  - Uses crazy-max/ghaction-github-labeler with skip-delete

- Enhance issue_autolabel.yml:
  - Add hardware labels (mi300x, mi325x, mi350-series)
  - Add ROCm version labels (rocm-6.x, rocm-7.x)
  - Add component labels (aiter-backend, rccl)
  - Add status labels (needs-profiling, has-workaround, upstream)
  - Add test labels (flaky-test, test:distributed, test:benchmark)
  - Add rocm-blocker detection
  - Extended CC configuration for AMD team

- Improve amd_project_automation.yml:
  - Support for both issues and PRs
  - Detect any AMD-related label for project assignment
  - Status mapping logic for project columns
  - Job summary output

Project routing:
- amd-ci label → AMD CI project (vllm-project#39)
- Any other AMD label → AMD project (vllm-project#38)
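
The enhanced issue_autolabel.yml described above boils down to keyword detection over issue text. A hypothetical Python sketch of that idea; the keyword table is illustrative only (not copied from the commit), and the real patterns live in the workflow file.

```python
import re

# Illustrative keyword -> label table (assumed, not taken from the workflow).
LABEL_PATTERNS = {
    "mi300x": r"\bmi300x\b",
    "mi325x": r"\bmi325x\b",
    "rocm-6.x": r"\brocm[- ]?6(\.\d+)?\b",
    "rocm-7.x": r"\brocm[- ]?7(\.\d+)?\b",
    "aiter-backend": r"\baiter\b",
    "rccl": r"\brccl\b",
}

def detect_labels(issue_text: str) -> set[str]:
    """Return every label whose pattern matches the (lower-cased) issue text."""
    text = issue_text.lower()
    return {label for label, pattern in LABEL_PATTERNS.items() if re.search(pattern, text)}

print(detect_labels("Crash in RCCL on MI300X with ROCm 6.2"))
# e.g. {'mi300x', 'rocm-6.x', 'rccl'} (set ordering may vary)
```
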