Conversation

@celestialli (Contributor)

What this PR does / why we need it?

Add a GitHub Action to release the source distribution (.tar.gz) and wheel (.whl) to PyPI.
The action is triggered when a tag beginning with "v" is pushed.

Does this PR introduce any user-facing change?

No.

How was this patch tested?

Well tested with my own fork repo.

```dockerfile
RUN git clone --depth 1 $VLLM_REPO --branch ${TAG} /workspace/vllm
# On x86, triton is installed by vllm. But on Ascend, triton doesn't work correctly, so we need to uninstall it.
RUN VLLM_TARGET_DEVICE="empty" python3 -m pip install -v -e /workspace/vllm/ --extra-index-url https://download.pytorch.org/whl/cpu/ && \
    python3 -m pip uninstall -y triton && \
```
Collaborator

It seems there is no triton in vllm-empty dependencies:
https://github.com/vllm-project/vllm/blob/main/requirements/common.txt

I think we could remove it safely.

Contributor Author
@celestialli Apr 29, 2025

> It seems there is no triton in vllm-empty dependencies: https://github.com/vllm-project/vllm/blob/main/requirements/common.txt
>
> I think we could remove it safely.

Thanks for pointing this out. During my testing with v0.8.4, triton could still be uninstalled, so it is better to keep this as it is, considering we might release v0.8.4rc3 or v0.8.4 later on.

Collaborator

FYI, we plan to update to 0.8.5 now: #715

Contributor Author

> FYI, we plan to update to 0.8.5 now: #715

pip uninstall skips packages that are not installed, so keeping this does no harm and gives us more compatibility.
I still suggest keeping it.
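This is easy to check locally: `pip uninstall` only prints a warning and exits successfully when the target package is absent (a quick sketch, using a deliberately nonexistent package name):

```shell
# Uninstalling a package that is not installed just prints a warning
# ("Skipping ... as it is not installed.") and exits with status 0,
# so leaving the `pip uninstall -y triton` step in the Dockerfile is
# harmless even for releases where vllm no longer pulls in triton.
python3 -m pip uninstall -y some-package-that-is-not-installed
echo "exit status: $?"
```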

Collaborator

Don't uninstall triton. It's now being imported through dependency chains.

```python
from vllm.model_executor.models.minicpm import MiniCPMAttention
# then
from vllm.model_executor.layers.fused_moe import fused_moe
# then
import triton
```

Contributor Author
@celestialli Apr 29, 2025

> Don't uninstall triton. It's now being imported through dependency chains.

```python
from vllm.model_executor.models.minicpm import MiniCPMAttention
# then
from vllm.model_executor.layers.fused_moe import fused_moe
# then
import triton
```

Hi. On x86, triton is installed by vllm, but on Ascend triton doesn't work correctly.
Uninstalling triton in this Dockerfile has no influence on that part of the code.

Collaborator

I'm unsure if this affects the release process, but running vllm and vllm-ascend will cause this import error. I encountered this, and installing Triton fixed it.

Contributor Author

Update: by installing vllm v0.8.5, we will also have triton 3.3.0.

Contributor Author

> I'm unsure if this affects the release process, but running vllm and vllm-ascend will cause this import error. I encountered this, and installing Triton fixed it.

Don't worry, the release process is tested with triton uninstalled.

Collaborator

@jianzs mengqing has a PR in vllm to fix the triton install error: vllm-project/vllm@2f54045. So currently, from 0.8.5, whether triton is installed or not, the error will not be raised. @MengqingCao Right?

Signed-off-by: Shuqiao Li <celestialli@outlook.com>
@wangxiyuan (Collaborator)

We should be careful about the package release. There are some concerns:

  1. PyPI doesn't have a revert function. That means we must make sure the whl works 100% before pushing. If there is any problem, there is no way to push a new one to replace it.
  2. This job only runs when a new tag is pushed. But since the code is always changing, how do we ensure this CI job always works? If we hit a build/push error when releasing a new tag, there is no way to re-do it.

```yaml
name: Release Code

on:
  push:
```
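The tag-only release trigger described in the PR body would look roughly like this (a sketch of the `on:` block, not necessarily the exact file from the PR):

```yaml
on:
  push:
    # Run the release job only when a tag like v0.8.4 or v0.8.4rc3 is pushed
    tags:
      - 'v*'
```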
Collaborator

My suggestion is that, like the image build job, we should do the package build without pushing on every commit, or in a nightly job, to make sure the job always works.

Contributor Author

> My suggestion is that, like the image build job, we should do the package build without pushing on every commit, or in a nightly job, to make sure the job always works.

good idea

```shell
cd vllm-ascend && \
python3 setup.py bdist_wheel && \
ls ./dist && \
python3 -m twine upload dist/* -u __token__ -p ${PYPI_TOKEN}
```
Collaborator

Pushing to PyPI directly after the build is a little dangerous; we should make sure the package works as expected before pushing it to PyPI.

Collaborator
@jianzs Apr 30, 2025

Better to deploy to Test PyPI first for validation, then proceed to the official PyPI after successful testing.

Collaborator

good idea!

Contributor Author

> Better to deploy to Test PyPI first for validation, then proceed to the official PyPI after successful testing.

Can it be done by just switching to a test PYPI_TOKEN?
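For reference, TestPyPI is a separate package index, so twine needs its repository URL in addition to a separate token; switching the token alone would still target the official PyPI. A validation step might look like this (a sketch; `TEST_PYPI_TOKEN` is a hypothetical secret name, not one defined in this PR):

```yaml
      - name: Upload to TestPyPI for validation
        run: |
          # TestPyPI has its own upload endpoint and issues its own API tokens
          python3 -m twine upload --repository-url https://test.pypi.org/legacy/ dist/* \
              -u __token__ -p ${{ secrets.TEST_PYPI_TOKEN }}
          # Smoke-test the uploaded package from the TestPyPI index
          python3 -m pip install --index-url https://test.pypi.org/simple/ vllm-ascend
```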

@celestialli closed this by deleting the head repository Apr 30, 2025
@celestialli (Contributor Author)

This PR is unexpectedly closed because I have purged the original fork repo. I'll create another PR to continue this work soon.

wangxiyuan pushed a commit that referenced this pull request May 26, 2025
### What this PR does / why we need it?

This is a continuing work of #716.
This PR adds a workflow to build and release the wheel, and also releases the source to PyPI.
We have 3 conditions that trigger the workflow:

1. PR to `main` and `*-dev`
2. push to `main` and `*-dev`
3. push of a tag named `v*`

Release to PyPI will only be done under condition 3. Under conditions 1 and 2, it will generate the .tar.gz, build the .whl, and upload them to GitHub artifacts, but will not release.

update:
Will build the .whl and upload it to GitHub artifacts with a scheduled task.
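The three trigger conditions plus the scheduled artifact build could be sketched in the workflow's `on:` block like this (an illustration based on the description above, not the exact file from the PR; the cron schedule is an assumed example):

```yaml
on:
  pull_request:
    branches:
      - 'main'
      - '*-dev'
  push:
    branches:
      - 'main'
      - '*-dev'
    tags:
      - 'v*'              # only tag pushes trigger the actual PyPI release
  schedule:
    - cron: '0 0 * * *'   # nightly: build the .whl and upload it as a GitHub artifact
```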


### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
All trigger conditions are well tested with my fork repo.

---------

Signed-off-by: Shuqiao Li <celestialli@outlook.com>
Signed-off-by: Yikun Jiang <yikunkero@gmail.com>
Co-authored-by: Yikun Jiang <yikunkero@gmail.com>
XWFAlone pushed a commit to XWFAlone/vllm-ascend that referenced this pull request May 26, 2025
momo609 pushed a commit to momo609/vllm-ascend that referenced this pull request May 30, 2025
chopper0126 pushed a commit to chopper0126/vllm-ascend that referenced this pull request Oct 16, 2025
Angazenn pushed a commit to Angazenn/vllm-ascend that referenced this pull request Oct 21, 2025