add github action for release code and whl #716
Conversation
```dockerfile
RUN git clone --depth 1 $VLLM_REPO --branch ${TAG} /workspace/vllm
# On x86, triton will be installed by vllm. But on Ascend, triton doesn't work correctly, so we need to uninstall it.
RUN VLLM_TARGET_DEVICE="empty" python3 -m pip install -v -e /workspace/vllm/ --extra-index https://download.pytorch.org/whl/cpu/ && \
    python3 -m pip uninstall -y triton && \
```
It seems there is no triton in the vllm-empty dependencies:
https://github.com/vllm-project/vllm/blob/main/requirements/common.txt
I think we could remove it safely.
> It seems there is no triton in the vllm-empty dependencies: https://github.com/vllm-project/vllm/blob/main/requirements/common.txt
> I think we could remove it safely.
Thanks for stating this. In my testing with v0.8.4, triton could still be uninstalled. It is better to keep this as is, considering we might release v0.8.4rc3 or v0.8.4 later on.
FYI, we plan to update to 0.8.5 now: #715
> FYI, we plan to update to 0.8.5 now: #715
pip uninstall will skip the package if it does not exist.
Keeping this does no harm, and it gives us more compatibility.
I still suggest keeping it.
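That behavior is easy to confirm (the package name below is deliberately nonexistent and purely illustrative):

```shell
# 'pip uninstall -y' on a package that is not installed prints a warning
# ("Skipping ... as it is not installed.") but still exits with status 0,
# so the Dockerfile step is safe whether or not vllm pulled in triton.
python3 -m pip uninstall -y some-package-that-is-not-installed
echo "exit status: $?"
```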
Don't uninstall triton. It's now being imported through dependency chains.
```python
from vllm.model_executor.models.minicpm import MiniCPMAttention
# then
from vllm.model_executor.layers.fused_moe import fused_moe
# then
import triton
```
Hi. On x86, triton will be installed by vllm. But on Ascend, triton doesn't work correctly.
Uninstalling triton in this Dockerfile has no influence on that part of the code.
I'm unsure if this affects the release process, but running vllm and vllm-ascend will cause this import error. I encountered this, and installing Triton fixed it.
Update: by installing vllm v0.8.5, we will also have triton 3.3.0.
> I'm unsure if this affects the release process, but running vllm and vllm-ascend will cause this import error. I encountered this, and installing triton fixed it.
Don't worry, the release process is tested with triton uninstalled.
@jianzs mengqing has a PR in vllm to fix the triton install error: vllm-project/vllm@2f54045. So currently, from 0.8.5, the error will not be raised no matter whether triton is installed or not. @MengqingCao Right?
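The shape of such a fix is presumably an optional-import guard. A minimal sketch of the general pattern (an assumption for illustration, not the actual patch in vllm-project/vllm@2f54045; `fused_moe` here is a stand-in, not vllm's real function):

```python
# Sketch of an optional-import guard for triton. HAS_TRITON and this
# fused_moe stand-in are illustrative, not vllm's actual code.
try:
    import triton
    HAS_TRITON = True
except ImportError:
    triton = None
    HAS_TRITON = False

def fused_moe(*args, **kwargs):
    """Stand-in for a triton-backed kernel entry point."""
    if not HAS_TRITON:
        # Fail only when the kernel is actually called, so plain imports
        # succeed even on platforms without triton installed.
        raise RuntimeError("triton is required for fused_moe but is not installed")
    # ... the real triton kernel launch would go here ...
```

With this guard, the import chains above succeed whether or not triton is present; only calling the kernel without triton raises.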
Signed-off-by: Shuqiao Li <celestialli@outlook.com>
We should be careful about the package release. There are some concerns:
```yaml
name: Release Code

on:
  push:
```
My suggestion is that, like the image build job, we should do the package build without pushing on every commit or in a nightly job, to make sure the job always works.
> my suggestion is that, like the image build job, we should do the package build without pushing on every commit or in a nightly job, to make sure the job always works.
good idea
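A build-without-publish job along those lines could look like this (a hypothetical sketch; the workflow, job, and artifact names are illustrative, not taken from this repo):

```yaml
# Hypothetical dry-run wheel build: triggered on PRs and nightly,
# it builds and archives the wheel but never uploads to PyPI.
name: build wheel (dry run)

on:
  pull_request:
  schedule:
    - cron: '0 0 * * *'   # nightly

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build wheel
        run: python3 setup.py bdist_wheel
      - name: Upload wheel as artifact
        uses: actions/upload-artifact@v4
        with:
          name: vllm-ascend-wheel
          path: dist/*.whl
```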
```shell
cd vllm-ascend && \
python3 setup.py bdist_wheel && \
ls ./dist && \
python3 -m twine upload dist/* -u __token__ -p ${PYPI_TOKEN}
```
Pushing to PyPI directly after the build is a little dangerous; we should make sure the package works as expected before pushing it to PyPI.
Better to deploy to Test PyPI first for validation, then proceed to the official PyPI after successful testing.
good idea!
> better to deploy to Test PyPI first for validation, then proceed to official PyPI after successful testing.
Can it be done by just switching to a test PYPI_TOKEN?
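One way to wire that up is a two-stage upload: publish to Test PyPI via twine's `--repository-url`, verify the package installs from there, and only then upload to the official index. A sketch (the `PYPI_TOKEN_TEST` secret and step names are hypothetical):

```yaml
# Hypothetical two-stage release steps: Test PyPI first, official PyPI second.
- name: Upload to Test PyPI
  run: >
    python3 -m twine upload --repository-url https://test.pypi.org/legacy/
    dist/* -u __token__ -p ${PYPI_TOKEN_TEST}
- name: Smoke-test install from Test PyPI
  run: python3 -m pip install --index-url https://test.pypi.org/simple/ vllm-ascend
- name: Upload to PyPI
  run: python3 -m twine upload dist/* -u __token__ -p ${PYPI_TOKEN}
```

Note that Test PyPI is a separate index with its own accounts, so it needs its own API token rather than a reused production one.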
This PR is unexpectedly closed because I have purged the original fork repo. I'll create another PR to continue this work soon.
### What this PR does / why we need it?
This is a continuing work of #716. This PR adds a workflow to build and release the wheel, and also to release the source to PyPI. We have 3 conditions that trigger the workflow:
1. PR to `main` and `*-dev`
2. push to `main` and `*-dev`
3. push of a tag named `v*`

Release to PyPI will only be done under condition 3. Under conditions 1 and 2, it will generate the .tar.gz, build the .whl, and upload them to GitHub artifacts, but will not release.

update: Will build the .whl and upload it to GitHub artifacts with a scheduled task.

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
All trigger conditions are well tested with my fork repo.

---------
Signed-off-by: Shuqiao Li <celestialli@outlook.com>
Signed-off-by: Yikun Jiang <yikunkero@gmail.com>
Co-authored-by: Yikun Jiang <yikunkero@gmail.com>
What this PR does / why we need it?
Add a GitHub action for releasing the code (.tar.gz) and wheel (.whl) to PyPI.
The action is triggered when a tag beginning with "v" is pushed.
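In GitHub Actions, that tag-based trigger is typically expressed in the workflow's `on:` block like this (a minimal sketch of the standard syntax):

```yaml
on:
  push:
    tags:
      - 'v*'   # e.g. v0.8.4rc2, v0.8.5
```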
Does this PR introduce any user-facing change?
No.
How was this patch tested?
Well tested with my own fork repo.