
Add Nemotron to PP_SUPPORTED_MODELS #6863

Merged
merged 1 commit into main on Jul 27, 2024

Conversation

@mgoin (Collaborator) commented Jul 27, 2024

Followup to #6611 (comment)
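
For context, the change itself is small: the Nemotron architecture name is appended to the allow-list that gates pipeline-parallel support. The sketch below only illustrates that kind of edit; the list name, its location (assumed here to be vllm/config.py), and the surrounding entries are assumptions, not a verbatim copy of the PR diff.

# Illustrative sketch only: surrounding entries and the exact location of
# this list in the vLLM codebase are assumptions, not the real diff.
_PP_SUPPORTED_MODELS = [
    "GPT2LMHeadModel",
    "LlamaForCausalLM",
    "MistralForCausalLM",
    "NemotronForCausalLM",  # added: lets models such as nvidia/Minitron-4B-Base run with PP
    "Qwen2ForCausalLM",
]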

Tested TP=2 + PP=2 with server:

vllm serve nvidia/Minitron-4B-Base --tensor-parallel-size 2 --pipeline-parallel-size 2

Client request and result:

curl http://0.0.0.0:8000/v1/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer OPENAI_API_KEY" \
  -d '{
    "model": "nvidia/Minitron-4B-Base",
    "prompt": "The capital of the USA is" 
  }'
 
{"id":"cmpl-7029ed6cf71347c999a01361418bc6e2","object":"text_completion","created":1722094724,"model":"nvidia/Minitron-4B-Base","choices":[{"index":0,"text":" Washington D. C. which is a Fictional World. It was first mentioned in","logprobs":null,"finish_reason":"length","stop_reason":null}],"usage":{"prompt_tokens":6,"total_tokens":22,"completion_tokens":16}}%  


👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, which consists of a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of the default ones by unblocking the steps in your fast-check build in the Buildkite UI.

Once the PR is approved and ready to go, please make sure to run full CI as it is required to merge (or just use auto-merge).

To run full CI, you can do one of these:

  • Comment /ready on the PR
  • Add ready label to the PR
  • Enable auto-merge.

🚀

@mgoin (Collaborator, Author) commented Jul 27, 2024

/ready

@github-actions github-actions bot added the ready label (ONLY add when PR is ready to merge/full CI is needed) Jul 27, 2024
@youkaichao (Member) commented Jul 27, 2024
You can test the correctness locally with https://github.com/vllm-project/vllm/blob/main/tests/distributed/test_pipeline_parallel.py

@mgoin (Collaborator, Author) commented Jul 27, 2024

Thanks @youkaichao, all tests pass when substituting Minitron for Llama.

@mgoin mgoin enabled auto-merge (squash) July 27, 2024 21:52
@mgoin mgoin merged commit b1366a9 into main Jul 27, 2024
87 checks passed
@youkaichao youkaichao deleted the nemo-pipeline-parallel branch July 27, 2024 22:05
tjohnson31415 added a commit to tjohnson31415/vllm that referenced this pull request Jul 30, 2024
* upstream/main: (66 commits)
  [Bugfix] Fix PaliGemma MMP (vllm-project#6930)
  [TPU] Fix greedy decoding (vllm-project#6933)
  [Kernel] Tuned int8 kernels for Ada Lovelace (vllm-project#6848)
  [Kernel] Fix marlin divide-by-zero warnings (vllm-project#6904)
  [ci] GHA workflow to remove ready label upon "/notready" comment (vllm-project#6921)
  [Kernel] Remove unused variables in awq/gemm_kernels.cu (vllm-project#6908)
  [Frontend] New `allowed_token_ids` decoding request parameter (vllm-project#6753)
  [Bugfix] Allow vllm to still work if triton is not installed. (vllm-project#6786)
  [TPU] Support tensor parallelism in async llm engine (vllm-project#6891)
  [Kernel] Fix deprecation function warnings squeezellm quant_cuda_kernel (vllm-project#6901)
  [Core] Reduce unnecessary compute when logprobs=None (vllm-project#6532)
  [Kernel] Tuned FP8 Kernels for Ada Lovelace (vllm-project#6677)
  [Model] Initialize support for InternVL2 series models (vllm-project#6514)
  [Misc] Pass cutlass_fp8_supported correctly in fbgemm_fp8 (vllm-project#6871)
  Add Nemotron to PP_SUPPORTED_MODELS (vllm-project#6863)
  [Kernel] Increase precision of GPTQ/AWQ Marlin kernel (vllm-project#6795)
  [TPU] Reduce compilation time & Upgrade PyTorch XLA version  (vllm-project#6856)
  [Docs] Add RunLLM chat widget (vllm-project#6857)
  [Model] Initial support for BLIP-2 (vllm-project#5920)
  [CI/Build][Doc] Update CI and Doc for VLM example changes (vllm-project#6860)
  ...
kylesayrs pushed a commit to neuralmagic/vllm that referenced this pull request Aug 17, 2024
Alvant pushed a commit to compressa-ai/vllm that referenced this pull request Oct 26, 2024
Signed-off-by: Alvant <alvasian@yandex.ru>