
Conversation

ShangmingCai
Contributor

@ShangmingCai commented Mar 7, 2025

This PR refactors the speculative decoding configuration and tests according to the discussion in #13601.

Major changes:

  • It makes the speculative decoding configuration hierarchical, like the compilation config: users now provide a single JSON string containing all speculative-decoding settings and pass it in with --speculative-config (see the usage sketch after this list). After the next release, I will submit a follow-up PR to remove all stale speculative-related CLI args and do some cleanup.
  • It categorizes speculative-related configurations and hierarchically refactors the docstring for class SpeculativeConfig, which now distinguishes configurable parameters from non-configurable internal parameters. Configurable parameters mainly consist of all previous CLI args and are categorized into 3 types:
class SpeculativeConfig:
    """
    Configuration for speculative decoding.
    Configurable parameters include:
    - General Speculative Decoding Control:
        - num_speculative_tokens (int): The number of speculative
            tokens, if provided. It will default to the number in the draft
            model config if present; otherwise, it is required.
        - model (Optional[str]): The name of the draft model, eagle head,
            or additional weights, if provided.
        - method (Optional[str]): The name of the speculative method to use.
            If users provide and set the `model` param, the speculative method
            type will be detected automatically if possible; if the `model`
            param is not provided, the method name must be given explicitly.
            - Possible values:
                - ngram
                    Related additional configuration:
                    - prompt_lookup_max (Optional[int]):
                        Maximum size of ngram token window when using Ngram
                        proposer, required when method is set to ngram.
                    - prompt_lookup_min (Optional[int]):
                        Minimum size of ngram token window when using Ngram
                        proposer, if provided. Defaults to 1.
                - eagle
                - medusa
                - mlp_speculator
                - draft_model
        - acceptance_method (str): The method to use for accepting draft
            tokens. This can take two possible values: 'rejection_sampler' and
            'typical_acceptance_sampler' for RejectionSampler and
            TypicalAcceptanceSampler respectively. If not specified, it
            defaults to 'rejection_sampler'.
            - Possible values:
                - rejection_sampler
                - typical_acceptance_sampler
                    Related additional configuration:
                    - posterior_threshold (Optional[float]):
                        A threshold value that sets a lower bound on the
                        posterior probability of a token in the target model
                        for it to be accepted. This threshold is used only
                        when we use the TypicalAcceptanceSampler for token
                        acceptance.
                    - posterior_alpha (Optional[float]):
                        Scaling factor for entropy-based threshold, applied
                        when using TypicalAcceptanceSampler.
        - draft_tensor_parallel_size (Optional[int]): The degree of the tensor
            parallelism for the draft model. Can only be 1 or the same as the
            target model's tensor parallel size.
        - disable_logprobs (bool): If set to True, token log probabilities are
            not returned during speculative decoding. If set to False, token
            log probabilities are returned according to the log probability
            settings in SamplingParams. If not specified, it defaults to True.

    - Draft Model Configuration:
        - quantization (Optional[str]): Quantization method that was used to
            quantize the draft model weights. If None, we assume the
            model weights are not quantized. Note that it only takes effect
            when using the draft model-based speculative method.
        - max_model_len (Optional[int]): The maximum model length of the
            draft model. Used when testing the ability to skip
            speculation for some sequences.
        - revision: The specific model version to use for the draft model. It
            can be a branch name, a tag name, or a commit id. If unspecified,
            will use the default version.
        - code_revision: The specific revision to use for the draft model code
            on Hugging Face Hub. It can be a branch name, a tag name, or a
            commit id. If unspecified, will use the default version.

    - Advanced Control:
        - disable_mqa_scorer (bool): Disable the MQA scorer and fall back to
            batch expansion for scoring proposals. If not specified, it
            defaults to False.
        - disable_by_batch_size (Optional[int]): Disable speculative decoding
            for new incoming requests when the number of enqueued requests is
            larger than this value, if provided.

    Although the parameters above are structured hierarchically, there is no
    need to nest them during configuration.
    """
  • It refines parameter naming. Since all parameters are now encapsulated within --speculative-config, prefixes like "speculative_" have been dropped for brevity and clarity. Several longer advanced parameter names that are self-explanatory have been left unshortened. Please let me know if I should revert any improper changes; if there are other parameter names that should be streamlined, I can modify them as well.
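
For reference, here is a minimal usage sketch of the new config in the offline API. The keys come from the docstring above; the model name and values are placeholders, and passing a Python dict via speculative_config is assumed to mirror the JSON string accepted by --speculative-config on the CLI:

from vllm import LLM

# Hedged sketch, not verbatim from the PR. The flat (non-nested) dict
# mirrors the JSON string accepted by --speculative-config.
llm = LLM(
    model="meta-llama/Llama-3.1-8B-Instruct",  # placeholder target model
    speculative_config={
        "method": "ngram",
        "num_speculative_tokens": 5,
        "prompt_lookup_max": 4,
        "prompt_lookup_min": 2,
    },
)
outputs = llm.generate(["The capital of France is"])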

Things I wanted to achieve but ran into difficulty with:

  • I initially intended to simplify the hierarchical configuration by modeling it after CompilationConfig, using BaseModel and the from_cli method. However, the speculative config is unusual in that certain internal parameters, such as target_model_config, must be passed in during its initialization. This dependency means the speculative config can only be initialized after certain parameters have been set up within the engine, making direct use of abstractions like BaseModel and initialization methods such as from_cli infeasible (a sketch of the resulting ordering constraint is below).
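
To make the ordering constraint concrete, here is a minimal sketch; the helper name build_speculative_config is hypothetical (not from the PR), and the constructor call assumes the keyword shape described in the docstring:

import json

# The JSON string from --speculative-config cannot be turned into a
# SpeculativeConfig in a standalone from_cli hook, because the engine
# must first build internal inputs such as target_model_config.
def build_speculative_config(cli_json: str, target_model_config):
    user_params = json.loads(cli_json)  # flat, non-nested user settings
    return SpeculativeConfig(
        target_model_config=target_model_config,  # internal, engine-provided
        **user_params,                            # user-configurable fields
    )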

Let me know if I should make more changes to polish it. I have run all spec-decode-related tests in the vllm/tests/spec_decode directory locally, and both the original and refactored tests pass successfully.

CC List: @LiuXiaoxuanPKU @comaniac @WoosukKwon

Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>

github-actions bot commented Mar 7, 2025

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, which starts with a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@mergify bot added the documentation (Improvements or additions to documentation) and speculative-decoding labels Mar 7, 2025
Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>
Comment on lines -1925 to -1947
# TODO: The user should be able to specify revision/max model len
# for the draft model. It is not currently supported.
draft_revision = None
draft_code_revision = None
Contributor Author

@ShangmingCai Mar 7, 2025

Almost forgot to mention that I made these two parameters configurable. If these parameters are unnecessary as configurable options, I will remove them.
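
For example, a hedged sketch of the two new keys alongside the rest of the flat config (the draft model name and all values here are placeholders):

# Hypothetical values, shown only to illustrate the newly configurable
# draft-model fields passing through the same flat JSON/dict.
spec_config = {
    "model": "my-org/draft-model",  # placeholder draft model
    "num_speculative_tokens": 5,
    "revision": "main",             # draft model weights version
    "code_revision": "main",        # draft model code version
    "max_model_len": 2048,
}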

@ShangmingCai
Contributor Author

I am also considering shortening draft_tensor_parallel_size to draft_tp; I feel the most commonly used parameters should be as short as possible.

Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>
@mergify bot added the v1 label Mar 12, 2025

mergify bot commented Mar 12, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @ShangmingCai.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify bot added the needs-rebase label Mar 12, 2025
@mergify bot removed the needs-rebase label Mar 12, 2025
@ShangmingCai changed the title from "[Usage] Refactor speculative decoding configuration and tests" to "[V1][Usage] Refactor speculative decoding configuration and tests" Mar 13, 2025

mergify bot commented Mar 16, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @ShangmingCai.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork


mergify bot commented Mar 20, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @ShangmingCai.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify bot added the needs-rebase label Mar 20, 2025
@mergify bot removed the needs-rebase label Mar 20, 2025
  self.use_spec_decode = True
  # TODO: find a better way to check if we are using ngram.
- assert self.speculative_config.ngram_prompt_lookup_min, \
+ assert self.speculative_config.method == "ngram", \
Contributor Author

@comaniac BTW, I think this new param kinda helps us if we want to check the speculative method type sometimes.
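
As a hedged illustration (the proposer class names below are placeholders, not from the PR), the explicit method field lets call sites dispatch directly instead of probing method-specific parameters:

# Before, code had to infer the method from side-channel fields like
# ngram_prompt_lookup_min; now it can branch on the method name itself.
if self.speculative_config.method == "ngram":
    proposer = NgramProposer(self.speculative_config)  # placeholder class
elif self.speculative_config.method == "eagle":
    proposer = EagleProposer(self.speculative_config)  # placeholder class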

Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>
@LiuXiaoxuanPKU added the ready label (ONLY add when PR is ready to merge/full CI is needed) Mar 22, 2025
@ShangmingCai
Contributor Author

@LiuXiaoxuanPKU I see the ready label and the CI tests; I'm happy that this PR is nearly complete. I just checked the failed CI runs, and it seems there are some Hugging Face connection timeouts. Hopefully retriggering these tests will fix them, and I haven't missed anything important :)

@LiuXiaoxuanPKU merged commit 50c9636 into vllm-project:main Mar 23, 2025
46 checks passed
erictang000 pushed a commit to erictang000/vllm that referenced this pull request Mar 25, 2025
…lm-project#14434)

Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>
wrmedford pushed a commit to wrmedford/vllm that referenced this pull request Mar 26, 2025
…lm-project#14434)

Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>
Signed-off-by: Wes Medford <wryanmedford@gmail.com>
lulmer pushed a commit to lulmer/vllm that referenced this pull request Apr 7, 2025
…lm-project#14434)

Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>
Signed-off-by: Louis Ulmer <ulmerlouis@gmail.com>
lk-chen pushed a commit to lk-chen/vllm that referenced this pull request Apr 29, 2025
…lm-project#14434)

Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>
shreyankg pushed a commit to shreyankg/vllm that referenced this pull request May 3, 2025
…lm-project#14434)

Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>
RichardoMrMu pushed a commit to RichardoMrMu/vllm that referenced this pull request May 12, 2025
…lm-project#14434)

Signed-off-by: Shangming Cai <caishangming@linux.alibaba.com>
Signed-off-by: Mu Huai <tianbowen.tbw@antgroup.com>
