
Conversation


@jeejeelee jeejeelee commented Oct 8, 2025

Purpose

The following error occurs when using Python 3.11:

ERROR: Ignored the following versions that require a different python version: 0.0.0 Requires-Python >=3.13; 0.0.1 Requires-Python <3.13,>=3.12; 0.0.2 Requires-Python <3.13,>=3.12; 0.0.3 Requires-Python <3.13,>=3.12; 0.0.4 Requires-Python <3.13,>=3.12; 0.0.5 Requires-Python <3.13,>=3.12; 0.0.7 Requires-Python >=3.12; 0.0.8 Requires-Python >=3.12
ERROR: Could not find a version that satisfies the requirement gpt-oss>=0.0.7 (from versions: none)
ERROR: No matching distribution found for gpt-oss>=0.0.7
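This failure is pip's Requires-Python filtering: every published gpt-oss release declares metadata that excludes Python 3.11, so no candidate survives and resolution fails outright. A deliberately simplified sketch of that filtering (specifiers reduced to version tuples; real pip uses full PEP 440 specifiers):

```python
# Simplified sketch of pip's Requires-Python filtering behind the error
# above: every gpt-oss release excludes Python 3.11, so no candidate
# survives. Specifiers are reduced here to (min_incl, max_excl) tuples.
RELEASES = {
    "0.0.5": ((3, 12), (3, 13)),  # Requires-Python <3.13,>=3.12
    "0.0.7": ((3, 12), None),     # Requires-Python >=3.12
    "0.0.8": ((3, 12), None),     # Requires-Python >=3.12
}

def compatible(python_version):
    return [
        version
        for version, (lo, hi) in RELEASES.items()
        if python_version >= lo and (hi is None or python_version < hi)
    ]

assert compatible((3, 11)) == []  # -> "from versions: none"
assert compatible((3, 12)) == ["0.0.5", "0.0.7", "0.0.8"]
assert compatible((3, 13)) == ["0.0.7", "0.0.8"]
```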

FIX #26434

Test Plan

Test Result


Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing test command.
  • The test results, such as pasting the results comparison before and after, or e2e results
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.
  • (Optional) Release notes update. If your change is user facing, please update the release notes draft in the Google Doc.

Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>
@jeejeelee jeejeelee changed the title [Bugfix] Set the minimum python versionfor gpt-oss [Bugfix] Set the minimum python version for gpt-oss Oct 8, 2025
@mergify mergify bot added ci/build gpt-oss Related to GPT-OSS models labels Oct 8, 2025

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request fixes an installation failure of the gpt-oss dependency on Python versions below 3.12 by making it a conditional dependency. While this solves the immediate installation problem, it introduces a more subtle issue: users on older Python versions will experience runtime errors when trying to use the gpt-oss model, as the required package won't be installed. My review points out this degradation in user experience and recommends adding explicit runtime checks to inform the user about the version incompatibility, even though the relevant files are not part of this pull request.

  setproctitle # Used to set process names for better debugging and monitoring
  openai-harmony >= 0.0.3 # Required for gpt-oss
- gpt-oss >= 0.0.7
+ gpt-oss >= 0.0.7; python_version > '3.11'
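The appended `python_version > '3.11'` clause is a PEP 508 environment marker: pip evaluates it against the installing interpreter and skips the requirement entirely when it is false, so Python 3.11 installs no longer fail. A minimal pure-Python sketch of that comparison (real marker evaluation uses PEP 440 version semantics via pip's vendored `packaging` library):

```python
# Simplified evaluation of the marker "python_version > '3.11'".
# Real evaluation follows PEP 508/440; tuple comparison suffices here.
def marker_allows(python_version: str) -> bool:
    as_tuple = tuple(int(part) for part in python_version.split("."))
    return as_tuple > (3, 11)

assert marker_allows("3.12")      # requirement is installed
assert not marker_allows("3.11")  # requirement is skipped, not an error
assert not marker_allows("3.10")
```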

Severity: high

This change fixes an installation error by making the gpt-oss dependency conditional. However, this approach trades a clear installation-time failure for a silent runtime failure. Users on Python < 3.12 will be able to install the package, but will get a cryptic ImportError if they try to use the gpt-oss model. This is a poor user experience.

A better approach would be to add a runtime check within the gpt-oss model code to detect if the dependency is missing and provide a clear error message to the user, explaining that the model requires Python 3.12 or newer. While that file is not in this PR, proceeding with this change alone is problematic as it can lead to confusion.
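A sketch of such a guard, kept as a pure function so the version/installation branching is easy to test; the function name and messages here are illustrative, not code from this PR:

```python
# Hypothetical runtime guard (not part of this PR): give users on
# Python < 3.12 a clear message instead of a bare ImportError when the
# gpt-oss model path is hit. Names and wording are illustrative only.
import importlib.util
import sys

def gpt_oss_import_error(python_version, installed):
    """Return a user-facing error message, or None if gpt-oss is usable."""
    if installed:
        return None
    if python_version < (3, 12):
        return (
            "The gpt-oss model requires the `gpt-oss` package, which only "
            f"supports Python >= 3.12; you are running "
            f"{python_version[0]}.{python_version[1]}."
        )
    return "The gpt-oss model requires `gpt-oss`: pip install 'gpt-oss>=0.0.7'."

# At the model's import site one might call:
msg = gpt_oss_import_error(
    sys.version_info[:2],
    importlib.util.find_spec("gpt_oss") is not None,
)
```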

Collaborator

gpt-oss is an optional dependency in the code, but CI requires it. What about removing this dependency and adding it to requirements/test.in?

Collaborator

It was added here: 91ac7f7

Collaborator Author

Done in 30faf26

Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>

@heheda12345 heheda12345 left a comment


LGTM!

@github-project-automation github-project-automation bot moved this from To Triage to Ready in gpt-oss Issues & Enhancements Oct 8, 2025
@heheda12345 heheda12345 enabled auto-merge (squash) October 8, 2025 03:32
@github-actions github-actions bot added the ready ONLY add when PR is ready to merge/full CI is needed label Oct 8, 2025
Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>
@vllm-bot vllm-bot merged commit 0c52d6e into vllm-project:main Oct 9, 2025
81 of 85 checks passed
zhiyuan1i pushed a commit to zhiyuan1i/vllm that referenced this pull request Oct 9, 2025
Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>
@jeejeelee jeejeelee deleted the fix-gpt-oss-dep branch October 9, 2025 03:55
845473182 pushed a commit to dsxsteven/vllm_splitPR that referenced this pull request Oct 10, 2025
…to loader

* 'loader' of https://github.com/dsxsteven/vllm_splitPR: (778 commits)
  [torchao] Add support for ModuleFqnToConfig using regex (vllm-project#26001)
  Add: Support for multiple hidden layers in Eagle3 (vllm-project#26164)
  Enable `RMSNorm` substitution for Transformers backend (vllm-project#26353)
  [Model] Gemma3: Fix GGUF loading and quantization (vllm-project#26189)
  Bump Flashinfer to v0.4.0 (vllm-project#26326)
  Update Dockerfile and install runai-model-streamer[gcs] package (vllm-project#26464)
  [Core] Relax the LoRA  max rank (vllm-project#26461)
  [CI/Build] Fix model nightly tests (vllm-project#26466)
  [Hybrid]: Decouple Kernel Block Size from KV Page Size (vllm-project#24486)
  [Core][KVConnector] Propagate all tokens on resumed preemptions (vllm-project#24926)
  [MM][Doc] Add documentation for configurable mm profiling (vllm-project#26200)
  [Hardware][AMD] Enable FlexAttention backend on ROCm (vllm-project#26439)
  [Bugfix] Incorrect another MM data format in vllm bench throughput (vllm-project#26462)
  [Bugfix] Catch and log invalid token ids in detokenizer #2 (vllm-project#26445)
  [Minor] Change warning->warning_once in preprocess (vllm-project#26455)
  [Bugfix] Set the minimum python version for gpt-oss (vllm-project#26392)
  [Misc] Redact ray runtime env before logging (vllm-project#26302)
  Separate MLAAttention class from Attention (vllm-project#25103)
  [Attention] Register FLASHMLA_SPARSE (vllm-project#26441)
  [Kernels] Modular kernel refactor (vllm-project#24812)
  ...
xuebwang-amd pushed a commit to xuebwang-amd/vllm that referenced this pull request Oct 10, 2025
Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>
Signed-off-by: xuebwang-amd <xuebwang@amd.com>
Dhruvilbhatt pushed a commit to Dhruvilbhatt/vllm that referenced this pull request Oct 14, 2025
Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>
Signed-off-by: Dhruvil Bhatt <bhattdbh@amazon.com>
@wagerc97

Following the installation guide for spinning up gpt-oss with vLLM, I still get this error:

Successfully installed uv-0.9.2
Using Python 3.12.12 environment at: $HOME/micromamba/envs/vllm_oct25
  × No solution found when resolving dependencies:
  ╰─▶ Because there is no version of torch==2.9.0.dev20250804+cu128 and vllm==0.10.1+gptoss depends on
      torch==2.9.0.dev20250804+cu128, we can conclude that vllm==0.10.1+gptoss cannot be used.
      And because you require vllm==0.10.1+gptoss, we can conclude that your requirements are unsatisfiable.

lywa1998 pushed a commit to lywa1998/vllm that referenced this pull request Oct 20, 2025
Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>
alhridoy pushed a commit to alhridoy/vllm that referenced this pull request Oct 24, 2025
Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>
xuebwang-amd pushed a commit to xuebwang-amd/vllm that referenced this pull request Oct 24, 2025
Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>
Signed-off-by: xuebwang-amd <xuebwang@amd.com>
0xrushi pushed a commit to 0xrushi/vllm that referenced this pull request Oct 26, 2025
Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>
Signed-off-by: 0xrushi <6279035+0xrushi@users.noreply.github.com>

Labels

ci/build · gpt-oss (Related to GPT-OSS models) · ready (ONLY add when PR is ready to merge/full CI is needed)

Projects

Status: Done

Development

Successfully merging this pull request may close these issues.

[Bug]: gpt-oss dependency disallows vllm usage with python<=3.11

4 participants