Conversation

@rgsl888prabhu
Collaborator

@rgsl888prabhu rgsl888prabhu commented Aug 14, 2025

Description

New GPU architectures were added to support new chips, which pushed the wheel size above the previous threshold. This change raises the threshold to accommodate the larger wheel.

Issue

Checklist

  • I am familiar with the Contributing Guidelines.
  • Testing
    • New or existing tests cover these changes
    • Added tests
    • Created an issue to follow-up
    • NA
  • Documentation
    • The documentation is up to date with these changes
    • Added new documentation
    • NA

@rgsl888prabhu rgsl888prabhu self-assigned this Aug 14, 2025
@rgsl888prabhu rgsl888prabhu requested a review from a team as a code owner August 14, 2025 18:34
@rgsl888prabhu rgsl888prabhu added the non-breaking Introduces a non-breaking change label Aug 14, 2025
@rgsl888prabhu rgsl888prabhu requested a review from AyodeAwe August 14, 2025 18:34
@rgsl888prabhu rgsl888prabhu added the improvement Improves an existing functionality label Aug 14, 2025
Member

@jameslamb jameslamb left a comment


Why do we "need" to increase this?

The latest libcuopt wheel build succeeded, and the libcuopt-cu12 wheels were around 560 MB for both x86_64 and arm64:

----- package inspection summary -----
file size
  * compressed size: 0.561G
  * uncompressed size: 0.86G
  * compression space saving: 34.8%
contents

(build link)

Around 140 MB of extra space should be PLENTY to not disrupt development here. I honestly would even recommend decreasing this to something like 625M to be notified sooner of unexpected growth in the package sizes.
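For reference, the "compression space saving" figure in the inspection summary follows directly from the two reported sizes. A quick sketch (the 0.561 GB compressed and 0.86 GB uncompressed figures are taken from the summary above):

```python
# Reproduce the "compression space saving" figure from the wheel
# inspection summary: saving = 1 - (compressed / uncompressed).
compressed_gb = 0.561    # compressed wheel size from the summary
uncompressed_gb = 0.86   # uncompressed size from the summary

saving = 1 - compressed_gb / uncompressed_gb
print(f"compression space saving: {saving:.1%}")  # -> 34.8%
```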

# PyPI limit is 700 MiB, fail CI before we get too close to that
# 11.X size is 300M compressed and 12.x size is 600M compressed
-max_allowed_size_compressed = '700M'
+max_allowed_size_compressed = '775M'
Member


The comment above this is not accurate. There is no "PyPI limit" for this project... libcuopt-cu12 wheels are not published to pypi.org, only to pypi.nvidia.com (which does not have size limits).

That comment should be rewritten to something simple and future-proof, like this:

# detect when package size grows significantly
max_allowed_size_compressed = '500M'
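For context, the `max_allowed_size_compressed` key matches the configuration syntax of pydistcheck, which can be configured in `pyproject.toml`. The thread does not show the actual file, so the fragment below is a hypothetical sketch of where such a limit would live, assuming pydistcheck is the tool enforcing it:

```toml
# Hypothetical pyproject.toml fragment (assumption: the limit is a
# pydistcheck setting; the tool is not named in the thread).
[tool.pydistcheck]
# detect when package size grows significantly
max_allowed_size_compressed = '775M'
```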

Member

@jameslamb jameslamb left a comment


Approving this. Please change the title to something like:

Bump libcuopt size limit to 775MiB

I think we discovered offline that it's not mainly papilo adding to this, but the expanded set of GPU architectures: rapidsai/rapids-cmake#897

@rgsl888prabhu rgsl888prabhu changed the title Bump libcuopt size for wheel since we have added papilo Bump libcuopt size to 775MiB Aug 15, 2025
@rgsl888prabhu
Collaborator Author

/merge

@rapids-bot rapids-bot bot merged commit 59d9bf4 into NVIDIA:branch-25.10 Aug 15, 2025
73 checks passed