
Conversation

@danielenricocahall
Contributor

In the 9.0.0 release of pytest, the behavior was changed so that a raised SkipTest is reported as an error rather than skipping the test, as discussed in this issue. This is causing various tests in sklearn_test.py to fail in CI, because sklearn's estimator checks raise SkipTest for estimators tagged as non-deterministic:

# Excerpt from sklearn's estimator checks. For context: SkipTest is
# unittest's SkipTest, and get_tags comes from sklearn.utils (exact
# module paths vary across sklearn versions).
def check_pipeline_consistency(name, estimator_orig):
    if get_tags(estimator_orig).non_deterministic:
        msg = name + " is non deterministic"
        raise SkipTest(msg)

To mitigate this problem, we can pin pytest to less than 9.0.0 for the time being, which currently installs 8.4.2. If a future 9.x release restores the old behavior, the pin can be removed.
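For example, the pin in requirements-common.txt could look like the following (the explanatory comment is a suggestion, not necessarily the exact wording in the PR):

# Pin below 9.0.0 until pytest restores the old SkipTest behavior
# (or sklearn_test.py is refactored); remove this pin once resolved.
pytest<9.0.0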

Alternatively, the tests could be updated to expect a SkipTest to be raised, although this feels less ergonomic and would have to be reverted if the old behavior is restored.
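A minimal sketch of that alternative, assuming a pytest-style test (the check here is a hypothetical stand-in, not the actual sklearn_test.py code):

import pytest
from unittest import SkipTest

# Hypothetical stand-in for the sklearn check quoted above.
def check_pipeline_consistency(name, estimator):
    raise SkipTest(name + " is non deterministic")

def test_consistency_check_raises_skip():
    # Assert that the check raises SkipTest, instead of letting
    # pytest 9 report the raised exception as an error.
    with pytest.raises(SkipTest, match="non deterministic"):
        check_pipeline_consistency("MyEstimator", object())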

@gemini-code-assist
Contributor

Summary of Changes

Hello @danielenricocahall, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request resolves recent CI failures caused by a behavioral change in pytest 9.0.0, which now reports SkipTest exceptions as errors rather than skips. That change was causing tests in sklearn_test.py to fail, since sklearn's estimator checks raise SkipTest for non-deterministic estimators. The solution is to temporarily pin the pytest dependency to a version below 9.0.0, keeping the CI pipeline stable and functional.

Highlights

  • Pytest Version Pin: Pinned the pytest dependency to a version less than 9.0.0 in requirements-common.txt to prevent CI failures.
  • CI Test Stability: Resolved CI test failures that arose from pytest 9.0.0's new behavior of treating SkipTest exceptions as errors, which impacted sklearn_test.py.

@gemini-code-assist
Contributor

Code Review

This pull request pins pytest to versions older than 9.0.0 to resolve a CI failure caused by a breaking change in how SkipTest is handled. This is a good and necessary short-term fix. To improve future maintainability, I've suggested adding a comment to requirements-common.txt to document the reason for this version pin, which will be helpful when the pin can be removed.

@codecov-commenter

codecov-commenter commented Nov 10, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 82.66%. Comparing base (19ca9c1) to head (8bfe540).

Additional details and impacted files
@@           Coverage Diff           @@
##           master   #21839   +/-   ##
=======================================
  Coverage   82.66%   82.66%           
=======================================
  Files         577      577           
  Lines       59477    59477           
  Branches     9329     9329           
=======================================
  Hits        49167    49167           
  Misses       7907     7907           
  Partials     2403     2403           
Flag               Coverage Δ
keras              82.48% <ø> (ø)
keras-jax          63.31% <ø> (ø)
keras-numpy        57.54% <ø> (ø)
keras-openvino     34.34% <ø> (ø)
keras-tensorflow   64.12% <ø> (ø)
keras-torch        63.60% <ø> (ø)

Flags with carried forward coverage won't be shown.

☔ View full report in Codecov by Sentry.

@fchollet
Collaborator

Thanks for the PR! We want to be able to keep updating pytest going forward, so it would be preferable to refactor sklearn_test.py so that it doesn't trigger a failure.
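A minimal sketch of one way to do this (illustrative only; the actual change landed in 4d30a7f below): catch the SkipTest raised by a check and convert it into an explicit pytest skip, which pytest 9 still honors:

import pytest
from unittest import SkipTest

# Illustrative wrapper, not the actual sklearn_test.py code: run an
# estimator check and translate a raised SkipTest into pytest.skip,
# so pytest >= 9.0.0 does not report it as an error.
def run_check(check, name, estimator):
    try:
        check(name, estimator)
    except SkipTest as e:
        pytest.skip(str(e))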

@danielenricocahall
Contributor Author

handled with 4d30a7f
