xfail for test_G_star_Row_Standardized #336

Merged

Conversation

jGaboardi (Member)

This PR resolves #331

@jGaboardi changed the title from xfail for test_getisord.py::TestGetisG::test_G_star_Row_Standardized to xfail for test_G_star_Row_Standardized on Jul 20, 2024
@jGaboardi self-assigned this on Jul 20, 2024
@jGaboardi added the ci label on Jul 20, 2024
@jGaboardi marked this pull request as draft on July 20, 2024 at 18:27
@jGaboardi marked this pull request as ready for review on July 20, 2024 at 18:47
@jGaboardi requested a review from sjsrey on July 20, 2024 at 18:54
Comment on lines 78 to 79
if not record:
    pytest.fail("Expected a warning!")
Member

Why do you do this?

Member Author

As a fail-safe, so the test fails exactly when no warnings are emitted. I could not figure out how to add that granularity to parametrize_w without it affecting other tests. Maybe I'm missing something easy? Any thoughts?

I couldn't determine the proper raises argument for pytest.mark.xfail and I think having only a general failure is too vague.
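For context, a minimal sketch of the fail-safe pattern being described. The names are placeholders rather than the actual esda test code, and it assumes a warnings.catch_warnings(record=True) block; the real test may use pytest.warns instead, which already fails on its own (the point raised below).

```python
import warnings

import pytest


def do_something():
    # Hypothetical stand-in for the code under test; assumed to warn.
    warnings.warn("row standardization assumed", UserWarning)


def test_emits_warning():
    # Record warnings manually; this context manager does not fail the
    # test by itself if nothing is emitted.
    with warnings.catch_warnings(record=True) as record:
        warnings.simplefilter("always")
        do_something()
    # Fail-safe: the test fails exactly when no warnings were captured.
    if not record:
        pytest.fail("Expected a warning!")
```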

Member

But this makes no difference, no? You are just doing explicitly what pytest would do here anyway. I probably don't follow here...

We can also split this into two tests, one checking the warning and the other the values.

Member Author

> I probably don't follow here...

I didn't explain well. What I meant is that I could not find the proper pytest error type to pass to the xfail(raises=) parameter: the error raised when no warnings are emitted within a pytest.warns context.

> We can also split this into two tests, one checking the warning and the other the values.

This may be the way to go if there isn't a more elegant solution that I am overlooking.
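For reference, an observation about pytest itself rather than anything settled in this thread: when a pytest.warns block sees no warning, it raises pytest's Failed outcome exception, which is reachable as pytest.fail.Exception. A small sketch illustrating the type (function name is illustrative):

```python
import pytest


def quiet():
    # Hypothetical function that emits no warning at all.
    return None


def test_shows_the_error_type():
    # pytest.warns raises the Failed outcome exception ("DID NOT WARN")
    # when the block completes without the expected warning; it can be
    # caught as pytest.fail.Exception.
    with pytest.raises(pytest.fail.Exception):
        with pytest.warns(UserWarning):
            quiet()
```

Whether passing that type to pytest.mark.xfail(raises=...) behaves as intended is not verified here.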

Member

I'd probably split it.
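To make the suggestion concrete, a hedged sketch of the split; the function name, warning category, expected value, and xfail reason are illustrative placeholders, not the actual esda code. One test asserts only the warning, the other asserts only the values, so the xfail marker targets the value check alone.

```python
import warnings

import pytest


def g_star_row_standardized():
    # Hypothetical stand-in for the statistic under test.
    warnings.warn("row-standardized weights supplied", UserWarning)
    return 0.5


def test_G_star_warns():
    # pytest.warns fails by itself if nothing is emitted, so no extra
    # guard is needed here.
    with pytest.warns(UserWarning):
        g_star_row_standardized()


@pytest.mark.xfail(reason="known discrepancy, see the linked issue", strict=False)
def test_G_star_values():
    # Only the numerical result is checked; the xfail marker therefore
    # cannot mask a missing warning.
    with warnings.catch_warnings():
        warnings.simplefilter("ignore")
        assert g_star_row_standardized() == pytest.approx(0.5)
```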


codecov bot commented Jul 20, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 81.3%. Comparing base (c6ca25f) to head (91df530).
Report is 2 commits behind head on main.


@@          Coverage Diff          @@
##            main    #336   +/-   ##
=====================================
  Coverage   81.3%   81.3%           
=====================================
  Files         24      24           
  Lines       3333    3331    -2     
=====================================
  Hits        2709    2709           
+ Misses       624     622    -2     

see 1 file with indirect coverage changes

@martinfleis merged commit faff9fd into pysal:main on Jul 21, 2024
14 checks passed
@jGaboardi deleted the GH331_xfail_test_G_star_Row_Standardized branch on July 21, 2024 at 15:20
Labels: ci

Successfully merging this pull request may close these issues: weird Python 3.10 oldest error (#331)

3 participants