
Add secondary test to kernel explainer pytests for stability in Volta #3282

Merged
3 commits merged into rapidsai:branch-0.17 on Dec 9, 2020

Conversation

@dantegd (Member) commented Dec 8, 2020

There is some odd behavior happening in the linear regression step on Volta GPUs, so this PR adds a secondary test to keep the tests stable for the 0.17 release.

@dantegd dantegd added 3 - Ready for Review Ready for review by team tests Unit testing for project labels Dec 8, 2020
@dantegd dantegd requested a review from a team as a code owner December 8, 2020 23:17
@dantegd dantegd added bug Something isn't working non-breaking Non-breaking change labels Dec 8, 2020
@JohnZed (Contributor) left a comment

Looks useful as a stopgap during further investigation! Thanks

```python
expected_sum = 0.99 <= (abs(np.sum(cp.asnumpy(
    cu_shap_values))) / abs(fx - expected)) <= 1.01

if not close_values:
```
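For context, this fallback check relies on SHAP's additivity property: the SHAP values for a sample should sum to f(x) − E[f(X)]. A minimal self-contained sketch of the ratio check, using plain NumPy in place of the CuPy arrays in the test (the function name and the toy values here are illustrative, not from the PR):

```python
import numpy as np

def additivity_ratio_ok(shap_values, fx, expected, rtol=0.01):
    """Check that the SHAP values roughly sum to fx - expected.

    By SHAP's additivity property, sum(shap_values) should equal
    f(x) - E[f(X)]; this accepts a ratio within 1 +/- rtol,
    mirroring the 0.99 <= ratio <= 1.01 bound in the test.
    """
    ratio = abs(np.sum(shap_values)) / abs(fx - expected)
    return (1 - rtol) <= ratio <= (1 + rtol)

# Toy example: values summing (almost exactly) to fx - expected pass.
shap = np.array([0.5, -0.2, 0.3])  # sums to 0.6
print(additivity_ratio_ok(shap, fx=1.6, expected=1.0))  # True
```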
Contributor
Do we want the `abs`? It seems like there should not be a sign flip.

How about just `np.allclose(1.00, np.sum(cp.asnumpy(..)) / (fx - expected), atol=tolerance)`, so we're consistent with the other tolerance use?
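To illustrate the suggested formulation, here is a small sketch with hypothetical stand-in values (`shap_sum`, `fx`, `expected`, and `tolerance` are made up for the example; `shap_sum` plays the role of `np.sum(cp.asnumpy(cu_shap_values))` in the test):

```python
import numpy as np

# Hypothetical stand-ins for the quantities in the test.
shap_sum = 0.598          # would be np.sum(cp.asnumpy(cu_shap_values))
fx, expected = 1.6, 1.0   # model output and expected (base) value
tolerance = 0.01

# The suggested form compares the ratio to 1.0 directly, reusing the
# same tolerance convention as the rest of the test suite.
ok = np.allclose(1.00, shap_sum / (fx - expected), atol=tolerance)
print(ok)  # True: 0.598 / 0.6 is within 0.01 of 1.0
```

Note that dropping the `abs` makes the check also fail on a sign flip, which is the reviewer's point.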

```python
print("golden_result_values")
print(golden_result_values)

assert close_values or expected_sum
```
Contributor
More generally, I think this is livable to stabilize CI, but I assume we'll commit to getting rid of it before graduating from experimental to non-experimental? It's a nice invariant to maintain, but not the most precise test. Maybe add it to the remaining SHAP topics issue?

@codecov-io commented Dec 9, 2020

Codecov Report

Merging #3282 (a266992) into branch-0.17 (d063ca4) will decrease coverage by 0.00%.
The diff coverage is n/a.


```
@@               Coverage Diff               @@
##           branch-0.17    #3282      +/-   ##
===============================================
- Coverage        71.46%   71.45%   -0.01%     
===============================================
  Files              205      205              
  Lines            16595    16594       -1     
===============================================
- Hits             11859    11858       -1     
  Misses            4736     4736              
```
| Impacted Files | Coverage Δ |
|---|---|
| python/cuml/experimental/explainer/common.py | 92.06% <ø> (-0.13%) ⬇️ |

Continue to review full report at Codecov.

Legend:
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update d063ca4...443a141.

@dantegd dantegd merged commit 5ff2794 into rapidsai:branch-0.17 Dec 9, 2020