Add support for CI testing #124
Conversation
Blocked on #128.
See #128 (comment), thx!
Auto-sync is disabled for draft pull requests in this repository. Workflows must be run manually.
/ok to test
/ok to test
/ok to test
/ok to test
/ok to test
/ok to test
/ok to test
Here are some updates since Friday:

It turns out this was a misunderstanding on my part, sorry! It's the other way around: it's the composite actions that do this, not reusable workflows; see, e.g., https://docs.github.com/en/actions/sharing-automations/avoiding-duplication#comparison-of-reusable-workflows-and-composite-actions. So I refactored in the opposite (and wrong) direction. Let us change all composite actions to reusable workflows in a follow-up PR, since CI is now working and there's no reason to delay.

This was another misunderstanding on my part, sorry again! In both cases (distinct jobs vs. distinct workflows), some handling of inputs/outputs is required; there is no way to share the env vars in either case.

As part of this I removed all CI scripts in commit 7b074f0. It is best that we focus on testing pip-based workflows for now and add conda next (which would be treated differently); mixing and matching is not ideal.
@pytest.fixture(scope="session", autouse=True)
def always_init_cuda():
    handle_return(driver.cuInit(0))
FYI @ksimpson-work, the CI was able to catch this issue: depending on how the tests are run, a test could end up running without CUDA ever being initialized, so we must ensure initialization ourselves by test start time.
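For reference, a minimal sketch of how such a session-wide fixture fits into a conftest.py — the import paths and the location of the handle_return helper below are assumptions, not necessarily the repository's actual layout:

```python
# conftest.py (sketch; module paths are assumed)
import pytest
from cuda.bindings import driver  # assumed driver binding module

from utils import handle_return  # hypothetical location of the error-checking helper


@pytest.fixture(scope="session", autouse=True)
def always_init_cuda():
    # Session-scoped and autouse: runs once before the first test in the
    # session, so cuInit(0) happens regardless of which test runs first.
    handle_return(driver.cuInit(0))
```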
    ctx = handle_return(driver.cuCtxGetCurrent())
    if int(ctx) == 0:
        # no active context, do nothing
        return
FYI @ksimpson-work, another issue caught by the CI (and also back in #261): a test could end early without a CUDA context set current, so we need to detect this at test teardown time.
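A rough sketch of how this guard could sit in a per-test setup/teardown fixture; the fixture name and the final cleanup call are illustrative assumptions, not the suite's actual code:

```python
import pytest
from cuda.bindings import driver  # assumed driver binding module

from utils import handle_return  # hypothetical helper, as above


@pytest.fixture
def init_cuda():  # hypothetical fixture name
    # Per-test setup (e.g. creating/pushing a context) would go here.
    yield
    # Teardown: a test may have destroyed its context, or failed before
    # creating one, so only clean up when a context is actually current.
    ctx = handle_return(driver.cuCtxGetCurrent())
    if int(ctx) == 0:
        # no active context, do nothing
        return
    handle_return(driver.cuCtxPopCurrent())  # illustrative cleanup step
```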
@@ -10,7 +10,6 @@
 import os
 import sys

-import cupy as cp
For now I treat CuPy as an optional test dependency, so any reference to CuPy in this file should be removed. (We're not using too much memory during tests anyway.)
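As a sketch of the common pytest pattern for an optional test dependency (not necessarily the exact mechanism used in this PR), CuPy-dependent tests can skip themselves when CuPy is absent:

```python
import pytest

# Skip this module at collection time if CuPy is not installed.
cp = pytest.importorskip("cupy")


def test_roundtrip_via_cupy():  # hypothetical test
    arr = cp.arange(10)
    assert int(arr.sum()) == 45
```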
def can_load_generated_ptx():
    _, driver_ver = cuda.cuDriverGetVersion()
    _, nvrtc_major, nvrtc_minor = nvrtc.nvrtcVersion()
    # Both sides use the driver's version encoding (major*1000 + minor*10),
    # e.g. 12.4 -> 12040; PTX newer than the driver may not be JIT'able.
    if nvrtc_major * 1000 + nvrtc_minor * 10 > driver_ver:
        return False
    return True
FYI @ksimpson-work, this is akin to a snippet I added to CuPy in the past (PTX might not be loadable/JIT'able if it is newer than what the driver supports):
https://github.com/cupy/cupy/blob/8eb16ac910e85c119a20f68a69de9a2e6034069c/tests/cupy_tests/core_tests/test_raw.py#L557-L568
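For illustration, a helper like this would typically gate a test via a skipif marker (the test name below is hypothetical):

```python
import pytest


@pytest.mark.skipif(
    not can_load_generated_ptx(),
    reason="PTX emitted by this nvrtc may be newer than the installed driver can JIT",
)
def test_launch_kernel_from_generated_ptx():  # hypothetical test name
    ...
```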
Thanks for the help, @sandeepd-nv!