
Check for required headers before compilation #514


Merged: 6 commits, Mar 14, 2025

Conversation

vzhurba01 (Collaborator):

Close #283

Since graphics APIs and types are redefined, we don't need to include or parse those headers. Note that in the future, issue #488 will extend the required headers to graphics headers as well.

Also fix conditional EGL API inclusions. They should always be included because all of their types are already redefined and available.
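
A minimal sketch of the kind of early check this change introduces (the names REQUIRED_HEADERS and assert_required_headers_exist below are illustrative, not the actual setup.py code):

import os

# Hypothetical hard header dependencies; the real list lives in cuda.bindings' setup.py.
REQUIRED_HEADERS = ["cuda.h", "cudaProfiler.h"]

def assert_required_headers_exist(include_dirs):
    """Fail the build early, with a clear error, if any required header is absent."""
    missing = []
    for header in REQUIRED_HEADERS:
        # Accept the header if any candidate include directory provides it.
        if not any(os.path.exists(os.path.join(d, header)) for d in include_dirs):
            missing.append(header)
    if missing:
        raise RuntimeError(f"Missing required headers: {missing}")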
vzhurba01 added the enhancement (Any code-related improvements), P1 (Medium priority - Should do), and cuda.bindings (Everything related to the cuda.bindings module) labels on Mar 13, 2025
vzhurba01 self-assigned this on Mar 13, 2025

copy-pr-bot bot commented Mar 13, 2025

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

vzhurba01 (Collaborator, Author):

/ok to test


vzhurba01 (Collaborator, Author):

/ok to test

{{if 'cudaEGLStreamProducerPresentFrame' in found_functions}}cdef cudaError_t _cudaEGLStreamProducerPresentFrame(cudaEglStreamConnection* conn, cudaEglFrame eglframe, cudaStream_t* pStream) except ?cudaErrorCallRequiresNewerDriver nogil{{endif}}
{{if 'cudaEGLStreamProducerReturnFrame' in found_functions}}cdef cudaError_t _cudaEGLStreamProducerReturnFrame(cudaEglStreamConnection* conn, cudaEglFrame* eglframe, cudaStream_t* pStream) except ?cudaErrorCallRequiresNewerDriver nogil{{endif}}
{{if 'cudaGraphicsResourceGetMappedEglFrame' in found_functions}}cdef cudaError_t _cudaGraphicsResourceGetMappedEglFrame(cudaEglFrame* eglFrame, cudaGraphicsResource_t resource, unsigned int index, unsigned int mipLevel) except ?cudaErrorCallRequiresNewerDriver nogil{{endif}}
{{if True}}cdef cudaError_t _cudaEGLStreamProducerPresentFrame(cudaEglStreamConnection* conn, cudaEglFrame eglframe, cudaStream_t* pStream) except ?cudaErrorCallRequiresNewerDriver nogil{{endif}}
Collaborator:

Is there an easy way to leave a helpful hint here?

What I have in mind:

# For tagging <explanation here>:
helpful_hint_here = True

Then, instead of {{if True}}, use {{if helpful_hint_here}}.

vzhurba01 (Collaborator, Author):

For a complete solution it's not that easy. Updating this file and placing the explanation in setup.py is simple; the problem is that this pattern is used all across cuda.bindings as part of the auto-generation.

On a major release we'll resolve issue #488, and I think most if not all of these instances will disappear, so let's revisit this then.

Collaborator:

Sounds good. Thanks for the explanation!

@@ -2206,6 +2206,7 @@ cdef cudaError_t _cudaGetTextureObjectResourceDesc(cudaResourceDesc* pResDesc, c
return err

{{endif}}
{{if True}}
Collaborator:

Could we use the helpful_hint_here, too?

Out of curiosity: Are these {{if True}} actually required? — Even if not, I think helpful hints here are useful.

vzhurba01 (Collaborator, Author):

They're not needed, but I want them there for consistency with the rest of the source base. Visually at least, it helps identify which sections use redefined types rather than an extern declaration from a C header.

Overall, same comment as above; let's revisit this after #488.

}

# Assert that all headers exist
Collaborator:

It's significantly safer / less error prone to move non-trivial code into functions. This avoids accidentally using or overwriting variables from elsewhere, among other things.

For example here:

def assert_that_all_headers_exist():
    ...

assert_that_all_headers_exist()
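
A small standalone illustration of the pitfall being avoided, using throwaway names rather than the actual setup.py variables: at module scope a loop variable stays bound after the loop ends, so unrelated code further down can silently reuse or clobber it, whereas inside a function it stays local.

for path in ["a.h", "b.h"]:
    pass
print(path)  # still bound to "b.h" at module scope; inside a function it would not leak out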

vzhurba01 (Collaborator, Author):

Done

        break
if not os.path.exists(path):
    missing_headers += [header]

Collaborator:

Suggested change:

for path in path_candidate:
    if os.path.exists(path):
        header_paths.append(path)
        break
else:
    missing_headers.append(header)

Related ChatGPT convo with rationale: https://chatgpt.com/share/67d35f63-57d8-8008-97ae-cbcc582781a6

vzhurba01 (Collaborator, Author):

Done

Yeah, that function felt really weird to me too, but my personal queries with GPT didn't turn out the same. The for-else is a neat trick! Thanks!
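
For readers unfamiliar with the pattern, a tiny standalone example of Python's for/else (throwaway values, not project code): the else branch runs only when the loop finishes without hitting break, which is what lets it replace the separate "not found" check.

for candidate in ("a.h", "b.h"):
    if candidate == "missing.h":
        break
else:
    print("no candidate matched")  # runs, because the loop never hit break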

if not os.path.exists(path):
    print(f"Missing header {header}")

for library, header_paths in header_dict.items():
Collaborator:

Optional, since you're not changing much here, but I'd move this loop into a function, too.

vzhurba01 (Collaborator, Author):

Done

vzhurba01 (Collaborator, Author):

/ok to test

rwgk (Collaborator) left a comment:

Thanks!


vzhurba01 merged commit 1e0e7c3 into NVIDIA:main on Mar 14, 2025. 73 checks passed.

Doc Preview CI
Preview removed because the pull request was closed or merged.

Labels
cuda.bindings (Everything related to the cuda.bindings module), enhancement (Any code-related improvements), P1 (Medium priority - Should do)
Development

Successfully merging this pull request may close these issues.

Specify hard header dependencies in setup.py and fail build early if those headers are missing
2 participants