Use system mkldnn/onednn #289
base: main
Conversation
Hi! This is the friendly automated conda-forge-linting service. I just wanted to let you know that I linted all conda-recipes in your PR (…).
I am confused by this PR. We need any libraries linked by pytorch to be provided by conda-forge.
Patch pytorch to use the system mkldnn library from the onednn package rather than building one locally from within the ideep submodules. Given that ideep itself is a header-only library, I presume this is what was meant in conda-forge#108 (comment), and indeed unvendoring onednn seems to improve build time significantly.

That said, our onednn package does not support the GPU runtime (conda-forge/onednn-feedstock#44), but at least according to my testing, that part of the library was not enabled by our PyTorch builds before (due to missing SYCL).

The patch is a bit hacky and probably needs some polishing before being submitted upstream (and testing on other platforms).

Part of issue conda-forge#108
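For illustration, a minimal sketch (not the actual patch from this PR) of what consuming a system oneDNN from CMake generally looks like. The option name `USE_SYSTEM_ONEDNN` and the `Caffe2_DEPENDENCY_LIBS` hookup are assumptions here; `DNNL::dnnl` is the target that oneDNN's installed package config exports.

```cmake
# Hypothetical sketch: prefer a system-provided oneDNN over the vendored
# copy built from the ideep submodule. The option name is illustrative.
option(USE_SYSTEM_ONEDNN "Use a system-provided oneDNN/mkldnn" OFF)

if(USE_SYSTEM_ONEDNN)
  # oneDNN installs a CMake package config that exports the DNNL::dnnl target.
  find_package(dnnl CONFIG REQUIRED)
  list(APPEND Caffe2_DEPENDENCY_LIBS DNNL::dnnl)
else()
  # Fall back to building oneDNN from the submodule tree.
  add_subdirectory(third_party/ideep/mkl-dnn)
endif()
```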
…nda-forge-pinning 2024.11.08.10.03.25
Ahh I see. LGTM!
Hmm, I also should make the …
This will also require a patch to onednn-feedstock, to enable the experimental ukernel. I will submit it once I finish testing (probably next week).
In order to unvendor onednn from the PyTorch feedstock (conda-forge/pytorch-cpu-feedstock#289), onednn needs to be built with DNNL_EXPERIMENTAL_UKERNEL enabled. Please also note that the current version of PyTorch requires onednn 3.5.3, so we are also going to request creating a branch with the equivalent patch added for that version.
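As a sketch of what that could mean for the onednn build (the exact build-script layout is an assumption, not the feedstock's actual build.sh):

```sh
# Hypothetical configure step for onednn with the experimental ukernel API
# enabled; $PREFIX is the conda-build install prefix.
cmake -S . -B build \
  -DCMAKE_BUILD_TYPE=Release \
  -DCMAKE_INSTALL_PREFIX="$PREFIX" \
  -DDNNL_EXPERIMENTAL_UKERNEL=ON
cmake --build build --parallel
cmake --install build
```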
You should keep the feature set consistent between Windows/Unix if you can.
You might need to disable things on non-x86_64 arches. You can test …
…nda-forge-pinning 2024.11.11.08.59.26
…nda-forge-pinning 2024.11.12.14.24.54
Hi! This is the friendly automated conda-forge-linting service. I wanted to let you know that I linted all conda-recipes in your PR (…). Here's what I've got... For recipe/meta.yaml: …
@conda-forge-admin, please rerender
Let's see how the improvement translates to Azure.
…nda-forge-pinning 2024.11.12.14.24.54
The direction here looks very good (custom channel aside of course). 👍
diff --git a/CMakeLists.txt b/CMakeLists.txt
index 98593c2d..94a9d63d 100644
--- a/CMakeLists.txt
+++ b/CMakeLists.txt
For quickly applying the patches as they exist on the feedstock (i.e. if it's not you who did the last change to them), it's nicer to generate them with `git format-patch <tag> --no-signature`.
That has several advantages:
- the commit message can be used to provide context (this patch is pretty self-explanatory, but in general...)
- authorship is easily visible (who to ask in case there are questions about intent, e.g. when rebasing to new versions)
- they can be applied to a source checkout in one go, e.g. with `find ../path/to/feedstock/recipe/patches/ | xargs git am`
- having a local branch of patches makes it much easier to rebase the changes to a new version
- etc.
So while raw diffs are technically possible, the output of `git format-patch` is preferred.
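For illustration, the round trip described above might look like this; the tag and paths are placeholders:

```sh
# In a pytorch source checkout, start a branch from the release tag
# (v2.5.1 is a placeholder for whatever tag the feedstock builds).
git checkout -b feedstock-patches v2.5.1

# Apply the feedstock's patch series as proper commits.
git am ../pytorch-cpu-feedstock/recipe/patches/*.patch

# ...edit or rebase as needed, then regenerate the patch files in place.
git format-patch v2.5.1 --no-signature -o ../pytorch-cpu-feedstock/recipe/patches/
```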
It was just my quick-and-dirty proof of concept. Ideally, I'd prefer to make something suitable for upstreaming.
Checklist
- Reset the build number to 0 (if the version changed)
- Re-rendered with the latest conda-smithy (Use the phrase @conda-forge-admin, please rerender in a comment in this PR for automated rerendering)