Rebuild for openssl3 #2
Conversation
Hi! This is the friendly automated conda-forge-linting service. I just wanted to let you know that I linted all conda-recipes in your PR (…).
yes indeed. I just gave up on this... Let me know if you have an idea.
needs to be fixed in conda-forge#1 first
…nda-forge-pinning 2022.12.10.21.24.59
Does this package actually need openssl? The link check doesn't seem to think so.
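For reference, a quick way to double-check this kind of thing locally is to inspect the built binary's dynamic dependencies. This is only a sketch, not taken from the CI logs, and `$PREFIX/bin/katago` is an assumption about where the binary ends up:

```bash
# Hypothetical check: list the dynamic libraries the binary pulls in
# and look for any ssl/crypto linkage.
if [[ "$(uname)" == "Darwin" ]]; then
  otool -L "$PREFIX/bin/katago" | grep -iE 'ssl|crypto' || echo "no openssl linkage"
else
  ldd "$PREFIX/bin/katago" | grep -iE 'ssl|crypto' || echo "no openssl linkage"
fi
```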
I haven't tried without, but I know it has some networking features (distributed training), so it might be required.
Seems to pass fine without openssl. I'm presuming the library doesn't implement networking itself - maybe there's a dependency missing (like …).
It looks like the distributed stuff is only built if the … option is enabled.
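The specific switch referenced in the original comment isn't preserved above; as a rough sketch, upstream KataGo gates the distributed/networking code behind a CMake option. The names below (`BUILD_DISTRIBUTED`, `USE_BACKEND`, the `cpp/` source directory) are my assumptions about upstream, not something stated in this PR:

```bash
# Sketch only: configure upstream KataGo with the distributed code enabled,
# which is what would pull in the OpenSSL requirement.
cmake -S cpp -B build \
  -DUSE_BACKEND=OPENCL \
  -DBUILD_DISTRIBUTED=1
cmake --build build --parallel
```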
So the clang docs say that "…". The problem is that we cannot really patch the upstream source without breaking the ability to use distributed builds, because katago makes it abundantly clear that only vanilla, unpatched versions may share artefacts, and if we apply a patch we get a dirty git tag.

I cannot currently see how to square this circle; maybe @isuruf @xhochy have some experience with ….

PS: Upstream code in question is here; we could propose a fix that actually works with (our) clang and wait for another release. In the meantime I think we should just accept the dirty tag on osx.
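To make the "dirty tag" point concrete: katago decides whether artefacts may be shared based on the git state of the source tree, so a patched checkout is detectable. A hedged illustration (the path and the exact mechanism katago uses are assumptions on my part):

```bash
# Illustration only: a patched/locally-modified checkout shows up as "dirty".
git -C katago-source describe --tags --always --dirty
git -C katago-source status --porcelain   # any output here means the tree is no longer vanilla
```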
@h-vetinari I'm not able to test this right now, but maybe we could get away with symlinking ….
So you definitely got past the linker error, well done! But now conda complains that the symlink you created may not be possible to resolve later.
I think it may be enough to just delete the symlink after the build to make this pass.
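Something along these lines in the build script might do it. This is purely a sketch: `libfoo` stands in for whatever library name the linker was missing, and the paths are assumptions:

```bash
# Hypothetical workaround: give the linker the name it expects, build,
# then drop the symlink again so conda's post-build checks don't flag
# an unresolvable link in the finished package.
ln -s "$PREFIX/lib/libfoo.3.dylib" "$PREFIX/lib/libfoo.dylib"
cmake --build build --parallel
rm "$PREFIX/lib/libfoo.dylib"
```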
So, looks like this worked! :) From the output of …
@hadim, anything else you can think of for testing distributed support here? I checked upstream and didn't find much (except one that says "you should never need this", and some not very involved mentions in …).

I also noticed that there are python bindings upstream. I guess those could/should be packaged as well...? Not a thing for this PR though.
OK, I added what few tests I found that somehow mentioned distributed. From my side, this PR is about as good as it's going to get now... 🙃
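For the record, the kind of smoke test meant here is roughly the following; the exact subcommands are assumptions about the katago CLI, not a quote of the recipe's test section:

```bash
# Rough idea of such tests: a basic version check, plus an informational
# grep to see whether the distributed-training entry point ("contribute")
# shows up in the CLI's subcommand listing.
katago version
katago 2>&1 | grep -i contribute || true
```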
Thanks a lot @h-vetinari. Not much more to say on my side. Maybe I could try the windows/cuda build again given the changes in that PR.
Let's see why the migration bot is complaining...
Ah... windows+CUDA has been failing from the very start and hasn't been fixed yet (see #1) - disabling it for now.