Conversation
* there are no plans I'm aware of to change `master` to `main` on the wheels repo, but I've started seeing cron errors related to `master` checkouts of the main repo
* so, try to fix up cases where `master` is incorrectly used to reference the main SciPy repo, but leave `master` in for those cases where we are referring to the wheels repo proper, at least for now (see the sketch below)
* reference CI failure: https://app.travis-ci.com/github/MacPython/scipy-wheels/jobs/558549551

I do realize we plan to replace the wheels infrastructure here eventually, but until then...
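As a rough illustration of that distinction (the clone layout and the `scipy` submodule name below are assumptions, not the repo's actual CI steps):

```bash
# Hypothetical sketch only, not the wheels repo's CI script.
set -euo pipefail

# The wheels repo itself still uses `master`:
git clone --branch master https://github.com/MacPython/scipy-wheels.git
cd scipy-wheels
git submodule update --init scipy

# ...but checkouts of the main SciPy repo should now track `main`:
git -C scipy fetch origin main
git -C scipy checkout origin/main
```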
The Appveyor failure is Pythran; the Linux 32-bit wheel may be related, since this PR pulls in a newer …
Close/re-open; let's see where we stand here a few months later. I'm almost certain we'll need more wheels repo …
* replicated scipy gh-16139 on the latest maintenance branch, because the `master` branch of the wheels repo will encounter the issues described in that PR (for example, see MacPython/scipy-wheels#166, which has Travis and Azure failures caused by those same versioning issues)
* I think the `cwd` is still correct even though the patch is being applied to a different file this time (it used to be `setup.py`); we could double-check this by pointing the wheels PR at the commit hash of this PR if we want (see the sketch below)
* any reason not to forward-port this as well at this point, if we're going to need to keep backporting it?
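A minimal sketch of the `cwd` point; the patch path and the use of `git apply` here are assumptions for illustration only:

```bash
# Hypothetical: the patch now targets a different file than the old setup.py,
# but it is still applied from the same directory (the scipy submodule
# checkout), so the existing cwd should remain correct.
cd scipy                                   # unchanged working directory
git apply ../patches/version-fix.patch     # placeholder patch name
```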
* try pinning setuptools on Appveyor
For the Windows failures I'll try pinning `setuptools`. I'm also not going to be surprised if there is yet another complication now that this is merged in: scipy/scipy#16335
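The pin would look something like the line below; the exact upper bound is my assumption rather than something taken from the repo config:

```bash
# Hypothetical pin: cap setuptools before building, assuming the Windows
# breakage is the usual numpy.distutils incompatibility with newer setuptools.
python -m pip install "setuptools<60" wheel
```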
@mdhaber @mckib2 I'm seeing a failure to compile the `HEkkDual` source in the wheels repo. Comparing a colored "word diff" of the two compile lines, they seem almost identical beyond the tooling versions, so I'll give the bump a try...
* bump the Visual Studio tool version used in Appveyor to better match the main repo, because I'm seeing a failure to compile `HEkkDual` source in the wheels repo
I may need to ask for some more Travis credits too, based on the banner I'm seeing there this evening.
Ah, for Appveyor I see stuff hardcoded to … I may need to debug that on a fork or something. The usage of both years and numbers for versions doesn't make things clearer either (for reference, Visual Studio 2015/2017/2019 correspond to the v140/v141/v142 MSVC toolsets).
32-bit Windows jobs are successfully building with VS 2019 now (though failing a few tests), but 64-bit Windows jobs need some checking. Making progress on my fork now: tylerjereddy#1
* fixup the 64-bit mingw path needed for the new Appveyor image (see the sketch below)
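Roughly the kind of change involved; the exact mingw-w64 install directory on the newer image is an assumption:

```bash
# Hypothetical PATH fixup for the 64-bit mingw-w64 toolchain on the newer
# AppVeyor image; the directory below is an assumed location.
export PATH="/c/mingw-w64/x86_64-8.1.0-posix-seh-rt_v6-rev0/mingw64/bin:$PATH"
```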
The Travis CI team filled up our credits again, but I've temporarily disabled Travis CI here while I iterate. The most annoying issue at the moment is that the Linux 32-bit jobs recently seem to have switched to basically running forever at the test stage, ignoring timeout cancellation requests and not providing any usable log output. I'm hoping that changes with the latest push, but if it doesn't I may need to check this in a 32-bit container locally.
Another thing to consider is that the runtime DLL handling may not be forward compatible now that I've bumped the compiler version: I suspect I should try to copy in the DLL from the matching version of Visual Studio.
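Roughly what that would mean; the DLL name and the Visual Studio redist path below are both assumptions, used only to illustrate matching the runtime to the compiler that was actually used:

```bash
# Hypothetical: pull the C++ runtime DLL from the same MSVC version that
# compiled the extensions, rather than keeping a DLL from the older toolchain.
cp "/c/Program Files (x86)/Microsoft Visual Studio/2019/Community/VC/Redist/MSVC/"*/x64/Microsoft.VC142.CRT/msvcp140.dll .
```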
I'll iterate/debug the DLL updates on my fork until that works: tylerjereddy#3
The DLL updates have been cherry-picked in. I'm still investigating the 32-bit Linux test-time hang/infinite run.
@isuruf @matthew-brett @charris @mattip have you seen this issue where the 32-bit Linux jobs literally run for days instead of timing out and giving useful output? https://github.com/MacPython/scipy-wheels/runs/6785487807
Appveyor was fully passing, so I've temporarily disabled it and also simplified the Azure matrix to focus on debugging the 32-bit Linux builds that keep running indefinitely. As a start, I'll see what happens if I bump the image. If I finally see a proper error/termination with that change, then I'll probably have to switch the image back and experiment a bit. I could try bumping the …
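One generic way to at least get some output out of a wedged job (not necessarily what ends up being used here) is to put a hard timeout around the test step itself:

```bash
# Not the repo's actual config: kill a hung 32-bit test run at the process
# level so the job ends with whatever output it produced, instead of sitting
# at the CI limit. The 60-minute value is an arbitrary assumption.
timeout -s INT 60m python -c "import sys, scipy; sys.exit(0 if scipy.test() else 1)"
```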
This reverts commit 36e8cb6.
Ok, using an invalid … It feels like a bit of a stretch, but worth a try.
Looks like the "hang" happens fairly early in the 32-bit Linux testsuite. One option then is to use a higher-verbosity test run to see which part of the suite is being visited, to get a hint where the freeze happens. Before I do that though, let me temporarily point this branch at the tip of the maintenance branch.
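A higher-verbosity run could be as simple as the following; the flags the wheels repo actually passes are not reproduced here:

```bash
# Hypothetical: more verbose test output, so the last test name printed before
# the freeze points at the offending part of the suite.
python -c "import scipy; scipy.test(verbose=2)"
```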
Pointing to the tip of the maintenance branch allowed the 32-bit Linux job with Python 3.9 to fail the testsuite in the normal way, but …
Revert "…9.x" (this reverts commit ff3dbe2).
Watching the problematic 32-bit Linux jobs more carefully, in real time, I see 20+ minutes stuck at the same point for both Python 3.8/3.9.
Ok, both jobs have been stuck there for 40 minutes now; better touch base with the NumPy team about NumPy getting built from source and taking forever/hanging vs. the binary available in …
Have you tried with manylinux2014? |
* try forcing binary NumPy install when installing the pre-built SciPy wheel, to avoid the hang when building NumPy `1.22.x` from source on 32-bit Linux
* this was suggested by Matti on the mailing list
Haven't tried that; let me see if Matti's suggestion of forcing a binary NumPy install pans out first.
* local testing suggests `--prefer-binary` will work for forcing an older/binary NumPy for 32-bit builds when `pip` installing the pre-built SciPy wheel, so try that (sketch below)
* revert some higher-verbosity testing that was used for debugging
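For reference, the kind of install-step change being tried; the wheel path is a placeholder, and whether `--prefer-binary` or a stricter `--only-binary` constraint ends up sticking is left open here:

```bash
# Hypothetical install step: avoid compiling NumPy 1.22.x from source on
# 32-bit Linux while installing the freshly built SciPy wheel.
python -m pip install --prefer-binary numpy
# stricter variant: never allow a source build of numpy at all
python -m pip install --only-binary=numpy dist/scipy-*.whl   # placeholder wheel path
```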
That didn't work, but a generic …
I think this is in pretty good shape now @rgommers. Here's a summary of what has changed:

* `master` references to the main SciPy repo now point at `main`; the wheels repo's own branches are untouched
* the gh-16139 version-handling fix is replicated against the latest maintenance branch
* on Appveyor: `setuptools` is pinned, the Visual Studio toolchain is bumped, the matching runtime DLLs are copied in, and the 64-bit mingw path is fixed
* the 32-bit Linux jobs now install a binary NumPy instead of building `1.22.x` from source at test time

What is still failing in CI? The two 32-bit Linux jobs still show some test failures.
Thanks Tyler! Everything in this PR LGTM, and I agree that the test failures on the two 32-bit Linux jobs are not a blocker. In it goes.