TST: Test fails locally [pyOpenSci review] #238
When you do so, can you please also include your system info (as requested in our bug report issue template)? These tests are passing in our CI, so I will need to try and track this down. System (please complete the following information):
|
OS: Linux, PopOS 22.04 LTS (Ubuntu-like). I could also reply with a dump of the virtual environment to a requirements.txt if that would help.
|
Hi @NickleDave, just getting around to this now. Can you give me a full dump of the virtual environment? However, I think this is an issue with setting the seed, which determines the initial conditions for the optimization (most importantly, the patch of white noise that we use as the initial image). Given that, can you run:

```python
import plenoptic as po
import torch
import imageio
import numpy as np

po.tools.set_seed(0)
# from the data directory for now
einstein = po.load_images('data/256/einstein.png')
mdl = po.simul.OnOff((31, 31), pretrained=True, cache_filt=True)
po.tools.remove_grad(mdl)
met = po.synth.Metamer(einstein, mdl)
# save the initial image (the white-noise patch) so it can be compared across machines
imageio.imsave('init.png', (255 * po.to_numpy(met.metamer).squeeze()).astype(np.uint8))
```

and then upload the resulting init.png. If they're different, then I can try uploading the version I get and seeing if using that in the tests fixes the problem. I'd also be open to other alternatives for this test -- basically, I'm checking that, for a given set of initial conditions, synthesis produces the same output. |
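[For illustration, a minimal sketch of how the two saved images could be compared; `init_other_machine.png` is a hypothetical name for the file produced by the second machine, not something from this thread:]

```python
import imageio
import numpy as np

# load the white-noise initial image saved by the snippet above, plus the
# copy generated on another machine (hypothetical filename)
mine = imageio.imread('init.png')
theirs = imageio.imread('init_other_machine.png')
# identical arrays would mean both machines drew the same white-noise sample
print(np.array_equal(mine, theirs))
```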
Alright, based on testing with @BalzaniEdoardo's Mac, I'm 99% sure this is a result of getting a different sample of white noise during initialization. My proposed fix is to instead check against the stop criterion, which makes more sense anyway. |
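[A minimal sketch of what checking against the stop criterion could look like, assuming `synthesize()` accepts a `stop_criterion` argument and stores the per-iteration loss history in `met.losses` -- both assumptions, not confirmed in this thread:]

```python
import plenoptic as po
import torch

po.tools.set_seed(0)
einstein = po.load_images('data/256/einstein.png')
mdl = po.simul.OnOff((31, 31), pretrained=True, cache_filt=True)
po.tools.remove_grad(mdl)
met = po.synth.Metamer(einstein, mdl)

# run synthesis until the loss stops changing by more than the criterion
stop_criterion = 1e-4
met.synthesize(max_iter=100, stop_criterion=stop_criterion)

# rather than comparing pixel values against a stored image (which depends on
# the white-noise sample), check that the loss actually converged: the change
# over the last two iterations should fall below the criterion
assert abs(met.losses[-1] - met.losses[-2]) < stop_criterion
```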
Sorry for the slow reply. Not sure I follow all the logic of the test, but I agree that actually checking against the stop criterion makes more sense. |
Can you check with [...]? |
@NickleDave can you try running the tests again? (with the [...]) |
Hi @billbrod, sorry for the really slow reply on this. I don't have access to the same Ubuntu machine anymore. Not relevant for this issue, but note I had to comment out the line addopts = "--cov=plenoptic -n auto" in pyproject.toml, or otherwise I got the error:

```
ERROR: usage: pytest [options] [file_or_dir] [file_or_dir] [...]
pytest: error: unrecognized arguments: --cov=plenoptic -n
  inifile: /Users/davidnicholson/Documents/repos/opensci/plenoptic/pyproject.toml
  rootdir: /Users/davidnicholson/Documents/repos/opensci/plenoptic
```

Not sure if that's operator error. Also not a big deal, but you might want to change the snippet in the dev installation instructions here to say [...] |
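[For context, not part of the original thread: the `--cov` flag is provided by the `pytest-cov` plugin and `-n` by `pytest-xdist`, so pytest raises "unrecognized arguments" when those plugins aren't installed. The workaround amounts to commenting out one line in `pyproject.toml`:]

```toml
[tool.pytest.ini_options]
# requires the pytest-cov and pytest-xdist plugins; comment out if they
# aren't installed in your environment
# addopts = "--cov=plenoptic -n auto"
```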
Thanks! Did you run [...]? (And that's a good point about the dev instructions, I'll make that change) |
Ah, I see, I had run [...]. Not to complicate your life any further, but just in case you care: you can recursively define optional dependencies, so that [...] |
Oh interesting, it might make sense to rename my current [...]. If I understand that link correctly, I'd add something like:

```toml
dev = [
    "plenoptic[docs,nb,test]",
]
```

which will grab the version currently on PyPI, right? Ideally, it would grab the contents of those bundles from the same file, in case [...] |
Oh wait, I spoke too soon -- it looks like it is grabbing the bundle from the current file (tested by incrementing the version required of a package in docs). Alright, that's good to know. |
Yeah, it's a bit confusing. IIUC, since you will already have (the local version of) your package installed, the dependency resolver won't go get it off PyPI, and will instead just get the optional dependencies that are declared recursively. See https://discuss.python.org/t/where-is-nested-recursive-optional-dependencies-documented/35648/7 and the GitHub issue linked therein. |
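[As an illustration of the pattern under discussion -- a sketch; the individual package lists here are hypothetical, not plenoptic's actual ones:]

```toml
[project.optional-dependencies]
docs = ["sphinx"]
nb = ["jupyter"]
test = ["pytest", "pytest-cov", "pytest-xdist"]
# self-referential extra: with the package installed locally (e.g. pip install -e .),
# the resolver pulls the three bundles above from this same file rather than PyPI
dev = ["plenoptic[docs,nb,test]"]
```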
I think for now I won't use "recursive optional dependencies" like this, since that GitHub issue hasn't been closed. If I understand correctly, that's because they're not testing this behavior right now, so it seems like a bad idea to rely upon it. If many folks have this confusion in the future, I may revisit this decision. In that case, I'm going to close this with the merging of #294. (The actual test fix happened in #251.) |
This test failed for me locally:
tests/test_metamers.py::TestMetamers::test_stop_criterion
I need to reboot and re-run so I can give you the full log from the pytest run; I'll do so after I finish the comment with my entire review 🙂
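[For reference: a single failing test like this one can be re-run on its own by passing its node ID to pytest:]

```
pytest "tests/test_metamers.py::TestMetamers::test_stop_criterion"
```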