Test on Julia v1.10 (and possibly 1.9) #26

Closed
sloede opened this issue Mar 8, 2024 · 6 comments · Fixed by #29
Comments

@sloede
Member

sloede commented Mar 8, 2024

IMHO the main tests should be run on the most recent stable release, plus a smaller testset possibly on older versions to verify that they are still supported.

@patrickersing
Contributor

We were just about to update the CI to Julia v1.9 in #22.
Since Trixi.jl still tests on Julia v1.9, do you think it makes sense to wait for Trixi.jl to update its testing, or should we move directly to v1.10?

@sloede
Member Author

sloede commented Mar 8, 2024

No, just move directly to v1.10 unless you find hard issues that we need to solve in Trixi.jl first.

@patrickersing
Contributor

When running the tests on Julia v1.10, some of the sensitive (mostly well-balanced) tests fail just outside their tolerances; see #29.


Does anyone know of any changes that could have triggered this difference?

Since the results are still correct, I would suggest that we just loosen the tolerance a bit.
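Loosening a tolerance in a Julia test could look like the following minimal sketch. The error values and variable names here are hypothetical placeholders, not the actual values from the TrixiShallowWater.jl test suite:

```julia
using Test

# Hypothetical reference error and newly computed error (illustrative only,
# not actual values from the test suite).
l2_error_reference = 1.0e-14
l2_error_computed  = 1.3e-14  # slightly different, e.g. due to a new Julia/LLVM version

# Instead of requiring (near-)exact agreement, compare against the reference
# with a relaxed absolute tolerance so benign floating-point drift still passes.
@test isapprox(l2_error_computed, l2_error_reference; atol = 1.0e-13)
```

The key design choice is to relax `atol`/`rtol` only enough to absorb the observed drift, so the test still catches genuine regressions.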

@ranocha
Member

ranocha commented Mar 11, 2024

I have seen similar differences in SummationByPartsOperators.jl. I guess that they are just caused by some floating point differences coming from different LLVM versions, but I am not 100% certain. However, I would be fine with following your strategy (which I have done also in SummationByPartsOperators.jl).
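Differences like these can indeed arise from harmless changes in floating-point evaluation order, e.g. when a different compiler version reassociates a sum. A minimal Julia illustration (unrelated to any specific Trixi.jl kernel):

```julia
# Floating-point addition is not associative, so different evaluation orders
# (as different LLVM versions may produce) can yield slightly different results.
a, b, c = 0.1, 0.2, 0.3

left  = (a + b) + c   # 0.6000000000000001
right = a + (b + c)   # 0.6

left == right         # false: the results differ by roughly one ulp
```

Both results are "correct" to working precision, which is why loosening the test tolerance slightly is a reasonable response rather than a workaround.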

@sloede
Member Author

sloede commented Mar 11, 2024

I think I've seen similar floating point differences in trixi-framework/Trixi.jl#1562, where some tests apparently fail due to small changes in the error norms.

@patrickersing
Contributor

patrickersing commented Mar 11, 2024

It makes sense that the problem is caused by floating-point differences, since these tests are very sensitive to them. The failing tests can also be reproduced on my local machine.
If nobody has any objections, I will adjust the tolerances in #29.

@andrewwinters5000 andrewwinters5000 linked a pull request Mar 13, 2024 that will close this issue