
Rewrite MOI.Test #1398

Closed · 22 tasks done · blegat opened this issue Jun 19, 2021 · 19 comments
Labels: breaking · Submodule: Tests (About the Tests submodule)

blegat (Member) commented Jun 19, 2021

The test names do not follow the style guide. Should we rename contlineartest to continuous_linear_test?
Edit (@odow): hijacking this issue to lay out my plan for the new MOI.Test.

Purpose

The purpose of this issue is to track the items that need to be completed in order to rewrite our MOI.Test submodule.

Significant progress and design was started in #1404. But rather than continue that, I'm going to break it into smaller pieces.

Where we are

Our current testing regime is comprehensive, but a bit all over the place.

There's a mix of functions like MOI.Test.unittest that wrap a whole lot of tests, and standalone ones like MOI.Test.default_status_test that you have to remember to call individually. Even the documentation for how to test a solver is complicated (#224)! https://jump.dev/MathOptInterface.jl/dev/submodules/Test/overview/#How-to-test-a-solver

The current design is also bad because it's hard to add new tests.

As evidenced by the documentation (https://jump.dev/MathOptInterface.jl/dev/submodules/Test/overview/#How-to-add-a-test), you need to write a test, then write a test for the test, and then make sure that everything works. It's hard to run a small subset of tests, and it's hard to decide where to put a new test, and where to put the test of the test.

This is also evidenced by the large number of open test issues that aren't getting addressed, ranging from #470 (August 2018!) to #1201 (November 2020). If tests were easier to add, these would get fixed sooner!

The naming and visibility of each test are also a problem: #1029. If linear10btest fails, what does that mean?

New design

The basic design is runtests, a single entry point to all tests in MOI.

Instead of breaking down tests by files or dictionaries, tests are normal Julia functions with descriptive names that can be excluded or included by the user.

Here's the code to test the MockOptimizer:

MOI.Test.runtests(
    MOIU.MockOptimizer(MOIU.UniversalFallback(MOIU.Model{Float64}())),
    MOI.Test.Config(basis = true),
    exclude = ["test_nonlinear_"],
)

Much better.

There is also a need for certain tests to modify the model prior to running the test (changing solver parameters/tolerances, for example). That can be achieved by overloading setup_test(::typeof(f), ::MOI.ModelLike, ::MOI.Test.Config) for the particular function f.
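
A sketch of what such an overload could look like. This is illustrative only: the test name test_linear_integration and the mock solution values are placeholders, not a committed API.

    import MathOptInterface
    const MOI = MathOptInterface
    const MOIU = MOI.Utilities

    # Configure the MockOptimizer before a particular test function runs.
    function MOI.Test.setup_test(
        ::typeof(MOI.Test.test_linear_integration),
        model::MOIU.MockOptimizer,
        ::MOI.Test.Config,
    )
        # Queue up the "solution" the mock should return from optimize!.
        MOIU.set_mock_optimize!(
            model,
            mock -> MOIU.mock_optimize!(mock, [1.0, 0.0]),
        )
        return
    end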

TODO

odow (Member) commented Jun 19, 2021

I had wanted to do a much larger refactoring of the tests, but it seemed like a lot of breakage for little gain...

odow (Member) commented Jun 21, 2021

There are quite a few test-related issues, and it can be hard to make sure you run all the tests. At this point I'd say we have a lot of tests, but they could do with better organization and structure, and that would be a big quality-of-life improvement for developers and solver authors.

#224
#307
#1029

It's also a lot of effort to add new tests.

I wonder about having a single Test.runtests function with appropriately named functions that you can include/exclude based on their name.

MOI.Test.runtests(
    optimizer,
    config,
    include = [
        "test_linear_",  # name -> startswith(name, "test_linear_")
    ],
    exclude = [
        r"conic",  # name -> occursin(r"conic", name)
    ],
)
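
For clarity, here's a minimal sketch of the matching those comments imply (a hypothetical helper, not a committed implementation):

    # Decide whether a named test should run, given include/exclude lists
    # that may mix prefix strings and regular expressions.
    function should_run(name::String, include, exclude)
        matches(pattern::String) = startswith(name, pattern)
        matches(pattern::Regex) = occursin(pattern, name)
        if !isempty(include) && !any(matches, include)
            return false
        end
        return !any(matches, exclude)
    end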

blegat (Member, Author) commented Jun 21, 2021

I agree; we could do that if we make the tests automatically exclude themselves when what's being tested is not supported, instead of testing supports_constraint and supports at the beginning of each test.
That way, an explicit exclusion will only ever indicate a failure, and we don't need to update every solver every time we add a new test.
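
Something like this sketch, where test_example and the constraint types are placeholders:

    # A test that skips itself when the model cannot express what it needs,
    # instead of relying on the caller's exclude list.
    function test_example(model::MOI.ModelLike, config::MOI.Test.Config)
        if !MOI.supports_constraint(
            model,
            MOI.ScalarAffineFunction{Float64},
            MOI.LessThan{Float64},
        )
            return  # Silently skip: this is not a failure.
        end
        # ... actual test body ...
        return
    end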

odow (Member) commented Jun 22, 2021

I'm working on a PR that I think makes a lot more sense. Stay tuned.

odow (Member) commented Jul 2, 2021

@blegat: these get_fallbacks throw an ArgumentError. Shouldn't they throw UnsupportedAttribute instead?

function get_fallback(
    model::ModelLike,
    attr::Union{AbstractModelAttribute,AbstractOptimizerAttribute},
)
    return throw(
        ArgumentError(
            "$(typeof(model)) does not support getting the attribute $(attr).",
        ),
    )
end

This is causing GLPK to fail the new tests because it doesn't skip unsupported attributes.

blegat (Member, Author) commented Jul 2, 2021

UnsupportedAttribute is for attributes that cannot be set, and at the moment it is only thrown during set.
I guess if is_copyable(attr) && !supports(model, attr), then it might make sense to throw UnsupportedAttribute. I have not really understood the use case, though.
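
A hedged sketch of that condition applied to the fallback above (a hypothetical variant, not a decided change):

    # Throw UnsupportedAttribute for copyable attributes the model declares
    # it does not support; fall back to ArgumentError otherwise.
    function get_fallback(
        model::ModelLike,
        attr::Union{AbstractModelAttribute,AbstractOptimizerAttribute},
    )
        if is_copyable(attr) && !supports(model, attr)
            throw(UnsupportedAttribute(attr))
        end
        return throw(
            ArgumentError(
                "$(typeof(model)) does not support getting the attribute $(attr).",
            ),
        )
    end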

odow (Member) commented Jul 5, 2021

I think it only makes sense to define supports for optional attributes. ListOfModelAttributes etc aren't optional. They're a core part of the API.

odow (Member) commented Jul 14, 2021

@blegat any suggestions for how to refactor the bridge testing? They currently take forever to run. And there are some open issues: #820, #750

blegat (Member, Author) commented Jul 14, 2021

I think it only makes sense to define supports for optional attributes. ListOfModelAttributes etc aren't optional. They're a core part of the API.

I don't think there is a well-defined notion of a core part of the API. Many ModelLike implementations don't implement ListOfModelAttributesSet.

blegat (Member, Author) commented Jul 14, 2021

@blegat any suggestions for how to refactor the bridge testing? They currently take forever to run

Moving the @testsets to functions would be a good start; I don't have many ideas for other improvements.

odow (Member) commented Jul 15, 2021

Changing the fallback to throw UnsupportedAttribute isn't so simple, because the errors actually get captured in a Test.TestSetException. That means we can't distinguish them from a standard failure like returning the wrong answer.

The work-around would be to change

@test MOI.get(model, MOI.SolverName()) isa AbstractString

to

ret = MOI.get(model, MOI.SolverName())
@test ret isa AbstractString

but that's pretty painful and likely to break.

The better long-term solution is to move away from Base.Test to https://github.com/IainNZ/Pukeko.jl, and modify the @test failure to wrap the inner error. We should be able to modify Pukeko to also run each test function on a different Julia thread for speed.

blegat (Member, Author) commented Jul 15, 2021

I don't understand; we would do that, right?

if MOI.supports(model, MOI.SolverName())
    @test MOI.get(model, MOI.SolverName()) isa AbstractString
end

And you want a way to avoid having to write the if every time, right?
Maybe something like test_attribute(expected_value, model, attr, args...) that first tests with supports and then does the get? The isa case is more tricky, but I assume it's not that common. Otherwise, we can have one for isa as well.
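
A sketch of that hypothetical helper, with both the equality and the isa variants:

    using Test  # for @test

    # Only test the attribute's value if the model claims to support it.
    function test_attribute(expected_value, model::MOI.ModelLike, attr)
        if MOI.supports(model, attr)
            @test MOI.get(model, attr) == expected_value
        end
        return
    end

    # Variant for checking the type rather than the value.
    function test_attribute(::Type{T}, model::MOI.ModelLike, attr) where {T}
        if MOI.supports(model, attr)
            @test MOI.get(model, attr) isa T
        end
        return
    end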

odow (Member) commented Jul 15, 2021

Yeah:

@requires MOI.supports(model, MOI.SolverName())
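
A minimal sketch of what such a macro could expand to (hypothetical; a real version might instead throw a dedicated exception so the runner can report the skip):

    # Bail out of the enclosing test function if a requirement isn't met.
    macro requires(expr)
        return quote
            if !$(esc(expr))
                return
            end
        end
    end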

The other approach is that we already catch and pass on Unsupported or NotAllowed errors. But the mixing with @test obscures this.
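
That is, roughly this pattern in the test runner (a sketch, assuming the runner wraps each test function f):

    # Run a single test, passing on "unsupported"/"not allowed" errors as
    # skips rather than failures.
    function run_one(f, model, config)
        try
            f(model, config)
        catch err
            if err isa MOI.UnsupportedError || err isa MOI.NotAllowedError
                return  # The model doesn't support this; not a failure.
            end
            rethrow(err)
        end
        return
    end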

Extending supports is still on the cards.

odow (Member) commented Jul 23, 2021

Okay! That was an ordeal. I'm closing this issue as complete. The "supports" thing is still unresolved, but I might leave it for now and try updating some solvers. We can open a new issue to track it if it becomes a problem.

odow closed this as completed Jul 23, 2021