
[release-0.18] light at the end of the tunnel #1432

Merged: 17 commits, Aug 27, 2018

Conversation

@ExpandingMan (Contributor) commented Aug 21, 2018

Hello all. I apologize for the gargantuan size of this PR, but I was on a roll and it got away from me.

I've made a large number of fixes to get this running on 1.0. All non-solver-dependent tests now pass on 1.0, with two exceptions:

  • There was a block (in print.jl) that I took out completely, because it was extremely pedantic and had been taken out in master (even though the functionality still exists in master).
  • There seem to be various errors with OffsetArrays. According to the stack traces, these seem to be entirely contained within OffsetArrays. For now I have commented out a few (I think 3) tests that use OffsetArrays.

Most of the fixes here involve macros or operators. I have taken as much code from @mlubin's fixes to master as possible; fortunately, that was quite a lot.

Obviously the operator code is now a mess and needs major refactoring (for arrays; scalars should be fine). I'm pretty sure what's here now is about as good as what's in master. I now have lots of good ideas for an operator PR to master, so I hope to get to that relatively soon.

@ExpandingMan (Contributor, author)

Ok, I have fixed an absolutely bizarre @generated-function-related bug on 0.6; I think this is it.

All that remains is to get the solver-dependent tests running, but with any luck that should mostly just be a matter of getting those packages updated and tagged.

Once we go through review and make changes, if you approve, let's merge and finish the solvers in a separate PR.

src/affexpr.jl Outdated
# variables must be ∈ ℝ but coeffs can be ∈ ℂ
Compat.adjoint(a::GenericAffExpr{<:Real}) = a
function Compat.adjoint(a::GenericAffExpr)
GenericAffExpr(a.vars, [z' for z ∈ a.coeffs], a.constant')
Member:

Could you use adjoint.(a.coeffs) and adjoint(a.constant)? It is more readable IMO, as it does not require remembering that ' is adjoint, not transpose.

Contributor (author):

That would create an Adjoint{Vector} rather than a Vector, so no, you can't do that.

Contributor (author):

Actually, yes, you can do it with the dot operator; I didn't notice your dot. Ok, I'll do that.
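
For anyone following along, here is a quick plain-Julia check of the difference being discussed (my own illustration, independent of JuMP):

```julia
# 0.7+/1.0 illustration; on 0.6 the equivalent is provided via Compat.adjoint.
using LinearAlgebra

v = [1 + 2im, 3 - 1im]

typeof(v')           # an Adjoint wrapper (a lazy 1×2 row), not a Vector
typeof(adjoint.(v))  # still a Vector, with each entry conjugated
```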

src/quadexpr.jl Outdated
# variables are ∈ ℝ but coeffs can be ∈ ℂ
Compat.adjoint(q::GenericQuadExpr{<:Real}) = q
function Compat.adjoint(q::GenericQuadExpr)
GenericQuadExpr(q.qvars1, q.qvars2, [z' for z ∈ q.qcoeffs], q.aff')
Member:

Same comment here

src/operators.jl Outdated
# this is a generic fallback from stdlib LinearAlgebra that's needed for tests to pass on 0.6
# for example, dot(::Array{Float64,3}, ::Array{Int,3}) is not defined in 0.6
if VERSION < v"0.7-"
function Base.dot(x::AbstractArray, y::AbstractArray)
Member:

This is type piracy: if we define a method for a Base function, it should involve our own types. Can't we implement this in a custom function like _dot instead?
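
Something along these lines, for instance (just a sketch; the name _dot and the implementation are illustrative, not code from this PR):

```julia
# A non-pirating fallback: a private helper instead of a new method on Base.dot.
function _dot(x::AbstractArray, y::AbstractArray)
    size(x) == size(y) || throw(DimensionMismatch("arrays must have matching sizes"))
    return sum(xi' * yi for (xi, yi) in zip(x, y))
end
```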

Contributor (author):

Sorry, I think I just started getting too desperate to get 0.6 working.

The proper solution would probably be to ensure in the tests that the arrays are promoted properly first. I'm not entirely sure why this wasn't causing a problem in the original 0.18. Let me look into it a bit. I don't think it would be unreasonable to change the tests to ensure that promotion occurs; what do you think?

Contributor (author):

Looking at this, I am more and more confident that the original 0.18 must have been committing type piracy. Regardless, I think the fixes to the tests will be pretty simple, so we'll all be much happier with that.

Member:

Which line of the tests is making it fail?


codecov bot commented Aug 22, 2018

Codecov Report

Merging #1432 into release-0.18 will decrease coverage by 1.87%.
The diff coverage is 62.89%.

@@               Coverage Diff                @@
##           release-0.18    #1432      +/-   ##
================================================
- Coverage         90.41%   88.54%   -1.88%     
================================================
  Files                18       18              
  Lines              4623     4715      +92     
================================================
- Hits               4180     4175       -5     
- Misses              443      540      +97
| Impacted Files | Coverage Δ |
|---|---|
| src/parseExpr_staged.jl | 93.97% <100%> (+0.27%) ⬆️ |
| src/JuMPContainer.jl | 70.51% <100%> (ø) ⬆️ |
| src/solvers.jl | 94.48% <100%> (ø) ⬆️ |
| src/utils.jl | 90.38% <100%> (-0.19%) ⬇️ |
| src/quadexpr.jl | 69.35% <33.33%> (-1.84%) ⬇️ |
| src/operators.jl | 78.3% <40.56%> (-16.57%) ⬇️ |
| src/parsenlp.jl | 68.07% <61.53%> (-0.68%) ⬇️ |
| src/print.jl | 91.83% <66.66%> (-1.08%) ⬇️ |
| src/affexpr.jl | 84.09% <66.66%> (-0.62%) ⬇️ |
| src/JuMP.jl | 89.82% <69.23%> (-0.5%) ⬇️ |
| ... and 8 more | |

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update b412370...ee27487.

@ExpandingMan (Contributor, author) commented Aug 22, 2018

Ok, sorry, that was really silly of me; it should be better now. (Deep down I just think type piracy is cool ☠️ 😆)

For the record, check out test/operator.jl line 510. Now figure that one out: how could that have been working before without type piracy? Regardless, it should be fine now.

src/operators.jl Outdated
# TODO: implement sparse * sparse code as in base/sparse/linalg.jl (spmatmul)
_multiply!(ret::AbstractArray{T}, lhs::SparseMatrixCSC, rhs::SparseMatrixCSC) where {T<:JuMPTypes} = _multiply!(ret, lhs, full(rhs))
_multiplyt!(ret::AbstractArray{T}, lhs::SparseMatrixCSC, rhs::SparseMatrixCSC) where {T<:JuMPTypes} = _multiplyt!(ret, lhs, full(rhs))
function Base.Matrix(S::SparseMatrixCSC{V}) where V<:Variable
Member:

It is needed because of this bug, right? https://github.com/JuliaLang/julia/issues/27015

Contributor (author):

I believe that's one of the reasons. Also, remember there is no dense-conversion function (full) in 1.0, so you need a safe way of converting to a dense matrix.

Member:

It was added in master to fix #1275, and there is a comment (See https://github.com/JuliaLang/julia/issues/27015) above the method on master explaining why it is needed. I think we should add it here too.
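
For readers following the thread, the kind of dense conversion being discussed can look roughly like this (a sketch under my own naming, not necessarily the exact code in master or in this PR):

```julia
# Walk the structural nonzeros via the sparse accessors instead of relying on the
# generic Base conversion (cf. https://github.com/JuliaLang/julia/issues/27015).
using JuMP
using SparseArrays   # 0.7+/1.0; on 0.6 these names live in Base

function dense_from_sparse(S::SparseMatrixCSC{Variable})
    A = [zero(AffExpr) for _ in 1:size(S, 1), _ in 1:size(S, 2)]
    rows, vals = rowvals(S), nonzeros(S)
    for col in 1:size(S, 2), k in nzrange(S, col)
        A[rows[k], col] += vals[k]
    end
    return A
end
```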

@ExpandingMan (Contributor, author)

By the way, as you review, please bear in mind src/operators.jl on current master. I realize I changed the 0.18 operators extensively, but I tried hard to stick as closely to 0.19 as possible (and stole the code in most cases).

@@ -628,9 +634,9 @@ Base.transpose(t::MySumType) = MySumType(t.a)
@test vec_eq(@JuMP.Expression(A*x/2), A*x/2)
@test vec_eq(X*v, [4X11; 6X23; 0])
@test vec_eq(v'*X, [4X11 0 5X23])
@test vec_eq(v.'*X, [4X11 0 5X23])
@test vec_eq(transpose(v)*X, [4X11 0 5X23])
Member:

Isn't .' equivalent to transpose?

Member:

It seems it has been removed, never mind :)

@mlubin (Member) left a comment:

  • I'm impressed that you got this working.

  • I took a look at the CI logs for 0.7, and they print the addition operator warning thousands of times, to the extent that Travis won't display the rest of the output (https://travis-ci.org/JuliaOpt/JuMP.jl/jobs/419286826#L2087). This isn't maintainable. Is there an easy way to fix it?

src/nlp.jl Outdated
@@ -530,7 +530,7 @@ function MathProgBase.eval_grad_f(d::NLPEvaluator, g, x)
end

function MathProgBase.eval_g(d::NLPEvaluator, g, x)
tic()
VERSION < v"0.7-" && tic()
Member:

Why are the tic()s version-protected but not the toq()s? Does this code work?

Contributor (author):

I think the toq()s are in the other packages, no?

The easiest thing to do would be to disable them in those packages on 0.7 as well; the only other option is to re-implement the feature ourselves.

Member:

What other packages? I'm confused because both tic and toq are deprecated, and it's an error to run toq without a corresponding tic.

Contributor (author):

Hm, I think for some reason I thought they were in the solvers or MathProgBase. I'm not sure how it is that they are not being called in the tests but tic() is. I'll look into it.

Contributor (author):

This is a bit tricky. I think the only good option here is to re-implement tic()/toq() for 1.0. I don't think we can take this out completely, as it looks like some of the MathProgBase logic may depend on it. Errors in the tests were probably buried in the 1.0 solver errors.
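
For concreteness, a minimal stand-in could be as small as this (a sketch only; as noted further down, the timers were ultimately replaced with plain flags instead):

```julia
# Tiny replacement for the deprecated tic()/toq() pair on 0.7/1.0.
const _tic_time = Ref{UInt64}(0)

_tic() = (_tic_time[] = time_ns(); nothing)
_toq() = (time_ns() - _tic_time[]) / 1e9   # elapsed seconds since the last _tic()
```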

@ExpandingMan (Contributor, author) commented Aug 23, 2018

Hm, something is wrong. It's not supposed to print that warning more than once: the new logging functionality has a max-warnings feature, and I'm using it. A bug in the logger, maybe? I'll look into it.
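
(For reference, the feature in question is the maxlog keyword of the 0.7+ logging macros; the message below is a placeholder, not JuMP's actual warning text.)

```julia
# maxlog=1 asks the logging frontend to display this record at most once,
# even though the statement executes many times.
for i in 1:10_000
    @warn "this operation is slow" maxlog = 1
end
```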

For what it's worth, getting this working would not even have been particularly difficult if it were not for the fact that we had to continue to support 0.6. It's extremely unfortunate that we were in the position of having to support it when no features will be added for it. I know the situation JuMP was in here was just a lot of bad luck in timing, but I hope they drop the METADATA restriction that you can't change the required language version in a minor package version bump. It really caused a lot of grief here (whether or not I was masochistic to take this on in the first place).

@IainNZ (Collaborator) commented Aug 23, 2018

Maybe this? JuliaLang/julia#28786

@ExpandingMan (Contributor, author)

Yup, probably.

@mlubin (Member) commented Aug 23, 2018 via email

@ExpandingMan (Contributor, author)

I was referring to MathProgBase.initialize in nlp.jl, line 240. It checks whether the timer is zero to see whether it has already been initialized, and exits early if so.

Regardless, sorry, I didn't get a chance to look at it carefully yet. When I get a chance I'll look at it carefully and come up with a solution.

Any objections to abolishing the timer in favor of a simple flag, or will something else rely on it somewhere?
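
Roughly, the idea would be something like this (illustrative only; the struct and function are hypothetical, with the field name borrowed from the diff below):

```julia
# Track "has this callback run?" with an explicit Bool rather than a nonzero timer.
mutable struct InitFlags
    eval_f_init::Bool
end

function run_once!(flags::InitFlags, f::Function)
    flags.eval_f_init && return nothing   # already ran; skip the work
    f()
    flags.eval_f_init = true
    return nothing
end
```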

@mlubin (Member) commented Aug 23, 2018 via email

eval_hesslag_timer::Float64
# init flags
eval_f_init::Bool
eval_g_init::Bool
Member:

Are any of these flags used except for eval_f_init? Please remove them if not.

Contributor (author):

Apparently they all get set, but whether they actually get used for anything, I don't know (probably not within JuMP). Perhaps it's best to just leave them, since this is only 0.18.

Contributor (author):

And when I say "they all get set", I mean there are other functions that set them, not just the constructors that initialize them.

@ExpandingMan (Contributor, author)

Ok, I have:

  • Replaced the timers with simple flags (on both 0.6 and 1.0).
  • Temporarily commented out the addition warning. There doesn't seem to be any good solution to this while the Julia logger has what looks like a bug. If you want, I can put in a global const WARN_FLAGS = Dict{Symbol,Bool}() mapping warning names to flags, to emulate the warn-once behavior as a temporary solution (a rough sketch follows below).
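
Something like this (names illustrative):

```julia
const WARN_FLAGS = Dict{Symbol,Bool}()

function warn_once(key::Symbol, msg::AbstractString)
    get(WARN_FLAGS, key, false) && return nothing
    WARN_FLAGS[key] = true
    @warn msg   # on 0.6 this would be warn(msg)
    return nothing
end
```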

@mlubin (Member) commented Aug 26, 2018

Please re-enable the warning on 0.6 so we're not introducing a regression. The warning is pretty useful, and there will be people using the 0.18 branch on 0.6 for quite a while.

After that, it looks good to merge from my end.

@mlubin mlubin merged commit 101ba7e into jump-dev:release-0.18 Aug 27, 2018
@ExpandingMan (Contributor, author) commented Aug 27, 2018

With any luck, most of the remaining work is just getting the various solvers stable on 1.0. Any tagging you feel you can do would be greatly appreciated, as I know the master branches of a few of them work on 1.0 (or at least 0.7) but are not tagged.

@mlubin (Member) commented Aug 27, 2018

We'll do a round of solver tagging soon (after MOI 0.6 is released). Right now it would just create more of a mess of incompatible versions.
