[RFC] Reimplement dormant JuMPArray to work with any cartesian product index sets #590
Conversation
```julia
return d.innerArray[idx...]
end
Base.getindex{T}(d::JuMPArray{T,1}, I) =
    d.innerArray[_rev_lookup(d.lookup[1], d.indexsets[1], I[1])]
```
`d.indexsets[1]` cannot be type-inferred; I can't see how this would be fast enough in the base case of ranges.
maybe with generated functions?
How?
Actually, since it's a tuple, maybe it can?

```julia
julia> Base.return_types(getindex, (typeof((1:3,1:4)), Int))
1-element Array{Any,1}:
 UnitRange{Int64}
```

Might have to change `d::NTuple{N}` to `d::NT`, parameterized on `NT`, or something.
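For concreteness, a minimal sketch of that suggestion (hypothetical type and field names, written in the Julia 0.3/0.4-era syntax used elsewhere in this PR, not the PR's actual code):

```julia
# Hypothetical sketch of the "parameterize on NT" idea: carry the
# concrete tuple type of the index sets in the container's type, so
# accesses like d.indexsets[1] can specialize per instance.
immutable JuMPArraySketch{T,N,NT}
    innerArray::Array{T,N}
    indexsets::NT   # full tuple of index sets, e.g. (1:3, 2:2:10)
end

# A @generated accessor could then read the exact element type of
# indexsets[i] off NT at compile time instead of relying on inference.
```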
That's what the original version has
How is that any different than what we have now?
Right now we're generating a type inside of a macro. We might lose some generality by moving away from that. The old JuMPArray code above has `NTuple` with a uniform element type.
This approach could get the same behavior by doing the UnitRange-to-StepRange conversion inside the constructor, if that's all you're concerned about.
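A rough sketch of that constructor-side promotion (helper names are made up here, not the PR's code):

```julia
# Promote every UnitRange to a StepRange so the index-set tuple has a
# uniform range type; other kinds of index sets pass through unchanged.
_promote_idxset(r::UnitRange{Int}) = first(r):1:last(r)   # -> StepRange{Int,Int}
_promote_idxset(s) = s

uniform_indexsets(indexsets) = map(_promote_idxset, indexsets)

# uniform_indexsets((1:3, 2:2:10)) -> (1:1:3, 2:2:10), both StepRange
```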
If the index sets have uniform type, then we don't gain much by adding an extra level of method calls, which may or may not be inlined.
`@inline _rev_lookup`?
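For illustration, one way an inlined reverse lookup could dispatch on the index-set type (a sketch with guessed semantics for the `lookup` field, not the PR's exact helper):

```julia
# Ranges: position is a constant-time arithmetic offset, no dictionary.
@inline _rev_lookup_sketch(lookup, idxset::UnitRange{Int}, i) =
    i - first(idxset) + 1

# General iterable sets: fall back to a precomputed element -> position map.
@inline _rev_lookup_sketch(lookup, idxset, i) = lookup[i]
```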
What kinds of index sets are we looking to support besides ranges? Is the goal to avoid promoting …
The goal is to make the system more modular, so that we deal with each dimension independently. That is a nice side-effect, though.
```julia
length(indices) == N || error("Wrong number of indices for ",d.name,", expected ",length(d.indexsets))
idx = Array(Int, N)
function Base.getindex{T,N}(d::JuMPArray{T,N}, I::NTuple{N})
idx = zeros(Int, N)
```
The speed issue is really here. We should use a staged function to generate efficient tuple code based on `N` instead of allocating an array.
Yes, I just did it this way initially because it's easy. I'm thinking it might be best just to compute the linear index into `innerArray` and use that.
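A sketch of that combination: a generated function that unrolls over `N` and folds the per-dimension reverse lookups into a single linear index. The field names `lookup`/`indexsets` and the `_rev_lookup` helper are taken from the diffs above; `_linearindex` itself is a hypothetical helper and the PR's eventual implementation may differ.

```julia
# Unroll the per-dimension reverse lookups at compile time and fold them
# into one column-major linear index, avoiding the temporary Array{Int}.
@generated function _linearindex{T,N}(d::JuMPArray{T,N}, I::NTuple{N})
    chunks = [quote
        idx += (_rev_lookup(d.lookup[$k], d.indexsets[$k], I[$k]) - 1) * stride
        stride *= size(d.innerArray, $k)
    end for k in 1:N]
    quote
        idx = 1
        stride = 1
        $(chunks...)
        idx
    end
end

# getindex/setindex! could then reduce to d.innerArray[_linearindex(d, I)].
```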
Force-pushed from 6982356 to c769499.
```julia
function Base.getindex{T}(d::JuMPArray{T,1,UnitRange{Int}},index::Real)
    @inbounds return d.innerArray[index - start(d.indexsets[1])+1]
function JuMPArray{T,N}(innerArray::Array{T,N}, indexsets::NTuple{N})
    JuMPArray(innerArray, indexsets, ntuple(N) do i
```
Does type inference work here? The mix of anonymous functions might make it tricky.
That's a good question. My wishful thinking is that `ntuple` is sufficiently optimized to make this acceptable.
Could always become a generated function as well, I guess.
Yes, that would be the safe way to go
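A sketch of that safer route: build the lookup tuple with explicitly unrolled, generated code rather than `ntuple` with a closure. Both `_buildlookup` and `_lookupdict` are hypothetical names here; `_lookupdict` stands in for whatever per-set lookup construction the PR actually does.

```julia
# Generated constructor helper: emit one lookup-construction expression
# per dimension so each tuple element's type is known statically.
@generated function _buildlookup{T,N}(innerArray::Array{T,N}, indexsets::NTuple{N})
    exprs = [:(_lookupdict(indexsets[$i])) for i in 1:N]
    :(tuple($(exprs...)))
end
```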
Force-pushed from 48e8ded to bf7d904.
```julia
$(esc(instancename)) = JuMPDict{$T,$N}()
)
indexsets = Expr(:tuple, [:($(esc(idxset))) for idxset in idxsets]...)
:($(esc(instancename)) = JuMPArray(Array($T, $sizes), $indexsets))
```
In which cases do we use `JuMPDict` now?
When there's a conditional and Cartesian indexing doesn't make sense
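Roughly, with the JuMP macro syntax of this era (the container types chosen are an implementation detail of this PR, so this is only illustrative):

```julia
using JuMP
m = Model()

# Dense cartesian product of index sets: array-style container (JuMPArray).
@defVar(m, x[1:3, 1:4])

# Conditional index set: only some (i, j) pairs exist, so cartesian
# indexing doesn't apply and a dictionary container (JuMPDict) is used.
@defVar(m, y[i=1:3, j=1:3; i <= j])
```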
Is this logic in `@defVar`?
On this branch: (listing not preserved)
There are probably optimizations to do, but it's going to be hard to beat one-off codegen that uses integer literal offsets like …
I'm okay with that, but not sure if it accounts for most of the extra instructions. Definitely need to benchmark.
```julia
end
d.innerArray[idx...] = val
Expr(:call, :getindex, :(d.innerArray), indexing...)
```
These can be `@inbounds`.
The point of comparison would be something like [original snippet not preserved], which is specialized for each dimension being a unit range. I'm afraid that the Julia compiler won't do a good job (anytime soon) of optimizing the current case, where there are extra levels of indirection.
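Since the snippet referenced above did not survive, a plausible stand-in for the kind of hand-specialized, unit-range-only code being compared against (extrapolating from the 1-D `UnitRange` method quoted earlier; name and signature are hypothetical) is:

```julia
# Hand-specialized 2-D lookup: every dimension is assumed to be a
# UnitRange, so each index reduces to a literal arithmetic offset.
function getindex_2d_unitrange(inner::Matrix, r1::UnitRange{Int},
                               r2::UnitRange{Int}, i::Real, j::Real)
    @inbounds return inner[i - start(r1) + 1, j - start(r2) + 1]
end
```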
Force-pushed from bf7d904 to 45a5c90.
Rebased. Actually can't find any performance regressions. Will run some more benchmarks.
Hard to tell if there are regressions, but nothing's off by an order of magnitude, so I'd say this approach is good enough.
What were you using to benchmark?
speed2.jl and JuMPSupplement/fac
I'll finish this up in the next few days then.
Not seeing any difference on speed2.jl or fac (benchmark output not preserved).
RFC now.
Force-pushed from cb5e1e5 to cdfcd7d.
Force-pushed from cdfcd7d to f6feb73.
```
@@ -5,67 +5,79 @@
# This code is unused for now. See issue #192
```
Update this comment?
lol
I like the code reduction.
I'll review today.
```
@@ -69,105 +68,16 @@ Base.isempty(d::JuMPContainer) = isempty(_innercontainer(d))
# 0:K -- range with compile-time starting index
# S   -- general iterable set
export @gendict
```
Can we un-export this?
There's no reason this needs to be a macro as opposed to a function AFAICT
Shouldn't be exported, yeah
LGTM. I'll need to work through it more closely to understand everything exactly, but we should at least start trying it out.
💯
WIP, just opening to solicit comments on the approach.