
Make use of mapslices consistent throughout Julia #3893

Open
johnmyleswhite opened this issue Jul 31, 2013 · 63 comments
Assignees
Labels
breaking (This change will break code) · needs decision (A decision on this change is needed)

Comments

@johnmyleswhite
Member

I'd like to propose the radical, breaking change of removing all forms of implicit mapslices calls from Julia. I think they make the language much less consistent and create a situation in which one's expectations are routinely violated about the interface for functions. As an example, the list below shows some functions that effectively implement implicit calls to mapslices:

  • sum
  • prod
  • mean
  • var
  • std

In contrast, the following similar functions do not support the foo(A, dim) interface at all:

  • median
  • quantile
  • indmax / indmin
  • findmax / findmin
  • Everything defined in the Stats module

I understand that this change would break a great deal of Matlab compatibility and make the language a little more verbose. But I think the gains in consistency would more than make up for that loss by making the language much less confusing. Removing this shorthand would mean that you wouldn't have to memorize which functions belong to a privileged subset that perform implicit mapslices.

As one example of the memorization required to use the foo(A, dim) interface, you need to memorize that empty tuples passed to min are required to trick min into creating slices. I'd much rather know that mapslices works the same way for all functions in the language.
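For concreteness, the two styles being compared look like this (written with the later dims-keyword spelling; in 2013-era Julia both were positional, e.g. sum(A, 1)):

```julia
A = [1 2 3; 4 5 6]

# Privileged subset: sum accepts a dimensions argument directly.
s1 = sum(A, dims=1)             # 1×3 matrix [5 7 9]

# Uniform alternative: explicit mapslices works for any function.
s2 = mapslices(sum, A, dims=1)  # same result

s1 == s2                        # true
```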

@stevengj
Member

You don't have to memorize the subset now; nothing stops you from applying mapslices to sum if you want.

We can promote the mapslices interface as the Julian way to do things, while still retaining the old behavior in a few functions to ease the Matlab transition (as with sum) or for performance (as with fft).

@johnmyleswhite
Member Author

Yes, that's certainly true. But we've previously made many decisions to ensure that there aren't two dialects of Julia that provide two mechanisms for doing the same thing.

I suspect I'll be outvoted on this one very quickly, but wanted to raise the issue for discussion.

@BobPortmann
Contributor

I think using a dim keyword consistently would be cleanest, e.g. min(a, dim=2).

@lindahua
Contributor

@johnmyleswhite I appreciate the pursuit of consistency over function definitions. This is of course a very important aspect in language design.

However, I think what we should do here is to add the support of computation along specified dimension to other functions over time, instead of removing this capability from sum, max, etc.

Something like sum(x, 1) or sum(x, 2) has been used extensively in various packages, so changing all of that code would carry a substantial cost. Also, mapslices(sum, x, dim) is less performant than sum(x, dim), especially when dim > 1, since the memory access pattern of mapslices is not cache-friendly in that case. (However, I think we can add support for dimensions to reduce and mapreduce, which can be implemented in a cache-friendly way.)

That being said, I think we may consider how to improve the consistency of the interface over these functions. (e.g. using dim keyword, etc).
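Dimension-aware reduce and mapreduce, as Dahua suggests, did eventually become available; a sketch of the equivalence (using the modern dims-keyword syntax rather than the positional form current at the time):

```julia
x = reshape(1:12, 3, 4)

# Specialized, cache-friendly reduction along a dimension:
a = sum(x, dims=2)

# The generic reduction machinery can be asked for the same thing:
b = reduce(+, x, dims=2)
c = mapreduce(identity, +, x, dims=2)

a == b == c   # true; all are 3×1 with the row sums [22, 26, 30]
```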

@johnmyleswhite
Member Author

I would be very happy with either the consistent use of a dim/slice keyword or any mechanism that ensures that most functions in the language support the foo(A, dim) interface. As @lindahua suggests, we can add this ability incrementally over time, but it's a non-trivial amount of work because every new function needs to implement this functionality. Because functions like min don't quite match the standard interface, we will end up creating a new interface for many functions. And because findmin doesn't return a scalar, we'll have to decide how to allocate the resulting tuples in an array.

As Dahua points out, the major challenge (besides time) is making sure that the resulting interface is still high performance.

@blakejohnson
Contributor

I agree with @lindahua that I'd rather add this interface to the stats functions that don't have it rather than remove it from the ones that do. I use these in almost every data analysis script I write, and I find the compact syntax very convenient.

We do need a friendlier syntax for max and min. After N threads of discussion on that point, I think the conclusion was to use a keyword dims argument; it's just that no one has gotten around to doing it.

@johnmyleswhite
Member Author

Should we perhaps have a @mapslices macro that constructs the keyword-argument version of these functions from the base case?

@dcjones
Contributor

dcjones commented Jul 31, 2013

I'm very much in favor of mapslices, or rather, very much against reimplementing the same functionality in any function that could conceivably be applied across a matrix.

The performance difference just doesn't strike me as great enough to embrace such an inconsistency. But I also get that Julia has a strong Matlab background, and not having dim arguments in certain functions is jarring to many.

@timholy
Member

timholy commented Jul 31, 2013

While it would be a fairly radical change, despite my Matlab background in principle I think this may be the right thing to do.

But not now. The performance of mapslices is nowhere close to where it needs to be to become the backbone for such common operations; not just because of the cache-unfriendliness issue @lindahua mentioned (although that indeed is very important), but also because mapslices still has quite a lot of overhead. Fixing this is yet another thing that relies on breaking the SubArray/cartesian iteration logjam (and I don't think we're ready to incorporate Cartesian.jl into base anytime soon).

@stevengj
Member

Although Julia normally tries to have one way to do things, the cost/benefit ratio of keeping the Matlab-like syntax available (even if we promote mapslices as the "right" way to do it) seems to me to favor keeping the redundant Matlab-like syntax (costs: some redundant syntax; benefits: compatibility and ease of transition for Matlab users, for a few key functions that are used all over the place).

This is independent of whether we can eliminate the performance penalty of mapslices. Note also that for functions like sum we will always be able to do special-case optimizations that will be hard to obtain in a generic mapslices routine.

@JeffBezanson
Member

Philosophically I totally agree with john, but on the practical side tim's and steve's comments nail it.

@JeffBezanson
Member

I think we should only keep the dim argument for sum and prod, and maybe any and all, since these happen to have special names (unlike max, min, and other functions in general).

In addition to mapslices, we have reducedim, which is much more efficient when we know the mapped function is doing the equivalent of reduce. reducedim is a silly name since it can actually work on multiple dimensions. It would be nice to name these similarly; maybe mapdims and reducedims, or mapslices and reduceslices (that last one seems a bit long-winded).

@johnmyleswhite
Member Author

When you say special names, you mean that sum(1, 2) is a special name for 1 + 2?

I actually like mapslices and reduceslices, but mostly because I'm used to mapslices. I'd be up for mapdims and reducedims as well.

@JeffBezanson
Member

I mean that sum is a name for x->reduce(+,x). In fact some of the reductions are defined this way:

all(A::AbstractArray{Bool}, region) = reducedim(&,A,region,true)

We can make reduceslices and mapslices the basic thing, but have such definitions for convenience where possible.
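This pattern of defining the named reducers in terms of a dimensional reduction survives in modern Julia, where reducedim's role is played by reduce(op, A; dims, init); a sketch:

```julia
A = [true false; true true]

# `all` along a dimension is just a reduction with & and init=true:
all(A, dims=1) == reduce(&, A, dims=1, init=true)   # true
# Column 1 reduces to true & true, column 2 to false & true:
all(A, dims=1)                                      # 1×2 result [true false]
```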

@blakejohnson
Contributor

@JeffBezanson The problem is that mean, std, and var are not naturally written with reducedim since they are not composable as binary operations. So, we are back to the problem that mapslices has a large performance gap compared to the current dim argument versions of each of these functions. For instance, mapslices of mean along the second dimension of a 1000x10x10 array is over 50x slower than directly calling mean with a dim argument. The gap is even wider if you compare to @lindahua's NumericExtensions version.

@bramtayl
Contributor

bramtayl commented Jul 24, 2016

I saw #17266 in NEWS and it made me wonder whether the performance of mapslices is good enough now to reconsider this issue?

@timholy
Member

timholy commented Jul 24, 2016

As much as it's been improved, we're still not there:

julia> @time mapslices(maximum, A, 2);
  0.001857 seconds (4.56 k allocations: 105.172 KB)

julia> @time maximum(A, 2);
  0.000269 seconds (11 allocations: 4.422 KB)

BenchmarkTools indicates a 6x difference. It's about a 3:5 ratio of time spent copying data into a temp array versus computing maximum on that temp array. I'm not quite sure why the maximum part is so slow; it might merit some investigation by an interested party (willing to try, @bramtayl?). The memory allocation seems high too, which might indicate that it's as simple as forcing Julia to specialize the reduction, changing foo(f, ...) to foo{F}(f::F, ...).
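The specialization trick Tim mentions, forcing a separate compiled method instance per concrete function type, looks like this (apply_each is a hypothetical name for illustration; foo{F}(f::F, ...) is the pre-1.0 spelling of the where {F} form, and modern Julia often specializes on f anyway when it is called directly in the body):

```julia
# Untyped function argument: Julia may leave calls to `f` generic
# in some contexts rather than compiling a specialized method.
apply_each(f, xs) = [f(x) for x in xs]

# Parameterizing on the function's concrete type F forces a
# specialized method per function passed in.
apply_each_spec(f::F, xs) where {F} = [f(x) for x in xs]

apply_each_spec(abs2, [1, 2, 3])   # [1, 4, 9]
```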

@KristofferC
Member

KristofferC commented Jul 24, 2016

Looking at @code_warntype on mapslices, it is littered with type instabilities, so it is not strange that it is slow.

Another more extreme example:

julia> @time maximum(a, 2);
  0.014243 seconds (14 allocations: 7.630 MB)

julia> @time mapslices(maximum, a, 2);
  2.958281 seconds (10.00 M allocations: 205.989 MB, 12.02% gc time)

@timholy
Member

timholy commented Jul 24, 2016

That's definitely true, though still a bit surprising that it matters so much given that the unsafe_getindex! and f(Aslice) do all the real work (and that both of those are "protected" by a function barrier).

One thing that's really backwards is the fact that we accept dims as a Tuple but coerce it into an AbstractVector. Instead, we should be accepting an AbstractVector and coercing it into a Tuple. Then all the index manipulations could be made type stable. EDIT: no, even that's not true. We'd need to introduce a function barrier for the first part of mapslices and generate idx as a tuple, then pass that to the "real" function.

@KristofferC
Member

KristofferC commented Jul 24, 2016

We'd need to introduce a function barrier for the first part of mapslices and generate idx as a tuple, then pass that to the "real" function.

@timholy That's exactly what I experimented with for a bit but there is a write to idx later in the code. I'm sure it can be worked around though. I'm also a bit worried about overspecialization.

@timholy
Member

timholy commented Jul 24, 2016

Sure, in the "preamble" just stuff : in every tuple-slot you don't want to rewrite, and fill the ones you want to rewrite with 1. (As you surely know, this is the part that won't be type-stable because dims is encoded as a tuple-of-values.) Then in the loop, call out to a lispy function that substitutes the elements of the CartesianIndex into the non-Colon slots and returns a new full tuple. That part will be type-stable.
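A sketch of that lispy substitution, assuming a hypothetical helper named substitute: the template keeps Colon in the slots that stay whole, each element of the index fills the next non-Colon slot, and dispatching on the tuple types keeps the recursion type-stable:

```julia
# Base case: both tuples exhausted.
substitute(::Tuple{}, ::Tuple{}) = ()
# A Colon slot is kept as-is; the index tuple is not consumed.
substitute(tmpl::Tuple{Colon,Vararg{Any}}, I::Tuple) =
    (Colon(), substitute(Base.tail(tmpl), I)...)
# A non-Colon slot is overwritten by the next index element.
substitute(tmpl::Tuple, I::Tuple) =
    (I[1], substitute(Base.tail(tmpl), Base.tail(I))...)

substitute((:, 1, :), (5,))   # (Colon(), 5, Colon())
```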

@bramtayl
Contributor

bramtayl commented Jul 25, 2016

I'm not sure I understand enough to be able to help. But with the new Julia closures scheme, it should be possible to make specialized methods of mapslices for different functions, correct? I see two possibilities:

  1. mapslices is just inefficient, in which case we should be able to fix it?

  2. mapslices is inefficient because code specialized on the function speeds things up; in that case, we can just use specialized code for critical functions, all of which should already be written.

P.S. If 2 is the case, then the same code-specialization strategy could work for a lot of other functional-programming functions, like map, reduce, broadcast, etc.

P.P.S. we already have the fancy new dot syntax for broadcast. I wonder if there isn't the possibility for a similar fancy syntax for mapslices. Something like:

f.(g.(h.(A, sliceby = 2))) goes to mapslices(x -> f(g(h(x))), A, 2)

P.P.P.S. filter could also by worked into the syntax:

f.(g.(h.(A)), filter = true) goes to filter(x -> f(g(h(x))), A)

All that would be required is adding keyword versions of broadcast and broadcast!

@timholy
Member

timholy commented Jul 25, 2016

How about this for a deal: I'll take the time to write up (in the next couple of days) a blog post on tricks I've learned for writing high performance manipulations of multidimensional indices using tuple magic, and someone else agrees to use the information in that post to make mapslices awesome.

It would actually be less work for me to just fix mapslices, but we can't keep going on this way forever.

@KristofferC
Member

I tried poking around a bit yesterday, but I had trouble wrapping my head around all the new unconventional-index stuff, so I wasn't sure whether what I did was safe, or when I needed to special-case to indices or size, etc.

@timholy
Member

timholy commented Jul 25, 2016

It should already be safe for unconventional indices, because I updated it already. (See tests here that prove it's working.)

It's just down to stuffing things in the right slots now.

@KristofferC
Member

I know it is safe now, but I wasn't sure if my modifications were safe :). But yeah, you are right, I can just check with the tests. I was also a bit curious about the OneTo type, but there was no documentation about it; I guess it is just a way to dispatch on what would be 1:n for normal arrays?

@bramtayl
Contributor

bramtayl commented Aug 3, 2017

Wait, do you mean changing a positional argument to a keyword argument with a new name in tens of functions? That doesn't make sense to me at all... To maintain compatibility, it might make sense to have the positional dims argument for all of these functions deprecate to calling the function on a Slices iterator. The slices iterator could specialize on the function being sliced if it's necessary to maintain performance. As far as naming goes, something like SliceOver(A, dims), where dims is a list of the dimensions to be iterated over (with colons in all the other slots).

@stevengj
Member

stevengj commented Aug 3, 2017

For better or worse, a lot of people use functions like sum with tiny arrays (also SVectors), and the keyword overhead worries me. I honestly don't see what problem is being solved here: when you have a function with only 2-3 arguments, positional arguments seem unobjectionable to me.

@malmaud
Contributor

malmaud commented Aug 3, 2017

I wouldn't mind having the keyword arguments for consistency in addition to the positional ones for convenience. But that whole idea could be tabled until the keyword penalty is fixed.

@bjarthur
Contributor

bjarthur commented Aug 3, 2017

if positional dims arguments were removed, then it would be easy to make predicate methods for any and all like that which already exists for map. see #20181

@simonbyrne
Contributor

simonbyrne commented Aug 3, 2017 via email

@StefanKarpinski
Member

IIRC, part of the reason min and minimum had to be separate functions was the ambiguity of min(A, d): do you want to take the minimum of each slice of A reducing the d dimension or do you want to take the minimum of each element of A with d? If we used a dim keyword then that wouldn't be an issue anymore. Not to reopen old wounds, but the min versus minimum thing still trips me up somewhat regularly. Of course, there's a semantic correctness argument to be made that min and minimum are simply different functions – the former the operator (like +) and the latter the reducer (like sum), but I suspect I'm not alone here.
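For reference, the distinction as it stands in current Julia (minimum the reducer, min the operator):

```julia
A = [3 1; 4 1]

minimum(A)            # 1: reduce over all elements (like sum)
minimum(A, dims=1)    # 1×2 matrix [3 1]: reduce each column
min(2, 5)             # 2: the binary operator (like +)
min.(A, 2)            # [2 1; 2 1]: elementwise min of A with 2
```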

@JeffBezanson
Member

there's a semantic correctness argument to be made that min and minimum are simply different functions – the former the operator (like +) and the latter the reducer (like sum)

This is the position I take. Maybe a renaming, like minof, reducemin, minreduce, ... ?

@bramtayl
Contributor

bramtayl commented Aug 5, 2017

There might be an argument for eliminating separate functions just for reductions of existing functions.

@StefanKarpinski
Member

That argument would fail pretty badly once it got to + and sum – which is pretty much the starting gate, so I'm not sure that's got legs.

@JeffBezanson JeffBezanson added the triage This should be discussed on a triage call label Oct 2, 2017
@JeffBezanson
Member

I propose moving this to 2.0. It doesn't look like we'll have time to redo the mapping-over-slices infrastructure.

@iamed2
Contributor

iamed2 commented Oct 2, 2017

Is there a reason this can't be in before 2.0, if it gets implemented?

@bramtayl
Contributor

bramtayl commented Oct 2, 2017

I have a working prototype which is ready to go, but requires several pending PRs related to type stable tuple operations.

@bramtayl
Contributor

bramtayl commented Oct 2, 2017

However, my prototype is breaking: it has a julienne iterator which can be mapped over (and a combine function for combining the resulting pieces). It is sometimes an order of magnitude faster than the current implementation of mapslices.

@mbauman
Member

mbauman commented Oct 2, 2017

It's not clear to me what is breaking here. We're not talking about changing meanings, just adding features or removing specific methods.

  • Introduce a slice iterator (like julienne): new name
  • Encourage the use of map(f, slices(A, dims)) or f.(slices(A, dims)) instead of mapslices: deprecation
  • Possibly use map(sum, slices(A, dims)) or sum.(slices(A, dims)) instead of sum(A, dims): deprecation. Removing sum(f, A, dims) is less obvious.
  • Possibly use non-integers (like (:, *, :) — anything that's more amenable to type inference will be similarly discriminable via dispatch) to specify the dimensions: deprecation

Are there any other possible actions here that are breaking? The only one I can see is:

  • Make reduction functions like sum(f, A, B, C) interpret the trailing arguments like map does — that is, sum((f(a,b,c) for (a,b,c) in zip(A,B,C))). This would be the generalization of #20181 (make any and all interface consistent with map). I'd be opposed to this change in meaning.
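To make the zip semantics of that last bullet concrete, here is what the proposed (and opposed) meaning would compute (f, A, B, C are arbitrary illustrative values):

```julia
f(a, b, c) = a * b + c
A, B, C = [1, 2], [10, 20], [100, 200]

# Trailing arguments zipped elementwise, as `map` does:
sum(f(a, b, c) for (a, b, c) in zip(A, B, C))   # (1*10+100) + (2*20+200) = 350
sum(map(f, A, B, C))                            # same: 350
```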

@bramtayl
Contributor

bramtayl commented Oct 2, 2017

There is also the issue that I still haven't beaten the performance of mapreducedims. I got around this in my version with a special optimization for reducing functions (sum, product, etc.), as well as a generalization of that optimization where you could pass Reduce(+) instead of sum to hook into mapreducedims. This kind of behavior could also lead to mapreducedims being an unexported optimization rather than an export.

@simonbyrne
Contributor

Now that I've had a bit of time to look over it, I really like the JuliennedArrays.jl approach. There is probably some bikeshedding to be done regarding naming and the tuple syntax, but overall it feels like the right direction.

mapreducedims being an unexported optimization rather than an export.

👍

@JeffBezanson
Member

Anything that's done in time and that has enough support can be in 1.0, and if non-breaking 1.x. The milestone is only needed if this is considered essential for 1.0.

@JeffBezanson JeffBezanson removed the triage This should be discussed on a triage call label Oct 5, 2017
@StefanKarpinski
Member

I think adding a composable way to express reductions with something like slices is a good piece of future design work, but deprecating sum(A, d) does not seem like something we really should do.

@JeffBezanson JeffBezanson removed this from the 1.0 milestone Oct 5, 2017
@bjarthur
Contributor

bjarthur commented Oct 12, 2017

the danger of not addressing the consistency of the mapslices interface now is that the sum(A,d) syntax will become entrenched and might never be changed.

why is there a rush to tag 1.0? i would much rather wait another minor release cycle or two (ie 0.7) for issues like this to get fixed, than to have to wait for a presumably much longer major release (ie 2.0).

@timholy
Member

timholy commented Oct 12, 2017

I think we should aim to get a more efficient mapslices into Base for 1.0 (just so it's not embarrassing), but I don't think we should get rid of the sum(A, d) interface. Due to cache issues, mapslices will never match the performance of reductions that can be performed while visiting all array elements in storage order.

As for "why the rush?" that's pretty clear: many people are looking forward to not having their code break with each release. It's particularly a major disincentive for industrial adoption, but it's a drain even on us academics. If we wait too long, the world will move on to other technologies that may be less perfect but more predictable. It's time for 1.0.

@mbauman
Member

mbauman commented Oct 12, 2017

@bjarthur Could you expound a little bit on what you don't like about the sum(A, d) API? I think our reduction method tables need to be thoroughly examined, and that's on the slate as part of #20402. Specifically, I think they should be structured as:

const ReductionRegion = Union{Integer, Tuple{Vararg{Integer}}}
reduction(f, A::AbstractArray, d::ReductionRegion)
reduction(f, A::AbstractArray)
reduction(A::AbstractArray, d::ReductionRegion)
reduction(A::AbstractArray)
reduction(f, iter)
reduction(iter)

No ambiguities, and it gives obvious hooks for concrete types to specialize on.

Does that satisfy your desire here? Or are you looking for bigger changes?
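A toy reduction written against that proposed method table (myreduction and its mapreduce-based bodies are illustrative sketches, not actual Base signatures):

```julia
const ReductionRegion = Union{Integer, Tuple{Vararg{Integer}}}

# Each narrower signature forwards to a more general one, so concrete
# array types get one obvious hook per arity to specialize on.
myreduction(f, A::AbstractArray, d::ReductionRegion) = mapreduce(f, +, A, dims=d)
myreduction(f, A::AbstractArray)                     = mapreduce(f, +, A)
myreduction(A::AbstractArray, d::ReductionRegion)    = myreduction(identity, A, d)
myreduction(A::AbstractArray)                        = myreduction(identity, A)
myreduction(f, iter)                                 = mapreduce(f, +, iter)
myreduction(iter)                                    = mapreduce(identity, +, iter)

myreduction(abs2, [1 2; 3 4], 1)   # 1×2 matrix [10 20]
```

Because every method either adds a type constraint or fixes an argument, dispatch picks the most specific one without ambiguities, which is the point of the table above.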

@bjarthur
Contributor

@timholy i'll respectfully disagree with your motivations for rushing. i personally enjoy the improvements in each new release. the pace and scope of such improvements is greater because breaking changes are permitted. we've got Compat and femtocleaner to help out. moreover, the longer we wait, the more mature the tools and libraries will be, and so the less likely that the onslaught of newcomers drawn in by a 1.0 gala party will be unimpressed and leave.

@mbauman my desire to deprecate any(A,d) was to make any(f,A,B,C) easier to implement. if it's truly the case that a slicing operator as you describe above will never be as fast, then i agree it would not make sense.

@bramtayl
Contributor

Tada: https://github.com/bramtayl/JuliennedArrays.jl is published. It is sometimes an order of magnitude faster than mapslices and much more flexible.

@tpapp
Contributor

tpapp commented Jan 22, 2019

I think the introduction of eachslice by #29749 allows a potential solution for this issue, by making it work for multiple dimensions (currently one dimension is supported).
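With eachslice in place, the explicit style this issue asked for reads as follows (single-dimension form, as supported at the time of this comment; later Julia versions accept multiple dims):

```julia
A = reshape(1:12, 3, 4)

# Iterate over the slices indexed by dimension 1 (the rows):
row_sums = map(sum, eachslice(A, dims=1))   # [22, 26, 30]

row_sums == vec(sum(A, dims=2))             # true
```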
