
Maintainership roundtable and discussion #1272

Open
DeliciousHair opened this issue Mar 20, 2023 · 59 comments

Comments

@DeliciousHair

I'm just looking at the activity level in terms of PRs being merged, wondering if this project is still a thing?

@nilgoyette
Collaborator

Last time I heard about this problem, both maintainers were too busy. IIRC, one is a teacher with several projects going on, and the other had just finished his PhD and started working. xd009642 was added as a maintainer in 2020.

See this issue for more information.

@bluss
Member

bluss commented May 3, 2023

I'd like to give away more of "my role" since I don't have the bandwidth unfortunately. That means bringing on permanent maintainers.

I'd maybe like to keep working on lower-level stuff, as I have with matrixmultiply, and perhaps on numerical SIMD for other BLAS-like operations that could benefit ndarray.

I guess it's a bit of a pickle now that the organization is bigger than just one repo but low activity across the board. I can't necessarily do everything without asking others.

The status of the code is "not great" in terms of how easy it is to maintain and change (I know most of the internals, there is some lack of abstractions internally, and there is lots of unsafe code that works only because of careful contributors; it is easy to mess up).

What do @termoshtt @nilgoyette @adamreichold @jturner314 @LukeMathWalker think about this? What's the direction for ndarray (there's a lot that can be done - modernisation using const generics)? Are there other projects that we should emulate? Or that have made ndarray redundant?

@ZuseZ4
Contributor

ZuseZ4 commented May 3, 2023

@bluss Since you mentioned matrixmultiply and simd in the other post, did you see the work from sarah (faer-rs)?
She does seem to have extremely competitive performance compared to OpenBLAS, Eigen, and matrixmultiply.
I generally liked the design of ndarray a touch more than that of nalgebra due to the data I work with.
If I can add a wish for the direction of ndarray, I'd love to see ndarray using faer for more of its operations.

@antimora

antimora commented May 3, 2023

@bluss Thanks for the update; the community has been worried about the maintenance of such an important Rust library. Many projects rely on its existence and can't find any drop-in replacement. I hope the ndarray creators and maintainers can come up with a long-term solution. I am sure there are people who would be happy, at the very least, to review PRs.

@DeliciousHair
Author

@bluss good to see this is not abandoned! I also really appreciate the lack-of-bandwidth problem, suffering from it myself regularly.

The issue of how to bring on "permanent" maintainers is an ongoing problem, though, at least for open-source projects like this, as they often live and die by the bandwidth (or interest) of a small handful of people. Given that this project does not seem to have the corporate backing that tends to address this particular problem via financial incentives, one possible option is to handle maintenance via a committee whose membership can change over time. This would require first setting up a contributing guideline and a code of conduct that would enable said committee to exist, but it may allow work to actually progress without the time-poor bottleneck in the mix.

Just my two cents, really happy to see this conversation happening :-)

@adamreichold
Collaborator

What do you think about this?

I think bringing in more people to share the load is a good idea. It can still fail as volunteers sometimes just do not have any time to contribute. For example, we do have multiple active maintainers who continue the work independently at PyO3. But currently, our active phases almost never overlap which makes small changes slow and it often feels impossible to obtain the necessary consensus for large changes.

As for actually doing it, I see two options: You give some people you are able to trust a degree of access and let things run, living with the likely but hopefully temporary breakage resulting from that. Or you increase your time investment for a while to actively guide new people into reviewing PRs and making releases, but I am not sure whether that is possible at all.

What's the direction for ndarray (there's a lot that can be done - modernisation using const generics)?

For me personally, with my rust-numpy hat on, this crate is mainly a fundamental data structure for scientific computing, and hence I would, also in consideration of the bandwidth issue, prefer it to focus on that. So yes, modernisation but also simplification, i.e. trying to move even more towards an ecosystem model like ndarray-stats and ndarray-linalg, where most operations live outside of ndarray itself. If the whole can of worms of accelerated subroutines via BLAS et al. could be simplified or moved into separate add-on crates, that would be great as well.

Are there other projects that we should emulate? Or that have made ndarray redundant?

I do not know of any with the same "fundamental data structure" focus as ndarray.

Is the NumFOCUS organisation something you could see yourself contacting and asking for (monetary) support? Would money alone actually solve anything?

@DeliciousHair
Author

DeliciousHair commented May 4, 2023

Would money alone actually solve anything?

I think the thing it does help with is that funding enables somebody to justify prioritizing maintenance when there are conflicting pressures on their time. It's far from a perfect solution, but life is expensive: unless one can afford to volunteer their time to an open-source project (and many people can and do, don't get me wrong), if there is demand X that pays the rent vs. really-interesting-project Y, then X will usually win. A financial incentive simply helps to level this field a bit.

As for directions / applications / focus, it occurs to me that a selling point that could be used to attract some funding (I don't know how any of this stuff works; it's outside my realm of experience) is that these shiny new Rust implementations of ubiquitous Python libraries have massive market appeal: just look at how hot polars has become, partially because it stomps all over pandas in many ways. Thus, rather than taking the view that ndarray is a sort of numpy-for-rust, in light of projects like faer and rapl, it may be better to view the ecosystem as the new Rust-backed numpy, assuming said ecosystem would include some Python wrappers of course.

Dispensing with BLAS/LAPACK and gaining out-of-the-box parallelization has immeasurable value to many industries and use-cases after all.

@adamreichold
Collaborator

I think the thing it does help with is that funding enables somebody to justify prioritizing maintenance when there are conflicting pressures on their time. It's far from a perfect solution, but life is expensive: unless one can afford to volunteer their time to an open-source project (and many people can and do, don't get me wrong), if there is demand X that pays the rent vs. really-interesting-project Y, then X will usually win. A financial incentive simply helps to level this field a bit.

I do not disagree, but I would like to add that this reasoning is limited to situations where one works on a project basis. If you have a steady job and obligations to a family, funding for individual projects does not change how much time one has for FOSS work.

@nilgoyette
Collaborator

The status of the code is "not great" in terms of how easy it is to maintain and change (me knows most of the internals, some lack of abstractions for internals, lots of unsafe code that works just because of careful contributors, easy to mess it up).

That's my main problem with this crate if I'm going to help maintain it. When I open the internals, I don't understand what I'm reading. I'm usually able to add a method and whatnot, but I don't feel knowledgeable enough for "more complex" stuff.

  • "lack of abstractions for internals" @bluss Is there a list somewhere? Did you have ideas on how to solve them but never had the time to do it? Can you share those ideas?
  • IIRC, @jturner314 had a PoC for a new iteration management. I don't remember the details, but he claimed that it was, at least, faster. This is super interesting. Can we know the status of this project? Once we have more details, maybe someone will be able to finish it?

So yes, modernisation but also simplification, i.e. trying to move even more towards an ecosystem model like ndarray-stat and ndarray-linalg where most operations live outside of ndarray itself.

This is an excellent idea and this is already what's going on. I created ndarray-ndimage for that reason. ndarray should probably be kept "small" and clean, then let others build on it.

ndarray is a sort of numpy-for-rust

This is exactly my opinion. I don't think ndarray is redundant. nalgebra certainly shares some features, but I do not use them in the same ways nor with the same goals. At least for now, ndarray has a reason to live!

@antimora

antimora commented May 4, 2023

@nilgoyette

This is exactly my opinion. I don't think ndarray is redundant. nalgebra certainly shares some features, but I do not use them in the same ways nor with the same goals. At least for now, ndarray has a reason to live!

Just to highlight the importance of this library. We use NDArray as one of our backends for Burn's deep learning framework.

@jturner314
Member

Personally, now that I'm no longer a student, am working full-time, and have more responsibilities, I have less time and energy to devote to FOSS. And, unfortunately, I don't have much need for ndarray at work, so it's hard to justify spending time on it at work. (I still use ndarray for a few things, such as interacting with NumPy and conveniently parallelizing things in some cases, but nalgebra is usually a better fit for the things I'm working on now.)

I do think that an n-dimensional array type is very important; while nalgebra is nicely polished, the 1-D and 2-D vectors and matrices provided by nalgebra don't satisfy all use cases.

It would be great to bring on more people to take over the maintenance. I'd also be happy to move my ndarray-npy crate to the rust-ndarray organization; I haven't had the energy to really maintain it properly by myself.

As far as improvements go, I think that it would be possible to simplify ndarray's internals and public API by taking advantage of const generics and GATs and reworking the API in terms of traits (instead of the generic ArrayBase type). By simplifying the implementation (using better abstractions which enforce correctness) and making the public API easier to use, I'd hope that more people would use and contribute to ndarray.

I have some ideas for how to update the internals and API using traits, GATs, and const generics, but I doubt I'll find the time to implement it all myself. If someone is interested in working on it, I'd be willing to chat about it.

IIRC, @jturner314 had a PoC for a new iteration management. I don't remember the details, but he claimed that it was, at least, faster. This is super interesting. Can we know the status of this project? Once we have more details, maybe someone will be able to finish it?

Yeah, ndarray currently has optimal iteration in only a few special cases (all arrays in standard layout, or all in Fortran layout). By reordering iteration to be as close to memory order as possible, flattening nested loops where possible, and tiling where necessary, it would be possible to improve iteration performance over arrays of dissimilar or non-standard layouts. I put together an initial prototype of the reordering and loop-flattening pieces at https://github.com/jturner314/nditer. I also worked on automatic tiling but haven't pushed that to the repo. The primary thing that blocked me from finalizing that project was testing: the code is complicated in some places, so I really wanted to implement proptest support for generating and testing with arrays of arbitrary shapes and layouts. I didn't get a chance to finish it. Another way to improve performance would be to take better advantage of SIMD, but I didn't work on that.
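A minimal sketch of the reordering and loop-flattening ideas described above (the function names and signatures here are hypothetical illustrations, not nditer's actual API):

```rust
// Hypothetical sketch: reorder axes so iteration follows memory order,
// and merge adjacent axes that together form one contiguous run.

/// Return a permutation of axes sorted by descending absolute stride,
/// so the innermost loop walks the smallest stride (closest to contiguous).
fn memory_order_axes(strides: &[isize]) -> Vec<usize> {
    let mut axes: Vec<usize> = (0..strides.len()).collect();
    axes.sort_by_key(|&ax| std::cmp::Reverse(strides[ax].unsigned_abs()));
    axes
}

/// Two adjacent axes can be flattened into one loop when
/// `stride_outer == stride_inner * shape_inner`.
fn can_merge(shape_inner: usize, stride_inner: isize, stride_outer: isize) -> bool {
    stride_outer == stride_inner * shape_inner as isize
}

fn main() {
    // A 3x4 C-order (row-major) array has strides [4, 1]: already in memory order.
    assert_eq!(memory_order_axes(&[4, 1]), vec![0, 1]);
    // The same shape in Fortran order has strides [1, 3]: iterate axis 1 outermost.
    assert_eq!(memory_order_axes(&[1, 3]), vec![1, 0]);
    // The two axes of a contiguous 3x4 array collapse into one 12-element loop.
    assert!(can_merge(4, 1, 4));
    println!("ok");
}
```

Tiling, which the comment above also mentions, would layer on top of this for arrays whose layouts disagree.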

@bluss
Member

bluss commented May 13, 2023

Great input from everyone. I wasn't fully aware of faer-rs, no, so thanks for the pointer.

I would like to invite those participating in the discussion here to become collaborators in ndarray.
It's unfortunately not realistic for me to take on a greater responsibility now, so that is not going to be the outcome of the discussion, even though one could reasonably wish for it. I want to leave the way open for others to develop ndarray without having me as gatekeeper.

Can I for example ask @adamreichold, are you interested? Do you have any contacts that are?

@adamreichold
Collaborator

Can I for example ask @adamreichold, are you interested? Do you have any contacts that are?

Took me a while to consider the commitment but yes, I am interested. I would be glad if I could help with maintenance and eventually further development.

I do think my own time budget and my inexperience in maintaining this particular project imply that I could not immediately tackle any large changes. On the contrary, in the beginning I would deliberately limit myself to building and packaging issues and to reviewing contributions, with the aim of producing point releases and hopefully eventually a 0.16.0 release. Ideally, I will be able to learn enough to do more in the future.

(I also do not want to give a wrong impression, I do not consider myself well-networked and have few contacts beyond direct collaboration via the FOSS projects. I will ask the one acquaintance who I think could be in a position to contribute though.)

@nilgoyette
Collaborator

I would like to invite those participating in the discussion here to become collaborators in ndarray.

I find myself in jturner's situation (little to no ndarray at work, for a while), but I really love ndarray and I can at least

  • check the issues page and answer, when I'm able.
    • We currently have 213 of those. Can I (we) clean them up? I volunteer to read all of them and close those that are no longer relevant. Can I have the rights to do so?
  • try to review the PRs when I do understand what's going on

@bluss
Member

bluss commented May 16, 2023

Awesome, I've added you on this repo, but there is more admin to do - the whole org - which we will get to

@DeliciousHair
Author

DeliciousHair commented Jun 5, 2023

Thought I'd chime in here that I'd be happy to put my hand up to volunteer for some sort of maintainer / reviewer status. At the moment I'm also trying to contribute to rapl so I've at least got my mind in the correct linear algebra / tensor space to be thinking about this. Work schedule is a bit up and down, rather "up" at the moment so free time is at a premium and contributions will be slim for the next month or so. However, I do have enough availability to do reviews most any time, and am happy to participate in any planning where my input may be of value.

@bluthej

bluthej commented May 12, 2024

Hey! I was looking to start contributing to ndarray and found this thread.

If my understanding is correct it sounds like going through the good first issues and sending PRs might not be the most useful thing to do right now?
If I'm wrong I'd be happy to start contributing!

In any case, it might be worth updating the status section of the readme to make the information readily available 🙂
Especially for people who would like to depend on this crate or contribute to it.

@nilgoyette
Collaborator

@bluthej This project is neither dead nor actively maintained. There seems to have been as much activity in the last 2 months as in the last 3 years, which is kinda promising. Look at the commits to get a better understanding.

As a side note, we built an important part of our main project (medical imaging company) on ndarray and we do not regret it at all. Could it be better? Yes, of course. Does it offer everything we needed? Yep.

I can't answer your specific question (Should I contribute?), but I can at least say that your issues/discussions will be answered and your PRs will be read.

@grothesque

It seems to me that the apparent stagnation of this project is only partially explained by the original contributors being too busy. That is normal, but one could expect new contributors to show up for such an important, even foundational (for numerical Rust code), crate. I suspect that the very high level of complexity of this library plays a role in limiting contributions. And my gut feeling is that a fair share of this complexity is due to overengineering and could be removed.

Are there people who would be interested in discussing the feasibility of a radically simplified and modernized "next generation" ndarray? This could lead to a prototype that is eventually either absorbed into ndarray proper, or the old ndarray could be deprecated, or remain as a compatibility layer, or whatever. I believe that work on such a simplified and modernized ndarray could be a way to revitalize numerics in Rust.

There seems to be consensus that if ndarray was started today with the benefit of hindsight (and with current rustc), its design would be quite different:

  • The core library would be limited to essential functionality that is required to safely work with n-dimensional arrays. Any operations that can be efficiently implemented externally would be provided by traits in separate crates. (See Maintainership roundtable and discussion #1272 (comment).)

  • Most array functionality would be implemented for a reference type. This would simplify the API and reduce monomorphization bloat.

  • The library would provide a few array types (most importantly an owned and a view variant in addition to the reference type mentioned above). Most essential functionality would be available through the reference type to which these types would dereference. Functionality that needs to take ownership would be available through one or a few common traits: see section "Future possibilities" of RFC: Array Reference Type #879 and Maintainership roundtable and discussion #1272 (comment).
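The reference-type design in the bullets above can be sketched roughly as follows (all type names here are placeholders for illustration, not a proposed final API; the real design would hold a pointer, shape, and strides rather than a `Vec`):

```rust
// A minimal sketch of the deref-based reference-type design under discussion.

use std::ops::Deref;

/// Shared, by-reference array functionality lives on this type.
struct ArrayRef<T> {
    data: Vec<T>, // stand-in for ptr/shape/strides
}

impl<T: Copy + std::iter::Sum> ArrayRef<T> {
    fn sum(&self) -> T {
        self.data.iter().copied().sum()
    }
}

/// An owned array: it manages memory and derefs to ArrayRef for everything else.
struct Array<T>(ArrayRef<T>);

impl<T> Deref for Array<T> {
    type Target = ArrayRef<T>;
    fn deref(&self) -> &ArrayRef<T> {
        &self.0
    }
}

/// User code takes &ArrayRef<f64> and works with owned arrays and views alike.
fn total(a: &ArrayRef<f64>) -> f64 {
    a.sum()
}

fn main() {
    let a = Array(ArrayRef { data: vec![1.0, 2.0, 3.0] });
    // Deref coercion turns &Array<f64> into &ArrayRef<f64> automatically.
    assert_eq!(total(&a), 6.0);
    println!("ok");
}
```

A view type would deref to the same `ArrayRef`, which is what lets one non-generic function signature serve every array kind.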

While the above does not seem to be controversial, here are some additional ideas for further streamlining:

  • With the new design, the job of the array types (e.g. Array for an owned array) would be to manage memory. So it seems that RawData, Data, and the other storage traits would no longer be necessary.

  • Same for the various Repr variants like OwnedRepr. These could be dropped and the necessary logic incorporated into the array types. ArcArray and CowArray would be replaced by specific array types as well (or perhaps removed or delegated to a separate crate).

  • Given that ndarray is focused on dynamically sized arrays (i.e. the shape is dynamic), is there actually much value in supporting a static number of dimensions (i.e. ndim can be static)? It is true that in current ndarray IxDyn is slow, but looking at the example given in the issue, this is not because of the extra storage/allocation required (that should be negligible), but because of inefficient code.

    I believe that with a clever implementation, the number of extra allocations required for a dynamic ndim could be reduced almost to zero. Currently each IxDyn instance (there are two per array: one for the shape and one for the strides) holds one allocation (if ndim > 4). It should be possible to unite both in a single allocation, or even in one allocation together with the data. In this way, a single extra allocation would be needed only for views that change stride/shape and have ndim > 4 (or some other threshold).

  • RawArrayView and similar structs are the equivalent of raw pointers. Do they actually have convincing use cases? They do have a cost in terms of code and API complexity, so perhaps they could be simply left out?
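The single-allocation idea for dynamic ndim mentioned above might look roughly like this (a hypothetical `DynHeader`; not ndarray's actual representation):

```rust
// Hypothetical sketch: keep shape and strides in ONE allocation instead of two.

/// Dynamic-ndim header: one boxed slice holds the shape followed by the strides.
struct DynHeader {
    ndim: usize,
    buf: Box<[isize]>, // layout: [shape[0..ndim], strides[0..ndim]]
}

impl DynHeader {
    fn new(shape: &[usize], strides: &[isize]) -> Self {
        assert_eq!(shape.len(), strides.len());
        let mut buf = Vec::with_capacity(shape.len() * 2);
        buf.extend(shape.iter().map(|&s| s as isize));
        buf.extend_from_slice(strides);
        DynHeader { ndim: shape.len(), buf: buf.into_boxed_slice() }
    }
    fn shape(&self) -> &[isize] {
        &self.buf[..self.ndim]
    }
    fn strides(&self) -> &[isize] {
        &self.buf[self.ndim..]
    }
}

fn main() {
    // A 3x4 C-order array: shape [3, 4], strides [4, 1], one allocation total.
    let h = DynHeader::new(&[3, 4], &[4, 1]);
    assert_eq!(h.shape(), &[3, 4]);
    assert_eq!(h.strides(), &[4, 1]);
    println!("ok");
}
```

Folding this header into the same allocation as the data, as the comment above suggests, would reduce the per-array overhead further still.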


With all or most of the above, a new ndarray should be radically simpler and smaller, both in terms of implementation and API. User code that reads any array of f64 would simply take &ArrayRef<f64>, for example.

@nilgoyette
Collaborator

nilgoyette commented Jul 20, 2024

I suspect that the very high level of complexity of this library plays a role in limiting contributions.

I tried contributing to this crate and everything non-trivial was/is too complex for me. As a professional Rust/C++/Python programmer, I'm not particularly proud of writing this, but it is the way it is. So, yes, I totally agree with your sentence. Now that I've said that I'm ignorant about the internals, here's my opinion :) on some of your points

  • Having external crates handle the complexity seems like a good idea, but then we end up with several unmaintained projects (e.g. ndarray-stats). This might be an entirely different problem and we can't do much about it in this issue... I think that, if we remove something from this crate, there might not be another crate to offer it.
  • The reference type RFC. I never took the time to read it before, but now that I have, it seems logical to respect the way other Rust objects work. I'm the maintainer of the ndarray-ndimage crate and I often wondered what kind of parameters I should accept.
  • I always thought IxDyn was slow because, well, it's dynamic: we need to allocate more and we get more branch prediction errors. If it's possible to make the dynamic arrays as fast as the static ones, that's perfect. However, how will we link some methods to arrays of a specific size, like some of the constructors for 2D arrays (identity and from_diag)?
  • I used ndarray for several years and I never saw a use case for ArcArray and CowArray. Of course this is not a reason to remove them :) but if there's not much demand for them and supporting them is actually complex, then they should probably be removed.

@akern40
Collaborator

akern40 commented Jul 20, 2024

I suspect that the very high level of complexity of this library plays a role in limiting contributions. And my gut feeling is that a fair share of this complexity is due to overengineering and could be removed.

Agreed! I had the same gut impression when I came to the library. However, after working intensively on the array reference RFC, I can say that much of the complexity is more warranted than it first appears. I'd want to be careful not to confuse "unneeded complexity" with "undocumented complexity". Still, I think there are cleaner designs that could keep the same capability with clearer abstractions.

Are there people who would be interested in discussing the feasibility of a radically simplified and modernized "next generation" ndarray? This could eventually lead to a prototype that could eventually either be absorbed into ndarray proper...

I am interested in this, but I'd strongly encourage this effort to be done under the ndarray name. The Rust multi-dimensional array ecosystem is already quite difficult for newcomers to navigate, having to find, understand, and differentiate between ndarray and the strongly supported nalgebra with its focus on static-sized 2D matrices (not to mention other participants, such as faer-rs). Since ndarray never hit 1.0 and can therefore still make breaking changes, I think it's likely that a path forward could be found that slowly weans users off the current design via deprecation while building a stronger foundation for new users. I'd think of it more as aiming to bring ndarray to v1.0.

  • Given that ndarray is focused on dynamically sized arrays (i.e. the shape is dynamic), is there actually much value in supporting a static number of dimensions (i.e. ndim can be static)?

I believe if the library is designed carefully, we can leave the door open to (and maybe should provide) statically-dimensioned arrays. I think this for a few reasons: firstly, if you look at projects like numpy, PyTorch, or jax, they have spent years trying to build up type annotation systems that can at least indicate the dimensionality (if not shape) of their arrays. While this effort is easier in Rust (without the limitations of Python's type hint system), I think we should take a strong hint from efforts in other languages that knowing the dimensionality of an array in your type system is a pretty important capability. The second reason is that it opens the door to potentially-powerful optimizations, like things that you can do if you absolutely know that your array is just a 2D matrix, or even having some fancy "layout" type that just represents diagonal matrices; focusing only on dynamic-sized arrays may blind us to the designs necessary to make these things happen.
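As a rough illustration of what statically-dimensioned arrays via const generics could look like (all names here are hypothetical, and this glosses over strides, views, and layout entirely):

```rust
// Hypothetical sketch: dimensionality as a const generic parameter.

/// An array whose number of dimensions N is part of the type.
struct NdArray<T, const N: usize> {
    shape: [usize; N],
    data: Vec<T>,
}

impl<T: Clone + Default, const N: usize> NdArray<T, N> {
    fn zeros(shape: [usize; N]) -> Self {
        let len: usize = shape.iter().product();
        NdArray { shape, data: vec![T::default(); len] }
    }
    fn ndim(&self) -> usize {
        N
    }
}

/// Operations can be restricted to a known dimensionality, e.g. 2-D only,
/// answering the earlier question about identity/from_diag constructors.
impl<T: Clone + Default + From<u8>> NdArray<T, 2> {
    fn identity(n: usize) -> Self {
        let mut a = Self::zeros([n, n]);
        for i in 0..n {
            a.data[i * n + i] = T::from(1u8);
        }
        a
    }
}

fn main() {
    let m = NdArray::<f64, 2>::identity(3);
    assert_eq!(m.ndim(), 2);
    assert_eq!(m.shape, [3, 3]);
    assert_eq!(m.data[0], 1.0); // diagonal entry
    assert_eq!(m.data[1], 0.0); // off-diagonal entry
    println!("ok");
}
```

A dynamic-ndim variant could coexist alongside this, much as `IxDyn` coexists with `Ix2` today.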

  • RawArrayView and similar structs are the equivalent of raw pointers. Do they actually have convincing use cases?

I think this is a bit of a misconception about the job of Raw... and its role in creating complexity. I don't think maintaining a "raw pointer" type in ndarray is particularly difficult (see my comment on the array reference RFC), and keeping the capability for low-level, unsafe, or other advanced usage seems important to such a fundamental crate.

  • Same for the various Repr variants like OwnedRepr. These could be dropped and the necessary logic incorporated into the array types. ArcArray and CowArray would be replaced by specific array types as well (or perhaps removed or delegated to a separate crate).

With all or most of the above, a new ndarray should be radically simpler and smaller, both in terms of implementation and API. User code that reads any array of f64 would simply take &ArrayRef<f64>, for example.

Interesting note on these two: I've been thinking it over a lot, and I think generics in this case are very hard to avoid as cleanly as the above example. Obviously you need one for the type of the element. Given that I'm arguing for keeping statically-dimensioned arrays (and opening the door to fancy other layouts), that requires a second generic. And after a lot of thought, I think that you essentially have two options for things like Arc and Cow at the data-level (keeping in mind you could always wrap an entire array reference in an Arc or Cow or whatever, as long as you're chill with having changes to the array's layout also being included in the reference count or clone-on-write capability).

  1. Option 1: pointer wrappers that aren't included in the type system. Bury the wrappers in an enum, build that enum into the basic ndarray array type, and voila, you can get away without a third generic. The costs here are pretty bad, though. Including Cow as an option suddenly requires every ndarray to include only elements that are Clone. It also precludes users (and us) from reasoning about the various shared array types without runtime inspection. And finally, that runtime inspection (and runtime access to the underlying pointer) may have performance implications that are very difficult to benchmark.
  2. Option 2: include the pointer wrappers in the type system. This is what ndarray does right now, and it requires a third generic: Rust just can't reason about non-concrete types without them. You also can't maintain ArcArray and CowArray as completely separate types while writing a non-generic function for them.

Sorry if those explanations kinda suck; I'm having trouble explaining clearly what I think is a fundamental trade off. Also, a disclaimer: it may be possible to still do this by managing to write a Deref from something like ArcArrayRef to ArrayRef, where ArcArrayRef carries an Arc<T> and ArrayRef carries a NonNull<T>, but I haven't been able to figure out how to do that.
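A minimal sketch of "Option 1" above, showing why burying the wrappers in a runtime enum forces a `Clone` bound and per-access branching (the names are illustrative, not ndarray's internals):

```rust
// Hypothetical sketch: storage variants hidden behind an enum, so the array
// type needs no storage generic -- at the cost of runtime inspection.

use std::sync::Arc;

/// Storage without a type parameter: the variant is only known at runtime.
enum Storage<T> {
    Owned(Vec<T>),
    Shared(Arc<Vec<T>>),
}

impl<T: Clone> Storage<T> {
    /// Every mutable access must branch, and clone-on-write forces T: Clone
    /// on ALL arrays, even ones that never use the Shared variant.
    fn make_mut(&mut self) -> &mut Vec<T> {
        if let Storage::Shared(arc) = self {
            let owned = arc.as_ref().clone(); // here is the T: Clone requirement
            *self = Storage::Owned(owned);
        }
        match self {
            Storage::Owned(v) => v,
            Storage::Shared(_) => unreachable!(),
        }
    }
}

fn main() {
    let shared = Arc::new(vec![1, 2, 3]);
    let mut s = Storage::Shared(shared.clone());
    s.make_mut().push(4); // clones out of the Arc, then mutates the copy
    assert_eq!(shared.as_ref(), &vec![1, 2, 3]); // original is untouched
    match &s {
        Storage::Owned(v) => assert_eq!(v, &vec![1, 2, 3, 4]),
        Storage::Shared(_) => panic!("expected owned after make_mut"),
    }
    println!("ok");
}
```

Option 2 avoids both costs by moving the variant into a generic parameter, which is exactly the third generic (`S` in today's `ArrayBase<S, D>`) that the enum approach was trying to eliminate.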

I used ndarray for several years and I never saw a use-case for ArcArray and CowArray. Of course this is not a reason to remove them :) but if there's not much demand for those and supporting them is actually complex, then they should probably be removed.

Music to my ears.

Finally, a few thoughts that aren't included in the above conversation:

  1. If we're considering a redesign, I'd make a shortlist of capabilities we'd like to support. For example, GPU arrays are probably out (see this Reddit comment on why), but I'd bet we want to support sparse arrays. Special layouts (e.g., diagonals) may also be of interest. What needs to be considered in the design to make that possible?
  2. Despite my first bullet point, I'd encourage us to choose a first step and make that happen; I can imagine a goal like "redesign the internals to support all future use cases" as the sort of project that never gets to a release (speaking from personal experience here 😅). In addition, an increasing cadence of releases and contributions will give users trust in the vitality of the crate, hopefully drawing even more contributors.
  3. I'd also consider "competitive advantage": what can ndarray do that others can't, and what are we leaving to others? For example, nalgebra seems to have statically-shaped and stack-allocated matrices down pretty pat. Seems like we shouldn't focus on that use case?
  4. On that note, let's consider compatibility, with nalgebra but also with PyO3, numpy, and other efforts like faer-rs and other accelerator efforts.

@bluss
Member

bluss commented Jul 20, 2024

Great to see your interest for this @akern40 and @grothesque!

IxDyn is slow because, in the time I've worked on it, it has been an extra feature along for the ride, not something that was designed for. Its purpose has been to help encapsulate data, not to be used for numerical operations.

I like the analysis done here, and the complexity level in ndarray is high like you say. The internal model of ndarray is not too complicated, but the knowledge of it is used in too many places. So refactoring that would definitely be welcome.

(To some extent RawArrayView is an example of finding a more basic common building block for some operations - but its reason to exist is to handle uninitialized array elements correctly.)

And yes, the focus should (ideally..) be on continuous delivery (making releases). That's how we make real contributions to the ecosystem. This is also maybe the hardest thing to recruit for: someone (or a group) who can take over driving releases. 🙂

User code that reads any array of f64 would simply take &ArrayRef, for example.

I agree with the general push towards this ideal, without saying anything now about whether dimensionality information should be static, or whether it should be a concrete type or a trait-based interface.

@akern40
Collaborator

akern40 commented Jul 21, 2024

Thanks for the feedback, @bluss! Can you chime in at all with your opinions on keeping the data-level Arc and Cow? It feels to me like that's a big design decision at the center of a refactor, and I would love to hear some input from someone with a lot more knowledge of the library.

As for pushing releases, I'd be happy to be part of a team that works on this. I'm a relative newcomer, but I'm strongly interested and have an ok understanding of the array part of the codebase (I'm a little shakier on iteration and dimensionality, but those will come with time). I've also got the time right now in a way that others with more life responsibilities may not. Happy to talk offline if that's of interest.

@daniellga
Copy link
Contributor

daniellga commented Jul 21, 2024

* I used `ndarray` for several years and I never saw a use-case for `ArcArray` and `CowArray`. Of course this is not a reason to remove them :) but if there's not much demand for those and supporting them is actually complex, then they should probably be removed.

It's funny I stumbled into a use case today, the same day I am reading your comment. I also use ArcArray in my far less important crate for the COW behaviour and for being Send and Sync.
IxDyn seems to be the only option if I want to implement a port of ndarray in other languages, like Python. So I think it's really important to keep it.

Thank you all for your work!

@grothesque
Copy link

Thanks for your comments! I will reply to all the points, but this may take me some time.

@akern40 wrote:

I am interested in this, but I'd strongly encourage this effort to be done under the ndarray name. The Rust multi-dim array ecosystem is already quite difficult for newcomers to navigate, having to find, understand, and differentiate between ndarray and the strongly-supported nalgebra with its focus on static-sized 2D matrices (not to mention other participants, such as faer-rs). Since ndarray never hit 1.0 and therefore can still use breaking changes, I think it's likely that a path forward could be found that slowly weans users off of the current design via deprecation while building a stronger foundation for new users. I'd think of it more like aiming to bring ndarray to v1.0.

I fully agree about avoiding splitting the ecosystem. What I tried to suggest is that there might be value in exploring the viability of a radically simplified ndarray foundation in a separate crate. That may give clarity about what is feasible without having to consider current users of ndarray.

Once it is clear what is feasible, the necessary changes could be added to ndarray proper in a way that gives users and depending crates time to adapt. But without knowing what is feasible, it seems difficult to justify bold changes.

@grothesque
Copy link

grothesque commented Jul 21, 2024

Now for IxDyn. Would love to hear whether you agree or rather think that I get carried away in my argumentation.

@akern40 wrote:

I believe if the library is designed carefully, we can leave the door open to (and maybe should provide) statically-dimensioned arrays. I think this for a few reasons: firstly, if you look at projects like numpy, PyTorch, or jax, they have spent years trying to build up type annotation systems that can at least indicate the dimensionality (if not shape) of their arrays. While this effort is easier in Rust (without the limitations of Python's type hint system), I think we should take a strong hint from efforts in other languages that knowing the dimensionality of an array in your type system is a pretty important capability.

Sure, these typing systems are useful, but they are not just about ndim, but also about shapes (like fixing the length of some axes, or expressing that two axes have the same length or carry the same "unit"). Is anyone capable of doing such checking at compile time in any language?

Already a Rust array library where ndim and all elements of shape are static but fully generic would be very cool. However I am not sure whether technically the language is ripe even for this. (The nalgebra crate might be doing just what is feasible right now.)

Since ndarray’s core business is dynamically shaped arrays (as in BLAS/LAPACK-style linear algebra and not in 3d vector linear algebra), adding partially static shapes to this would only increase complexity: Given that already a purely static-shaped array library would be technically very difficult, going beyond that seems even more so.

Case in point: ndarray’s current model (dynamic shape/strides with optionally static ndim) is very limited compared to jaxtyping, but it’s already responsible for a fair share of API and implementation complexity (without much gain to show for it as I try to demonstrate below). My impression is that the price for the small gain in static checking is too high.

The second reason is that it opens the door to potentially-powerful optimizations, like things that you can do if you absolutely know that your array is just a 2D matrix, or even having some fancy "layout" type that just represents diagonal matrices; focusing only on dynamic-sized arrays may blind us to the designs necessary to make these things happen.

I would like to argue that it is incoherent to pair static ndim with dynamic shape and strides. Shape is a more low-level property in the sense that it matters in the innermost loops, while ndim is a more high-level property that rather determines the number of loops. (For optimizing inner loops knowing statically the innermost element of shape and strides would be more useful.) Moreover there are typically "few" possible values for ndim, so even if it is dynamic, it can be still dealt with efficiently:

  • To emit really efficient code (say SIMD-optimized code for 4x4x4 arrays), shape and strides would have to be static as well. Should knowing ndim statically matter, one is free to emit separate code paths for different ndim values and branch/check at runtime. In most cases where static ndim is applicable, the runtime cost of dynamic ndim should be negligible thanks to the CPU’s branch prediction (see also Slow iteration because of IxDyn #1340).

  • Currently IxDyn means two extra allocations when ndim > 4. I believe these could be easily reduced to a single one, or even zero if one is ready to share a single allocation for shape, strides and data - this should be good for cache locality as well.

It seems to me that the only real advantage of static ndim is the ability of the compiler to catch some errors at compile time. But note that ndim is just one among several potentially useful properties that could be encoded in the type (at least in principle). Singling out ndim, a property that can be checked at runtime at negligible cost, seems to me a needless complication for a library focused on rather large dynamically shaped arrays.
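The runtime-dispatch idea in the first bullet above can be sketched in a few lines (names are illustrative, not ndarray API): a match on the dynamic shape slice reaches rank-specialized code paths, and the branch is well predicted when ranks are stable across calls.

```rust
// Dispatch on a dynamic-rank shape at runtime: common ranks get
// monomorphic fast paths, higher ranks fall back to a generic loop.
fn product(shape: &[usize]) -> usize {
    match shape {
        [a] => *a,                       // rank 1
        [a, b] => a * b,                 // rank 2
        [a, b, c] => a * b * c,          // rank 3
        _ => shape.iter().product(),     // any higher rank
    }
}

fn main() {
    assert_eq!(product(&[3]), 3);
    assert_eq!(product(&[4, 5]), 20);
    assert_eq!(product(&[2, 3, 4, 5]), 120);
}
```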

@nilgoyette wrote:

I always thought IxDyn was slow because - well, it's dynamic. We need to allocate more and we have more branch prediction errors. If it's possible to make the dynamic arrays as fast as the static ones, it's perfect. However, how will we link some methods with arrays of a specific size, like some of the constructors for 2D arrays (identity and from_diag)?

It would no longer be possible to express in the type system that identity emits a 2D array, but I do not think that this is a big deal. After all, identity emits a square array, and this is not encoded as well. The function identity would simply return a dynamically sized array that happens to be 2D and square.
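As a hypothetical sketch (stand-in types, not ndarray's actual API), such an `identity` might look like the following, with both the 2-D-ness and the squareness being runtime facts rather than type-level ones:

```rust
// A dynamically shaped array stand-in: shape and data only.
struct DynArray {
    shape: Vec<usize>,
    data: Vec<f64>,
}

// Returns an array that happens to be 2-D and square, but neither
// property is encoded in the return type.
fn identity(n: usize) -> DynArray {
    let mut data = vec![0.0; n * n];
    for i in 0..n {
        data[i * n + i] = 1.0; // row-major diagonal
    }
    DynArray { shape: vec![n, n], data }
}

fn main() {
    let eye = identity(3);
    assert_eq!(eye.shape, vec![3, 3]);
    assert_eq!(eye.data[4], 1.0); // element (1, 1)
    assert_eq!(eye.data[1], 0.0); // element (0, 1)
}
```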

@bluss wrote:

IxDyn is slow because, during my work on ndarray, it has been an extra feature along for the ride, not something designed for performance. Its purpose has been to help encapsulate data, not to be used for numerical operations.

But IxDyn is relevant. For example what brought me to ndarray is my work on a Rust library for tensor network computations (a library similar to https://itensor.org/). There, one has to deal with networks that contain a mixture of arrays of variable ndim all the time.

@akern40
Copy link
Collaborator

akern40 commented Jul 23, 2024

It's funny I stumbled into a use case today, the same day I am reading your comment.

@daniellga funny how those things happen! Question about that usage: is it important for your use that the data specifically is Arc, separate from the shape/strides? Would it be ok if the data and shapes/strides were all atomically reference counted together? Also, I notice that you use IxDyn for that implementation, then wrap that with a usize const generic. Two questions: why did you opt for const generic sizes for your tensors, and if there were a const-generic implementation of dimensions (it's been mentioned in a few comments and issues here), would that be of interest?
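To make the question concrete, here is a toy sketch (hypothetical types, not ndarray's actual ArcArray) of the two options: reference-counting only the data buffer versus reference-counting data and shape/strides together.

```rust
use std::sync::Arc;

// Option A: shared data, per-handle shape (ndarray's ArcArray style).
// Two handles can view the same buffer with different shapes.
struct ArcData {
    data: Arc<Vec<f64>>,
    shape: Vec<usize>,
}

// Option B: data and shape behind one Arc. All handles necessarily
// agree on the shape.
struct Header {
    data: Vec<f64>,
    shape: Vec<usize>,
}
struct ArcAll(Arc<Header>);

fn main() {
    let a = ArcData { data: Arc::new(vec![0.0; 6]), shape: vec![2, 3] };
    // A second handle reinterprets the same buffer as 3x2:
    let b = ArcData { data: Arc::clone(&a.data), shape: vec![3, 2] };
    assert_eq!(a.shape, vec![2, 3]);
    assert_eq!(b.shape, vec![3, 2]);
    assert_eq!(Arc::strong_count(&a.data), 2);

    let c = ArcAll(Arc::new(Header { data: vec![0.0; 6], shape: vec![2, 3] }));
    let d = ArcAll(Arc::clone(&c.0));
    assert_eq!(d.0.shape, c.0.shape); // shape is shared too
}
```

Option A keeps reshaped views cheap on shared data; Option B simplifies the ownership story at the cost of that flexibility.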

@grothesque:

Once it is clear what is feasible, the necessary changes could be added to ndarray proper in a way that gives users and depending crates time to adapt. But without knowing what is feasible, it seems difficult to justify bold changes.

Agreed. I actually have a repo that I was using for mocking up the array reference RFC; if that's a good place, happy to use its issues/PRs/codebase as a place for people to do some design concepts.

On the note of IxDyn, I think you make strong points. I'll try to be brief in my responses, but here's my top-line: dynamic-dimensional arrays should be possible, easy, and just as fast as static-dimensional arrays (should the latter exist). Ergonomic aliases should also exist to easily employ dynamic-dimensional arrays without worrying quite so much about generics.

only real advantage of static ndim is the ability of the compiler to catch some errors at compile time

Even if this is the only advantage, that seems worth it to me? I also think it aligns with a Rust ethos: if we can get the compiler to check it for us, let's do that.

But note that ndim is just one among several potentially useful properties that could be encoded in the type

It would no longer be possible to express in the type system that identity emits a 2D array, but I do not think that this is a big deal. After all, identity emits a square array, and this is not encoded as well

I actually think this is the stronger argument for maintaining genericity in some "layout" parameter. Seems like it would be better to build in a generic that lets us (and others) play with layouts (fixed dim, fixed stride, diagonal, sparse(?), etc), then expose that as a lower-level API. Maybe identity should indicate that it returns a square array. When users come to learn ndarray, present them first with a simple dynamic-dimensional &ArrayRef<f64>, then let them discover/learn the more complex API as needed. I think it would also make ndarray a stronger candidate as a base library for others to build on for specialized or niche tasks.
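One hedged sketch of such a layout generic (all names hypothetical, not a concrete proposal): the matrix type is parameterized by a trait that maps indices to storage offsets, so dense and diagonal layouts share a single `get`.

```rust
// A layout maps a (row, col) index to a storage offset, if that
// element is actually stored.
trait Layout {
    fn offset(&self, i: usize, j: usize) -> Option<usize>;
}

// Ordinary row-major dense storage.
struct Dense {
    ncols: usize,
}
impl Layout for Dense {
    fn offset(&self, i: usize, j: usize) -> Option<usize> {
        Some(i * self.ncols + j)
    }
}

// Only the diagonal is stored; off-diagonal elements are implicit zeros.
struct Diagonal;
impl Layout for Diagonal {
    fn offset(&self, i: usize, j: usize) -> Option<usize> {
        (i == j).then_some(i)
    }
}

struct Matrix<L: Layout> {
    layout: L,
    data: Vec<f64>,
}

impl<L: Layout> Matrix<L> {
    fn get(&self, i: usize, j: usize) -> f64 {
        self.layout.offset(i, j).map_or(0.0, |o| self.data[o])
    }
}

fn main() {
    let d = Matrix { layout: Diagonal, data: vec![1.0, 2.0, 3.0] };
    assert_eq!(d.get(1, 1), 2.0);
    assert_eq!(d.get(0, 2), 0.0);

    let m = Matrix { layout: Dense { ncols: 2 }, data: vec![1.0, 2.0, 3.0, 4.0] };
    assert_eq!(m.get(1, 0), 3.0);
}
```

Downstream crates could then supply their own `Layout` impls (fixed strides, sparse, etc.) without touching the core type.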

focused on rather large dynamically shaped arrays

Just to throw my two cents in here, I came to ndarray because I was working on an application that specifically deals with 1D, 2D, and 3D state matrices - matrices that are often (3,), (N,3), or (N,3,3). Frankly, I started with nalgebra, because their fixed-shapes implementation is particularly powerful for state matrices. However, they have no 3D support (nor does the C++ Eigen library, officially), so I moved to ndarray instead. But I still liked specifically knowing the dimensionality of the arrays.

@DeliciousHair
Copy link
Author

I am going to ping @sarah-ek into this discussion also since I seem to recall her mentioning the idea of including tensor ops for faer-rs. Assuming I am not imagining this, it could be a good opportunity for collaborative planning at the very least in terms of API design and whatnot.

@akern40
Copy link
Collaborator

akern40 commented Jul 27, 2024

Ya faer-rs is awesome! I'd love to see some collaboration between ndarray and other libraries that deal with n-dimensional arrays. I'd hope that ndarray could provide strong connective tissue off of which other libraries could interoperate and build.

@grothesque
Copy link

grothesque commented Jul 27, 2024

Here is another multidimensional array library for Rust: https://docs.rs/mdarray/latest/mdarray/

@akern40 wrote

I'd hope that ndarray could provide strong connective tissue off of which other libraries could interoperate and build.

That's my hope as well. Most of the libraries that I've seen (including faer-rs, rlst, and the above mdarray) have the following points in common:

  • There is a single active contributor, or at most two.
  • In addition to implementing some flavor of basic multidimensional array functionality, they provide a subset of applications of it.
  • They lack a design document that would justify and explain the design decisions that were taken.

Of course there is nothing wrong with implementing a multidimensional array library as an exercise or for a particular application. But given that the underlying storage is very similar, it seems that there is space to try to strengthen the role of ndarray as common infrastructure as far as technically feasible.

The way I see it (do you agree?), the fundamental design decision/constraint behind ndarray is that it must provide an efficient abstraction for arbitrary, dynamic, multidimensional dense arrays: this is useful in general, but in particular for projects like https://github.com/PyO3/rust-numpy

Keeping this in mind, I hope that some of the reasons why all these libraries rolled out their own array types can be mitigated by providing some coherent subset of the following (list is incomplete):

  • Useful traits for arrays
  • Arrays that live on the stack and not just on the heap
  • Arrays with more static layout
  • Better support for chunked/SIMD operations
  • Better support for concurrency
  • Support for lazy evaluation ("expression templates" in C++ jargon)
  • Sparse arrays interoperable with dense ones

Of course not everything can be done at the same time and coming up with a nice, efficient and coherent design is difficult, but I have the impression that several people here are interested in participating in this effort. I do hope that it will be possible to at least cover the application areas of several "large array" crates in a single infrastructure crate. (Providing common infrastructure also for nalgebra-like applications seems more difficult. For example #879 relies on explicitly storing the shape and the strides for each array, which seems a no-go for fast 2x2 arrays.)

@grothesque
Copy link

Also relevant: the new std::mdspan of C++

Unfortunately, I do not think that the design can be replicated in Rust. Or can it?

@bluss
Copy link
Member

bluss commented Jul 28, 2024

can you (or I) set up a Zulip/Slack/Discord to communicate?

Either Zulip or Discord would work for me, feel free to create either. (I can also do it, as long as we decide which one to use. 🙂 )

Disclosure: the previous "official" channel for ndarray was #rust-sci:matrix.org. If we create something new, I'd rather not use Matrix, IMO.

@akern40
Copy link
Collaborator

akern40 commented Jul 28, 2024

Ok, for any/all those interested, I have created a Zulip organization that you can sign up for here. It has the broad name "Rust Multidimensional Arrays", in the hopes that this can eventually be a place to converse about the topic in general in Rust (e.g., for a design working group), in addition to a mode of communication for logistics.

Please note: I have included a Code of Conduct borrowed from GitHub and included it in the announcements channel. I have also set some fairly strict limits on various kinds of activity such as channel creation and direct messaging, in addition to requiring signup via GitHub or GitLab. If people find these restrictions cumbersome, let me know and I can relax them.

Edit: I've added an email option because signups apparently weren't working with just GitHub/GitLab.

@bluss
Copy link
Member

bluss commented Jul 28, 2024

Great! Right now it says an invite is required to join. (Github login path.)

Edit: no, ok, I could join without that, using email. I might have misread that you said we had to use Github/Gitlab signup.

@bluss bluss pinned this issue Aug 2, 2024
@termoshtt
Copy link
Member

Maybe we need two things:

  • Get new maintainers continuously
    • Since new maintainers often become busy quickly, we need a system that continuously brings in new maintainers. For this, we will need materials and procedures to train them.
  • Create sub-leader positions to keep each new maintainer's role clear
    • It is hard to catch up on everything in this repository, yet the "main" maintainer has to know it all. To get more maintainers, we need to narrow the responsibility of each maintainer.

Here is a rough sketch of the sub-leader responsibilities:

  • Maintenance Burden Reduction Sub-Leader:
    • Recruit and train new maintainers.
    • Organize and distribute maintenance tasks.
  • Code Modernization Sub-Leader:
    • Lead efforts to modernize the codebase (e.g., const generics, internal abstraction improvements).
    • Oversee optimizations for low-level operations like SIMD and BLAS.
  • Community Engagement Sub-Leader:
    • Plan and execute activities to increase community contributions.
    • Facilitate collaboration with other projects and organize community events.
  • Documentation Improvement Sub-Leader:
    • Manage and update documentation to cover new features and changes.
    • Create tutorials and guides to help new contributors.

@bluss
Copy link
Member

bluss commented Aug 23, 2024

@grothesque Thanks for the great input about mdspan and that mdarray project. This project wants a modernization of its fundamental data structures, and mdarray more or less looks like exactly that. What does that mean - what does ndarray have left to offer? Should we just use mdarray instead?

For example #879 relies on explicitly storing the shape and the strides for each array, which seems a no-go for fast 2x2 arrays.)

Faer explains well in their documentation:

faer is recommended for applications that handle medium to large dense matrices, and its design is not well suited for applications that operate mostly on low dimensional vectors and matrices such as computer graphics or game development. For those purposes, nalgebra and cgmath may provide better tools.

And IMO this applies in exactly the same way to ndarray in its current state; ndarray should have the same sentence in its docs.

@akern40
Copy link
Collaborator

akern40 commented Aug 25, 2024

I've been a little quiet on the discussion for the past month specifically because I've been working to figure out a technical path forward that tries to account for your comments @grothesque, and I think I've reached the point where I'm ready to share more publicly. If you check out https://github.com/akern40/ndarray-design/tree/design-doc you'll see a repo that I've been using to sketch out this design. As per your suggestion, the README provides a comprehensive design document that explains (and advocates for) the design that I suggest.

I'm still working on the example code in that repo, but it really just implements what's described in the document. There are still things that need work: a re-design of the dimensionality trait, moving ndarray functionality to traits, etc. But my goal was to create a design that hews closely to ndarray's existing internals (which are incredibly well thought-out) so that we could implement these core types without too much disruption to users. This design will accomplish the following:

  1. Provide one main way for users to write functions that operate on arrays, by way of a reference type.
  2. Create a "backend" abstraction that will allow for downstream implementations of stack-allocated arrays, GPU arrays, etc by giving a generic hook at both the "owned" and "referenced / viewed" array levels.
  3. Remove the data trait design currently in use in favor of multiple types, tied together by Deref implementations.
  4. Enhance soundness of the library by centralizing some safety guarantees (like ensure_unique) to be implemented in just one place.

This design is far from perfect; to say the least, it has holes that will need to be filled as it is actually incorporated into ndarray. But I'm hopeful that it's a strong starting point. Please, everyone, I'd encourage feedback in the comments here!
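Points 3 and 4 of the list can be illustrated with a toy sketch (hypothetical names, not the actual proposal's code): owning types tie together via `Deref` to one reference type, and the copy-on-write uniqueness check lives in exactly one method.

```rust
use std::ops::Deref;
use std::sync::Arc;

#[derive(Clone)]
struct ArrayRef<T> {
    data: Vec<T>,
}

// A shared owner; other owning types would Deref to the same target.
struct SharedArray<T: Clone> {
    inner: Arc<ArrayRef<T>>,
}

impl<T: Clone> Deref for SharedArray<T> {
    type Target = ArrayRef<T>;
    fn deref(&self) -> &ArrayRef<T> {
        &self.inner
    }
}

impl<T: Clone> SharedArray<T> {
    /// The single place where uniqueness is ensured before mutation
    /// (copy-on-write via Arc::make_mut).
    fn ensure_unique(&mut self) -> &mut ArrayRef<T> {
        Arc::make_mut(&mut self.inner)
    }
}

fn main() {
    let mut a = SharedArray { inner: Arc::new(ArrayRef { data: vec![1, 2, 3] }) };
    let b = SharedArray { inner: Arc::clone(&a.inner) };
    a.ensure_unique().data[0] = 99; // clones the buffer; `b` is untouched
    assert_eq!(a.data[0], 99);
    assert_eq!(b.data[0], 1);
}
```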

@grothesque
Copy link

grothesque commented Aug 28, 2024

Thanks for the design proposal @akern40. I will comment, but my throughput is limited, unfortunately.

@bluss wrote:

This project wants a modernization of its fundamental datastructures, and mdarray more or less looks like it's exactly that.

Yes, exactly, only that it fell from the sky and is much better than I could have imagined. I'm still in the process of understanding its inner workings, but I am very impressed by what I have seen so far. Perhaps the author of mdarray, @fre-hu, would like to join our discussion here?

What does that mean, what does ndarray have left to offer? Should we just use mdarray instead?

In its current form, mdarray supports only static number of dimensions (or rank), so it would not be suitable for interfacing with NumPy à la rust-numpy. (This is also a dealbreaker for my specific use case.) I am trying to understand whether the design could be extended to support a dynamic number of dimensions without losing its advantages. In a second iteration, I will try to understand how (ideas from) both projects could be merged.


My understanding so far is that mdarray addresses the gripe that I formulated above about ndarray's static ndim not being very useful because all the elements of the shape are dynamic. It does this through the introduction of types that abstract data layouts. For example, there is a layout where all the dimensions are dynamic except the innermost one, which is static. This is not as general as the approach that C++'s mdspan takes, but it might be a good compromise for Rust in the absence of variadic generics.
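A rough sketch of that idea (illustrative only, not mdarray's real API): each axis of a shape is either a const-generic extent or a runtime one, so e.g. the innermost axis can be fixed at compile time.

```rust
// An axis extent, known either at compile time or at runtime.
trait Dim {
    fn size(&self) -> usize;
}

// Compile-time extent via const generics.
struct Const<const N: usize>;
impl<const N: usize> Dim for Const<N> {
    fn size(&self) -> usize {
        N
    }
}

// Runtime extent.
struct Dyn(usize);
impl Dim for Dyn {
    fn size(&self) -> usize {
        self.0
    }
}

// A 2-D shape mixing the two: e.g. dynamic outer, static inner axis.
struct Shape2<D0: Dim, D1: Dim>(D0, D1);

impl<D0: Dim, D1: Dim> Shape2<D0, D1> {
    fn len(&self) -> usize {
        self.0.size() * self.1.size()
    }
}

fn main() {
    // An (n, 3) shape: n is runtime, 3 is compile-time.
    let s = Shape2(Dyn(5), Const::<3>);
    assert_eq!(s.len(), 15);
}
```

With the innermost extent static, inner loops get compile-time bounds even though the outer extent is dynamic.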

@bluss
Copy link
Member

bluss commented Aug 31, 2024

@termoshtt that's interesting too. You're right that we need to be ready to accept maintainers continuously. I'm not sure we need to section it up so rigidly. It's a good idea that maintainers are not responsible for everything, and I don't think they are either; it's fine to focus on particular areas of interest or knowledge. (I do so too.)

We should probably delineate exactly how a maintainer forum should work - only maintainers in this case. Should it be on GitHub discussions, issues, or on Zulip?

I'm most interested in getting to work with a few people making PRs and so on rather than making formal structures. I want to be in a place where multiple maintainers feel confident to merge their own work (if it doesn't require more feedback) or merge other's PRs, without asking me.

@fre-hu
Copy link

fre-hu commented Sep 1, 2024

Thanks for the interest and inviting me.

First, I can say that mdarray is a hobby project to explore what is possible. I will not really have time to drive it further myself, so I'm happy if it can be used in some way or if there are ideas that can be reused.

One thing I wonder is that it seems difficult to fulfill all requirements in one library. The design in mdarray works well with arrays that are directly addressable on CPU, and where you want control over the layout and optimize with static information. But I'm unsure if it can be generalized to other use cases.

About dynamic rank: yes, it would be possible, similar to ndarray. It will be a bit more complex to derive array types, and it will not always be possible to get accurate layout types for array views; instead one has to fall back to a strided layout.

Another question is element order, where I use column major only. Maybe it makes more sense to switch to row major. It could also be possible to make it parameterized, but there is a risk it will increase complexity quite a bit.

@grothesque
Copy link

Hello @fre-hu!

First, I can say that mdarray is a hobby project to explore what is possible. I will not really have time to drive it further myself, so I'm happy if it can be used in some way or if there are ideas that can be reused.

Yes, it’s already very useful in this way! I think that you are not the only one with time constraints - contributors to ndarray have pretty much the same problem. It would be great if a community of interested people could be established behind one library to maintain some momentum. I do have some hope that this is happening here.

One thing I wonder is that it seems difficult to fulfill all requirements in one library.

Likewise Fortran’s built-in arrays or C++’s new mdspan/mdarray do not fulfill all requirements, but look at what impact the latter is having: Fortran people discuss that the last huge advantage of Fortran over C++ is going away.

Now Rust unfortunately doesn’t have variadic generics nor generic const expressions and it seems that both are still far away, but perhaps we could still manage to significantly improve on current ndarray as a general-purpose md-array abstraction in Rust.

The design in mdarray works well with arrays that are directly addressable on CPU, and where you want control over the layout and optimize with static information. But I'm unsure if it can be generalized to other use cases.

About dynamic rank, yes it would be possible similar to ndarray. It will a bit more complex to derive array types, and it will not always be possible to get accurate layout types for array views and instead have to fallback to strided layout.

Ndarray’s layouts are fully strided. The rank can be either static or generic. Wouldn’t it be possible to add such a general (but less efficient) layout to mdarray, while maintaining the other more static layouts?

Then we would have a library that could accept any array from Numpy say, but algorithms could be still implemented in Rust for specific layouts. Fallible conversions would be proposed between the different layouts.

Not sure how cumbersome the resulting library would have to be. Hopefully one could profit from the strengths of Rust’s packaging by limiting the content of the basic library to infrastructure, and keep actual algorithms in separate exchangeable crates.

Another question is element order, where I use column major only. Maybe it makes more sense to switch to row major. It could also be possible to make it parameterized, but there is a risk it will increase complexity quite a bit.

Is your choice motivated by BLAS/LAPACK being (marginally) more efficient for column-major data?

Do I understand correctly that mdarray is column major in the sense that the restricted layouts are column major? But the fully strided layout can accept any (fixed rank) strided array, right? Right now in Rust we cannot have a fully generic ndspan like in C++, but it should be possible to have a set of useful layouts for both column-major and row-major within a single library, or do you see a problem with this?

@fre-hu
Copy link

fre-hu commented Sep 3, 2024

Ndarray’s layouts are fully strided. The rank can be either static or generic. Wouldn’t it be possible to add such a general (but less efficient) layout to mdarray, while maintaining the other more static layouts?

Then we would have a library that could accept any array from Numpy say, but algorithms could be still implemented in Rust for specific layouts. Fallible conversions would be proposed between the different layouts.

Not sure how cumbersome the resulting library would have to be. Hopefully one could profit from the strengths of Rust’s packaging by limiting the content of the basic library to infrastructure, and keep actual algorithms in separate exchangeable crates.

I think the simplest way is to add dynamic rank as a new shape type and keep the existing layout types. The shape types for static rank are tuples of dimensions (each static or dynamic), and the new type will instead consist of a Box/Vec. The resulting layout mapping will then use Box/Vec for both shape and strides.

There can be limitations and for some operations like creating array views and permuting dimensions the rank must be static. But yes you can always convert to static rank for calculations.
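As a toy illustration of that plan (names are made up, not mdarray's API): dynamic rank becomes one more shape type backed by a `Vec`, alongside static-rank shapes, with a fallible conversion back to static rank for calculations.

```rust
// A common interface over static- and dynamic-rank shapes.
trait Shape {
    fn rank(&self) -> usize;
    fn dims(&self) -> Vec<usize>;
}

// Static rank: extents in a fixed-size array (a stand-in for a tuple
// of dimension types).
struct Rank2([usize; 2]);
impl Shape for Rank2 {
    fn rank(&self) -> usize {
        2
    }
    fn dims(&self) -> Vec<usize> {
        self.0.to_vec()
    }
}

// Dynamic rank: extents stored on the heap.
struct DynRank(Vec<usize>);
impl Shape for DynRank {
    fn rank(&self) -> usize {
        self.0.len()
    }
    fn dims(&self) -> Vec<usize> {
        self.0.clone()
    }
}

// "Convert to static rank for calculations": fails if the rank is wrong.
fn to_rank2(s: &DynRank) -> Option<Rank2> {
    match s.0.as_slice() {
        &[a, b] => Some(Rank2([a, b])),
        _ => None,
    }
}

fn main() {
    let d = DynRank(vec![4, 5]);
    assert_eq!(d.rank(), 2);
    assert_eq!(to_rank2(&d).map(|r| r.dims()), Some(vec![4, 5]));
    assert!(to_rank2(&DynRank(vec![4, 5, 6])).is_none());
}
```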

Is your choice motivated by BLAS/LAPACK being (marginally) more efficient for column-major data?

Do I understand correctly that mdarray is column major in the sense that the restricted layouts are column major? But the fully strided layout can accept any (fixed rank) strided array, right? Right now in Rust we cannot have a fully generic ndspan like in C++, but it should be possible to have a set of useful layouts for both column-major and row-major within a single library, or do you see a problem with this?

The choice is only to have a convention, and then column major is common for linear algebra. It is used both for memory layout and to give the order of dimensions in iteration.

Using a strided layout with row-major data will work, but operations that depend on iteration order will have a worse access pattern. It works fine for interfacing though, and internally one could make a copy or reverse the indices.

To have full support for both row and column major would require one more generic parameter for the order. I had it in an earlier version, but removed it as it made both the library and the interface more complex. C++'s mdspan gets around this since it is quite thin.

@bluss
Copy link
Member

bluss commented Sep 8, 2024

ndarray-linalg maintainership discussed in issue rust-ndarray/ndarray-linalg#381

@strasdat
Copy link

@akern40

I'd also consider "competitive advantage": what can ndarray do that others can't, and what are we leaving to others? For example, nalgebra seems to have statically-shaped and stack-allocated matrices down pretty pat. Seems like we shouldn't focus on that use case?

100%

The second reason is that it opens the door to potentially-powerful optimizations, like things that you can do if you absolutely know that your array is just a 2D matrix, ...

Possibly folks consider that not ergonomic or at least usual, but you basically can do that already by just building dynamic (ndarray) tensors of static tensors. I prototyped that a bit here:

https://github.com/sophus-vision/sophus-rs/blob/2cb11381710c2cdc5cadf34b5212eef9cd554586/crates/sophus_core/src/tensor/arc_tensor.rs#L326

You can even have the scalar type be a std::simd::Simd type, or an nalgebra matrix of std::simd::Simd's.
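A minimal stand-alone illustration of that pattern, using plain 2x2 arrays in place of nalgebra matrices: the outer container is dynamically sized while the inner tensors are fully static, so the inner loops have compile-time bounds.

```rust
// A statically shaped inner tensor (stand-in for a nalgebra matrix).
type Mat2 = [[f64; 2]; 2];

// Element-wise addition with fully unrollable, compile-time-bounded loops.
fn add(a: &Mat2, b: &Mat2) -> Mat2 {
    let mut out = [[0.0; 2]; 2];
    for i in 0..2 {
        for j in 0..2 {
            out[i][j] = a[i][j] + b[i][j];
        }
    }
    out
}

fn main() {
    // Dynamic outer length, static element shape.
    let blocks: Vec<Mat2> = vec![[[1.0, 0.0], [0.0, 1.0]]; 3];
    let sum = blocks.iter().fold([[0.0; 2]; 2], |acc, m| add(&acc, m));
    assert_eq!(sum[0][0], 3.0);
    assert_eq!(sum[0][1], 0.0);
}
```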

@akern40
Copy link
Collaborator

akern40 commented Sep 13, 2024

Ok, money where my mouth is! There is now1 code in the new-impl branch of my ndarray fork which starts the implementation of the new core design that I mentioned 3 weeks ago. That code:

  1. In the src/core folder, defines the very bare-bones structure of types, Deref implementations, and traits that would constitute a common format for just about any multidimensional array in Rust. I think we'd eventually want this to be its own crate, once it's significantly more mature. This design is very slim and, as a result, should hopefully be able to fit all of the above requests.
  2. In the core.rs file, uses those types to redefine ndarray's core types. Significant work will be needed to endow those types with the behavior that ndarray currently has, but hopefully the outline of the sketch is visible.

Feedback is, as always, greatly appreciated!

Footnotes

  1. highly experimental, literally hot off the presses, take it with a grain of salt, etc

@grothesque
Copy link

grothesque commented Sep 17, 2024

Ok, money where my mouth is! There is now1 code in the new-impl branch of my ndarray fork which starts the implementation of the new core design that I mentioned 3 weeks ago.

Great, thanks! I cloned and also started looking at it. I see that it compiles and the tests run - but as far as I can see the tests do not touch the new code. Is it already possible to run something (even rudimentary), other than instantiating the new structures?

I finally managed to do a first read of your design document. (I wanted to first experiment with and understand the inner workings of @fre-hu's mdarray crate, which I believe I now finally do; do not hesitate to look at the issues I opened there, numbers 1 to 4.) It seems to me that the design you propose is in many ways similar to mdarray, which I think is a good thing, notably the parts relating to array references (called Span there) and array views (called Expr there).

In the design document you write about ndarray and constant dimensions:

That last case can already be handled by the dimensionality generic

Can you point me to the relevant part of the code, because from what I have seen so far ndarray's arrays always have dynamic shapes, i.e. individual elements of the shape are not part of the type.

@akern40:

I'd also consider "competitive advantage": what can ndarray do that others can't, and what are we leaving to others? For example, nalgebra seems to have statically-shaped and stack-allocated matrices down pretty pat. Seems like we shouldn't focus on that use case?

My favorite aspect of mdarray's design is how it allows mixing dynamic and compile-time shapes. See for example this comment. I think that this design makes it possible to combine the strengths of ndarray and nalgebra in a single crate, and I do not see a reason why this approach could not be adopted by a redesign of ndarray. Any thoughts on this?

@edgarriba

Feedback is, as always, greatly appreciated!

@akern40 in the proposed design, it would be great to include the ability to adopt different backends straight away, such as recent popular crates like apache arrow-rs, which the community is adopting quite fast and which has a slim API for custom allocators. That can open the door to holding not only CPU storage but also CUDA, wgpu, etc. For this reason, I recently dropped ndarray from the main core of a crate I maintain for computer vision and deep learning, moving towards my own much simpler Tensor struct based on arrow::Buffer. See: https://github.com/kornia/kornia-rs/blob/main/crates/kornia-core/src/tensor.rs One extra reason for me was to have a lightweight crate without all the bells and whistles of ops, axis iterators, etc., in order to keep it very minimal.

@akern40
Collaborator

akern40 commented Sep 21, 2024

Is it already possible to run something (even rudimentary), other than instantiating the new structures?

Not yet; this is very preliminary and just lays out what the fundamental data structures would look like. My next step is working on an implementation path forward that is as backwards-compatible as possible.

It seems to me that the design you propose is in many ways similar to mdarray, which I think is a good thing, notably the bits relating to array references (called Span there) and array views (called Expr there).

I think that's good as well! I didn't look too closely at mdarray's design, but I believe we both took heavy influence from C++ mdspan, so that makes sense.

In the design document you write about ndarray and constant dimensions:

That last case can already be handled by the dimensionality generic

Can you point me to the relevant part of the code, because from what I have seen so far ndarray's arrays always have dynamic shapes, i.e. individual elements of the shape are not part of the type.

Ah, sorry: that line is meant to indicate that you don't need another, different generic to handle constant dimensions; you could build that into the Dimension or broader Layout generic. It's not to say that ndarray already handles this.

My favorite aspect of mdarray's design is how it allows to mix dynamic and compile-time shapes. See for example this comment. I think that this design allows to combine the strengths of ndarray and nalgebra in a single crate, and I do not see a reason why this approach could not be adopted by a redesign of ndarray. Any thoughts on this?

Absolutely agreed here - nalgebra actually has this capability as well. Coming from my very first introduction to the Rust numeric computing space, it's kind of essential for performance, because you can specialize for state matrices that have a dynamic number of entries, each of a fixed size (e.g., 3/6/9/12 for a 3-dimensional state).

@akern40
Collaborator

akern40 commented Sep 21, 2024

@akern40 in the proposed design, it would be great to include the ability to adopt straight away different backends

Backend flexibility is a major goal of the current design! I'm curious - when you say "adopt straight away", what do you mean by that? As in, you'd like to see an Arrow-based backend included as a first-class supported ndarray type?

I recently dropped ndarray from the main core of a crate I maintain for computer vision and deep learning moving towards ... a lightweight crate without all the bells and whistles of ops, axis iterators, etc in order to keep it very minimal.

Is the idea here to just have a type that you can use for storage? Say there existed an ndarray-core which you could depend on, and which gave you a generic type that you could use for storage. Would the advantage be easy conversion to arrays, for when people need them? In other words, what is ndarray without the bells and whistles? Or am I totally off base here?

@akern40
Collaborator

akern40 commented Oct 30, 2024

For all those who have been following along and participating in this discussion, you may be interested in seeing #1440, which is one step towards doing some core design work on the library. It doesn't even come close to achieving everything we've been talking about, but it will make most functions¹ significantly easier to write - hopefully reducing friction for newcomers and veterans alike - and is 99% backwards compatible². As a sneak peek, a function that takes in two mutable 2D arrays that aren't raw views (i.e., they both have S: DataMut) with the same element type was previously written

fn two_arrays<A, S1, S2>(arr1: &mut ArrayBase<S1, Ix2>, arr2: &mut ArrayBase<S2, Ix2>)
where
    S1: DataMut<Elem = A>,
    S2: DataMut<Elem = A>,
{
    // Fun with multidimensional arrays! If only the signature weren't such a pain...
}

under that PR can be written as

fn two_arrays<A>(arr1: &mut ArrayRef2<A>, arr2: &mut ArrayRef2<A>) {
    // That seems better!
}

These two functions are functionally equivalent. This is just an example; the same idea goes for immutable borrows, but you'd take &ArrayRef2<A>.

In fact, that's not the only new type: functions that only want to read/write to an array's layout (shape / strides) also get a new API, although it's slightly more complex. The prior syntax would have been

fn two_arrays_just_shape<A, S1, S2>(arr1: &mut ArrayBase<S1, Ix2>, arr2: &mut ArrayBase<S2, Ix2>)
where
    S1: Data<Elem = A>,
    S2: Data<Elem = A>,
{
    // Fun with shapes! We'd still like a nicer signature, though...
}

and is now

fn two_arrays_just_shape<T, A>(arr1: &mut T, arr2: &mut T)
where
    T: AsRef<LayoutRef2<A>>,
{
    // I think this is better, even if it's not quite as nice as the `ArrayRef` example
}

There is a rather-unavoidable reason for the roundabout AsRef requirement, which you can read about in the PR. There's even an analogous type (to be used with the AsRef syntax) for when you want to get an array with unsafe data access (corresponding to S: RawData(Mut)).

If this sneak peek is confusing, please know that when that PR gets merged and we write a changelog, I will do my best to write a clear explanation with thorough examples of the new API.

Footnotes

  1. Anything that previously took an array with S: Data(Mut) by reference or was taking ArrayViews by reference as a way of avoiding the fully-generic signature.

  2. Functionally it's fully backwards-compatible, but it's a big change so it's hard to be 100% certain. Plus, Rust's type inference will change, thereby inducing breaking changes we can't avoid.
