
The state of math libraries #25

Closed
Osspial opened this issue Jul 15, 2019 · 55 comments

Comments

@Osspial

Osspial commented Jul 15, 2019

The big math libraries are cgmath and nalgebra. Both of them have deal-breaking flaws for many, namely (imo):

  • cgmath has a weird, idiosyncratic design that simultaneously does too much and too little, and tries to shoehorn in genericism with complex traits that are difficult to internalize.
  • nalgebra stretches the type system so far that the documentation is nigh impossible to understand without extreme patience and a deep understanding of generics.

As a result, several people have gone and made their own math libraries that try and solve those problems, which don't get adoption because the ecosystem hasn't grown up around them. I'd like this issue to house discussion on how exactly we can go about designing a library that addresses those issues, while developing enough consensus that we can potentially replace both of the current libraries.

@Osspial
Author

Osspial commented Jul 15, 2019

At the risk of making a worn-out reference, I can't deny that we're at risk of falling into this trap:

[image]

Regardless, I think there has been a lot of cool research done in the community, but the only way I can see the fruits of that research reaching a wide audience is if it has some amount of official backing, which we (as the gamedev working group) can provide.


Anyhow, time for my opinions: I think we should take the standard library's approach of building a small, fast, and relatively uncontroversial core that the rest of the ecosystem can build around. With that in mind, I'd like to propose a scope for the standard math crate and guidelines for its design:

Scope

  • "Standard Math" should only contain Vector/Point/Matrix types. Iterating over designs and moving those types to 1.0 should be our highest priority.
    • This explicitly does not include Quaternion or Euler or anything relating specifically to rotations. There's a lot of design space to explore there and I think figuring those out is considerably more complex than figuring out the three types listed above.
  • All other work expanding upon the base types should initially be done in other crates, so that experimentation can be done without harming ecosystem interoperability.
    • Other work potentially includes (but is not limited to):
      • High-level matrix constructors, such as cgmath::PerspectiveFov.
      • Rotation and quaternions.
      • Higher-level geometric constructs (e.g. rectangles, circles, spheres, capsules, etc.)

Design

I've got various design issues I've seen in various math crates that I'd like a standard math library to address, and I'll list them here:

  1. .x, .y, .z accessors are a must.
  2. Swizzling is a must.
    • We've made plenty of swizzle-based accessors, but I haven't seen any research done into swizzle-based constructors. Macros could be helpful here.
  3. Genericism over dimensionality is nice to have, but shouldn't sacrifice point 1.
    • We can potentially use an nalgebra-like Deref<Target = Coordinates>, since despite nalgebra's complexity that's actually a really good solution.
    • Const generics are exciting here but relegate us to nightly.
  4. The documentation should be easy to understand. nalgebra has done a lot of really interesting work with embedding math into the type system, but the result is that you need a comprehensive understanding of both to make sense of it.
    • To that extent, we should minimize the usage of traits, since traits tend to fragment documentation and make things harder to understand.
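The Deref trick from point 3 can be sketched in a few lines. This is a toy version with hypothetical names (nalgebra's real implementation is generic over dimension and macro-generated), but it shows how named-field access can sit on top of array storage:

```rust
use std::ops::Deref;

// Hypothetical array-backed vector type.
pub struct Vector3 {
    data: [f32; 3],
}

// A named-field "view" with the exact layout of [f32; 3].
#[repr(C)]
pub struct XYZ {
    pub x: f32,
    pub y: f32,
    pub z: f32,
}

impl Deref for Vector3 {
    type Target = XYZ;
    fn deref(&self) -> &XYZ {
        // Sound because XYZ is #[repr(C)] with the same size and
        // alignment as [f32; 3]; this is the pattern nalgebra uses
        // to provide .x/.y/.z on top of generic storage.
        unsafe { &*(self.data.as_ptr() as *const XYZ) }
    }
}

fn main() {
    let v = Vector3 { data: [1.0, 2.0, 3.0] };
    assert_eq!((v.x, v.y, v.z), (1.0, 2.0, 3.0));
}
```

A DerefMut impl gives mutable access the same way; the cost is that deref coercion can make method resolution less obvious to readers.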

There are also a few design points that I think are worth discussing, but aren't as cut-and-dry as I've been making the above points out to be. Namely,

  • Should we integrate units into a standard math library? The main library I'm aware of that does that is euclid, but I'm not a huge fan of their API, and I think there's a way of adding units without being as intrusive as euclid makes it.
  • Should we split Vector and Point types into a different type, or should they be the same type? There's points to be made on both sides here, and I haven't managed to convince myself that either approach is better.
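For context on the second bullet, the usual argument for separate types is that the operator impls can then encode affine semantics: point minus point is a vector, point plus vector is a point, and point plus point simply doesn't compile. A minimal sketch with hypothetical types (not any existing library's API):

```rust
use std::ops::{Add, Sub};

#[derive(Clone, Copy, Debug, PartialEq)]
pub struct Vec2 { pub x: f32, pub y: f32 }

#[derive(Clone, Copy, Debug, PartialEq)]
pub struct Point2 { pub x: f32, pub y: f32 }

// Point + Vector = Point (translation)...
impl Add<Vec2> for Point2 {
    type Output = Point2;
    fn add(self, v: Vec2) -> Point2 {
        Point2 { x: self.x + v.x, y: self.y + v.y }
    }
}

// ...and Point - Point = Vector (displacement). Point + Point is
// deliberately left unimplemented, which is the argument for the split.
impl Sub for Point2 {
    type Output = Vec2;
    fn sub(self, p: Point2) -> Vec2 {
        Vec2 { x: self.x - p.x, y: self.y - p.y }
    }
}

fn main() {
    let a = Point2 { x: 1.0, y: 2.0 };
    let b = Point2 { x: 4.0, y: 6.0 };
    let d = b - a;        // displacement: Vec2 { x: 3.0, y: 4.0 }
    assert_eq!(a + d, b); // `a + b` would be a compile error
}
```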

@Lokathor
Member

(I typed this up while Osspial was typing their first reply, so i'll comment on that a bit separately in a moment)

How do we feel about glam? Or at least "the glam approach" as we'll perhaps call it.

  • No traits, no generics, just plain structs and functions/methods.

It's very easy to think about.

Now, currently glam is incomplete in some areas:

  • there's only f32 types not f64, i32, and i64
  • there's a fixed opinion on rotations and coordinate systems which might not match all systems people might want to use, but that's also a matter of perhaps renaming some functions to be clear that they're left-handed and adding a right-handed version or something like that (like GLM / nalgebra-GLM does).

but besides the fact that things are missing how do we feel about the approach?

@Lokathor
Member

reply to Osspial's proposal

  • We kinda have a "standard math" lib, it's called mint, and most of the other math libs have optional interop with it. Well, except that mint has no operations, only data types. Still, a starting point perhaps.
  • I disagree on (1) that it has to be field accessors as a must. Particularly, that means that you can't just use a SIMD lane as the rep of a Vec or Mat, which means a performance hit. I'll honestly take the faster version without direct field access, and I bet a lot of others would too.
  • (2) Swizzling is a must? Really? I mean it's neat, but a must? But, sure, it's easy enough to make them all, and it doesn't matter if they're accessors or constructors, you can do both, it's just a lot of copy and paste work basically.

@icefoxen
Contributor

icefoxen commented Jul 15, 2019

Like Lokathor said, we already have the solution thanks to kvark, it's mint. The lack of operations is a feature, not a bug; it's not a math library, it's a math TYPE library. The intended usage is what ggez 0.5 does: just have all library functions take Into<mint::Whatever> instead of nalgebra::Whatever or cgmath::Whatever. The user uses whatever library they feel like and their types silently vanish into the maw of Into impls, and it's like the library is using whatever math library they prefer. Works great, all the math libraries that matter already have feature support for it, and it's not hard to add to new libraries.

Sorry! I feel like the problem's already been solved. It's not perfect, since my measurements suggest there IS some run-time overhead to mint's conversions that doesn't get entirely optimized out, but it's negligible for my purposes.

@Osspial
Author

Osspial commented Jul 15, 2019

@Lokathor

How do we feel about glam? Or at least "the glam approach" as we'll perhaps call it.

* No traits, no generics, just plain structs and functions/methods.

It's very easy to think about.

On a high level, I quite like glam's approach. However, I'm hesitant to say that we should fully adopt what glam is doing, since I disagree with their SIMD-centric approach:

  • Universally using SIMD-backed storage makes it extremely difficult to cleanly implement generics in a coherent and unsurprising way (at least on stable Rust, without specialization). Having separate Vector/Point/Matrix types for each primitive is unacceptable in a library that's designed for everyone.
  • Forcing 16-byte alignment on 3-element types makes it significantly harder to do unsafe operations on the types. You couldn't, for instance, re-interpret a &[f32] as a &[Vector<3, f32>], since you'd skip every fourth float and floats are 4-byte aligned.
    • glam's current approach of "optional SIMD" also means that types don't have a reliable layout, which breaks unsafe code for reasons that I don't feel I need to describe. I realize that inconsistent layout isn't inherent to the idea of SIMD-based types, but it's still an issue I have with the current implementation.
  • SIMD-backed storage makes GPU interop much more awkward. The space wastage is unquestionably detrimental in that context since there's no speed increase to be had, and VRAM can still be constrained enough that it's an actual problem. I'd like to be able to use standard math types when creating Vertex types, and SIMD storage makes that infeasible.
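The layout argument can be made concrete. Vec3A below is a stand-in for a 16-byte-aligned SIMD-style type (not glam's actual definition); the assertions show why a packed &[f32] can't be reinterpreted as a slice of the aligned type:

```rust
use std::mem::{align_of, size_of};

// A plain 3-float vector: same layout as [f32; 3].
#[allow(dead_code)]
#[repr(C)]
struct Vec3 { x: f32, y: f32, z: f32 }

// A SIMD-style vector, padded and aligned to 16 bytes.
#[allow(dead_code)]
#[repr(C, align(16))]
struct Vec3A { x: f32, y: f32, z: f32 }

fn main() {
    // The plain type tiles a float slice exactly...
    assert_eq!(size_of::<Vec3>(), 12);
    assert_eq!(align_of::<Vec3>(), 4);

    // ...but the aligned type occupies 16 bytes, so reinterpreting a
    // packed &[f32] as &[Vec3A] would skip every fourth float, and the
    // slice pointer may not even satisfy the 16-byte alignment.
    assert_eq!(size_of::<Vec3A>(), 16);
    assert_eq!(align_of::<Vec3A>(), 16);
}
```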

I suppose the base problem I have there is that SIMD-backed types improve speed at the cost of causing direct usability issues in every other context, and I don't think that sort of tradeoff is acceptable for a base, standard math library that everyone can build upon.

We kinda have a "standard math" lib, it's called mint, and most of the other math libs have optional interop with it. Well, except that mint has no operations, only data types. Still, a starting point perhaps.

The fact that mint is designed as an interop library, and not a base library for direct usage, makes it infeasible for direct use. Admittedly, it still has gotten ecosystem traction so adopting it and expanding upon it may be a decent idea, but I don't hugely like the high-level design of its API and changing it would induce significant breakage.

I disagree on (1) that it has to be field accessors as a must. Particularly, that means that you can't just use a SIMD lane as the rep of a Vec or Mat, which means a performance hit. I'll honestly take the faster version without direct field access, and I bet a lot of others would too.

Eh, different priorities I guess. I've made my point on SIMD so I'm not going to reiterate that, but direct field access makes the API significantly nicer to use in pretty much every way and I don't want to throw that out without extensively proving that it's worth it.

(2) Swizzling is a must? Really? I mean it's neat, but a must? But, sure, it's easy enough to make them all, and it doesn't matter if they're accessors or constructors, you can do both, it's just a lot of copy and paste work basically.

Swizzling is a feature that I've consistently found useful when implementing code, and it's something you notice when it's missing. I don't want to have to think about using it and explicitly import it - I just want to have it available, and there's really only one way to do it.
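On the implementation side, the copy-and-paste burden of swizzles is usually handled with a macro; a toy sketch with hypothetical types, generating only two of the many combinations:

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
pub struct Vec2 { pub x: f32, pub y: f32 }

#[derive(Clone, Copy, Debug, PartialEq)]
pub struct Vec3 { pub x: f32, pub y: f32, pub z: f32 }

// One macro arm per swizzle; a real library would invoke this for
// every permutation (xy, yx, xz, zx, yz, zy, xx, ...).
macro_rules! swizzle2 {
    ($name:ident, $a:ident, $b:ident) => {
        impl Vec3 {
            pub fn $name(self) -> Vec2 {
                Vec2 { x: self.$a, y: self.$b }
            }
        }
    };
}

swizzle2!(xy, x, y);
swizzle2!(zx, z, x);

fn main() {
    let v = Vec3 { x: 1.0, y: 2.0, z: 3.0 };
    assert_eq!(v.zx(), Vec2 { x: 3.0, y: 1.0 });
}
```

Swizzle-based constructors (e.g. building a Vec3 from a Vec2 plus a scalar) can be generated the same way.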

@Osspial
Author

Osspial commented Jul 15, 2019

@icefoxen

Sorry! I feel like the problem's already been solved. It's not perfect, since my measurements suggest there IS some run-time overhead to mint's conversions that doesn't get entirely optimized out, but it's negligible for my purposes.

I think that the fact mint needs to exist shows how much of a problem there is in the ecosystem. Saying that mint is a solution is like saying "The Rust standard library doesn't have an array type, but everything takes Into<GenericArrayWithoutImplementations<T>>, so it's okay" (yes, I know that Rust has arrays, but that isn't the point). Mint clunkily solves the problem that you can't use some libraries with others, but it doesn't solve the underlying issue, which is that nobody's designed a solution that everyone can actually use. Maybe that's impossible, but I have a hard time believing that.

Besides, since Mint does so little, it means that you can't actually build higher-level constructs around Mint without sacrificing the ability to sanely design an internal and external API. I can't, say, build a higher-level geometry library around Mint (e.g. a library that provides rectangles, circles, etc. and the ability to transform them) without either throwing away all the math functions every other library provides or clumsily doing conversions internally every time I want to perform any trivial transformation on my datatypes.

I don't particularly care about single-nanosecond losses of performance here. I care about designing APIs that people enjoy using, and standardizing around Mint makes it extremely difficult to do that.

@Lokathor
Member

Hot Take: Generics are a necessary evil, not an inherent good. The ideal vec/mat library for sizes 1-4 is probably one that is fully written out over time for all possible combinations and 100% non-generic. There are sufficiently few combinations that it's actually possible to write them all down, so you might as well write them all down (and your re-compilation times will actually improve if you do this).

Now, you have a bit of a point with the SIMD stuff, but I do wonder how often you're doing math operations on all the vertexes in a model, and not just on its transforms (presumably a much smaller amount of data overall). It might not be insane to make a set of model/vertex data and then have the faster SIMD types for CPU-side usage.

@icefoxen
Contributor

icefoxen commented Jul 15, 2019

I think that the fact mint needs to exist shows how much of a problem there is in the ecosystem.

Contrast with C/C++, or C#, or Python, or literally anything else, where there is either ONE library that everyone uses even when it kinda sucks, or there is >1 library and nobody EVER lets ANYTHING interoperate between different libraries. The fact that mint even can exist is heckin' amazing.

... it doesn't solve the issue is that nobody's designed a [single] solution that everyone can actually use. Maybe that's impossible, but I have a hard time believing that.

It's impossible to have a single solution that everyone can use, because different people have different goals. Different people just think in different ways. This is okay. People will naturally agglomerate towards the de-facto standard because that's how humans work, but there will always be the outliers that do things differently that work better for some people or use cases. Currently the de-facto standard is nalgebra, which wouldn't have been my first choice, but mint means that it is POSSIBLE to have a heterogeneous ecosystem that still works together.

Edit: The current state of math libraries, as I see it, is that there's lots of good choices and they all work together. Go us! 🎉

@Lokathor
Member

I mean even my wanting the Fast SIMD version and Osspial wanting the Slow Plain version shows that we need Mint somewhere in the system.

@kvark

kvark commented Jul 15, 2019

@Osspial Trying to make that ideal API math library is an interesting task for sure, attempted by many in the past. Maybe you'll get better luck at it, who knows. I'd be interested to watch the progress and potentially contribute.

What I don't expect, though, is for that effort to eliminate the other solutions. People using nphysics will always use nalgebra, no matter how good the new hotness is. People will always disagree on proper SIMD usage, on Y up versus down, on a million other things (looks like @icefoxen just brought up this point as well while I was writing).

So mint will always be needed.... but this isn't a concern, it works fine.

When I initially raised the concern during the call, it was specifically about cgmath: it's in a sad state, both in terms of API and maintenance. Writing an entirely new math library may be helpful in the longer term, but in the shorter term we need to find a way to maintain it or declare bankruptcy.

@Osspial
Author

Osspial commented Jul 15, 2019

Hot Take: Generics are a necessary evil, not an inherent good. The ideal vec/mat library for sizes 1-4 is probably one that is fully written out over time for all possible combinations and 100% non-generic.

I'm going to have to disagree with you there. Generics are a powerful tool for reducing API surface area, letting you see what APIs work everywhere and which only work in some places at a quick glance without having to go through pages of API docs on different types. Different types for everything fundamentally sacrifices that.

Currently the de-facto standard is nalgebra, which wouldn't have been my first choice, but mint means that it is POSSIBLE to have a heterogeneous ecosystem that still works together.

[image]

🤨

...that may have been more snarky than needed, but I guess in general my goal isn't to replace nalgebra. nalgebra has a place, and it's certainly widely used, but the complexity it introduces is a deal-breaker for many, including myself.

I'll put some extra emphasis on the following point, since this post is currently a cluttered wall of text: Replacing nalgebra is an infeasible task, and something we probably don't want to do. Replacing cgmath, on the other hand, is absolutely feasible, as abandoning it leaves a hole in the ecosystem that currently goes unfilled.

My goal would be to provide a replacement to cgmath that has a sane API. Maybe the solution to that is to overhaul cgmath in-tree, pulling out the cruft and simplifying the API in a way that lets other people expand upon it without being overly opinionated.

Edit: The current state of math libraries, as I see it, is that there's lots of good choices and they all work together. Go us! 🎉

I guess my problem is that there are a lot of different math libraries, but none of them actually do what I need them to do. They either sacrifice usability for functionality, aren't actually usable, or are used by so few people that exposing them in a public API is a burden upon everyone else.

I mean even my wanting the Fast SIMD version and Osspial wanting the Slow Plain version shows that we need Mint somewhere in the system.

You could certainly do it cleaner than via Mint. There's no reason a single library couldn't provide both SIMD and non-SIMD types and provide clean interop between the two, so that you can do Vector2 + Vector2SIMD and have everything Just Work.
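That kind of mixed-type interop might look like the following sketch. The "SIMD" type is faked with a plain four-lane array here so the example stays portable; a real implementation would wrap __m128 or similar:

```rust
use std::ops::Add;

#[derive(Clone, Copy, Debug, PartialEq)]
pub struct Vec2 { pub x: f32, pub y: f32 }

// Stand-in for a SIMD-backed type; a real one would wrap __m128.
#[derive(Clone, Copy, Debug, PartialEq)]
pub struct Vec2Simd { lanes: [f32; 4] }

impl From<Vec2> for Vec2Simd {
    fn from(v: Vec2) -> Self {
        Vec2Simd { lanes: [v.x, v.y, 0.0, 0.0] }
    }
}

// Mixed addition: promote the scalar type, then add in the SIMD domain.
impl Add<Vec2Simd> for Vec2 {
    type Output = Vec2Simd;
    fn add(self, rhs: Vec2Simd) -> Vec2Simd {
        let lhs = Vec2Simd::from(self);
        let mut lanes = [0.0; 4];
        for i in 0..4 {
            lanes[i] = lhs.lanes[i] + rhs.lanes[i];
        }
        Vec2Simd { lanes }
    }
}

fn main() {
    let a = Vec2 { x: 1.0, y: 2.0 };
    let b = Vec2Simd::from(Vec2 { x: 3.0, y: 4.0 });
    let c = a + b;
    assert_eq!(c.lanes[0], 4.0);
    assert_eq!(c.lanes[1], 6.0);
}
```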

When I initially raised the concern during the call, it was specifically about cgmath: it's in a sad state, both in terms of API and maintenance. Writing an entirely new math library may be helpful in the longer term, but in the shorter - we need to find a way to maintain it or declare bankruptcy.

Ideally, we can get it to the point where it doesn't need active maintenance. If it's got a clearly defined scope and lets more complex design problems evolve and get solved out-of-tree, I think it'd be in a pretty decent place. It just doesn't do that right now, and the design problems are there and left unsolved.


Out of curiosity, why do we need Mint's non-matrix types when the standard arrays exist and have wider ecosystem compatibility?

@Lokathor
Member

Think of them as newtype over arrays
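Concretely, something along these lines (a sketch in the spirit of mint's types, not its exact definitions):

```rust
// A mint-style type: a named struct whose layout matches [f32; 3],
// with array conversions so any math library can round-trip through it.
#[repr(C)]
#[derive(Clone, Copy, Debug, PartialEq)]
pub struct Vector3 { pub x: f32, pub y: f32, pub z: f32 }

impl From<[f32; 3]> for Vector3 {
    fn from(a: [f32; 3]) -> Self {
        Vector3 { x: a[0], y: a[1], z: a[2] }
    }
}

impl From<Vector3> for [f32; 3] {
    fn from(v: Vector3) -> Self {
        [v.x, v.y, v.z]
    }
}

fn main() {
    // Data types and conversions only; no math operations.
    let v = Vector3::from([1.0, 2.0, 3.0]);
    let a: [f32; 3] = v.into();
    assert_eq!(a, [1.0, 2.0, 3.0]);
}
```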

@AlexEne
Member

AlexEne commented Jul 15, 2019

Anyhow, time for my opinions: I think we should take the standard library's approach of building a small, fast, and relatively uncontroversial core that the rest of the ecosystem can build around.

I guess my problem is that there are a lot of different math libraries, but none of them actually do what I need them to do. They either sacrifice usability for functionality, aren't actually usable, or are used by so few people that exposing them in a public API is a burden upon everyone else.

I don't have the full context of what you actually need them to do, and I think it would be helpful for readers of this thread if you described these things in a bit more detail, along with how they will be achieved by the new library (and cannot be achieved by current libraries like nalgebra/glam/etc.).

For example, the initial description mentions fast, but then the SIMD approach that glam takes gets critiqued. While the critique might be fair, how does that affect speed for this new library if we don't use SIMD? Do you propose another approach to SIMD? Do we want to not use it at all? What's the proposal? I already highlighted the lack of a --ffast-math equivalent in Rust, as far as I can tell.

What are the speed targets? glam-level speed? slower/faster. By how much?

@Lokathor
Member

Note that not all platforms support the same SIMD, or even SIMD at all. Particularly, WASM doesn't yet support SIMD, though there's progress in this area with a proposal written out, and there's experimental implementations, so we might have something in probably a year or two.

So, not all libs want to try to express themselves as SIMD operations, and then they'll just have very different runtime profiles based on what LLVM can divine about what's going on or not.

Other libs want to try to express themselves as SIMD and have fallbacks when it's not available for that platform (get it together ASM intrinsics team!!).

Two points I want to highlight:

  • Any answer that's "for everyone" needs to work on Stable rust. Anything that's Nightly only is a 2nd rate citizen in the Rust world. Nightly optional for some bonus is okay, but Nightly only is no good.
  • The glam person made a benchmark suite, mathbench, and I haven't looked into it to see how realistic the benchmarks are and such, but that could be an area to investigate

While I'm here I guess I'll give a ping to @bitshifter and see if they even want us swooping in on their lib to drown them with issues and PRs and such. If they want to just do their work in peace that's okay too.

@Lokathor
Member

Bonus link! LLVM Floating Point Docs

@Osspial
Author

Osspial commented Jul 16, 2019

@AlexEne Thank you for that feedback. I'll post some concrete responses later, but first I'm going to do some more research into the problem space here and come up with more specific issues and solutions. That'll help me come up with healthier conversation points, since aggressively spouting opinions without diving into the full context around those opinions hasn't entirely worked out so far :P.

@bitshifter

While I'm here I guess I'll give a ping to @bitshifter and see if they even want us swooping in on their lib to drown them with issues and PRs and such. If they want to just do their work in peace that's okay too.

@Lokathor do you mean mathbench, glam or both? In any case I'm happy to receive issues for either, probably worth discussing anything in an issue before making a PR.

glam's current approach of "optional SIMD" also means that types don't have a reliable layout, which breaks unsafe code for reasons that I don't feel I need to describe.

That's not exactly true. If SSE2 is not available then types that would have been 16 byte aligned are #[repr(align(16))] so that size and layout remains consistent. If you use the scalar-math feature flag then no SIMD is used and no alignment is forced. It would be best to either always use scalar-math or never use scalar-math feature with glam. Primarily it's there to test the scalar code path but also if people don't want SIMD/16 byte alignment/unsafe.

@kvark

kvark commented Jul 16, 2019

Out of curiosity, why do we need Mint's non-matrix types when the standard arrays exist and have wider ecosystem compatibility?

For things like vectors it's obvious. For matrices, there's a choice between nested fixed-size arrays and flattening everything. For quaternions, there's disagreement about the order of W with regard to the other components.

All in all, it's hard to draw the line where fixed-size arrays should be used, so we went ahead and had dedicated types for everything.
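The quaternion ambiguity in particular is easy to demonstrate: the same four floats decode to different rotations under the two conventions. A sketch with hypothetical helper functions:

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
struct Quat { x: f32, y: f32, z: f32, w: f32 }

// Interpret a bare array under the "W last" convention...
fn from_xyzw(a: [f32; 4]) -> Quat {
    Quat { x: a[0], y: a[1], z: a[2], w: a[3] }
}

// ...or under the "W first" convention.
fn from_wxyz(a: [f32; 4]) -> Quat {
    Quat { w: a[0], x: a[1], y: a[2], z: a[3] }
}

fn main() {
    let raw = [1.0_f32, 0.0, 0.0, 0.0];
    // Read as XYZW this is a 180-degree rotation about X (w = 0);
    // read as WXYZ it is the identity rotation (w = 1). A bare
    // [f32; 4] cannot say which was intended; a dedicated type can.
    assert_ne!(from_xyzw(raw), from_wxyz(raw));
}
```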

@kvark

kvark commented Jul 24, 2019

Condensed consensus from #24 call:

  1. cgmath is widely used still, we should probably not try to rewrite it
  2. SIMD is nice, but not ultimately a requirement. Math library with SIMD has to make trade-offs. Hand-rolled SIMD with layout we control is strictly better for demanding applications anyway.
  3. "nalgebra-glm" is nice to work with, but it suffers from being built on top of "nalgebra". Looks like a good target API for a library.
  4. Traits and rich types (like Vector versus Point) are in the way. Simple free-standing functions work better for documentation, readability, and implementation.
  5. GPU representation doesn't necessarily need to match the math one (e.g. for Vec3). Yet, it would be useful to have fixed-function math available at some point, which maps to GPU.

@Lokathor
Member

I contest point 2 a little bit. A library that includes SIMD support will generally already do it by hand.

@icefoxen
Contributor

icefoxen commented Jul 24, 2019 via email

@Lokathor
Member

The compiler is only good at auto-vectorization with plain "array-like" code; it's easy to outperform when matrix ops are involved, because you have to do things like shuffle lanes around, which isn't obvious to its auto-vectorization system.

@msiglreith

To extend a bit on the points I mentioned in the last meeting:

nalgebra-glm really makes it easy to write the necessary operations used for example when writing a simple rendering engine (perspective, matrix mul, rotation, a few vec ops, ..). The freestanding function approach also wins over methods in terms of readability for more complex operations IMO (cgmath sample: https://github.com/msiglreith/panopaea/blob/master/panopaea/src/ocean/empirical.rs#L118-L147)

As can be seen in mathbench nalgebra seems to perform quite poorly (compilation settings?). glam on the other hand outperforms the other libraries, but I would expect higher and more consistent performance improvements when trying to introduce SIMD to a user application. Intrinsics and data layout should give a higher performance boost. Libraries like pathfinder seem to also have their own SIMD abstractions for this. Once more complex SIMD operations are required, 'escaping' from a math library which internally handles all the data layout seems tricky to me.

Therefore I feel that there are ~3 different use cases for math libraries in the gamedev ecosystem:

  • Easy and Simple with basic ops: e.g nalgebra-glm
  • In depth math library with more theoretical backing (ie. more research-y.): e.g nalgebra
  • High throughput operations/SIMD/hardware specialized: e.g simdeez? other libraries?

@bitshifter

I'm not sure why nalgebra performs poorly on some operations, I haven't investigated it at all. Compilation settings wise, it's just whatever cargo bench defaults to (just release I think). I did try with full LTO once but it didn't make a huge difference.

Escaping the math library is easy. glam types that use SSE2 can convert (cast) to and from __m128.

I completely agree that you will get better performance by laying out your data in a SIMD-friendly manner and using SIMD intrinsics directly (or via a wrapper like packed_simd), however as glam demonstrates you can get a good performance improvement over scalar code with a SIMD-backed math library.

There's also the middle ground of loading a f32 vector into SIMD registers for operations, so size and layout is standard but potentially performance is better. I haven't tried this myself, @Lokathor has with hektor, I don't know how performance compares.

One thing I've tried to do with glam, unrelated to SIMD, is follow the Rust API guidelines https://rust-lang-nursery.github.io/api-guidelines/. In particular they recommend methods over functions, see https://rust-lang-nursery.github.io/api-guidelines/predictability.html#functions-with-a-clear-receiver-are-methods-c-method for the rationale.

@Lokathor
Member

the default profile sections do not include LTO for benchmarks, but yeah it doesn't always make a difference (that's why it's off by default even for release and benchmark mode).

@Lokathor
Member

oh, yeah, the other points:

on my machine hektor fell just behind nalgebra with all the storing and loading, but i only really checked mat4*mat4. for others it went faster. so the results were confused and i just decided to use nalgebra-glm instead of bothering to check much more at the time.

the rust style guidelines are just a few people's opinions and do not particularly lead to anything other than "it followed the style guide". feel free to ignore them when it makes an api better

@sebcrozet

sebcrozet commented Jul 25, 2019

I've not investigated yet why nalgebra and cgmath perform worse than glam in mathbench, but both could likely be improved to match glam. This might mean adding the right SIMD routines when auto-vectorization does not do the trick, which is possible even in generic code by doing some kind of "pseudo-specialization" (like that; those are if statements that are extremely straightforward for the compiler to remove in release mode). The only case where both nalgebra and cgmath can't expect to beat glam is for 3D vectors and matrices, because not using 4 components means an extra load must be performed by the processor to get the components into an XMM register. But the waste of space in glam has its own drawbacks, already discussed in other comments on this issue.
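The "pseudo-specialization" trick can be sketched as follows: the TypeId comparison is a constant for every concrete T, so the untaken branch is eliminated at monomorphization time. This toy example only dispatches between identical scalar paths, but a real library would put a hand-written SIMD kernel in the f32 arm:

```rust
use std::any::TypeId;

// Generic reduction with a "pseudo-specialized" branch.
fn sum<T: Copy + Into<f64> + 'static>(v: &[T]) -> f64 {
    if TypeId::of::<T>() == TypeId::of::<f32>() {
        // Pretend this is a vectorized f32 kernel. The condition above
        // is a compile-time constant for each monomorphization, so for
        // any other T this whole arm is dead code and gets removed.
        return v.iter().map(|&x| x.into()).sum();
    }
    // Generic scalar fallback.
    v.iter().map(|&x| x.into()).sum()
}

fn main() {
    assert_eq!(sum(&[1.0_f32, 2.0, 3.0]), 6.0);
    assert_eq!(sum(&[1.0_f64, 2.0, 3.0]), 6.0);
}
```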

In any case, while I agree performance is extremely important, I think it should not be so significant regarding the design of an ideal math lib for gamedev. If we were to design such a matrix/vector lib, we should first focus on the features and the API. Performance is only a matter of putting enough time into it so things get auto-vectorized better, or by adding manual vectorization in hotspots. It should only rarely affect the actual API.

Now, an ideal API is extremely difficult to come up with, since different people require different features. Perhaps we could start by listing the solutions in other languages to see where others have converged in terms of gamedev math. We should also take a look at matrix/vector modules from popular frameworks like Unity. An ideal lib for gamedev should probably cover most of the low-level linalg features from those frameworks. Assembling a state of the art of popular gamedev matrix/vector frameworks would be very valuable for getting ideas and directions.

Regarding the API of nalgebra, it is quite complicated because of generics. It is designed so it will get much better as the Rust language evolves with const generics and with specialization, but we will probably not have both features for at least a couple of years. Though one way to very significantly improve the quality of nalgebra's doc in the short term is by fixing that four-year-old cargo bug: rust-lang/rust#32077

@Lokathor
Member

Lokathor commented Jul 25, 2019

So, as the nalgebra-glm docs state: the GLM C lib uses a lot of overloaded function calls, which rust doesn't have. nalgebra-glm uses little name changes to evade that problem and stay with a free function oriented code base, which is often preferred for some kinds of math. At the same time some folks want methods, and I agree that they read better for some things.

I think the easiest way to do this is just do both. It's not hard it just takes up some time to set up.

example with abs since it's near the top of the alphabetical list of things in the nalgebra-glm docs:

impl Vec3 {
  pub fn abs(self) -> Self {
    Self::new(self.x().abs(), self.y().abs(), self.z().abs())
  }
}

pub trait CanAbs {
    fn abs(self) -> Self;
}

// internal use only, so ignore the funny capitalization
macro_rules! impl_CanAbs {
  ($t:ty) => {
    impl CanAbs for $t {
      fn abs(self) -> Self {
        Self::abs(self)
      }
    }
  }
}

impl_CanAbs!(Vec3);
impl_CanAbs!(f32);

pub fn abs<T: CanAbs>(t: T) -> T {
  t.abs()
}

// both of these "just work"
abs(-3.9_f32);
abs(my_vec3);
  • You might wonder why we're making an abs method and also a trait impl that calls the same method. This way people get all the functionality without having to muck about with trait imports.
  • You might wonder about the macro instead of using a default method impl. Unfortunately, in a default trait method, Self::abs(self) resolves back to the trait's own method, so the default body just recurses forever. A written-out impl makes Self the concrete type, so it works out to use the same expression every time, and the macro just speeds it up.
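To make the recursion pitfall concrete, here's a minimal standalone sketch (names are hypothetical, simplified from the macro version above):

```rust
// Illustration of the pitfall: a default body like
//     fn abs(self) -> Self { Self::abs(self) }
// would resolve `Self::abs` back to this trait method and recurse
// forever, so each impl names the concrete function explicitly.
pub trait CanAbs: Sized {
    fn abs(self) -> Self;
}

impl CanAbs for f32 {
    fn abs(self) -> Self {
        f32::abs(self) // inherent method on f32, not the trait method
    }
}

fn main() {
    assert_eq!(CanAbs::abs(-3.9_f32), 3.9_f32);
}
```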

Not all functions are so easy to do like this, but many are. I'll give it a try with more examples using the Hektor repo tonight or tomorrow.

@hecrj
Copy link

hecrj commented Jul 26, 2019

@Lokathor I personally like it when there is only one way to do things. I don't want to spend time thinking about whether I should use abs(my_vec3) or my_vec3.abs() when coding. Maybe one day I decide to go for the first one, the next day for the second one, and I end up with inconsistent code.

The problem is choice.

@Lokathor
Copy link
Member

I'll make it a feature flag :P

@hadronized
Copy link

hadronized commented Jul 26, 2019

I don’t know whether the splines crate has its place here, but it exists and can be very useful for people doing animation / video games.

@Ralith
Copy link

Ralith commented Jul 26, 2019 via email

@Lokathor
Copy link
Member

The API design constraints so far:

  1. All the float ops for f32 and f64 are methods. This is just a fact of how core is designed and we can't change it.
  2. Many people will naturally assume that if you can do my_float.abs() then you can also do my_vec.abs() as well.
  3. Many other people want to be able to write abs(val) so that their code looks properly like math instead of seeing val.abs() all over the place.
  4. One person said they don't want there to be two ways to do the same thing.

The only way that I see to cater to all of this is:

  • Provide capability as methods first
  • Provide free functions with trait bounded overload that simply call the method.
  • Make the free functions only exist when a feature flag is enabled so that if you choose you can prevent yourself from inter-mixing both styles.
  • I would like such a feature to default to on, because I programmed Haskell for a while and I'm very comfortable with having piles of top level functions that magically do the right thing when you apply them to different kinds of arguments.
  • Since features flags are purely additive, if some other crate turns off the flag for how they use the crate, then your crate isn't affected.
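The bullets above can be sketched roughly like this (the `free-fns` feature name and the `Vec3` type are hypothetical placeholders, not any existing crate's API):

```rust
// Methods are always available on the type.
#[derive(Clone, Copy, PartialEq, Debug)]
pub struct Vec3 { pub x: f32, pub y: f32, pub z: f32 }

impl Vec3 {
    pub fn abs(self) -> Self {
        Vec3 { x: self.x.abs(), y: self.y.abs(), z: self.z.abs() }
    }
}

// The free function only exists when the (default-on) feature flag is
// enabled, so disabling it removes the second way of writing the call.
#[cfg(feature = "free-fns")]
pub fn abs(v: Vec3) -> Vec3 {
    v.abs()
}

fn main() {
    let v = Vec3 { x: -1.0, y: 2.0, z: -3.0 }.abs();
    assert_eq!(v, Vec3 { x: 1.0, y: 2.0, z: 3.0 });
}
```

Since cargo features are additive, a dependency turning the flag off for itself wouldn't affect downstream users who leave it on.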

@kvark
Copy link

kvark commented Jul 26, 2019

Might be worth noting that Rust has a limited form of UFCS: https://doc.rust-lang.org/1.5.0/book/ufcs.html
This means one can have this:

trait Abs {
  fn abs(&self) -> Self;
}
fn foo(x: &impl Abs) {
  x.abs(); // method syntax
  Abs::abs(x); // function syntax
}

The sad part is that the Abs:: prefix needs to be present; we can't import a trait method into scope otherwise. But technically it's still function syntax.

@fintelia
Copy link

Personally, having only one way of doing each operation (counting optional ones enabled by feature flags) is more important than exactly following how the standard library scalar types work. It is also worth thinking through how comprehensible the generated rustdoc documentation will be for the library

@Lokathor
Copy link
Member

Alright, I dusted off the hektor repo and rebooted it and gave it a very very simple setup to try out and see how abs might feel to look at in the docs.

https://docs.rs/hektor/0.0.3/hektor/index.html

  • Click a free function: it shows the docs, and the controlling trait is a hyperlink, of course.
  • Click that trait and it shows a list of every single type that implements it.

I think it reads clearly enough; of course, with 350+ functions perhaps it gets harder to follow later on.

@fintelia
Copy link

Searching for 'abs' shows a whole bunch of different ways to compute the absolute value which is as expected but doesn't really give any hint as to the "recommended" way to do the operation.

It is also a bit unfortunate that the [src] links are kind of unhelpful. The free function has a pass-through body that just calls the trait and otherwise tells you nothing, the trait is implemented inside a macro (which makes it slightly harder to understand) and ultimately just calls Self::abs(), and only once you check the method on Vec2 do you actually get a function body that does something.

@Lokathor
Copy link
Member

Some of that could certainly be better explained in the top-level documentation. However, that's basically how it works out with core and libm as well.

It's, to an extent, something we just kinda have to live with, I think.

@distransient
Copy link

Searching for 'abs' shows a whole bunch of different ways to compute the absolute value which is as expected but doesn't really give any hint as to the "recommended" way to do the operation.

Math library documentation should not explain how to use Rust. If a user sees in a library's docs a trait for a functionality, a method on a type implementing that functionality, and a free function for using that trait's functionality freely, they should be able to understand this themselves and choose how to access the implementation for their type. There is no one "recommended" way to choose between these three routes of executing a single exposed functionality, some of them work better in some cases than others. Documentation can show how to write out each of these forms, but it can't possibly explain to the user what to do with their application code with any efficacy without completely drowning out actually pertinent information.

@mooman219
Copy link

I think it's important to note that supporting all number types matters. It's not enough to only provide f32. glam and a lot of other vector libraries fall short here.

@aclysma
Copy link
Contributor

aclysma commented Aug 3, 2019

I read through the thread. There is a lot of great stuff here already, so I made notes as I went:

  • I like a lot of what nalgebra does, but I think the way it uses the type system introduces a lot of complexity, and the benefits from it in my opinion don't outweigh the cost in this space.
  • Generic across N-dimensions in my opinion similarly is not worth the cost
  • Building upon and “blessing” a simple, non-controversial math library sounds interesting. GLM is potentially a good reference point on this. I’ve been using nalgebra_glm and like a lot of it.
  • I think switching back and forth between SIMD/non-SIMD has performance implications. Sometimes it's not worth moving something into SIMD. So the ideal library in my mind would provide both SIMD-aware and non-SIMD types, and moving between the representations should be easy but explicit.
  • I’m not convinced that f64 or i32/i64 support is necessary. I haven’t seen these in a commercial game engine personally, and if f32 is starting to cause problems for someone, they’re likely doing something “unusual” and they’ll have to do more work besides just switching to f64 everywhere.
    • I’m not against supporting f64, but I’d like a math library to be as simple as possible. Making it generic to include support for f64 may increase complexity in an area where minimal complexity is heavily desired.
  • 16-byte alignment for vectors is industry standard. Happy for there to be easy conversions from a compact form to 16-byte aligned, but I think it’s out of scope for a math library.
  • I don’t see swizzling as all that valuable really. I think it can be added on later in a separate crate, and this use-case shouldn't drive the design of the math library.
  • I respect and appreciate what mint offers, but I don't want to use something like this unless it's invisible in downstream code and zero-cost. (From my understanding it's neither)
  • +1 on @Lokathor’s comment that the ideal math library has fully-explicit implementations with no generics, even if it means code duplication.
  • +1 for free standing functions over member functions
  • I’m hesitant toward the “just do both” for methods vs free-standing functions.
  • I’d ideally want something like math::normalize(n), but Rust's lack of function overloading makes that hard.

Are any of the ideas here being prototyped yet?

@Lokathor
Copy link
Member

Lokathor commented Aug 3, 2019

hektor has 0.2.1 out today. Mostly just impls for getting data into and out of the types, as well as trait impls for operations.

Actual "graphics math" ops aren't implemented yet, though you can see in the issues I've got all sorts of roadmap notes.

  • Totally non-generic
  • SIMD-oriented
  • Will have free functions added later on but is currently just methods.
  • Has all sorts of swizzles, like 3000 lines of swizzle code. I have to regenerate it at some point since I didn't get all the coverage I wanted the first time around, but most of it is there and working right now.
  • I put it into part of mathbench and it performed basically the same as glam on euler 2d and euler 3d. More benchmarks to come.

@bitshifter
Copy link

To add to @aclysma's comment on f32 versus other types: my personal experience over the last 15 years in gamedev has been that f32 is used almost exclusively. The only exception was one time porting to some early mobile hardware that didn't have an FPU.

I did poll some of my workmates as well. Some used int for screen coords in simple 2D stuff at home. Another said that on one title they used int for some things because it uses a different part of the CPU, which meant they could get better throughput using both float and int; that game was on PS3 and Xbox 360, and I'm not sure what that particular optimisation was for. The engine I've worked on the most at my current job was designed for building WoW-scale MMORPGs, and it uses f32, not f64.

In any case, the use cases for non-f32 are kind of niche from what I've seen. The main thing is that the operations you perform on int will be a subset of those for float, so being generic on type needs to deal with those differences somehow. Making things generic in general increases the complexity of the implementation and interface, which is unnecessary for the most common use case.

@Ralith
Copy link

Ralith commented Aug 6, 2019

Conversely, in my hobby project I'm making very heavy use of linear algebra on f64 due to my worlds being on the order of 1:1 scale earthlike planets. My project would likely not be possible if not for nphysics being generic on float type. Of course, I'm also very happy with nalgebra and have no plans to change in the forseeable future.

@mooman219
Copy link

The original complaint sounds like the existing solutions have quirks, are hard to understand, or are missing functionality. This doesn't sound like it would be fixed at all by introducing additional libraries that are also missing functionality, and have quirks, like not supporting more than just f32 and having opaque sizes. This sounds like it'll just introduce the situation where at best, you're using cgmath/nalgebra in addition to some specialized library like glam/hektor. What will probably happen is most people will just stick to using cgmath/nalgebra because they just already work for the float case.

I'm working on fast TrueType font rendering and parsing, which is entirely in fixed-point and integer types (and integer matrix math). Even the actual rasterization is significantly faster with integer approximations (non-SIMD integer math outperforming SIMD float math in this case). cgmath and nalgebra are the only viable libraries for this.
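For flavor, here's roughly what that kind of integer math looks like: a minimal 26.6 fixed-point sketch (the format TrueType uses for glyph coordinates), not code from any of the crates mentioned here:

```rust
// 26.6 fixed point: 26 integer bits, 6 fractional bits, stored in i32.
type F26Dot6 = i32;

fn from_f32(v: f32) -> F26Dot6 {
    (v * 64.0).round() as i32
}

fn mul(a: F26Dot6, b: F26Dot6) -> F26Dot6 {
    // Widen to i64 so the intermediate product can't overflow,
    // then shift the extra 6 fractional bits back out.
    ((a as i64 * b as i64) >> 6) as i32
}

fn main() {
    // 1.5 * 2.0 == 3.0, exactly representable in 26.6
    assert_eq!(mul(from_f32(1.5), from_f32(2.0)), from_f32(3.0));
}
```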

I'm specifically using cgmath right now for my crates, and will for future crates, so I'm probably on the wrong side of history here, but it's nice to get the free interoperability.

@bitshifter
Copy link

I'd argue that professional gamedev requires a specialist library, so it depends on what problem people are trying to solve here.

@mooman219
Copy link

mooman219 commented Aug 6, 2019

  • Weird, idiosyncratic design that simultaneously does too much and too little, and tries to shoehorn in genericism with complex traits that are difficult to internalize.
  • Documentation [that] is nigh impossible to understand without extreme patience and a deep understanding of generics.

@bitshifter those were the problems outlined. I'm concerned that the takeaway from that was only supporting f32 is fine. I'm also not saying that only supporting f32 is wrong, there's certainly a place for those libraries. I think for an eco-system to grow around something new, it should at least match what already exists to some degree.

@Lokathor
Copy link
Member

Lokathor commented Aug 6, 2019

  • The two bullet points in the OP are about the library mentioned in that bullet point being too complex to understand.
  • hektor/glam "solve" (but huge scare-quotes here) the complexity issue by slimming functionality down until the majority of cases for the majority of people are still covered.
  • Some people fall into the last fraction of cases where "just f32 of dimensions 4 or less" doesn't cut it.
  • It's entirely reasonable for something to be a good default and then some of the time you use some variation on that theme. People use HashMap as their default map, but sometimes you'd use BTreeMap in some cases, that's okay.

@bitshifter
Copy link

To be clear I'm not against supporting other types. I'm of the opinion that supporting other types via generics is going to introduce complexity which for the majority of users who only need f32 is unnecessary cognitive overhead.

I'm interested in what people consider specialist, glam uses simd storage and has alignment requirements so that I can understand, but AFAIK hektor uses scalar types for storage and only uses simd internally for some operations, is that really specialist?
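To illustrate the trade-off being weighed here (the types below are hypothetical, not from any of the crates discussed): a concrete f32 vector needs no bounds anywhere, while a scalar-generic one drags trait bounds into every signature:

```rust
use core::ops::{Add, Mul};

// Concrete design: no bounds, trivially readable docs.
struct Vec2 { x: f32, y: f32 }

impl Vec2 {
    fn dot(self, o: Vec2) -> f32 { self.x * o.x + self.y * o.y }
}

// Scalar-generic design: every impl repeats the bounds, and a real
// library needs far more of them (Neg, PartialOrd, a sqrt trait, ...).
struct GVec2<T> { x: T, y: T }

impl<T: Copy + Add<Output = T> + Mul<Output = T>> GVec2<T> {
    fn dot(self, o: GVec2<T>) -> T { self.x * o.x + self.y * o.y }
}

fn main() {
    assert_eq!(Vec2 { x: 1.0, y: 2.0 }.dot(Vec2 { x: 3.0, y: 4.0 }), 11.0);
    assert_eq!(GVec2 { x: 1, y: 2 }.dot(GVec2 { x: 3, y: 4 }), 11);
}
```

The generic version also supports i32, which is exactly the flexibility-versus-readability tension in this thread.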

@Lokathor
Copy link
Member

Lokathor commented Aug 6, 2019

Uh, so I started hektor a long time ago, got some bad benchmarks, and did other stuff for a while until I saw you post about glam. Then I was driven to pick it up again, and here we are.

hektor and glam are ultimately like 98% identical libs in their approach to things. It is in some sense goofy that we don't just team up directly but I work on hektor as a way to learn the math involved so I'm gonna keep chipping away at my roadmap for as long as I need to get it all done.

the biggest difference between hektor and glam is that hektor is no_std

@bitshifter
Copy link

Ah ok, I guess it's changed since I last looked at it :)

@aclysma
Copy link
Contributor

aclysma commented Aug 7, 2019

Still been thinking about this! :)

  • glam looks like a really good solution for targeting f32 vectors of 4 or fewer dimensions. (hektor looks promising too, but it doesn't look as far along yet.)
    • It looks like glam is using code from here (https://gitlab.com/kornelski/ffast-math/blob/master/src/lib.rs) marked as "No license. All rights reserved" in the project it's within. @bitshifter are you certain this can be shared as MIT/apache2?
    • it might be worth exposing the non-simd types in glam as a separate name, even if not disabling SSE via "scalar-math."
    • Glam (or something else) being f32 only doesn't preclude someone from making a "glam64" crate. (and it's possible getting the "best" implementation for f64 might not be just a matter of doubling widths and doing the same basic ops.. i.e. there could be fast paths f32 can take that f64 cannot)
  • nalgebra still exists for the general case
  • nalgebra and something like hektor/glam are different tools, along the lines of what @Lokathor said about HashMap/BTreeMap: [f32; 4]/[T; N] and __m128 have completely different tradeoffs, and some problems call for one and some the other. The OP mentioned "developing enough consensus that we can potentially replace both of the current libraries", and after reading everyone's comments and thinking about it, I don't think we should pursue that.
  • But it may be possible to have similar APIs (i.e. glam/hektor could look like nalgebra_glm)
  • If someone has a case where glam isn't general enough, and nalgebra isn't fast enough, likely their situation calls for a tailor-fit solution.

BTW, in addition to glm as a reference, there is https://github.com/microsoft/DirectXMath as well (MIT licensed). The SIMD implementations there go beyond SSE2, which is a nice perk.

@bitshifter
Copy link

@aclysma that code is unused, it's something I wanted to play with but never got around to it. Sort of forgot about it but given the licence I should just remove it.

Glam could support f64, it's just far down my priority list.

@Lokathor
Copy link
Member

Lokathor commented Aug 7, 2019

So we talked about this issue a little at the meeting today, and @AlexEne wanted us to try and form some conclusions here so that we don't get long-running issues that ramble too far away from what the issue was first about.

There has been quite a bit of discussion about this here and also in the Rust Community Discord, and perhaps other places too. I'll attempt to summarize the major points.

Big Conclusion

The issue started by asking if there's some way we can make a single, easy to use math library for everyone to rally around as a standard.

  • No, this cannot be done. At least, not in the near term (eg: by the end of 2019).

Reasoning

Developing a complete math library is quite a bit of work. While it might be possible to some day unify the math world, the limiting factor is time.

  • Some people want to design around "actual math" and generality of operations, with full, complete support for any numeric type and any dimension, covering every possible math situation that may arise. This is the path of nalgebra and nalgebra-glm.
  • Some people want to focus on "common case" performance and design the library internals around what usual hardware will actually be able to perform as quickly as possible. This leads to intense focus on f32 and dimensions of 4 or less, and use of hand selected SIMD intrinsics in each particular method, checking micro-benchmarks, etc. This is the path of glam and hektor.
  • Rust currently doesn't have any form of specialization, so if you want to have the individual intrinsics treatment then you must give up the generics. This sort of system will doubtless be added eventually in some form, but definitely not in 2019.

Smaller Conclusions

  • If you're happy with the math library you're currently using, keep using it. No one is having their existing libraries taken away. No one has to switch to a new library.
  • If you'd like to contribute to the development of any math library please do so! I'm sure that all of the math libraries have parts that can use some extra help regardless of your current math skill level.
  • The gamedev-wg will revive the effort on the "ecosystem guide" in the next few weeks and we will attempt to make clear that some math libs are focusing on hardware optimizations while others are staying as general as possible. People can decide for themselves how they want to go.
  • Some of us at the meeting were a little unhappy with a purely linear discussion presentation. Here's a reminder that, among the many ways to talk about this stuff, there's a rust gamedev reddit, which allows for "tree" style commenting and discussion. This fits some topics better than others. I'm sure most folks have heard of reddit, but maybe people didn't know that there's a dedicated rust gamedev reddit, so give that a look if you didn't know.

Bonus Reminder

The meetings are just a way to improve communication, they do not form binding decisions, decisions are reached in the github issues so that everyone has a chance to participate regardless of time zone.

Please feel free to continue to post in this issue, but please also attempt to keep it to new points, not just re-hashing the same points we've gone over.

If you'd like to have side discussions that aren't quite about how to unify (or not) the math ecosystem please open distinct issues for that.

@AlexEne
Copy link
Member

AlexEne commented Aug 18, 2019

I am closing this one with the idea that some sort of conclusion can be incorporated in the ecosystem page.
