Implement all System.Math functions for single precision floating point values in addition to double precision #312
Comments
This is a high-impact change, with relevant benefits for image size and a positive medium impact on execution performance.

Background

According to STM32 AN4044, we have a single-precision FPU on most F and L series parts, and a double-precision FPU on some F7 and H7 parts. Single precision applies to ESP32 targets too, as they feature an SP FPU. The .NET Math API uses doubles for all its arguments.

Working hypothesis
Discussion
Analysis
Why is the difference between the float and double tests so small? To me, use of the hardware FPU should show a (much) larger gap.
I probably haven't explained myself clearly enough. All the numbers above use the FPU. The differences under discussion are between using single and double precision.
Then the same test without the FPU enabled would have been useful, although I already know the result (it would be worse).
The discussion is around SP and DP. No doubts about the FPU... On your comment above, aren't you overlooking the image-size gain?
No, I did not neglect that gain in size. I was just trying to see if there were any "cons" that would void this gain.
Oh! 😄 I didn't understand that you were in favor. That's why I insisted. Apologies.
I'm not sure if I understand José's suggestion fully: Do you suggest applications should still use double-typed variables and math function calls, and have the function convert them to single precision, pass to the FPU, and convert the result back from single to double? If so, wouldn't this impose an (unexpected, from the application's point of view) precision loss? Also, would converting from and to single/double not add a performance penalty which would not exist if we used single-precision variables and function calls in the first place?
A couple of clarifications:
Alternatives to tackle the latter:
I would prefer 1. because of the platform constraints and because it's more efficient in all aspects (class library, native, usability).
As it seems, I have understood your suggestion now ;-) And then, I would argue against it for the following reasons: a) Offering Math.Sqrt(double) (for example), but calculating only with single precision, is a lie, and could lead customers into strange bugs which they don't understand, if they happen to actually need double precision. We can assume that this will be seldom, but we don't know it. b) If they discover that Sqrt(double) computes only SP, they would blame the framework. c) I can't believe that there is no (even small) processing overhead when passing a double into a single parameter. The types have different sizes and layouts (4 vs. 8 bytes), so even if C# implicitly converts between them, it has to do so by inserting some instructions to actually do it. This will run on the CPU, be it fast or not. So, I strongly suggest being clear and honest here. I could imagine several ways:
If the memory savings using only SP and the SP FPU are so charming, couldn't there be an optional NuGet package containing the DP overloads, software-computed if needed, that we can use if we need DP? This could be a way if the software emulation does not have to be on or off during the build of the firmware. Anyway, I would say: If I need DP, and there are overloads taking DP parameters and returns, then I should get DP and not SP. If we only support SP, then let the method signatures reflect that by clearly taking SP parameters and return values.
@steffalk I understand your concerns and point of view about providing a clear and honest implementation of the API. We sure don't want to look like we are hiding obscure details and trying to be "smart" about this! 😄 There is an extra IL instruction to do the implicit conversion from float to double. That's negligible, but it exists. The other way around requires a cast. As for providing mscorlib with either float or double Math, as I've pointed out above, it would also require duplicating ALL the other class libraries (because they reference it). That's something I would rather stay away from, for the obvious reasons! 😓 Trying to wrap this up:
Details:
System.Math functions currently only available for double-precision values (such as Sqrt, the trigonometric functions, the logarithmic and exponential functions, Min() and Max()) should also be implemented for single-precision values.
Motivation:
Some CPUs (such as the STM type on Netduino 3 boards) have a floating-point unit capable of computations on single-precision values, but not double precision. We cannot make use of those FPUs if System.Math does not offer single-precision overloads of its functions, and thus have to use double-precision values and computations, at lower performance. This may matter for applications where single precision would be sufficient and we have to perform many computations, or have to perform them in IRQ event handlers.
nanoFramework area: Hardware/target board | Nuget packages | Community targets
Detailed repro steps so we can see the same problem
Open the Object Browser window in Visual Studio and navigate to mscorlib's System.Math. Many functions are offered for double, but not for single-precision values.