
[release/8.0] Vectorize TensorPrimitives APIs #93746

Merged: 30 commits on Oct 20, 2023

Commits on Oct 20, 2023

  1. Commit: bd57689
  2. Simplify TensorPrimitive's AbsoluteOperator (dotnet#92577)

    Vector{128/256/512} all provide Abs; no need to do this manually.
    stephentoub authored and michaelgsharp committed Oct 20, 2023
    Commit: 4afeb64
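    For context, a minimal sketch of what relying on the built-in Abs looks like. The helper and names below are illustrative, not the actual dotnet/runtime source, which also covers the Vector128/512 paths, alignment, and remainder handling.

    ```csharp
    // Illustrative sketch: Vector256.Abs replaces any hand-rolled sign-bit masking.
    using System;
    using System.Runtime.InteropServices;
    using System.Runtime.Intrinsics;

    internal static class AbsSketch
    {
        public static void Abs(ReadOnlySpan<float> x, Span<float> destination)
        {
            ref float xRef = ref MemoryMarshal.GetReference(x);
            ref float dRef = ref MemoryMarshal.GetReference(destination);

            int i = 0;
            if (Vector256.IsHardwareAccelerated)
            {
                for (; i <= x.Length - Vector256<float>.Count; i += Vector256<float>.Count)
                {
                    // Vector128/256/512 all provide Abs; no manual bit twiddling needed.
                    Vector256.Abs(Vector256.LoadUnsafe(ref xRef, (nuint)i))
                             .StoreUnsafe(ref dRef, (nuint)i);
                }
            }

            // Scalar tail for the last few elements.
            for (; i < x.Length; i++)
            {
                destination[i] = MathF.Abs(x[i]);
            }
        }
    }
    ```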
  3. Reduce some boilerplate in TensorPrimitive's IBinaryOperator (dotnet#92576)
    
    Change a few of the static abstract interface methods to be virtual, as most implementations throw from these methods; we can consolidate that throwing to the base.
    stephentoub authored and michaelgsharp committed Oct 20, 2023
    Commit: 3f246e3
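    The shape of that change, sketched with hypothetical names (the real interfaces are internal to System.Numerics.Tensors): a static virtual member with a throwing default lets the many operators that can't vectorize a given method inherit the throw instead of repeating it.

    ```csharp
    // Sketch of the pattern; not the actual dotnet/runtime interface.
    using System;
    using System.Runtime.Intrinsics;

    internal interface IBinaryOperatorSketch
    {
        static abstract float Invoke(float x, float y);

        // Default throws; only operators that actually support the vector path override it.
        static virtual Vector256<float> Invoke(Vector256<float> x, Vector256<float> y) =>
            throw new NotSupportedException();
    }

    internal readonly struct AddOperatorSketch : IBinaryOperatorSketch
    {
        public static float Invoke(float x, float y) => x + y;

        // This operator does vectorize, so it provides the Vector256 overload.
        public static Vector256<float> Invoke(Vector256<float> x, Vector256<float> y) => x + y;
    }
    ```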
  4. Minor code cleanup in TensorPrimitives tests (dotnet#92575)

    * Normalize some test naming
    
    * Alphabetize tests
    
    * Improve mismatched-length tests to cover all positions of the shorter tensor
    
    * Alphabetize methods in TensorPrimitives.cs
    stephentoub authored and michaelgsharp committed Oct 20, 2023
    Commit: a0706c9
  5. Vectorize TensorPrimitives.Min/Max{Magnitude} (dotnet#92618)

    * Vectorize TensorPrimitives.Min/Max{Magnitude}
    
    * Use AdvSimd.Max/Min
    
    * Rename some parameters/locals for consistency
    
    * Improve HorizontalAggregate
    
    * Move a few helpers
    
    * Avoid scalar path for returning found NaN
    stephentoub authored and michaelgsharp committed Oct 20, 2023
    Commit: b55a315
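    For the "Use AdvSimd.Max/Min" bullet, an illustrative fragment; NaN propagation and the other fallbacks in the real code are omitted.

    ```csharp
    // Sketch only: prefer the dedicated NEON instruction on Arm, fall back to compare-and-select.
    using System.Runtime.Intrinsics;
    using System.Runtime.Intrinsics.Arm;

    internal static class MaxSketch
    {
        public static Vector128<float> Max(Vector128<float> x, Vector128<float> y) =>
            AdvSimd.IsSupported
                ? AdvSimd.Max(x, y)
                : Vector128.ConditionalSelect(Vector128.GreaterThan(x, y), x, y);
    }
    ```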
  6. Update TensorPrimitives aggregations to vectorize handling of remaining elements (dotnet#92672)
    
    * Update TensorPrimitives.CosineSimilarity to vectorize handling of remaining elements
    
    * Vectorize remainder handling for Aggregate helpers
    stephentoub authored and michaelgsharp committed Oct 20, 2023
    Commit: dabae03
  7. Flesh out TensorPrimitives XML docs (dotnet#92749)

    * Flesh out TensorPrimitives XML docs
    
    * Address PR feedback
    
    - Remove use of FusedMultiplyAdd from all but CosineSimilarity
    - Remove comments about platform/OS-specific behavior from Add/AddMultiply/Subtract/Multiply/MultiplyAdd/Divide/Negate
    - Loosen comments about NaN and which exact one is returned
    
    * Address PR feedback
    stephentoub authored and michaelgsharp committed Oct 20, 2023
    Commit: 50e3948
  8. Commit: 4088f05
  9. Enable TensorPrimitives to perform in-place operations (dotnet#92820)

    Some operations would produce incorrect results if the same span was passed as both an input and an output. When vectorization was employed but the span's length wasn't a perfect multiple of a vector, we'd do the standard trick of performing one last operation on the final vector's worth of data; that trick relies on reprocessing an element being harmless, and when the same memory is used for input and output, a previous iteration may already have overwritten the input with results, so reprocessing isn't harmless for every operation. This fixes that by masking off the already-processed elements. It adds tests to validate that in-place use works, and it updates the docs to carve out this valid overlapping.
    stephentoub authored and michaelgsharp committed Oct 20, 2023
    Commit: 02416c2
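    A sketch of the masking idea with assumed names (the real fix builds its masks differently and covers every vector width): the operation is still computed for one full, overlapping vector at the end, but only the lanes that haven't been processed yet are allowed to land in the destination, so results already written in place aren't pushed through the operator a second time.

    ```csharp
    // Illustrative tail handling for destination = x * y when x and destination may overlap.
    // 'remaining' is x.Length % Vector256<float>.Count and is assumed to be non-zero.
    using System;
    using System.Runtime.InteropServices;
    using System.Runtime.Intrinsics;

    internal static class InPlaceTailSketch
    {
        public static void MultiplyTail(ReadOnlySpan<float> x, float y, Span<float> destination, int remaining)
        {
            ref float xRef = ref MemoryMarshal.GetReference(x);
            ref float dRef = ref MemoryMarshal.GetReference(destination);
            nuint last = (nuint)(x.Length - Vector256<float>.Count);

            // Lane j still needs processing iff j >= Count - remaining.
            Vector256<float> mask = Vector256.GreaterThanOrEqual(
                Vector256.Create(0, 1, 2, 3, 4, 5, 6, 7),
                Vector256.Create(Vector256<float>.Count - remaining)).AsSingle();

            Vector256<float> computed = Vector256.LoadUnsafe(ref xRef, last) * y;
            Vector256<float> existing = Vector256.LoadUnsafe(ref dRef, last);

            // Keep results already written in place; store new values only for unprocessed lanes.
            Vector256.ConditionalSelect(mask, computed, existing).StoreUnsafe(ref dRef, last);
        }
    }
    ```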
  10. Vectorize TensorPrimitives.ConvertToSingle (dotnet#92779)

    * Vectorize TensorPrimitives.ConvertToSingle
    
    * Address PR feedback
    stephentoub authored and michaelgsharp committed Oct 20, 2023
    Commit: fdff01f
  11. Commit: bbd26a2
  12. Vectorize TensorPrimitives.Log2 (dotnet#92897)

    * Add a way to support operations that can't be vectorized on netstandard
    
    * Updating TensorPrimitives.Log2 to be vectorized on .NET Core
    
    * Update src/libraries/System.Numerics.Tensors/src/System/Numerics/Tensors/TensorPrimitives.netstandard.cs
    
    Co-authored-by: Stephen Toub <stoub@microsoft.com>
    
    * Ensure we do an arithmetic right shift in the Log2 vectorization
    
    * Ensure the code can compile on .NET 7
    
    * Ensure that edge cases are properly handled and don't resolve to `x`
    
    * Ensure that Log2 special results are explicitly handled.
    
    ---------
    
    Co-authored-by: Stephen Toub <stoub@microsoft.com>
    2 people authored and michaelgsharp committed Oct 20, 2023
    Commit: b92402b
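    For the arithmetic-right-shift bullet, a rough illustration of the kind of bit manipulation a vectorized log2 performs; this is not the committed algorithm, just the exponent-extraction step for positive, normal inputs.

    ```csharp
    // floor(log2(x)) for positive normal floats: shift the IEEE-754 biased exponent down
    // and remove the bias; Vector256.ShiftRightArithmetic is the shift used here.
    using System.Runtime.Intrinsics;

    internal static class Log2Sketch
    {
        public static Vector256<float> ExponentPart(Vector256<float> x)
        {
            Vector256<int> bits = x.AsInt32();
            Vector256<int> exponent = Vector256.ShiftRightArithmetic(bits, 23) - Vector256.Create(127);
            return Vector256.ConvertToSingle(exponent);
        }
    }
    ```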
  13. Commit: 13ee491
  14. [wasm] Disable TensorPrimitivesTests.ConvertToHalf_SpecialValues (dotnet#92953)
    
    Failing test: `System.Numerics.Tensors.Tests.TensorPrimitivesTests.ConvertToHalf_SpecialValues`
    
    Issue: dotnet#92885
    radical authored and michaelgsharp committed Oct 20, 2023
    Commit: 2091662
  15. Adding a vectorized implementation of TensorPrimitives.Log (dotnet#92960)
    
    * Adding a vectorized implementation of TensorPrimitives.Log
    
    * Make sure to hit Ctrl+S
    tannergooding authored and michaelgsharp committed Oct 20, 2023
    Commit: ec9762c
  16. Commit: 74c7e7a
  17. Vectorize TensorPrimitives.Exp (dotnet#93018)

    * Vectorize TensorPrimitives.Exp
    
    * Update src/libraries/System.Numerics.Tensors/src/System/Numerics/Tensors/TensorPrimitives.netstandard.cs
    tannergooding authored and michaelgsharp committed Oct 20, 2023
    Commit: f48d8b0
  18. Vectorize TensorPrimitives.Sigmoid and TensorPrimitives.SoftMax (dotnet#93029)
    
    * Vectorize TensorPrimitives.Sigmoid and TensorPrimitives.SoftMax
    
    - Adds a SigmoidOperator that just wraps the ExpOperator
    - Vectorizes both passes of SoftMax, on top of ExpOperator. Simplest way to do this was to augment the existing InvokeSpanScalarIntoSpan to take a transform operator.
    - In doing so, I found some naming inconsistencies I'd previously introduced, so I did some automatic renaming to make things more consistent.
    - Added XML comments to all the internal/private surface area.
    - Fleshes out some tests (and test values).
    
    * Disable tests on mono
    
    * Address PR feedback
    stephentoub authored and michaelgsharp committed Oct 20, 2023
    Commit: 6c63ae7
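    The "SigmoidOperator that just wraps the ExpOperator" idea in sketch form, using hypothetical operator shapes; the real operators are internal and the real ExpOperator is fully vectorized.

    ```csharp
    // sigmoid(x) = 1 / (1 + e^-x), layered on an exp operator so it vectorizes wherever exp does.
    using System;
    using System.Runtime.InteropServices;
    using System.Runtime.Intrinsics;

    internal interface IUnaryOperatorSketch
    {
        static abstract float Invoke(float x);
        static abstract Vector256<float> Invoke(Vector256<float> x);
    }

    internal readonly struct ExpOperatorSketch : IUnaryOperatorSketch
    {
        public static float Invoke(float x) => MathF.Exp(x);

        // Stand-in for the vectorized exp; the real operator evaluates a SIMD polynomial.
        public static Vector256<float> Invoke(Vector256<float> x)
        {
            Span<float> tmp = stackalloc float[Vector256<float>.Count];
            for (int i = 0; i < tmp.Length; i++) tmp[i] = MathF.Exp(x.GetElement(i));
            return Vector256.LoadUnsafe(ref MemoryMarshal.GetReference(tmp));
        }
    }

    internal readonly struct SigmoidOperatorSketch : IUnaryOperatorSketch
    {
        public static float Invoke(float x) => 1f / (1f + ExpOperatorSketch.Invoke(-x));

        public static Vector256<float> Invoke(Vector256<float> x) =>
            Vector256<float>.One / (Vector256<float>.One + ExpOperatorSketch.Invoke(-x));
    }
    ```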
  19. Vectorize TensorPrimitives.Tanh/Cosh/Sinh (dotnet#93093)

    * Vectorize TensorPrimitives.Tanh/Cosh/Sinh
    
    Tanh and Cosh are based on AOCL-LibM.
    
    AOCL-LibM doesn't appear to have a sinh implementation, so Sinh is implemented directly from the exp(x)-based formula for sinh.
    
    I also augmented the tests further, including:
    - Added more tests for sinh/cosh/tanh
    - Add an equality routine that supports comparing larger values with a tolerance
    - Tightened the tolerance for most functions
    - Changed some tests to be theories to be consistent with style elsewhere in the tests
    - Fixed some use of Math to be MathF
    
    * Remove unnecessary special-handling path from cosh
    
    * Remove unnecessary special-handling path from tanh
    
    * Redo sinh based on cosh
    
    * Address PR feedback
    stephentoub authored and michaelgsharp committed Oct 20, 2023
    Commit: bc4d0cd
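    The exp-based identities mentioned above, for reference; the committed version was later reworked in terms of cosh and handles large and small inputs more carefully.

    ```csharp
    // sinh(x) = (e^x - e^-x) / 2 and cosh(x) = (e^x + e^-x) / 2; naive scalar forms of the
    // identities the vectorized implementations build on.
    using System;

    internal static class HyperbolicSketch
    {
        public static float SinhViaExp(float x) => (MathF.Exp(x) - MathF.Exp(-x)) * 0.5f;
        public static float CoshViaExp(float x) => (MathF.Exp(x) + MathF.Exp(-x)) * 0.5f;
    }
    ```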
  20. Commit: e9b29c0
  21. Commit: 8db0a9b
  22. Commit: cd02aa5
  23. Fix TensorPrimitives.IndexOfXx corner-case when first element is seed value (dotnet#93169)
    
    * Fix TensorPrimitives.IndexOfXx corner-case when first element is seed value
    
    Found as part of adding more tests for Min/Max{Magnitude} to validate they match their IndexOfXx variants.
    
    * Address PR feedback
    stephentoub authored and michaelgsharp committed Oct 20, 2023
    Commit: fffa1d4
  24. Improve a vector implementation to support alignment and non-temporal stores (dotnet#93296)
    
    * Improve a vector implementation to support alignment and non-temporal stores
    
    * Fix a build error and mark a couple methods as AggressiveInlining
    
    * Fix the remaining block count computation
    
    * Ensure overlapping for small data on the V256/512 paths is handled
    
    * Ensure we only go down the vectorized path when supported for netstandard
    tannergooding authored and michaelgsharp committed Oct 20, 2023
    Commit: 3a970a8
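    An illustration of the two store flavors the commit distinguishes; the threshold value and the aligned-pointer assumption below are illustrative, not taken from the PR.

    ```csharp
    // Sketch: once the destination is 32-byte aligned, very large writes can use non-temporal
    // stores so data that won't be re-read soon doesn't evict useful cache lines.
    using System.Runtime.Intrinsics;

    internal static unsafe class StoreSketch
    {
        private const int AssumedNonTemporalByteThreshold = 256 * 1024; // illustrative value

        public static void Store(Vector256<float> value, float* alignedDestination, long totalBytes)
        {
            if (totalBytes >= AssumedNonTemporalByteThreshold)
            {
                value.StoreAlignedNonTemporal(alignedDestination); // e.g. MOVNTPS on x86
            }
            else
            {
                value.StoreAligned(alignedDestination);
            }
        }
    }
    ```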
  25. Commit: b0dd6ca
  26. Use the improved vectorization algorithm for binary and ternary TensorPrimitives operations (dotnet#93409)
    
    * Update InvokeSpanSpanIntoSpan<TBinaryOperator> for TensorPrimitives to use the better SIMD algorithm
    
    * Update InvokeSpanScalarIntoSpan<TTransformOperator, TBinaryOperator> for TensorPrimitives to use the better SIMD algorithm
    
    * Update InvokeSpanSpanSpanIntoSpan<TTernaryOperator> for TensorPrimitives to use the better SIMD algorithm
    
    * Update InvokeSpanSpanScalarIntoSpan<TTernaryOperator> for TensorPrimitives to use the better SIMD algorithm
    
    * Update InvokeSpanScalarSpanIntoSpan<TTernaryOperator> for TensorPrimitives to use the better SIMD algorithm
    
    * Improve codegen slightly by using case 0, rather than default
    
    * Adjust the canAlign check to be later, to reduce the branch count for data under the threshold
    
    * Add a comment explaining the NonTemporalByteThreshold
    
    * Make sure xTransformOp.CanVectorize is checked on .NET Standard
    tannergooding authored and michaelgsharp committed Oct 20, 2023
    Commit: 8a7a6bb
  27. Use the improved vectorization algorithm for aggregate TensorPrimitives operations (dotnet#93695)
    
    * Improve the handling of the IAggregationOperator implementations
    
    * Update Aggregate<TTransformOperator, TAggregationOperator> for TensorPrimitives to use the better SIMD algorithm
    
    * Update Aggregate<TBinaryOperator, TAggregationOperator> for TensorPrimitives to use the better SIMD algorithm
    
    * Respond to PR feedback
    tannergooding authored and michaelgsharp committed Oct 20, 2023
    Commit: 1c2126e
  28. Commit: 13b47f4
  29. Commit: f86414a
  30. Vectorize IndexOfMin/Max/Magnitude (dotnet#93469)

    * resolved merge conflicts
    
    * net core full done
    
    * minor code cleanup
    
    * NetStandard and PR fixes.
    
    * minor pr changes
    
    * Fix IndexOfMaxMagnitudeOperator
    
    * Fix IndexOfMaxMagnitudeOperator on netcore
    
    * updates from PR comments
    
    * netcore fixed
    
    * net standard updated
    
    * add reference assembly exclusions
    
    * made naive approach better
    
    * resolved PR comments
    
    * minor comment changes
    
    * minor formatting fixes
    
    * added inlining
    
    * fixes from PR comments
    
    * comments from pr
    
    * fixed spacing
    
    ---------
    
    Co-authored-by: Eric StJohn <ericstj@microsoft.com>
    michaelgsharp and ericstj committed Oct 20, 2023
    Commit: cb48e75