Conversation


@yiwu0b11 yiwu0b11 commented Dec 15, 2025

This patch adds mid-end support for vectorized min/max reduction operations on half floats, along with AArch64 backend support for these operations.
Floating-point min and max reductions do not require strict ordering, because both operations are associative.

It generates NEON fminv/fmaxv reduction instructions when the maximum vector length is 8B or 16B. On machines supporting SVE with vector lengths greater than 16B, it generates the SVE fminv/fmaxv instructions.
The patch also adds support for partial min/max reductions on SVE machines using fminv/fmaxv.
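As a rough illustration (not code from this patch or its benchmarks), the shape of the scalar max-reduction loop that auto-vectorization can now turn into fmaxv is sketched below. Since the incubating Float16 API is not assumed here, half floats are held as raw 16-bit patterns and widened per lane with the standard Float.float16ToFloat conversion (available since JDK 20); the class and method names are hypothetical.

```java
// Hypothetical sketch of an FP16 max-reduction kernel. Each short holds
// an IEEE 754 binary16 bit pattern; Float.float16ToFloat widens it to
// float before the reduction step. Loops of this shape over half floats
// are what the patch allows C2 to compile to NEON/SVE fmaxv.
public class Fp16MaxReduction {
    static float maxFP16(short[] halfBits) {
        float max = Float.NEGATIVE_INFINITY;
        for (short h : halfBits) {
            // Widen the half-precision lane and accumulate the maximum.
            max = Math.max(max, Float.float16ToFloat(h));
        }
        return max;
    }

    public static void main(String[] args) {
        short[] v = {
            Float.floatToFloat16(1.5f),
            Float.floatToFloat16(-2.0f),
            Float.floatToFloat16(3.25f)
        };
        System.out.println(maxFP16(v)); // prints 3.25
    }
}
```

Because max is associative, the lanes can be reduced in any order, which is what permits the single-instruction vector reduction instead of a strictly ordered chain.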

A throughput (ops/ms) ratio > 1 indicates that performance with this patch is better than mainline.

Neoverse N1 (UseSVE = 0, max vector length = 16B):

Benchmark         vectorDim  Mode   Cnt     8B    16B
ReductionMaxFP16   256       thrpt 9      3.69   6.44
ReductionMaxFP16   512       thrpt 9      3.71   7.62
ReductionMaxFP16   1024      thrpt 9      4.16   8.64
ReductionMaxFP16   2048      thrpt 9      4.44   9.12
ReductionMinFP16   256       thrpt 9      3.69   6.43
ReductionMinFP16   512       thrpt 9      3.70   7.62
ReductionMinFP16   1024      thrpt 9      4.16   8.64
ReductionMinFP16   2048      thrpt 9      4.44   9.10

Neoverse V1 (UseSVE = 1, max vector length = 32B):

Benchmark         vectorDim  Mode   Cnt     8B    16B    32B
ReductionMaxFP16   256       thrpt 9      3.96   8.62   8.02
ReductionMaxFP16   512       thrpt 9      3.54   9.25  11.71
ReductionMaxFP16   1024      thrpt 9      3.77   8.71  14.07
ReductionMaxFP16   2048      thrpt 9      3.88   8.44  14.69
ReductionMinFP16   256       thrpt 9      3.96   8.61   8.03
ReductionMinFP16   512       thrpt 9      3.54   9.28  11.69
ReductionMinFP16   1024      thrpt 9      3.76   8.70  14.12
ReductionMinFP16   2048      thrpt 9      3.87   8.45  14.70

Neoverse V2 (UseSVE = 2, max vector length = 16B):

Benchmark         vectorDim  Mode   Cnt     8B    16B
ReductionMaxFP16   256       thrpt 9      4.78  10.00
ReductionMaxFP16   512       thrpt 9      3.74  11.33
ReductionMaxFP16   1024      thrpt 9      3.86   9.59
ReductionMaxFP16   2048      thrpt 9      3.94   8.71
ReductionMinFP16   256       thrpt 9      4.78  10.00
ReductionMinFP16   512       thrpt 9      3.74  11.29
ReductionMinFP16   1024      thrpt 9      3.86   9.58
ReductionMinFP16   2048      thrpt 9      3.94   8.71

Testing:
hotspot_all, jdk (tier1-3), and langtools (tier1) all pass on Neoverse N1/V1/V2.


Progress

  • Change must be properly reviewed (1 review required, with at least 1 Reviewer)
  • Change must not contain extraneous whitespace
  • Commit message must refer to an issue

Issue

  • JDK-8373344: Add support for min/max reduction operations for Float16 (Enhancement - P4)

Reviewing

Using git

Checkout this PR locally:
$ git fetch https://git.openjdk.org/jdk.git pull/28828/head:pull/28828
$ git checkout pull/28828

Update a local copy of the PR:
$ git checkout pull/28828
$ git pull https://git.openjdk.org/jdk.git pull/28828/head

Using Skara CLI tools

Checkout this PR locally:
$ git pr checkout 28828

View PR using the GUI difftool:
$ git pr show -t 28828

Using diff file

Download this PR as a diff file:
https://git.openjdk.org/jdk/pull/28828.diff

Using Webrev

Link to Webrev Comment


bridgekeeper bot commented Dec 15, 2025

👋 Welcome back yiwu0b11! A progress list of the required criteria for merging this PR into master will be added to the body of your pull request. There are additional pull request commands available for use with this pull request.


openjdk bot commented Dec 15, 2025

❗ This change is not yet ready to be integrated.
See the Progress checklist in the description for automated requirements.

@openjdk openjdk bot added hotspot-compiler hotspot-compiler-dev@openjdk.org core-libs core-libs-dev@openjdk.org labels Dec 15, 2025

openjdk bot commented Dec 15, 2025

@yiwu0b11 The following labels will be automatically applied to this pull request:

  • core-libs
  • hotspot-compiler

When this pull request is ready to be reviewed, an "RFR" email will be sent to the corresponding mailing lists. If you would like to change these labels, use the /label pull request command.

@openjdk openjdk bot added the rfr Pull request is ready for review label Dec 15, 2025

mlbridge bot commented Dec 15, 2025

Webrevs

@yiwu0b11 yiwu0b11 changed the title 8373344: Add support for FP16 min/max reduction operations 8373344: Add support for min/max reduction operations for Float16 Dec 15, 2025