JIT: Unblock Vector###<long> intrinsics on x86 #112728
Conversation
Tagging subscribers to this area: @JulieLeeMSFT, @jakobbotsch
This is ready for review.
cc @tannergooding
// Keep casts with operands usable from memory.
if (castOp->isContained() || castOp->IsRegOptional())
{
    return op;
}
This condition, added in #72719, made this method effectively useless. Removing it was a zero-diff change. I can look in the future at containing the casts rather than removing them.
@@ -4677,19 +4539,16 @@ GenTree* Lowering::LowerHWIntrinsicCreate(GenTreeHWIntrinsic* node)
    return LowerNode(node);
}

GenTree* op2 = node->Op(2);

// TODO-XArch-AVX512 : Merge the NI_Vector512_Create and NI_Vector256_Create paths below.
The churn in this section is just taking care of this TODO.
assert(comp->compIsaSupportedDebugOnly(InstructionSet_SSE2));

tmp2 = InsertNewSimdCreateScalarUnsafeNode(TYP_SIMD16, op2, simdBaseJitType, 16);
LowerNode(tmp2);

node->ResetHWIntrinsicId(NI_SSE_MoveLowToHigh, tmp1, tmp2);
Changing this to UnpackLow shows up as a regression in a few places, because movlhps is one byte smaller, but it enables other optimizations, since unpcklpd takes a memory operand plus mask and embedded broadcast.

Vector128.Create(double, 1.0):
- vmovups xmm0, xmmword ptr [reloc @RWD00]
- vmovlhps xmm0, xmm1, xmm0
+ vunpcklpd xmm0, xmm1, qword ptr [reloc @RWD00] {1to2}
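For context, a minimal C# repro of the pattern above might look like the following (a sketch; the method name is made up, and the embedded-broadcast form shown in the diff requires AVX-512):

using System.Runtime.Intrinsics;

class Example
{
    // The constant 1.0 upper element comes from memory, so with UnpackLow the
    // JIT can fold it as an embedded-broadcast operand of vunpcklpd instead of
    // loading it with a separate vmovups before vmovlhps.
    static Vector128<double> WithConstantUpper(double lower)
    {
        return Vector128.Create(lower, 1.0);
    }
}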
This should probably be peepholed back to vmovlhps if both operands come from registers.
I was thinking the same but would rather save that for a followup. LLVM has a replacement list of equivalent instructions that have different sizes, and unpcklpd is on it, as are things like vpermilps, which is replaced by pshufd.

It's worth having a discussion about whether we'd also want to do replacements that switch between the float and integer domains. I'll open an issue.
if (varDsc->lvIsParam)
{
    // Promotion blocks combined read optimizations for SIMD loads of long params
    return;
}
In isolation, this change produced a small number of diffs and was mostly an improvement. A few regressions show up in the SPMI reports, but the overall impact is good, especially considering the places where we can now load a long into a vector with movq.
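To make the param case concrete, here is a hedged C# sketch of the kind of method affected (hypothetical name, not taken from the PR's diffs):

using System.Runtime.Intrinsics;

class Example
{
    // Per the review comment above: leaving a long param unpromoted lets a SIMD
    // load of it read all 8 bytes at once (e.g. with movq) instead of being
    // blocked by the promoted halves.
    static Vector128<long> FromParam(long x)
    {
        return Vector128.CreateScalar(x);
    }
}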
assert(m_compiler->compIsaSupportedDebugOnly(InstructionSet_SSE2));

GenTree* thirtyTwo = m_compiler->gtNewIconNode(32);
GenTree* shift = m_compiler->gtNewSimdBinOpNode(GT_RSZ, op1->TypeGet(), simdTmpVar, thirtyTwo,
Isn't this missing ToScalar() to get the 32-bit integer out of the simd result?
ToScalar is the original intrinsicId, and it's built on the next line down. Full SSE2 codegen for Vector128<ulong>.ToScalar():
; Method Program:ToScalar(System.Runtime.Intrinsics.Vector128`1[ulong]):ulong (FullOpts)
G_M34649_IG01: ;; offset=0x0000
push ebp
mov ebp, esp
movups xmm0, xmmword ptr [ebp+0x08]
;; size=7 bbWeight=1 PerfScore 4.25
G_M34649_IG02: ;; offset=0x0007
movd eax, xmm0
psrlq xmm0, 32
movd edx, xmm0
;; size=13 bbWeight=1 PerfScore 4.50
G_M34649_IG03: ;; offset=0x0014
pop ebp
ret 16
;; size=4 bbWeight=1 PerfScore 2.50
; Total bytes of code: 24
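In other words, the decomposition is roughly equivalent to the following C# (illustration only; the JIT builds this in GenTree form rather than through these APIs):

using System.Runtime.Intrinsics;

class Example
{
    // Mirrors the codegen above: movd grabs the low 32 bits, psrlq shifts the
    // upper half of the lane down, and a second movd grabs it; the two halves
    // form the 64-bit result (edx:eax on x86).
    static ulong ToScalarDecomposed(Vector128<ulong> v)
    {
        uint lo = v.AsUInt32().ToScalar();
        uint hi = Vector128.ShiftRightLogical(v, 32).AsUInt32().ToScalar();
        return ((ulong)hi << 32) | lo;
    }
}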
src/coreclr/jit/lowerxarch.cpp
}
- else
+ else if (op1->OperIs(GT_IND))
Why IND in particular and not other types of containable memory ops?
Good point, will fix.
Added LCL_FLD as well. I believe LCL_VAR will have INT type, so even if it's marked DNE, it can't be handled by this case.
src/coreclr/jit/decomposelongs.cpp
// * STOREIND long

GenTree* next = tree->gtNext;
if ((user != next) && !m_compiler->gtTreeHasSideEffects(next, GTF_SIDE_EFFECT))
gtTreeHasSideEffects is for HIR, not LIR. For LIR you should use OperEffects. But also, relying on the execution order in this way is an anti-pattern. It means optimizations will subtly break from unrelated changes that people may make in the future. Can the whole thing be changed to use an appropriate IsInvariantInRange check?
Ok, yeah, I wasn't happy with this but wasn't sure of the best way to handle it. I started down the path of using IsInvariantInRange, but that's private to Lowering, so I'd have to move it to the public surface and then have lowering pass itself to DecomposeLongs. If that change is ok, I'll go ahead and do it.
That sounds ok to me. Alternatively this could be a static method on SideEffectSet or LIR, and then each of Lower and DecomposeLongs would have a cached SideEffectSet to use for it.
Done. I used IsSafeToContainMem instead of IsInvariantInRange because the name and arg order make more sense (to me, at least) in this context.
This resolves a large number of TODOs around HWIntrinsic expansion involving scalar longs on x86.

The most significant change here is in promoting CreateScalar and ToScalar to be code-generating intrinsics instead of converting them to other intrinsics at lowering. This was necessary in order to handle emitting movq for scalar long loads/stores, but it also unlocks several other optimizations, since we can now allow CreateScalar and ToScalar to be contained and can specialize codegen depending on whether they end up loading/storing from/to memory or not. Some example improvements on x64 are Vector128.CreateScalar(ref float), Vector128.CreateScalar(ref double), ref byte = Vector128<byte>.ToScalar(), and Vector<byte>.ToScalar(), plus the less realistic, but still interesting, Sse.AddScalar(Vector128.CreateScalar(ref float), Vector128.CreateScalar(ref float)).ToScalar().

This also removes some redundant casts for CreateScalar of small types. Previously, a zero-extending cast was inserted unconditionally and was sometimes removed by peephole opt on x64 but often wasn't. Examples include Vector128.CreateScalar(short), Vector128.CreateScalar(checked((byte)val)), and Vector128.CreateScalar(ref sbyte).

x86 diffs are much more significant, because of the newly enabled intrinsic expansion.
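As a rough illustration of the scalar long pattern the description refers to (a hypothetical example, not taken from the PR's diffs), the following round trip is the kind of code the newly unblocked expansion targets, with the load/store side now able to use movq as described above:

using System.Runtime.Intrinsics;

class Example
{
    // A scalar long pushed into and pulled back out of a Vector128; on 32-bit
    // x86 both CreateScalar and ToScalar can now expand as intrinsics.
    static long RoundTrip(long value)
    {
        Vector128<long> v = Vector128.CreateScalar(value);
        return v.ToScalar();
    }
}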