[InstCombine] Make indexed compare fold GEP source type independent #71663
Conversation
@llvm/pr-subscribers-llvm-transforms

Author: Nikita Popov (nikic)

Changes

The indexed compare fold converts comparisons of GEPs with the same (indirect) base into comparisons of offsets. Currently, it only supports GEPs with the same source element type. This change makes the transform operate on offsets instead, which removes the type dependence. To keep closer to the scope of the original implementation, this keeps the limitation that we should have at most one variable index per GEP.

This addresses the main regression from #68882.

TBH I have some doubts that this is really a useful transform (at least for the case where there are extra pointer users, so we have to rematerialize pointers at some point). I can only assume it exists for a reason...

Full diff: https://github.com/llvm/llvm-project/pull/71663.diff

3 Files Affected:
diff --git a/llvm/lib/Transforms/InstCombine/InstCombineCompares.cpp b/llvm/lib/Transforms/InstCombine/InstCombineCompares.cpp
index 7c2ad92f919a3cc..c4b8ed7be3f97fe 100644
--- a/llvm/lib/Transforms/InstCombine/InstCombineCompares.cpp
+++ b/llvm/lib/Transforms/InstCombine/InstCombineCompares.cpp
@@ -18,6 +18,7 @@
#include "llvm/Analysis/CmpInstAnalysis.h"
#include "llvm/Analysis/ConstantFolding.h"
#include "llvm/Analysis/InstructionSimplify.h"
+#include "llvm/Analysis/Utils/Local.h"
#include "llvm/Analysis/VectorUtils.h"
#include "llvm/IR/ConstantRange.h"
#include "llvm/IR/DataLayout.h"
@@ -413,11 +414,12 @@ Instruction *InstCombinerImpl::foldCmpLoadFromIndexedGlobal(
/// Returns true if we can rewrite Start as a GEP with pointer Base
/// and some integer offset. The nodes that need to be re-written
/// for this transformation will be added to Explored.
-static bool canRewriteGEPAsOffset(Type *ElemTy, Value *Start, Value *Base,
+static bool canRewriteGEPAsOffset(Value *Start, Value *Base,
const DataLayout &DL,
SetVector<Value *> &Explored) {
SmallVector<Value *, 16> WorkList(1, Start);
Explored.insert(Base);
+ uint64_t IndexSize = DL.getIndexTypeSizeInBits(Start->getType());
// The following traversal gives us an order which can be used
// when doing the final transformation. Since in the final
@@ -447,11 +449,11 @@ static bool canRewriteGEPAsOffset(Type *ElemTy, Value *Start, Value *Base,
return false;
if (auto *GEP = dyn_cast<GEPOperator>(V)) {
- // We're limiting the GEP to having one index. This will preserve
- // the original pointer type. We could handle more cases in the
- // future.
- if (GEP->getNumIndices() != 1 || !GEP->isInBounds() ||
- GEP->getSourceElementType() != ElemTy)
+ // Only allow GEPs with at most one variable offset.
+ APInt Offset(IndexSize, 0);
+ MapVector<Value *, APInt> VarOffsets;
+ if (!GEP->collectOffset(DL, IndexSize, VarOffsets, Offset) ||
+ VarOffsets.size() > 1)
return false;
if (!Explored.contains(GEP->getOperand(0)))
@@ -528,7 +530,7 @@ static void setInsertionPoint(IRBuilder<> &Builder, Value *V,
/// Returns a re-written value of Start as an indexed GEP using Base as a
/// pointer.
-static Value *rewriteGEPAsOffset(Type *ElemTy, Value *Start, Value *Base,
+static Value *rewriteGEPAsOffset(Value *Start, Value *Base,
const DataLayout &DL,
SetVector<Value *> &Explored,
InstCombiner &IC) {
@@ -539,8 +541,8 @@ static Value *rewriteGEPAsOffset(Type *ElemTy, Value *Start, Value *Base,
// 3. Add the edges for the PHI nodes.
// 4. Emit GEPs to get the original pointers.
// 5. Remove the original instructions.
- Type *IndexType = IntegerType::get(
- Base->getContext(), DL.getIndexTypeSizeInBits(Start->getType()));
+ uint64_t IndexSize = DL.getIndexTypeSizeInBits(Start->getType());
+ Type *IndexType = IntegerType::get(Base->getContext(), IndexSize);
DenseMap<Value *, Value *> NewInsts;
NewInsts[Base] = ConstantInt::getNullValue(IndexType);
@@ -559,29 +561,22 @@ static Value *rewriteGEPAsOffset(Type *ElemTy, Value *Start, Value *Base,
// Create all the other instructions.
for (Value *Val : Explored) {
-
if (NewInsts.contains(Val))
continue;
if (auto *GEP = dyn_cast<GEPOperator>(Val)) {
- Value *Index = NewInsts[GEP->getOperand(1)] ? NewInsts[GEP->getOperand(1)]
- : GEP->getOperand(1);
- setInsertionPoint(Builder, GEP);
- // Indices might need to be sign extended. GEPs will magically do
- // this, but we need to do it ourselves here.
- if (Index->getType()->getScalarSizeInBits() !=
- NewInsts[GEP->getOperand(0)]->getType()->getScalarSizeInBits()) {
- Index = Builder.CreateSExtOrTrunc(
- Index, NewInsts[GEP->getOperand(0)]->getType(),
- GEP->getOperand(0)->getName() + ".sext");
- }
+ APInt Offset(IndexSize, 0);
+ MapVector<Value *, APInt> VarOffsets;
+ GEP->collectOffset(DL, IndexSize, VarOffsets, Offset);
- auto *Op = NewInsts[GEP->getOperand(0)];
+ setInsertionPoint(Builder, GEP);
+ Value *Op = NewInsts[GEP->getOperand(0)];
+ Value *OffsetV = emitGEPOffset(&Builder, DL, GEP);
if (isa<ConstantInt>(Op) && cast<ConstantInt>(Op)->isZero())
- NewInsts[GEP] = Index;
+ NewInsts[GEP] = OffsetV;
else
NewInsts[GEP] = Builder.CreateNSWAdd(
- Op, Index, GEP->getOperand(0)->getName() + ".add");
+ Op, OffsetV, GEP->getOperand(0)->getName() + ".add");
continue;
}
if (isa<PHINode>(Val))
@@ -609,23 +604,14 @@ static Value *rewriteGEPAsOffset(Type *ElemTy, Value *Start, Value *Base,
}
}
- PointerType *PtrTy = PointerType::get(
- Base->getContext(), Start->getType()->getPointerAddressSpace());
for (Value *Val : Explored) {
if (Val == Base)
continue;
- // Depending on the type, for external users we have to emit
- // a GEP or a GEP + ptrtoint.
setInsertionPoint(Builder, Val, false);
-
- // Cast base to the expected type.
- Value *NewVal = Builder.CreateBitOrPointerCast(
- Base, PtrTy, Start->getName() + "to.ptr");
- NewVal = Builder.CreateInBoundsGEP(ElemTy, NewVal, ArrayRef(NewInsts[Val]),
- Val->getName() + ".ptr");
- NewVal = Builder.CreateBitOrPointerCast(
- NewVal, Val->getType(), Val->getName() + ".conv");
+ // Create GEP for external users.
+ Value *NewVal = Builder.CreateInBoundsGEP(
+ Builder.getInt8Ty(), Base, NewInsts[Val], Val->getName() + ".ptr");
IC.replaceInstUsesWith(*cast<Instruction>(Val), NewVal);
// Add old instruction to worklist for DCE. We don't directly remove it
// here because the original compare is one of the users.
@@ -637,28 +623,18 @@ static Value *rewriteGEPAsOffset(Type *ElemTy, Value *Start, Value *Base,
/// Looks through GEPs in order to express the input Value as a constant
/// indexed GEP. Returns a pair containing the GEPs Pointer and Index.
-static std::pair<Value *, Value *>
-getAsConstantIndexedAddress(Type *ElemTy, Value *V, const DataLayout &DL) {
- Type *IndexType = IntegerType::get(V->getContext(),
- DL.getIndexTypeSizeInBits(V->getType()));
-
- Constant *Index = ConstantInt::getNullValue(IndexType);
+static std::pair<Value *, APInt>
+getAsConstantIndexedAddress(Value *V, const DataLayout &DL) {
+ APInt Offset = APInt(DL.getIndexTypeSizeInBits(V->getType()), 0);
while (GEPOperator *GEP = dyn_cast<GEPOperator>(V)) {
// We accept only inbouds GEPs here to exclude the possibility of
// overflow.
- if (!GEP->isInBounds())
+ if (!GEP->isInBounds() || !GEP->accumulateConstantOffset(DL, Offset))
break;
- if (GEP->hasAllConstantIndices() && GEP->getNumIndices() == 1 &&
- GEP->getSourceElementType() == ElemTy &&
- GEP->getOperand(1)->getType() == IndexType) {
- V = GEP->getOperand(0);
- Constant *GEPIndex = static_cast<Constant *>(GEP->getOperand(1));
- Index = ConstantExpr::getAdd(Index, GEPIndex);
- continue;
- }
- break;
+
+ V = GEP->getPointerOperand();
}
- return {V, Index};
+ return {V, Offset};
}
/// Converts (CMP GEPLHS, RHS) if this change would make RHS a constant.
@@ -675,14 +651,14 @@ static Instruction *transformToIndexedCompare(GEPOperator *GEPLHS, Value *RHS,
if (!GEPLHS->hasAllConstantIndices())
return nullptr;
- Type *ElemTy = GEPLHS->getSourceElementType();
- Value *PtrBase, *Index;
- std::tie(PtrBase, Index) = getAsConstantIndexedAddress(ElemTy, GEPLHS, DL);
+ Value *PtrBase;
+ APInt Offset;
+ std::tie(PtrBase, Offset) = getAsConstantIndexedAddress(GEPLHS, DL);
// The set of nodes that will take part in this transformation.
SetVector<Value *> Nodes;
- if (!canRewriteGEPAsOffset(ElemTy, RHS, PtrBase, DL, Nodes))
+ if (!canRewriteGEPAsOffset(RHS, PtrBase, DL, Nodes))
return nullptr;
// We know we can re-write this as
@@ -691,13 +667,14 @@ static Instruction *transformToIndexedCompare(GEPOperator *GEPLHS, Value *RHS,
// can't have overflow on either side. We can therefore re-write
// this as:
// OFFSET1 cmp OFFSET2
- Value *NewRHS = rewriteGEPAsOffset(ElemTy, RHS, PtrBase, DL, Nodes, IC);
+ Value *NewRHS = rewriteGEPAsOffset(RHS, PtrBase, DL, Nodes, IC);
// RewriteGEPAsOffset has replaced RHS and all of its uses with a re-written
// GEP having PtrBase as the pointer base, and has returned in NewRHS the
// offset. Since Index is the offset of LHS to the base pointer, we will now
// compare the offsets instead of comparing the pointers.
- return new ICmpInst(ICmpInst::getSignedPredicate(Cond), Index, NewRHS);
+ return new ICmpInst(ICmpInst::getSignedPredicate(Cond),
+ IC.Builder.getInt(Offset), NewRHS);
}
/// Fold comparisons between a GEP instruction and something else. At this point
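(Aside on the mechanism used above: the rewrite leans on GEPOperator::collectOffset, which decomposes a GEP into a constant byte offset plus one scaled term per variable index, independently of the source element type. A minimal sketch of what the decomposition yields — the struct type and names below are hypothetical, invented for illustration, not from the patch:

%struct.S = type { i32, [4 x i32] }  ; 20 bytes per element

define ptr @decompose_example(ptr %p, i64 %i) {
  ; For this GEP, collectOffset produces VariableOffsets = { %i -> 20 }
  ; (one entry per variable index, scaled by its stride) and
  ; ConstantOffset = 12 (4 to reach field 1, plus 8 for element 2),
  ; i.e. byte offset = 20 * %i + 12. With a single variable offset, the
  ; new VarOffsets.size() > 1 bailout in canRewriteGEPAsOffset accepts
  ; this GEP; one with two variable indices would be rejected.
  %q = getelementptr inbounds %struct.S, ptr %p, i64 %i, i32 1, i64 2
  ret ptr %q
}
)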
diff --git a/llvm/test/Transforms/InstCombine/indexed-gep-compares.ll b/llvm/test/Transforms/InstCombine/indexed-gep-compares.ll
index 4681fdc59a99693..e34fd032b17d98b 100644
--- a/llvm/test/Transforms/InstCombine/indexed-gep-compares.ll
+++ b/llvm/test/Transforms/InstCombine/indexed-gep-compares.ll
@@ -6,14 +6,15 @@ target datalayout = "e-p:32:32:32-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:32:32-f3
define ptr@test1(ptr %A, i32 %Offset) {
; CHECK-LABEL: @test1(
; CHECK-NEXT: entry:
+; CHECK-NEXT: [[TMP_IDX:%.*]] = shl nsw i32 [[OFFSET:%.*]], 2
; CHECK-NEXT: br label [[BB:%.*]]
; CHECK: bb:
-; CHECK-NEXT: [[RHS_IDX:%.*]] = phi i32 [ [[RHS_ADD:%.*]], [[BB]] ], [ [[OFFSET:%.*]], [[ENTRY:%.*]] ]
-; CHECK-NEXT: [[RHS_ADD]] = add nsw i32 [[RHS_IDX]], 1
-; CHECK-NEXT: [[COND:%.*]] = icmp sgt i32 [[RHS_IDX]], 100
+; CHECK-NEXT: [[RHS_IDX:%.*]] = phi i32 [ [[RHS_ADD:%.*]], [[BB]] ], [ [[TMP_IDX]], [[ENTRY:%.*]] ]
+; CHECK-NEXT: [[RHS_ADD]] = add nsw i32 [[RHS_IDX]], 4
+; CHECK-NEXT: [[COND:%.*]] = icmp sgt i32 [[RHS_IDX]], 400
; CHECK-NEXT: br i1 [[COND]], label [[BB2:%.*]], label [[BB]]
; CHECK: bb2:
-; CHECK-NEXT: [[RHS_PTR:%.*]] = getelementptr inbounds i32, ptr [[A:%.*]], i32 [[RHS_IDX]]
+; CHECK-NEXT: [[RHS_PTR:%.*]] = getelementptr inbounds i8, ptr [[A:%.*]], i32 [[RHS_IDX]]
; CHECK-NEXT: ret ptr [[RHS_PTR]]
;
entry:
@@ -34,15 +35,16 @@ bb2:
define ptr@test2(i32 %A, i32 %Offset) {
; CHECK-LABEL: @test2(
; CHECK-NEXT: entry:
+; CHECK-NEXT: [[TMP_IDX:%.*]] = shl nsw i32 [[OFFSET:%.*]], 2
; CHECK-NEXT: br label [[BB:%.*]]
; CHECK: bb:
-; CHECK-NEXT: [[RHS_IDX:%.*]] = phi i32 [ [[RHS_ADD:%.*]], [[BB]] ], [ [[OFFSET:%.*]], [[ENTRY:%.*]] ]
-; CHECK-NEXT: [[RHS_ADD]] = add nsw i32 [[RHS_IDX]], 1
-; CHECK-NEXT: [[COND:%.*]] = icmp sgt i32 [[RHS_IDX]], 100
+; CHECK-NEXT: [[RHS_IDX:%.*]] = phi i32 [ [[RHS_ADD:%.*]], [[BB]] ], [ [[TMP_IDX]], [[ENTRY:%.*]] ]
+; CHECK-NEXT: [[RHS_ADD]] = add nsw i32 [[RHS_IDX]], 4
+; CHECK-NEXT: [[COND:%.*]] = icmp sgt i32 [[RHS_IDX]], 400
; CHECK-NEXT: br i1 [[COND]], label [[BB2:%.*]], label [[BB]]
; CHECK: bb2:
; CHECK-NEXT: [[A_PTR:%.*]] = inttoptr i32 [[A:%.*]] to ptr
-; CHECK-NEXT: [[RHS_PTR:%.*]] = getelementptr inbounds i32, ptr [[A_PTR]], i32 [[RHS_IDX]]
+; CHECK-NEXT: [[RHS_PTR:%.*]] = getelementptr inbounds i8, ptr [[A_PTR]], i32 [[RHS_IDX]]
; CHECK-NEXT: ret ptr [[RHS_PTR]]
;
entry:
@@ -99,16 +101,17 @@ bb2:
define ptr@test4(i16 %A, i32 %Offset) {
; CHECK-LABEL: @test4(
; CHECK-NEXT: entry:
+; CHECK-NEXT: [[TMP_IDX:%.*]] = shl nsw i32 [[OFFSET:%.*]], 2
; CHECK-NEXT: br label [[BB:%.*]]
; CHECK: bb:
-; CHECK-NEXT: [[RHS_IDX:%.*]] = phi i32 [ [[RHS_ADD:%.*]], [[BB]] ], [ [[OFFSET:%.*]], [[ENTRY:%.*]] ]
-; CHECK-NEXT: [[RHS_ADD]] = add nsw i32 [[RHS_IDX]], 1
-; CHECK-NEXT: [[COND:%.*]] = icmp sgt i32 [[RHS_IDX]], 100
+; CHECK-NEXT: [[RHS_IDX:%.*]] = phi i32 [ [[RHS_ADD:%.*]], [[BB]] ], [ [[TMP_IDX]], [[ENTRY:%.*]] ]
+; CHECK-NEXT: [[RHS_ADD]] = add nsw i32 [[RHS_IDX]], 4
+; CHECK-NEXT: [[COND:%.*]] = icmp sgt i32 [[RHS_IDX]], 400
; CHECK-NEXT: br i1 [[COND]], label [[BB2:%.*]], label [[BB]]
; CHECK: bb2:
; CHECK-NEXT: [[TMP0:%.*]] = zext i16 [[A:%.*]] to i32
; CHECK-NEXT: [[A_PTR:%.*]] = inttoptr i32 [[TMP0]] to ptr
-; CHECK-NEXT: [[RHS_PTR:%.*]] = getelementptr inbounds i32, ptr [[A_PTR]], i32 [[RHS_IDX]]
+; CHECK-NEXT: [[RHS_PTR:%.*]] = getelementptr inbounds i8, ptr [[A_PTR]], i32 [[RHS_IDX]]
; CHECK-NEXT: ret ptr [[RHS_PTR]]
;
entry:
@@ -137,14 +140,15 @@ define ptr@test5(i32 %Offset) personality ptr @__gxx_personality_v0 {
; CHECK-NEXT: [[A:%.*]] = invoke ptr @fun_ptr()
; CHECK-NEXT: to label [[CONT:%.*]] unwind label [[LPAD:%.*]]
; CHECK: cont:
+; CHECK-NEXT: [[TMP_IDX:%.*]] = shl nsw i32 [[OFFSET:%.*]], 2
; CHECK-NEXT: br label [[BB:%.*]]
; CHECK: bb:
-; CHECK-NEXT: [[RHS_IDX:%.*]] = phi i32 [ [[RHS_ADD:%.*]], [[BB]] ], [ [[OFFSET:%.*]], [[CONT]] ]
-; CHECK-NEXT: [[RHS_ADD]] = add nsw i32 [[RHS_IDX]], 1
-; CHECK-NEXT: [[COND:%.*]] = icmp sgt i32 [[RHS_IDX]], 100
+; CHECK-NEXT: [[RHS_IDX:%.*]] = phi i32 [ [[RHS_ADD:%.*]], [[BB]] ], [ [[TMP_IDX]], [[CONT]] ]
+; CHECK-NEXT: [[RHS_ADD]] = add nsw i32 [[RHS_IDX]], 4
+; CHECK-NEXT: [[COND:%.*]] = icmp sgt i32 [[RHS_IDX]], 400
; CHECK-NEXT: br i1 [[COND]], label [[BB2:%.*]], label [[BB]]
; CHECK: bb2:
-; CHECK-NEXT: [[RHS_PTR:%.*]] = getelementptr inbounds i32, ptr [[A]], i32 [[RHS_IDX]]
+; CHECK-NEXT: [[RHS_PTR:%.*]] = getelementptr inbounds i8, ptr [[A]], i32 [[RHS_IDX]]
; CHECK-NEXT: ret ptr [[RHS_PTR]]
; CHECK: lpad:
; CHECK-NEXT: [[L:%.*]] = landingpad { ptr, i32 }
@@ -181,15 +185,16 @@ define ptr@test6(i32 %Offset) personality ptr @__gxx_personality_v0 {
; CHECK-NEXT: [[A:%.*]] = invoke i32 @fun_i32()
; CHECK-NEXT: to label [[CONT:%.*]] unwind label [[LPAD:%.*]]
; CHECK: cont:
+; CHECK-NEXT: [[TMP_IDX:%.*]] = shl nsw i32 [[OFFSET:%.*]], 2
; CHECK-NEXT: br label [[BB:%.*]]
; CHECK: bb:
-; CHECK-NEXT: [[RHS_IDX:%.*]] = phi i32 [ [[RHS_ADD:%.*]], [[BB]] ], [ [[OFFSET:%.*]], [[CONT]] ]
-; CHECK-NEXT: [[RHS_ADD]] = add nsw i32 [[RHS_IDX]], 1
-; CHECK-NEXT: [[COND:%.*]] = icmp sgt i32 [[RHS_IDX]], 100
+; CHECK-NEXT: [[RHS_IDX:%.*]] = phi i32 [ [[RHS_ADD:%.*]], [[BB]] ], [ [[TMP_IDX]], [[CONT]] ]
+; CHECK-NEXT: [[RHS_ADD]] = add nsw i32 [[RHS_IDX]], 4
+; CHECK-NEXT: [[COND:%.*]] = icmp sgt i32 [[RHS_IDX]], 400
; CHECK-NEXT: br i1 [[COND]], label [[BB2:%.*]], label [[BB]]
; CHECK: bb2:
; CHECK-NEXT: [[A_PTR:%.*]] = inttoptr i32 [[A]] to ptr
-; CHECK-NEXT: [[RHS_PTR:%.*]] = getelementptr inbounds i32, ptr [[A_PTR]], i32 [[RHS_IDX]]
+; CHECK-NEXT: [[RHS_PTR:%.*]] = getelementptr inbounds i8, ptr [[A_PTR]], i32 [[RHS_IDX]]
; CHECK-NEXT: ret ptr [[RHS_PTR]]
; CHECK: lpad:
; CHECK-NEXT: [[L:%.*]] = landingpad { ptr, i32 }
diff --git a/llvm/test/Transforms/InstCombine/opaque-ptr.ll b/llvm/test/Transforms/InstCombine/opaque-ptr.ll
index 900d3f142a6ff79..4448a49ad92bf5d 100644
--- a/llvm/test/Transforms/InstCombine/opaque-ptr.ll
+++ b/llvm/test/Transforms/InstCombine/opaque-ptr.ll
@@ -382,14 +382,15 @@ define <4 x i1> @compare_geps_same_indices_scalar_vector_base_mismatch(ptr %ptr,
define ptr @indexed_compare(ptr %A, i64 %offset) {
; CHECK-LABEL: @indexed_compare(
; CHECK-NEXT: entry:
+; CHECK-NEXT: [[TMP_IDX:%.*]] = shl nsw i64 [[OFFSET:%.*]], 2
; CHECK-NEXT: br label [[BB:%.*]]
; CHECK: bb:
-; CHECK-NEXT: [[RHS_IDX:%.*]] = phi i64 [ [[RHS_ADD:%.*]], [[BB]] ], [ [[OFFSET:%.*]], [[ENTRY:%.*]] ]
-; CHECK-NEXT: [[RHS_ADD]] = add nsw i64 [[RHS_IDX]], 1
-; CHECK-NEXT: [[COND:%.*]] = icmp sgt i64 [[RHS_IDX]], 100
+; CHECK-NEXT: [[RHS_IDX:%.*]] = phi i64 [ [[RHS_ADD:%.*]], [[BB]] ], [ [[TMP_IDX]], [[ENTRY:%.*]] ]
+; CHECK-NEXT: [[RHS_ADD]] = add nsw i64 [[RHS_IDX]], 4
+; CHECK-NEXT: [[COND:%.*]] = icmp sgt i64 [[RHS_IDX]], 400
; CHECK-NEXT: br i1 [[COND]], label [[BB2:%.*]], label [[BB]]
; CHECK: bb2:
-; CHECK-NEXT: [[RHS_PTR:%.*]] = getelementptr inbounds i32, ptr [[A:%.*]], i64 [[RHS_IDX]]
+; CHECK-NEXT: [[RHS_PTR:%.*]] = getelementptr inbounds i8, ptr [[A:%.*]], i64 [[RHS_IDX]]
; CHECK-NEXT: ret ptr [[RHS_PTR]]
;
entry:
@@ -410,16 +411,16 @@ bb2:
define ptr @indexed_compare_different_types(ptr %A, i64 %offset) {
; CHECK-LABEL: @indexed_compare_different_types(
; CHECK-NEXT: entry:
-; CHECK-NEXT: [[TMP:%.*]] = getelementptr inbounds i32, ptr [[A:%.*]], i64 [[OFFSET:%.*]]
+; CHECK-NEXT: [[TMP_IDX:%.*]] = shl nsw i64 [[OFFSET:%.*]], 2
; CHECK-NEXT: br label [[BB:%.*]]
; CHECK: bb:
-; CHECK-NEXT: [[RHS:%.*]] = phi ptr [ [[RHS_NEXT:%.*]], [[BB]] ], [ [[TMP]], [[ENTRY:%.*]] ]
-; CHECK-NEXT: [[LHS:%.*]] = getelementptr inbounds i64, ptr [[A]], i64 100
-; CHECK-NEXT: [[RHS_NEXT]] = getelementptr inbounds i32, ptr [[RHS]], i64 1
-; CHECK-NEXT: [[COND:%.*]] = icmp ult ptr [[LHS]], [[RHS]]
+; CHECK-NEXT: [[RHS_IDX:%.*]] = phi i64 [ [[RHS_ADD:%.*]], [[BB]] ], [ [[TMP_IDX]], [[ENTRY:%.*]] ]
+; CHECK-NEXT: [[RHS_ADD]] = add nsw i64 [[RHS_IDX]], 4
+; CHECK-NEXT: [[COND:%.*]] = icmp sgt i64 [[RHS_IDX]], 800
; CHECK-NEXT: br i1 [[COND]], label [[BB2:%.*]], label [[BB]]
; CHECK: bb2:
-; CHECK-NEXT: ret ptr [[RHS]]
+; CHECK-NEXT: [[RHS_PTR:%.*]] = getelementptr inbounds i8, ptr [[A:%.*]], i64 [[RHS_IDX]]
+; CHECK-NEXT: ret ptr [[RHS_PTR]]
;
entry:
%tmp = getelementptr inbounds i32, ptr %A, i64 %offset
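To make the overall effect concrete, here is a hedged before/after sketch modeled on the @indexed_compare test above (illustrative IR, not verbatim from the test file):

; Before: the loop advances a pointer and the compare is between pointers.
define ptr @sketch(ptr %A, i64 %offset) {
entry:
  %tmp = getelementptr inbounds i32, ptr %A, i64 %offset
  br label %bb
bb:
  %rhs = phi ptr [ %rhs.next, %bb ], [ %tmp, %entry ]
  %lhs = getelementptr inbounds i32, ptr %A, i64 100
  %rhs.next = getelementptr inbounds i32, ptr %rhs, i64 1
  %cond = icmp ult ptr %lhs, %rhs
  br i1 %cond, label %bb2, label %bb
bb2:
  ret ptr %rhs
}
; After the fold, both sides share the base %A, so the loop carries a byte
; offset instead of a pointer:
;   %tmp.idx = shl nsw i64 %offset, 2        ; 4 * %offset
;   %rhs.idx = phi i64 [ %rhs.add, %bb ], [ %tmp.idx, %entry ]
;   %rhs.add = add nsw i64 %rhs.idx, 4
;   %cond = icmp sgt i64 %rhs.idx, 400       ; 4 * 100
; and the pointer is rematerialized only at the exit:
;   %rhs.ptr = getelementptr inbounds i8, ptr %A, i64 %rhs.idx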
Force-pushed from d248f7d to 9c7db50.
ping
- auto *Op = NewInsts[GEP->getOperand(0)];
+ Value *Op = NewInsts[GEP->getOperand(0)];
+ Value *OffsetV = emitGEPOffset(&Builder, DL, GEP);
  if (isa<ConstantInt>(Op) && cast<ConstantInt>(Op)->isZero())
nit: match(Op, m_Zero())
LGTM. That being said, I'm not particularly familiar with these functions, so can you wait a day before pushing, to give others time to review?
I saw performance regressions with this patch. Please give me some time to check the artifacts.
It seems that converting the GEP into shl+add may cause a regression :(
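A hedged sketch of the shape behind that concern (illustrative, not from the reported benchmark): the typed per-iteration GEP kept the scaling implicit in the addressing, whereas the offset form carries explicit arithmetic:

define i64 @offset_form(i64 %offset) {
  ; What used to be `getelementptr inbounds i32, ptr %A, i64 %offset`
  ; followed by pointer increments is now explicit byte math:
  %idx = shl nsw i64 %offset, 2   ; 4 * %offset
  %add = add nsw i64 %idx, 4      ; advance by one i32
  ret i64 %add
}

On targets with rich addressing modes, the original GEP could fold into a single addressing computation, so the shl+add form can end up costing extra instructions.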
Force-pushed from e04064e to af6cc8f.
@dtcxzyw I'm not seeing something exactly like that (using usub.sat) in my own IR diffs, but I do see something quite similar in pcmul.c.ll (https://gist.github.com/nikic/805ec1c7a79b8a94ff6c1a60a4a19d3e). As far as I can tell, what happens there is that a loop that stores 0 gets converted into a memset by LIR now. Nominally this is a positive change, but it can also be negative if the number of stores is actually low (as the trip count calculation is complex).
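For reference, a hedged sketch of the store-loop shape that LoopIdiomRecognize can turn into a memset (illustrative IR, not taken from pcmul.c.ll):

define void @zero(ptr %p, i64 %n) {
entry:
  %guard = icmp sgt i64 %n, 0
  br i1 %guard, label %loop, label %exit
loop:
  %i = phi i64 [ 0, %entry ], [ %i.next, %loop ]
  %q = getelementptr inbounds i8, ptr %p, i64 %i
  store i8 0, ptr %q
  %i.next = add nuw nsw i64 %i, 1
  %c = icmp slt i64 %i.next, %n
  br i1 %c, label %loop, label %exit
exit:
  ret void
}
; LIR can replace the loop with a single
;   call void @llvm.memset.p0.i64(ptr align 1 %p, i8 0, i64 %n, i1 false)
; which wins for large trip counts but can regress when the count is small
; and the trip-count computation itself is complex.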
LGTM.