Conversation

@AmrDeveloper
Member

Backporting the renaming of ComplexAttr to ConstComplexAttr from upstream

felixdaas and others added 30 commits August 10, 2025 21:32
…llvm#1660)

Fixes llvm#1405

As far as I understand it, eraseIfSafe should intuitively check whether all
memref load/store ops that obtain their offsets from the
memref.reinterpret_cast ops in the eraseList have been created. If so, the
operations in the eraseList are erased; otherwise they are kept until all
cir.load/store ops relying on them are lowered.

One challenge here is that we can't actually do this by checking the
uses of memref.reinterpret_cast operations, as their results aren't
actually used in the created memref load/store ops (the base alloca
result found via findBaseAndIndices is used). Because of this, this base
alloca result is passed as the newAddr Value to eraseIfSafe in the
[cir.load](https://github.com/llvm/clangir/blob/6e5fa09550c98f84d017873ed3e5667fd5fd909c/clang/lib/CIR/Lowering/ThroughMLIR/LowerCIRToMLIR.cpp#L236C5-L242C6)/[cir.store](https://github.com/llvm/clangir/blob/6e5fa09550c98f84d017873ed3e5667fd5fd909c/clang/lib/CIR/Lowering/ThroughMLIR/LowerCIRToMLIR.cpp#L266C1-L271C6)
lowerings.

Currently the eraseIfSafe function counts all memref.load/store values
that use this base address:

https://github.com/llvm/clangir/blob/6e5fa09550c98f84d017873ed3e5667fd5fd909c/clang/lib/CIR/Lowering/ThroughMLIR/LowerCIRToMLIR.cpp#L215-L218

The problem here is that this also counts all the other
memref.load/store ops which store/load to/from the base address but
don't use the memref.reinterpret_cast ops to obtain their offsets. Because
of this the lowering fails if multiple stores/loads to/from the same
array are performed in the original C code, as in the example of issue
llvm#1405: for the later memref.load/store ops, the newUsedNum is larger
than oldUsedNum (which only counts the uses of the cir.ptr_stride op), and
the memref.reinterpret_cast ops are not removed.

This PR contains a first attempt to fix this (i.e. only count the
memref.load/store ops that obtain offsets from the
memref.reinterpret_cast ops in the eraseList). I only count
memref.load/store ops if their first offset value corresponds to the
offset value in the last memref.reinterpret_cast.

Limitations of this PR:
This fixes the indirect lowering of the example in issue llvm#1405 and also
works for other tests I made where multiple stores/loads to/from the same
array are performed, but it assumes two things to be the case:

1. The cir.const used as the stride in the cir.ptr_stride op is not reused in
other cir.ptr_stride ops
2. Only the last cir.ptr_stride can have multiple uses (for multidimensional
arrays)

Both of these assumptions seem to hold for the C code I tested (for
the translation of accesses to C/C++ arrays to CIR ops), but the
eraseIfSafe function might need to be changed/further improved in the
future to support cases where those assumptions fail.

For example, if an optimization is run on CIR where cir.const ops
with the same value are reused for different cir.ptr_stride ops, the
indirect lowering would still fail. Or if a subarray of a
multidimensional array is accessed, e.g.
```c
int arr[3][4];
int *row = arr[1];
```
(Note: I'm pretty sure it isn't sufficient to just extend the
function to check whether each offset value corresponds to the offset value
in the matching memref.reinterpret_cast; we would probably need to
check separately for each memref.reinterpret_cast whether it can be removed
(instead of removing all or none of the ops in the eraseList).)

While debugging issue llvm#1405 I noticed a few things that I think could be
improved in the canonical ForOp lowering:

1. There is one edge case where, in my opinion, the forOp should not be
marked as canonical:
```c
  int i;
  for (i = 0; i < 100; i++);
  i += 10;
```
(With the current lowering this for is marked canonical, but since i is
replaced by the induction variable of the scf.for op and the actual memory
representing i is not updated, i has a wrong value after the loop. This is
avoided when we lower this for as a non-canonical for.)
2. I think we can directly replace the loads of the CIR IV with the
scf IV and not create the dummy arith.add IV, 0 op (I think this might
be a relic from a previous MLIR version where replaceOp only worked
with operations, not values); see the sketch after this list. This makes
the IR more readable and easier to understand. If I'm missing something
here and the arith.add IV, 0 has a purpose I'm not seeing, let me know.
3. When implementing the change in 1, we know that in a canonical for
the induction variable is defined inside the for and is only valid
there. Because of this, and since we replace the loads of the CIR IV with
the scf IV, we can remove the unnecessary alloca and store ops created
for the cir.IV.

(These changes only show up in a non-optimized binary and aren't
relevant when running clang with optimizations, but I still think they
improve the readability and understandability of the core IR.)
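To make point 2 concrete, here is a rough before/after sketch (simplified,
hypothetical IR; the constants and loop bounds are made up for illustration):

```mlir
%c0 = arith.constant 0 : index
%c1 = arith.constant 1 : index
%c100 = arith.constant 100 : index

// current lowering (simplified): loads of the CIR IV become a dummy "IV + 0"
scf.for %iv = %c0 to %c100 step %c1 {
  %iv_copy = arith.addi %iv, %c0 : index
  // ... body uses %iv_copy ...
}

// proposed lowering: the body uses %iv directly, and the alloca/store
// created for the cir.IV can be dropped
scf.for %iv = %c0 to %c100 step %c1 {
  // ... body uses %iv ...
}
```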

I also noticed that we are currently only running the SCFPreparePass
when we are printing the result of the CIR-to-core-dialect translation:

https://github.com/llvm/clangir/blob/6e5fa09550c98f84d017873ed3e5667fd5fd909c/clang/lib/CIR/CodeGen/CIRPasses.cpp#L84-L85
Because of this, compiling to an object file (or LLVM IR) with the
indirect lowering path fails if the code contains a canonical for. I
suggest always running this pass when we're going through MLIR.

## Passing through is_nontemporal in load/store lowerings
Since the corresponding memref ops also have this attribute, it's
basically just passing through a boolean (and doesn't need any special
handling, I think). Even though there is probably no practical application
now, I think this might avoid bugs/confusion in the future. If there is
any reason not to do this, let me know.

I also added a new test case for arrays, adjusted the canonical forOp
test to reflect the changes, combined the non-canonical forOp
tests into one file, and added a test case for the edge case described
before.

(Note: if I find the time, I will try to run the SingleSource test suite
with the throughMLIR lowering in the next week to get a better idea of
where we are with this pipeline. In general I agree with everything
discussed in issue llvm#1219, but I think we can probably already add more
support for arrays (and maybe pointers) with the existing MLIR
core constructs.)
)

This PR introduces
[`TryMarkNoThrow`](https://github.com/llvm/clangir/blob/6e5fa09550c98f84d017873ed3e5667fd5fd909c/clang/lib/CodeGen/CodeGenFunction.cpp#L1394).
[`isInterposable`](https://github.com/llvm/clangir/blob/6e5fa09550c98f84d017873ed3e5667fd5fd909c/clang/lib/CodeGen/CodeGenFunction.cpp#L1397C10-L1397C26)
isn't fully implemented, and I'm not quite sure we need it. Anyway, I
have introduced a missing feature, `getSemanticInterposition`, relevant
for its completion.

I have also updated an old test --
[`foo()`](https://github.com/llvm/clangir/blob/6e5fa09550c98f84d017873ed3e5667fd5fd909c/clang/test/CIR/CodeGen/try-catch-dtors.cpp#L313)
should be marked as unwind/nothrow. I have compared with the original
CodeGen and attached the llvm output for verification.

One concern I have is whether the cases where I mimic
[`mayThrow`](https://github.com/llvm/clangir/blob/6e5fa09550c98f84d017873ed3e5667fd5fd909c/llvm/lib/IR/Instruction.cpp#L1158)
from the OG are enough; please let me know your thoughts.
This PR adds support for the `-fdump-record-layouts` flag. It enables
printing both the `CIRGenRecordLayout` and the `ASTRecordLayout`,
similar to what is done in CodeGen.
Backport support for Complex value initialization from the empty
InitList.

Backported from llvm/llvm-project#143192
)

Currently we can't handle continues nested under `IfOp`, because if we
replace it with a yield, then it only breaks out of that `if`-statement,
rather than continuing the whole loop.
Perhaps that should be done by changing the whole structure of the while
loop.

Co-authored-by: Yue Huang <yue.huang@terapines.com>
…llvm#1670)

Backport the VecShuffleOp verifier to catch invalid indices

Implemented in llvm/llvm-project#143262
…llvm#1673)

When we process a completed Enum type, we were checking to see if the
completed type was in the type cache and clearing the cache if the
completed and converted underlying type for the enum doesn't pass an
`isInteger(32)` check. Unfortunately, this checks to see if the type is
the MLIR builtin 32-bit integer type, whereas it will always be a CIR
integer type, so the check always fails.

I don't believe there can ever be a case where the forward declared type
for the enum doesn't match the completed type, so we should never need
to clear the cache. This change replaces the previous check with an
assert that compares the actual completed type to the cached type.
…vm#1672)

This removes unnecessary parens from the assembly format of
BaseClassAddrOp, DerivedClassAddrOp, BaseDataMemberOp,
DerivedDataMemberOp, BaseMethodOp, and DerivedMethodOp to bring them
into conformance with the CIR ASM Style Guide.

There is no functional change beyond the assembly format change.
- Replace std::map with llvm::StringMap and std::vector with
llvm::SmallVector for improved performance.
 - Preserve the behavior
 - Remove unused headers
Backport creating Array type with ComplexType as element type
As the scf dialect does not support early exits, it might be necessary
to change the body of WhileOp to implement the semantics of ContinueOp.
I chose to add a guard `if (!cond)` for everything following the
`continue`.
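A rough sketch of the idea in C terms (the helpers `cond`, `skip`, `step1`,
and `step2` are made up for illustration; the actual rewrite happens at the
CIR/SCF level):

```c
int cond(void);
int skip(void);
void step1(void);
void step2(void);

// original: a continue nested under an if
void before(void) {
  while (cond()) {
    step1();
    if (skip())
      continue;
    step2();
  }
}

// conceptual rewrite: everything following the continue is guarded,
// since the scf dialect has no early exits
void after(void) {
  while (cond()) {
    step1();
    if (!skip()) {
      step2();
    }
  }
}
```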

Co-authored-by: Yue Huang <yue.huang@terapines.com>
This PR is related to llvm#1685 and adds some basic support for the printf
function.

Limitations:
1. It only works if all variadic params are of basic integer/float type
(for more info on why memref-typed operands don't work, see llvm#1685)
2. It only works if the format string is defined directly at the
printf call site (see the example below)
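For illustration, a call that stays within these limitations (hypothetical
example, not taken from the PR's tests):

```c
#include <stdio.h>

int main(void) {
  int i = 42;
  double d = 3.14;
  // the format string is a literal given directly at the call site,
  // and all variadic arguments are plain integer/float values
  printf("i = %d, d = %f\n", i, d);
  return 0;
}
```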

The downside of this PR is that handling this edge case adds
significant code bloat and reduces readability of the cir.call op
lowering (I tried to insert some meaningful comments to improve the
readability), but I think it's worth doing so that we have some basic
printf support (without adding an extra CIR operation) until upstream
support for variadic functions is added to the func dialect. A few
more tests (which use such a basic form of printf) in the LLVM Single
Source test suite also pass with this PR:

before this PR:

Testing Time: 4.00s

Total Discovered Tests: 1833
  Passed            :  420 (22.91%)
  Failed            :   10 (0.55%)
  Executable Missing: 1403 (76.54%)

with this PR:

Testing Time: 10.29s

Total Discovered Tests: 1833
  Passed            :  458 (24.99%)
  Failed            :    6 (0.33%)
  Executable Missing: 1369 (74.69%)
This PR addresses the feedback from
llvm/llvm-project#142041 (comment).
Our algorithm for accumulating bitfields has diverged from CodeGen since
Clang 19.

There is one key difference: in CIR, we use the function
`getBitfieldStorageType`, which checks whether the bit width of the
current accumulation run is a valid fundamental width (i.e., a power of
two: 8, 16, 32, 64). If it is, it returns a CIR type of that size;
otherwise, it returns an array with the closest alignment.

For example, given the following struct:

```c
struct S {
  int a : 4;
  int b : 27;
  int c : 17;
  int d : 2;
  int e : 15;
  unsigned f;
};
```

The CodeGen output is:

```llvm
%struct.S = type { i64, i16, i32 }
```

Whereas the new CIR algorithm produces:

```mlir
!cir.record<struct "S" {!cir.array<!u8i x 7>, !u16i, !u32i}>
```
In CIR, the algorithm accumulates up to field `d`, resulting in 50
accumulated bits (4 + 27 + 17 + 2). Since 50 is not a fundamental width,
the closest alignment is 56 bits, which leads to the type
`!cir.array<!u8i x 7>`. The algorithm stops before accumulating field `e`
because including it would exceed the register size (64), which is not ideal.

At this point, it's unclear whether this divergence from CodeGen
represents an improvement. If we wanted to match CodeGen exactly, we
would need to replace the use of `getBitfieldStorageType` with
`getUIntNType`. The difference is that `getUIntNType` always returns the
closest power-of-two integer type instead of falling back to an array
when the size is not a fundamental width.

With this change, CIR would match CodeGen's layout exactly. It would
require the following small code change:
```diff
diff --git a/clang/lib/CIR/CodeGen/CIRRecordLayoutBuilder.cpp b/clang/lib/CIR/CodeGen/CIRRecordLayoutBuilder.cpp
index 7c1802b..17538b191738 100644
--- a/clang/lib/CIR/CodeGen/CIRRecordLayoutBuilder.cpp
+++ b/clang/lib/CIR/CodeGen/CIRRecordLayoutBuilder.cpp
@@ -616,7 +616,7 @@ CIRRecordLowering::accumulateBitFields(RecordDecl::field_iterator Field,
       if (!InstallBest) {
         // Determine if accumulating the just-seen span will create an expensive
         // access unit or not.
-        mlir::Type Type = getBitfieldStorageType(astContext.toBits(AccessSize));
+        mlir::Type Type = getUIntNType(astContext.toBits(AccessSize));
         if (!astContext.getTargetInfo().hasCheapUnalignedBitFieldAccess())
           llvm_unreachable("NYI");

@@ -674,12 +674,12 @@ CIRRecordLowering::accumulateBitFields(RecordDecl::field_iterator Field,
         // remain there after a stable sort.
         mlir::Type Type;
         if (BestClipped) {
-          assert(getSize(getBitfieldStorageType(
+          assert(getSize(getUIntNType(
                      astContext.toBits(AccessSize))) > AccessSize &&
                  "Clipped access need not be clipped");
           Type = getByteArrayType(AccessSize);
         } else {
-          Type = getBitfieldStorageType(astContext.toBits(AccessSize));
+          Type = getUIntNType(astContext.toBits(AccessSize));
           assert(getSize(Type) == AccessSize &&
                  "Unclipped access must be clipped");
         }
```
You can see a comparison between the two functions at
https://godbolt.org/z/qjx1MaEWG.

I'm currently unsure whether using one function over the other has
performance implications. Regarding the **ARM error I mentioned in the
meeting**: it was an `assert` I had forgotten to update. It's now fixed,
sorry for the confusion.
- Create CIR-specific EnumAttr bases and prefix enum attributes with `CIR_`, which automatically puts the enum into the `cir` namespace

- Remove unnecessary enum case definitions

- Unify naming of enum values to use capitals consistently and make enumerations start from 0

- Remove now-unnecessary printers/parsers that are now generated automatically
…vm#1683)

No test cases provided for `lzcnt_u16` as presented in the OG codegen
equivalent: `test/CodeGen/X86/lzcnt-builtins.c`.

related: llvm#1404
Implement base-2 exponential intrinsic as part of llvm#1192
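For example, source like the following should exercise the new lowering (a
minimal sketch, assuming the usual builtin spelling):

```c
// expected to lower through the base-2 exponential intrinsic
float exp2_example(float x) {
  return __builtin_exp2f(x);
}
```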
…lvm#1671)

Hi, this is my first PR here! I tried to mirror some of the patterns already
present in both the codegen lib and its tests.

I'm very excited to start contributing and potentially making an impact
in this project! feedback is much appreciated.
Converted from CodeGen:
```c++
      assert(!Base.isVirtual() && "should not see vbases here");
      auto *BaseRD = Base.getType()->getAsCXXRecordDecl();
      Address V = CGF.GetAddressOfDirectBaseInCompleteClass(
          Dest.getAddress(), CXXRD, BaseRD,
          /*isBaseVirtual*/ false);
      AggValueSlot AggSlot = AggValueSlot::forAddr(
          V, Qualifiers(),
          AggValueSlot::IsDestructed,
          AggValueSlot::DoesNotNeedGCBarriers,
          AggValueSlot::IsNotAliased,
          CGF.getOverlapForBaseInit(CXXRD, BaseRD, Base.isVirtual()));
      CGF.EmitAggExpr(InitExprs[curInitIndex++], AggSlot);

      if (QualType::DestructionKind dtorKind =
              Base.getType().isDestructedType())
        CGF.pushDestroyAndDeferDeactivation(dtorKind, V, Base.getType());
```
Moved rd-related intrinsic tests to a different file, similar to
`clang/test/CodeGen/X86/rd-builtins.c`. Let me know if that's the right
call.

related: llvm#1404
Update `__real__` operation to use ComplexRealOp and act directly on the
complex value.


Ref:
llvm/llvm-project#144235 (review)
Update `__imag__` operation to use ComplexImagOp and act directly on the
complex value.

Ref:
llvm/llvm-project#144235 (review)
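For context, the source-level constructs in question (a minimal sketch):

```c
float complex_parts(float _Complex z) {
  float re = __real__ z;  // emitted via ComplexRealOp on the complex value
  float im = __imag__ z;  // emitted via ComplexImagOp on the complex value
  return re + im;
}
```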
… tzcnt_u64 (llvm#1691)

Related: llvm#1404

Implements codegen for the X86 builtins `tzcnt_u16`, `tzcnt_u32`, and
`tzcnt_u64`.

While adding tests for both the Intel and AMD variants of BMI
intrinsics, I ran into issues when placing them in the same file.

Both `_tzcnt_u16` (Intel) and `__tzcnt_u16` (AMD) map to the same inline
wrapper in <immintrin.h>. Whether they're isolated or both are present
in a test file, Clang emits only one definition (`__tzcnt_u16`), which I
think causes FileCheck mismatches, i.e., the CHECK lines for the Intel
version (`test_tzcnt_u16`) would fail when looking for `_tzcnt_u16`.
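For reference, the two spellings in question (a minimal sketch; building it
requires BMI support, e.g. -mbmi):

```c
#include <immintrin.h>

unsigned short tzcnt16_both(unsigned short x) {
  unsigned short a = _tzcnt_u16(x);   // Intel spelling
  unsigned short b = __tzcnt_u16(x);  // AMD spelling; both resolve to the same inline wrapper
  return (unsigned short)(a + b);
}
```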

I tried updating the CHECK lines for the Intel test to match the emitted
symbol (`__tzcnt_u16`), but it still failed unless the Intel test was
run in isolation, and only when CHECK was updated to `_tzcnt_u16` even
though `__tzcnt_u16` is what is emitted. I also experimented with
split-file to isolate the tests, but that didn’t resolve the issue
either.

To keep the tests independent, I split the Intel and AMD tests into
separate files. I was wondering if this is fine, as in OG Clang both
Intel and AMD variants are in the same file
(https://github.com/llvm/clangir/blob/main/clang/test/CodeGen/X86/bmi-builtins.c)
As we need to preserve the ContinueOp for inner loops when we convert
outer while loops, we must not mark the cir dialect as illegal.
Otherwise, MLIR rejects this kind of preservation and considers it a
pass failure.
It seems we need another way to check whether the CIR is fully lowered.

Co-authored-by: Yue Huang <yue.huang@terapines.com>
Backporting the VecCreateOp Folder from the upstream
Backporting the VecSplatOp simplifier from the upstream
Lancern and others added 23 commits October 10, 2025 10:39
…1940)

This patch adds support for the builtin functions
`__atomic_test_and_set` and `__atomic_clear`.
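For reference, typical uses of these builtins (a minimal sketch, not taken
from the PR's tests):

```c
#include <stdbool.h>

static bool flag;

void lock(void) {
  // atomically set the flag and return its previous value;
  // spin while it was already set
  while (__atomic_test_and_set(&flag, __ATOMIC_ACQUIRE))
    ;
}

void unlock(void) {
  // atomically clear the flag
  __atomic_clear(&flag, __ATOMIC_RELEASE);
}
```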
Currently, in the case of an empty try block, we emit the tryOp in a scope and
revisit the catch handlers, and in the lowering pass we have a check to
delete empty scopes, but we end up with scopes that contain a `cir.yield`
or an unconditional jump to another scope, which will not be deleted;
for the catch handlers we also emit globals for the type info.


But we can already ignore the try-catch statement if the try block is
empty, because that means all catch handlers are dead code too. This
avoids emitting scopes that we know will be removed later and
also avoids emitting unnecessary type info.

Example of current emitted IR: https://godbolt.org/z/5d3jEe5K8
Backporting support for ChooseExpr in AggregateExpr from upstream
) (llvm#1946)

Documents how ClangIR handles vec3 array element loads using the
vec3->vec4 optimization for aligned memory access. The test shows the
pointer cast to vec4, load as vec4, and shuffle back to vec3 pattern in
both CIR and LLVM IR.

Closes llvm#685
Backporting support for GenericSelectionExpr in AggregateExpr from
upstream
Backporting UnaryExtension for AggregateExpr from upstream
This patch extends `emitDirectCallee` to resolve HIP host launches to
the correct kernel stub (`__device_stub__...`), matching CUDA semantics
While upstreaming the code for handling exact dynamic casts, I noticed
that we were not checking to see if the source pointer was null before
using it to load the vtable. This change adds that check.
Backport ComplexType support in CallExpr args from the upstream
…ion (llvm#1945)

This PR introduces the `cir.indirectbr` operation to support GCC’s
labels-as-values extension, which allows `goto` statements to jump to
computed block addresses. The implementation creates a dedicated block
that hosts the `indirectbr`, where the target address is provided
through a PHI Node, similar to how classic code generation handles
indirect gotos.
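For reference, the extension this targets (a minimal sketch):

```c
int dispatch(int i) {
  // labels-as-values: take label addresses with && and jump via goto *
  static void *targets[] = { &&even, &&odd };
  goto *targets[i & 1];
even:
  return 0;
odd:
  return 1;
}
```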
Part of llvm#1912

This PR only fixes 1 file. Also added a CI file to see if it'd run
correctly
PR for llvm#1791

---------

Co-authored-by: yotto3s <yupon.tysm@gmail.com>
Backport global initializer for ComplexType from the upstream
Use ZeroInitAttr to initialize a null value for a ComplexType with zero
inits, instead of building a ConstComplexAttr, similar to upstream
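For illustration, the kind of initialization this covers (a minimal sketch):

```c
// empty init list: the global gets a zero initializer rather than a
// const_complex attribute with explicit zero real/imaginary parts
float _Complex z = {};
```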
closes llvm#1794 

This PR adds support for the `__sync_lock_test_and_set` builtin.
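Typical use of the builtin, for reference (a minimal sketch, not taken from
the PR's tests):

```c
static int lock_word;

void acquire(void) {
  // atomically write 1 and return the previous value;
  // spin while it was already 1
  while (__sync_lock_test_and_set(&lock_word, 1))
    ;
}

void release(void) {
  // __sync_lock_release shown only for symmetry; the PR itself covers
  // __sync_lock_test_and_set
  __sync_lock_release(&lock_word);
}
```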

Signed-off-by: vishruth-thimmaiah <vishruththimmaiah@gmail.com>
Backport the VisitCXXDefaultArgExpr support for ComplexType from the
upstream
A simple cleanup: just some small changes to use explicit types
instead of `auto`, in the case of `BI__builtin_ia32_pslldqi*_byteshift`.

Following the suggestion of this comment:
llvm#1886 (comment)
…lvm#1965)

The goal of this PR is to prepare the code for backporting the new
implementations of arithmetic operations for ComplexType and other
expression support:

- Reorder the functions in CIRGenExprComplex similar to upstream and
classical codegen.
- Remove unnecessary functions `emitAddrOfRealComponent` and
`emitAddrOfImagComponent`.
- Update the code style to be similar to upstream.
Use the op create function in the CIRGenExprComplex file after the
restructure

@xlauko xlauko (Collaborator) left a comment

Why was this change introduced upstream in the first place?

From my point of view it is redundant information on the attribute, as an attribute is by definition a constant. I would expect us to go the other way and remove const_ from other attributes.

Also, when used as an SSA value you already mark it as constant in the operation name. Having cir.const #cir.const_complex with a repeated const seems unnecessary to me.

For instance, other arithmetic constants we still keep as #cir.int and #cir.fp.

Comment on lines +2061 to +2063
return cir::ConstComplexAttr::get(builder.getContext(), complexType,
cir::IntAttr::get(complexElemTy, real),
cir::IntAttr::get(complexElemTy, imag));

No need for complexType here as we can infer it in builder?

cir::FPAttr::get(complexElemTy, imag));
const llvm::APFloat &real = Value.getComplexFloatReal();
const llvm::APFloat &imag = Value.getComplexFloatImag();
return cir::ConstComplexAttr::get(builder.getContext(), complexType,

No need for complexType here as we can infer it in builder?

@AmrDeveloper
Member Author

> Why was this change introduced upstream in the first place?
>
> From my point of view it is redundant information on the attribute, as an attribute is by definition a constant. I would expect us to go the other way and remove const_ from other attributes.
>
> Also, when used as an SSA value you already mark it as constant in the operation name. Having cir.const #cir.const_complex with a repeated const seems unnecessary to me.
>
> For instance, other arithmetic constants we still keep as #cir.int and #cir.fp.

I did that to be consistent with other const attributes, for example const_vector and const_record. There is also a case, for example globals, where the operation name will not contain const:

cir.global external %g = #cir.const_complex<#cir.int<1> : !s32i, #cir.int<2> : !s32i> : !cir.complex<!s32i>

But for locals, yes, it will have an unnecessary const. I am okay with removing the const from the name too.
