dialects: (arith, core) Constant op to accept tensor and memref #2969

Merged · 3 commits into main from nicolai/arith-const-tensors on Aug 1, 2024

Conversation

@n-io (Collaborator) commented on Jul 31, 2024

Upstream MLIR accepts the following arith.constant ops:

%t_const = arith.constant dense<1.234500e-01> : tensor<16xf32>
%m_const = arith.constant dense<1.678900e-01> : memref<64xf32>

For tensors, xdsl currently accepts the generic form and prints it in both generic and non-generic form, but fails to parse the non-generic form.

For memrefs, xdsl does not accept the op at all; supporting it requires an extensive change to the attribute parser to add memref support to RankedVectorOrTensorOf. Some code had to be moved after the definition of MemRefType to support this change.
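For reference, a minimal parse-and-print sketch of the two forms this PR enables. This is not part of the PR itself; it assumes the usual xdsl entry points (MLContext, Parser, and the Arith/Builtin/MemRef dialect objects), whose exact import paths and signatures may differ between xdsl versions:

from xdsl.context import MLContext
from xdsl.parser import Parser
from xdsl.dialects.arith import Arith
from xdsl.dialects.builtin import Builtin
from xdsl.dialects.memref import MemRef

# Register the dialects needed by the constants below.
ctx = MLContext()
for dialect in (Builtin, Arith, MemRef):
    ctx.load_dialect(dialect)

text = """
%t_const = arith.constant dense<1.234500e-01> : tensor<16xf32>
%m_const = arith.constant dense<1.678900e-01> : memref<64xf32>
"""

# With this change, both the non-generic tensor form and the memref form
# should parse; printing the resulting module round-trips them.
module = Parser(ctx, text).parse_module()
print(module)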

@n-io added the bug (Something isn't working) and dialects (Changes on the dialects) labels on Jul 31, 2024
@n-io self-assigned this on Jul 31, 2024
codecov bot commented on Jul 31, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 89.85%. Comparing base (ddbe80c) to head (13c5808).
Report is 5 commits behind head on main.

Additional details and impacted files
@@           Coverage Diff           @@
##             main    #2969   +/-   ##
=======================================
  Coverage   89.84%   89.85%           
=======================================
  Files         408      408           
  Lines       51002    51028   +26     
  Branches     7909     7912    +3     
=======================================
+ Hits        45825    45850   +25     
- Misses       3927     3928    +1     
  Partials     1250     1250           


Review thread on the following diff hunk:

@@ -896,147 +892,6 @@ def verify(self, attr: Attribute, constraint_context: ConstraintContext) -> None
constraint.verify(attr, constraint_context)


@irdl_attr_definition
Member commented:

Why move this down? What's the change here? Git isn't being very helpful.

@n-io (Collaborator, Author) replied:
As briefly mentioned in the PR description (though likely not explained in sufficient detail), I had to move RankedVectorOrTensorOf after MemRefType, since I changed it to also accept MemRefType. As a follow-up, DenseIntOrFPElementsAttr also had to be moved down, since it uses RankedVectorOrTensorOf.

As far as I remember, the only change required to DenseIntOrFPElementsAttr was to change the signature of the type argument of from_list from RankedVectorOrTensorOf[AnyFloat | IntegerType | IndexType] to:

RankedVectorOrTensorOf[AnyFloat | IntegerType | IndexType]
| RankedVectorOrTensorOf[AnyFloat]
| RankedVectorOrTensorOf[IntegerType]
| RankedVectorOrTensorOf[IndexType]

It is not fully clear why, but it does make pyright happy.

BTW, I'm accepting new name suggestions for RankedVectorOrTensorOf.
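For context, the adjusted from_list signature on DenseIntOrFPElementsAttr roughly looks as follows. This is a reconstruction from the comment above, not a copy of the PR; the data parameter's name and type are assumptions:

# Inside DenseIntOrFPElementsAttr (sketch; only the type annotation is taken from the comment above):
@staticmethod
def from_list(
    type: (
        RankedVectorOrTensorOf[AnyFloat | IntegerType | IndexType]
        | RankedVectorOrTensorOf[AnyFloat]
        | RankedVectorOrTensorOf[IntegerType]
        | RankedVectorOrTensorOf[IndexType]
    ),
    data: Sequence[int | float],  # assumed parameter; not quoted in the thread
) -> DenseIntOrFPElementsAttr:
    ...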

Member replied:

Makes a lot of sense, thank you.

Review thread on xdsl/dialects/builtin.py (outdated, resolved)
@superlopuh (Member) left a comment:

Code looks good to me. I would probably wait for someone more familiar with MLIR than I am to comment on how similar this is to how things happen over there. My understanding is that some of the names follow the MLIR API.

Maybe @math-fehr or @zero9178 can advise?

@n-io requested a review from superlopuh on August 1, 2024 at 10:05

@n-io (Collaborator, Author) commented on Aug 1, 2024

> Code looks good to me. I would probably wait for someone more familiar with MLIR than I am to comment on how similar this is to how things happen over there. My understanding is that some of the names follow the MLIR API.
>
> Maybe @math-fehr or @zero9178 can advise?

I'd appreciate that; I'm not super familiar with that part of the API myself. But it appears RankedVectorOrTensorOf is an xdsl concept; MLIR, if I'm not mistaken, has no similar concept and would more generically use Attribute.

@n-io requested a review from zero9178 on August 1, 2024 at 10:09
@n-io changed the title from "dialects: (arith) Constant op to accept tensor and memref" to "dialects: (arith, core) Constant op to accept tensor and memref" on Aug 1, 2024
@n-io requested a review from PapyChacal on August 1, 2024 at 12:12
@PapyChacal (Collaborator) left a comment:

Nice!

@AntonLydike (Collaborator) commented:

I don't think memref makes all that much sense to be accepted here, seeing as arith does not work on memref anyway? (There is a big debate on this happening right now...)

@AntonLydike (Collaborator) commented:

It appears that arith.constant can instantiate memrefs (although I don't know how it would even lower that?), but other arith ops can't operate on memrefs...

@PapyChacal (Collaborator) commented:

> I don't think memref makes all that much sense to be accepted here, seeing as arith does not work on memref anyway? (There is a big debate on this happening right now...)

I don't think it makes much sense either; but this is about matching MLIR's implementation, not changing it, IMO?

@n-io (Collaborator, Author) commented on Aug 1, 2024

> It appears that arith.constant can instantiate memrefs (although I don't know how it would even lower that?), but other arith ops can't operate on memrefs...

It's lowered during bufferization. E.g., for the input

  %0 = arith.constant dense<1.234500e-01> : tensor<8xf32>

running

  mlir-opt %s -allow-unregistered-dialect --eliminate-empty-tensors --one-shot-bufferize="allow-unknown-ops"

yields

  module {
    memref.global "private" constant @__constant_8xf32 : memref<8xf32> = dense<1.234500e-01> {alignment = 64 : i64}
    %0 = memref.get_global @__constant_8xf32 : memref<8xf32>
  }

@n-io merged commit 09e3157 into main on Aug 1, 2024
10 checks passed
@n-io deleted the nicolai/arith-const-tensors branch on August 1, 2024 at 15:00