Initial commit does not build #4082
Conversation
Current to do list:
@@ -1616,6 +1616,223 @@ class ConvertAtenAdaptivePoolOp : public OpConversionPattern<OpTy> {
};
} // namespace

namespace {
template <typename OpTy, typename PoolingOpTy, int Dim>
Remove the other two template parameters, since they are unused.
I took a short glance through and I don't have any immediate comments yet. Let's get something built so we can test correctness and iterate from there.
You'll also need to add this pattern to populatePoolingPatternsAndLegality
at the bottom of the file.
Force-pushed from 7ba849f to 31140fb.
This needs tests.
auto forLoop = b1.create<scf::ForOp>(
    loc, lb, ub1, step, ValueRange{},
    [&](OpBuilder &b, Location loc, Value iv1, ValueRange args) {
      // Step 1: Extract bounds for region of interest (roi)
nit: Use proper punctuation in comments.
Suggested change:
- // Step 1: Extract bounds for region of interest (roi)
+ // Step 1: Extract bounds for region of interest (roi).
Also in other comments in this file
SmallVector<Value, 2> strideInts = {scaleH, scaleW};
SmallVector<Value, 2> paddingInts = {zeroAttr, zeroAttr};
SmallVector<Value, 2> dilationInts(2, oneAttr);
SmallVector<Value, 4> outTensorShape;
If you need only two values, you can use a plain C array.
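As a sketch of this suggestion in plain C++ (hypothetical names; the real code holds MLIR Value handles, for which int stands in here), a fixed pair of values can live in a C array whose size is checked at compile time, with no growable-vector machinery:

```cpp
#include <cassert>

// A reference-to-array parameter pins the element count at compile time,
// unlike a growable vector type. int is a stand-in for mlir::Value.
static int strideProduct(const int (&strides)[2]) {
  return strides[0] * strides[1]; // strides[0] = height, strides[1] = width
}

int demo() {
  int strideInts[2] = {2, 3}; // exactly two entries, no heap allocation
  return strideProduct(strideInts);
}
```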
              roiSampleWidth};
          SmallVector<Value> dims =
              getTensorSizesUntilDim(b, loc, extractRoi, 1);
          for (unsigned i = 2; i < inputRank; i++) {
Use pre-increment:
- for (unsigned i = 2; i < inputRank; i++) {
+ for (unsigned i = 2; i < inputRank; ++i) {
See https://llvm.org/docs/CodingStandards.html#prefer-preincrement
Also in other loops.
Value lowY_int = b.create<math::FloorOp>(loc, lowY);
Value lowX_int = b.create<math::FloorOp>(loc, lowX);
Value highY_int = b.create<math::CeilOp>(loc, highY);
Value highX_int = b.create<math::CeilOp>(loc, highX);
LLVM doesn't use underscores in variable names; see https://llvm.org/docs/CodingStandards.html#name-types-functions-variables-and-enumerators-properly and https://mlir.llvm.org/getting_started/DeveloperGuide/#style-guide
Type elementType = cast<RankedTensorType>(self.getType()).getElementType();
if (!isa<mlir::FloatType>(elementType) && !supportNonFPInput)
  return op->emitError("unimplemented: non-floating point type");

Value initValue =
    rewriter.create<arith::ConstantOp>(loc, cast<TypedAttr>(initValueAttr));

paddedInput = padInputTensor(op, rewriter, self, ceilMode, dimensionality,
                             strideInts, paddingInts, initValue);

auto outTensorInitialized = computeOutputTensor(
    op, rewriter, self, dimensionality, ceilMode, strideInts, paddingInts,
    dilationInts, kernelSizeIntValues, outTensorShape, initValue);

auto stridesAttr = rewriter.getI64VectorAttr(strideInts);
auto dilationAttr = rewriter.getI64VectorAttr(dilationInts);
auto shape = castIntVectorToIndexVector(rewriter, loc, kernelSizeIntValues);

Value windowTensor = rewriter.create<tensor::EmptyOp>(
    loc, getAsOpFoldResult(shape), elementType);
Please undo these unrelated changes. If you want to reformat the whole file, do that before landing this patch.
@@ -187,7 +194,7 @@ static LogicalResult createPoolingOp(
    return rewriter.notifyMatchFailure(
        op, "failed to perform permutation of tensor");
  }
Also here
using OpConversionPattern::OpConversionPattern;

static SmallVector<Value>
coordinateTransform(OpBuilder &b, Torch::TorchvisionRoiAlignOp op,
Function names should start with a verb. (Similar with PR subject line.)
                    Location loc, SmallVector<Value> outputSizes, Value input,
                    SmallVector<Value> inputSizes,
                    SmallVector<Value> scaleValues, std::string coordStr,
                    bool alignCornersBool, SmallVector<Value> indices,
Passing vectors by value will result in copies. Instead, you can pass them as ArrayRef
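The copy-versus-view distinction can be sketched in standard C++ without LLVM; the IntView struct below is a hypothetical stand-in for what llvm::ArrayRef provides (a pointer plus a length, no ownership, implicit construction from a vector):

```cpp
#include <cstddef>
#include <vector>

// Hypothetical stand-in for llvm::ArrayRef<int>: a non-owning view that
// passes a pointer and a length instead of copying the vector's storage.
struct IntView {
  const int *data;
  std::size_t size;
  IntView(const std::vector<int> &v) : data(v.data()), size(v.size()) {}
};

// Takes the cheap view; a std::vector<int> parameter passed by value
// would copy every element on each call.
static long sumAll(IntView values) {
  long total = 0;
  for (std::size_t i = 0; i < values.size; ++i)
    total += values.data[i];
  return total;
}

long demo() {
  std::vector<int> sizes = {4, 5, 6};
  return sumAll(sizes); // implicit conversion to the view, no element copies
}
```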
unsigned dimOffset = 2;
auto inputType = cast<RankedTensorType>(input.getType());
auto inputRank = inputType.getRank();
Do not use auto when the type is not obvious based on the context: https://llvm.org/docs/CodingStandards.html#use-auto-type-deduction-to-make-code-more-readable
RankedTensorType inputType = dyn_cast_or_null<RankedTensorType>(
    this->getTypeConverter()->convertType(op.getInput().getType()));

if (inputType == nullptr) {
- if (inputType == nullptr) {
+ if (!inputType) {
LLVM prefers unary negation / explicit cast to bool over comparisons with nullptr.
    rewriter.create<arith::ConstantOp>(loc, rewriter.getF32FloatAttr(0.0));
RankedTensorType resultType = dyn_cast_or_null<RankedTensorType>(
    this->getTypeConverter()->convertType(result.getType()));
if (resultType == nullptr) {
- if (resultType == nullptr) {
+ if (!resultType) {
RankedTensorType resultType = dyn_cast_or_null<RankedTensorType>(
    this->getTypeConverter()->convertType(result.getType()));
- RankedTensorType resultType = dyn_cast_or_null<RankedTensorType>(
-     this->getTypeConverter()->convertType(result.getType()));
+ auto resultType = getTypeConverter()->convertType<RankedTensorType>(result.getType());
Also, you should check that this conversion succeeded and emit an error otherwise
    loc, getAsOpFoldResult(finalOutputShape), resultElementType);
rewriter.create<scf::ForOp>(
    loc, lb, ub0, step, ValueRange{},
    [&](OpBuilder &b, Location loc, Value iv0, ValueRange args) {
Because this is a long lambda, I'd either make it a free function or at least write out the list of captured variables
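The two options can be sketched in plain C++ (hypothetical names, unrelated to the patch): hoist the body into a named free function, or keep a lambda whose capture list spells out exactly the state it uses instead of `[&]`:

```cpp
#include <cassert>

// Option 1: a free function makes the lambda's inputs explicit parameters.
static int sumRow(int rowBase, int width) {
  int sum = 0;
  for (int i = 0; i < width; ++i)
    sum += rowBase + i;
  return sum;
}

int demo() {
  int rowBase = 10;
  int width = 3;
  // Option 2: explicit captures; [&] would hide which of the surrounding
  // variables the long body actually touches.
  auto rowSum = [rowBase, width]() { return sumRow(rowBase, width); };
  return rowSum(); // 10 + 11 + 12
}
```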
@@ -1665,4 +2125,5 @@ void mlir::torch::torch_to_linalg::populatePoolingPatternsAndLegality(
      typeConverter, context);
  patterns.add<ConvertAtenAdaptivePoolOp<AtenAdaptiveMaxPool3dOp>>(
      typeConverter, context);
+ patterns.add<ConvertRoiAlignOp>(typeConverter, context);
You can add all of the patterns with the standard constructor to the same add call:
patterns.add<P0, P1, P2, P3>(typeConverter, context);
@Muzammiluddin-Syed-ECE make sure you click the 'Resolve conversation' button under comments you believe are addressed. This makes it much easier to iterate on pull requests.
Pausing work on this issue as a ramp-up task. This task has proven to be more involved than expected for something issued as part of an onboarding process.

Work done:
Both these conditions were added to avoid the dependence on data in execution (for example, the loop bounds and tensor indexing being dependent on the data inside the inputs).
2. Identified an algorithm to implement the base case.

To do:
2. Add this unit test in an appropriate location:
3. Create a test to verify numerics.
Feel free to ping me for additional context and detail.
Draft PR for initial review.
Adding lowering support for RoiAlign ops with static shapes and known sampling ratios.