
ttir.add (tt::ttir::AddOp)

Eltwise add.

Eltwise add operation.
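
A minimal sketch of how an eltwise op such as ttir.add might appear in IR, in the destination-passing style implied by the inputs/outputs operands listed below. The tensor shapes, the %lhs/%rhs/%out names, and the #any_constraint alias are illustrative assumptions rather than values taken from the dialect documentation; the same form applies to ttir.multiply.

```mlir
// Illustrative sketch only: #any_constraint stands in for a concrete operand
// constraint value, and the tensor shapes are placeholders.
%result = "ttir.add"(%lhs, %rhs, %out) <{
  operandSegmentSizes = array<i32: 2, 1>,    // 2 inputs, 1 output
  operand_constraints = [#any_constraint, #any_constraint, #any_constraint]
}> : (tensor<64x128xf32>, tensor<64x128xf32>, tensor<64x128xf32>) -> tensor<64x128xf32>
```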

Traits: AttrSizedOperandSegments, Elementwise

Interfaces: DestinationStyleOpInterface, TTIROpInterface

Attributes:

| Attribute | MLIR Type | Description |
| --------- | --------- | ----------- |
| `operand_constraints` | `::mlir::ArrayAttr` | |

Operands:

| Operand | Description |
| ------- | ----------- |
| `inputs` | variadic of ranked tensor of any type values |
| `outputs` | variadic of ranked tensor of any type values |

Results:

| Result | Description |
| ------ | ----------- |
| `results` | variadic of ranked tensor of any type values |

ttir.alloc (tt::ttir::AllocOp)

Alloc op.

Tensor Alloc operation
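
A hedged sketch of what a ttir.alloc might look like, assuming a #tt.memory_space<l1> spelling for the DeviceL1 enum case listed below; the address, size, and #layout1 alias are placeholders, not values from the dialect documentation.

```mlir
// Illustrative sketch only; the attribute values and #layout1 alias are placeholders.
%0 = "ttir.alloc"() <{
  address = 8192 : i64,                   // base address of the allocation
  size = 32768 : i64,                     // allocation size in bytes
  memory_space = #tt.memory_space<l1>     // assumed spelling of the DeviceL1 case
}> : () -> tensor<64x128xf32, #layout1>
```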

Attributes:

| Attribute | MLIR Type | Description |
| --------- | --------- | ----------- |
| `address` | `::mlir::IntegerAttr` | 64-bit signless integer attribute |
| `size` | `::mlir::IntegerAttr` | 64-bit signless integer attribute |
| `memory_space` | `::mlir::tt::MemorySpaceAttr` | TT MemorySpace. Enum cases: system (`System`), mmio (`SystemMMIO`), dram (`DeviceDRAM`), l1 (`DeviceL1`) |

Results:

| Result | Description |
| ------ | ----------- |
| `result` | ranked tensor of any type values |

ttir.dealloc (tt::ttir::DeallocOp)

Dealloc op.

Tensor Dealloc operation
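
A minimal sketch of the matching ttir.dealloc, releasing a tensor previously produced by ttir.alloc; the #layout1 alias is a placeholder.

```mlir
// Illustrative sketch only; releases the tensor produced by a ttir.alloc above.
"ttir.dealloc"(%0) : (tensor<64x128xf32, #layout1>) -> ()
```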

Operands:

| Operand | Description |
| ------- | ----------- |
| `result` | ranked tensor of any type values |

ttir.generic (tt::ttir::GenericOp)

Generically dispatch work to a grid of cores.

This generic op carries a region that represents the work each core does. The region is expected to have the same signature as the op itself. The op is expected to be lowered to a backend-specific form by a consuming backend. This op is heavily inspired by the linalg.generic op, so it can be useful to refer to the linalg.generic documentation for more details.

%5 = "ttir.generic"(%1, %3, %4) <{ grid = #tt.grid<1x1>, // The grid range of cores to dispatch work to. indexing_maps = [#map, #map, #map], // Affine maps for indexing into the input/output tensors. See linalg.generic iterator_types = [#parallel, #parallel], // Iterator types for the input/output tensors. See linalg.generic operandSegmentSizes = array<i32: 2, 1>, // Sizes of the operand segments, i.e. 2 inputs and 1 output. ({ ^bb0(%arg2: memref<64x128xf32, #l1_>, %arg3: memref<64x128xf32, #l1_>, %arg4: memref<64x128xf32, #l1_>): // Region body, would contain some computation that represents the work each core does. }) : (tensor<64x128xf32, #layout1>, tensor<64x128xf32, #layout1>, tensor<64x128xf32, #layout1>) -> tensor<64x128xf32, #layout1>

Traits: AttrSizedOperandSegments

Interfaces: DestinationStyleOpInterface, TTIROpInterface

Attributes:

| Attribute | MLIR Type | Description |
| --------- | --------- | ----------- |
| `grid` | `::mlir::tt::GridAttr` | TT grid attribute |
| `indexing_maps` | `::mlir::ArrayAttr` | AffineMap array attribute |
| `iterator_types` | `::mlir::ArrayAttr` | |
| `operand_constraints` | `::mlir::ArrayAttr` | |

Operands:

| Operand | Description |
| ------- | ----------- |
| `inputs` | variadic of ranked tensor of any type values |
| `outputs` | variadic of ranked tensor of any type values |

Results:

| Result | Description |
| ------ | ----------- |
| `results` | variadic of ranked tensor of any type values |

ttir.kernel (tt::ttir::KernelOp)

Kernel call.

A generic kernel call operation. This operation serves as a pattern-matching target for a consuming backend.
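
A hedged sketch of a ttir.kernel call; the @matmul and @kernel_thread symbols are hypothetical placeholders for the op and kind attributes, and the tensor types are illustrative assumptions.

```mlir
// Illustrative sketch only; @matmul and @kernel_thread are hypothetical symbols.
%0 = "ttir.kernel"(%in0, %in1, %out) <{
  op = @matmul,                             // which kernel to dispatch
  kind = @kernel_thread,                    // backend-specific kernel kind
  operandSegmentSizes = array<i32: 2, 1>    // 2 inputs, 1 output
}> : (tensor<64x128xf32>, tensor<128x96xf32>, tensor<64x96xf32>) -> tensor<64x96xf32>
```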

Traits: AttrSizedOperandSegments

Interfaces: DestinationStyleOpInterface

Attributes:

| Attribute | MLIR Type | Description |
| --------- | --------- | ----------- |
| `op` | `::mlir::FlatSymbolRefAttr` | flat symbol reference attribute |
| `kind` | `::mlir::FlatSymbolRefAttr` | flat symbol reference attribute |

Operands:

| Operand | Description |
| ------- | ----------- |
| `inputs` | variadic of ranked tensor of any type values or non-0-ranked.memref of any type values |
| `outputs` | variadic of ranked tensor of any type values or non-0-ranked.memref of any type values |

Results:

| Result | Description |
| ------ | ----------- |
| `results` | variadic of ranked tensor of any type values or non-0-ranked.memref of any type values |

ttir.layout (tt::ttir::LayoutOp)

Layout op.

Layout operation, which transitions a tensor from one layout to another. Some examples include:

- Transitioning between different memory spaces, e.g. DRAM to L1.
- Transitioning between different data types, e.g. f32 to f16.
- Transitioning between different tile sizes, e.g. 1x16 to 32x32.
- Transitioning between different tensor shardings.
- Some combination of the above.

```mlir
#layout = #tt.layout<8192x128x1, undef, <1x1>, memref<64x128xf32, #system>>
#layout1 = #tt.layout<8192x128x1, undef, <1x1>, memref<64x128xf32, #l1_>>
%1 = "ttir.layout"(%arg0, %0) : (tensor<64x128xf32, #layout>, tensor<64x128xf32, #layout1>) -> tensor<64x128xf32, #layout1>
```

Interfaces: DestinationStyleOpInterface

Operands:

| Operand | Description |
| ------- | ----------- |
| `input` | ranked tensor of any type values |
| `output` | ranked tensor of any type values |

Results:

| Result | Description |
| ------ | ----------- |
| `result` | ranked tensor of any type values |

ttir.multiply (tt::ttir::MultiplyOp)

Eltwise multiply.

Eltwise multiply operation.

Traits: AttrSizedOperandSegments, Elementwise

Interfaces: DestinationStyleOpInterface, TTIROpInterface

Attributes:

| Attribute | MLIR Type | Description |
| --------- | --------- | ----------- |
| `operand_constraints` | `::mlir::ArrayAttr` | |

Operands:

| Operand | Description |
| ------- | ----------- |
| `inputs` | variadic of ranked tensor of any type values |
| `outputs` | variadic of ranked tensor of any type values |

Results:

| Result | Description |
| ------ | ----------- |
| `results` | variadic of ranked tensor of any type values |

ttir.yield (tt::ttir::YieldOp)

Yield op.

Yield operation; it is required by MLIR to mark the end of a dispatch region.
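
As a hedged sketch, a yield terminating the region body of the ttir.generic example above might look like the following; the %arg4 name and memref type are placeholders borrowed from that example.

```mlir
// Illustrative sketch only; %arg4 is the output memref bound in the dispatch region.
"ttir.yield"(%arg4) : (memref<64x128xf32, #l1_>) -> ()
```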

Traits: AlwaysSpeculatableImplTrait, ReturnLike, Terminator

Interfaces: ConditionallySpeculatable, NoMemoryEffect (MemoryEffectOpInterface), RegionBranchTerminatorOpInterface

Effects: MemoryEffects::Effect{}

Operands:

| Operand | Description |
| ------- | ----------- |
| `values` | variadic of ranked tensor of any type values or non-0-ranked.memref of any type values |