[UMA] UMA v1.0 #12087

Merged Aug 9, 2022 · 112 commits

Commits
5ef8b02
Add minimal working structure for generic interface
PaulPalomeroBernardo Dec 13, 2021
c22f3e5
Separate target definition from codegen
PaulPalomeroBernardo Dec 15, 2021
f63e28a
Update file structure to support multiple NPU targets
PaulPalomeroBernardo Jan 10, 2022
adefb48
Add scheduling and pass support to codegen
PaulPalomeroBernardo Jan 11, 2022
47c5578
Update schedule function and pass registration
PaulPalomeroBernardo Jan 13, 2022
a34e015
Add generic partitioner for relay graph partitioning
PaulPalomeroBernardo Jan 13, 2022
5889fee
Add pattern-based relay graph partitioning and AOT codegen
PaulPalomeroBernardo Jan 17, 2022
be8048b
Update API
PaulPalomeroBernardo Jan 19, 2022
340ddd8
Add UltraTrail relay passes and schedule function
PaulPalomeroBernardo Feb 2, 2022
86e851a
Update UltraTrail relay passes
PaulPalomeroBernardo Feb 2, 2022
3303ae3
Add tir_to_runtime hook for UltraTrail
PaulPalomeroBernardo Feb 2, 2022
d220c5f
Add operator strategy registration to lowering
PaulPalomeroBernardo Feb 4, 2022
1003b3f
Add option to pass constants as attributes
PaulPalomeroBernardo Feb 4, 2022
b9de8c1
Refactor naming: Generic to UMA
PaulPalomeroBernardo Feb 4, 2022
3335e20
Change API to single user-facing backend class UMABackend
PaulPalomeroBernardo Feb 9, 2022
03398f9
Add initial codegen API
PaulPalomeroBernardo Mar 1, 2022
6cd220e
[UMA] add a generic packed function to register targets
cgerum Mar 14, 2022
0b6cccd
Restructure files and add initial codegen
PaulPalomeroBernardo Mar 14, 2022
bd44fa8
Minor code cleanup
cgerum Mar 15, 2022
bf641ee
Add UMA config and MergeCompilerRegion example
PaulPalomeroBernardo Mar 17, 2022
222cbb6
Move UMA configuration to init parameters
PaulPalomeroBernardo Mar 17, 2022
54d7c52
Add python hooks for C-codegen. Still has known restrictions
PaulPalomeroBernardo Mar 21, 2022
67940e1
Fix relay_to_tir hook to keep virtual device in main function
PaulPalomeroBernardo Mar 24, 2022
83476e1
Remove register schedules, scheduling is moved to passes for now
PaulPalomeroBernardo Mar 25, 2022
93f7d94
Remove extract constants since non-scalar constants are now supported…
PaulPalomeroBernardo Mar 25, 2022
008070d
API documentation and some code fixes and cleanup
PaulPalomeroBernardo Mar 25, 2022
684ff70
Fix typo
PaulPalomeroBernardo Mar 25, 2022
aabb90d
Fix UMA lowering
PaulPalomeroBernardo May 12, 2022
8092a44
Prototype for UMA-based target attribute registration
PaulPalomeroBernardo May 13, 2022
5ca3c73
Add default option and type deduction to register_target_attr
PaulPalomeroBernardo May 16, 2022
52f1fdb
Change pass phases to enum
PaulPalomeroBernardo May 31, 2022
d5ca776
[Relay] Plumb external codegen target via Target.current() for all ex…
mbs-octoml May 23, 2022
8b299ad
Use current target in lowering
cgerum Jun 1, 2022
f147087
Use attr:kRelayToTIR
cgerum Jun 1, 2022
a1ec13d
Remove erroneously committed quick fix
Jun 1, 2022
445b538
Towards test cases for uma
cgerum Jun 1, 2022
36f07ee
Add test_uma
cgerum Jun 1, 2022
73012aa
Initial UMA structure for version 1
PaulPalomeroBernardo May 31, 2022
cd8dcbb
[UMA]: conv2d unit test
MichaelJKlaiber Jun 23, 2022
618e83a
[UMA] update of tutorial
MichaelJKlaiber Jun 24, 2022
efc788f
[UMA] update of pass format, still issue with conv2d c code
MichaelJKlaiber Jun 24, 2022
0cf3332
[UMA] refactoring of test_uma_lowering_with_umalower.py
MichaelJKlaiber Jun 27, 2022
2c8f94e
[UMA] refactoring of test_uma_lowering_with_umalower.py
MichaelJKlaiber Jun 27, 2022
3e58dfb
[UMA] Adding backend, codegen, patterns, strategies and run file for …
MichaelJKlaiber Jun 27, 2022
120f32c
[UMA] update towards my_ai_hw usecase
MichaelJKlaiber Jun 28, 2022
1b5cff3
[UMA] working testcase for conv2d with uma
MichaelJKlaiber Jun 29, 2022
eeb0516
[UMA] testcase
MichaelJKlaiber Jun 29, 2022
5f9680d
[UMA] uma lower.py: replaced outdated function create_prim_func_from_…
MichaelJKlaiber Jun 29, 2022
d2cae76
UMA: Move torch import to top to avoid free(): invalid pointer error
cgerum Jul 5, 2022
ecc4a04
Add stub files for targets
cgerum Jul 5, 2022
40dd820
Add tests for ultratrail codegen
cgerum Jul 5, 2022
d86fd96
Adopt my_ai_hw accelerator for new target definition
cgerum Jul 5, 2022
2f88286
Add unit test for target attributes
cgerum Jul 6, 2022
9a6e020
Test string arguments
cgerum Jul 6, 2022
79577f2
Extend target test
cgerum Jul 6, 2022
f48e3f0
[UMA] tutorial first version
MichaelJKlaiber Jul 1, 2022
e117b7c
[UMA] moved unit tests to contrib
MichaelJKlaiber Jul 5, 2022
5758c7b
[UMA] renaming interfaces
MichaelJKlaiber Jul 6, 2022
688f4c8
Fix umalower_tests in ci
cgerum Jul 6, 2022
7551a0e
make uma a python module
cgerum Jul 6, 2022
8c4f065
[UMA] Update of UMAv1 API + added testcases + tutorialV1
MichaelJKlaiber Jul 7, 2022
4a0a8c5
[UMA] UMAv1
MichaelJKlaiber Jul 13, 2022
8767b4a
[UMA] cmake file updated
MichaelJKlaiber Jul 13, 2022
4a59b38
AOT test infrastructure adapted
MichaelJKlaiber Jul 14, 2022
eb5935c
UMA: add __init__.py for uma.api
cgerum Jul 13, 2022
d29e2f1
Finish uma tests
cgerum Jul 13, 2022
d038b9f
Use upstream version of dmlc-core
cgerum Jul 14, 2022
a47ebfd
[UMA] tir_to_runtime documentation update
MichaelJKlaiber Jul 14, 2022
25562b2
[UMA] cleanup
MichaelJKlaiber Jul 14, 2022
0b9f951
[UMA] fix for test_partition
MichaelJKlaiber Jul 14, 2022
02079cf
[UMA] lint fix
MichaelJKlaiber Jul 14, 2022
e346754
[UMA] lint fix
MichaelJKlaiber Jul 14, 2022
e9e8d00
[UMA] lint fix
MichaelJKlaiber Jul 14, 2022
5c4eeae
[UMA] lint fix
MichaelJKlaiber Jul 14, 2022
7fda363
[UMA] fix of build scripts for arm and i386
MichaelJKlaiber Jul 15, 2022
16306dd
Fix remaining linter errors
cgerum Jul 14, 2022
e8a45ca
[UMA] CMakeLists.txt added UMA tvm_option
MichaelJKlaiber Jul 18, 2022
6c2fb04
[UMA] added UMA tvm_option
MichaelJKlaiber Jul 18, 2022
646b94c
[UMA] guard against multiple registrations
MichaelJKlaiber Jul 19, 2022
a920007
[UMA] fixed comments as pointed out in PR 12087
MichaelJKlaiber Jul 20, 2022
6ce6fa0
[UMA] fixed comments as pointed out in PR 12087
MichaelJKlaiber Jul 21, 2022
efb6e56
[UMA] skip uma tests if uma is not available
cgerum Jul 15, 2022
a679672
[UMA] added UMA rst
MichaelJKlaiber Jul 22, 2022
33791fc
[UMA] Moved tutorial to RST file in gallery
MichaelJKlaiber Jul 22, 2022
9c38cd8
[UMA] moved uma cli to apps
MichaelJKlaiber Jul 22, 2022
a8fa294
[UMA] change requests according to PR-12087
MichaelJKlaiber Jul 25, 2022
afd88f2
[UMA] update and sync of uma_cli and tutorial
MichaelJKlaiber Jul 25, 2022
ffecd7a
[UMA] update of template passes: remove Pad block of Conv2D
MichaelJKlaiber Jul 25, 2022
69ebe59
[UMA] lint updates
MichaelJKlaiber Jul 25, 2022
f2480d9
[UMA] Test updates
MichaelJKlaiber Jul 26, 2022
3dab3f7
[UMA] fixes according to comments from PR 12087 discussion
MichaelJKlaiber Jul 26, 2022
fa02920
[UMA] lint updates
MichaelJKlaiber Jul 27, 2022
54160fb
[UMA] moved UMA _template file to apps
MichaelJKlaiber Jul 27, 2022
f9aa26e
[UMA] lint
MichaelJKlaiber Jul 27, 2022
960fef6
[UMA] Remove exceptions when dispatching over targets
cgerum Jul 27, 2022
734d265
[UMA] vanilla pattern update
MichaelJKlaiber Jul 27, 2022
6f100b9
[UMA] added mobilenet integration test
MichaelJKlaiber Jul 27, 2022
e941421
[UMA] clang lint
MichaelJKlaiber Jul 27, 2022
011d2f2
Remove tir to runtime
cgerum Jul 27, 2022
4875ed2
[UMA] Use sequential for UMA relay passes
cgerum Jul 28, 2022
ccb4f4c
Use comparison against BYOC flow in test_partition
cgerum Jul 28, 2022
1797ae4
[UMA] tutorial update: moved code blocks to RST
MichaelJKlaiber Jul 27, 2022
b1f02e1
[UMA] tutorial update and lint fixes
MichaelJKlaiber Jul 28, 2022
0cbe625
[UMA] removing UMA from i386 build, as there is a fail in the CI pip…
MichaelJKlaiber Aug 1, 2022
af16ae2
[BYOC-DNNL] covered case for sum node without attr
MichaelJKlaiber Aug 2, 2022
ab670d1
[UMA] pylint
MichaelJKlaiber Aug 2, 2022
ad61655
[UMA] pylint
MichaelJKlaiber Aug 2, 2022
34a3b69
[UMA] aot fix
MichaelJKlaiber Aug 2, 2022
ac2379e
[UMA] Changes PR review
MichaelJKlaiber Aug 5, 2022
cd536cc
[UMA] cc lint
MichaelJKlaiber Aug 5, 2022
0418ad8
[UMA] cc lint
MichaelJKlaiber Aug 5, 2022
0f0b1bf
Use better function name for te_lowering and annotate current target …
cgerum Aug 5, 2022
2 changes: 2 additions & 0 deletions CMakeLists.txt
Original file line number Diff line number Diff line change
Expand Up @@ -113,6 +113,7 @@ tvm_option(USE_VITIS_AI "Build with VITIS-AI Codegen support" OFF)
tvm_option(SUMMARIZE "Print CMake option summary after configuring" OFF)
tvm_option(USE_CLML "Build with CLML Codegen support" OFF)
tvm_option(USE_CLML_GRAPH_EXECUTOR "Build with CLML graph runtime" OFF)
tvm_option(USE_UMA "Build with UMA support" OFF)

# include directories
include_directories(${CMAKE_INCLUDE_PATH})
Expand Down Expand Up @@ -497,6 +498,7 @@ include(cmake/modules/contrib/TensorRT.cmake)
include(cmake/modules/contrib/VitisAI.cmake)
include(cmake/modules/contrib/Verilator.cmake)
include(cmake/modules/contrib/CLML.cmake)
include(cmake/modules/contrib/UMA.cmake)
include(cmake/modules/Git.cmake)
include(cmake/modules/LibInfo.cmake)
include(cmake/modules/RustExt.cmake)
Expand Down
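(Usage note, not part of this diff: the new option is enabled in the usual TVM way, e.g. by adding `set(USE_UMA ON)` to `config.cmake` in the build directory before running CMake.)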
22 changes: 22 additions & 0 deletions apps/uma/_template/__init__.py
Original file line number Diff line number Diff line change
@@ -0,0 +1,22 @@
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
"""

Template files for UMA tutorial


"""
45 changes: 45 additions & 0 deletions apps/uma/_template/backend.py
Original file line number Diff line number Diff line change
@@ -0,0 +1,45 @@
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
"""UMA backend for the my_ai_hw accelerator"""
from passes import MyAiHwConv2dPass
from tvm.relay.backend.contrib.uma.api.utils import PassPhase
from tvm.relay.backend.contrib.uma.backend import UMABackend
from codegen import gen_includes
from patterns import conv2d_pattern


class MyAiHwBackend(UMABackend):
"""UMA backend for the MyAiHw accelerator."""

def __init__(self):
super().__init__()

# Target configuration
self._register_target_attr("dimension")

# Relay Pattern registration
self._register_pattern("conv2d", conv2d_pattern())

# Relay to TIR function registration
self._register_tir_pass(PassPhase.TIR_PHASE_0, MyAiHwConv2dPass())

# TIR to runtime function registration
self._register_codegen(fmt="c", includes=gen_includes)

@property
def target_name(self):
return "my_ai_hw"
28 changes: 28 additions & 0 deletions apps/uma/_template/codegen.py
Original file line number Diff line number Diff line change
@@ -0,0 +1,28 @@
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
"""UMA codegen for the my_ai_hw accelerator"""

import tvm
import pathlib


def gen_includes() -> str:
topdir = pathlib.Path(__file__).parent.absolute()

includes = ""
includes += f'#include "{topdir}/conv2dnchw.cc"'
return includes
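The returned string is spliced into the C sources emitted by the UMA codegen, so the kernel implementation is compiled together with the generated code. What a call evaluates to (the absolute path is machine-specific):

from codegen import gen_includes

print(gen_includes())
# #include "/abs/path/to/apps/uma/_template/conv2dnchw.cc"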
96 changes: 96 additions & 0 deletions apps/uma/_template/conv2dnchw.cc
Original file line number Diff line number Diff line change
@@ -0,0 +1,96 @@
/*
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
*/
#include <stdlib.h>

// TODO(mjklaiber): leverage pragma import_c in the future
#ifdef __cplusplus
extern "C"
#endif

/*!
 * \brief Conv2D function for mock-accelerator examples. Limited to same-padded Conv2D with
 *        stride (1,1) and datatype float.
 * \param ifmap Pointer to input feature map data of size iw*ih*ic*sizeof(float).
 * \param weights Pointer to weight data of size kh*kw*ic*oc*sizeof(float).
 * \param result Pointer to output feature map data of size iw*ih*oc*sizeof(float).
 * \param oc Number of channels of output feature map.
 * \param iw Width of input feature map, ifmap.
 * \param ih Height of input feature map, ifmap.
 * \param ic Number of channels of input feature map.
 * \param kh Height of convolution kernels.
 * \param kw Width of convolution kernels.
 *
 * \return error code
 */
Review thread on this function:
  Contributor: This source looks like it was not linted. Maybe we don't lint apps. Anyhow, would you be able to run clang-format on this file?
  Author: I agree that it doesn't look linted. It is what ./docker/lint.sh -i clang_format gives me.
  Contributor: @leandron @areusch @driazati do you know any reason that this file will not be linted?

int my_ai_hw_conv2dnchw(float* ifmap, float* weights, float* result, int oc, int iw, int ih,
                        int ic, int kh, int kw) {

int kw_low = kw / 2;
int kh_low = kh / 2;
int kw_high = iw + kw / 2;
int kh_high = ih + kh / 2;

int padded_iw = iw + 2 * kw_low;
int padded_ih = ih + 2 * kh_low;

// This is only example code. A real hardware accelerator would call a device specific malloc
// function.
float* pad_temp = (float*)malloc(
(((ic * padded_iw * padded_ih) + (padded_ih * padded_iw)) + padded_iw) * sizeof(float));

if (pad_temp == NULL) {
return -1;
}

for (int i1 = 0; i1 < ic; ++i1) {
for (int i2 = 0; i2 < padded_ih; ++i2) {
for (int i3 = 0; i3 < padded_iw; ++i3) {
((float*)pad_temp)[(((i1 * padded_iw * padded_ih) + (i2 * padded_iw)) + i3)] =
(((((kh_low <= i2) && (i2 < kh_high)) && (kw_low <= i3)) && (i3 < kw_high))
? ifmap[((((i1 * iw * ih) + ((i2 - kh_low) * iw)) + i3 - kw_low))]
: 0.000000e+00f);
}
}
}
for (int i11 = 0; i11 < oc; ++i11) {
for (int i21 = 0; i21 < ih; ++i21) {
for (int i31 = 0; i31 < iw; ++i31) {
for (int i4 = 0; i4 < ic; ++i4) {
for (int i5 = 0; i5 < kh; ++i5) {
for (int i6 = 0; i6 < kw; ++i6) {
int cse_var_1 = (((i11 * iw * ih) + (i21 * iw)) + i31);
if (((i4 == 0) && (i5 == 0)) && (i6 == 0)) {
result[cse_var_1] = 0.000000e+00f;
}
result[cse_var_1] =
(result[cse_var_1] +
(((float*)
pad_temp)[i4 * padded_iw * padded_ih + (i21 + i5) * padded_iw + i31 + i6] *
weights[((((i11 * ic * kh * kw) + (i4 * kh * kw)) + (i5 * kw)) + i6)]));
}
}
}
}
}
}

// This is only example code. A real hardware accelerator would call a device specific free
// function.
free(pad_temp);
return 0;
}
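For readers checking the kernel's semantics, an equivalent NumPy reference (an illustrative sketch mirroring the C kernel's indexing, not part of the PR):

import numpy as np

def conv2dnchw_ref(ifmap, weights):
    """Reference for my_ai_hw_conv2dnchw: stride (1,1), same zero-padding.

    ifmap: (ic, ih, iw), weights: (oc, ic, kh, kw); returns (oc, ih, iw).
    """
    oc, ic, kh, kw = weights.shape
    _, ih, iw = ifmap.shape
    padded = np.pad(ifmap, ((0, 0), (kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros((oc, ih, iw), dtype=np.float32)
    for o in range(oc):
        for h in range(ih):
            for w in range(iw):
                # Same accumulation as the i4/i5/i6 loops in the C kernel.
                out[o, h, w] = np.sum(padded[:, h : h + kh, w : w + kw] * weights[o])
    return out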
136 changes: 136 additions & 0 deletions apps/uma/_template/passes.py
Original file line number Diff line number Diff line change
@@ -0,0 +1,136 @@
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
"""Transform passes for the my_ai_hw accelerator"""

import tvm
from tvm import tir
from tvm.relay.backend.contrib.uma.api.utils import add_llvm_to_block


@tvm.tir.transform.prim_func_pass(opt_level=2)
class MyAiHwConv2dPass:
_EXTERNAL_FUNCTION_NAME = "my_ai_hw_conv2dnchw"
_TVM_BLOCK_MATCH_NAME = "conv2d_nchw"

def transform_function(
self, func: tvm.tir.PrimFunc, mod: tvm.ir.IRModule, ctx: tvm.ir.transform.PassContext
) -> tvm.tir.PrimFunc:
return self._my_ai_hw_conv2d_pass(func, mod, ctx)

@classmethod
def _my_ai_hw_conv2d_pass(cls, func, mod, ctx):
_loops = dict()
_handles = []
_entry_node = None

def _has_block(name: str, func: tvm.tir.PrimFunc) -> bool:
"""
Determine whether a tir.Block with `name` exists in `func`
"""

def _hb(op):
if isinstance(op, tvm.tir.Block):
_found_blocks.append(op.name_hint)

_found_blocks = []
tvm.tir.stmt_functor.post_order_visit(func.body, _hb)
return name in _found_blocks

def _detect_and_replace_conv2d(
func: tvm.tir.PrimFunc, mod: tvm.ir.IRModule, ctx: tvm.ir.transform.PassContext
) -> tvm.tir.PrimFunc:
def _replace_conv2d(op):
if op == _entry_node:
irb = tvm.tir.ir_builder.create()
# Collect the buffer addresses
buffers = [b[1].data for b in _handles]
# Extract the loop extents, passed as offsets to the extern call
for k, v in _loops.items():
assert v.min.value == 0
offset_order = ["co", "w", "h", "ci", "kh", "kw"]
offsets = [_loops[i].extent.value for i in offset_order]
args = buffers + offsets
irb.emit(tir_call(irb, True, cls._EXTERNAL_FUNCTION_NAME, *args))
irb_result = irb.get()
return irb_result
elif isinstance(op, tvm.tir.SeqStmt):
# Remove the pad block of TOPI's conv2d NCHW by returning only the second statement
return op.seq[1]
return op

sch = tir.Schedule(func)

if _has_block(cls._TVM_BLOCK_MATCH_NAME, func):
conv2d_block = sch.get_block(cls._TVM_BLOCK_MATCH_NAME)
rv_loops = sch.get_loops(conv2d_block)
assert len(rv_loops) == 7
loops = dict(
n=rv_loops[0],
co=rv_loops[1],
h=rv_loops[2],
w=rv_loops[3],
ci=rv_loops[4],
kh=rv_loops[5],
kw=rv_loops[6],
)
_entry_node = sch.get(rv_loops[1])
_loops = {k: sch.get(v) for k, v in loops.items()}
_handles = func.buffer_map.items()

x = tvm.tir.stmt_functor.ir_transform(
func.body, None, _replace_conv2d, ["tir.For", "tir.SeqStmt"]
)
return func.with_body(x)
else:
return func

r = _detect_and_replace_conv2d(func, mod, ctx)
return r


def tir_call(ib: tvm.tir.ir_builder, extern: bool, name: str, *args):
"""
ib: ir_builder
extern: bool
True --> tvm.tir.call_extern
False --> tvm.tir.call_packed
name: str
function name
*args:
arguments for function call
"""

def buf_from_array(ib, arr, dtype):
# Allocate enough memory to store the whole array
var = ib.allocate("int32", (len(arr),), scope="global")
for i, v in enumerate(arr):
var[i] = v
# Declare a buffer, which is basically a view on the chunk of memory that we allocated
buf = tvm.tir.decl_buffer((len(arr),), dtype, data=var, scope="global")
return buf

if extern:
args = [i.data if isinstance(i, tvm.tir.Buffer) else i for i in args]
return tvm.tir.call_extern("int32", name, *args)
else:
args = [
buf_from_array(ib, i, "int32")
if isinstance(i, (tuple, list, tvm.ir.container.Array))
else i
for i in args
]
return tvm.tir.call_packed(name, *args)
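A sketch of the pass in action (the TE graph is hand-written here so the block name and loop order match what the pass expects; shapes and names are illustrative):

import tvm
from tvm import te, topi
from passes import MyAiHwConv2dPass

ic, ih, iw, oc, k = 3, 16, 16, 4, 3
ifmap = te.placeholder((1, ic, ih, iw), dtype="float32", name="ifmap")
weights = te.placeholder((oc, ic, k, k), dtype="float32", name="weights")
# Same zero-padding, so output spatial dims equal input spatial dims.
pad = topi.nn.pad(ifmap, [0, 0, k // 2, k // 2], name="pad_temp")
rc = te.reduce_axis((0, ic), name="rc")
rh = te.reduce_axis((0, k), name="rh")
rw = te.reduce_axis((0, k), name="rw")
out = te.compute(
    (1, oc, ih, iw),
    lambda n, co, h, w: te.sum(
        pad[n, rc, h + rh, w + rw] * weights[co, rc, rh, rw], axis=[rc, rh, rw]
    ),
    name="conv2d_nchw",  # block name the pass looks for
)
mod = tvm.IRModule({"main": te.create_prim_func([ifmap, weights, out])})
mod = MyAiHwConv2dPass()(mod)  # conv2d block becomes a call to my_ai_hw_conv2dnchw
print(mod)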
25 changes: 25 additions & 0 deletions apps/uma/_template/patterns.py
Original file line number Diff line number Diff line change
@@ -0,0 +1,25 @@
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
"""Relay graph patterns for the my_ai_hw accelerator"""

from tvm.relay.dataflow_pattern import is_op, wildcard


def conv2d_pattern():
pattern = is_op("nn.conv2d")(wildcard(), wildcard())
pattern = pattern.has_attr({"strides": [1, 1], "groups": 1})
return pattern
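A quick check of what the pattern accepts (illustrative shapes):

from tvm import relay
from patterns import conv2d_pattern

x = relay.var("x", shape=(1, 3, 16, 16), dtype="float32")
w = relay.var("w", shape=(4, 3, 3, 3), dtype="float32")

conv = relay.nn.conv2d(x, w, strides=(1, 1), padding=(1, 1))
assert conv2d_pattern().match(conv)  # stride-1, groups=1 conv2d matches

strided = relay.nn.conv2d(x, w, strides=(2, 2), padding=(1, 1))
assert not conv2d_pattern().match(strided)  # stride (2, 2) is rejected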