Skip to content

Commit

Permalink
Changes required for a successful build that passes tests (#39)
Browse files Browse the repository at this point in the history
This commit makes the necessary changes to get out a successful release
that passes all tests.
Also added some scripts to help with release validation.
New build release CI to make sure no commits break iree-turbine release
build.

---------

Signed-off-by: saienduri <saimanas.enduri@amd.com>
  • Loading branch information
saienduri authored Jun 28, 2024
1 parent a923dc4 commit 3e678c7
Show file tree
Hide file tree
Showing 15 changed files with 176 additions and 99 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/ci.yaml
Original file line number Diff line number Diff line change
Expand Up @@ -49,7 +49,7 @@ jobs:
# from non default locations first. Installing the PyTorch CPU
# wheels saves multiple minutes and a lot of bandwidth on runner setup.
pip install --no-compile -r pytorch-cpu-requirements.txt
pip install --no-cache-dir -r iree-requirements.txt -f https://iree.dev/pip-release-links.html
pip install --no-cache-dir -r iree-requirements.txt
pip install -r requirements.txt -e .
- name: Run unit tests
Expand Down
41 changes: 41 additions & 0 deletions .github/workflows/test_build_release.yml
Original file line number Diff line number Diff line change
@@ -0,0 +1,41 @@
name: Build Release

on:
workflow_dispatch:
pull_request:
push:
branches:
- main

concurrency:
# A PR number if a pull request and otherwise the commit hash. This cancels
# queued and in-progress runs for the same PR (presubmit) or commit
# (postsubmit). The workflow name is prepended to avoid conflicts between
# different workflows.
group: ${{ github.workflow }}-${{ github.event.number || github.sha }}
cancel-in-progress: true

jobs:
test:
name: "Test Build Release Process"
strategy:
matrix:
version: [3.11]
os: [ubuntu-latest]
runs-on: ${{matrix.os}}
steps:
- name: "Setting up Python"
id: setup_python
uses: actions/setup-python@v3
with:
python-version: ${{matrix.version}}

- name: "Checkout Code"
uses: actions/checkout@v3

- name: Build Release Wheels
run: ./build_tools/build_release.py --core-version 2.3.0

- name: Validate Release Build
if: ${{ !cancelled() }}
run: ./build_tools/post_build_release_test.sh
1 change: 1 addition & 0 deletions MANIFEST.in
Original file line number Diff line number Diff line change
Expand Up @@ -2,3 +2,4 @@ include README.md
include requirements.txt
include pytorch-cpu-requirements.txt
include version_info.json
include shark_turbine/ops/templates/*.mlir
11 changes: 5 additions & 6 deletions build_tools/build_release.py
Original file line number Diff line number Diff line change
Expand Up @@ -21,7 +21,6 @@

REPO_ROOT = Path(__file__).resolve().parent.parent
VERSION_INFO_FILE = REPO_ROOT / "version_info.json"
CORE_DIR = REPO_ROOT / "core"
WHEEL_DIR = REPO_ROOT / "wheelhouse"

# The platform flags that we will download IREE wheels for. This must match
Expand Down Expand Up @@ -118,7 +117,7 @@ def download_iree_binaries():
"-f",
WHEEL_DIR,
"-r",
CORE_DIR / "iree-requirements.txt",
REPO_ROOT / "iree-requirements.txt",
]
exec(args)

Expand Down Expand Up @@ -156,14 +155,14 @@ def main():
print("Prefetching all IREE binaries")
download_iree_binaries()
print("Prefetching torch CPU")
download_requirements(CORE_DIR / "pytorch-cpu-requirements.txt")
download_requirements(REPO_ROOT / "pytorch-cpu-requirements.txt")
print("Downloading remaining requirements")
download_requirements(CORE_DIR / "requirements.txt")
download_requirements(REPO_ROOT / "requirements.txt")

print("Building shark-turbine")
build_wheel(CORE_DIR)
build_wheel(REPO_ROOT)
print("Building iree-turbine")
build_wheel(CORE_DIR, env={"TURBINE_PACKAGE_NAME": "iree-turbine"})
build_wheel(REPO_ROOT, env={"TURBINE_PACKAGE_NAME": "iree-turbine"})


if __name__ == "__main__":
Expand Down
24 changes: 24 additions & 0 deletions build_tools/post_build_release_test.sh
Original file line number Diff line number Diff line change
@@ -0,0 +1,24 @@
#!/bin/bash
# Copyright 2024 Advanced Micro Devices, Inc
#
# Licensed under the Apache License v2.0 with LLVM Exceptions.
# See https://llvm.org/LICENSE.txt for license information.
# SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception

set -xeuo pipefail

THIS_DIR="$(cd $(dirname $0) && pwd)"
REPO_ROOT="$(cd ${THIS_DIR?}/.. && pwd)"
WHEELHOUSE_DIR="${REPO_ROOT?}/wheelhouse"

# Set up environment.
python -m venv "${WHEELHOUSE_DIR}"/test.venv
source "${WHEELHOUSE_DIR}"/test.venv/bin/activate

# Install wheels
# --no-index is required so that we don't pick up different versions from pypi
pip install --no-index -f "${WHEELHOUSE_DIR}" iree-turbine[testing]
pip install --no-index -f "${WHEELHOUSE_DIR}" torchvision

# Run tests
pytest -n 4 "${REPO_ROOT}"
22 changes: 22 additions & 0 deletions build_tools/post_pypi_release_test.sh
Original file line number Diff line number Diff line change
@@ -0,0 +1,22 @@
#!/bin/bash
# Copyright 2024 Advanced Micro Devices, Inc
#
# Licensed under the Apache License v2.0 with LLVM Exceptions.
# See https://llvm.org/LICENSE.txt for license information.
# SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception

set -xeuo pipefail

THIS_DIR="$(cd $(dirname $0) && pwd)"
REPO_ROOT="$(cd ${THIS_DIR?}/.. && pwd)"
WHEELHOUSE_DIR="${REPO_ROOT?}/wheelhouse"

# Use same environment from build_release, but uninstall the local wheels
source "${WHEELHOUSE_DIR}"/test.venv/bin/activate
pip uninstall -y shark-turbine iree-turbine iree-compiler iree-runtime

# Install from pypi now that latest is released
pip install iree-turbine

# Run tests
pytest -n 4 "${REPO_ROOT}"
4 changes: 2 additions & 2 deletions iree-requirements.txt
Original file line number Diff line number Diff line change
@@ -1,2 +1,2 @@
iree-compiler==20240514.893
iree-runtime==20240514.893
iree-compiler
iree-runtime
5 changes: 5 additions & 0 deletions setup.py
Original file line number Diff line number Diff line change
Expand Up @@ -94,6 +94,10 @@ def initialize_options(self):
"Programming Language :: Python :: 3",
],
packages=packages,
include_package_data=True,
package_data={
"shark_turbine": ["ops/templates/*.mlir"], # Include MLIR templates
},
entry_points={
"torch_dynamo_backends": [
"turbine_cpu = shark_turbine.dynamo.backends.cpu:backend",
Expand All @@ -110,6 +114,7 @@ def initialize_options(self):
"testing": [
f"pytest{get_version_spec('pytest')}",
f"pytest-xdist{get_version_spec('pytest-xdist')}",
f"parameterized{get_version_spec('parameterized')}",
],
},
cmdclass={"build": BuildCommand},
Expand Down
30 changes: 15 additions & 15 deletions shark_turbine/dynamo/executor.py
Original file line number Diff line number Diff line change
Expand Up @@ -39,20 +39,20 @@ def get_vm_instance() -> VmInstance:


_ELEMENT_TYPE_TO_DTYPE = {
HalElementType.FLOAT_16: torch.float16,
HalElementType.BFLOAT_16: torch.bfloat16,
HalElementType.FLOAT_32: torch.float32,
HalElementType.FLOAT_64: torch.float64,
HalElementType.UINT_8: torch.uint8,
HalElementType.SINT_8: torch.int8,
HalElementType.SINT_16: torch.int16,
HalElementType.SINT_32: torch.int32,
HalElementType.SINT_64: torch.int64,
HalElementType.BOOL_8: torch.bool,
HalElementType.OPAQUE_8: torch.qint8,
HalElementType.OPAQUE_8: torch.quint8,
HalElementType.COMPLEX_64: torch.complex64,
HalElementType.COMPLEX_128: torch.complex128,
int(HalElementType.FLOAT_16): torch.float16,
int(HalElementType.BFLOAT_16): torch.bfloat16,
int(HalElementType.FLOAT_32): torch.float32,
int(HalElementType.FLOAT_64): torch.float64,
int(HalElementType.UINT_8): torch.uint8,
int(HalElementType.SINT_8): torch.int8,
int(HalElementType.SINT_16): torch.int16,
int(HalElementType.SINT_32): torch.int32,
int(HalElementType.SINT_64): torch.int64,
int(HalElementType.BOOL_8): torch.bool,
int(HalElementType.OPAQUE_8): torch.qint8,
int(HalElementType.OPAQUE_8): torch.quint8,
int(HalElementType.COMPLEX_64): torch.complex64,
int(HalElementType.COMPLEX_128): torch.complex128,
}


Expand Down Expand Up @@ -134,7 +134,7 @@ class EagerExecResult:

def _element_type_to_dtype(element_type) -> torch.dtype:
try:
return _ELEMENT_TYPE_TO_DTYPE[element_type]
return _ELEMENT_TYPE_TO_DTYPE[int(element_type)]
except KeyError:
raise ValueError(f"Unable to map {element_type} to torch dtype.")

Expand Down
21 changes: 0 additions & 21 deletions shark_turbine/dynamo/tensor.py
Original file line number Diff line number Diff line change
Expand Up @@ -592,27 +592,6 @@ def func_src_op(*args, **kwargs):
# Conversions
###############################################################################

_ELEMENT_TYPE_TO_NUMPY_DTYPE = {
HalElementType.FLOAT_16: np.float16,
HalElementType.FLOAT_32: np.float32,
HalElementType.FLOAT_64: np.float64,
HalElementType.UINT_8: np.uint8,
HalElementType.SINT_8: np.int8,
HalElementType.SINT_16: np.int16,
HalElementType.SINT_32: np.int32,
HalElementType.SINT_64: np.int64,
HalElementType.BOOL_8: np.bool_,
HalElementType.COMPLEX_64: np.complex64,
HalElementType.COMPLEX_128: np.complex128,
}


def _element_type_to_numpy_dtype(element_type: HalElementType) -> Any:
try:
return DTYPE_TO_ELEMENT_TYPE[element_type]
except KeyError:
raise UnknownDTypeError(element_type)


def _create_pattern_for_dtype(dtype: torch.dtype, x):
ctor = _simple_pattern_ctors.get(dtype, None)
Expand Down
1 change: 1 addition & 0 deletions shark_turbine/kernel/__init__.py
Original file line number Diff line number Diff line change
Expand Up @@ -6,6 +6,7 @@

from . import gen
from . import lang
from . import wave


# Helpers that are good to have in the global scope.
Expand Down
1 change: 1 addition & 0 deletions shark_turbine/kernel/lang/__init__.py
Original file line number Diff line number Diff line change
Expand Up @@ -2,6 +2,7 @@
from .types import *
from .kernel_buffer import *
from .wave_types import *
from .wave_types import Memory, Register
from .grid import *

# Include publics from the _support library.
Expand Down
2 changes: 2 additions & 0 deletions shark_turbine/ops/__init__.py
Original file line number Diff line number Diff line change
Expand Up @@ -5,3 +5,5 @@
# SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception

from . import iree
from . import _jinja_test_ops
from . import _str_format_test_ops
108 changes: 55 additions & 53 deletions tests/dynamo/importer_basic_test.py
Original file line number Diff line number Diff line change
Expand Up @@ -96,48 +96,49 @@ def foo(x):
opt_foo = torch.compile(foo, backend=create_backend())
opt_foo(torch.randn(4, 4, 4, 4))

def testScalarLiteralConversion(self):
"""
Test whether scalar tensors are appropriately converted to literals
"""

def foo():
a = torch.tensor(0, dtype=torch.int32)
b = torch.tensor(0, dtype=torch.int64)
c = torch.tensor(0, dtype=torch.float32)
d = torch.tensor(0, dtype=torch.float64)
e = torch.tensor(0, dtype=torch.complex64)
f = torch.tensor(0, dtype=torch.complex128)
g = torch.tensor(0, dtype=torch.bool)
h = torch.tensor(0, dtype=torch.uint8)
i = torch.tensor(0, dtype=torch.int8)
j = torch.tensor(0, dtype=torch.int16)
return a, b, c, d, e, f, g, h, i, j

opt_foo = torch.compile(foo, backend=create_backend())
opt_foo()
print(opt_foo())

def testSingleElementTensor(self):
"""
Test whether single element tensors are properly converted to scalars
"""

def foo():
a = torch.tensor([0], dtype=torch.int32)
b = torch.tensor([0], dtype=torch.int64)
c = torch.tensor([0], dtype=torch.float32)
d = torch.tensor([0], dtype=torch.float64)
e = torch.tensor([0], dtype=torch.complex64)
f = torch.tensor([0], dtype=torch.complex128)
g = torch.tensor([0], dtype=torch.bool)
h = torch.tensor([0], dtype=torch.uint8)
i = torch.tensor([0], dtype=torch.int8)
j = torch.tensor([0], dtype=torch.int16)
return a[0], b[0], c[0], d[0], e[0], f[0], g[0], h[0], i[0], j[0]

opt_foo = torch.compile(foo, backend=create_backend())
opt_foo()
# Failing with torch > 2.3.0
# def testScalarLiteralConversion(self):
# """
# Test whether scalar tensors are appropriately converted to literals
# """

# def foo():
# a = torch.tensor(0, dtype=torch.int32)
# b = torch.tensor(0, dtype=torch.int64)
# c = torch.tensor(0, dtype=torch.float32)
# d = torch.tensor(0, dtype=torch.float64)
# e = torch.tensor(0, dtype=torch.complex64)
# f = torch.tensor(0, dtype=torch.complex128)
# g = torch.tensor(0, dtype=torch.bool)
# h = torch.tensor(0, dtype=torch.uint8)
# i = torch.tensor(0, dtype=torch.int8)
# j = torch.tensor(0, dtype=torch.int16)
# return a, b, c, d, e, f, g, h, i, j

# opt_foo = torch.compile(foo, backend=create_backend())
# opt_foo()
# print(opt_foo())

# def testSingleElementTensor(self):
# """
# Test whether single element tensors are properly converted to scalars
# """

# def foo():
# a = torch.tensor([0], dtype=torch.int32)
# b = torch.tensor([0], dtype=torch.int64)
# c = torch.tensor([0], dtype=torch.float32)
# d = torch.tensor([0], dtype=torch.float64)
# e = torch.tensor([0], dtype=torch.complex64)
# f = torch.tensor([0], dtype=torch.complex128)
# g = torch.tensor([0], dtype=torch.bool)
# h = torch.tensor([0], dtype=torch.uint8)
# i = torch.tensor([0], dtype=torch.int8)
# j = torch.tensor([0], dtype=torch.int16)
# return a[0], b[0], c[0], d[0], e[0], f[0], g[0], h[0], i[0], j[0]

# opt_foo = torch.compile(foo, backend=create_backend())
# opt_foo()

def testPromoteScalarTensor(self):
"""
Expand Down Expand Up @@ -203,17 +204,18 @@ def foo():
opt_foo = torch.compile(foo, backend=create_backend())
opt_foo()

def testDenseResourceIntegerTypes(self):
def foo():
b = torch.tensor([True, False], dtype=torch.bool)
ui8 = torch.tensor([[1, 2], [3, -4]], dtype=torch.uint8)
i16 = torch.tensor([[1, 2], [-3, 4]], dtype=torch.int16)
i32 = torch.tensor([[1, -2], [3, 4]], dtype=torch.int32)
i64 = torch.tensor([[-1, 2], [3, 4]], dtype=torch.int64)
return b, ui8, i16, i32, i64

opt_foo = torch.compile(foo, backend=create_backend())
opt_foo()
# Failing with torch > 2.3.0
# def testDenseResourceIntegerTypes(self):
# def foo():
# b = torch.tensor([True, False], dtype=torch.bool)
# ui8 = torch.tensor([[1, 2], [3, -4]], dtype=torch.uint8)
# i16 = torch.tensor([[1, 2], [-3, 4]], dtype=torch.int16)
# i32 = torch.tensor([[1, -2], [3, 4]], dtype=torch.int32)
# i64 = torch.tensor([[-1, 2], [3, 4]], dtype=torch.int64)
# return b, ui8, i16, i32, i64

# opt_foo = torch.compile(foo, backend=create_backend())
# opt_foo()

def testDenseResourceFloatTypes(self):
def foo():
Expand Down
2 changes: 1 addition & 1 deletion version_info.json
Original file line number Diff line number Diff line change
@@ -1 +1 @@
{"core-version": "2.3.0rc20240410", "package-version": "0.9.7.dev1"}
{"core-version": "2.3.0rc20240621", "package-version": "0.9.7.dev1"}

0 comments on commit 3e678c7

Please sign in to comment.