Llvm-backend update with Master (#1)
* Reimplemented parts of attribute query lowering and some optimizations

* Add a test case for mixing sparse/dense formats in matrix multiply

The test case does A=BC, and tries all permutations of Dense, CSR, CSC, and COO.
It is disabled for now; enable it once sparse output works.

* Add in hoisted workspace reuse and remove guard for divisible bound and split

* Fix some workspaces tests

* Use CUDA_LIBRARIES instead of hardcoding the path to libcudart

Hardcoded paths don't work when using Debian's packaged version of cuda,
as the library paths don't match.  CMake's find_package(CUDA) sets
CUDA_LIBRARIES to the path of libcudart, so just use that instead.

* Add TACO_NVCC var to complement TACO_NVCCFLAGS

This is useful for passing specific arguments to nvcc.  In my case,
I wanted to force nvcc to use a specific version of g++.

* Updated automated test workflow

* Updated automated test workflow

* fix -s arg parser

* Prototypes automatically generating code to have sparse iteration over a dense workspace

* don't run autoscheduling commands if a manual schedule is provided in the command-line tool. Fixes tensor-compiler#336

* fix fuse bound calculation, which was unnecessarily enlarged. Fixes tensor-compiler#337

* Fixes bugs in check for accelerating workspace

* Fixes bug in concreteNotation check. All workspace tests pass.

* Removes print statements

* fix handling of operator precedence in CUDA backend. Fixes tensor-compiler#338

* Only hoists malloc + free out of the where statement when possible. Emits a loop to zero every element in a temporary when it is hoisted before the producer is called. Changes the codegens to keep pointer names constant

* Fix build failures on ubuntu 16.04

* Fix python bindings when building with clang++-10

Fix a few instances of this build error in pytaco:
.../python_bindings/src/pyTensor.cpp:406:53: error: unknown type name 'nullptr_t'; did you mean 'std::nullptr_t'?

* Use exceptions for error reporting in all cases

Previously, exceptions were used only when the Python bindings were
enabled.  This meant that C++ applications could only handle errors
gracefully when the Python bindings were enabled.

Change it to consistently use exceptions in all cases.

* Adds negation to pytaco tensor interface

* Removes initialization loop from before producer when accelerating a dense workspace

* Places index list size above the producer loop when accelerating a dense workspace. This should make the transition to multithreading easier, and it fixes a bug in the original code

* Fixes workspace reset

* If underived variables are used to index a workspace, we allocate space for the workspace based on the sizes of the input tensors

* Relaxes requirements for spmm transformation

* Checks if first mode of last tensor has locate for spmm transform

* Changes SPMM transform requirement. Unsure about this

* Fix whitespace in tools/taco.cpp.  (No functional changes)

* Report an error properly in the taco CLI tool.

* Use the existing Lexer to parse scheduling directives

Add a schedule parser function.
Add test cases for the schedule parser function.
Use the function in the taco command-line tool.
Return usage messages when the user passes in the wrong number of parameters.

* Silence a warning about cmake policy CMP0054.

* lower,index_notation: fix compilation warnings

Fix a few compilation warnings caused by taking copies of loop variables
instead of references.

* index_notation,error: deduplicate dimension checking routines

Currently, there are two dimension checking methods in TACO. The first
returns a boolean, and the second returns a user-readable string
detailing the error. Both methods have nearly identical code. Therefore,
this commit merges them into a single function that returns a boolean
and the error, if it exists.

* lower: fix a bug causing undefined variables when applying fuse

Fixes tensor-compiler#355.

This commit fixes a bug where the fuse transformation would not generate
necessary locator variables when applied to iteration over two dense
variables.

* Revert "lower: fix a bug causing undefined variables when applying fuse"

* Add -help and -help=schedule parameters to CLI

* lower: fix a bug causing undefined variables when applying fuse

Fixes tensor-compiler#355.

This commit fixes a bug where the fuse transformation would not generate
necessary locator variables when applied to iteration over two dense
variables.

Additionally, this commit adds a test for when a dense iteration results
in a transposition of a tensor.

* Emit unsequenced insertion code

* Zeroless updates

* Emit code to use attribute query results during assembly

* include,src: introduce a true break statement, rename current to continue

The current `ir::Break` statement actually translates to a `continue`.
This commit renames this to `ir::Continue`, and adds a new `ir::Break`
node that actually translates to a `break`. This new node will be used
by upcoming windowing work.

* Don't emit append code if using ungrouped insertion

* Clear the needsCompile flag in tensor->compileSource()

Fixes tensor-compiler#366.

* Add an error message for invalid input tensor names.

* Fix warnings in python bindings

* tensor,codegen: fix a bug where kernel cache could be modified

This commit fixes a bug where upon recompilation of an index statement,
entries in the kernel cache could be inadvertently modified, leading to
confusing segfaults.

An example of the bug is included in the added test, where the second
call to `c(i, j) = a(i, j)` would hit the cache, but then find a module
that had code that corresponded to `c(i, j) = a(i, j) + b(i, j)`.

* Implemented assemble scheduling command + don't sort sparse accelerator if performing reduction

* Assume inputs are zeroless when computing attribute queries

* Replace workspaces in attribute queries

* Enable parallelization of forall statements with results assembled by ungrouped insertion

* Fixed various bugs

* Fixed various bugs

* Deleted redundant code

* Fix workspaces test on ubuntu 16.04

Fixes: tensor-compiler#380

* Add code coverage targets to cmake

* Fix warnings in Release builds

* Fixed attribute query compute code not being emitted + optimize computation of Boolean temporaries when always assigned true

* Emit init_edges code

* Added parallel SpGEMM test

* Fixed heuristic for inserting accelerators for workspaces indexed by derived index variables

* Removed debug print statements

* Updated CMake requirements

* Added correctness checks for ungrouped insertion

* Fix a bug in CLI parsing of bound()

This bug was introduced in tensor-compiler#352.

* Strengthened precondition for assemble command

* Remove pybind11

* Make pybind11 a submodule

* Modify cmake

* fix cmake

* Removes forcecast in function overload

* Add comment to python code explaining when conversions happen

* Don't emit atomic pragma for non-reduction assignments

* *: add support for windowing of tensors

This commit adds support for windowing of tensors in the existing index
notation DSL. For example:

```
A(i, j) = B(i(1, 4), j) * C(i, j(5, 10))
```

causes `B` to be windowed along its first mode, and `C` to be windowed
along its second mode. In this commit any mix of windowed and
non-windowed modes are supported, along with windowing the same tensor
in different ways in the same expression. The windowing expressions
correspond to the `:` operator to slice dimensions in `numpy`.

Currently, only windowing by integers is supported.

Windowing is achieved by tying windowing information to particular
`Iterator` objects, as these are created for each `Tensor`-`IndexVar`
pair. When iterating over an `Iterator` that may be windowed, extra
steps are taken to either generate an index into the windowed space, or
to recover an index from a point in the windowed space.

* Update CMake to pull python bindings during any build

* Add a SpTV+openmp+atomics test case for tensor-compiler#316

* Improve CI test coverage

Add a build step that covers the OpenMP and Python features.

Make it run `make test` to run all available test suites.

* Raise internal error if trying to generate code to assemble sparse accelerator in parallel

* *: add the ability to stride window access

This commit extends the windowing syntax to include an optional third
parameter to a window expression on an index variable:

```
a(i) = b(i(0, n, 5 /* stride */))
```

This stride parameter means that the window should be accessed along
the provided stride, which defaults to 1.

Striding is implemented with a similar idea as windowing, where
coordinates in the stride are mapped to a canonical index space of `[0,n)`.
For compressed modes, coordinates that don't match the stride are
skipped.

* Fixed various bugs

* Fixed removal of redundant loops

* lower: fix a bug when using OpenMP and windowing

Fixes tensor-compiler#409.

This commit fixes a bug where position loops parallelized with OpenMP
that operated over windowed tensor modes would fail to compile.

This commit also fixes some compilation errors compiling windowing tests
on Ubuntu.

* Unbreak cmake build of python bindings

* Remove redundant allocation

* *: add support for using arbitrary indexing sets to window tensors

This commit adds support for using vectors to index arbitrary dimensions
of tensors. It works by packing the vector into a sparse tensor, and
coiterating over the sparse tensor to efficiently filter the chosen
dimensions. The syntax of indexing sets looks as follows:

```
A(i) = B(i({1, 3, 5}))
```

which means that only elements 1, 3, and 5 from `B` will be used in the
computation.

* index_notation: implement the `divide` transformation

The divide transformation divides a loop up into `n` equal components,
whereas split breaks a loop up into components of size `n`.

It also enables support for the transformation in the TACO CLI.

* Enable CI tests for array_algebra branch

* Suppress GCC warnings

* Fixed heuristic for inserting sparse accelerator

* Revert "Fixed heuristic for inserting sparse accelerator"

This reverts commit 4e264ce.

* Fixed heuristic for inserting sparse accelerator

* Fix package version issue in CI tests

Run "apt-get update" to update the package list.

* cuda: fix windowing test with cuda

Fixes tensor-compiler#422.

This commit ensures that the allocation clearing logic is applied to
the CUDA backend as well. The windowing test caught this because TACO
was automatically parallelizing the loop onto the GPU.

* index_notation,tensor: small bugfixes for index sets

* Fixes a runtime error when using index sets on tensors not of integer types
* Fixes a compile error when using a vector typed variable as argument
  for an index set.

* Allow CLI precompute() to specify the workspace name

* Add tracking/reporting of build info

* CLI tool treats double hyphens as a single hyphen

* lowerer_impl: fix some striding bugs

Fixes some formula errors in the generated striding code and adds a
test that revealed them.

* Better error message for guarding unguardable loops

* Use full precision when IR printing float constants

The default precision when printing a floating point value is 6 digits.
This causes a lot of double values to get truncated. Print these with
full precision to avoid losing data.

* Don't emit redundant code to append edges when inserting into result

* index_notation: fix a bug where windows would be dropped through `+=`

Fixes tensor-compiler#451.

This commit fixes a bug where windows applied to index variables would
be dropped when assigned to via the `+=` operator.

* Fixed printing of scheduling commands in command-line tool output

* Fixed precompute transformation and attempt at fixing tensor-compiler#389. Also generate more optimized attribute query code for parallel sparse tensor addition

* Modified MTTKRP test to use schedule with precompute

* The assemble command no longer uses fresh index variables in inserted attribute query computations by default

* Fixed typo in command-line tool usage

* Fixed assemble command with dense arrays + improved heuristics for determining whether result needs to be explicitly zero-initialized

* Fixed how parallelize command checks for races

* Fixing merge issues

Co-authored-by: Stephen Chou <s3chou@csail.mit.edu>
Co-authored-by: Mark Glines <mark@glines.org>
Co-authored-by: Olivia Hsu <owhsu@stanford.edu>
Co-authored-by: Stephen Chou <stephenchouca@users.noreply.github.com>
Co-authored-by: roastduck <rd0x01@gmail.com>
Co-authored-by: Rawn <rawnhenry@gmail.com>
Co-authored-by: Ryan Senanayake <rsen@mit.edu>
Co-authored-by: Rohan Yadav <rohany@alumni.cmu.edu>
Co-authored-by: Changwan Hong <changwan@lanka.csail.mit.edu>
Co-authored-by: Rohan Yadav <rohany@cs.stanford.edu>
Co-authored-by: Sam Kaplan <sam@extreme-scale.com>
12 people authored May 27, 2021
1 parent 92f0a09 commit 731ed20
Showing 263 changed files with 6,648 additions and 40,417 deletions.
59 changes: 46 additions & 13 deletions .github/workflows/buildandtest.yml
@@ -4,15 +4,17 @@ on:
push:
branches:
- master
- array_algebra
pull_request:
branches:
- master
- array_algebra

jobs:
build-test-cpu:
name: builds taco for cpu and runs all gtest
name: builds taco with no options for cpu and runs all tests
runs-on: ubuntu-18.04

steps:
- uses: actions/checkout@v2
- name: create_build
@@ -21,14 +23,17 @@ jobs:
run: cmake ..
working-directory: build
- name: make
run: make -j8
run: make -j2
working-directory: build
- name: test
run: bin/taco-test
run: make test
env:
CTEST_OUTPUT_ON_FAILURE: 1
CTEST_PARALLEL_LEVEL: 2
working-directory: build

build-test-cpu-release:
name: builds taco release for cpu and runs all gtest
name: builds taco release for cpu and runs all tests
runs-on: ubuntu-18.04

steps:
@@ -39,28 +44,56 @@ jobs:
run: cmake -DCMAKE_BUILD_TYPE=Release ..
working-directory: build
- name: make
run: make -j8
run: make -j2
working-directory: build
- name: test
run: bin/taco-test
run: make test
env:
CTEST_OUTPUT_ON_FAILURE: 1
CTEST_PARALLEL_LEVEL: 2
working-directory: build

build-test-cpu-openmp-python-asserts:
name: builds taco with compile-time asserts, openmp, and python and runs all tests
runs-on: ubuntu-18.04

steps:
- uses: actions/checkout@v2
- name: apt-get update
run: sudo apt-get update
- name: install numpy and scipy
run: sudo DEBIAN_FRONTEND=noninteractive apt-get install -y python3-numpy python3-scipy
- name: create_build
run: mkdir build
- name: cmake
run: cmake -DCMAKE_BUILD_TYPE=RelWithDebInfo -DOPENMP=ON -DPYTHON=ON ..
working-directory: build
- name: make
run: make -j2
working-directory: build
- name: test
run: make test
env:
CTEST_OUTPUT_ON_FAILURE: 1
CTEST_PARALLEL_LEVEL: 2
working-directory: build

build-gpu:
name: build taco for gpu, but does not run tests
runs-on: ubuntu-18.04

steps:
- uses: actions/checkout@v2
- name: download cuda
run: wget http://developer.download.nvidia.com/compute/cuda/10.2/Prod/local_installers/cuda_10.2.89_440.33.01_linux.run
- name: install cuda
run: sudo sh cuda_10.2.89_440.33.01_linux.run --silent --toolkit --installpath="$GITHUB_WORKSPACE/cuda"
- name: add path
run: echo "::add-path::$GITHUB_WORKSPACE/cuda/bin"
run: echo "$GITHUB_WORKSPACE/cuda/bin" >> $GITHUB_PATH
- name: set ld_library_path
run: echo "::set-env name=LD_LIBRARY_PATH::$GITHUB_WORKSPACE/cuda/lib64"
- name: set library_path
run: echo "::set-env name=LIBRARY_PATH::$GITHUB_WORKSPACE/cuda/lib64"
run: echo "LD_LIBRARY_PATH=$GITHUB_WORKSPACE/cuda/lib64" >> $GITHUB_ENV
- name: set library_path
run: echo "LIBRARY_PATH=$GITHUB_WORKSPACE/cuda/lib64" >> $GITHUB_ENV
- name: print environment
run: |
echo ${PATH}
@@ -72,5 +105,5 @@ jobs:
run: cmake -DCUDA=ON ..
working-directory: build
- name: make
run: make -j8
run: make -j2
working-directory: build
3 changes: 3 additions & 0 deletions .gitmodules
@@ -0,0 +1,3 @@
[submodule "python_bindings/pybind11"]
path = python_bindings/pybind11
url = https://github.com/pybind/pybind11
96 changes: 92 additions & 4 deletions CMakeLists.txt
@@ -1,25 +1,41 @@
cmake_minimum_required(VERSION 2.8.12 FATAL_ERROR)
project(taco)
if(POLICY CMP0048)
cmake_policy(SET CMP0048 NEW)
endif()
if(POLICY CMP0054)
cmake_policy(SET CMP0054 NEW)
endif()
project(taco
VERSION 0.1
LANGUAGES C CXX
)
option(CUDA "Build for NVIDIA GPU (CUDA must be preinstalled)" OFF)
option(PYTHON "Build TACO for python environment" OFF)
option(OPENMP "Build with OpenMP execution support" OFF)
option(LLVM "Build with LLVM backend support" OFF)
option(ENABLE_TESTS "Enable tests" ON)

option(LLVM "Build with LLVM backend support")
option(ENABLE_TESTS "Enable tests" ON)
option(COVERAGE "Build with code coverage analysis" OFF)
set(TACO_FEATURE_CUDA 0)
set(TACO_FEATURE_OPENMP 0)
set(TACO_FEATURE_PYTHON 0)

if(CUDA)
message("-- Searching for CUDA Installation")
find_package(CUDA REQUIRED)
add_definitions(-DCUDA_BUILT)
set(TACO_FEATURE_CUDA 1)
endif(CUDA)
if(OPENMP)
message("-- Will use OpenMP for parallel execution")
add_definitions(-DUSE_OPENMP)
set(TACO_FEATURE_OPENMP 1)
endif(OPENMP)

if(PYTHON)
message("-- Will build Python extension")
add_definitions(-DPYTHON)
set(TACO_FEATURE_PYTHON 1)
endif(PYTHON)

if (LLVM)
@@ -83,15 +99,26 @@ set(CMAKE_LIBRARY_OUTPUT_DIRECTORY "${CMAKE_BINARY_DIR}/lib")
set(CMAKE_RUNTIME_OUTPUT_DIRECTORY "${CMAKE_BINARY_DIR}/bin")

set(OPTIMIZE "-O3" CACHE STRING "Optimization level")

if(CUDA)
set(C_CXX_FLAGS "$ENV{CXXFLAGS} -lcudart -Wall -Wextra -Wno-unused-parameter -Wno-missing-field-initializers -Wmissing-declarations -Woverloaded-virtual -pedantic-errors -Wno-deprecated")
else()
set(C_CXX_FLAGS "$ENV{CXXFLAGS} -Wall -Wextra -Wno-unused-parameter -Wno-missing-field-initializers -Wmissing-declarations -Woverloaded-virtual -pedantic-errors -Wno-deprecated")
endif(CUDA)

if(OPENMP)
set(C_CXX_FLAGS "-fopenmp ${C_CXX_FLAGS}")
endif(OPENMP)

if(COVERAGE)
find_program(PATH_TO_GCOVR gcovr REQUIRED)
# add coverage tooling to build flags
set(C_CXX_FLAGS "${C_CXX_FLAGS} -g -fprofile-arcs -ftest-coverage")
# name the coverage files "foo.gcno", not "foo.cpp.gcno"
set(CMAKE_CXX_OUTPUT_EXTENSION_REPLACE 1)
message("-- Code coverage analysis (gcovr) enabled")
endif(COVERAGE)

set(C_CXX_FLAGS "${C_CXX_FLAGS}")
set(CMAKE_C_FLAGS "${C_CXX_FLAGS}")
set(CMAKE_CXX_FLAGS "${C_CXX_FLAGS} -std=c++17")
@@ -107,7 +134,7 @@ include_directories(${TACO_INCLUDE_DIR})

set(TACO_LIBRARY_DIR ${CMAKE_LIBRARY_OUTPUT_DIRECTORY})

install(DIRECTORY ${TACO_INCLUDE_DIR}/ DESTINATION include)
install(DIRECTORY ${TACO_INCLUDE_DIR}/ DESTINATION include FILES_MATCHING PATTERN "*.h")

add_subdirectory(src)

@@ -119,8 +146,69 @@ endif()
add_subdirectory(tools)
add_subdirectory(apps)
string(REPLACE " -Wmissing-declarations" "" CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS}")

find_package(Git QUIET)
if(GIT_FOUND AND EXISTS "${TACO_PROJECT_DIR}/.git")
# Update submodules as needed
option(GIT_SUBMODULE "Check submodules during build" ON)
if(GIT_SUBMODULE)
message(STATUS "Submodule update")
execute_process(COMMAND ${GIT_EXECUTABLE} submodule update --init --recursive
WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}
RESULT_VARIABLE GIT_SUBMOD_RESULT)
if(NOT GIT_SUBMOD_RESULT EQUAL "0")
message(FATAL_ERROR "git submodule update --init failed with ${GIT_SUBMOD_RESULT}, please checkout submodules")
endif()
endif()
# get git revision
execute_process(
COMMAND ${GIT_EXECUTABLE} rev-parse --short HEAD
WORKING_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}
RESULT_VARIABLE GIT_REVPARSE_RESULT
OUTPUT_VARIABLE TACO_GIT_SHORTHASH
OUTPUT_STRIP_TRAILING_WHITESPACE
)
if(NOT GIT_REVPARSE_RESULT EQUAL "0")
message(NOTICE "'git rev-parse --short HEAD' failed with ${GIT_REVPARSE_RESULT}, git version info will be unavailable.")
set(TACO_GIT_SHORTHASH "")
endif()
else()
set(TACO_GIT_SHORTHASH "")
endif()

if(NOT EXISTS "${TACO_PROJECT_DIR}/python_bindings/pybind11/CMakeLists.txt")
message(FATAL_ERROR "The submodules were not downloaded! GIT_SUBMODULE was turned off or failed. Please update submodules and try again.")
endif()

if(PYTHON)
add_subdirectory(python_bindings)
message("-- Will build Python extension")
add_definitions(-DPYTHON)
endif(PYTHON)

set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wmissing-declarations")
add_custom_target(src DEPENDS apps)

if(COVERAGE)
# code coverage analysis target
add_custom_target(gcovr
COMMAND mkdir -p coverage
COMMAND ${CMAKE_MAKE_PROGRAM} test
WORKING_DIRECTORY ${CMAKE_BINARY_DIR}
)
add_custom_command(TARGET gcovr
COMMAND echo "Running gcovr..."
COMMAND ${PATH_TO_GCOVR} -r ${CMAKE_SOURCE_DIR} --html --html-details -o coverage/index.html ${CMAKE_BINARY_DIR}
COMMAND echo "See coverage/index.html for coverage information."
WORKING_DIRECTORY ${CMAKE_BINARY_DIR}
)
add_dependencies(gcovr taco-test)
if(PYTHON)
add_dependencies(gcovr core_modules)
endif(PYTHON)
set_property(DIRECTORY APPEND PROPERTY ADDITIONAL_MAKE_CLEAN_FILES coverage)
endif(COVERAGE)

string(TIMESTAMP TACO_BUILD_DATE "%Y-%m-%d")
configure_file("include/taco/version.h.in" "include/taco/version.h" @ONLY)
install(FILES "${CMAKE_BINARY_DIR}/include/taco/version.h" DESTINATION "include/taco")
19 changes: 19 additions & 0 deletions README.md
@@ -92,6 +92,25 @@ To run the Python test suite individually:
python3 build/python_bindings/unit_tests.py


## Code coverage analysis

To enable code coverage analysis, configure with `-DCOVERAGE=ON`. This requires
the `gcovr` tool to be installed in your PATH.

For best results, the build type should be set to `Debug`. For example:

cmake -DCMAKE_BUILD_TYPE=Debug -DCOVERAGE=ON ..

Then to run code coverage analysis:

make gcovr

This will run the test suite and produce some coverage analysis. This process
requires that the tests pass, so any failures must be fixed first.
If all goes well, coverage results will be output to the `coverage/` folder.
See `coverage/index.html` for a high level report, and click individual files
to see the line-by-line results.

# Library example

The following sparse tensor-times-vector multiplication example in C++
5 changes: 4 additions & 1 deletion apps/tensor_times_vector/CMakeLists.txt
@@ -1,4 +1,7 @@
cmake_minimum_required(VERSION 2.8)
cmake_minimum_required(VERSION 2.8.12)
if(POLICY CMP0048)
cmake_policy(SET CMP0048 NEW)
endif()
project(tensor_times_vector)

set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11")
7 changes: 5 additions & 2 deletions include/taco/codegen/module.h
@@ -5,6 +5,7 @@
#include <vector>
#include <string>
#include <utility>
#include <random>

#include "taco/target.h"
#include "taco/ir/ir.h"
@@ -21,8 +22,6 @@ class Module {
setJITTmpdir();
}

void reset();

/// Compile the source into a library, returning its full path
std::string compile();

@@ -82,6 +81,10 @@ class Module {

void setJITLibname();
void setJITTmpdir();

static std::string chars;
static std::default_random_engine gen;
static std::uniform_int_distribution<int> randint;
};

} // namespace ir
6 changes: 0 additions & 6 deletions include/taco/error.h
@@ -57,15 +57,9 @@ struct ErrorReport {
if (condition) {
return;
}
#ifdef PYTHON
explodeWithException();
#else
explode();
#endif
}

void explode();

void explodeWithException();
};

15 changes: 14 additions & 1 deletion include/taco/format.h
@@ -12,6 +12,8 @@ namespace taco {
class ModeFormat;
class ModeFormatPack;
class ModeFormatImpl;
class AttrQuery;
class IndexVar;


/// A Format describes the data layout of a tensor, and the sparse index data
@@ -95,7 +97,7 @@ class ModeFormat {
/// Properties of a mode format
enum Property {
FULL, NOT_FULL, ORDERED, NOT_ORDERED, UNIQUE, NOT_UNIQUE, BRANCHLESS,
NOT_BRANCHLESS, COMPACT, NOT_COMPACT
NOT_BRANCHLESS, COMPACT, NOT_COMPACT, ZEROLESS, NOT_ZEROLESS
};

/// Instantiates an undefined mode format
@@ -126,6 +128,7 @@ class ModeFormat {
bool isUnique() const;
bool isBranchless() const;
bool isCompact() const;
bool isZeroless() const;

/// Returns true if a mode format has a specific capability, false otherwise
bool hasCoordValIter() const;
@@ -134,6 +137,16 @@ bool hasInsert() const;
bool hasInsert() const;
bool hasAppend() const;

/// Returns true if a mode format has ungrouped insertion functions with
/// specific attributes, false otherwise
bool hasSeqInsertEdge() const;
bool hasInsertCoord() const;
bool isYieldPosPure() const;

std::vector<AttrQuery> getAttrQueries(
std::vector<IndexVar> parentCoords,
std::vector<IndexVar> childCoords) const;

/// Returns true if mode format is defined, false otherwise. An undefined mode
/// type can be used to indicate a mode whose format is not (yet) known.
bool defined() const;