Add einsum, closes #124 (#363)
* WIP commit of `einsum` implementation

- full contraction is working
- transposition is working

- hopefully more? The above two are covered by tests
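For illustration, a sketch of what these two features look like at the call site (syntax follows the final einsum docstring; treat the exact spelling as an assumption):

```nim
import arraymancer

let a = [[1.0, 2.0], [3.0, 4.0]].toTensor

# transposition via an explicit assignment: indices swapped on the LHS
let at = einsum(a):
  at[j, i] = a[i, j]

# full contraction: both indices are repeated, so the product is
# summed over all axes, yielding a scalar
let s = einsum(a):
  a[i, j] * a[i, j]
```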

* allow contraction of a single axis

* add explicit and implicit matrix vector multiplication
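As a usage sketch (identifiers and call syntax are assumptions based on the final docstring): explicit mode names the result indices on the LHS, while implicit mode derives them from the free indices.

```nim
import arraymancer

let m = [[1.0, 2.0], [3.0, 4.0]].toTensor
let v = [1.0, 2.0].toTensor

# explicit: the LHS fixes the result indices
let w1 = einsum(m, v):
  w1[i] = m[i, j] * v[j]

# implicit: `j` appears twice and is contracted, `i` remains free
let w2 = einsum(m, v):
  m[i, j] * v[j]
```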

* WIP commit towards a better approach for mapping idx idents to shapes

* fix transposition by assigning indices in order of appearance in the stmt

* WIP commit making all tests work

All features are now working. The code is a mess though :)

* remove a bunch of dead code / debug echoes

* add reference to where tests come from

* only import `tensor`

* move einsum tests to `tensor` subdirectory

* add einsum test to test suite

* fix import path of arraymancer in einsum test

* refactor out statement split

* add a note about `getTensors`

* add check on the number of einsum statements

* add a reference test file for tests designed to fail

* add `TensorIdx` object to combine tensor and its indices

Also refactor the shape assertions out into their own proc.

* remove unnecessary indexing variable in axes iterations

* add `enumerateIdx` iterator to yield indices of `seq[TensorIdx]`

* only generate the `shapesContr` variable if contracting at least 1 axis

* refactor code further, simplify index / result shape mapping

* use LHS explicit assignment as `let` stmt

Introduces a probably questionable interpretation of the explicit macro usage
by assigning the explicit case to a `let` variable of the chosen identifier.

Unfortunately the following is invalid syntax:
```nim
einsum(a):
  let b[j,i] = a[i,j]
```
otherwise we could let the user choose.
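A sketch of the interpretation this commit settles on (note it is reverted again a few commits below): the macro itself emits the `let` binding for the identifier chosen on the LHS.

```nim
import arraymancer

let a = [[1.0, 2.0], [3.0, 4.0]].toTensor

# under this commit, the macro expands to `let b = ...`,
# so `b` becomes available in the enclosing scope
einsum(a):
  b[j, i] = a[i, j]
echo b   # transposed `a`
```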

* remove now useless `findAxes`

* remove dead line in `failed` test file

* perform a type check of all tensors, use that type as result type

The types of all tensors given as arguments to `einsum` must
match. Thus we check whether they are all the same. If they are, we
use the type of the first tensor.
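A sketch of the intended behavior (hypothetical values; the mechanism of the check is internal to the macro):

```nim
import arraymancer

let x = [[1.0, 2.0], [3.0, 4.0]].toTensor   # Tensor[float]
let y = [[1.0, 0.0], [0.0, 1.0]].toTensor   # Tensor[float]

# both arguments share the element type, so the result is Tensor[float]
let c = einsum(x, y):
  c[i, k] = x[i, j] * y[j, k]

# mixing e.g. a Tensor[int] with a Tensor[float] is rejected at compile time
```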

* revert returning a let section for explicit einsum

In both cases `einsum` again just returns a tensor.

* create local contiguous, row major tensors for more efficient iteration

Instead of working with the given tensors, we now create local copies,
which are made contiguous and (if required) converted to row major
order. This way our iteration should be more efficient when column
major tensors are passed in.
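A minimal sketch of this normalization step, assuming arraymancer's `asContiguous` (treat the exact signature and `force` flag as assumptions):

```nim
import arraymancer

let t = [[1.0, 2.0], [3.0, 4.0]].toTensor.transpose   # non row major view
# force a contiguous, row major local copy before iterating
let tmp = t.asContiguous(rowMajor, force = true)
```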

* add example to docstring

* add some documentation to the top of the file

* fix two links in the documentation

* reverse the order of `idxIdentPairs` for correct loop order

This way the rightmost indices of the accessor become the innermost
loops. Since we force row major ordering, those elements are closest
together in memory.
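A hypothetical expansion for `res[i, k] = a[i, j] * b[j, k]` illustrating the ordering (the nesting of the contracted axis is an assumption):

```nim
import arraymancer

let a = [[1.0, 2.0], [3.0, 4.0]].toTensor
let b = [[5.0, 6.0], [7.0, 8.0]].toTensor
var res = zeros[float](2, 2)

# the rightmost accessor index `k` maps to the innermost result loop,
# so consecutive iterations touch adjacent row major elements
for i in 0 ..< 2:
  for k in 0 ..< 2:
    for j in 0 ..< 2:   # contracted axis
      res[i, k] = res[i, k] + a[i, j] * b[j, k]
```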

* use `genSym` for unique symbols in the macro block

This avoids problems if the user passes a tensor named `tmp`,
`shape`, `shapeContr` or `res`.
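A minimal, self-contained sketch of the hygiene technique (a toy macro, not the einsum code itself):

```nim
import macros

macro addOne(x: typed): untyped =
  # `genSym` creates a fresh symbol that cannot collide with user identifiers
  let tmp = genSym(nskLet, "tmp")
  result = quote do:
    let `tmp` = `x` + 1
    `tmp`

let tmp = 41        # a user variable named `tmp` does not clash
echo addOne(tmp)    # 42
```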

* move type declarations to top of file, add short comment

* clean up sub type gen, use new type for contiguous tensors

* take into account tensor type for scalar result

* comment about scalar result type, arbitrariness of LHS for explicit
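A sketch of the scalar case (assuming a full contraction returns a plain scalar of the element type, per the commit above):

```nim
import arraymancer

let a = [[1, 2], [3, 4]].toTensor   # Tensor[int]

# every index appears twice -> full contraction, scalar of type `int`
let s = einsum(a):
  a[i, j] * a[i, j]
echo s   # 1 + 4 + 9 + 16 = 30
```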
Vindaar authored and mratsim committed Jun 30, 2019
1 parent 25cf5e3 commit 686836e
Showing 6 changed files with 874 additions and 3 deletions.
6 changes: 4 additions & 2 deletions src/arraymancer.nim
```diff
@@ -22,7 +22,8 @@ import ./tensor/tensor,
        ./io/io,
        ./ml/ml,
        ./stats/stats,
-       ./nlp/nlp
+       ./nlp/nlp,
+       ./tensor/einsum
 
 export tensor,
        nn_primitives,
@@ -34,7 +35,8 @@ export tensor,
        io,
        ml,
        stats,
-       nlp
+       nlp,
+       einsum
 
 when not defined(no_lapack):
   # The ml module also does not export everything if LAPACK is not available
```
