Switch to FInAT #86

Merged (119 commits, Dec 12, 2016)

Commits
150e6a5
gem: nascent Delta support
miklos1 Apr 11, 2016
b04ba80
Merge pull request #32 into firedrakeproject/delta
miklos1 Apr 11, 2016
f7367aa
Merge branch 'master' into finat
miklos1 Apr 25, 2016
e3c3e33
add extent= to gem.Index constructor
miklos1 Apr 25, 2016
a2b54ce
add FInAT interface
miklos1 Apr 25, 2016
d5f9243
WIP: FInAT integration
miklos1 Apr 25, 2016
c5d3ada
AffineIndex and IndexIterator
dham Apr 26, 2016
800ddff
FInAT dependency
dham Apr 27, 2016
a85e6ba
Slightly more idiomatic Python
dham Apr 27, 2016
657f58b
Merge pull request #34 from firedrakeproject/affine_indices
miklos1 Apr 28, 2016
c852159
Merge branch 'master' into finat
miklos1 May 3, 2016
aa68adb
Merge branch 'master' into finat
miklos1 May 12, 2016
879ba92
adopt PyOP2 flatten=False
miklos1 May 12, 2016
6ef5819
support arguments with multiple indices
miklos1 May 12, 2016
6eeb6e0
finatinterface: RT, BDM, BDFM
miklos1 May 12, 2016
f980a69
clean up fem.py
miklos1 May 12, 2016
54c6a02
purge mixed elements
miklos1 May 12, 2016
63ae895
fix flake8
miklos1 May 12, 2016
f66e889
FInAT is a project on its own
miklos1 May 12, 2016
e3afef8
WIP: support for facet integrals
miklos1 May 23, 2016
a3d5c76
fix kernel interface for interior facet integrals
miklos1 May 23, 2016
3702039
add FIAT compatibility layer
miklos1 May 23, 2016
fd3cefc
fix RTCF & co
miklos1 May 23, 2016
8562a5a
Merge branch 'master' into finat
miklos1 Jun 30, 2016
563f92c
Merge branch 'master' into finat
miklos1 Jul 25, 2016
2502314
Merge branch 'master' into finat
miklos1 Sep 12, 2016
49d80bb
Merge branch 'master' into finat
miklos1 Sep 12, 2016
040d0e2
Merge branch 'master' into finat-old
miklos1 Sep 15, 2016
1458f67
fix ordering of argument indices
miklos1 Sep 13, 2016
458c851
remove broken references to MixedElement
miklos1 Sep 13, 2016
bf3596c
adopt generic TensorFiniteElement in FInAT
miklos1 Sep 21, 2016
cdccc97
Merge branch 'master' into finat-old
miklos1 Sep 21, 2016
eba153c
Merge branch 'master' into finat-old
miklos1 Sep 22, 2016
7655cf3
re-add FFC-style rounding
miklos1 Sep 22, 2016
629e12e
fix create_element testing
miklos1 Sep 23, 2016
1cbf1e5
Merge branch 'master' into finat
miklos1 Sep 23, 2016
21dbf15
fix facet integral performance issue
miklos1 Sep 26, 2016
40e4ed9
Merge branch 'master' into finat
miklos1 Sep 26, 2016
e646294
Merge branch 'master' into finat
miklos1 Oct 18, 2016
2d7c707
Use FInAT quadrature
dham Aug 11, 2016
1e80121
remove facet quadrature hack
dham Aug 12, 2016
c9567ab
WIP: FInAT API redesign
miklos1 Oct 18, 2016
f831aae
FInAT works?
miklos1 Oct 18, 2016
0d968e0
use restore_shape in CellCoordinate
miklos1 Oct 19, 2016
72f0aa3
add gem.index_sum
miklos1 Oct 19, 2016
a70a72f
remove dangling method
miklos1 Oct 19, 2016
d32edf0
Merge pull request #69 from firedrakeproject/finat-quadrature
miklos1 Oct 20, 2016
7412354
adopt new return value convention
miklos1 Oct 20, 2016
00a1c9e
use reconstruct because of ListTensor
miklos1 Oct 20, 2016
41b9f76
remove/inline Context.point_multiindex
miklos1 Oct 20, 2016
3551f66
be more backwards compatible
miklos1 Oct 20, 2016
03b8b21
no more restore_shape
miklos1 Oct 20, 2016
c38f3ae
wire up FInAT interface to TensorProductElement
miklos1 Oct 20, 2016
1d7ce21
cause not to trigger coneoproject/COFFEE#97
miklos1 Oct 21, 2016
24cb325
enable structured Q/DQ element on quadrilaterals
miklos1 Oct 21, 2016
b3b0ddb
Merge pull request #70 from firedrakeproject/finat-constant
miklos1 Oct 21, 2016
863613d
push QuadrilateralElement over to FInAT
miklos1 Oct 21, 2016
b32e258
adopt the more FIAT-like API of FInAT
miklos1 Oct 24, 2016
88b0a2d
quad_opc -> quad_tpc
miklos1 Oct 24, 2016
672a50d
Merge pull request #72 from firedrakeproject/finat-fiat-api
miklos1 Oct 24, 2016
7c0a163
implement FFC rounding on GEM expressions
miklos1 Nov 4, 2016
cc44792
remove temporary FFC rounding from COFFEE generation
miklos1 Nov 4, 2016
aeed9c4
wire in GEM rounding for FInAT expressions
miklos1 Nov 4, 2016
9a10712
drop scalar rounding in COFFEE code generation
miklos1 Nov 4, 2016
3410e9b
add comments
miklos1 Nov 4, 2016
77ded72
Merge branch 'master' into finat
miklos1 Nov 4, 2016
be89d06
Merge branch 'master' into finat
miklos1 Nov 16, 2016
6d76750
fix #pragma coffee linear loop
miklos1 Oct 25, 2016
d508d89
delta elimination and sum factorisation
miklos1 Oct 24, 2016
106fd02
do not generate empty for loops
miklos1 Oct 24, 2016
3b31592
IndexSum with multiindex
miklos1 Oct 24, 2016
7e24f0d
little clean up
miklos1 Nov 16, 2016
5c9c169
add docstrings
miklos1 Nov 16, 2016
1485550
limit O(N!) algorithm to N <= 5
miklos1 Nov 21, 2016
2418834
log warning for unexpected case
miklos1 Nov 21, 2016
e385969
add more comments
miklos1 Nov 21, 2016
778a69f
avoid accidental float result (1.0)
miklos1 Nov 22, 2016
48875f0
revise coefficient evaluation
miklos1 Nov 24, 2016
a65bdfb
improve constant folding
miklos1 Nov 24, 2016
b0fe22a
generate fast Jacobian code snippets
miklos1 Nov 24, 2016
629b392
remove Delta * ListTensor factorisation
miklos1 Nov 24, 2016
f78545b
use one in gem.optimise
miklos1 Nov 25, 2016
e6aec8f
replace sum factorisation algorithm
miklos1 Nov 25, 2016
ac43087
clean up sum factorisation
miklos1 Nov 25, 2016
4eea68d
use setdefault
miklos1 Nov 28, 2016
e73fceb
remove redundant line
miklos1 Nov 28, 2016
0098235
Merge pull request #81 from firedrakeproject/finat-sum-revised
miklos1 Nov 30, 2016
7ede136
Merge branch 'master' into finat
miklos1 Nov 30, 2016
30c3c92
revise the rounding of FIAT tables
miklos1 Dec 6, 2016
fbe7ea4
revise expression selection
miklos1 Dec 7, 2016
1c26094
restore automatic unrolling of length one sums
miklos1 Dec 7, 2016
cfa6f7c
Merge pull request #83 from firedrakeproject/finat-foo
miklos1 Dec 7, 2016
4e11208
adopt FInAT finite element construction changes
miklos1 Dec 7, 2016
53cf86d
remove affine index groups
miklos1 Dec 7, 2016
4f27e04
remove some no longer used code
miklos1 Dec 7, 2016
f5bb4e4
do not duplicate work
miklos1 Dec 7, 2016
bf03554
Merge pull request #84 from firedrakeproject/finat-cleanup
miklos1 Dec 8, 2016
6b32b56
restore FFC MixedElement
miklos1 Nov 22, 2016
441ee70
ugly and suboptimal but works
miklos1 Nov 23, 2016
5c7ffc6
refactor ufc.prepare_coefficient
miklos1 Dec 5, 2016
1b7f913
coefficient splitting for UFC
miklos1 Dec 6, 2016
3f4a5bc
test gem.reshape (with FlexiblyIndexed)
miklos1 Dec 8, 2016
7c80b1b
change the internal representation of FlexiblyIndexed
miklos1 Dec 8, 2016
80e31e8
generalised gem.view
miklos1 Dec 8, 2016
547833c
generalise gem.reshape
miklos1 Dec 8, 2016
8378f52
fix assertions
miklos1 Dec 8, 2016
abc7682
update COFFEE code generation for FlexiblyIndexed
miklos1 Dec 8, 2016
54156e4
WIP: update Firedrake kernel interface
miklos1 Dec 8, 2016
9935d40
WIP: update UFC kernel interface
miklos1 Dec 8, 2016
4e87c82
do away with UFC override of coefficient
miklos1 Dec 8, 2016
e30f137
revise Firedrake kernel interface
miklos1 Dec 9, 2016
b0d803c
revise UFC arguments interface
miklos1 Dec 9, 2016
500f43e
revise UFC coefficients interface
miklos1 Dec 9, 2016
343cd92
comment no longer applies
miklos1 Dec 9, 2016
d5d82cb
factor out common code of gem.view and gem.reshape
miklos1 Dec 9, 2016
68a585b
Merge pull request #85 from firedrakeproject/finat-ufc
miklos1 Dec 12, 2016
39512e0
add error messages
miklos1 Dec 12, 2016
1c255f6
fix typo
miklos1 Dec 12, 2016
2d6909e
update docstrings: FIAT -> FInAT
miklos1 Dec 12, 2016
1 change: 1 addition & 0 deletions gem/__init__.py
@@ -1,3 +1,4 @@
from __future__ import absolute_import, print_function, division

from gem.gem import * # noqa
from gem.optimise import select_expression # noqa
251 changes: 190 additions & 61 deletions gem/gem.py
@@ -16,12 +16,10 @@

from __future__ import absolute_import, print_function, division
from six import with_metaclass
from six.moves import map
from six.moves import range, zip

from abc import ABCMeta
import collections
from itertools import chain
import numbers
from operator import attrgetter

import numpy
@@ -34,9 +32,9 @@
'Variable', 'Sum', 'Product', 'Division', 'Power',
'MathFunction', 'MinValue', 'MaxValue', 'Comparison',
'LogicalNot', 'LogicalAnd', 'LogicalOr', 'Conditional',
'Index', 'VariableIndex', 'Indexed', 'FlexiblyIndexed',
'ComponentTensor', 'IndexSum', 'ListTensor', 'as_gem',
'partial_indexed', 'reshape']
'Index', 'VariableIndex', 'Indexed', 'ComponentTensor',
'IndexSum', 'ListTensor', 'Delta', 'index_sum',
'partial_indexed', 'reshape', 'view']


class NodeMeta(type):
@@ -129,10 +127,6 @@ def value(self):
assert not self.shape
return 0.0

@property
def array(self):
return numpy.zeros(self.shape)


class Identity(Constant):
"""Identity matrix"""
@@ -206,12 +200,15 @@ def __new__(cls, a, b):
assert not a.shape
assert not b.shape

# Zero folding
# Constant folding
if isinstance(a, Zero):
return b
elif isinstance(b, Zero):
return a

if isinstance(a, Constant) and isinstance(b, Constant):
return Literal(a.value + b.value)

self = super(Sum, cls).__new__(cls)
self.children = a, b
return self
@@ -224,10 +221,18 @@ def __new__(cls, a, b):
assert not a.shape
assert not b.shape

# Zero folding
# Constant folding
if isinstance(a, Zero) or isinstance(b, Zero):
return Zero()

if a == one:
return b
if b == one:
return a

if isinstance(a, Constant) and isinstance(b, Constant):
return Literal(a.value * b.value)

self = super(Product, cls).__new__(cls)
self.children = a, b
return self
@@ -240,12 +245,18 @@ def __new__(cls, a, b):
assert not a.shape
assert not b.shape

# Zero folding
# Constant folding
if isinstance(b, Zero):
raise ValueError("division by zero")
if isinstance(a, Zero):
return Zero()

if b == one:
return a

if isinstance(a, Constant) and isinstance(b, Constant):
return Literal(a.value / b.value)

self = super(Division, cls).__new__(cls)
self.children = a, b
return self
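The Sum, Product, and Division hunks above all introduce the same constant-folding pattern: `__new__` inspects the operands and may return an already-simplified node instead of allocating an operator node. A minimal standalone sketch of the Product case (hypothetical stand-in `Node`/`Literal`/`Zero` classes, not gem's actual ones, and using object identity with the shared `one` instead of gem's equality check):

```python
# Hypothetical stand-ins for gem's node classes, just to show the
# constant-folding pattern: __new__ may return a simplified node
# instead of building an operator node.

class Node(object):
    pass


class Literal(Node):
    def __init__(self, value):
        self.value = value


class Zero(Literal):
    def __init__(self):
        super(Zero, self).__init__(0.0)


one = Literal(1)  # shared "one" object, as in the diff


class Product(Node):
    def __new__(cls, a, b):
        # Zero annihilates the product
        if isinstance(a, Zero) or isinstance(b, Zero):
            return Zero()
        # Multiplying by the shared `one` object is the identity
        if a is one:
            return b
        if b is one:
            return a
        # Two constants fold into a single literal
        if isinstance(a, Literal) and isinstance(b, Literal):
            return Literal(a.value * b.value)
        self = super(Product, cls).__new__(cls)
        self.children = (a, b)
        return self


assert Product(Literal(2), Literal(3)).value == 6   # folded to Literal(6)
x = Literal(5)
assert Product(one, x) is x                         # identity eliminated
assert isinstance(Product(Zero(), x), Zero)         # zero annihilates
```

Because the folding happens in `__new__`, callers never observe a `Product` node whose operands were both constants; the simplification is applied before the node exists.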
@@ -264,7 +275,7 @@ def __new__(cls, base, exponent):
raise ValueError("cannot solve 0^0")
return Zero()
elif isinstance(exponent, Zero):
return Literal(1)
return one

self = super(Power, cls).__new__(cls)
self.children = base, exponent
@@ -438,6 +449,10 @@ def __new__(cls, aggregate, multiindex):
if isinstance(index, Index):
index.set_extent(extent)

# Empty multiindex
if not multiindex:
return aggregate

# Zero folding
if isinstance(aggregate, Zero):
return Zero()
@@ -474,7 +489,7 @@ def __init__(self, variable, dim2idxs):

For example, if ``variable`` is rank two, and ``dim2idxs`` is

((1, ((i, 2), (j, 3), (k, 4))), (0, ()))
((1, ((i, 12), (j, 4), (k, 1))), (0, ()))

then this corresponds to the indexing:

@@ -484,31 +499,32 @@
assert isinstance(variable, Variable)
assert len(variable.shape) == len(dim2idxs)

indices = []
dim2idxs_ = []
free_indices = []
for dim, (offset, idxs) in zip(variable.shape, dim2idxs):
strides = []
offset_ = offset
idxs_ = []
last = 0
for idx in idxs:
index, stride = idx
strides.append(stride)

if isinstance(index, Index):
if index.extent is None:
index.set_extent(stride)
elif not (index.extent <= stride):
raise ValueError("Index extent cannot exceed stride")
indices.append(index)
assert index.extent is not None
free_indices.append(index)
idxs_.append((index, stride))
last += (index.extent - 1) * stride
elif isinstance(index, int):
if not (index <= stride):
raise ValueError("Index cannot exceed stride")
offset_ += index * stride
else:
raise ValueError("Unexpected index type for flexible indexing")

if dim is not None and offset + numpy.prod(strides) > dim:
if dim is not None and offset_ + last >= dim:
raise ValueError("Offset {0} and indices {1} exceed dimension {2}".format(offset, idxs, dim))

dim2idxs_.append((offset_, tuple(idxs_)))

self.children = (variable,)
self.dim2idxs = dim2idxs
self.free_indices = unique(indices)
self.dim2idxs = tuple(dim2idxs_)
self.free_indices = unique(free_indices)


class ComponentTensor(Node):
@@ -543,26 +559,36 @@ def __new__(cls, expression, multiindex):


class IndexSum(Scalar):
__slots__ = ('children', 'index')
__back__ = ('index',)
__slots__ = ('children', 'multiindex')
__back__ = ('multiindex',)

def __new__(cls, summand, index):
def __new__(cls, summand, multiindex):
# Sum zeros
assert not summand.shape
if isinstance(summand, Zero):
return summand

# Sum a single expression
if index.extent == 1:
return Indexed(ComponentTensor(summand, (index,)), (0,))
# Unroll singleton sums
unroll = tuple(index for index in multiindex if index.extent <= 1)
if unroll:
assert numpy.prod([index.extent for index in unroll]) == 1
summand = Indexed(ComponentTensor(summand, unroll),
(0,) * len(unroll))
multiindex = tuple(index for index in multiindex
if index not in unroll)

# No indices case
multiindex = tuple(multiindex)
if not multiindex:
return summand

self = super(IndexSum, cls).__new__(cls)
self.children = (summand,)
self.index = index
self.multiindex = multiindex

# Collect shape and free indices
assert index in summand.free_indices
self.free_indices = unique(set(summand.free_indices) - {index})
assert set(multiindex) <= set(summand.free_indices)
self.free_indices = unique(set(summand.free_indices) - set(multiindex))

return self

@@ -621,6 +647,31 @@ def get_hash(self):
return hash((type(self), self.shape, self.children))


class Delta(Scalar, Terminal):
__slots__ = ('i', 'j')
__front__ = ('i', 'j')

def __new__(cls, i, j):
assert isinstance(i, IndexBase)
assert isinstance(j, IndexBase)

# \delta_{i,i} = 1
if i == j:
return one

# Fixed indices
if isinstance(i, int) and isinstance(j, int):
return Literal(int(i == j))

self = super(Delta, cls).__new__(cls)
self.i = i
self.j = j
# Set up free indices
free_indices = tuple(index for index in (i, j) if isinstance(index, Index))
self.free_indices = tuple(unique(free_indices))
Review comment (Contributor): The constant folding of i == j makes the uniquifying unnecessary? Unless indices hash differently from equality?

Review comment (Contributor): Oh, you care about ordering, making Delta(j, i) and Delta(i, j) have the same ordering of free_indices by calling unique here?

Reply (Member, Author): This has evolved from numpy.unique, which sorts and removes duplicates. The idea is to make x.free_indices == y.free_indices work (without having to cast to sets first).
return self
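The behaviour discussed in that thread can be illustrated with a toy version of `unique` (a stand-in `Index` class, not gem's): sorting by `id()` gives a canonical ordering within a process, so expressions that collect the same indices in different orders report identical `free_indices` tuples, and tuple equality can stand in for set equality.

```python
# Toy sketch of canonical free-index ordering (stand-in Index class).

class Index(object):
    pass


def unique(indices):
    # Deduplicate, then impose a canonical (id-based) order
    return tuple(sorted(set(indices), key=id))


i, j = Index(), Index()

# Delta(i, j) and Delta(j, i) collect their indices in opposite order,
# but unique() canonicalises both to the same tuple:
assert unique((i, j)) == unique((j, i))
assert unique((i, i, j)) == unique((j, i))  # duplicates removed too
```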


def unique(indices):
"""Sorts free indices and eliminates duplicates.

@@ -630,18 +681,13 @@
return tuple(sorted(set(indices), key=id))


def as_gem(expr):
"""Cast an expression to GEM."""
if isinstance(expr, Node):
return expr
elif isinstance(expr, numbers.Number):
return Literal(expr)
elif isinstance(expr, numpy.ndarray) and expr.dtype != object:
return Literal(expr)
elif isinstance(expr, collections.Iterable):
return ListTensor(list(map(as_gem, expr)))
else:
raise ValueError("Cannot cast %s to GEM." % (expr,))
def index_sum(expression, indices):
"""Eliminates indices from the free indices of an expression by
summing over them. Skips any index that is not a free index of
the expression."""
multiindex = tuple(index for index in indices
if index in expression.free_indices)
return IndexSum(expression, multiindex)


def partial_indexed(tensor, indices):
@@ -667,20 +713,103 @@ def partial_indexed(tensor, indices):
raise ValueError("More indices than rank!")


def reshape(variable, *shapes):
def strides_of(shape):
"""Calculate cumulative strides from per-dimension capacities.

For example:

[2, 3, 4] ==> [12, 4, 1]

"""
temp = numpy.flipud(numpy.cumprod(numpy.flipud(list(shape)[1:])))
return list(temp) + [1]
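The docstring example can be checked directly, together with the property that makes these strides useful: they turn a multi-index into a flat row-major offset. A quick numpy check (same formula as `strides_of` above):

```python
# Check of the strides_of convention: the docstring example, plus the
# flat-offset property the strides are used for.

import numpy as np


def strides_of(shape):
    # Stride of each dimension = product of the extents to its right
    temp = np.flipud(np.cumprod(np.flipud(list(shape)[1:])))
    return list(temp) + [1]


assert strides_of([2, 3, 4]) == [12, 4, 1]   # the docstring example

# Multi-index dotted with the strides gives the row-major flat offset
a = np.arange(24).reshape(2, 3, 4)
idx = (1, 2, 3)
flat = sum(i * s for i, s in zip(idx, strides_of(a.shape)))
assert a[idx] == a.flat[flat]
```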


def decompose_variable_view(expression):
"""Extract ComponentTensor + FlexiblyIndexed view onto a variable."""
if isinstance(expression, Variable):
variable = expression
indexes = tuple(Index(extent=extent) for extent in expression.shape)
dim2idxs = tuple((0, ((index, 1),)) for index in indexes)
elif isinstance(expression, ComponentTensor) and isinstance(expression.children[0], FlexiblyIndexed):
variable = expression.children[0].children[0]
indexes = expression.multiindex
dim2idxs = expression.children[0].dim2idxs
else:
raise ValueError("Cannot handle {} objects.".format(type(expression).__name__))

return variable, dim2idxs, indexes


def reshape(expression, *shapes):
"""Reshape a variable (splitting indices only).

:arg variable: a :py:class:`Variable`
:arg expression: view of a :py:class:`Variable`
:arg shapes: one shape tuple for each dimension of the variable.
"""
dim2idxs = []
variable, dim2idxs, indexes = decompose_variable_view(expression)
assert len(indexes) == len(shapes)
shape_of = dict(zip(indexes, shapes))

dim2idxs_ = []
indices = []
for shape in shapes:
idxs = []
for e in shape:
i = Index(extent=e)
idxs.append((i, e))
indices.append(i)
dim2idxs.append((0, tuple(idxs)))
expr = FlexiblyIndexed(variable, tuple(dim2idxs))
for offset, idxs in dim2idxs:
idxs_ = []
for idx in idxs:
index, stride = idx
assert isinstance(index, Index)
dim = index.extent
shape = shape_of[index]
if dim is not None and numpy.prod(shape) != dim:
raise ValueError("Shape {} does not match extent {}.".format(shape, dim))
strides = strides_of(shape)
for extent, stride_ in zip(shape, strides):
index = Index(extent=extent)
idxs_.append((index, stride_ * stride))
indices.append(index)
dim2idxs_.append((offset, tuple(idxs_)))

expr = FlexiblyIndexed(variable, tuple(dim2idxs_))
return ComponentTensor(expr, tuple(indices))
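Numerically, the index splitting that `reshape` performs symbolically matches `ndarray.reshape`: splitting an index of extent 12 into extents (3, 4) with strides (4, 1) addresses exactly the same entries. A plain numpy illustration (not gem code):

```python
# Plain numpy illustration of reshape's index splitting.

import numpy as np

v = np.arange(12.0)
split = v.reshape(3, 4)

for p in range(3):
    for q in range(4):
        # FlexiblyIndexed would read v[p*4 + q*1] for multi-index (p, q)
        assert split[p, q] == v[p * 4 + q]
```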


def view(expression, *slices):
"""View a part of a variable.

:arg expression: view of a :py:class:`Variable`
:arg slices: one slice object for each dimension of the variable.
"""
variable, dim2idxs, indexes = decompose_variable_view(expression)
assert len(indexes) == len(slices)
slice_of = dict(zip(indexes, slices))

dim2idxs_ = []
indices = []
for offset, idxs in dim2idxs:
offset_ = offset
idxs_ = []
for idx in idxs:
index, stride = idx
assert isinstance(index, Index)
dim = index.extent
s = slice_of[index]
start = s.start or 0
stop = s.stop or dim
if stop is None:
raise ValueError("Unknown extent!")
if dim is not None and stop > dim:
raise ValueError("Slice exceeds dimension extent!")
step = s.step or 1
offset_ += start * stride
extent = 1 + (stop - start - 1) // step
index_ = Index(extent=extent)
indices.append(index_)
idxs_.append((index_, step * stride))
dim2idxs_.append((offset_, tuple(idxs_)))

expr = FlexiblyIndexed(variable, tuple(dim2idxs_))
return ComponentTensor(expr, tuple(indices))
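The extent formula used here, `1 + (stop - start - 1) // step`, equals the number of entries Python's own `range`/`slice` machinery would produce, for positive `step` and `stop > start`. A quick check:

```python
# Check of the slice-extent formula used in view() against range().

def slice_extent(start, stop, step):
    return 1 + (stop - start - 1) // step


for start, stop, step in [(0, 10, 1), (2, 9, 3), (1, 8, 2), (0, 7, 7)]:
    assert slice_extent(start, stop, step) == len(range(start, stop, step))
```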


# Static one object for quicker constant folding
one = Literal(1)