
Refactor ops to allow non-funsor parameters #491

Merged · 23 commits merged into master on Mar 18, 2021

Conversation

fritzo (Member) commented Mar 15, 2021

Addresses #489
partly pair-coded with @eb8680

This major op refactoring aims to unblock @ordabayevy in #477 and #482 by allowing ops to have arbitrary *args, **kwargs parameters. Changes include:

  • nullop is renamed to null for consistency (NullOp remains unchanged).
  • Dispatch tables are moved from op instances to op subclasses.
  • @make_op has moved to a classmethod @MyOpClass.make, e.g. @BinaryOp.make.
  • Each concrete op subclass has a fixed .arity attribute.
  • Op patterns dispatch on only the first arity-many args (and patterns are validated to ensure this).
  • Ops are now weakly cached via WeakValueDict.
  • .subclass_register() handlers have a different interface, now including cls.
  • There is now a single shared WrappedTransformOp class, and its instances each have an op.
  • ops.WrappedTransformOp(fn) must now be written with a kwarg, as ops.WrappedTransformOp(fn=fn).
  • is_numeric_array() is demoted from an Op to a @singledispatch function.
  • The ops.stack interface has changed to ops.stack(parts, dim), as in NumPy and PyTorch.
  • funsor.tensor.stack is replaced by ops.stack.
  • The ops.einsum interface has changed to ops.einsum(operands, equation), where operands is a list or tuple.
  • The Finitary funsor interface has changed from variously-many args to a single tuple arg.
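The new ops.stack and ops.einsum signatures above mirror their NumPy counterparts, except that ops.einsum takes the operands before the equation. A minimal sketch of the analogous call shapes, written against plain NumPy (not funsor) so it can be checked standalone; einsum_new_style is a hypothetical wrapper illustrating the argument order, not funsor's implementation:

```python
import numpy as np

# New-style stack: a sequence of parts first, then the dimension,
# matching np.stack(arrays, axis) and torch.stack(tensors, dim).
parts = [np.zeros((2, 3)), np.ones((2, 3))]
stacked = np.stack(parts, 0)
assert stacked.shape == (2, 2, 3)

# New-style einsum: operands as a list/tuple plus an equation string.
# NumPy itself takes the equation first, so a thin wrapper shows the
# (operands, equation) order described in the PR.
def einsum_new_style(operands, equation):
    """Hypothetical wrapper matching the ops.einsum(operands, equation) order."""
    return np.einsum(equation, *operands)

x = np.arange(6).reshape(2, 3)
y = np.arange(12).reshape(3, 4)
out = einsum_new_style((x, y), "ij,jk->ik")
assert out.shape == (2, 4)
assert np.allclose(out, x @ y)
```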

Tested

  • The refactoring is covered by existing tests.
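The "weakly cached via WeakValueDict" bullet presumably refers to the standard weakref.WeakValueDictionary pattern: an op created twice with the same key returns the same live instance, and entries disappear once nothing else holds the op. A generic sketch of that pattern (the Op class and make signature here are hypothetical stand-ins, not funsor's actual classes):

```python
import gc
import weakref

class Op:
    """Hypothetical op stand-in; real funsor ops carry more state."""
    _cache = weakref.WeakValueDictionary()  # key -> live Op instance

    def __init__(self, name):
        self.name = name

    @classmethod
    def make(cls, name):
        # Return the cached instance while it is still alive; otherwise
        # build a fresh one. Entries vanish once no strong refs remain.
        op = cls._cache.get(name)
        if op is None:
            op = cls(name)
            cls._cache[name] = op
        return op

a = Op.make("add")
b = Op.make("add")
assert a is b                  # cached: the same object is returned

del a, b
gc.collect()
assert "add" not in Op._cache  # weakly held entry was collected
```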

A review thread on funsor/adjoint.py was marked resolved.
@fritzo fritzo marked this pull request as ready for review March 16, 2021 21:58
@fritzo fritzo requested a review from eb8680 March 16, 2021 21:59
fritzo (Member, Author) commented Mar 18, 2021

@eb8680 finally got the tests to pass 😓

@eb8680 eb8680 merged commit 7c7e0c5 into master Mar 18, 2021
@eb8680 eb8680 deleted the op-arity branch March 18, 2021 00:37
fehiepsi (Member) commented
For some reason, I am getting

E       AttributeError: 'DirichletMultinomial' object has no attribute 'data'

in NumPyro's test/test_handlers.py::test_collapse_beta_binomial. The DirichletMultinomial instance has the value

DirichletMultinomial(concentration=[1.5 0.5], total_count=10.0, value=Finitary(stack, (Tensor(7.0, OrderedDict(), 'real'), Number(3.0))))

fritzo (Member, Author) commented Mar 18, 2021

@eb8680, @fehiepsi's issue with DirichletMultinomial sounds related to the lazy test we disabled. Do you think we should try to get that working again?

fehiepsi (Member) commented Mar 18, 2021

Yeah, enabling it raises two errors:

  • NotImplementedError: inhomogeneous total_count is not supported (only with torch backend)
  • NotImplementedError: TODO support converting Indep(Transform)

but it seems to me that the refactoring of stack is more relevant.

3 participants