Support for immutable LinearSpace elements #1663

Open · wants to merge 17 commits into master
Conversation

leftaroundabout
Contributor

Up until now, ODL has placed a strong emphasis on carrying out operations in-place, in the tradition of application-specific software such as solvers written in Fortran, even when the user's code looks functional/declarative.

While this is in many instances a good tradeoff between high-level ease of use and low-level performance, it is not suitable for all use cases.

  • The pre-allocation philosophy makes ODL overly rigid: it requires knowing beforehand how much storage every element of a space will occupy, which interferes with features such as adaptive resolution and batching.
  • It is often unclear whether modifying in-place brings any performance advantage at all. If a result array is allocated ad hoc just to be written to, nothing is gained. An immutable-style API, by contrast, can avoid copies altogether by simply sharing references to existing objects.
  • While in-place updates are efficient in backends that merely wrap arrays (like NumPy), this is not the case for more advanced backends. Of particular interest is PyTorch, which builds up a computation graph for automatic-differentiation purposes. Although PyTorch also supports in-place updates, these complicate the graph construction and as a result make performance much worse than out-of-place calculations.
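To make the second point concrete, here is a minimal NumPy sketch of the two paradigms (the variable names are illustrative, not from the ODL codebase):

```python
import numpy as np

a = np.arange(4, dtype=float)
b = np.ones(4)

# In-place style: the result is written into a preallocated buffer.
out = np.empty_like(a)
np.add(a, b, out=out)      # reuses `out`, no new result array

# Out-of-place style: a fresh array is returned.
res = a + b                # allocates, but no buffer management needed

assert np.array_equal(out, res)
```

If `out` is allocated ad hoc immediately before the call, the in-place variant saves nothing over `a + b`; it only pays off when the buffer is genuinely reused across calls.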

This pull request allows spaces to specify whether their elements can be handled as mutable or immutable objects, and/or which way is more appropriate. Based on this, the general arithmetic operations will either pre-allocate memory and use it in place, or give back results in a RAII- / move-semantics fashion.
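The exact API is best read off the diff; as a rough sketch of the idea (everything below except the names `NumOperationParadigmSupport` and `NOT_SUPPORTED`, which appear in this PR, is hypothetical):

```python
from enum import Enum
import numpy as np

class NumOperationParadigmSupport(Enum):
    # NOT_SUPPORTED appears in the PR; the other member names are guesses.
    NOT_SUPPORTED = 0
    SUPPORTED = 1
    PREFERRED = 2

class ToySpace:
    """Stand-in for a space that declares its preferred paradigm."""
    def __init__(self, size, in_place):
        self.size = size
        self.in_place = in_place
    def element(self):
        return np.empty(self.size)
    def lincomb(self, a, x, b, y, out=None):
        if out is not None:           # in-place: fill the given buffer
            np.multiply(x, a, out=out)
            out += b * y
            return out
        return a * x + b * y          # out-of-place: fresh result

def add(space, x, y):
    """Generic wrapper dispatching on the space's declared paradigm."""
    if space.in_place is NumOperationParadigmSupport.PREFERRED:
        return space.lincomb(1, x, 1, y, out=space.element())
    # Out-of-place, move-semantics style: suits e.g. PyTorch's autograd.
    return space.lincomb(1, x, 1, y)

x = np.array([1., 2.])
y = np.array([3., 4.])
for mode in (NumOperationParadigmSupport.PREFERRED,
             NumOperationParadigmSupport.NOT_SUPPORTED):
    assert np.array_equal(add(ToySpace(2, mode), x, y), [4., 6.])
```

Either way the caller just sees `add(space, x, y)`; the paradigm choice stays an implementation detail of the space.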

I have verified that this works by confirming that the test suite also succeeds when (contrivedly) selecting in_place = NumOperationParadigmSupport.NOT_SUPPORTED for NumPyTensorSpace. I am not sure whether it would make sense to include this in the test suite.

The PR does not yet guarantee that operators and solvers follow the style indicated by supported_num_operation_paradigms, though to some extent they inherit the behaviour of the arithmetic primitives.

@pep8speaks

pep8speaks commented Dec 3, 2024

Checking updated PR...

Line 533:17: E126 continuation line over-indented for hanging indent
Line 527:17: E126 continuation line over-indented for hanging indent
Line 107:80: E501 line too long (81 > 79 characters)
Line 21:80: E501 line too long (85 > 79 characters)

Line 561:17: E126 continuation line over-indented for hanging indent
Line 389:14: E127 continuation line over-indented for visual indent
Line 388:14: E127 continuation line over-indented for visual indent
Line 286:80: E501 line too long (82 > 79 characters)
Line 265:18: E127 continuation line over-indented for visual indent
Line 253:80: E501 line too long (82 > 79 characters)

Line 314:1: W293 blank line contains whitespace
Line 313:35: E251 unexpected spaces around keyword / parameter equals
Line 313:33: E251 unexpected spaces around keyword / parameter equals
Line 312:31: E251 unexpected spaces around keyword / parameter equals
Line 312:29: E251 unexpected spaces around keyword / parameter equals
Line 312:21: E126 continuation line over-indented for hanging indent
Line 309:35: E251 unexpected spaces around keyword / parameter equals
Line 309:33: E251 unexpected spaces around keyword / parameter equals
Line 308:31: E251 unexpected spaces around keyword / parameter equals
Line 308:29: E251 unexpected spaces around keyword / parameter equals
Line 308:21: E126 continuation line over-indented for hanging indent
Line 22:9: E128 continuation line under-indented for visual indent

Line 656:19: E131 continuation line unaligned for hanging indent
Line 646:19: E131 continuation line unaligned for hanging indent
Line 624:19: E131 continuation line unaligned for hanging indent
Line 433:17: E126 continuation line over-indented for hanging indent
Line 400:25: E127 continuation line over-indented for visual indent
Line 20:5: E128 continuation line under-indented for visual indent

Comment last updated at 2024-12-03 17:16:00 UTC

This allows a single point of entry for addition, multiplication etc., which is
useful when these operations involve more decisions, like the planned in-place
vs out-of-place distinction.
This is the underlying implementation behind several of the standard arithmetic operators like +.
Previously, ODL would always force these to be evaluated with an in-place result, even though
that has little benefit when no preallocated memory is used for this.
…division.

Until now, ODL has always switched to in-place style in the general wrappers, but this will
change in the future because it does not play well with PyTorch.
The new SupportedNumOperationParadigms feature would in principle allow declaring that
the NumPy version supports _only_ in-place operations. That would probably work ok in
practice, with much the same behaviour as the old version, but it could lead to strange
problems when dealing with product spaces where some subspaces support only out-of-place.
Since NumPy can do out-of-place just fine, it would be pointlessly dogmatic to prevent
that.
The subsequently called `lincomb` etc. methods already have the logic for
allocating `out` objects if the space desires, and doing it beforehand precludes
using the out-of-place operations that are preferable in e.g. PyTorch.
The previous implementation in terms of `_lincomb` precluded out-of-place
implementations of the latter. It is also inefficient, and cannot easily
avoid unnecessary copies.
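The `out`-handling pattern these commit notes refer to can be sketched as follows (a hypothetical free-standing version; in ODL, `lincomb` is a method on the space):

```python
import numpy as np

def lincomb(a, x, b, y, out=None):
    """Compute a*x + b*y, writing into `out` only when one is supplied.

    With out=None the result is returned as a fresh array (out-of-place,
    autograd-friendly); callers holding preallocated storage can still
    pass it in and avoid the allocation.
    """
    if out is None:
        return a * x + b * y
    np.multiply(x, a, out=out)
    out += b * y
    return out

x = np.array([1.0, 2.0])
y = np.array([10.0, 20.0])
buf = np.empty(2)
assert np.array_equal(lincomb(2, x, 3, y, out=buf), [32.0, 64.0])
assert np.array_equal(lincomb(2, x, 3, y), [32.0, 64.0])
```

Deciding about `out` at this innermost level, rather than in the callers, keeps both paradigms available behind a single entry point.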