Improve creating an op documentation page #1086
base: main
Conversation
@@ -165,35 +151,6 @@ or :meth:`Op.make_thunk`.
:meth:`COp.c_code` and other related ``c_**`` methods. Note that an
:class:`Op` can provide both Python and C implementations.

:meth:`Op.make_thunk` method is another alternative to :meth:`Op.perform`.
This is outdated, and tbh I don't think it's necessary for 99.9999% of the people reading this page
Changes:

1. Remove references to c-code which apply to `COp` but not `Op`
2. Fix failing doctests
3. Improve explanation of `make_node`
4. Emphasize distinction between itypes/otypes and `make_node`
5. Show `L_op` instead of `grad`
6. Show how to test `L_op` and `infer_shape` implementation
7. Simplify explanation of `__props__` and illustrate in example
8. Introduce more complex multi-output Op to drive these details home
9. Remove old references to numba / random variable Ops
@@ -1,40 +1,18 @@

.. _creating_an_op:

Creating a new :class:`Op`: Python implementation
Creating a new :ref:`op`: Python implementation
Where do you want the :ref:`op` / :class:`Op` to point to? To https://pytensor--1086.org.readthedocs.build/en/1086/extending/op.html or to https://pytensor--1086.org.readthedocs.build/en/1086/extending/graphstructures.html#op?
I think many of the broken cross-references come from this page: https://pytensor--1086.org.readthedocs.build/en/1086/extending/op.html, which is wildly inconsistent with itself. If the cross-reference would ideally point there we should fix this page first.
Will check. I just wanted it to point somewhere; it wasn't pointing anywhere before.
@OriolAbril extending op seems much more useful than the graphstructures#op, so I guess there.
=================================================

So suppose you have looked through the library documentation and you don't see
a function that does what you want.
You may have looked through the library documentation and you don't see a function that does what you want.
NIT: "... documentation but don't see ..."
has no bugs, and potentially profits from rewrites that have already been
implemented.
Odds are your function that uses existing PyTensor expressions is short, has no bugs, can be differentiated,
and may even work in non-default backends and benefit from optimization rewrites.
Suggestion: "A PyTensor function that builds upon existing expressions tends to be concise, reliable, and automatically differentiable, while often working seamlessly across different backends, thereby benefiting from optimization."
:ref:`variable` nodes. :ref:`apply` nodes perform computation on these
variables to produce new variables. Each :ref:`apply` node has a link to an
instance of :ref:`op` which describes the computation to perform. This tutorial
details how to write such an :ref:`op` instance. Please refers to
"Please refer to"
---------------------------

An :class:`Op` is any Python object which inherits from :class:`Op`.
An :ref:`op` is any Python object which inherits from :ref:`op`.
"that inherits"
:meth:`make_node` method creates an :ref:`apply` node representing the application
of the :ref:`op` on the inputs provided. This method is responsible for three things:

- Checks that the inputs can be converted or are symbolic :ref:`variable`\s whose types are compatible with the current :ref:`op`.
converted to what?
- Checks that the inputs can be converted or are symbolic :ref:`variable`\s whose types are compatible with the current :ref:`op`.
  If the :ref:`op` cannot be applied on the provided input types, it must raises an exception (such as :class:`TypeError`).
- Creates new output :ref:`variable`\s of a suitable symbolic :class:`Type` to serve as the outputs of this :ref:`op`'s application.
- Returns an :ref:`apply` instance with the input and output :ref:`variable`\s, and itself an the :ref:`op`.
Something is up with the end of this sentence.
hold a value of ``None``. It can also pre-allocate some memory
for the :ref:`op` to use. This feature can allow ``perform`` to reuse
memory between calls, for example. If there is something
preallocated in the ``output_storage``, it will be of the good
"of the correct dtype"
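To make the ``output_storage`` contract concrete: it is a list of single-element lists, each holding either ``None`` or a preallocated buffer that ``perform`` may reuse. A standalone simulation in plain NumPy (not PyTensor's actual runtime) of a `perform`-style body:

```python
import numpy as np


def perform(inputs, output_storage):
    # output_storage[0][0] is either None or a preallocated array.
    (x,) = inputs
    buf = output_storage[0][0]
    if buf is not None and buf.shape == x.shape and buf.dtype == x.dtype:
        np.multiply(x, 2, out=buf)  # reuse the preallocated memory
    else:
        output_storage[0][0] = x * 2  # allocate fresh output


x = np.arange(3.0)
storage = [[None]]
perform([x], storage)
print(storage[0][0])  # [0. 2. 4.]

# Second call: the buffer created by the first call is reused in place.
first_buf = storage[0][0]
perform([x + 1], storage)
assert storage[0][0] is first_buf
```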

This function implements the application of the R-operator on the
function represented by your :ref:`op`. Let assume that function is :math:`f`,
"Let's assume ..."
Suggested change:

- function represented by your :ref:`op`. Let assume that function is :math:`f`,
+ If we assume that function is :math:`f`,

so the passage reads: "If we assume that function is :math:`f`,
with input :math:`x`, then applying the R-operator means computing the
Jacobian of :math:`f` and right-multiplying it by :math:`v` (the evaluation
point): :math:`\frac{\partial f}{\partial x} v`."
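The quantity in question, :math:`\frac{\partial f}{\partial x} v`, is easy to sanity-check numerically. A throwaway NumPy sketch using finite differences (not PyTensor's actual ``R_op`` machinery), for an example function whose Jacobian is known in closed form:

```python
import numpy as np


def f(x):
    # Example function: elementwise square, so the Jacobian is diag(2 * x).
    return x ** 2


def jacobian_times_v(x, v, eps=1e-6):
    # Finite-difference approximation of (df/dx) @ v,
    # i.e. what an R-operator computes.
    return (f(x + eps * v) - f(x)) / eps


x = np.array([1.0, 2.0, 3.0])
v = np.array([1.0, 0.0, 0.0])
print(jacobian_times_v(x, v))  # ~ [2. 0. 0.], matching diag(2 * x) @ v
```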

At a high level, the code fragment declares a class (e.g., ``DoubleOp1``) and then creates one instance of that class (e.g., ``doubleOp1``).

As you'll se below, you can then pass an instantiated :ref:`variable`, such as ``x = tensor.matrix("x")`` to the instantiated :ref:`op`,
see
manually. It also generates a default :func:`__str__` method that prints the
attribute names and their values.
The use of :attr:`__props__` saves the user the trouble of implementing :meth:`__eq__` and :meth:`__hash__` manually.
It also generates a default :meth:`__repr__` and :meth:`__str__` methods that prints the attribute names and their values.
"generates default ... methods that print the ..."
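Roughly speaking, `__props__` auto-generates methods equivalent to the hand-written ones below. This is a pure-Python approximation for intuition, not PyTensor's actual implementation (the class and prop names are made up):

```python
class FakeOp:
    """Hand-rolled approximation of what __props__ generates."""

    __props__ = ("scale",)

    def __init__(self, scale):
        self.scale = scale

    def _props_tuple(self):
        return tuple(getattr(self, p) for p in self.__props__)

    def __eq__(self, other):
        # Two Ops compare equal iff they are the same class with equal props.
        return type(self) is type(other) and self._props_tuple() == other._props_tuple()

    def __hash__(self):
        return hash((type(self), self._props_tuple()))

    def __str__(self):
        props = ", ".join(f"{p}={getattr(self, p)}" for p in self.__props__)
        return f"{type(self).__name__}{{{props}}}"


assert FakeOp(2) == FakeOp(2)   # equal props -> equal Ops (mergeable in graphs)
print(FakeOp(2))                # FakeOp{scale=2}
```

The equality/hash part is what lets the rewriter merge two applications of "the same" Op into one node.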
Finally, the gradient expression:

Again, we can use pytensor `verify_grad` function to test the gradient implementation.
Due to the presence of multiple outputs we need to pass a Callable instead of the Op instance.
``Callable`` instead of the ``Op`` (add backticks). There are a couple more below too.
See commit message for list of changes.
The changes are based on feedback from helping other devs and users
Tagging @OriolAbril because I'm 100% sure I messed up some sphinx stuff (sorry)
📚 Documentation preview 📚: https://pytensor--1086.org.readthedocs.build/en/1086/