Short for "A Tensor Library". The foundational tensor and mathematical operation library on which all else is built.
A unit of work. For example, the work of matrix multiplication is an operation called aten::matmul.
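As an illustration, calling torch.matmul from Python dispatches to the aten::matmul operation; the same operation is also reachable directly through the torch.ops.aten namespace:

```python
import torch

a = torch.randn(2, 3)
b = torch.randn(3, 4)

# The Python-level call dispatches to the aten::matmul operation.
c = torch.matmul(a, b)

# The same operation can be invoked directly from the aten namespace.
d = torch.ops.aten.matmul(a, b)

assert c.shape == (2, 4)
assert torch.equal(c, d)
```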
An operation that comes natively with PyTorch's ATen, for example aten::matmul.
An Operation that is defined by users and is usually a Compound Operation. For example, this tutorial details how to create Custom Operations.
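As one illustrative sketch (not necessarily the tutorial's exact method), a Python-level custom operation with its own derivative can be defined via torch.autograd.Function:

```python
import torch

class Square(torch.autograd.Function):
    """A user-defined operation computing x^2 with a hand-written derivative."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return x * x

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # d/dx (x^2) = 2x, scaled by the incoming gradient.
        return 2 * x * grad_output

x = torch.tensor([3.0], requires_grad=True)
Square.apply(x).backward()
assert x.grad.item() == 6.0
```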
Implementation of a PyTorch operation, specifying what should be done when an operation executes.
A Compound Operation is composed of other operations. Its kernel is usually device-agnostic, and it normally does not define its own derivative functions; instead, Autograd automatically computes its derivative from the operations it is composed of.
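For example, a function built purely from existing operations behaves like a compound operation: it needs no hand-written derivative, because Autograd differentiates through its constituent operations. A small sketch:

```python
import torch

def swish(x):
    # Composed entirely of existing operations; no custom backward needed.
    return x * torch.sigmoid(x)

x = torch.tensor([0.0], requires_grad=True)
swish(x).backward()

# swish'(x) = sigmoid(x) + x * sigmoid(x) * (1 - sigmoid(x)); at x = 0 this is 0.5.
assert abs(x.grad.item() - 0.5) < 1e-6
```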
Same as Compound Operation.
Same as Compound Operation.
An operation that is considered a basic building block, as opposed to a Compound Operation. A Leaf Operation always has dispatch functions defined and usually has a derivative function defined as well.
A device-specific kernel of a Leaf Operation.
In contrast to Device Kernels, Compound Kernels are usually device-agnostic and belong to Compound Operations.
Just-In-Time Compilation.
An interface to the TorchScript JIT compiler and interpreter.
Using torch.jit.trace on a function to get an executable that can be optimized using just-in-time compilation.
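A minimal sketch of tracing a function:

```python
import torch

def f(x):
    return x * 2 + 1

# Tracing records the operations executed on the example input
# and produces a TorchScript executable.
traced = torch.jit.trace(f, torch.randn(3))

out = traced(torch.ones(3))
assert torch.equal(out, torch.tensor([3.0, 3.0, 3.0]))
```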
Using torch.jit.script on a function to inspect its source code and compile it as TorchScript code.
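Because scripting compiles the source rather than recording one execution, it preserves data-dependent control flow that tracing would specialize away. A small sketch (the function name is illustrative):

```python
import torch

@torch.jit.script
def flip_negative(x):
    # Data-dependent control flow is preserved by scripting
    # (tracing would bake in only the branch taken on the example input).
    if x.sum() > 0:
        return x
    return -x

assert torch.equal(flip_negative(torch.tensor([-1.0, -2.0])),
                   torch.tensor([1.0, 2.0]))
```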