Accelerated calculations in Swift for TensorFlow are performed through the Tensor type. There are currently two backends that provide this acceleration: eager mode, and the XLA-compiler-backed lazy tensor mode (X10).
Eager mode is the default. Execution is performed eagerly on an operation-by-operation basis, using TensorFlow 2.x's eager execution, without building graphs.
The eager backend supports CPUs and GPUs. It does not support TPUs.
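To illustrate, here is a minimal sketch of eager execution, assuming a Swift for TensorFlow toolchain is installed. Each operation runs immediately when it is encountered, and no graph is constructed:

```swift
import TensorFlow

// In eager mode, each operation executes immediately on the
// default device (CPU or GPU); no graph is recorded.
let x = Tensor<Float>([[1, 2], [3, 4]])
let y = x + x   // computed right away, operation by operation
print(y)        // the result is already materialized
```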
The X10 backend is backed by XLA, and its tensor operations are lazily evaluated: operations are recorded in a graph until their results are needed. This enables optimizations such as fusing multiple operations into a single compiled graph.
This backend provides improved performance over the eager backend in many cases. However, if the model changes shapes at each step, recompilation costs might outweigh the benefits. See the X10 Troubleshooting Guide for more details.
X10 supports CPUs, GPUs, and TPUs.
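As a sketch of how a tensor is placed on an X10 device (assuming a Swift for TensorFlow toolchain), operations on such a tensor are traced lazily and only compiled and executed by XLA when a result is required:

```swift
import TensorFlow

// Select the default X10 (XLA-backed) device for this host.
let device = Device.defaultXLA

// Tensors created on an X10 device have their operations traced
// into a graph rather than executed immediately.
let a = Tensor<Float>([1, 2, 3], on: device)
let b = a * 2

// LazyTensorBarrier() forces any pending traced operations to be
// compiled by XLA and run.
LazyTensorBarrier()
print(b)
```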
Check out this Colab notebook to learn how to switch between the eager and X10 backends.