
Swift for TensorFlow Backends

Accelerated calculations in Swift for TensorFlow are performed through the Tensor type. Currently, there are two options for how that acceleration is performed: eager mode, or the XLA-compiler-backed lazy tensor mode (X10).
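
As a minimal sketch (assuming the standard TensorFlow module from swift-apis), accelerated computation is expressed directly on Tensor values, regardless of which backend performs it:

```swift
import TensorFlow

// Two small matrices with an explicit Float element type.
let x = Tensor<Float>([[1, 2], [3, 4]])
let y = Tensor<Float>([[5, 6], [7, 8]])

// Matrix multiplication plus an element-wise add, both dispatched
// to the currently selected backend.
let z = matmul(x, y) + 1
print(z)
```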

Eager Backend

This is the default mode. Execution is performed eagerly, on an operation-by-operation basis, using TensorFlow 2.x eager execution without creating graphs.

The eager backend supports CPUs and GPUs. It does not support TPUs.

X10 (XLA Compiler Based)

The X10 backend is backed by XLA, and tensor operations are lazily evaluated: operations are recorded in a graph until their results are needed. This allows optimizations such as fusing many operations into a single compiled graph.

This backend provides improved performance over the eager backend in many cases. However, if the model changes shapes at each step, recompilation costs might outweigh the benefits. See the X10 Troubleshooting Guide for more details.

X10 supports CPUs, GPUs, and TPUs.
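
For illustration, here is a minimal sketch of the lazy evaluation model, assuming the Device and LazyTensorBarrier APIs from swift-apis: tensors are placed on an X10 device, operations are recorded into a graph, and a barrier forces compilation and execution.

```swift
import TensorFlow

// Select the default X10 (XLA) device; this may be a CPU, GPU, or TPU.
let device = Device.defaultXLA

// Tensors created on an X10 device are evaluated lazily.
let x = Tensor<Float>(ones: [128, 128], on: device)
let y = matmul(x, x) + x   // recorded into the graph, not yet executed

// Force compilation and execution of the recorded graph.
LazyTensorBarrier()
print(y[0, 0])
```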

Usage

Check out this Colab notebook to learn how to switch between the eager and X10 backends.
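
As a hedged sketch of what that switching looks like (assuming Device.defaultTFEager, Device.defaultXLA, and the Tensor(copying:to:) initializer from swift-apis), the backend is chosen per tensor via the device it lives on:

```swift
import TensorFlow

// Eager backend (the default).
let eager = Tensor<Float>(ones: [2, 2], on: Device.defaultTFEager)

// X10 backend.
let x10 = Tensor<Float>(ones: [2, 2], on: Device.defaultXLA)

// Copy an existing tensor to the other backend.
let movedToX10 = Tensor(copying: eager, to: Device.defaultXLA)
print(movedToX10.device)
```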