
RFC: TensorFlow Canonical Type System #208

Merged Mar 25, 2020 · 46 commits (changes shown from the first 19 commits)
* `0395440` Clone draft template for new tf-types RFC. (Feb 11, 2020)
* `4e58ec3` Initial commit. (Feb 11, 2020)
* `4553626` Capitalization (Feb 11, 2020)
* `001efcc` Update 20200211-tf-types.md (Feb 11, 2020)
* `71a5164` Update 20200211-tf-types.md (Feb 11, 2020)
* `0842a2f` Update 20200211-tf-types.md (Feb 11, 2020)
* `e4fd7fa` Update 20200211-tf-types.md (Feb 11, 2020)
* `4666892` Update 20200211-tf-types.md (Feb 11, 2020)
* `de72ac5` Update 20200211-tf-types.md (Feb 11, 2020)
* `e9720c1` Update 20200211-tf-types.md (Feb 12, 2020)
* `e3224da` Update 20200211-tf-types.md (Feb 18, 2020)
* `53680e5` Update 20200211-tf-types.md (Feb 19, 2020)
* `7b21bf7` Update 20200211-tf-types.md (Feb 19, 2020)
* `c951112` Update 20200211-tf-types.md (Feb 19, 2020)
* `aae458f` Update 20200211-tf-types.md (Feb 20, 2020)
* `f923705` Update 20200211-tf-types.md (Feb 20, 2020)
* `dc924a7` Update PR number. (Feb 20, 2020)
* `5d4ce1d` Add `TensorLike` to core types. (Feb 20, 2020)
* `fb3e2f2` Clarify note about static checkers. (Feb 20, 2020)
* `c42f292` Add note about ABC performance concerns (Feb 21, 2020)
* `198a1d0` Clarify semantics of `Optional` (Feb 21, 2020)
* `5bf02fb` Add details and example for `Optional` (Feb 21, 2020)
* `2f4d14d` Fix typo. (Feb 21, 2020)
* `52d5617` Update 20200211-tf-types.md (Feb 21, 2020)
* `d27dcb7` Add discussion topic (Mar 4, 2020)
* `5bff3e7` Clarify intent for type checking mechanics (Mar 4, 2020)
* `f692cc0` Add section on protocol for custom Tensors (Mar 7, 2020)
* `1b20c1f` Add open question about legacy API (Mar 7, 2020)
* `451b627` Add discussion topic for tf.TypeSpec (Mar 10, 2020)
* `c3a1c2d` Detail the relationship with tf.TypeSpec (Mar 17, 2020)
* `f551da1` Formatting (Mar 20, 2020)
* `cbe5eaf` Draft guidance for adding new types (Mar 20, 2020)
* `b8aae73` Minor edit (Mar 20, 2020)
* `e7b0fef` Use a proper example for type annotations (Mar 20, 2020)
* `8797e73` Update 20200211-tf-types.md (Mar 20, 2020)
* `882dbb6` Fix typo. (Mar 20, 2020)
* `97d5d8d` Clarify that example is non-prescriptive (Mar 21, 2020)
* `e86de57` Update status to proposed (Mar 21, 2020)
* `fac8975` Update time stamp (Mar 21, 2020)
* `9c2f37f` Fix typo (Mar 21, 2020)
* `8e47bc1` Slight update to example (Mar 21, 2020)
* `bc33a1a` Add link to an example (Mar 24, 2020)
* `af51680` Add design review notes (Mar 24, 2020)
* `c8816c4` Fix indentation in notes (Mar 24, 2020)
* `327ceda` Minor updates following design review (Mar 24, 2020)
* `aeff124` Update status (Mar 24, 2020)
File: rfcs/20200211-tf-types.md (147 additions, 0 deletions)
# TensorFlow Canonical Type System

| Status        | Proposed                                                |
| :------------ | :------------------------------------------------------ |
| **RFC #**     | [208](https://github.com/tensorflow/community/pull/208) |
| **Author(s)** | Dan Moldovan (mdan@google.com)                          |
| **Sponsor**   | Gaurav Jain (gjn@google.com)                            |
| **Updated**   | 2020-02-19                                              |

## Objective

This RFC proposes a new TensorFlow module and namespace (`tf.types`) dedicated to storing implementation-free type definitions, similar to C++ header files. This module has no other dependencies inside TensorFlow, so any other internal module can depend on it to ensure interoperability without the risk of creating circular dependencies. These definitions can also be used by external users, for example in pytype annotations.
The RFC focuses on the Python API; however, the design should be reviewed with cross-language consistency in mind.

## Motivation

**Interoperability and composability**. A set of standard types that formalizes an interface and decouples it from implementation ensures composability between components, especially when multiple implementations are involved.

**Supports the [acyclic dependencies principle](https://en.wikipedia.org/wiki/Acyclic_dependencies_principle)**. In many instances, circular dependencies arise between complex low-level components that need to interoperate (e.g. autograph needs to recognize datasets, and datasets need to use autograph). Interface extraction is a common pattern for breaking such cycles.

**Supports pytype**. A set of static types that is consistent under Python’s `isinstance`/`issubclass` is required to support [PEP-484 type annotations](https://www.python.org/dev/peps/pep-0484/) in TensorFlow. This module can serve as the basis for that.

**Helps formalize requirements for new APIs**. Having a formal, implementation-independent definition for things such as tensors, variables, iterables, iterators makes it easy to document and test compatibility between APIs.

## User Benefit

Application developers may use these canonical definitions for pytype annotations.

Library developers can more easily define their API interfaces by referring to this namespace.

Developers of modules internal to TensorFlow can use this module to avoid creating circular dependencies.

## Design Proposal

### The `tf.types` Namespace / Module
All the declarations exposed under the `tf.types` namespace reside in the `python/types/*.py` module. These are [abstract base classes](https://docs.python.org/3.7/library/abc.html) with a bare minimum of method definitions and minimal or no implementation, which serve to formalize and document the contract of common types such as `Tensor`, `Variable`, etc.

These definitions may be used as PEP 484 type hints, although in some cases they may be type- or shape-erased (for example, `tf.types.Tensor` may not necessarily be parametrized by `dtype` or `shape`). Note however that designs which parametrize on shape do exist, see for instance [tensorflow#31579](https://github.com/tensorflow/tensorflow/issues/31579).

The type definitions are consistent with `isinstance` and `issubclass`. For example, `issubclass(tf.Tensor, tf.types.Tensor) == True`.
**Comment (Member):**

I believe this should not be a requirement, as it is not a requirement for Python types to support `isinstance`. It also restricts us: we can't use anything in the `typing` module, including protocols. Ex:

```python
from typing import List, Union
import tensorflow as tf

TensorLike = Union[tf.Tensor, List[float]]
isinstance([3.8, 23.4], TensorLike)
```

```
TypeError                                 Traceback (most recent call last)
<ipython-input-10-0ff96ca65533> in <module>
----> 1 isinstance([3.8, 23.4], TensorLike)

~/softwares/python/anaconda/lib/python3.7/typing.py in __instancecheck__(self, obj)
    709
    710     def __instancecheck__(self, obj):
--> 711         return self.__subclasscheck__(type(obj))
    712
    713     def __subclasscheck__(self, cls):

~/softwares/python/anaconda/lib/python3.7/typing.py in __subclasscheck__(self, cls)
    717             if cls._special:
    718                 return issubclass(cls.__origin__, self.__origin__)
--> 719         raise TypeError("Subscripted generics cannot be used with"
    720                         " class and instance checks")
    721

TypeError: Subscripted generics cannot be used with class and instance checks
```

I believe this RFC should include some examples of implementation to make things clearer.

**Reply (Author):**

Interesting. It seems like we need to think through the interaction between classical interfaces, protocols, and how they play with injection mechanisms like `register_tensor_conversion_function`. Ideally we wouldn't need so many mechanisms for the same thing.

Regarding protocols, I suspect `@runtime_checkable` can help, although it could come with a performance hit, as in the case of `ABCMeta`.

I'll look into elaborating this.

**Reply (Member):**

Thanks! :)

**Reply (Author):**

I added a section proposing the use of protocols for custom Tensor types - I think that is in line with the use of `typing` that you suggested as well.

**Reply (Member):**

Thank you very much, I'll take a look whenever I can.
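
The `isinstance`/`issubclass` consistency discussed above can be sketched with stand-in classes. This is a toy illustration only; the class names and methods are hypothetical, not the actual `tf.types` implementation:

```python
import abc

class TypesTensor(abc.ABC):
    """Implementation-free interface, analogous to the proposed tf.types.Tensor."""

    @property
    @abc.abstractmethod
    def dtype(self): ...

    @property
    @abc.abstractmethod
    def shape(self): ...

class ConcreteTensor(TypesTensor):
    """Concrete class, analogous to tf.Tensor subclassing the interface."""

    def __init__(self, dtype, shape):
        self._dtype, self._shape = dtype, shape

    @property
    def dtype(self):
        return self._dtype

    @property
    def shape(self):
        return self._shape

t = ConcreteTensor("float32", (2, 3))
print(issubclass(ConcreteTensor, TypesTensor))  # True
print(isinstance(t, TypesTensor))               # True
```

With this arrangement, code that only needs to recognize tensors can depend on the interface module alone, without importing the concrete implementation.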


### General Principles
This module should not contain any implementation code. One advantage is that users exploring the implementation of a specific type will not need to inspect this module. Conversely, users who do not wish to read implementation code can consult the documentation of these generic types to understand what the concrete subclasses of each type are expected to do.

The `tf.types` module may depend on external packages (such as `numpy`) _strictly for the purpose of defining type annotations and documentation_. No dependencies on other TensorFlow interfaces are allowed. Any dependencies on external packages which themselves depend on TensorFlow are expressly forbidden.

Changes to definitions inside `tf.types` must be approved by TensorFlow leads, and typically should be accompanied by an RFC.

All type declarations are based on PEP-484 and related specifications, and defined using [typing](https://docs.python.org/3/library/typing.html), with the aim of being compatible with static type checkers like [pytype](https://github.com/google/pytype), [mypy](http://mypy-lang.org/), and [pyre](https://pyre-check.org/).

It is recommended that internal and external type annotations and `isinstance`/`issubclass` checks use these types, eventually deprecating helpers like `tf.is_tensor`. However, concrete types continue to exist - for example, variables are instances of `tf.Variable`, which is now a subclass of `tf.types.Variable`.

Class type definitions define a minimum of abstract methods and properties which are required for pytype compatibility.

### Support for `tf.function`'s `input_signature`
The type system listed here can be expanded to allow input signatures using type annotations, see for instance [this thread](https://github.com/tensorflow/tensorflow/issues/31579).
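
The RFC does not specify this mechanism; as a purely illustrative sketch, parameter annotations could be mapped to an input signature along these lines (`TensorAnnotation` and `derive_input_signature` are hypothetical names, not proposed API):

```python
import typing

class TensorAnnotation:
    """Hypothetical stand-in for an annotatable tf.types.Tensor."""

def derive_input_signature(fn):
    """Collect the parameter names annotated as tensors, in declaration order."""
    hints = typing.get_type_hints(fn)
    return [name for name, hint in hints.items()
            if name != "return" and hint is TensorAnnotation]

def matmul(a: TensorAnnotation, b: TensorAnnotation, transpose: bool = False):
    ...

print(derive_input_signature(matmul))  # ['a', 'b']
```

A real implementation would presumably map such annotations to `tf.TensorSpec` objects, including dtype and shape parameters; that unification is discussed further below in the `tf.TypeSpec` thread.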

### Initial Type Hierarchy
TensorFlow generally adopts an incremental development method. This RFC aims to remain consistent with that.

Below are listed the major types presently used in TensorFlow. All types included in this list are subject to [normal compatibility rules](https://www.tensorflow.org/guide/versions), so they are unlikely to change in the future. It is therefore preferable to maintain a strict minimum of orthogonal declarations and carefully vet any additions.

Most of these symbols will not be initially exported as public symbols. Only internal submodules will be able to use unexported types. The unexported types may be gradually exposed under `tf.types` or under `tf.types.experimental`.

The initial type hierarchy is focused on V2 symbols. We expect to encounter places where these symbols would not be compatible with V1 code; in such cases, the V1 symbols will not be affected.

#### Types created by this RFC
**Comment (Member):**

Are the specifics of each type out of scope for this RFC? I.e., which methods will be available on a `Shape` or `Tensor`?

**Reply (Author):**

I was hoping so, although I think we'll need to define them sooner rather than later if type annotations are to be useful.


These types will be added with the initial creation of the `tf.types` namespace.

* Core tensor types

* `DType`
* `Shape`
* `Tensor` - generic dense tensor

* `Symbol` - the regular graph tensor
* `Value` - eager tensors

* `TensorLike` - any type that can be implicitly converted to `Tensor` (see for example https://github.com/tensorflow/addons/blob/master/tensorflow_addons/utils/types.py)
**Comment (Member):**

How will `TensorLike` interact with custom to-`Tensor` conversion machinery? If a user registers a new conversion function, should they also register a "virtual" subclass of `TensorLike`?

**Reply (Author):**

Good question. Since using ABC may have performance implications, as you pointed out, we may need to think of a different mechanism, like a concrete `TensorLike` superclass (instead of a `Union` generic) that also defines `to_tensor`, or a dunder method similar to `__ndarray__`.

**Reply (Member):**

I think that regardless of the implementation, the relationship between `TensorLike`, `tf.convert_to_tensor` and `tf.register_tensor_conversion_function` should be specified to avoid confusion.

Personally, I'm fine with `TensorLike` being non-extensible, i.e. if you want your custom type to be `TensorLike`, convert it explicitly.

**Reply (Member):**

My initial expectation is that `isinstance(x, tf.types.TensorLike)` would work for anything registered with `tf.register_tensor_conversion_function`. I think we cannot do that while requiring `TensorLike` to be a protocol (e.g. `np.ndarray` doesn't define `to_tensor`, but can be converted to one).

**Reply (Author):**

Added a section about this - PTAL.

It turns out that protocols do support static type checking, and can support `isinstance` in newer Python versions too.

I'm not sure that `register_tensor_conversion_function` can be made compatible with static type checking. `ABCMeta.register` might be the equivalent of it, but it lacks the means to specify the conversion logic if you don't control the class (e.g. how do you specify the conversion logic for `list` with `ABCMeta.register`?). And besides, if we replace `register_tensor_conversion_function`, we might as well opt for the leaner protocols (`hasattr` seems slightly faster than a bare `isinstance`). So my preference would be to use protocols as an alternative to `register_tensor_conversion_function`, and eventually deprecate it.
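
As a sketch of the protocol-based direction discussed here (Python 3.8+), where the protocol name, method name, and classes are illustrative only, not the RFC's final API:

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class SupportsToTensor(Protocol):
    """Anything defining to_tensor() is treated as tensor-convertible."""
    def to_tensor(self): ...

class MyRaggedValue:
    """A user-defined type that opts in structurally, with no registration."""
    def __init__(self, rows):
        self.rows = rows
    def to_tensor(self):
        # Toy "conversion": flatten the rows into a single list.
        return [x for row in self.rows for x in row]

def convert(value):
    # Structural check; isinstance works here because of @runtime_checkable.
    if isinstance(value, SupportsToTensor):
        return value.to_tensor()
    raise TypeError(f"cannot convert {type(value).__name__}")

print(convert(MyRaggedValue([[1, 2], [3]])))  # [1, 2, 3]
```

Unlike `register_tensor_conversion_function`, this pattern is visible to static checkers; the trade-off, noted above, is that types like `np.ndarray` that don't define the method would still need a separate conversion path.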


* `Variable`

#### Potential types for subsequent implementation

These types are raised for discussion by this RFC, but are not part of the original implementation, unless they are strictly required for consistency (to be determined during the initial submission).

Many of these are expected to be required when breaking the cyclic dependencies that currently exist between submodules. However, it is hoped that opening them up for discussion early can help create a more coherent type system.

* Container types

* `Composite` - low-level static structure (opaque to GraphDef/IR)
* `Module` - builder for structures of `Variables` (invisible to GraphDef/IR)
* `Optional` - basic programming construct
**Comment (Member):**

Is this an alias to `typing.Optional` or a separate type? If the latter, could you give an example of when one might need a `tf.types.Optional` and not `typing.Optional`?

**Reply (Author):**

I'll add a clarification - `tf.types.Optional` is an alias to `tf.data.experimental.Optional`, so a separate type from `typing.Optional`. I think the main differences are that it cannot be `None` and that it exposes `has_value()` and `get_value()`.
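
A toy sketch of those semantics, for illustration only (this is not the actual `tf.data.experimental.Optional` implementation, whose constructors also take element specs):

```python
class TFOptional:
    """Toy model of the has_value()/get_value() semantics described above."""

    _EMPTY = object()  # sentinel so that None is a valid stored value

    def __init__(self, value=_EMPTY):
        self._value = value

    @classmethod
    def from_value(cls, value):
        return cls(value)

    @classmethod
    def empty(cls):
        return cls()

    def has_value(self):
        return self._value is not TFOptional._EMPTY

    def get_value(self):
        if not self.has_value():
            raise ValueError("Optional has no value.")
        return self._value

print(TFOptional.from_value(42).has_value())  # True
print(TFOptional.from_value(42).get_value())  # 42
print(TFOptional.empty().has_value())         # False
```

The key contrast with `typing.Optional`: the value is never `None`; emptiness is an explicit, queryable state rather than a union with `None`.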

* `List` - superclass for `TensorArray`, `Queue`, etc. (opaque to GraphDef/IR)

* Higher-level types
* `Dataset` - ETL pipeline
* `Iterator` - basic stateful programming construct
* `Iterable` - basic stateless programming construct
* `Function` - basic programming construct
* `Error` - superclass of all TF-specific errors

* Distributed types
* `DistributedDataset` - collective ETL
* `DistributedIterator` - collective iterator

* Low-level execution primitives
* `Graph` - GraphDef/IR program
* `FunctionGraph` - IR of a single concrete function

### Alternatives Considered
* N/A
**Comment (@gabrieldemarmiesse, Feb 21, 2020):**

We should consider the alternative where not everything is in the same module and we don't use the trick of making this:

> variables are instances of `tf.Variable`, which is now a subclass of `tf.types.Variable`

I like the idea of having types in TensorFlow, but this whole ABC black magic of avoiding dependencies is done because, internally, TensorFlow has many, many circular dependencies (see here). In Python, the more black magic we do, the more it will come back to bite us; see https://github.com/tensorflow/community/pull/208/files#r382246511.

Python types are not supposed to be used in this way, and I'm afraid that we're going to have many problems in the future just for the sake of supporting circular dependencies in the TF codebase: for example, some type system being confused about our types, or we'll realize the need to redeclare all the classes' interfaces in `tf.types`, breaking the DRY rule.

The alternative is to declare types like `AnyTensor = Union[tf.Tensor, tf.sparse.SparseTensor]` and to fix TensorFlow circular dependencies internally. Types can be declared in their corresponding files (ex: `AnyTensor` is defined in the file defining the `Tensor` class).

In short, making TensorFlow types without any ABC tricks shouldn't create new circular dependencies; they'll just make the existing ones come to light.

**Reply (Author):**

I think I agree with the broader comment: we want to avoid, rather than facilitate, circular dependencies. This RFC specifically aims to support the extract-interface pattern for removing such circular dependencies.

That said, I fear the pattern is not a universal solution. For example, in the Keras instance you mentioned, the circular dependency cannot be broken by extracting types and requires moving some code around.

I'm not sure I fully follow the `Variable` example though, could you elaborate on it? The main idea is to have the types defined in a module separated from all others, so for example if you want to recognize a `Variable` you can import just `tf.types`, rather than the entire `tf` module.

By "ABC tricks" I suspect you are referring to `ABCMeta.register`? There are no plans currently to use it (in fact we are considering not using `abc` at all, see one of the comments above); I hope it will not be needed.

**Reply (@gabrieldemarmiesse, Feb 21, 2020):**

Here I'm referring to the trick of having all public classes of TensorFlow subclass a class defined in `tf.types`; I think it's a brittle trick to enable the use of `isinstance` and avoid using the `typing` module.

> if you want to recognize a Variable you can import just tf.types, rather than the entire tf module.

I see, but what is the benefit of that? As a user, what bonus do I get by doing `from tensorflow.types import Variable` rather than `from tensorflow import Variable`? I'm sure there is a reason, but it's not clear in this RFC, and we would benefit from a small paragraph detailing the benefits.

A side note for the user experience: a user will first try to use `tf.Variable` as a type hint before reading the docs. It's not very intuitive to put the type hint in a separate module.

**Reply (Author):**

I added this as a discussion topic. I'm inclined to agree - we don't need both `tf.types.Variable` and `tf.Variable`, even if it were safe to use either. In that case, `tf.types` would only have entries for types which do not already belong elsewhere.

The Python `types` module uses a similar pattern.

Relatedly, if you have additional thoughts on how typing could enable better decoupling, both internally and externally, please do suggest them.


### Performance Implications
* No performance implications expected. At most, we are adding a small number of levels to the class tree of some objects.
**Comment (Member):**

One caveat here is that `isinstance` checks involving ABCs are significantly slower than those with non-abstract classes. This is because `abc.ABCMeta` implements `__instancecheck__`, so `isinstance` has to jump back to pure Python instead of doing MRO traversal in C:

```python
>>> import abc
>>> class A(abc.ABC): pass
>>> class B: pass
>>> class SA(A): pass
>>> class SB(B): pass
>>> sa = SA()
>>> sb = SB()
>>> %timeit isinstance(sa, A)
237 ns ± 0.45 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
>>> %timeit isinstance(sb, B)
51.2 ns ± 0.245 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)
```

While this is of course an artificial benchmark, and a single `isinstance` check is still "just" ~200 ns (on my fairly slow laptop), TensorFlow does a lot of such checks, and the slowdown might well be noticeable in more realistic workloads.
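
The comparison above can be reproduced without IPython using the standard `timeit` module; absolute numbers will vary by machine and Python version (the gap has narrowed in newer CPython releases, where the ABC check is partly implemented in C):

```python
import abc
import timeit

class A(abc.ABC): pass   # abstract base class
class B: pass            # plain class
class SA(A): pass
class SB(B): pass

sa, sb = SA(), SB()

# Time one million isinstance checks against each base class.
abc_time = timeit.timeit(lambda: isinstance(sa, A), number=1_000_000)
plain_time = timeit.timeit(lambda: isinstance(sb, B), number=1_000_000)
print(f"ABC isinstance:   {abc_time:.3f} s per 1M calls")
print(f"plain isinstance: {plain_time:.3f} s per 1M calls")
```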

**Reply (Author):**

Very interesting. It sounds like we may want to avoid adding `ABCMeta` superclasses in cases where performance is critical, like eager tensors.

**Reply (Member):**

One option is to define these types without `abc.ABCMeta` as a metaclass, i.e. make them abstract by convention. The way `abc.ABCMeta` enforces an implementation (at instantiation time instead of definition time) is likely not relevant/useful for `tf.types`. Note, however, that this also makes these types non-extensible, which could be an issue for `TensorLike`.

**Reply (Author):**

Agreed. Just to be clear, the types would be non-extensible only via the ABC registration mechanism; one could still extend them using traditional subclassing.
I added a preliminary note - PTAL.


### Dependencies
* None, by definition.

### Engineering Impact
* Engineering impact: Separate interfaces allow for faster loading times by reducing coupling between modules.
* Maintenance: Minimal maintenance overhead, since there is no functionality involved. The TensorFlow team and contributors will keep the documentation up to date. Changes should be reviewed and approved by the TensorFlow team leads.

### Platforms and Environments
* Platforms: Python only, in the first stage. However, the type system should be aligned as much as possible with the core types in the TensorFlow runtime, and be language-independent as much as possible.
* Execution environments: The type system is independent of platform. This also implies that no platform-specific types (such as `TPUTensor`) exist.

### Best Practices
* This set of type definitions supports the acyclic dependencies principle by requiring that implementations avoid lateral dependencies (e.g. with a linter rule).

### Tutorials and Examples
* As the design matures, we plan to showcase libraries that leverage this pattern.
* Type annotations will be included in existing tutorials as definitions become final.

### Compatibility
* New minor version. Existing classes (`tf.Tensor`) will become subclasses of the new type interfaces.
* Most subcomponents of TF (Lite, distributed, function, SavedModel) will depend on this new module, although their functionality is not impacted.
* Libraries which depend on TensorFlow are encouraged to refer to `tf.types` definitions, rather than the concrete implementations for better future compatibility.

### User Impact
* Users will see a new `tf.types` module, that may be referenced from documentation and type annotations.


## Questions and Discussion Topics

**Comment (Member):**

How would the new type system be unified with https://www.tensorflow.org/api_docs/python/tf/TypeSpec?

I think having both namespaces might confuse users about which to use, and we probably want to unify them into one if possible.

**Reply (Author):**

Good question, and I agree it would be great to either unify them or think of ways to make the differences clearer and more intuitive - perhaps we should begin to refer to them as [TF] Python types and TF [native] types.
The distinctions blur when using type annotations (e.g. if a `tf.function` had both an input signature and type annotations, we need to at least make sure they're consistent).
The most intuitive path toward unification ought to be to use Python type annotations to specify a `TypeSpec`, although the type annotation system is not yet powerful enough.

* Single flat vs. hierarchical namespace - for example: `tf.types.distribute.Dataset`, or `tf.types.DistributedDataset`?
* The inclusion of more specialized `Graph` types, such as `FuncGraph`, `CondBranchFuncGraph`, `WhileCondFuncGraph`, `WhileBodyFuncGraph`. It's unclear where these should be defined; however, internal submodules need these subtypes to maintain acyclic dependencies.