jorenham/optype

optype

Building blocks for precise & flexible type hints.



Installation

Optype is available as optype on PyPI:

```shell
pip install optype
```

For optional NumPy support, it is recommended to use the numpy extra. This ensures that the installed numpy version is compatible with optype, following NEP 29 and SPEC 0.

```shell
pip install "optype[numpy]"
```

See the optype.numpy docs for more info.

Example

Let's say you're writing a twice(x) function, that evaluates 2 * x. Implementing it is trivial, but what about the type annotations?

Because twice(2) == 4, twice(3.14) == 6.28 and twice('I') == 'II', it might seem like a good idea to type it as twice[T](x: T) -> T: .... However, that wouldn't include cases such as twice(True) == 2 or twice((42, True)) == (42, True, 42, True), where the input and output types differ. Moreover, twice should accept any type with a custom __rmul__ method that accepts 2 as argument.

This is where optype comes in handy, which has single-method protocols for all the builtin special methods. For twice, we can use optype.CanRMul[T, R], which, as the name suggests, is a protocol with (only) the def __rmul__(self, lhs: T) -> R: ... method. With this, the twice function can be written as:

Python 3.10:

```python
from typing import Literal, TypeAlias, TypeVar

from optype import CanRMul

R = TypeVar("R")
Two: TypeAlias = Literal[2]
RMul2: TypeAlias = CanRMul[Two, R]


def twice(x: RMul2[R]) -> R:
    return 2 * x
```

Python 3.12+:

```python
from typing import Literal

from optype import CanRMul

type Two = Literal[2]
type RMul2[R] = CanRMul[Two, R]


def twice[R](x: RMul2[R]) -> R:
    return 2 * x
```

But what about types that implement __add__ but not __radd__? In this case, we could return x * 2 as fallback (assuming commutativity). Because the optype.Can* protocols are runtime-checkable, the revised twice2 function can be compactly written as:

Python 3.10:

```python
from optype import CanMul

Mul2: TypeAlias = CanMul[Two, R]
CMul2: TypeAlias = Mul2[R] | RMul2[R]


def twice2(x: CMul2[R]) -> R:
    if isinstance(x, CanRMul):
        return 2 * x
    else:
        return x * 2
```

Python 3.12+:

```python
from optype import CanMul

type Mul2[R] = CanMul[Two, R]
type CMul2[R] = Mul2[R] | RMul2[R]


def twice2[R](x: CMul2[R]) -> R:
    if isinstance(x, CanRMul):
        return 2 * x
    else:
        return x * 2
```

See examples/twice.py for the full example.

Reference

The API of optype is flat; a single import optype as opt is all you need (except for optype.numpy).

optype

There are four flavors of things that live within optype:

  • optype.Can{} types describe what can be done with it. For instance, any CanAbs[T] type can be used as argument to the abs() builtin function with return type T. Most Can{} implement a single special method, whose name directly matches that of the type. CanAbs implements __abs__, CanAdd implements __add__, etc.
  • optype.Has{} is the analogue of Can{}, but for special attributes. HasName has a __name__ attribute, HasDict has a __dict__, etc.
  • optype.Does{} describe the type of operators. So DoesAbs is the type of the abs({}) builtin function, and DoesPos the type of the +{} prefix operator.
  • optype.do_{} are the correctly-typed implementations of Does{}. For each do_{} there is a Does{}, and vice versa. So do_abs: DoesAbs is the typed alias of abs({}), and do_pos: DoesPos is a typed version of operator.pos. The optype.do_ operators are more complete than operator, have runtime-accessible type annotations, and have names you don't need to know by heart.

The reference docs are structured as follows:

All typing protocols here live in the root optype namespace. They are runtime-checkable so that you can do e.g. isinstance('snail', optype.CanAdd), in case you want to check whether snail implements __add__.

Unlike collections.abc, optype's protocols aren't abstract base classes, i.e. they don't extend abc.ABC, only typing.Protocol. This allows the optype protocols to be used as building blocks for .pyi type stubs.
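To illustrate how such a runtime-checkable single-method protocol behaves, here is a minimal stdlib-only stand-in for optype.CanAdd (a sketch; the real protocol's definition may differ in details):

```python
from typing import Protocol, TypeVar, runtime_checkable

T_contra = TypeVar("T_contra", contravariant=True)
R_co = TypeVar("R_co", covariant=True)


@runtime_checkable
class CanAdd(Protocol[T_contra, R_co]):
    """Minimal stand-in for optype.CanAdd: anything with an __add__ method."""

    def __add__(self, rhs: T_contra, /) -> R_co: ...


# str implements __add__ (concatenation), so the check passes:
print(isinstance("snail", CanAdd))   # True
# a bare object() has no __add__, so the check fails:
print(isinstance(object(), CanAdd))  # False
```

Because only a single method is checked, isinstance stays cheap, which is exactly why the one-method-per-protocol design pays off at runtime.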

Builtin type conversion

The return type of these special methods is invariant. Python will raise an error if some other (sub)type is returned. This is why these optype interfaces don't accept generic type arguments.

| expression | function | function type | method | operand type |
|---|---|---|---|---|
| `complex(_)` | `do_complex` | `DoesComplex` | `__complex__` | `CanComplex` |
| `float(_)` | `do_float` | `DoesFloat` | `__float__` | `CanFloat` |
| `int(_)` | `do_int` | `DoesInt` | `__int__` | `CanInt[R: int = int]` |
| `bool(_)` | `do_bool` | `DoesBool` | `__bool__` | `CanBool[R: bool = bool]` |
| `bytes(_)` | `do_bytes` | `DoesBytes` | `__bytes__` | `CanBytes[R: bytes = bytes]` |
| `str(_)` | `do_str` | `DoesStr` | `__str__` | `CanStr[R: str = str]` |

Note

The Can* interfaces of the types that can be used as typing.Literal accept an optional type parameter R. This can be used to indicate a literal return type, for surgically precise typing, e.g. None, True, and 42 are instances of CanBool[Literal[False]], CanInt[Literal[1]], and CanStr[Literal['42']], respectively.

These formatting methods are allowed to return instances that are a subtype of the str builtin. The same holds for the __format__ argument. So if you're a 10x developer that wants to hack Python's f-strings, but only if your type hints are spot-on, then optype is your friend.

| expression | function | function type | method | operand type |
|---|---|---|---|---|
| `repr(_)` | `do_repr` | `DoesRepr` | `__repr__` | `CanRepr[R: str = str]` |
| `format(_, x)` | `do_format` | `DoesFormat` | `__format__` | `CanFormat[T: str = str, R: str = str]` |

Additionally, optype provides protocols for types with (custom) hash or index methods:

| expression | function | function type | method | operand type |
|---|---|---|---|---|
| `hash(_)` | `do_hash` | `DoesHash` | `__hash__` | `CanHash` |
| `_.__index__()` (docs) | `do_index` | `DoesIndex` | `__index__` | `CanIndex[R: int = int]` |

Rich relations

The "rich" comparison special methods often return a bool. However, instances of any type can be returned (e.g. a numpy array). This is why the corresponding optype.Can* interfaces accept a second type argument for the return type, which defaults to bool when omitted. The first type parameter matches the passed method argument, i.e. the right-hand side operand, denoted here as x.

| expression | reflected | function | function type | method | operand type |
|---|---|---|---|---|---|
| `_ == x` | `x == _` | `do_eq` | `DoesEq` | `__eq__` | `CanEq[T = object, R = bool]` |
| `_ != x` | `x != _` | `do_ne` | `DoesNe` | `__ne__` | `CanNe[T = object, R = bool]` |
| `_ < x` | `x > _` | `do_lt` | `DoesLt` | `__lt__` | `CanLt[T, R = bool]` |
| `_ <= x` | `x >= _` | `do_le` | `DoesLe` | `__le__` | `CanLe[T, R = bool]` |
| `_ > x` | `x < _` | `do_gt` | `DoesGt` | `__gt__` | `CanGt[T, R = bool]` |
| `_ >= x` | `x <= _` | `do_ge` | `DoesGe` | `__ge__` | `CanGe[T, R = bool]` |

Binary operations

In the Python docs, these are referred to as "arithmetic operations". But the operands aren't limited to numeric types; the operations aren't required to be commutative, may be non-deterministic, and can have side effects. Classifying them as "arithmetic" is, at the very least, a bit of a stretch.

| expression | function | function type | method | operand type |
|---|---|---|---|---|
| `_ + x` | `do_add` | `DoesAdd` | `__add__` | `CanAdd[T, R]` |
| `_ - x` | `do_sub` | `DoesSub` | `__sub__` | `CanSub[T, R]` |
| `_ * x` | `do_mul` | `DoesMul` | `__mul__` | `CanMul[T, R]` |
| `_ @ x` | `do_matmul` | `DoesMatmul` | `__matmul__` | `CanMatmul[T, R]` |
| `_ / x` | `do_truediv` | `DoesTruediv` | `__truediv__` | `CanTruediv[T, R]` |
| `_ // x` | `do_floordiv` | `DoesFloordiv` | `__floordiv__` | `CanFloordiv[T, R]` |
| `_ % x` | `do_mod` | `DoesMod` | `__mod__` | `CanMod[T, R]` |
| `divmod(_, x)` | `do_divmod` | `DoesDivmod` | `__divmod__` | `CanDivmod[T, R]` |
| `_ ** x`, `pow(_, x)` | `do_pow/2` | `DoesPow` | `__pow__` | `CanPow2[T, R]`, `CanPow[T, None, R, Never]` |
| `pow(_, x, m)` | `do_pow/3` | `DoesPow` | `__pow__` | `CanPow3[T, M, R]`, `CanPow[T, M, Never, R]` |
| `_ << x` | `do_lshift` | `DoesLshift` | `__lshift__` | `CanLshift[T, R]` |
| `_ >> x` | `do_rshift` | `DoesRshift` | `__rshift__` | `CanRshift[T, R]` |
| `_ & x` | `do_and` | `DoesAnd` | `__and__` | `CanAnd[T, R]` |
| `_ ^ x` | `do_xor` | `DoesXor` | `__xor__` | `CanXor[T, R]` |
| `_ | x` | `do_or` | `DoesOr` | `__or__` | `CanOr[T, R]` |

Note

Because pow() can take an optional third argument, optype provides separate interfaces for pow() with two and three arguments. Additionally, there is the overloaded intersection type CanPow[T, M, R, RM] =: CanPow2[T, R] & CanPow3[T, M, RM], as interface for types that can take an optional third argument.
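The two overloads are easy to tell apart with the builtin int, which supports both forms of pow():

```python
# Two-argument pow() dispatches to __pow__(exp),
# which is what CanPow2 describes:
print(pow(2, 10))        # 1024

# Three-argument pow() dispatches to __pow__(exp, mod),
# which is why the separate CanPow3 interface exists:
print(pow(2, 10, 1000))  # 24  (i.e. 1024 % 1000)
```

A type that only implements the single-argument __pow__ satisfies CanPow2 but not CanPow3, so passing it to three-argument pow() would be a type error.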

Reflected operations

For the binary infix operators above, optype additionally provides interfaces with reflected (swapped) operands, e.g. __radd__ is a reflected __add__. They are named like the originals, but with the Can prefix replaced by CanR, i.e. __name__.replace('Can', 'CanR').

| expression | function | function type | method | operand type |
|---|---|---|---|---|
| `x + _` | `do_radd` | `DoesRAdd` | `__radd__` | `CanRAdd[T, R]` |
| `x - _` | `do_rsub` | `DoesRSub` | `__rsub__` | `CanRSub[T, R]` |
| `x * _` | `do_rmul` | `DoesRMul` | `__rmul__` | `CanRMul[T, R]` |
| `x @ _` | `do_rmatmul` | `DoesRMatmul` | `__rmatmul__` | `CanRMatmul[T, R]` |
| `x / _` | `do_rtruediv` | `DoesRTruediv` | `__rtruediv__` | `CanRTruediv[T, R]` |
| `x // _` | `do_rfloordiv` | `DoesRFloordiv` | `__rfloordiv__` | `CanRFloordiv[T, R]` |
| `x % _` | `do_rmod` | `DoesRMod` | `__rmod__` | `CanRMod[T, R]` |
| `divmod(x, _)` | `do_rdivmod` | `DoesRDivmod` | `__rdivmod__` | `CanRDivmod[T, R]` |
| `x ** _`, `pow(x, _)` | `do_rpow` | `DoesRPow` | `__rpow__` | `CanRPow[T, R]` |
| `x << _` | `do_rlshift` | `DoesRLshift` | `__rlshift__` | `CanRLshift[T, R]` |
| `x >> _` | `do_rrshift` | `DoesRRshift` | `__rrshift__` | `CanRRshift[T, R]` |
| `x & _` | `do_rand` | `DoesRAnd` | `__rand__` | `CanRAnd[T, R]` |
| `x ^ _` | `do_rxor` | `DoesRXor` | `__rxor__` | `CanRXor[T, R]` |
| `x | _` | `do_ror` | `DoesROr` | `__ror__` | `CanROr[T, R]` |

Note

CanRPow corresponds to CanPow2; the 3-parameter "modulo" pow does not reflect in Python.

According to the relevant python docs:

Note that ternary pow() will not try calling __rpow__() (the coercion rules would become too complicated).

Inplace operations

Similar to the reflected ops, the inplace/augmented ops are prefixed with CanI, namely:

| expression | function | function type | method | operand types |
|---|---|---|---|---|
| `_ += x` | `do_iadd` | `DoesIAdd` | `__iadd__` | `CanIAdd[T, R]`, `CanIAddSelf[T]` |
| `_ -= x` | `do_isub` | `DoesISub` | `__isub__` | `CanISub[T, R]`, `CanISubSelf[T]` |
| `_ *= x` | `do_imul` | `DoesIMul` | `__imul__` | `CanIMul[T, R]`, `CanIMulSelf[T]` |
| `_ @= x` | `do_imatmul` | `DoesIMatmul` | `__imatmul__` | `CanIMatmul[T, R]`, `CanIMatmulSelf[T]` |
| `_ /= x` | `do_itruediv` | `DoesITruediv` | `__itruediv__` | `CanITruediv[T, R]`, `CanITruedivSelf[T]` |
| `_ //= x` | `do_ifloordiv` | `DoesIFloordiv` | `__ifloordiv__` | `CanIFloordiv[T, R]`, `CanIFloordivSelf[T]` |
| `_ %= x` | `do_imod` | `DoesIMod` | `__imod__` | `CanIMod[T, R]`, `CanIModSelf[T]` |
| `_ **= x` | `do_ipow` | `DoesIPow` | `__ipow__` | `CanIPow[T, R]`, `CanIPowSelf[T]` |
| `_ <<= x` | `do_ilshift` | `DoesILshift` | `__ilshift__` | `CanILshift[T, R]`, `CanILshiftSelf[T]` |
| `_ >>= x` | `do_irshift` | `DoesIRshift` | `__irshift__` | `CanIRshift[T, R]`, `CanIRshiftSelf[T]` |
| `_ &= x` | `do_iand` | `DoesIAnd` | `__iand__` | `CanIAnd[T, R]`, `CanIAndSelf[T]` |
| `_ ^= x` | `do_ixor` | `DoesIXor` | `__ixor__` | `CanIXor[T, R]`, `CanIXorSelf[T]` |
| `_ |= x` | `do_ior` | `DoesIOr` | `__ior__` | `CanIOr[T, R]`, `CanIOrSelf[T]` |

These inplace operators usually return the instance itself (after some in-place mutation). But unfortunately, it currently isn't possible to use Self for this (i.e. something like type MyAlias[T] = optype.CanIAdd[T, Self] isn't allowed). So to help ease this unbearable pain, optype comes equipped with ready-made aliases for you to use. They bear the same name, with an additional *Self suffix, e.g. optype.CanIAddSelf[T].
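As a sketch of the "returns the instance itself" convention that the Can*Self aliases encode, consider a hypothetical Tally class whose __iadd__ mutates in place and returns self:

```python
class Tally:
    """Hypothetical example: __iadd__ mutates and returns the same instance."""

    def __init__(self) -> None:
        self.total = 0

    def __iadd__(self, n: int) -> "Tally":
        self.total += n
        return self  # the conventional "return itself" that Can*Self describes


t = before = Tally()
t += 3  # sugar for: t = t.__iadd__(3)
t += 4
print(t.total)      # 7
print(t is before)  # True: += rebound t to the very same object
```

If __iadd__ returned a new instance instead, the code would still run, which is why the more general CanIAdd[T, R] with a free return type also exists.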

Unary operations

| expression | function | function type | method | operand types |
|---|---|---|---|---|
| `+_` | `do_pos` | `DoesPos` | `__pos__` | `CanPos[R]`, `CanPosSelf` |
| `-_` | `do_neg` | `DoesNeg` | `__neg__` | `CanNeg[R]`, `CanNegSelf` |
| `~_` | `do_invert` | `DoesInvert` | `__invert__` | `CanInvert[R]`, `CanInvertSelf` |
| `abs(_)` | `do_abs` | `DoesAbs` | `__abs__` | `CanAbs[R]`, `CanAbsSelf` |

Rounding

The round() built-in function takes an optional second argument. From a typing perspective, round() has two overloads: one with one parameter, and one with two. For both overloads, optype provides separate operand interfaces: CanRound1[R] and CanRound2[T, RT]. Additionally, optype also provides their (overloaded) intersection type: CanRound[T, R, RT] = CanRound1[R] & CanRound2[T, RT].

| expression | function | function type | method | operand type |
|---|---|---|---|---|
| `round(_)` | `do_round/1` | `DoesRound` | `__round__/1` | `CanRound1[T = int]` |
| `round(_, n)` | `do_round/2` | `DoesRound` | `__round__/2` | `CanRound2[T = int, RT = float]` |
| `round(_, n=...)` | `do_round` | `DoesRound` | `__round__` | `CanRound[T = int, R = int, RT = float]` |

For example, type-checkers will mark the following code as valid (tested with pyright in strict mode):

```python
from optype import CanRound, CanRound1, CanRound2

x: float = 3.14
x1: CanRound1[int] = x
x2: CanRound2[int, float] = x
x3: CanRound[int, int, float] = x
```

Furthermore, there are the alternative rounding functions from the math standard library:

| expression | function | function type | method | operand type |
|---|---|---|---|---|
| `math.trunc(_)` | `do_trunc` | `DoesTrunc` | `__trunc__` | `CanTrunc[R = int]` |
| `math.floor(_)` | `do_floor` | `DoesFloor` | `__floor__` | `CanFloor[R = int]` |
| `math.ceil(_)` | `do_ceil` | `DoesCeil` | `__ceil__` | `CanCeil[R = int]` |

Almost all implementations use int for R. In fact, if R is left unspecified, it defaults to int. But technically speaking, these methods can be made to return anything.

Callables

Unlike operator, optype provides the operator for callable objects: optype.do_call(f, *args, **kwargs).

CanCall is similar to collections.abc.Callable, but is runtime-checkable, and doesn't use esoteric hacks.

| expression | function | function type | method | operand type |
|---|---|---|---|---|
| `_(*args, **kwargs)` | `do_call` | `DoesCall` | `__call__` | `CanCall[**Pss, R]` |
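Because the protocol only requires a __call__ method, a runtime check is straightforward. Here is a simplified, non-generic stand-in for CanCall (the real optype protocol is generic over the parameter spec):

```python
from typing import Any, Protocol, runtime_checkable


@runtime_checkable
class CanCall(Protocol):
    """Non-generic sketch of optype.CanCall: anything with __call__."""

    def __call__(self, *args: Any, **kwargs: Any) -> Any: ...


print(isinstance(len, CanCall))    # True: builtin functions are callable
print(isinstance("abc", CanCall))  # False: str instances have no __call__
```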

Note

Pyright (and probably other typecheckers) tend to accept collections.abc.Callable in more places than optype.CanCall. This could be related to the lack of co/contra-variance specification for typing.ParamSpec (they should almost always be contravariant, but currently they can only be invariant).

In case you encounter such a situation, please open an issue about it, so we can investigate further.

Iteration

The operand of iter(_) is known within Python as an iterable, which is what collections.abc.Iterable[V] is often used for (e.g. as a base class, or for instance checking).

The optype analogue is CanIter[R], which, as the name suggests, also implements __iter__. But unlike Iterable[V], its type parameter R binds to the return type of iter(_) -> R. This makes it possible to annotate the specific type of the iterator that iter(_) returns. Iterable[V] is only able to annotate the type of the iterated value. To see why that isn't possible, see python/typing#548.

The collections.abc.Iterator[V] is even more awkward; it is a subtype of Iterable[V]. For those familiar with collections.abc this might come as a surprise, but an iterator only needs to implement __next__; __iter__ isn't needed. This means that Iterator[V] is unnecessarily restrictive. Apart from being theoretically "ugly", it has significant performance implications: the time-complexity of isinstance on a typing.Protocol is $O(n)$, with $n$ the number of members. So even if the overhead of the inheritance and the abc.ABC usage is ignored, collections.abc.Iterator is twice as slow as it needs to be.

That's one of the (many) reasons that optype.CanNext[V] and optype.CanIter[R] are the better alternatives to Iterator and Iterable from the abracadabra collections. This is how they are defined:

| expression | function | function type | method | operand type |
|---|---|---|---|---|
| `next(_)` | `do_next` | `DoesNext` | `__next__` | `CanNext[V]` |
| `iter(_)` | `do_iter` | `DoesIter` | `__iter__` | `CanIter[R: CanNext[object]]` |

For the sake of compatibility with collections.abc, there is optype.CanIterSelf[V], which is a protocol whose __iter__ returns typing.Self, and whose __next__ returns V. I.e. it is equivalent to collections.abc.Iterator[V], but without the abc nonsense.
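To see that __next__ alone really suffices, here is a hypothetical countdown that implements only __next__, checked against a minimal non-generic sketch of CanNext:

```python
from typing import Protocol, runtime_checkable


@runtime_checkable
class CanNext(Protocol):
    """Non-generic sketch of optype.CanNext: only __next__ is required."""

    def __next__(self) -> object: ...


class Countdown:
    """Has __next__, but deliberately no __iter__."""

    def __init__(self, n: int) -> None:
        self.n = n

    def __next__(self) -> int:
        if self.n <= 0:
            raise StopIteration
        self.n -= 1
        return self.n


c = Countdown(3)
print(isinstance(c, CanNext))       # True: __next__ is all that's checked
print(hasattr(c, "__iter__"))       # False: not an abc-style Iterator
print([next(c), next(c), next(c)])  # [2, 1, 0]: next() works regardless
```

A for loop over Countdown would fail (it calls iter() first), but next() works fine, which is exactly the distinction CanNext vs CanIter captures.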

Awaitables

The optype.CanAwait[R] interface is almost the same as collections.abc.Awaitable[R], except that optype.CanAwait[R] is a pure interface, whereas Awaitable is also an abstract base class (making it absolutely useless when writing stubs).

| expression | method | operand type |
|---|---|---|
| `await _` | `__await__` | `CanAwait[R]` |

Async Iteration

Yes, you guessed it right; the abracadabra collections made the exact same mistakes for the async iterablors (or was it "iteramblers"...?).

But fret not; the optype alternatives are right here:

| expression | function | function type | method | operand type |
|---|---|---|---|---|
| `anext(_)` | `do_anext` | `DoesANext` | `__anext__` | `CanANext[V]` |
| `aiter(_)` | `do_aiter` | `DoesAIter` | `__aiter__` | `CanAIter[R: CanANext[object]]` |

But wait, shouldn't V be a CanAwait? Well, only if you don't want to get fired... Technically speaking, __anext__ can return any type, and anext will pass it along without nagging (instance checks are slow, now stop bothering that liberal). For details, see the discussion at python/typeshed#7491. Just because something is legal, doesn't mean it's a good idea (don't eat the yellow snow).

Additionally, there is optype.CanAIterSelf[R], with both the __aiter__() -> Self and the __anext__() -> V methods.

Containers

| expression | function | function type | method | operand type |
|---|---|---|---|---|
| `len(_)` | `do_len` | `DoesLen` | `__len__` | `CanLen[R: int = int]` |
| `_.__length_hint__()` (docs) | `do_length_hint` | `DoesLengthHint` | `__length_hint__` | `CanLengthHint[R: int = int]` |
| `_[k]` | `do_getitem` | `DoesGetitem` | `__getitem__` | `CanGetitem[K, V]` |
| `_.__missing__()` (docs) | `do_missing` | `DoesMissing` | `__missing__` | `CanMissing[K, D]` |
| `_[k] = v` | `do_setitem` | `DoesSetitem` | `__setitem__` | `CanSetitem[K, V]` |
| `del _[k]` | `do_delitem` | `DoesDelitem` | `__delitem__` | `CanDelitem[K]` |
| `k in _` | `do_contains` | `DoesContains` | `__contains__` | `CanContains[K = object]` |
| `reversed(_)` | `do_reversed` | `DoesReversed` | `__reversed__` | `CanReversed[R]`, or `CanSequence[I, V, N = int]` |

Because CanMissing[K, D] generally doesn't show itself without CanGetitem[K, V] there to hold its hand, optype conveniently stitched them together as optype.CanGetMissing[K, V, D=V].

Similarly, there is optype.CanSequence[I: CanIndex | slice, V], which is the combination of both CanLen and CanGetitem[I, V], and serves as a more specific and flexible collections.abc.Sequence[V].
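The CanGetitem/CanMissing pairing mirrors how dict itself behaves: __missing__ is only consulted by dict.__getitem__ when the key is absent. A quick stdlib-only illustration:

```python
class DefaultingDict(dict):
    """dict subclass: __getitem__ falls back to __missing__ for absent keys."""

    def __missing__(self, key: str) -> str:
        return f"<no {key}>"


d = DefaultingDict(a=1)
print(d["a"])  # 1         -- key found, __missing__ is not called
print(d["b"])  # '<no b>'  -- key absent, __missing__ supplies the value
```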

Attributes

| expression | function | function type | method | operand type |
|---|---|---|---|---|
| `v = _.k` or `v = getattr(_, k)` | `do_getattr` | `DoesGetattr` | `__getattr__` | `CanGetattr[K: str = str, V = object]` |
| `_.k = v` or `setattr(_, k, v)` | `do_setattr` | `DoesSetattr` | `__setattr__` | `CanSetattr[K: str = str, V = object]` |
| `del _.k` or `delattr(_, k)` | `do_delattr` | `DoesDelattr` | `__delattr__` | `CanDelattr[K: str = str]` |
| `dir(_)` | `do_dir` | `DoesDir` | `__dir__` | `CanDir[R: CanIter[CanIterSelf[str]]]` |

Context managers

Support for the with statement.

| expression | method(s) | type(s) |
|---|---|---|
| | `__enter__` | `CanEnter[C]`, or `CanEnterSelf` |
| | `__exit__` | `CanExit[R = None]` |
| `with _ as c:` | `__enter__` and `__exit__` | `CanWith[C, R = None]`, or `CanWithSelf[R = None]` |

CanEnterSelf and CanWithSelf are (runtime-checkable) aliases for CanEnter[Self] and CanWith[Self, R], respectively.

For the async with statement the interfaces look very similar:

| expression | method(s) | type(s) |
|---|---|---|
| | `__aenter__` | `CanAEnter[C]`, or `CanAEnterSelf` |
| | `__aexit__` | `CanAExit[R = None]` |
| `async with _ as c:` | `__aenter__` and `__aexit__` | `CanAsyncWith[C, R = None]`, or `CanAsyncWithSelf[R = None]` |

Descriptors

Interfaces for descriptors.

| expression | method | type |
|---|---|---|
| `v: V = T().d` or `vt: VT = T.d` | `__get__` | `CanGet[T: object, V, VT = V]` |
| `T().k = v` | `__set__` | `CanSet[T: object, V]` |
| `del T().k` | `__delete__` | `CanDelete[T: object]` |
| `class T: d = _` | `__set_name__` | `CanSetName[T: object, N: str = str]` |

Buffer types

Interfaces for emulating buffer types using the buffer protocol.

| expression | method | type |
|---|---|---|
| `v = memoryview(_)` | `__buffer__` | `CanBuffer[T: int = int]` |
| `del v` | `__release_buffer__` | `CanReleaseBuffer` |

optype.copy

For the copy standard library, optype.copy provides the following runtime-checkable interfaces:

| copy function | bound method | optype.copy type |
|---|---|---|
| `copy.copy(_) -> R` | `__copy__() -> R` | `CanCopy[R]` |
| `copy.deepcopy(_, memo={}) -> R` | `__deepcopy__(memo, /) -> R` | `CanDeepcopy[R]` |
| `copy.replace(_, /, **changes: V) -> R` [1] | `__replace__(**changes: V) -> R` | `CanReplace[V, R]` |

[1] copy.replace requires python>=3.13 (but optype.copy.CanReplace doesn't)

In practice, it makes sense that a copy of an instance has the same type as the original. But because typing.Self cannot be used as a type argument, this is difficult to type properly. Instead, you can use the optype.copy.Can{}Self types, which are the runtime-checkable equivalents of the following (recursive) type aliases:

```python
type CanCopySelf = CanCopy[CanCopySelf]
type CanDeepcopySelf = CanDeepcopy[CanDeepcopySelf]
type CanReplaceSelf[V] = CanReplace[V, CanReplaceSelf[V]]
```
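As a sketch of the underlying hooks, here is a hypothetical Point class implementing __copy__ and __deepcopy__, which copy.copy and copy.deepcopy pick up automatically:

```python
import copy


class Point:
    """Implements the special methods behind CanCopy and CanDeepcopy."""

    def __init__(self, x: int, y: int) -> None:
        self.x, self.y = x, y

    def __copy__(self) -> "Point":
        return Point(self.x, self.y)

    def __deepcopy__(self, memo: dict) -> "Point":
        # no nested mutable state here, so a shallow rebuild suffices
        return Point(self.x, self.y)


p = Point(1, 2)
q = copy.copy(p)
print(q is p)                # False: a fresh instance was returned
print((q.x, q.y) == (1, 2))  # True: with the same coordinates
```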

optype.dataclasses

For the dataclasses standard library, optype.dataclasses provides the HasDataclassFields[V: Mapping[str, Field]] interface. It can conveniently be used to check whether a type or instance is a dataclass, i.e. isinstance(obj, HasDataclassFields).
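The check relies on the __dataclass_fields__ attribute that @dataclass sets on the class; a stdlib-only approximation of what HasDataclassFields matches:

```python
from dataclasses import dataclass


@dataclass
class User:
    name: str
    age: int


# @dataclass stores its field metadata in __dataclass_fields__,
# which is the attribute that HasDataclassFields requires:
print(hasattr(User, "__dataclass_fields__"))             # True
print(hasattr(User("ann", 3), "__dataclass_fields__"))   # True (via the class)
print(hasattr(object(), "__dataclass_fields__"))         # False
```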

optype.inspect

A collection of functions for runtime inspection of types, modules, and other objects.

Function Description
get_args(_)

A better alternative to typing.get_args(), that

  • unpacks typing.Annotated and Python 3.12 type _ alias types (i.e. typing.TypeAliasType),
  • recursively flattens unions and nested typing.Literal types, and
  • raises TypeError if not a type expression.

Returns a tuple[type | object, ...] of type arguments or parameters.

To illustrate one of the (many) issues with typing.get_args:

```python
>>> from typing import Literal, TypeAlias, get_args
>>> Falsy: TypeAlias = Literal[None] | Literal[False, 0] | Literal["", b""]
>>> get_args(Falsy)
(typing.Literal[None], typing.Literal[False, 0], typing.Literal['', b''])
```

But this is in direct contradiction with the official typing documentation:

When a Literal is parameterized with more than one value, it’s treated as exactly equivalent to the union of those types. That is, Literal[v1, v2, v3] is equivalent to Literal[v1] | Literal[v2] | Literal[v3].

So this is why optype.inspect.get_args should be used instead:

```python
>>> import optype as opt
>>> opt.inspect.get_args(Falsy)
(None, False, 0, '', b'')
```

Another issue of typing.get_args is with Python 3.12 type _ = ... aliases, which are meant as a replacement for _: typing.TypeAlias = ..., and should therefore be treated equally:

```python
>>> import typing
>>> import optype as opt
>>> type StringLike = str | bytes
>>> typing.get_args(StringLike)
()
>>> opt.inspect.get_args(StringLike)
(<class 'str'>, <class 'bytes'>)
```

Clearly, typing.get_args fails miserably here; it would have been better if it raised an error, but instead it returns an empty tuple, hiding the fact that it doesn't support the new type _ = ... aliases. Luckily, optype.inspect.get_args doesn't have this problem, and treats it just like it treats typing.TypeAlias (and so do the other optype.inspect functions).

get_protocol_members(_)

A better alternative to typing.get_protocol_members(), that

  • doesn't require Python 3.13 or above,
  • supports PEP 695 type _ alias types on Python 3.12 and above,
  • unpacks unions of typing.Literal ...
  • ... and flattens them if nested within another typing.Literal,
  • treats typing.Annotated[T] as T, and
  • raises a TypeError if the passed value isn't a type expression.

Returns a frozenset[str] with member names.

get_protocols(_)

Returns a frozenset[type] of the public protocols within the passed module. Pass private=True to also return the private protocols.

is_iterable(_)

Check whether the object can be iterated over, i.e. whether it can be used in a for loop, without attempting to do so. If True is returned, then the object is an optype.typing.AnyIterable instance.

is_final(_)

Check whether the type, method, classmethod, staticmethod, or property is decorated with @typing.final.

Note that a @property won't be recognized unless the @final decorator is placed below the @property decorator. See the function docstring for more information.

is_protocol(_)

A backport of typing.is_protocol, which was added in Python 3.13; a re-export of typing_extensions.is_protocol.

is_runtime_protocol(_)

Check if the type expression is a runtime-protocol, i.e. a typing.Protocol type, decorated with @typing.runtime_checkable (also supports typing_extensions).

is_union_type(_)

Check if the type is a typing.Union type, e.g. str | int.

Unlike isinstance(_, types.UnionType), this function also returns True for unions of user-defined Generic or Protocol types (because those use a different union type for some reason).

is_generic_alias(_)

Check if the type is a subscripted type, e.g. list[str] or optype.CanNext[int], but not list or CanNext.

Unlike isinstance(_, types.GenericAlias), this function also returns True for user-defined Generic or Protocol types (because those use a different generic alias for some reason).

Even though T1 | T2 is technically represented as typing.Union[T1, T2] (which is a (special) generic alias), is_generic_alias returns False for such union types, because calling T1 | T2 a subscripted type just doesn't make much sense.

Note

All functions in optype.inspect also work for Python 3.12 type _ aliases (i.e. types.TypeAliasType) and with typing.Annotated.

optype.json

Type aliases for the json standard library:

| `json.load(s)` return type | `json.dump(s)` input type |
|---|---|
| `Value` | `AnyValue` |
| `Array[V: Value = Value]` | `AnyArray[V: AnyValue = AnyValue]` |
| `Object[V: Value = Value]` | `AnyObject[V: AnyValue = AnyValue]` |

The (Any)Value can be any json input, i.e. Value | Array | Object is equivalent to Value. It's also worth noting that Value is a subtype of AnyValue, which means that AnyValue | Value is equivalent to AnyValue.

optype.pickle

For the pickle standard library, optype.pickle provides the following interfaces:

| method(s) | signature (bound) | type |
|---|---|---|
| `__reduce__` | `() -> R` | `CanReduce[R: str \| tuple = ...]` |
| `__reduce_ex__` | `(CanIndex) -> R` | `CanReduceEx[R: str \| tuple = ...]` |
| `__getstate__` | `() -> S` | `CanGetstate[S]` |
| `__setstate__` | `(S) -> None` | `CanSetstate[S]` |
| `__getnewargs__`, `__new__` | `() -> tuple[V, ...]`, `(V) -> Self` | `CanGetnewargs[V]` |
| `__getnewargs_ex__`, `__new__` | `() -> tuple[tuple[V, ...], dict[str, KV]]`, `(*tuple[V, ...], **dict[str, KV]) -> Self` | `CanGetnewargsEx[V, KV]` |

optype.string

The string standard library contains practical constants, but it has two issues:

  • The constants contain a collection of characters, but are represented as a single string. This makes it practically impossible to type-hint the individual characters, so typeshed currently types these constants as LiteralString.
  • The names of the constants are inconsistent, and don't follow PEP 8.

So instead, optype.string provides an alternative interface, that is compatible with string, but with slight differences:

  • For each constant, there is a corresponding Literal type alias for the individual characters. Its name matches the name of the constant, but is singular instead of plural.
  • Instead of a single string, optype.string uses a tuple of characters, so that each character has its own typing.Literal annotation. Note that this is only tested with (based)pyright / pylance, so it might not work with mypy (it has more bugs than it has lines of codes).
  • The names of the constants are consistent with PEP 8, and use a postfix notation for variants, e.g. DIGITS_HEX instead of hexdigits.
  • Unlike string, optype.string has a constant (and type alias) for the binary digits '0' and '1': DIGITS_BIN (and DigitBin). After all, besides the oct and hex builtins, there's also builtins.bin.
| `string._` constant | char type | `optype.string._` constant | char type |
|---|---|---|---|
| (missing) | | `DIGITS_BIN` | `DigitBin` |
| `octdigits` | `LiteralString` | `DIGITS_OCT` | `DigitOct` |
| `digits` | `LiteralString` | `DIGITS` | `Digit` |
| `hexdigits` | `LiteralString` | `DIGITS_HEX` | `DigitHex` |
| `ascii_letters` | `LiteralString` | `LETTERS` | `Letter` |
| `ascii_lowercase` | `LiteralString` | `LETTERS_LOWER` | `LetterLower` |
| `ascii_uppercase` | `LiteralString` | `LETTERS_UPPER` | `LetterUpper` |
| `punctuation` | `LiteralString` | `PUNCTUATION` | `Punctuation` |
| `whitespace` | `LiteralString` | `WHITESPACE` | `Whitespace` |
| `printable` | `LiteralString` | `PRINTABLE` | `Printable` |

Each of the optype.string constants is exactly the same as the corresponding string constant (after concatenation / splitting), e.g.

```python
>>> import string
>>> import optype as opt
>>> "".join(opt.string.PRINTABLE) == string.printable
True
>>> tuple(string.printable) == opt.string.PRINTABLE
True
```

Similarly, the values within a constant's Literal type exactly match the values of its constant:

```python
>>> import optype as opt
>>> from optype.inspect import get_args
>>> get_args(opt.string.Printable) == opt.string.PRINTABLE
True
```

The optype.inspect.get_args is a non-broken variant of typing.get_args that correctly flattens nested literals, type-unions, and PEP 695 type aliases, so that it matches the official typing specs. In other words; typing.get_args is yet another fundamentally broken python-typing feature that's useless in the situations where you need it most.

optype.typing

Any* type aliases

Type aliases for anything that can always be passed to int, float, complex, iter, or typing.Literal.

| Python constructor | optype.typing alias |
|---|---|
| `int(_)` | `AnyInt` |
| `float(_)` | `AnyFloat` |
| `complex(_)` | `AnyComplex` |
| `iter(_)` | `AnyIterable` |
| `typing.Literal[_]` | `AnyLiteral` |

Note

Even though some str and bytes can be converted to int, float, complex, most of them can't, and are therefore not included in these type aliases.
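The asymmetry is easy to demonstrate: a few strings happen to convert, but an arbitrary str does not, so str can't soundly be part of AnyInt:

```python
# A numeric string happens to convert...
print(int("42"))  # 42

# ...but an arbitrary str does not, which is why str is excluded:
try:
    int("forty-two")
except ValueError as e:
    print("rejected:", e)

# int and float instances, on the other hand, always convert:
print(int(3.9))  # 3 (truncates toward zero)
```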

Empty* type aliases

These are builtin types or collections that are empty, i.e. have length 0 or yield no elements.

| instance | optype.typing type |
|---|---|
| `''` | `EmptyString` |
| `b''` | `EmptyBytes` |
| `()` | `EmptyTuple` |
| `[]` | `EmptyList` |
| `{}` | `EmptyDict` |
| `set()` | `EmptySet` |
| `(i for i in range(0))` | `EmptyIterable` |

Literal types

| literal values | optype.typing type | notes |
|---|---|---|
| `{False, True}` | `LiteralBool` | Similar to `typing.LiteralString`, but for `bool`. |
| `{0, 1, ..., 255}` | `LiteralByte` | Integers in the range 0-255, that make up `bytes` or `bytearray` objects. |

Just types

Warning

This is experimental, and is likely to change in the future.

The JustInt type can be used to only accept instances of type int. Subtypes like bool will be rejected. This works with recent versions of mypy and pyright.

```python
import optype.typing as opt


def only_int_pls(x: opt.JustInt, /) -> None: ...


only_int_pls(42)    # accepted
only_int_pls(True)  # rejected
```

The Just type is a generic variant of JustInt. At the moment of writing, pyright doesn't support this yet, but it will soon (after the bundled typeshed is updated).

```python
import optype.typing as opt


class A: ...
class B(A): ...


def must_have_type_a(a: opt.Just[A]) -> None: ...


must_have_type_a(A())  # accepted
must_have_type_a(B())  # rejected (at least with mypy)
```

optype.dlpack

A collection of low-level types for working with DLPack.

Protocols

CanDLPack[+T = int, +D: int = int] binds the method:

```python
def __dlpack__(
    *,
    stream: int | None = ...,
    max_version: tuple[int, int] | None = ...,
    dl_device: tuple[T, D] | None = ...,
    copy: bool | None = ...,
) -> types.CapsuleType: ...
```

CanDLPackDevice[+T = int, +D: int = int] binds the method:

```python
def __dlpack_device__() -> tuple[T, D]: ...
```

The + prefix indicates that the type parameter is covariant.

Enums

There are also two convenient IntEnums in optype.dlpack: DLDeviceType for the device types, and DLDataTypeCode for the internal type-codes of the DLPack data types.

optype.numpy

Optype supports both NumPy 1 and 2. The current minimum supported version is 1.24, following NEP 29 and SPEC 0.

When using optype.numpy, it is recommended to install optype with the numpy extra, ensuring version compatibility:

```shell
pip install "optype[numpy]"
```

Note

For the remainder of the optype.numpy docs, assume that the following import aliases are available.

```python
from typing import Any, Literal

import numpy as np
import numpy.typing as npt
import optype.numpy as onp
```

For the sake of brevity and readability, the PEP 695 and PEP 696 type parameter syntax will be used, which is supported since Python 3.13.

Shape-typing with Array

Optype provides the generic onp.Array type alias for np.ndarray. It is similar to npt.NDArray, but includes two (optional) type parameters: one that matches the shape type (ND: tuple[int, ...]), and one that matches the scalar type (ST: np.generic).

When we put the definitions of npt.NDArray and onp.Array side by side, their differences become clear:

numpy.typing.NDArray:

```python
type NDArray[
    # no shape type
    ST: generic,  # no default
] = ndarray[Any, dtype[ST]]
```

optype.numpy.Array:

```python
type Array[
    ND: (int, ...) = (int, ...),
    ST: generic = generic,
] = ndarray[ND, dtype[ST]]
```

optype.numpy.ArrayND:

```python
type ArrayND[
    ST: generic = generic,
    ND: (int, ...) = (int, ...),
] = ndarray[ND, dtype[ST]]
```

Additionally, there are the three Array{1,2,3}D[ST: generic] aliases, which are equivalent to Array with tuple[int], tuple[int, int] and tuple[int, int, int] as shape-type, respectively.

Tip

Before NumPy 2.1, the shape type parameter of ndarray (i.e. the type of ndarray.shape) was invariant. It is therefore recommended not to use Literal within shape types on numpy<2.1. So with numpy>=2.1 you can use tuple[Literal[3], Literal[3]] without problems, but with numpy<2.1 you should use tuple[int, int] instead.

See numpy/numpy#25729 and numpy/numpy#26081 for details.

With onp.Array, it becomes possible to type the shape of arrays.

A shape is nothing more than a tuple of (non-negative) integers, i.e. an instance of tuple[int, ...] such as (42,), (480, 720, 3) or (). The length of a shape is often referred to as the number of dimensions, or the dimensionality, of the array or scalar. For arrays it is accessible as np.ndarray.ndim, which equals len(np.ndarray.shape).

Note

Before NumPy 2, the maximum number of dimensions was 32; it has since been increased to 64.

To make typing the shape of an array easier, optype provides two families of shape type aliases: AtLeast{N}D and AtMost{N}D. The {N} should be replaced by the number of dimensions, which currently is limited to 0, 1, 2, and 3.

Both of these families are generic, and their (optional) type parameters must be either int (the default) or a literal non-negative integer, i.e. typing.Literal[N] with N a non-negative int.

The names AtLeast{N}D and AtMost{N}D are pretty much self-explanatory:

  • AtLeast{N}D is a tuple[int, ...] with ndim >= N
  • AtMost{N}D is a tuple[int, ...] with ndim <= N

The shape aliases are roughly defined as:

type AtLeast0D[Ds: int = int] = tuple[Ds, ...]
type AtMost0D = tuple[()]

type AtLeast1D[D0: int = int, Ds: int = int] = tuple[D0, *tuple[Ds, ...]]
type AtMost1D[D0: int = int] = tuple[D0] | AtMost0D

type AtLeast2D[D0: int = int, D1: int = int, Ds: int = int] = tuple[D0, D1, *tuple[Ds, ...]]
type AtMost2D[D0: int = int, D1: int = int] = tuple[D0, D1] | AtMost1D[D0]

type AtLeast3D[D0: int = int, D1: int = int, D2: int = int, Ds: int = int] = tuple[D0, D1, D2, *tuple[Ds, ...]]
type AtMost3D[D0: int = int, D1: int = int, D2: int = int] = tuple[D0, D1, D2] | AtMost2D[D0, D1]
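Since a shape is just a tuple of ints, the static constraints that these aliases express reduce to simple length checks at runtime. A stdlib-only sketch (the function names here are illustrative, not part of optype, which constrains types rather than values):

```python
# Runtime analogue of the AtLeast{N}D / AtMost{N}D shape aliases:
# "ndim >= N" and "ndim <= N" are just checks on len(shape).
def at_least_2d(shape: tuple[int, ...]) -> bool:
    return len(shape) >= 2


def at_most_2d(shape: tuple[int, ...]) -> bool:
    return len(shape) <= 2


assert at_least_2d((480, 720, 3))   # a 3-d shape has ndim >= 2
assert at_most_2d((480, 720))       # a 2-d shape has ndim <= 2
assert not at_most_2d((480, 720, 3))
```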

Array-likes

Similar to the numpy._typing._ArrayLike{}_co coercible array-like types, optype.numpy provides the To{}ND aliases. Unlike the ones in numpy, these don't accept "bare" scalar types (a __len__ method is required). Additionally, there are the To{}1D and To{}2D aliases for vector-likes and matrix-likes, and the To{} aliases for "bare" scalar types.

| scalar types (builtins) | scalar types (numpy) | scalar-like | 1-d array-like | 2-d array-like | n-d array-like |
|---|---|---|---|---|---|
| bool | bool_ | ToBool | ToBool1D | ToBool2D | ToBoolND |
| int | integer \| bool_ | ToInt | ToInt1D | ToInt2D | ToIntND |
| float \| int | floating \| integer \| bool_ | ToFloat | ToFloat1D | ToFloat2D | ToFloatND |
| complex \| float \| int | number \| bool_ | ToComplex | ToComplex1D | ToComplex2D | ToComplexND |
| bytes \| str \| complex \| float \| int | generic | ToScalar | ToArray1D | ToArray2D | ToArrayND |

Source code: optype/numpy/_to.py
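The "no bare scalars" rule boils down to requiring __len__. A stdlib-only sketch of that distinction, using a simplified stand-in protocol (HasLen is illustrative, not optype's actual definition):

```python
from typing import Protocol, runtime_checkable


@runtime_checkable
class HasLen(Protocol):
    """Anything with __len__, i.e. the minimal requirement of To{}ND."""

    def __len__(self) -> int: ...


# A bare scalar has no __len__, so it isn't an n-d array-like:
assert not isinstance(3.14, HasLen)
# A sequence of scalars does, so it qualifies as a 1-d array-like:
assert isinstance([3.14], HasLen)
```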

DType

In NumPy, a dtype (data type) object is an instance of the numpy.dtype[ST: np.generic] type. It's commonly used to convey the metadata of a scalar type, e.g. within arrays.

Because the type parameter of np.dtype isn't optional, it could be more convenient to use the alias optype.numpy.DType, which is defined as:

type DType[ST: np.generic = np.generic] = np.dtype[ST]

Apart from the "CamelCase" name, the only difference from np.dtype is that the type parameter can be omitted, in which case it's equivalent to np.dtype[np.generic], but shorter.

Scalar

The optype.numpy.Scalar interface is a generic runtime-checkable protocol that can be seen as a "more specific" np.generic, both in name and from a typing perspective.

Its type signature looks roughly like this:

type Scalar[
    # The "Python type", so that `Scalar.item() -> PT`.
    PT: object,
    # The "N-bits" type (without having to deal with `npt.NBitBase`).
    # It matches the `itemsize: NB` property.
    NB: int = int,
] = ...

It can be used as e.g.

are_birds_real: Scalar[bool, Literal[1]] = np.bool_(True)
the_answer: Scalar[int, Literal[2]] = np.uint16(42)
alpha: Scalar[float, Literal[8]] = np.float64(1 / 137)

Note

The second type argument (the itemsize) can be omitted; Scalar[PT] is equivalent to Scalar[PT, int].
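A stdlib-only sketch of the two members the Scalar interface revolves around, item() and itemsize. The protocol body and both class names below are simplified assumptions for illustration, not optype's actual definitions:

```python
from typing import Protocol, runtime_checkable


@runtime_checkable
class ScalarLike(Protocol):
    """Simplified stand-in for optype.numpy.Scalar[PT, NB]."""

    @property
    def itemsize(self) -> int: ...  # matches the NB type parameter
    def item(self) -> object: ...   # returns the "Python type" PT


class FakeFloat64:
    """Stand-in for np.float64 in this sketch: 8 bytes, item() -> float."""

    itemsize = 8

    def item(self) -> float:
        return 1.0


# Structural isinstance checks work because the protocol is runtime-checkable:
assert isinstance(FakeFloat64(), ScalarLike)
assert not isinstance(3.14, ScalarLike)  # builtins.float lacks itemsize/item
```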

UFunc

A large portion of numpy's public API consists of universal functions, often denoted as ufuncs, which are (callable) instances of np.ufunc.

Tip

Custom ufuncs can be created using np.frompyfunc, but also through a user-defined class that implements the required attributes and methods (i.e., duck typing).

But np.ufunc has a big issue: it accepts no type parameters. This makes it very difficult to properly annotate its callable signature and its literal attributes (e.g. .nin and .identity).

This is where optype.numpy.UFunc comes into play: it's a runtime-checkable generic typing protocol that has been thoroughly type- and unit-tested to ensure compatibility with all of numpy's ufunc definitions. Its generic type signature looks roughly like:

type UFunc[
    # The type of the (bound) `__call__` method.
    Fn: CanCall = CanCall,
    # The types of the `nin` and `nout` (readonly) attributes.
    # Within numpy these match either `Literal[1]` or `Literal[2]`.
    Nin: int = int,
    Nout: int = int,
    # The type of the `signature` (readonly) attribute;
    # Must be `None` unless this is a generalized ufunc (gufunc), e.g.
    # `np.matmul`.
    Sig: str | None = str | None,
    # The type of the `identity` (readonly) attribute (used in `.reduce`).
    # Unless `Nin: Literal[2]`, `Nout: Literal[1]`, and `Sig: None`,
    # this should always be `None`.
    # Note that `complex` also includes `bool | int | float`.
    Id: complex | bytes | str | None = float | None,
] = ...
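The duck-typing aspect can be sketched with the stdlib alone. UFuncLike below is a deliberately simplified stand-in for optype.numpy.UFunc, covering only nin, nout, and __call__ (the real protocol also covers signature, identity, and more); the class names are illustrative assumptions:

```python
from typing import Any, Protocol, runtime_checkable


@runtime_checkable
class UFuncLike(Protocol):
    """Simplified stand-in for optype.numpy.UFunc."""

    @property
    def nin(self) -> int: ...
    @property
    def nout(self) -> int: ...
    def __call__(self, *args: Any, **kwargs: Any) -> Any: ...


class Double:
    """A user-defined, ufunc-like callable with nin=1, nout=1."""

    nin = 1
    nout = 1

    def __call__(self, x: float) -> float:
        return 2 * x


assert isinstance(Double(), UFuncLike)  # structural match, no subclassing
assert Double()(21.0) == 42.0
```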

Note

Unfortunately, the extra callable methods of np.ufunc (at, reduce, reduceat, accumulate, and outer) are incorrectly annotated (as None attributes, even though at runtime they're methods that raise a ValueError when called). This currently makes it impossible to properly type these in optype.numpy.UFunc; doing so would make it incompatible with numpy's ufuncs.

Any*Array and Any*DType

The Any{Scalar}Array type aliases describe array-likes that are coercible to a numpy.ndarray with a specific dtype.

Unlike numpy.typing.ArrayLike, these optype.numpy aliases don't accept "bare" scalar types such as float and np.float64. However, zero-dimensional arrays like onp.Array[tuple[()], np.float64] are accepted. This is in line with the behavior of numpy.isscalar on numpy >= 2.

import numpy.typing as npt
import optype.numpy as onp

v_np: npt.ArrayLike = 3.14  # accepted
v_op: onp.AnyArray = 3.14  # rejected

sigma1_np: npt.ArrayLike = [[0, 1], [1, 0]]  # accepted
sigma1_op: onp.AnyArray = [[0, 1], [1, 0]]  # accepted

Note

The numpy.dtypes module exists since NumPy 1.25, but its type annotations were incorrect before NumPy 2.1 (see numpy/numpy#27008).

See the docs for more info on the NumPy scalar type hierarchy.

Abstract types

| numpy scalar | scalar base | array-like (optype.numpy) | dtype-like (optype.numpy) |
|---|---|---|---|
| generic | | AnyArray | AnyDType |
| number | generic | AnyNumberArray | AnyNumberDType |
| integer | number | AnyIntegerArray | AnyIntegerDType |
| inexact | number | AnyInexactArray | AnyInexactDType |
| unsignedinteger | integer | AnyUnsignedIntegerArray | AnyUnsignedIntegerDType |
| signedinteger | integer | AnySignedIntegerArray | AnySignedIntegerDType |
| floating | inexact | AnyFloatingArray | AnyFloatingDType |
| complexfloating | inexact | AnyComplexFloatingArray | AnyComplexFloatingDType |
Unsigned integers

| numpy scalar | scalar base | dtype (numpy.dtypes) | array-like (optype.numpy) | dtype-like (optype.numpy) |
|---|---|---|---|---|
| uint8, ubyte | unsignedinteger | UInt8DType | AnyUInt8Array | AnyUInt8DType |
| uint16, ushort | unsignedinteger | UInt16DType | AnyUInt16Array | AnyUInt16DType |
| uint32² | unsignedinteger | UInt32DType | AnyUInt32Array | AnyUInt32DType |
| uint64 | unsignedinteger | UInt64DType | AnyUInt64Array | AnyUInt64DType |
| uintc² | unsignedinteger | UIntDType | AnyUIntCArray | AnyUIntCDType |
| uintp, uint_³ | unsignedinteger | | AnyUIntPArray | AnyUIntPDType |
| ulong⁴ | unsignedinteger | ULongDType | AnyULongArray | AnyULongDType |
| ulonglong | unsignedinteger | ULongLongDType | AnyULongLongArray | AnyULongLongDType |
Signed integers

| numpy scalar | scalar base | dtype (numpy.dtypes) | array-like (optype.numpy) | dtype-like (optype.numpy) |
|---|---|---|---|---|
| int8 | signedinteger | Int8DType | AnyInt8Array | AnyInt8DType |
| int16 | signedinteger | Int16DType | AnyInt16Array | AnyInt16DType |
| int32² | signedinteger | Int32DType | AnyInt32Array | AnyInt32DType |
| int64 | signedinteger | Int64DType | AnyInt64Array | AnyInt64DType |
| intc² | signedinteger | IntDType | AnyIntCArray | AnyIntCDType |
| intp, int_³ | signedinteger | | AnyIntPArray | AnyIntPDType |
| long⁴ | signedinteger | LongDType | AnyLongArray | AnyLongDType |
| longlong | signedinteger | LongLongDType | AnyLongLongArray | AnyLongLongDType |
Floats

| numpy scalar | scalar base | dtype (numpy.dtypes) | array-like (optype.numpy) | dtype-like (optype.numpy) |
|---|---|---|---|---|
| float16, half | floating | Float16DType | AnyFloat16Array | AnyFloat16DType |
| float32, single | floating | Float32DType | AnyFloat32Array | AnyFloat32DType |
| float64, double | floating & builtins.float | Float64DType | AnyFloat64Array | AnyFloat64DType |
| longdouble⁵ | floating | LongDoubleDType | AnyLongDoubleArray | AnyLongDoubleDType |
Complex numbers

| numpy scalar | scalar base | dtype (numpy.dtypes) | array-like (optype.numpy) | dtype-like (optype.numpy) |
|---|---|---|---|---|
| complex64, csingle | complexfloating | Complex64DType | AnyComplex64Array | AnyComplex64DType |
| complex128, cdouble | complexfloating & builtins.complex | Complex128DType | AnyComplex128Array | AnyComplex128DType |
| clongdouble⁶ | complexfloating | CLongDoubleDType | AnyCLongDoubleArray | AnyCLongDoubleDType |
"Flexible"

Scalar types with "flexible" length, whose values have a (constant) length that depends on the specific np.dtype instantiation.

| numpy scalar | scalar base | dtype (numpy.dtypes) | array-like (optype.numpy) | dtype-like (optype.numpy) |
|---|---|---|---|---|
| bytes_ | character | BytesDType | AnyBytesArray | AnyBytesDType |
| str_ | character | StrDType | AnyStrArray | AnyStrDType |
| void | flexible | VoidDType | AnyVoidArray | AnyVoidDType |
Other types

| numpy scalar | scalar base | dtype (numpy.dtypes) | array-like (optype.numpy) | dtype-like (optype.numpy) |
|---|---|---|---|---|
| bool_⁷ | generic | BoolDType | AnyBoolArray | AnyBoolDType |
| object_ | generic | ObjectDType | AnyObjectArray | AnyObjectDType |
| datetime64 | generic | DateTime64DType | AnyDateTime64Array | AnyDateTime64DType |
| timedelta64 | generic⁸ | TimeDelta64DType | AnyTimeDelta64Array | AnyTimeDelta64DType |
| ⁹ | | StringDType | AnyStringArray | AnyStringDType |

Low-level interfaces

Within optype.numpy there are several Can* (single-method) and Has* (single-attribute) protocols, related to the __array_*__ dunders of the NumPy Python API. These typing protocols are, just like the optype.Can* and optype.Has* ones, runtime-checkable and extensible (i.e. not @final).

Tip

All type parameters of these protocols can be omitted, which is equivalent to passing their upper type bounds.

For each protocol below, the type signature is followed by the special method or attribute it implements, and a reference to the relevant NumPy documentation.
class CanArray[
    ND: tuple[int, ...] = ...,
    ST: np.generic = ...,
]: ...
def __array__[RT = ST](
    _,
    dtype: DType[RT] | None = ...,
) -> Array[ND, RT]

User Guide: Interoperability with NumPy

class CanArrayUFunc[
    U: UFunc = ...,
    R: object = ...,
]: ...
def __array_ufunc__(
    _,
    ufunc: U,
    method: LiteralString,
    *args: object,
    **kwargs: object,
) -> R: ...

NEP 13

class CanArrayFunction[
    F: CanCall[..., object] = ...,
    R = object,
]: ...
def __array_function__(
    _,
    func: F,
    types: CanIterSelf[type[CanArrayFunction]],
    args: tuple[object, ...],
    kwargs: Mapping[str, object],
) -> R: ...

NEP 18

class CanArrayFinalize[
    T: object = ...,
]: ...
def __array_finalize__(_, obj: T): ...

User Guide: Subclassing ndarray

class CanArrayWrap: ...
def __array_wrap__[ND, ST](
    _,
    array: Array[ND, ST],
    context: (...) | None = ...,
    return_scalar: bool = ...,
) -> Self | Array[ND, ST]

API: Standard array subclasses

class HasArrayInterface[
    V: Mapping[str, object] = ...,
]: ...
__array_interface__: V

API: The array interface protocol

class HasArrayPriority: ...
__array_priority__: float

API: Standard array subclasses

class HasDType[
    DT: DType = ...,
]: ...
dtype: DT

API: Specifying and constructing data types
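Because these protocols are runtime-checkable and not @final, any object with the right member matches structurally. A stdlib-only sketch of the HasDType pattern (HasDTypeLike and Column are illustrative stand-ins, not optype's actual definitions):

```python
from typing import Protocol, runtime_checkable


@runtime_checkable
class HasDTypeLike(Protocol):
    """Simplified stand-in for optype.numpy.HasDType[DT]."""

    @property
    def dtype(self) -> object: ...


class Column:
    """Any object exposing a `dtype` attribute matches structurally."""

    dtype = "float64"  # a stand-in for a real np.dtype instance


assert isinstance(Column(), HasDTypeLike)
assert not isinstance([1, 2, 3], HasDTypeLike)  # plain lists have no dtype
```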

Footnotes

  1. Since numpy>=2.2 the NDArray alias uses tuple[int, ...] as shape-type instead of Any.

  2. On unix-based platforms np.[u]intc are aliases for np.[u]int32.

  3. Since NumPy 2, np.uint and np.int_ are aliases for np.uintp and np.intp, respectively.

  4. On NumPy 1, np.uint and np.int_ correspond to what NumPy 2 calls np.ulong and np.long, respectively.

  5. Depending on the platform, np.longdouble is (almost always) an alias for either float128, float96, or (sometimes) float64.

  6. Depending on the platform, np.clongdouble is (almost always) an alias for either complex256, complex192, or (sometimes) complex128.

  7. Since NumPy 2, np.bool is preferred over np.bool_, which only exists for backwards compatibility.

  8. At runtime np.timedelta64 is a subclass of np.signedinteger, but this is currently not reflected in the type annotations.

  9. The np.dtypes.StringDType has no associated numpy scalar type; its .type attribute returns the builtins.str type instead. But from a typing perspective, such a np.dtype[builtins.str] isn't a valid type.