
Complete the tokenize module type hints #984


Merged · 4 commits · Mar 15, 2017
47 changes: 30 additions & 17 deletions stdlib/3/tokenize.pyi
@@ -2,34 +2,47 @@
 #
 # NOTE: This dynamically typed stub was automatically generated by stubgen.

-from typing import Any, Union, TextIO
+from typing import Any, Callable, Generator, Iterable, List, NamedTuple, Optional, Union, Sequence, TextIO, Tuple
 from builtins import open as _builtin_open
 from token import *  # noqa: F403

-COMMENT = ...  # type: Any
-NL = ...  # type: Any
-ENCODING = ...  # type: Any
+COMMENT = ...  # type: int
+NL = ...  # type: int
+ENCODING = ...  # type: int

-class TokenInfo:
+_Position = Tuple[int, int]
+
+_TokenInfo = NamedTuple('TokenInfo', [
+    ('type', int),
+    ('string', str),
+    ('start', _Position),
+    ('end', _Position),
+    ('line', str)
+])
+
+class TokenInfo(_TokenInfo):
     @property
-    def exact_type(self): ...
+    def exact_type(self) -> int: ...
+
+# Backwards compatible tokens can be sequences of a shorter length too
+_Token = Union[TokenInfo, Sequence[Union[int, str, _Position]]]

 class TokenError(Exception): ...
 class StopTokenizing(Exception): ...

 class Untokenizer:
-    tokens = ...  # type: Any
-    prev_row = ...  # type: Any
-    prev_col = ...  # type: Any
-    encoding = ...  # type: Any
-    def __init__(self): ...
-    def add_whitespace(self, start): ...
-    def untokenize(self, iterable): ...
-    def compat(self, token, iterable): ...
+    tokens = ...  # type: List[str]
+    prev_row = ...  # type: int
+    prev_col = ...  # type: int
+    encoding = ...  # type: Optional[str]
+    def __init__(self) -> None: ...
+    def add_whitespace(self, start: _Position) -> None: ...
+    def untokenize(self, iterable: Iterable[_Token]) -> str: ...
+    def compat(self, token: Sequence[Union[int, str]], iterable: Iterable[_Token]) -> None: ...

-def untokenize(iterable): ...
-def detect_encoding(readline): ...
-def tokenize(readline): ...
+def untokenize(iterable: Iterable[_Token]) -> Any: ...
+def detect_encoding(readline: Callable[[], bytes]) -> Tuple[str, Sequence[bytes]]: ...
+def tokenize(readline: Callable[[], bytes]) -> Generator[TokenInfo, None, None]: ...
Contributor


@mjpieters This is fine but in the future, if you're type hinting the return value of a simple generator, simply use -> Iterator[SomeType].
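The reviewer's tip can be illustrated with a minimal sketch (the `squares_*` functions are made up for illustration, not from the PR):

```python
from typing import Generator, Iterator

def squares_full(n: int) -> Generator[int, None, None]:
    # Full spelling: Generator[YieldType, SendType, ReturnType].
    for i in range(n):
        yield i * i

def squares_short(n: int) -> Iterator[int]:
    # Equivalent, shorter spelling when the send and return
    # types are unused (the common case for simple generators).
    for i in range(n):
        yield i * i

print(list(squares_full(4)))   # [0, 1, 4, 9]
print(list(squares_short(4)))  # [0, 1, 4, 9]
```

Both annotations type-check identically for plain `yield`-only generators; `Iterator[int]` just avoids spelling out the two `None` parameters.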


def open(filename: Union[str, bytes, int]) -> TextIO: ...
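The signatures in the stub above can be exercised against the stdlib `tokenize` module at runtime; a small sketch (the `b"x = 1\n"` sample source is made up for illustration):

```python
import io
import tokenize

source = b"x = 1\n"

# tokenize() takes a zero-argument readline callable returning bytes,
# matching Callable[[], bytes] in the stub.
tokens = list(tokenize.tokenize(io.BytesIO(source).readline))
first = tokens[0]
print(first.type == tokenize.ENCODING)  # True
print(first.string)                     # utf-8

# detect_encoding() returns (encoding, the lines already consumed).
encoding, consumed = tokenize.detect_encoding(io.BytesIO(source).readline)
print(encoding)  # utf-8

# untokenize() also accepts shorter (type, string) pairs, which is why
# the stub introduces the _Token union; when an ENCODING token is
# present the result comes back as bytes rather than str, hence the
# -> Any return annotation.
pairs = [(tok.type, tok.string) for tok in tokens]
print(type(tokenize.untokenize(pairs)))  # <class 'bytes'>
```

The `-> Any` on `untokenize` reflects a real quirk: the return type is `str` or `bytes` depending on whether the token stream carried an `ENCODING` token.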
