The io module doesn't support non-blocking files #57531

Open
sbt mannequin opened this issue Nov 2, 2011 · 48 comments
Assignees
Labels
3.11 only security fixes 3.12 bugs and security fixes 3.13 bugs and security fixes stdlib Python modules in the Lib dir topic-IO type-bug An unexpected behavior, bug, or error

Comments


sbt mannequin commented Nov 2, 2011

BPO 13322
Nosy @pitrou, @vstinner, @benjaminp, @jab, @vadmium, @jstasiak, @bharel, @izbyshev
Files
  • blockingioerror.py
  • blockingioerror.py
  • write_blockingioerror.patch
  • write_blockingioerror.patch
  • write_blockingioerror.patch
  • write_blockingioerror.patch
  • nonblock-none.patch
  Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.


    GitHub fields:

    assignee = None
    closed_at = None
    created_at = <Date 2011-11-02.15:08:55.126>
    labels = ['type-bug', 'library', '3.9']
    title = "The io module doesn't support non-blocking files"
    updated_at = <Date 2020-08-16.18:51:47.700>
    user = 'https://bugs.python.org/sbt'

    bugs.python.org fields:

    activity = <Date 2020-08-16.18:51:47.700>
    actor = 'bar.harel'
    assignee = 'docs@python'
    closed = False
    closed_date = None
    closer = None
    components = ['Library (Lib)']
    creation = <Date 2011-11-02.15:08:55.126>
    creator = 'sbt'
    dependencies = []
    files = ['23590', '23598', '23613', '23628', '23693', '23724', '37995']
    hgrepos = []
    issue_num = 13322
    keywords = ['patch']
    message_count = 43.0
    messages = ['146841', '146878', '146936', '146940', '146998', '147003', '147004', '147009', '147011', '147012', '147013', '147023', '147027', '147028', '147048', '147061', '147071', '147242', '147387', '147405', '147409', '147664', '147682', '147875', '147916', '147923', '148074', '151937', '235023', '235249', '235316', '239371', '307763', '307764', '307770', '307773', '334044', '354337', '354339', '354341', '354342', '354345', '375519']
    nosy_count = 14.0
    nosy_names = ['pitrou', 'vstinner', 'benjamin.peterson', 'stutzbach', 'jab', 'neologix', 'abacabadabacaba', 'docs@python', 'python-dev', 'sbt', 'martin.panter', 'jstasiak', 'bar.harel', 'izbyshev']
    pr_nums = []
    priority = 'normal'
    resolution = None
    stage = 'patch review'
    status = 'open'
    superseder = None
    type = 'behavior'
    url = 'https://bugs.python.org/issue13322'
    versions = ['Python 3.9']



    sbt mannequin commented Nov 2, 2011

    According to the documentation, BufferedReader.read() and BufferedWriter.write() should raise io.BlockingIOError if the file is in non-blocking mode and the operation cannot succeed without blocking.

    However, BufferedReader.read() returns None (which is what RawIOBase.read() is documented as doing), and BufferedWriter.write() raises IOError with a message like

    raw write() returned invalid length -1 (should have been between 0 and 5904)
    

    I tested this on Linux with Python 2.6, 2.7 and 3.x.

    Attached is a unit test.
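    For reference, the mismatch is easy to reproduce on POSIX with a pipe. This is a minimal sketch (not the attached unit test) showing the buffered layer returning None instead of raising io.BlockingIOError:

```python
import os

# Minimal repro sketch (POSIX only): wrap the read end of an empty pipe
# in a BufferedReader and read while the fd is in non-blocking mode.
# The docs say BlockingIOError should be raised; instead read() returns None.
r, w = os.pipe()
os.set_blocking(r, False)   # os.set_blocking() needs Python 3.5+

rf = open(r, "rb")          # buffered binary mode -> BufferedReader
print(rf.read(10))          # prints: None (no exception raised)

rf.close()
os.close(w)
```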

    @sbt sbt mannequin added the type-bug An unexpected behavior, bug, or error label Nov 2, 2011
    @pitrou pitrou added stdlib Python modules in the Lib dir topic-IO labels Nov 2, 2011

    sbt mannequin commented Nov 2, 2011

    BufferedReader.readinto() should also raise BlockingIOError according to the docs. Updated unittest checks for that also.

    BTW, the documentation for BufferedIOBase.read() says that BlockingIOError should be raised if nothing can be read in non-blocking mode. BufferedReader inherits from BufferedIOBase and overrides the read() method. This is the documentation for BufferedReader.read():

        read([n])
            Read and return n bytes, or if n is not given or negative, 
            until EOF or if the read call would block in non-blocking mode.

    This sentence is complete gobbledygook, and it makes no mention of what should happen if nothing can be read in non-blocking mode. So I presume behaviour for BufferedReader.read() should match the documented behaviour for BufferedIOBase.read().


    sbt mannequin commented Nov 3, 2011

    Weirdly, it looks like BlockingIOError is not raised anywhere in the code for the C implementation of io.

    Even more weirdly, in the Python implementation of io, BlockingIOError is only ever raised by except clauses which have already caught BlockingIOError. So, of course, these clauses are dead code.

    The only code in CPython which can ever successfully raise BlockingIOError is MockNonBlockWriterIO.write() in test/test_io.py.

    I don't know what the correct behaviour is for flush() and close() if you get EAGAIN. I think flush() should raise an error rather than blocking, and that close() should delegate to self.raw.close() before raising the error.

    The docs say that read(), readinto() and write() can raise BlockingIOError. But what should readall() and readline() do? Should we just try to emulate whatever Python's old libc IO system did (with BlockingIOError replacing IOError(EAGAIN))?


    pitrou commented Nov 3, 2011

    Weirdly, it looks like BlockingIOError is not raised anywhere in the code
    for the C implementation of io.

    That would explain why it isn't raised :)

    This is a hairy issue: read(n) is documented as returning either n bytes or nothing. But what if less than n bytes are available non-blocking? Currently we return a partial read. readline() behaviour is especially problematic:

    >>> fcntl.fcntl(r, fcntl.F_SETFL, os.O_NDELAY)
    0
    >>> rf = open(r, mode='rb')
    >>> os.write(w, b'xy')
    2
    >>> rf.read(3)
    b'xy'
    >>> os.write(w, b'xy')
    2
    >>> rf.readline()
    b'xy'

    We should probably raise BlockingIOError in these cases, but that complicates the implementation quite a bit: where do we buffer the partial data? The internal (fixed size) buffer might not be large enough.

    write() is a bit simpler, since BlockingIOError has a "characters_written" attribute which is meant to inform you of the partial success: we can just reuse that. That said, BlockingIOError could grow a "partial_read" attribute containing the read result...

    Of course, we may also question whether it's useful to use buffered I/O objects around non-blocking file descriptors; if you do non-blocking I/O, you generally want to be in control, which means not having any implicit buffer between you and the OS.

    (this may be a topic for python-dev)


    neologix mannequin commented Nov 4, 2011

    This is a hairy issue

    Indeed.

    Performing partial reads/writes may sound imperfect, but using buffered I/O around a non-blocking FD is definitely not a good idea.
    Also, the advantage of the current approach is that at least, no data is ever lost (and changing the behavior to raise a BlockingIOError might break some code out there in the wild).

    Note that Java's BufferedInputStream and ReadableByteChannel also return partial reads.

    So I'm somewhat inclined to keep the current behavior (it would however probably be a good idea to update the documentation to warn about this limitation, though).


    pitrou commented Nov 4, 2011

    Also, the advantage of the current approach is that at least, no data
    is ever lost

    But what about the buggy readline() behaviour?


    pitrou commented Nov 4, 2011

    Note that Java's BufferedInputStream and ReadableByteChannel also
    return partial reads.

    Apparently, they are specified to, even for blocking streams (which I find a bit weird, and the language in the docs seems deliberately vague). Python's buffered read(), though, is specified to return the requested number of bytes (unless EOF happens).


    sbt mannequin commented Nov 4, 2011

    No one has suggested raising BlockingIOError and DISCARDING the data when a partial read has occurred. The docs seem to imply that the partially read data should be returned since they only say that BlockingIOError should be raised if there is NOTHING to read. Clearly this should all be spelt out properly.

    That leaves the question of whether, when there is NOTHING to read, BlockingIOError should be raised (as the docs say) or None should be returned (as is done now). I don't mind either way as long as the docs match reality.

    The part which really needs addressing is partial writes. Currently, if a write fails with EAGAIN then IOError is raised and there is no way to work out how much data was written/buffered. The docs say that BlockingIOError should be raised with e.args[2] set to indicate the number of bytes written/buffered. This at least should be fixed.

    I will work on a patch.


    sbt mannequin commented Nov 4, 2011

    But what about the buggy readline() behaviour?

    Just tell people that if the return value is a string which does not end in '\n' then it might be caused by EOF or EAGAIN. They can just call readline() again to check which.
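    The suggested caller-side pattern can be sketched like this (a hedged illustration over a non-blocking pipe; not part of the thread's patches):

```python
import os

# Sketch of the suggested pattern: a readline() result without a trailing
# b'\n' may mean EOF *or* EAGAIN on a non-blocking file; calling
# readline() again once more data arrives picks up the remainder.
r, w = os.pipe()
os.set_blocking(r, False)
rf = open(r, "rb")

os.write(w, b"partial")        # no newline written yet
chunk = rf.readline()          # returns b'partial' -- looks like EOF, but isn't
assert not chunk.endswith(b"\n")

os.write(w, b" line\n")        # the rest of the line becomes available
rest = rf.readline()           # second call returns the remainder
print(chunk + rest)            # b'partial line\n'

rf.close()
os.close(w)
```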


    sbt mannequin commented Nov 4, 2011

    The third arg of BlockingIOError is used in two quite different ways.

    In write(s) it indicates the number of bytes of s which have been "consumed" (ie written to the raw file or buffered).

    But in flush() and flush_unlocked() (in _pyio) it indicates the number of bytes from the internal buffer which have been written to the raw file.

    I think this explains the following comment in write():

                # We're full, so let's pre-flush the buffer
                try:
                    self._flush_unlocked()
                except BlockingIOError as e:
                    # We can't accept anything else.
                    # XXX Why not just let the exception pass through?
                    raise BlockingIOError(e.errno, e.strerror, 0)
    

    I don't think flush() should try to tell us how many bytes were flushed: we only need to know whether we need to try again.


    neologix mannequin commented Nov 4, 2011

    Apparently, they are specified to, even for blocking streams (which
    I find a bit weird, and the language in the docs seems deliberately
    vague).

    """
    As an additional convenience, it attempts to read as many bytes as possible by repeatedly invoking the read method of the underlying stream. This iterated read continues until one of the following conditions becomes true:

    The specified number of bytes have been read,
    The read method of the underlying stream returns -1, indicating end-of-file, or
    The available method of the underlying stream returns zero, indicating that further input requests would block.
    """

    As I understand it, it will return the number of bytes asked, unless EOF or EAGAIN/EWOULDBLOCK. It would seem reasonable to me to add the same note for non-blocking FDs to Python's read().

    > But what about the buggy readline() behaviour?
    Just tell people that if the return value is a string which does not
    end in '\n' then it might be caused by EOF or EAGAIN. They can just call
    readline() again to check which.

    Sounds reasonable.

    No one has suggested raising BlockingIOError and DISCARDING the data
    when a partial read has occurred.

    The problem is that if we raise BlockingIOError, we can only buffer a limited amount of data.

    The docs seem to imply that the partially read data should be returned
    since they only say that BlockingIOError should be raised if there is
    NOTHING to read. Clearly this should all be spelt out properly.

    Agreed.

    That leaves the question of whether, when there is NOTHING to
    read, BlockingIOError should be raised (as the docs say) or None
    should be returned (as is done now).

    I don't have a strong feeling: if we don't raise BlockingIOError on partial reads, then it probably makes sense to keep None.


    sbt mannequin commented Nov 4, 2011

    Currently a BlockingIOError exception raised by flush() sets
    characters_written to the number of bytes flushed from the internal
    buffer. This is undocumented (although there is a unit test which tests
    for it) and causes confusion because characters_written has conflicting
    meanings depending on whether the exception was raised by flush() or
    write(). I would propose setting characters_written to zero on
    BlockingIOError exceptions raised by flush(). Are there any reasons not
    to make this change?

    Also, the docs say that the raw file wrapped by
    BufferedReader/BufferedWriter should implement RawIOBase. This means
    that self.raw.write() should return None instead of raising
    BlockingIOError. But the implementation tries to cope with
    BlockingIOError coming from a raw write. In fact, the
    MockNonBlockWriterIO class in unit tests is used as a raw file, but its
    write() method raises BlockingIOError.

    It would simplify matters a lot to insist that raw files implement
    RawIOBase properly.

    BTW, when I try to change characters_written of an existing
    BlockingIOError exception using the pointer returned by
    _buffered_check_blocking_error(), it appears not to work: the exception
    continues to have characters_written == 0 -- not sure why...


    pitrou commented Nov 4, 2011

    >> But what about the buggy readline() behaviour?
    > Just tell people that if the return value is a string which does not
    > end in '\n' then it might be caused by EOF or EAGAIN. They can just call
    > readline() again to check which.

    Sounds reasonable.

    But then what's the point of using buffered I/O at all? If it can't
    offer anything more than raw I/O, I'd rather do something like raise a
    RuntimeError("buffered I/O doesn't work with non-blocking streams") when
    the raw stream returns None. Returning partial results on a buffered's
    readline() is not something we should ever do.

    (actually, raw I/O readline() is probably buggy as well)


    neologix mannequin commented Nov 4, 2011

    But then what's the point of using buffered I/O at all? If it can't
    offer anything more than raw I/O, I'd rather do something like raise
    a RuntimeError("buffered I/O doesn't work with non-blocking streams")
    when the raw stream returns None.

    Well, ideally it should be an UnsupportedOperation, but that's an option. The only thing I didn't like about this is that we should ideally raise this error upon the first method call, not when - and if - we receive EAGAIN.
    Another possibility would be that, since lines are usually reasonably sized, they should fit in the buffer (which is 8KB by default). So we could do the extra effort of buffering the data and return it once the line is complete: if the buffer fills up before we got the whole line, then we could raise a RuntimeError("Partial line read"). Note that I didn't check if it's easily feasible (i.e. we should avoid introducing kludges in the I/O layer just to handle this corner case).

    Returning partial results on a buffered's readline() is not something
    we should ever do.

    Yeah, I know.
    Java made the choice of making readline() block, which is IMHO even worse (I mean, it defeats the whole point of non-blocking I/O...).


    sbt mannequin commented Nov 4, 2011

    Another possibility would be that, since lines are usually reasonably
    sized, they should fit in the buffer (which is 8KB by default). So we
    could do the extra effort of buffering the data and return it once the
    line is complete: if the buffer fills up before we got the whole line,
    then we could raise a RuntimeError("Partial line read"). Note that I
    didn't check if it's easily feasible (i.e. we should avoid introducing
    kludges in the I/O layer just to handle this corner case).

    Discarding data rarely is worse than always throwing an exception.


    sbt mannequin commented Nov 5, 2011

    The attached patch makes BufferedWrite.write() raise BlockingIOError when the raw file is non-blocking and the write would block.


    neologix mannequin commented Nov 5, 2011

    write() is a bit simpler, since BlockingIOError has
    a "characters_written" attribute which is meant to inform you of the
    partial success: we can just reuse that. That said, BlockingIOError
    could grow a "partial_read" attribute containing the read result...

    Now that I think about it, it's probably the best solution:
    always raise a BlockingIOError in case of partial write, with characters_written set correctly (sbt's patch).
    And do the same thing on partial read/readline, and return the partially read data as an attribute of BlockingIOError (we could also return a characters_read that would indicate the exact number of bytes read: then the user could call read()/read_into() with exactly characters_read).
    That could certainly break existing - sloppy - code, but this would be much more consistent than the current behavior.


    sbt mannequin commented Nov 7, 2011

    Testing the patch a bit more thoroughly, I found that data received from the readable end of the pipe can be corrupted by the C implementation. This seems to be because two of the previously dormant codepaths did not properly maintain the necessary invariants.

    I got the failures to go away by adding

        self->pos += avail;
    

    in two places. However, I really do not know what all the attributes mean. (Should self->raw_pos also be modified...?) Someone familiar with the code would need to check whether things are being done properly. This new patch adds some XXX comments in places in bufferedio.c which I am unsure about.


    pitrou commented Nov 10, 2011

    Hi,

    Testing the patch a bit more thoroughly, I found that data received
    from the readable end of the pipe can be corrupted by the C
    implementation. This seems to be because two of the previously
    dormant codepaths did not properly maintain the necessary invariants.

    Ouch. Were they only non-blocking codepaths?

    in two places. However, I really do not know what all the attributes
    mean. (Should self->raw_pos also be modified...?)

    raw_pos is the position which the underlying raw stream is currently at.
    It only needs to be modified when a successful write(), read() or seek()
    is done on the raw stream.

    Another comment: you set errno to EAGAIN, but it is not sure that was
    the actual errno raised by the raw stream (although that's quite
    likely). You might want to reflect the actual C errno (but you'd better
    set it to 0 before the system call, then).


    sbt mannequin commented Nov 10, 2011

    Ouch. Were they only non-blocking codepaths?

    Yes.

    raw_pos is the position which the underlying raw stream is currently
    at. It only needs to be modified when a successful write(), read()
    or seek() is done on the raw stream.

    Do you mean self->raw_pos should give the same answer as self.raw.tell()? (But that seems to be the definition of self->abs_pos.) Or is it the buffer offset which corresponds to self.raw.tell()?


    pitrou commented Nov 10, 2011

    Do you mean self->raw_pos should give the same answer as
    self.raw.tell()? (But that seems to be the definition of
    self->abs_pos.) Or is it the buffer offset which corresponds to
    self.raw.tell()?

    The latter.


    sbt mannequin commented Nov 15, 2011

    Here is an updated patch which uses the real errno.

    It also gets rid of the restore_pos argument of _bufferedwriter_flush_unlocked() which is always set to false --
    I guess buffered_flush_and_rewind_unlocked() is used instead.


    pitrou commented Nov 15, 2011

    Thanks again. Just a nit: the tests should be in MiscIOTest, since they don't directly instantiate the individual classes. Also, perhaps it would be nice to check that the exception's "errno" attribute is EAGAIN.


    sbt mannequin commented Nov 18, 2011

    Thanks again. Just a nit: the tests should be in MiscIOTest, since
    they don't directly instantiate the individual classes. Also, perhaps
    it would be nice to check that the exception's "errno" attribute is
    EAGAIN.

    Done.


    pitrou commented Nov 18, 2011

    Thanks. Who should I credit? "sbt"?


    sbt mannequin commented Nov 18, 2011

    Thanks. Who should I credit? "sbt"?

    Yeah, thanks.


    python-dev mannequin commented Nov 21, 2011

    New changeset ac2c4c62b486 by Antoine Pitrou in branch '3.2':
    Issue bpo-13322: Fix BufferedWriter.write() to ensure that BlockingIOError is
    http://hg.python.org/cpython/rev/ac2c4c62b486

    New changeset 3cd1985ed04f by Antoine Pitrou in branch 'default':
    Issue bpo-13322: Fix BufferedWriter.write() to ensure that BlockingIOError is
    http://hg.python.org/cpython/rev/3cd1985ed04f

    New changeset e84e17643eeb by Antoine Pitrou in branch '2.7':
    Issue bpo-13322: Fix BufferedWriter.write() to ensure that BlockingIOError is
    http://hg.python.org/cpython/rev/e84e17643eeb
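    With these changesets applied, a buffered write that cannot complete on a non-blocking file raises BlockingIOError whose characters_written attribute reports how many bytes were consumed (written to the raw file or buffered). A sketch of the fixed behaviour on a pipe (POSIX only):

```python
import os

# Post-fix behaviour sketch: writing more than a non-blocking pipe can
# absorb raises BlockingIOError, with characters_written set to the
# number of bytes actually consumed (raw-written or buffered).
r, w = os.pipe()
os.set_blocking(w, False)
wf = open(w, "wb")              # BufferedWriter

try:
    while True:                 # keep writing until the pipe fills up
        wf.write(b"x" * (1 << 20))
except BlockingIOError as e:
    print("partial write; consumed", e.characters_written, "bytes")

# Drain the pipe so close() can flush the remaining internal buffer.
os.set_blocking(r, False)
try:
    while os.read(r, 1 << 16):
        pass
except BlockingIOError:
    pass
wf.close()
os.close(r)
```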

    @izbyshev izbyshev mannequin added the 3.8 (EOL) end of life label Dec 6, 2017

    pitrou commented Dec 6, 2017

    Generally I doubt anyone is using the non-blocking semantics of the Python 3 I/O stack. People doing non-blocking I/O generally do it with sockets instead, which tend to reproduce quite literally the POSIX behaviour and error codes.
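    For contrast, a quick sketch of the socket behaviour being referred to: non-blocking sockets surface the POSIX semantics directly as BlockingIOError (EAGAIN/EWOULDBLOCK), rather than returning None the way the buffered file layer does:

```python
import errno
import socket

# Contrast sketch: recv() on a non-blocking socket with no data pending
# raises BlockingIOError instead of returning None.
a, b = socket.socketpair()
a.setblocking(False)
try:
    a.recv(1024)
except BlockingIOError as e:
    print("would block, errno:", errno.errorcode[e.errno])

a.close()
b.close()
```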

    @pitrou pitrou removed the 3.8 (EOL) end of life label Dec 6, 2017

    izbyshev mannequin commented Dec 6, 2017

    Yes, your claim is confirmed by the fact that there has been little interest in this issue since 2011. Still, non-blocking behavior is incorrectly specified in the docs and is inconsistent (as investigated by Martin). And obscure errors like in my example or in bpo-13858 show that the I/O stack is confused too. To prevent people from tripping on it, would you consider recommending against usage of the I/O stack for non-blocking operations in the io module docs?


    pitrou commented Dec 6, 2017

    Yes, I think adding a note in the docs is reasonable.


    vadmium commented Jan 19, 2019

    bpo-35762 was opened specifically about Izbyshev’s case: TextIOWrapper behaviour with a non-blocking file. Calling “os.fdopen” with mode='r' (text mode) returns a TextIOWrapper object.

    @vstinner
    Member

    I closed bpo-35762 as a duplicate of this issue: subprocess.Popen with universal_newlines and nonblocking streams fails with "can't concat NoneType to bytes".

    @vstinner
    Member

    I closed bpo-26292 as a duplicate of this issue: Raw I/O writelines() broken for non-blocking I/O.

    @vstinner
    Member

    I closed bpo-24560 as a duplicate of this issue: codecs.StreamReader doesn't work with nonblocking streams: TypeError: can't concat bytes to NoneType.

    @vstinner
    Member

    See also bpo-32561: Add API to io objects for non-blocking reads/writes.

    @vstinner vstinner added 3.9 only security fixes and removed 3.7 (EOL) end of life docs Documentation in the Doc dir topic-IO labels Oct 10, 2019
    @vstinner vstinner changed the title buffered read() and write() does not raise BlockingIOError The io module doesn't support non-blocking files Oct 10, 2019
    @vstinner
    Member

    TextIOWrapper, and maybe also BufferedReader, may raise an exception if the underlying file descriptor is configured in non-blocking mode. It may require an additional syscall to query the FD properties, which may slow down the creation of file objects in Python :-/


    bharel mannequin commented Aug 16, 2020

    I have experienced both "TypeError: can't concat NoneType to bytes" and the fact that BufferedIO returns None.

    @pitrou @izbyshev contrary to your belief, I think there is at least some interest in this issue. Every few months another ticket is opened about a different aspect of the same underlying problem.

    @ezio-melotti ezio-melotti transferred this issue from another repository Apr 10, 2022

    z764969689 commented Jun 2, 2023

    Going through the same TypeError using 3.7 today, sample code:

    with open(file_path, 'r', encoding='utf-8') as f:
        result = f.read()
    

    the file read by the process is also manipulated by another process on Linux at the same time, while the code caught only IOError. Wasn't expecting the TypeError...
    I propose raising something like IOError or BlockingIOError with more context, notifying the callers of the BufferedReader behaviour, rather than a bare TypeError. Modifications to the TextIOWrapper functions where they call self._buffer.read() should work.


    pakal commented Aug 1, 2023

    The ticket #80050 also relates to these problems.

    While updating Rsfile (https://github.com/pakal/rsfile) to Python 3.12, I got very confused too by the interactions between io and non-blocking pipes. Maybe the best approach would indeed be to prevent using the buffer/text layers of io with non-blocking pipes.

    If an additional syscall to check the status of the fileno is a performance problem, maybe this can be left as "behaviour undefined" and just advised against in the docs?

    @z764969689
    Contributor

    @pakal
    I agree that preventing use of the buffer layers for non-blocking pipes would make sense, since reading files in non-blocking mode could be unintentional in many cases. The current TypeError quite ungracefully gives no context to the callers. I opened a PR but no one has reviewed it.

    @serhiy-storchaka serhiy-storchaka added 3.11 only security fixes topic-IO 3.12 bugs and security fixes 3.13 bugs and security fixes and removed 3.9 only security fixes labels Jan 29, 2024
    @serhiy-storchaka serhiy-storchaka self-assigned this Jan 29, 2024

    u-tung commented Sep 13, 2024

    I encountered the same problem

    process = subprocess.Popen(["bash"], stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)
    os.set_blocking(process.stdout.fileno(), False)
    process.stdout.read()

    This leads to

    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "<frozen codecs>", line 321, in decode
    TypeError: can't concat NoneType to bytes
    

    It seems that this issue is more complicated than I imagined, but in any case, a read() operation should never raise a TypeError under any circumstances.

    I think the crux of the issue lies in whether TextIOWrapper is a pure wrapper or a class with wrapping capabilities.

    If it is a pure wrapper, it should only add decode/encode capabilities to the read/write functions. In other situations, it should return an empty bytes object (which decodes to an empty str) or None, or it should pass the raw buffer's return value or exception through unchanged, in order to maintain the same behavior as the raw buffer.

    If it is a class with wrapping capabilities, it should have a consistent behavior, appropriately handling the return/raise of the raw buffer at EOF to maintain consistency in behavior.

    I lean towards the class with wrapping capabilities option, as it reduces the hassle for downstream developers. However, for compatibility reasons, the pure wrapper option is also an enticing choice; it is harmless because it does not change the original behavior of TextIOWrapper. It merely chooses to return None in situations that would originally cause a TypeError. For existing code, this might just postpone the occurrence of TypeError, but it enables subsequent developers to handle situations where TypeError arises appropriately (rather than having to use try: ... except TypeError: ...). This will not affect legacy code if they originally had no issues with TypeError.
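    In the meantime, one caller-side workaround is to bypass the text layer entirely: read from the binary buffered object, where a None return means "no data available yet", and decode manually. A hedged sketch using cat (POSIX only; names illustrative):

```python
import os
import subprocess

# Workaround sketch: skip TextIOWrapper for non-blocking reads; use the
# binary pipe (no text=True), treat a None return as "nothing available
# yet", and decode the bytes yourself.
proc = subprocess.Popen(["cat"], stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE)   # binary mode
os.set_blocking(proc.stdout.fileno(), False)

data = proc.stdout.read()                         # None, not a TypeError
text = data.decode() if data is not None else ""
print(repr(text))                                 # ''

proc.stdin.write(b"hello\n")
proc.stdin.close()                                # cat echoes, then exits
proc.wait()
print(proc.stdout.read())                         # b'hello\n' once available
proc.stdout.close()
```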

    @cmaloney
    Contributor

    #122933 (to fix gh-109523) adds more documentation around the behavior and raises a BlockingIOError rather than a TypeError for TextIO. Discussion thread around the change: https://discuss.python.org/t/handling-sys-stdin-read-in-non-blocking-mode/59633. That should help more cases here (open().read(), which opens in buffered text mode by default, will raise a clearer exception).
