Hey! 👋
I've been trying out `imagecodecs` with the 3.13 free-threaded build. Everything seems to work as expected (most of the C heavy lifting is done by Cython), except for one thing.

Running `test_compressors` fails when testing the `lz4h5` codec. I first encountered this on my PC, but only when enough other programs were open that I was short on RAM. In that case a `SIGKILL` would kill the test process.

I then started testing in a Docker container. The failure can be reproduced fairly consistently on a Linux aarch64 container with its virtual memory capped at roughly 6 GB via `ulimit -v 6000000`. Under that limit, running `pytest` fails with a `MemoryError` and CPython emits a warning. The failure only happens with the free-threaded build when the GIL is actually disabled, which points to an upstream CPython bug.
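For reference, the memory cap can also be set from inside the process, which makes the repro easier to script than the Docker setup. A minimal sketch, assuming a Linux host, the free-threaded 3.13 build, and a ~1 GiB payload (the payload size is my guess, based on the allocation discussed below):

```python
# Sketch of the repro without Docker. Assumptions: Linux, python3.13t,
# and that a ~1 GiB input triggers the same allocation as the test.
import resource

import imagecodecs
import numpy

# Cap the virtual address space like `ulimit -v 6000000` does.
# Note: ulimit -v takes KiB, while RLIMIT_AS is in bytes.
limit = 6_000_000 * 1024
resource.setrlimit(resource.RLIMIT_AS, (limit, limit))

data = numpy.zeros(1 << 30, dtype=numpy.uint8)  # ~1 GiB of input
# Expected to raise MemoryError under the cap on the free-threaded build.
encoded = imagecodecs.lz4h5_encode(data)
```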
However, I still did a deep dive, and it turns out that `lz4h5_encode` does ask for a lot of memory: 1077952680 bytes, to be exact. It calls `PyBytes_FromStringAndSize` with a value on the order of the size returned by `LZ4_compressBound`.
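For context on the number: `LZ4_compressBound` is defined in `lz4.h` as `isize + isize/255 + 16`, so the figure matches the worst-case bound for a roughly 1 GiB input (the exact input size is my assumption; `lz4h5` framing presumably accounts for the remaining few dozen bytes):

```python
# Back-of-the-envelope check of where the ~1.08 GB allocation comes from.
def lz4_compress_bound(isize: int) -> int:
    # LZ4_COMPRESSBOUND from lz4.h: isize + isize/255 + 16
    return isize + isize // 255 + 16

print(lz4_compress_bound(1 << 30))  # 1077952592, within ~100 bytes of 1077952680
```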
My question is: this looks like a CPython bug, probably related to the GC, but allocating over 1 GB of RAM up front still seems excessive. Is this expected? Can it be reduced somehow, or is this the best we can do?
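Not an answer to the CPython side of the question, but if the goal is just to keep the peak allocation small, one possible workaround is to encode the payload in pieces, so that each call's worst-case bound is proportional to the piece size. `encode_chunked` below is a hypothetical helper, and it produces a list of independent `lz4h5` streams rather than one, so it only works where the consumer can decode them separately:

```python
# Hypothetical workaround sketch: encode in smaller pieces so each call's
# worst-case allocation stays small. This changes what is stored -- each
# piece must be decoded separately -- so it is not a drop-in replacement.
import imagecodecs

def encode_chunked(data, chunk_size=64 << 20):
    # Worst-case allocation per call is now LZ4_compressBound(chunk_size),
    # ~67 MB for 64 MiB chunks, instead of ~1.08 GB for the whole buffer.
    return [
        imagecodecs.lz4h5_encode(data[i:i + chunk_size])
        for i in range(0, len(data), chunk_size)
    ]
```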
Full test log