(Please correct me if I'm wrong with any of the below. I love this library, but I'll cut to it...)
`decompress` currently crashes the process with exit code 137 if the sum of the input file and its decompressed contents exceeds the available memory of the system. This is because the entire input file is loaded into memory and all extracted files are also retained in memory.
As a result, you can only decompress archives where the file itself, summed with its decompressed contents, is smaller than your system's available memory (`Buffer`s are allocated from system memory, not from the V8 heap, so they aren't limited by `--max-old-space-size`, but they are of course still limited by the amount of memory the system has).
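For illustration, here's the typical buffered usage (the file names are hypothetical); every extracted entry comes back with its full contents as a `Buffer`, on top of the input archive that was read up front:

```js
const decompress = require('decompress');

// The archive is read fully into memory, and each extracted entry's
// contents is kept as a Buffer on the result objects, so peak memory
// is roughly: archive size + total extracted size.
decompress('large-archive.zip', 'dist').then(files => {
  // files[i].data is a Buffer holding the entire entry's contents
  console.log(files.map(f => `${f.path}: ${f.data.length} bytes`));
});
```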
This is fine on development machines or very large servers, but even on a `medium` EC2 instance with 1.5GB of available memory, you're limited to extracting files under 1.5GB (including their extracted content). On a `nano` instance you can only decompress files under ~30MB (depending on the compression ratio) before you get a 137 exit code 😅
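You can reproduce the mechanism in isolation with a few lines of Node (a minimal sketch; the chunk size is arbitrary). `Buffer` allocations sail straight past the V8 heap cap, and the process only dies when the OS OOM killer sends `SIGKILL`, which surfaces as exit code 137 (128 + 9):

```js
// Run with: node --max-old-space-size=100 oom-demo.js
// Despite the 100 MB heap cap, Buffer allocations keep succeeding because
// they live outside the V8 heap; the OS eventually OOM-kills the process.
const chunks = [];
let totalMb = 0;

setInterval(() => {
  chunks.push(Buffer.alloc(100 * 1024 * 1024)); // 100 MB, zero-filled so it's really committed
  totalMb += 100;
  const heapMb = (process.memoryUsage().heapUsed / 1024 / 1024).toFixed(1);
  console.log(`buffers: ${totalMb} MB, V8 heapUsed: ${heapMb} MB`);
}, 100);
```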
- breathes -
Do you see a future where `decompress` uses streams instead? (For backward compatibility you could still buffer everything from the streams by default -- but at least users would have the option to opt out of the "buffer everything in memory" behaviour if needed.)
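To be clear, this isn't `decompress`'s API today, just a sketch of the direction: Node core can already decompress a gzip stream in constant memory, so a stream-based mode could keep memory flat regardless of archive size (file names are hypothetical; zip/tar would need a streaming parser on top, but the principle is the same):

```js
const fs = require('fs');
const zlib = require('zlib');
const { pipeline } = require('stream');

// Data flows through in small chunks, so peak memory is independent of
// the file size -- nothing is ever held in memory all at once.
pipeline(
  fs.createReadStream('huge-file.gz'),
  zlib.createGunzip(),
  fs.createWriteStream('huge-file'),
  err => {
    if (err) console.error('extraction failed:', err);
    else console.log('done, without buffering the whole file');
  }
);
```

A buffered mode could then be layered on top (collect the stream's chunks into a `Buffer`) for callers who want the current behaviour.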