*** zstd command line interface 64-bits v1.5.3, by Yann Collet ***

Usage: zstd [OPTION]... [FILE]... [-o file]
Compress or uncompress FILEs (with no FILE or when FILE is `-`, read from standard input).

  -o file          result stored into `file` (only 1 output file)
  -1 .. -19        compression level (faster .. better; default: 3)
  -d, --decompress decompression
  -f, --force      disable input and output checks. Allows overwriting existing files,
                   input from console, output to stdout, operating on links, block
                   devices, etc. During decompression and when the output destination
                   is stdout, pass-through unrecognized formats as-is.
  --rm             remove source file(s) after successful de/compression
  -k, --keep       preserve source file(s) (default)
  -D DICT          use DICT as Dictionary for compression or decompression
  -h               display usage and exit
  -H, --help       display long help and exit

Advanced options :
  -V, --version    display Version number and exit
  -c, --stdout     write to standard output (even if it is the console), keep original file
  -v, --verbose    verbose mode; specify multiple times to increase verbosity
  -q, --quiet      suppress warnings; specify twice to suppress errors too
  --[no-]progress  forcibly display, or never display the progress counter
                   note: any (de)compressed output to terminal will mix with progress counter text
  -r               operate recursively on directories
  --filelist FILE  read list of files to operate upon from FILE
  --output-dir-flat DIR : processed files are stored into DIR
  --output-dir-mirror DIR : processed files are stored into DIR respecting original directory structure
  --[no-]asyncio   use asynchronous IO (default: enabled)
  --[no-]check     during compression, add XXH64 integrity checksum to frame (default: enabled)
                   if specified with -d, decompressor will ignore/validate checksums in compressed frame (default: validate)
  --trace FILE     log tracing information to FILE
  --               all arguments after "--" are treated as files

Advanced compression options :
  --ultra          enable levels beyond 19, up to 22 (requires more memory)
  --fast[=#]       switch to very fast compression levels (default: 1)
  --long[=#]       enable long distance matching with given window log (default: 27)
  --patch-from=FILE : specify the file to be used as a reference point for zstd's diff engine
  --adapt          dynamically adapt compression level to I/O conditions
  -T#              spawn # compression threads (default: 1, 0==# cores)
  -B#              select size of each job (default: 0==automatic)
  --single-thread  use a single thread for both I/O and compression (result slightly different than -T1)
  --auto-threads={physical,logical} : use either physical cores or logical cores as default when specifying -T0 (default: physical)
  --rsyncable      compress using a rsync-friendly method (-B sets block size)
  --exclude-compressed : only compress files that are not already compressed
  --stream-size=#  specify size of streaming input from `stdin`
  --size-hint=#    optimize compression parameters for streaming input of approximately this size
  --target-compressed-block-size=# : generate compressed block of approximately targeted size
  --no-dictID      don't write dictID into header (dictionary compression only)
  --[no-]compress-literals : force (un)compressed literals
  --[no-]row-match-finder : force enable/disable usage of fast row-based matchfinder for greedy, lazy, and lazy2 strategies
  --format=zstd    compress files to the .zst format (default)
  --format=gzip    compress files to the .gz format

Advanced decompression options :
  -l               print information about zstd compressed files
  --test           test compressed file integrity
  -M#              Set a memory usage limit for decompression
  --[no-]sparse    sparse mode (default: disabled)
  --[no-]pass-through : passes through uncompressed files as-is (default: disabled)

Dictionary builder :
  --train ##       create a dictionary from a training set of files
  --train-cover[=k=#,d=#,steps=#,split=#,shrink[=#]] : use the cover algorithm with optional args
  --train-fastcover[=k=#,d=#,f=#,steps=#,split=#,accel=#,shrink[=#]] : use the fast cover algorithm with optional args
  --train-legacy[=s=#] : use the legacy algorithm with selectivity (default: 9)
  -o DICT          DICT is dictionary name (default: dictionary)
  --maxdict=#      limit dictionary to specified size (default: 112640)
  --dictID=#       force dictionary ID to specified value (default: random)

Benchmark options :
  -b#              benchmark file(s), using # compression level (default: 3)
  -e#              test all compression levels successively from -b# to -e# (default: 1)
  -i#              minimum evaluation time in seconds (default: 3s)
  -B#              cut file into independent chunks of size # (default: no chunking)
  -S               output one benchmark result per input file (default: consolidated result)
  --priority=rt    set process priority to real-time
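For reference, a typical session combining the options above might look like the following. This is a sketch, not part of the official help text; it assumes a zstd v1.5.x binary on PATH, an existing `file.txt`, and a `samples/` directory of training files (both names are placeholders).

```shell
# Compress at level 19 with 4 worker threads; the source file is kept
# (default) and file.txt.zst is written alongside it.
zstd -19 -T4 file.txt

# Decompress to stdout without modifying the archive.
zstd -d -c file.txt.zst > file.copy.txt

# Verify the integrity of the compressed frame without writing output.
zstd --test file.txt.zst

# Train a dictionary from sample files, then use it for both directions.
zstd --train samples/* -o dict
zstd -D dict file.txt -o file.small.zst
zstd -D dict -d file.small.zst -o file.roundtrip.txt
```

Dictionaries mainly help when compressing many small, similar files; for a single large file the plain `-19 -T4` invocation is usually the better starting point.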