- TurboPFor: The new synonym for "integer compression"
- 100% C (C++ headers), as simple as memcpy
- Java Critical Natives/JNI: access TurboPFor (incl. SIMD/AVX2) from Java as fast as calling from C
- Full range 8/16/32/64-bit scalar + 16/32/64-bit SIMD functions
- No other "integer compression" library compresses/decompresses faster
- Direct Access, integrated (SIMD/AVX2) FOR/Delta/Delta of delta/Zigzag for sorted/unsorted arrays
- 16-bit + 64-bit SIMD integrated functions
- For/PFor/PForDelta
- Novel TurboPFor (PFor/PForDelta) scheme w/ direct access + SIMD/AVX2 + RLE
- Outstanding compression/speed. More efficient than ANY other fast "integer compression" scheme.
- Compresses 70 times faster and decompresses up to 4 times faster than OptPFD
- Bit Packing
- Fastest and most efficient "SIMD Bit Packing": 10 billion integers/sec (40 GB/s)
- Scalar "Bit Packing" decoding nearly as fast as SIMD-Packing in realistic (No "pure cache") scenarios
- Direct/Random Access: access any single bit-packed entry with zero decompression
- Variable byte
- Scalar "Variable Byte" faster than ANY other (incl. SIMD) implementation
- Simple family
- Novel "Variable Simple" (incl. RLE) faster and more efficient than simple16, simple-8b
- Elias Fano
- Fastest "Elias Fano" implementation w/ or w/o SIMD/AVX2
- Transform
- Scalar & SIMD Transform: Delta, Zigzag, Zigzag of delta, XOR, Transpose/Shuffle
- Lossy floating-point compression with TurboPFor or TurboTranspose+lz77
- Floating Point Compression
- Delta/Zigzag + improved gorilla style + (Differential) Finite Context Method FCM/DFCM floating point compression
- Using TurboPFor, unsurpassed compression and more than 5 GB/s throughput
- Error-bound lossy floating-point compression
- Time Series Compression
- Fastest Gorilla-style 16/32/64-bit compression (zigzag of delta + RLE).
- Can compress time series to only 0.01% of the original size, at > 10 GB/s compression and > 13 GB/s decompression.
- Inverted Index ...do less, go fast!
- Direct Access to compressed frequency and position data w/ zero decompression
- Novel "Intersection w/ skip intervals", decompress the minimum necessary blocks (~10-15%)!.
- Novel Implicit skips with zero extra overhead
- Novel Efficient Bidirectional Inverted Index Architecture (forward/backwards traversal) incl. "integer compression".
- More than 2000 queries per second on the GOV2 dataset (25 million documents) on a SINGLE core
- Revolutionary parallel query processing on multicores: > 7000 queries/sec on a simple quad-core PC.
...forget MapReduce, Hadoop, multi-node clusters, ...
- Download IcApp, a new benchmark for TurboPFor, for testing almost all integer and floating-point file types.
- Practical (no pure cache) "integer compression" benchmark w/ large arrays.
- CPU: Skylake i7-6700 3.4GHz gcc 7.2 single thread
-
Generate and test a (zipfian) skewed distribution (100,000,000 integers, block size = 128/256)
Note: Unlike general-purpose compression, "integer compression" generally uses a small fixed block size (e.g. 128 integers). Large blocks involved in query processing (inverted indexes, search engines, databases, graphs, in-memory computing, ...) would need to be decoded entirely.
./icbench -a1.5 -m0 -M255 -n100M ZIPF
C Size | ratio% | Bits/Integer | C MB/s | D MB/s | Name |
---|---|---|---|---|---|
62,939,886 | 15.7 | 5.04 | 1588 | 9400 | TurboPFor256 |
63,392,759 | 15.8 | 5.07 | 1320 | 6432 | TurboPFor |
63,392,801 | 15.8 | 5.07 | 1328 | 924 | TurboPForDA |
65,060,504 | 16.3 | 5.20 | 60 | 2748 | FP_SIMDOptPFor |
65,359,916 | 16.3 | 5.23 | 32 | 2436 | PC_OptPFD |
73,477,088 | 18.4 | 5.88 | 408 | 2484 | PC_Simple16 |
73,481,096 | 18.4 | 5.88 | 624 | 8748 | FP_SimdFastPFor 64Ki * |
76,345,136 | 19.1 | 6.11 | 980 | 2612 | VSimple |
91,947,533 | 23.0 | 7.36 | 284 | 11737 | QMX 64k * |
93,285,864 | 23.3 | 7.46 | 1568 | 10232 | FP_GroupSimple 64Ki * |
95,915,096 | 24.0 | 7.67 | 848 | 3832 | Simple-8b |
99,910,930 | 25.0 | 7.99 | 13976 | 11872 | TurboPackV |
99,910,930 | 25.0 | 7.99 | 9468 | 9404 | TurboPack |
99,910,930 | 25.0 | 7.99 | 8420 | 8876 | TurboFor |
100,332,929 | 25.1 | 8.03 | 14320 | 12124 | TurboPack256V |
101,015,650 | 25.3 | 8.08 | 9520 | 9484 | TurboVByte |
102,074,663 | 25.5 | 8.17 | 5712 | 7916 | MaskedVByte |
102,074,663 | 25.5 | 8.17 | 2260 | 4208 | PC_Vbyte |
102,083,036 | 25.5 | 8.17 | 5200 | 4268 | FP_VByte |
112,500,000 | 28.1 | 9.00 | 1528 | 12140 | VarintG8IU |
125,000,000 | 31.2 | 10.00 | 4788 | 11288 | StreamVbyte |
400,000,000 | 100.00 | 32.00 | 8960 | 8948 | Copy |
N/A | N/A | N/A | N/A | N/A | EliasFano |
(*) codecs inefficient for small block sizes are tested with 64Ki integers/block.
- MB/s: 1,000,000 bytes/second. 1000 MB/s = 1 GB/s
- BOLD = Pareto frontier.
- FP = FastPFor, SC = simdcomp, PC = Polycom
- TurboPForDA, TurboForDA: direct access is normally used when accessing only a few individual values.
- Elias Fano can be used directly only for increasing sequences
-
gov2.sorted from the DocId data set. Block size = 128 / delta coding
./icbench -fS -r gov2.sorted
Size | Ratio % | Bits/Integer | C Time MB/s | D Time MB/s | Function |
---|---|---|---|---|---|
3,321,663,893 | 13.9 | 4.44 | 1320 | 6088 | TurboPFor |
3,339,730,557 | 14.0 | 4.47 | 32 | 2144 | PC_OptPFD |
3,350,717,959 | 14.0 | 4.48 | 1536 | 7128 | TurboPFor256 |
3,501,671,314 | 14.6 | 4.68 | 56 | 2840 | VSimple |
3,768,146,467 | 15.8 | 5.04 | 3228 | 3652 | EliasFanoV |
3,822,161,885 | 16.0 | 5.11 | 572 | 2444 | PC_Simple16 |
4,521,326,518 | 18.9 | 6.05 | 836 | 3296 | Simple-8b |
4,649,671,427 | 19.4 | 6.22 | 3084 | 3848 | TurboVbyte |
4,955,740,045 | 20.7 | 6.63 | 7064 | 10268 | TurboPackV |
4,955,740,045 | 20.7 | 6.63 | 5724 | 8020 | TurboPack |
5,205,324,760 | 21.8 | 6.96 | 6952 | 9488 | SC_SIMDPack128 |
5,393,769,503 | 22.5 | 7.21 | 9912 | 11588 | TurboPackV256 |
6,221,886,390 | 26.0 | 8.32 | 6668 | 6952 | TurboFor |
6,221,886,390 | 26.0 | 8.32 | 6644 | 2260 | TurboForDA |
6,699,519,000 | 28.0 | 8.96 | 1888 | 1980 | FP_Vbyte |
6,700,989,563 | 28.0 | 8.96 | 2740 | 3384 | MaskedVByte |
7,622,896,878 | 31.9 | 10.20 | 836 | 4792 | VarintG8IU |
8,060,125,035 | 33.7 | 11.50 | 3536 | 8684 | Streamvbyte |
8,594,342,216 | 35.9 | 11.50 | 5228 | 6376 | libfor |
23,918,861,764 | 100.0 | 32.00 | 5824 | 5924 | Copy |
Block size: 64Ki integers = 256KB (Ki = 1024 integers)
Size | Ratio % | Bits/Integer | C Time MB/s | D Time MB/s | Function |
---|---|---|---|---|---|
3,164,940,562 | 13.2 | 4.23 | 1344 | 6004 | TurboPFor 64Ki |
3,273,213,464 | 13.7 | 4.38 | 1496 | 7008 | TurboPFor256 64Ki |
3,965,982,954 | 16.6 | 5.30 | 1520 | 2452 | lz4+DT 64Ki |
4,234,154,427 | 17.7 | 5.66 | 436 | 5672 | qmx 64Ki |
6,074,995,117 | 25.4 | 8.13 | 1976 | 2916 | blosc_lz4 64Ki |
8,773,150,644 | 36.7 | 11.74 | 2548 | 5204 | blosc_lz 64Ki |
"lz4+DT 64Ki" = Delta+Transpose from TurboPFor + lz4
"blosc_lz4" internal lz4 compressor+vectorized shuffle
-
Test file Timestamps: ts.txt(sorted)
./icapp -Ft ts.txt -I15 -J15
Function | C MB/s | size | ratio% | D MB/s | Text |
---|---|---|---|---|---|
bvzenc32 | 10632 | 45,909 | 0.008 | 12823 | ZigZag |
bvzzenc32 | 8914 | 56,713 | 0.010 | 13499 | ZigZag Delta of delta |
vsenc32 | 12294 | 140,400 | 0.024 | 12877 | Variable Simple |
p4nzenc256v32 | 1932 | 596,018 | 0.10 | 13326 | TurboPFor256 ZigZag |
p4ndenc256v32 | 1961 | 596,018 | 0.10 | 13339 | TurboPFor256 Delta |
bitndpack256v32 | 12564 | 909,189 | 0.16 | 13505 | TurboPackV256 Delta |
p4nzenc32 | 1810 | 1,159,633 | 0.20 | 8502 | TurboPFor ZigZag |
p4nzenc128v32 | 1795 | 1,159,633 | 0.20 | 13338 | TurboPFor ZigZag |
bitnzpack256v32 | 9651 | 1,254,757 | 0.22 | 13503 | TurboPackV256 ZigZag |
bitnzpack128v32 | 10155 | 1,472,804 | 0.26 | 13380 | TurboPackV ZigZag |
vbddenc32 | 6198 | 18,057,296 | 3.13 | 10982 | TurboVByte Delta of delta |
memcpy | 13397 | 577,141,992 | 100.00 | | |
./icbench -eTRANSFORM ZIPF
Size | C Time MB/s | D Time MB/s | Function |
---|---|---|---|
100,000,000 | 9400 | 9132 | TPbyte 4 TurboPFor Byte Transpose/shuffle AVX2 |
100,000,000 | 8784 | 8860 | TPbyte 4 TurboPFor Byte Transpose/shuffle SSE |
100,000,000 | 7688 | 7656 | Blosc_Shuffle AVX2 |
100,000,000 | 5204 | 7460 | TPnibble 4 TurboPFor Nibble Transpose/shuffle SSE |
100,000,000 | 6620 | 6284 | Blosc shuffle SSE |
100,000,000 | 3156 | 3372 | Bitshuffle AVX2 |
100,000,000 | 2100 | 2176 | Bitshuffle SSE |
./icapp -Fd file " 64 bits floating point raw file
./icapp -Ff file " 32 bits floating point raw file
./icapp -Fcf file " text file with multiple entries (ex. 8.657,56.8,4.5 ...)
./icapp -Ftf file " text file (1 entry/line)
./icapp -Ftf file -v5 " + display the first entries read
./icapp -Ftf file.csv -K3 " use the 3rd column in a csv file (ex. number,Text,456.5 -> 456.5)
./icapp -Ftf file -g.001 " lossy compression with allowed error 0.001
- see also TurboTranspose
GOV2: 426 GB, 25 million documents, average doc. size = 18k.
-
AOL query log: 18,000 queries
~1300 queries per second (single core)
~5000 queries per second (quad core)
Ratio = 14.37% Decoded/Total Integers.
-
TREC Million Query Track (1MQT):
~1100 queries per second (Single core)
~4500 queries per second (Quad core CPU)
Ratio = 11.59% Decoded/Total Integers.
- Benchmarking intersections (Single core, AOL query log)
max.docid/q | Time s | q/s | ms/q | % docid found |
---|---|---|---|---|
1,000 | 7.88 | 2283.1 | 0.438 | 81 |
10,000 | 10.54 | 1708.5 | 0.585 | 84 |
ALL | 13.96 | 1289.0 | 0.776 | 100 |
q/s: queries/second, ms/q: milliseconds/query
- Benchmarking Parallel Query Processing (Quad core, AOL query log)
max.docid/q | Time s | q/s | ms/q | % docids found |
---|---|---|---|---|
1,000 | 2.66 | 6772.6 | 0.148 | 81 |
10,000 | 3.39 | 5307.5 | 0.188 | 84 |
ALL | 3.57 | 5036.5 | 0.199 | 100 |
- Search engines spend ~90% of query processing time in intersections.
- Most search engines use pruning strategies, caching of popular queries, etc., to reduce the time spent on intersections and query processing.
- As an indication, Google processes ~40,000 queries per second using ~900,000 multicore servers to search 8 billion web pages (320 × the size of GOV2).
- Recent "integer compression" GOV2 experiments ("On Inverted Index Compression for Search Engine Efficiency", best paper at ECIR 2014), run on an 8-core Xeon PC, report 1.2 seconds per query (for the top 1,000 docids).
Download or clone TurboPFor
git clone git://github.com/powturbo/TurboPFor.git
cd TurboPFor
To benchmark external libraries:
git clone --recursive git://github.com/powturbo/TurboPFor.git
cd TurboPFor
make
or
make AVX2=1
Include external libs
make CODEC1=1 CODEC2=1
Disable SIMD
make NSIMD=1
Windows (Visual C++):
nmake /f makefile.vs
-
benchmark groups of "integer compression" functions
./icbench -eBENCH -a1.2 -m0 -M255 -n100M ZIPF
./icbench -eBITPACK/VBYTE -a1.2 -m0 -M255 -n100M ZIPF
Type "icbench -l1" for a list
-a1.2: zipfian distribution with alpha = 1.2 (ex. -a1.0 = uniform, -a1.5 = skewed distribution)
-n100M: number of integers = 100,000,000
-m0 -M255: integer range from 0 to 255
-
Unsorted lists: individual function test (ex. Copy TurboPack TurboPFor)
./icbench -a1.5 -m0 -M255 -ecopy/turbopack/turbopfor/turbopack256v ZIPF
-
Unsorted lists: Zigzag encoding w/ option -fz or FOR encoding
./icbench -fz -eturbovbyte/turbopfor/turbopackv ZIPF
./icbench -eturboforv ZIPF
-
Sorted lists: differential coding w/ option -fs (increasing) or -fS (strictly increasing)
./icbench -fs -eturbopack/turbopfor/turbopfor256v ZIPF
-
Generate interactive "file.html" plot for browsing
./icbench -p2 -S2 -Q3 file.tbb
-
Unit test: test functions from bit size 0 to 32
./icbench -m0 -M32 -eturbopfor
./icbench -m0 -M8 -eturbopack -fs -n1M
-
Raw 32-bit binary data file (Test data)
./icbench file
./icapp file
./icapp -Fs file "16 bits binary file
./icapp -Fu file "32 bits binary file
./icapp -Fl file "64 bits binary file
./icapp -Ff file "32 bits floating point binary file
./icapp -Fd file "64 bits floating point binary file
-
Text file: 1 entry per line. Test data: ts.txt (sorted) and lat.txt (unsorted)
./icbench -eBENCH -fts ts.txt
./icbench -eBENCH -ft lat.txt
./icapp -Fts data.txt "text file, one 16 bits integer per line
./icapp -Ftu ts.txt "text file, one 32 bits integer per line
./icapp -Ftl ts.txt "text file, one 64 bits integer per line
./icapp -Ftf file "text file, one 32 bits floating point (ex. 8.32456) per line
./icapp -Ftd file "text file, one 64 bits floating point (ex. 8.324567789) per line
./icapp -Ftd file -v5 "like prev., display the first 100 values read
./icapp -Ftd file -v5 -g.00001 "like prev., error bound lossy floating point compression
./icapp -Ftt file "text file, timestamp in seconds iso-8601 -> 32 bits integer (ex. 2018-03-12T04:31:06)
./icapp -FtT file "text file, timestamp in milliseconds iso-8601 -> 64 bits integer (ex. 2018-03-12T04:31:06.345)
./icapp -Ftl -D2 -H file "skip 1st line, convert numbers with 2 decimal digits to 64 bits integers (ex. 456.23 -> 45623)
./icapp -Ftl -D2 -H -K3 file.csv "like prev., use the 3rd number in the line (ex. label=3245, text=99 usage=456.23 -> 456.23)
./icapp -Ftl -D2 -H -K3 -k| file.csv "like prev., use '|' as separator
-
Text file: multiple numbers separated by any character that is not a digit (0..9), '-' or '.' (ex. 134534,-45678,98788,4345)
./icapp -Fc data.txt "text file, 32 bits integers (ex. 56789,3245,23,678)
./icapp -Fcd data.txt "text file, 64 bits floating point numbers (ex. 34.7689,5.20,45.789)
-
Multi-block 32-bit binary file (example: gov2 from the DocId data set)
Block format: [n1: #of Ids][Id1][Id2]...[IdN] [n2: #of Ids][Id1][Id2]...[IdN] ...
./icbench -fS -r gov2.sorted
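The block layout above can be read with a simple loop before handing each list to a codec; a minimal sketch, assuming counts and ids are stored as 32-bit integers in native byte order (a detail this README does not spell out):

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Minimal reader for the block layout above: [n: #of ids][id1]...[idn], repeated
   until end of file. Counts and ids are assumed to be 32-bit, native byte order. */
int main(int argc, char *argv[]) {
  if (argc < 2) { fprintf(stderr, "usage: %s file\n", argv[0]); return 1; }
  FILE *f = fopen(argv[1], "rb");
  if (!f) { perror("fopen"); return 1; }
  uint32_t n;
  while (fread(&n, sizeof n, 1, f) == 1) {          /* block header: number of ids */
    if (!n) continue;
    uint32_t *ids = malloc((size_t)n * sizeof *ids);
    if (!ids || fread(ids, sizeof *ids, n, f) != n) { free(ids); break; }
    /* ids[0..n-1] is one sorted docid list, ready for delta coding + compression */
    printf("block: %u ids, first=%u, last=%u\n", n, ids[0], ids[n - 1]);
    free(ids);
  }
  fclose(f);
  return 0;
}
```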
1 - Download Gov2 (or ClueWeb09) + query files (Ex. "1mq.txt") from DocId data set
8GB RAM required (16GB recommended for benchmarking "clueweb09" files).
2 - Create index file
./idxcr gov2.sorted .
create inverted index file "gov2.sorted.i" in the current directory
3 - Test intersections
./idxqry gov2.sorted.i 1mq.txt
run queries in file "1mq.txt" over the index of gov2 file
1 - Create partitions
./idxseg gov2.sorted . -26m -s8
create 8 partitions (one per CPU hardware thread) for a total of ~26 million document ids
2 - Create index file for each partition
./idxcr gov2.sorted.s*
create inverted index file for all partitions "gov2.sorted.s00 - gov2.sorted.s07" in the current directory
3 - Intersections:
delete "idxqry.o" file and then type "make para" to compile "idxqry" w. multithreading
./idxqry gov2.sorted.s*.i 1mq.txt
run queries in file "1mq.txt" over the index of all gov2 partitions "gov2.sorted.s00.i - gov2.sorted.s07.i".
See the benchmark program "icbench" for "integer compression" usage examples. In general, encoding/decoding functions are of the form:
char *endptr = encode( unsigned *in, unsigned n, char *out, [unsigned start], [int b])
endptr : set by encode to the next character in "out" after the encoded buffer
in : input integer array
n : number of elements
out : pointer to output buffer
b : number of bits. Only for bit packing functions
start : previous value. Only for integrated delta encoding functions
char *endptr = decode( char *in, unsigned n, unsigned *out, [unsigned start], [int b])
endptr : set by decode to the next character in "in" after the decoded buffer
in : pointer to input buffer
n : number of elements
out : output integer array
b : number of bits. Only for bit unpacking functions
start : previous value. Only for integrated delta decoding functions
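As an illustration of the optional b argument, here is a bit-packing round trip. This is a minimal sketch: the names bitpack32/bitunpack32 follow the naming scheme further below, and the exact parameter types and the output-buffer bound are assumptions.

```c
#include <stdio.h>
#include <stdlib.h>
#include "bitpack.h"                 /* bit packing/unpacking (see header table below) */

#define N 128
#define B 6                          /* every input value must fit in B bits */

int main(void) {
  unsigned in[N], out[N];
  char *buf = malloc(N * sizeof(unsigned) + 1024);   /* assumed worst-case output bound */
  for (unsigned i = 0; i < N; i++) in[i] = rand() & ((1u << B) - 1);

  char *ep = bitpack32(in, N, buf, B);   /* pack N values, B bits each */
  bitunpack32(buf, N, out, B);           /* the decoder must be given the same B */
  printf("%d values x %d bits -> %ld bytes\n", N, B, (long)(ep - buf));

  for (unsigned i = 0; i < N; i++)
    if (in[i] != out[i]) return 1;       /* round-trip check */
  free(buf);
  return 0;
}
```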
Simple high level functions:
size_t compressed_size = encode( unsigned *in, size_t n, char *out)
compressed_size : number of bytes written into compressed output buffer out
size_t compressed_size = decode( char *in, size_t n, unsigned *out)
compressed_size : number of bytes read from compressed input buffer in
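For example, a round trip with the TurboPFor block functions listed for vp4.h below, following the generic low-level prototypes above (exact parameter types and the output-buffer bound are assumptions, check vp4.h):

```c
#include <stdio.h>
#include <stdlib.h>
#include "vp4.h"                     /* p4enc32/p4dec32: TurboPFor block codec */

#define N 128                        /* typical "integer compression" block size */

int main(void) {
  unsigned in[N], out[N];
  char *buf = malloc(N * sizeof(unsigned) + 1024);   /* assumed worst-case output bound */
  for (unsigned i = 0; i < N; i++) in[i] = rand() & 255;

  char *eenc = p4enc32(in, N, buf);  /* endptr: next position in buf after the encoded block */
  char *edec = p4dec32(buf, N, out); /* endptr: next position in buf after the consumed bytes */
  printf("%d integers -> %ld bytes (decoder consumed %ld bytes)\n",
         N, (long)(eenc - buf), (long)(edec - buf));

  for (unsigned i = 0; i < N; i++)
    if (in[i] != out[i]) return 1;   /* round-trip check */
  free(buf);
  return 0;
}
```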
-
{vb | p4 | bit | vs}[d | d1 | f | fm | z ]{enc/dec | pack/unpack}[| 128V | 256V][8 | 16 | 32 | 64]:
vb: variable byte
p4: turbopfor
vs: variable simple
bit: bit packing
d : delta encoding for increasing integer lists (sorted w/ duplicates)
d1: delta encoding for strictly increasing integer lists (sorted unique)
f : FOR encoding for sorted integer lists
fm: FOR encoding for unsorted integer lists
z : zigzag encoding for unsorted integer lists
enc/pack: encode
dec/unpack:decode
XX : integer size (8/16/32/64)
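Examples of names composed from this scheme (all appear in the header table below):
p4zenc32 = p4 + z + enc + 32 : TurboPFor, zigzag, encode, 32-bit integers
vbdenc32 = vb + d + enc + 32 : variable byte, delta (sorted w/ duplicates), encode, 32-bit integers
bitpack256v32 = bit + pack + 256V + 32 : bit packing, 256-bit SIMD, pack, 32-bit integers
vsdec64 = vs + dec + 64 : variable simple, decode, 64-bit integers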
header files to use with documentation:
c/c++ header file | Integer Compression functions | examples |
---|---|---|
vint.h | variable byte | vbenc32/vbdec32 vbdenc32/vbddec32 vbzenc32/vbzdec32 |
vsimple.h | variable simple | vsenc64/vsdec64 |
vp4.h | TurboPFor | p4enc32/p4dec32 p4denc32/p4ddec32 p4zenc32/p4zdec32 |
bitpack.h | Bit Packing, For, +Direct Access | bitpack256v32/bitunpack256v32 bitforenc64/bitfordec64 |
eliasfano.h | Elias Fano | efanoenc256v32/efanodec256v32 |
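For a sorted (increasing) list, the integrated-delta variants of vp4.h take the extra start value described for the generic prototypes above; a minimal sketch (the exact signature and the output-buffer bound are assumptions, check vp4.h):

```c
#include <stdio.h>
#include <stdlib.h>
#include "vp4.h"                     /* p4denc32/p4ddec32: TurboPFor w/ integrated delta */

#define N 128

int main(void) {
  unsigned in[N], out[N];
  char *buf = malloc(N * sizeof(unsigned) + 1024);   /* assumed worst-case output bound */

  in[0] = 3;                         /* increasing docid list (sorted, duplicates allowed) */
  for (unsigned i = 1; i < N; i++) in[i] = in[i - 1] + (rand() & 15);

  /* start = value preceding this block: 0 for the first block of a list */
  char *ep = p4denc32(in, N, buf, 0);
  p4ddec32(buf, N, out, 0);
  printf("sorted block: %d ids -> %ld bytes\n", N, (long)(ep - buf));

  for (unsigned i = 0; i < N; i++)
    if (in[i] != out[i]) return 1;   /* round-trip check */
  free(buf);
  return 0;
}
```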
- Linux: GNU GCC (>=4.6)
- clang (>=3.2)
- Windows: MinGW-w64 (no parallel query processing demo app)
- Visual C++ (VS2008-VS2017)
- All TurboPFor integer compression functions are thread safe
-
Benchmark references:
- FastPFor + Simdcomp: SIMDPack FPF, Vbyte FPF, VarintG8IU, StreamVbyte, GroupSimple
- Optimized Pfor-delta compression code: OptPFD/OptP4, Simple16 (limited to 28 bits integers)
- MaskedVByte. See also: Vectorized VByte Decoding
- Streamvbyte.
- Index Compression Using 64-Bit Words: Simple-8b (speed optimized version tested)
- libfor
- Compression, SIMD, and Postings Lists QMX integer compression from the "simple family"
- lz4: included w/ block size 64K as an indication. Tested after preprocessing w/ delta+transpose
- blosc: blosc is essentially transpose/shuffle + lz77. Tested blosc+lz4 and blosclz, incl. vectorized shuffle.
- Document identifier data set
-
Integer compression publications:
- In Vacuo and In Situ Evaluation of SIMD Codecs (TurboPackV, TurboPFor/QMX) + paper
- SIMD Compression and the Intersection of Sorted Integers
- Partitioned Elias-Fano Indexes
- On Inverted Index Compression for Search Engine Efficiency
- Google's Group Varint Encoding
- Integer Compression tweets
- Efficient Compression of Scientific Floating-Point Data and An Application in Structural Analysis
- SPDP: a compression/decompression algorithm for binary IEEE 754 32/64-bit floating-point data
- SPDP - An Automatically Synthesized Lossless Compression Algorithm for Floating-Point Data + DCC 2018
-
Applications:
Last update: 16 Mar 2018