# DataFusion Benchmarks

This crate contains benchmarks based on popular public data sets and open source benchmark suites, making it easy to run real-world benchmarks for performance and scalability testing, and to compare performance with other Arrow implementations and other query engines.

## Benchmark derived from TPC-H

These benchmarks are derived from the TPC-H benchmark.

### Generating Test Data

TPC-H data can be generated using the `tpch-gen.sh` script, which creates a Docker image containing the TPC-H data generator.

```bash
# <scale_factor>: scale of the database population; scale 1.0 represents ~1 GB of data
./tpch-gen.sh <scale_factor>
```

Data will be generated into the `data` subdirectory and will not be checked in, because this directory has been added to the `.gitignore` file.
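The generator should leave one `.tbl` file per TPC-H table. A quick way to sanity-check the output (the file names below are the standard dbgen table names, assumed rather than taken from this crate's docs):

```bash
ls -lh ./data
# customer.tbl  lineitem.tbl  nation.tbl  orders.tbl
# part.tbl      partsupp.tbl  region.tbl  supplier.tbl
```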

### Running the DataFusion Benchmarks

The benchmark can then be run (assuming the data created from dbgen is in `./data`) with a command such as:

```bash
cargo run --release --bin tpch -- benchmark datafusion --iterations 3 --path ./data --format tbl --query 1 --batch-size 4096
```

You can enable the `simd` feature (to use SIMD instructions; a nightly Rust toolchain is required) and/or `mimalloc` or `snmalloc` (to use the mimalloc or snmalloc allocator) by passing them in with `--features`:

```bash
cargo run --release --features "simd mimalloc" --bin tpch -- benchmark datafusion --iterations 3 --path ./data --format tbl --query 1 --batch-size 4096
```
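To benchmark all of the queries in one pass, a shell loop over the `--query` flag works; this is a sketch that assumes queries 1 through 22 are all runnable in this crate:

```bash
# Run each TPC-H query in turn with the same settings as above
for q in $(seq 1 22); do
  cargo run --release --bin tpch -- benchmark datafusion \
    --iterations 3 --path ./data --format tbl --query "$q" --batch-size 4096
done
```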

The benchmark program also supports CSV and Parquet input file formats, and a utility is provided to convert from `tbl` (generated by the dbgen utility) to CSV and Parquet:

```bash
cargo run --release --bin tpch -- convert --input ./data --output /mnt/tpch-parquet --format parquet
```
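Once converted, the benchmark command shown earlier can be pointed at the Parquet copy by switching the `--format` and `--path` flags:

```bash
cargo run --release --bin tpch -- benchmark datafusion \
  --iterations 3 --path /mnt/tpch-parquet --format parquet --query 1 --batch-size 4096
```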

This utility does not yet support changing the number of partitions during the conversion. Another option is to use the following Docker image, which can convert from `tbl` files to CSV or Parquet and also repartition the output.

```
docker run -it ballistacompute/spark-benchmarks:0.4.0-SNAPSHOT
  -h, --help   Show help message

Subcommand: convert-tpch
  -i, --input  <arg>
      --input-format  <arg>
  -o, --output  <arg>
      --output-format  <arg>
  -p, --partitions  <arg>
  -h, --help                   Show help message
```

Note that it is necessary to mount volumes into the Docker container as appropriate so that the file conversion process can access files on the host system.

Here is a full example that assumes the data is stored in the `/mnt` path on the host system:

```bash
docker run -v /mnt:/mnt -it ballistacompute/spark-benchmarks:0.4.0-SNAPSHOT \
  convert-tpch \
  --input /mnt/tpch/csv \
  --input-format tbl \
  --output /mnt/tpch/parquet \
  --output-format parquet \
  --partitions 64
```
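After the conversion finishes, a quick check confirms the requested partition count took effect (the directory-per-table, one-file-per-partition layout is an assumption about how this Spark-based converter writes its output):

```bash
# Expect roughly 64 data files for the lineitem table, one per partition
ls /mnt/tpch/parquet/lineitem | wc -l
```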

### Expected output

Query 1 should produce the following output when executed against the SF=1 data set:

```
+--------------+--------------+----------+--------------------+--------------------+--------------------+--------------------+--------------------+----------------------+-------------+
| l_returnflag | l_linestatus | sum_qty  | sum_base_price     | sum_disc_price     | sum_charge         | avg_qty            | avg_price          | avg_disc             | count_order |
+--------------+--------------+----------+--------------------+--------------------+--------------------+--------------------+--------------------+----------------------+-------------+
| A            | F            | 37734107 | 56586554400.73001  | 53758257134.870026 | 55909065222.82768  | 25.522005853257337 | 38273.12973462168  | 0.049985295838396455 | 1478493     |
| N            | F            | 991417   | 1487504710.3799996 | 1413082168.0541    | 1469649223.1943746 | 25.516471920522985 | 38284.467760848296 | 0.05009342667421622  | 38854       |
| N            | O            | 74476023 | 111701708529.50996 | 106118209986.10472 | 110367023144.56622 | 25.502229680934594 | 38249.1238377803   | 0.049996589476752576 | 2920373     |
| R            | F            | 37719753 | 56568041380.90001  | 53741292684.60399  | 55889619119.83194  | 25.50579361269077  | 38250.854626099666 | 0.05000940583012587  | 1478870     |
+--------------+--------------+----------+--------------------+--------------------+--------------------+--------------------+--------------------+----------------------+-------------+
Query 1 iteration 0 took 1956.1 ms
Query 1 avg time: 1956.11 ms
```

## NYC Taxi Benchmark

These benchmarks are based on the New York Taxi and Limousine Commission data set.

```bash
cargo run --release --bin nyctaxi -- --iterations 3 --path /mnt/nyctaxi/csv --format csv --batch-size 4096
```

Example output:

```
Running benchmarks with the following options: Opt { debug: false, iterations: 3, batch_size: 4096, path: "/mnt/nyctaxi/csv", file_format: "csv" }
Executing 'fare_amt_by_passenger'
Query 'fare_amt_by_passenger' iteration 0 took 7138 ms
Query 'fare_amt_by_passenger' iteration 1 took 7599 ms
Query 'fare_amt_by_passenger' iteration 2 took 7969 ms
```
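The `--batch-size` flag is the main tuning knob exposed here; a quick sweep shows its effect on query time (a sketch reusing the documented flags):

```bash
# Compare several batch sizes against the same CSV data
for bs in 1024 4096 16384; do
  cargo run --release --bin nyctaxi -- \
    --iterations 3 --path /mnt/nyctaxi/csv --format csv --batch-size "$bs"
done
```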