Parquet files concat tool #1931

@asfimport

Description

Currently, Parquet file generation is time-consuming; most of the time is spent on serialization and compression. In our scenario it takes about 10 minutes to generate a ~100 MB Parquet file. We want to improve write performance without generating too many small files, which would hurt read performance.

We propose to:

  1. generate several small Parquet files concurrently
  2. merge the small files into one file: concatenate the Parquet blocks in binary (without SerDe), merge the footers, and update the path and offset metadata (see the sketch below).
    We created a ParquetFilesConcat class to perform step 2. It can be invoked via parquet.tools.command.ConcatCommand. If this feature is approved by the Parquet community, we will integrate it into Spark.

Concatenation will impact compression and introduce more dictionary pages, but this can be mitigated by adjusting the concurrency of step 1.
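For illustration, here is a minimal sketch of what step 2's binary concatenation can look like with present-day parquet-mr APIs, where ParquetFileWriter.appendFile copies row groups byte-for-byte and rewrites their offsets in the merged footer. The ConcatSketch class and merge() helper are hypothetical names, not the ParquetFilesConcat class proposed in this issue:

```java
import java.io.IOException;
import java.util.Collections;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.hadoop.ParquetFileReader;
import org.apache.parquet.hadoop.ParquetFileWriter;
import org.apache.parquet.hadoop.ParquetWriter;
import org.apache.parquet.hadoop.util.HadoopInputFile;
import org.apache.parquet.hadoop.util.HadoopOutputFile;
import org.apache.parquet.schema.MessageType;

public class ConcatSketch {
  /** Concatenates the row groups of several Parquet files into one file.
      All inputs must share the same schema. */
  public static void merge(Configuration conf, List<Path> inputs, Path output)
      throws IOException {
    // Take the schema from the first input; a real tool should verify
    // that every input matches it before appending.
    MessageType schema;
    try (ParquetFileReader reader =
        ParquetFileReader.open(HadoopInputFile.fromPath(inputs.get(0), conf))) {
      schema = reader.getFooter().getFileMetaData().getSchema();
    }

    ParquetFileWriter writer = new ParquetFileWriter(
        HadoopOutputFile.fromPath(output, conf), schema,
        ParquetFileWriter.Mode.CREATE,
        ParquetWriter.DEFAULT_BLOCK_SIZE, ParquetWriter.MAX_PADDING_SIZE_DEFAULT);
    writer.start();
    for (Path input : inputs) {
      // appendFile copies the raw row-group bytes and records their new
      // offsets in the merged footer -- no decompression or re-encoding.
      writer.appendFile(HadoopInputFile.fromPath(input, conf));
    }
    writer.end(Collections.emptyMap()); // writes the merged footer
  }
}
```

parquet-tools ultimately shipped a merge command built on this appendFile mechanism, which serves the same purpose as the ConcatCommand proposed here.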

Reporter: flykobe cheng / @flykobe
Assignee: flykobe cheng / @flykobe

PRs and other links:

Note: This issue was originally created as PARQUET-460. Please see the migration documentation for further details.
