I am trying to convert a very sparse dataset (~3% of rows in a range are populated) to Parquet. The file I am working with spans up to ~63M rows. I decided to iterate in batches of 500k rows, 127 batches in total. Each row batch is a RecordBatch. I create 4 batches at a time and write to a Parquet file incrementally. Something like this:
{code}
batches = [..]  # 4 batches
tbl = pa.Table.from_batches(batches)
pqwriter.write_table(tbl, row_group_size=15000)
# same issue with pq.write_table(..)
{code}
I was getting a segmentation fault at the final step, and I narrowed it down to a specific iteration. I noticed that iteration had empty batches; specifically, the batch sizes were [0, 0, 2876, 14423]. The number of rows for each RecordBatch for the whole dataset is below:
On excluding the empty RecordBatches, the segfault goes away, but unfortunately I couldn't create a proper minimal example with synthetic data.
Not quite minimal example
The data I am using is from the 1000 Genomes Project, which has been public for many years, so we can be reasonably sure the data is good. The following steps should help you replicate the issue.
1. Download the data file (and index), about 330MB.
2. Install the Cython library pysam, a thin wrapper around the reference implementation of the VCF file spec. You will need zlib headers, but that's probably not a problem :)
   $ pip3 install --user pysam
3. Now you can use the attached script to replicate the crash.
Extra information
I have tried attaching gdb; the backtrace when the segfault occurs is shown below (maybe it helps, as this is how I realised empty batches could be the reason).
{code}
(gdb) bt
#0  0x00007f3e7676d670 in parquet::TypedColumnWriter<parquet::DataType<(parquet::Type::type)6> >::WriteMiniBatch(long, short const*, short const*, parquet::ByteArray const*) ()
   from /home/user/miniconda3/lib/python3.6/site-packages/pyarrow/../../../libparquet.so.11
#1  0x00007f3e76733d1e in arrow::Status parquet::arrow::(anonymous namespace)::ArrowColumnWriter::TypedWriteBatch<parquet::DataType<(parquet::Type::type)6>, arrow::BinaryType>(arrow::Array const&, long, short const*, short const*) ()
   from /home/user/miniconda3/lib/python3.6/site-packages/pyarrow/../../../libparquet.so.11
#2  0x00007f3e7673a3d4 in parquet::arrow::(anonymous namespace)::ArrowColumnWriter::Write(arrow::Array const&) ()
   from /home/user/miniconda3/lib/python3.6/site-packages/pyarrow/../../../libparquet.so.11
#3  0x00007f3e7673df09 in parquet::arrow::FileWriter::Impl::WriteColumnChunk(std::shared_ptr<arrow::ChunkedArray> const&, long, long) ()
   from /home/user/miniconda3/lib/python3.6/site-packages/pyarrow/../../../libparquet.so.11
#4  0x00007f3e7673c74d in parquet::arrow::FileWriter::WriteColumnChunk(std::shared_ptr<arrow::ChunkedArray> const&, long, long) ()
   from /home/user/miniconda3/lib/python3.6/site-packages/pyarrow/../../../libparquet.so.11
#5  0x00007f3e7673c8d2 in parquet::arrow::FileWriter::WriteTable(arrow::Table const&, long) ()
   from /home/user/miniconda3/lib/python3.6/site-packages/pyarrow/../../../libparquet.so.11
#6  0x00007f3e731e3a51 in __pyx_pw_7pyarrow_8_parquet_13ParquetWriter_5write_table(_object*, _object*, _object*) ()
   from /home/user/miniconda3/lib/python3.6/site-packages/pyarrow/_parquet.cpython-36m-x86_64-linux-gnu.so
{code}
Environment: Fedora 28, pyarrow installed with pip; Fedora 29, pyarrow installed from conda-forge
Reporter: Suvayu Ali / @suvayu
Assignee: Wes McKinney / @wesm
Tanya Schlusser / @tanyaschlusser:
I have followed Suvayu's instructions and can successfully reproduce the segfault. I am going to try working on this, thanks!
Tanya Schlusser / @tanyaschlusser:
I can now reproduce the bug with more minimal code (see the attached file minimal_bug_arrow3792.py). It is a problem with the column that contains a list; I think the segfault occurs when dealing with the empty batch that is supposed to contain a list column. I'm still going to look at it more, but in case I'm slow and someone else wants to do it faster, you no longer need to download the genome dataset or pysam.
Wes McKinney / @wesm:
The Parquet writer code behaves incorrectly when writing a length-0 array. There is another bug report about writing length-0 record batches, so possibly the same fix is involved.
Externally tracked issue: #2951
Note: This issue was originally created as ARROW-3792. Please see the migration documentation for further details.