I am testing this on Alpine Linux, with PostgreSQL 12.5 and TimescaleDB 2.0.1. With a batch size of 1,000,000 it worked fine. When I increased the batch size to 10,000,000, I started getting an error:
FATAL: terminating connection because protocol synchronization was lost
Google led me to lib/pq#473 (comment).
Not sure if that is the same issue. The file has 133 million records, and PostgreSQL's COPY loads it without any problems. I am trying to tune the batch-size/number-of-workers combination to see if parallel-copy can give better throughput. So far I have gotten roughly 90,000–100,000 rows per second with plain COPY, while parallel-copy gave me 50,000–60,000 rows per second.
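For scale, here is a quick back-of-envelope calculation of what those throughput figures mean as total load time for the 133-million-row file (the rates are the ones reported above; everything else is simple arithmetic):

```python
# Total load time for a 133M-row file at the reported throughputs (rows/second).
ROWS = 133_000_000

def load_minutes(rows_per_sec: int) -> float:
    """Return total load time in minutes at a given sustained throughput."""
    return ROWS / rows_per_sec / 60

for label, rps in [
    ("COPY (low)", 90_000),
    ("COPY (high)", 100_000),
    ("parallel-copy (low)", 50_000),
    ("parallel-copy (high)", 60_000),
]:
    print(f"{label:>20}: {load_minutes(rps):6.1f} min")
```

So at the observed rates, plain COPY finishes in roughly 22–25 minutes, while parallel-copy takes roughly 37–44 minutes, which is why the batch-size/workers tuning matters here.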
@jayadevanm With #63 merged, the utility no longer uses lib/pq under the hood. Can you rebuild with the most recent commit and see if this solves your issue?
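If it helps, rebuilding from the latest commit should be a one-liner with the Go toolchain; the module path below is assumed from the repository's location under the timescale GitHub org, so verify it against the project README:

```shell
# Install the utility from the most recent commit on the default branch
# (assumed module path; check the repo's README if this 404s):
go install github.com/timescale/timescaledb-parallel-copy/cmd/timescaledb-parallel-copy@latest
```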