When using maple to import a 40GB+ Postgres database, I noticed that queries became progressively slower and the complete Hadoop job eventually failed because of the use of OFFSET. After changing that line to this:
// NOTE: this hardcodes the primary key column ("id")
query.append(" WHERE id >= ").append(split.getStart());
query.append(" LIMIT ").append(split.getLength());
The query time no longer grows with the offset and stays roughly constant across splits. The above is not a generic solution (e.g. your primary key might not be id). Do you have suggestions for handling this situation? I'm also not sure how other JDBC databases handle OFFSET.
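A more generic version of my change might look like the sketch below. The class and method here are just illustrative, and it assumes the key column is an indexed, dense, numeric primary key; with gaps in the key, row-count splits would overlap or skip rows and you would need explicit key-range bounds instead.

public class SplitQueryBuilder {
    // Build the per-split SELECT with keyset pagination on a configurable
    // key column. Postgres can seek straight into the index for the WHERE
    // clause, whereas OFFSET must scan and discard 'start' rows per split.
    public static String selectForSplit(String table, String keyColumn,
                                        long start, long length) {
        StringBuilder query = new StringBuilder();
        query.append("SELECT * FROM ").append(table);
        query.append(" WHERE ").append(keyColumn).append(" >= ").append(start);
        // A deterministic order makes LIMIT pick a stable window of rows.
        query.append(" ORDER BY ").append(keyColumn);
        query.append(" LIMIT ").append(length);
        return query.toString();
    }

    public static void main(String[] args) {
        // Prints: SELECT * FROM events WHERE id >= 1000000 ORDER BY id LIMIT 50000
        System.out.println(selectForSplit("events", "id", 1000000L, 50000L));
    }
}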
Has this library been used on large Postgres DBs before? I would like to gain some insight into best practices; even with the above optimization my import takes around 3 hours.
Thanks for your work on maple.
Cheers,
Jeroen
We have only used this Tap on relatively small MySQL tables, so no, it has not been tested on a large dataset coming from a DB.
Keep in mind that there is a version of JDBCScheme on which the primary key is defined; you can use that instead of hardcoding the primary key.
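Whichever variant you end up with, it is worth confirming that Postgres actually performs an index scan for the split query. A quick standalone check with plain JDBC (the URL, credentials, table name, and numbers below are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Run EXPLAIN on a representative split query and print the plan. Expect
// an "Index Scan" on the key column; a "Seq Scan" means the WHERE clause
// is not using the index and per-split cost will grow with table size.
public class ExplainSplitQuery {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:postgresql://localhost:5432/mydb"; // placeholder
        try (Connection conn = DriverManager.getConnection(url, "user", "pass");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                 "EXPLAIN SELECT * FROM my_table WHERE id >= 1000000"
                 + " ORDER BY id LIMIT 50000")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}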