[SPARK-17654] [SQL] Propagate bucketing information for Hive tables to planner #15229
Conversation
Test build #65862 has finished for PR 15229 at commit
Test build #66114 has finished for PR 15229 at commit
Test build #66128 has finished for PR 15229 at commit
Test build #66125 has finished for PR 15229 at commit
@tejasapatil how does Hive store bucketing files?
Hi @rxin, in Hive you have two levels, the partition and the buckets, e.g.:

`/apps/hive/warehouse/model_table/date=6`

where `model_table` is the name of the table and `date` is the partition. Inside a partition folder you will have n files, and Hive lets you decide how many files you want to create (buckets) and which data goes into each. If you create a table like this in Hive:

```sql
create table events (
  `timestamp` bigint,
  userId string,
  event string
)
partitioned by (event_date int)
clustered by (userId) sorted by (userId, timestamp) into 10 buckets;
```

then there will be only 10 files per partition, and all the events for one user will be in a single file (bucket) within a partition, sorted by time. If you insert data into this table using the following query in Hive, you will see that the clustering policy is respected:

```sql
set hive.enforce.bucketing = true; -- (Note: not needed in Hive 2.x onward)
from event_feed_source e
insert overwrite table events
partition (event_date = 20170307)
select e.*, 20170307
where event_day = 20170307;
```

However, if you do the next insert with Spark:

```scala
sqlContext.sql("insert overwrite table events partition (event_date = 20170307) select e.*, 1 from event_feed_source e")
```

you will see that the data is stored with the same partitioning as the source dataset. What is the benefit of respecting the Hive clustering policy? To give an example, we have a pipeline that reads thousands of events per user and saves them into another table (model), so the events table is going to have x times more data than the model table (imagine a factor of 10x). The first point is that if the source data is clustered properly we can read all the events per user without a shuffle (I mean to do something like ...). The second point is when we generate the model RDD/Dataset from the event RDD/Dataset: Spark respects the source partitioning (unless you indicate otherwise), which means it is going to save into Hive 10 times the number of files needed for the model (not respecting the clustering policy in Hive). I hope I clarified the point on Hive clusters ;)
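(As an aside, not from the PR itself: Spark's own `DataFrameWriter` bucketing already illustrates the benefit described above, although the layout it writes is not Hive-compatible, which is exactly the gap this PR series targets. A minimal sketch, assuming a `SparkSession` named `spark` and an existing `event_feed_source` table; the table names are placeholders.)

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("bucketing-sketch")
  .enableHiveSupport()
  .getOrCreate()

// Write events clustered and sorted by userId into 10 buckets,
// using Spark's native bucketing format (not Hive-compatible).
spark.table("event_feed_source")
  .write
  .bucketBy(10, "userId")
  .sortBy("userId", "timestamp")
  .saveAsTable("events_bucketed")

// An aggregation keyed on the bucketing column can then avoid a shuffle,
// because the scan reports a matching output partitioning to the planner.
spark.table("events_bucketed")
  .groupBy("userId")
  .count()
  .explain() // no Exchange expected before the aggregate
```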
@carlos-verdes: Thanks for the information. This has been moved under an umbrella JIRA (SPARK-19256), which has a proposal: https://docs.google.com/document/d/1a8IDh23RAkrkg9YYAeO51F4aGO8-xAlupKwdshve2fc/edit I believe all your requirements are captured in the proposal; if not, let me know. Meanwhile, I will close this PR and re-open it when the right pieces are together.
What changes were proposed in this pull request?
This PR depends on #15300 and includes the following changes to have better planning for Hive bucketed tables:

- `HiveTableScanExec` now exposes `outputPartitioning` and `outputOrdering` as per the bucketing spec.
- `InsertIntoHiveTable` now exposes `requiredChildDistribution` and `requiredChildOrdering` based on the target table's bucketing spec.

Despite this PR, Spark still won't produce bucketed data as per Hive's bucketing guarantees, but it will allow writes iff the user wishes to do so without caring about bucketing guarantees. I am incrementally working on closing the gaps to have complete Hive bucketing support in Spark, but those will be separate PRs (e.g. the PR to add Hive's hashing function, #15047). A rough sketch of these planner hooks follows below.
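For readers unfamiliar with these planner hooks, here is a simplified sketch of the shape they take. This is illustrative only, not the PR's actual code: the trait names and the `bucketColumns` / `sortColumns` / `numBuckets` members are hypothetical placeholders.

```scala
import org.apache.spark.sql.catalyst.expressions.{Ascending, Attribute, SortOrder}
import org.apache.spark.sql.catalyst.plans.physical.{ClusteredDistribution, Distribution, HashPartitioning, Partitioning}

// Scan side: advertise how the bucketed data is already laid out on disk,
// so the planner can avoid inserting redundant Exchange / Sort operators.
trait BucketedScanLike {
  def bucketColumns: Seq[Attribute]
  def sortColumns: Seq[Attribute]
  def numBuckets: Int

  def outputPartitioning: Partitioning =
    HashPartitioning(bucketColumns, numBuckets)

  def outputOrdering: Seq[SortOrder] =
    sortColumns.map(SortOrder(_, Ascending))
}

// Insert side: ask the planner to cluster and sort incoming rows so that
// they match the target table's bucketing spec before being written out.
trait BucketedInsertLike {
  def bucketColumns: Seq[Attribute]
  def sortColumns: Seq[Attribute]

  def requiredChildDistribution: Seq[Distribution] =
    ClusteredDistribution(bucketColumns) :: Nil

  def requiredChildOrdering: Seq[Seq[SortOrder]] =
    Seq(sortColumns.map(SortOrder(_, Ascending)))
}
```

The idea is that a scan over a bucketed table advertises its existing layout via `outputPartitioning` / `outputOrdering`, while an insert into a bucketed table demands a matching layout via `requiredChildDistribution` / `requiredChildOrdering`, letting the planner drop Exchange and Sort nodes that would otherwise be added.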
How was this patch tested?
This PR depends on #15300 to let Spark create Hive bucketed tables. Once that gets in, I can write unit tests for this PR.