
[BEAM-9650] Adding support for ReadAll from BigQuery transform #13170

Merged
merged 29 commits into apache:master from bq_si_final_change on Nov 30, 2020

Conversation

@pabloem (Member) commented Oct 22, 2020

This adds a DoFn to perform BigQuery exports and pass them downstream to be consumed.

Eventually, we'll move ReadFromBigQuery to use this as part of its implementation, but for now, we'll leave them separate.
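
A minimal usage sketch (illustrative only, not lifted from the change itself), assuming the public names settled on in the review discussion below, ReadFromBigQueryRequest and ReadAllFromBigQuery:

import apache_beam as beam
from apache_beam.io.gcp.bigquery import ReadAllFromBigQuery, ReadFromBigQueryRequest

with beam.Pipeline() as p:
  rows = (
      p
      # Each request describes one table or query to read.
      | 'Requests' >> beam.Create([
          ReadFromBigQueryRequest(table='project:dataset.table'),
          ReadFromBigQueryRequest(query='SELECT field FROM `project.dataset.table`'),
      ])
      # The new transform runs a BigQuery export for each request and
      # passes the resulting rows downstream.
      | 'ReadAll' >> ReadAllFromBigQuery())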

r: @chamikaramj
r: @tysonjh


Thank you for your contribution! Follow this checklist to help us incorporate your contribution quickly and easily:

  • Choose reviewer(s) and mention them in a comment (R: @username).
  • Format the pull request title like [BEAM-XXX] Fixes bug in ApproximateQuantiles, where you replace BEAM-XXX with the appropriate JIRA issue, if applicable. This will automatically link the pull request to the issue.
  • Update CHANGES.md with noteworthy changes.
  • If this contribution is large, please file an Apache Individual Contributor License Agreement.

See the Contributor Guide for more tips on how to make the review process smoother.

Post-Commit Tests Status (on master branch)

(Build status badge table: post-commit jobs for the Go, Java, Python, and XLang SDKs across the SDK, Dataflow, Flink, Samza, Spark, and Twister2 columns.)

Pre-Commit Tests Status (on master branch)

(Build status badge table: non-portable and portable pre-commit jobs for Java, Python, Go, Website, Whitespace, and Typescript.)

See .test-infra/jenkins/README for the trigger phrases, status, and links of all Jenkins jobs.

GitHub Actions Tests Status (on master branch)

Build python source distribution and wheels
Python tests
Java tests

See CI.md for more information about GitHub Actions CI.

@pabloem (Member Author) commented Oct 22, 2020

Run Python 3.8 PostCommit

1 similar comment
@pabloem (Member Author) commented Oct 22, 2020

Run Python 3.8 PostCommit

@pabloem (Member Author) commented Oct 22, 2020

Run Python_PVR_Flink PreCommit

@pabloem (Member Author) commented Oct 23, 2020

Run Python 3.8 PostCommit

@pabloem (Member Author) commented Oct 23, 2020

Run Python_PVR_Flink PreCommit

@pabloem (Member Author) commented Oct 23, 2020

Run Python 3.8 PostCommit

@pabloem (Member Author) commented Oct 23, 2020

Run Python_PVR_Flink PreCommit

2 similar comments
@pabloem (Member Author) commented Oct 23, 2020

Run Python_PVR_Flink PreCommit

@pabloem (Member Author) commented Oct 24, 2020

Run Python_PVR_Flink PreCommit

@pabloem (Member Author) commented Oct 24, 2020

Run Python 3.8 PostCommit

@pabloem (Member Author) commented Nov 4, 2020

Run Python 3.8 PostCommit

1 similar comment
@pabloem (Member Author) commented Nov 4, 2020

Run Python 3.8 PostCommit

@pabloem (Member Author) commented Nov 4, 2020

Run Portable_Python PreCommit

@pabloem (Member Author) commented Nov 5, 2020

Run Python 3.8 PostCommit

@pabloem (Member Author) commented Nov 5, 2020

@pabloem (Member Author) commented Nov 5, 2020

Run Python PreCommit

@pabloem pabloem force-pushed the bq_si_final_change branch from db8d9e7 to 0842eaa on November 5, 2020 18:23
@pabloem pabloem changed the title readall from bq [BEAM-9650] Adding support for ReadAll from BigQuery transform Nov 5, 2020
@pabloem pabloem marked this pull request as ready for review November 5, 2020 18:41
@pabloem (Member Author) commented Nov 5, 2020

Run Python 3.8 PostCommit

@pabloem pabloem requested a review from chamikaramj November 5, 2020 18:42
@pabloem (Member Author) commented Nov 5, 2020

Run Python 3.8 PostCommit

@pabloem (Member Author) commented Nov 5, 2020

| 'PeriodicImpulse' >> PeriodicImpulse(
    first_timestamp, last_timestamp, interval, True)
| 'MapToReadRequest' >> beam.Map(
    lambda x: BigQueryReadRequest(table='dataset.table'))
Contributor

I see a few different names here,

ReadAllFromBigQuery
ReadFromBigQueryRequest
BigQueryReadRequest

I'm a bit confused by the differences and interaction between these classes.

If ReadFromBigQueryRequest is something users interact with, it should not be in an internal file (e.g. bigquery_read_internal.py). Is there a need to expose it at all? Instead, could it just be:

side_input = (
  p
  | 'PeriodicImpulse' >> PeriodicImpulse(...)
  | beam.io.ReadAllFromBigQuery(table=...))

This would, however, rule out the initial example of several requests being included in a single ReadAll. Is that something that needs to be special-cased, as opposed to, say, using a Flatten?

Member Author

Regarding names - yes, that's a little confusing. The only names should be:

  • ReadFromBigQueryRequest - this is an input element for ReadAllFromBigQuery, and it represents a query or a table to be read (with a few other parameters).
  • ReadAllFromBigQuery - This is the transform that issues BQ reads.

Any other names that appear are misnamings in the example configuration.


Regarding your example - that's interesting. I recognize that what you show would be the most common use case (the same query/table every time, rather than varying ones), with the exception that some queries could be updated slightly over time (e.g. reading only the partitions from the last few days).

On the other hand, this would create two ways of using the transform and complicate the constructor (all of the parameters in ReadFromBigQueryRequest would need to be available in the constructor).

Users could build this functionality themselves, though. My feeling is that it's better to build a transform that is more composable, and provide an example for users who want the behavior you propose (sketched below). WDYT?
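
For illustration, a sketch of that composable usage (not from the PR; the PeriodicImpulse arguments are placeholders, and the request here reads a fixed table, though the lambda could just as well build a partition-limited query from each impulse):

import apache_beam as beam
from apache_beam.io.gcp.bigquery import ReadAllFromBigQuery, ReadFromBigQueryRequest
from apache_beam.transforms.periodicsequence import PeriodicImpulse

side_input = (
    p
    | 'PeriodicImpulse' >> PeriodicImpulse(
        first_timestamp, last_timestamp, interval, True)
    # Turn each impulse into a read request.
    | 'MapToReadRequest' >> beam.Map(
        lambda ts: ReadFromBigQueryRequest(table='dataset.table'))
    # Perform the BigQuery export for each request and emit the rows.
    | 'ReadAll' >> ReadAllFromBigQuery())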

Contributor

My feeling is that it's better to build a transform that is more composable, and provide an example for users trying to build the functionality you propose. WDYT?

+1.

sources_to_read, cleanup_locations = (
    pcoll
    | beam.ParDo(
        # TODO(pabloem): Make sure we have all necessary args.
Contributor

This should be resolved or attributed to a Jira issue.

Member Author

Removed. Thanks, Tyson!

define project and dataset (ex.: ``'PROJECT:DATASET.TABLE'``).
:param flatten_results:
Flattens all nested and repeated fields in the query results.
The default value is :data:`True`.
Contributor

The default here is False.

Member Author

Oops, good catch. Thanks, Tyson!

Comment on lines 220 to 244
if element.query is not None:
  bq.clean_up_temporary_dataset(self._get_project())
Contributor

Can this be moved up to the other if element.query condition? That may allow putting the yield into the for loop above, getting rid of the intermediate split_result and avoiding the additional iteration.

Member Author

That's not possible in this case. We issue a table export in export_files, and only after that has finished can we delete the dataset. But I've moved the yield above.
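
For context, a rough outline of the ordering constraint (hypothetical shape; only export_files, element.query, clean_up_temporary_dataset, and _get_project come from the snippets above, the rest is assumed):

import apache_beam as beam

class _ReadSplitSketch(beam.DoFn):
  # Illustrative outline only, not the PR's actual DoFn.
  def process(self, element):
    bq = self._get_bq_client()  # assumed helper returning a BigQuery wrapper
    # The export runs first; each resulting source is yielded as soon as it
    # is known (this is the yield that was moved up).
    for source in self._export_files(bq, element):  # assumed helper wrapping the export
      yield source
    # Only once the export has finished can the temporary dataset created
    # for a query-based request be deleted.
    if element.query is not None:
      bq.clean_up_temporary_dataset(self._get_project())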

self.table = table
self.validate()

# We use this internal object ID to generate BigQuery export directories.
Contributor

May be worth noting that there is also a UUID involved. I was worried about collisions until I read on a bit further.

Member Author

I've added this to the Pydoc of the transform.
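
Purely as illustration of the naming scheme (the exact path format below is an assumption, not the PR's code): the export location combines a UUID with the incrementing internal object ID, so the object ID alone never has to be globally unique.

import uuid

# Hypothetical layout, for illustration only.
instance_uuid = uuid.uuid4().hex  # unique per transform / pipeline instance
request_obj_id = 1                # internal, incrementing per read request
export_location = 'gs://temp-bucket/bq_read_all/%s/%s/' % (instance_uuid, request_obj_id)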

@pabloem (Member Author) commented Nov 30, 2020

Run Python 3.8 PostCommit

@pabloem (Member Author) commented Nov 30, 2020

@pabloem pabloem merged commit a1fac1d into apache:master Nov 30, 2020
@pabloem pabloem deleted the bq_si_final_change branch November 30, 2020 17:08