provider/google: BigQuery Table #13743

Merged (10 commits) on May 10, 2017
Conversation

@heimweh (Contributor) commented Apr 18, 2017:

This PR adds support for BigQuery tables.

This PR is dependent on the resource and changes in #13436.

@heimweh changed the title from "[WIP] provider/google: BigQuery Table" to "provider/google: BigQuery Table" on Apr 21, 2017

@danawillow (Contributor) left a comment:

Thanks @heimweh!

},

// Description: [Optional] The field description. The maximum length is
// 16K characters.

@danawillow (Contributor):

Just curious, how do you know the max length? I don't see it documented.


// Schema: [Optional] Describes the schema of this table.
"schema": {
Type: schema.TypeString,

@danawillow (Contributor):

Why a JSON string instead of a nested object?

@heimweh (Contributor, author):

I left this as a JSON string, because I wasn't sure what would be the preferred way.
I'd be happy to replace this and add support for defining the schema in Terraform directly instead.

I'll take a look and see if I can get that to work :)

@heimweh (Contributor, author):

Hi @danawillow, sorry for the delay here.

I started working on this, but since TableSchema (https://godoc.org/google.golang.org/api/bigquery/v2#TableFieldSchema) can have an arbitrary depth of record fields, I'm not entirely sure I can get this to work with helper/schema.

With the current JSON string implementation, one can either define the schema inline or, for more complex schemas, use the ${file()} interpolation function.

I'm not quite sure how to proceed here, so if you have any ideas please let me know :)

@danawillow (Contributor):

Ah, I didn't realize the nesting could be arbitrarily deep. I think this is the best way, then. Thanks for the clarification!
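
For context, here is a minimal sketch of how a JSON schema string can be expanded into the bigquery/v2 Go types. The helper name expandSchema and the sample schema are illustrative assumptions, not necessarily the code merged in this PR:

```go
package main

import (
	"encoding/json"
	"fmt"

	bigquery "google.golang.org/api/bigquery/v2"
)

// expandSchema turns a JSON schema string into a *bigquery.TableSchema.
// Nested RECORD fields need no special handling: TableFieldSchema.Fields is
// itself a []*TableFieldSchema, so json.Unmarshal recurses to any depth.
func expandSchema(raw string) (*bigquery.TableSchema, error) {
	var fields []*bigquery.TableFieldSchema
	if err := json.Unmarshal([]byte(raw), &fields); err != nil {
		return nil, fmt.Errorf("error parsing schema JSON: %v", err)
	}
	return &bigquery.TableSchema{Fields: fields}, nil
}

func main() {
	raw := `[
		{"name": "name",    "type": "STRING", "mode": "NULLABLE"},
		{"name": "address", "type": "RECORD", "fields": [
			{"name": "city", "type": "STRING"},
			{"name": "zip",  "type": "STRING"}
		]}
	]`
	schema, err := expandSchema(raw)
	if err != nil {
		panic(err)
	}
	fmt.Printf("top-level fields: %d\n", len(schema.Fields)) // top-level fields: 2
}
```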

"time_partitioning": &schema.Schema{
Type: schema.TypeList,
Optional: true,
MinItems: 1,

@danawillow (Contributor):

Having a MinItems: 1 on an Optional field is a bit confusing. Can you make it one or the other?
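
For context, single-element nested blocks in helper/schema are often declared as Optional together with MaxItems: 1. A sketch of that shape (illustrative only, not necessarily what this PR ended up with):

```go
"time_partitioning": &schema.Schema{
	Type:     schema.TypeList,
	Optional: true,
	MaxItems: 1, // at most one block; omit it entirely when unused
	Elem: &schema.Resource{
		Schema: map[string]*schema.Schema{
			// Per the resource docs below, DAY is currently the only supported type.
			"type": &schema.Schema{
				Type:     schema.TypeString,
				Required: true,
			},
		},
	},
},
```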


func resourceBigQueryTableParseID(id string) (string, string, string) {
    // projectID, datasetID, tableID
    parts := strings.FieldsFunc(id, func(r rune) bool { return r == ':' || r == '.' })

@danawillow (Contributor):

Can you point me to where this is documented? All I'm seeing for id is "An opaque ID uniquely identifying the table", which makes me think it doesn't have a guaranteed format.

@heimweh (Contributor, author):

Good catch! 👍 I must have missed that part of the documentation.
While it seems to work, I totally agree that it doesn't seem to be of a guaranteed format.
I'll push a fix for this.

@danawillow (Contributor):

Have you gotten to this yet? It probably works as-is so I'm not too concerned, but it would be nice to have some confidence.
(A pretty easy fix would be to store the expected ID as the ID instead of the returned one.)

@heimweh (Contributor, author):

@danawillow thanks for the quick feedback :)
I totally agree, just pushed a fix that should set the expected ID and format.
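
For context, a sketch of what "storing the expected ID" can look like: build the ID from parts the provider already knows, so parsing it back is well-defined. The helper names and the exact "project:dataset.table" format are assumptions, not necessarily the merged code:

```go
package main

import (
	"fmt"
	"strings"
)

// buildTableID composes the resource ID from parts the provider already
// knows, rather than trusting the API's opaque table ID.
func buildTableID(project, dataset, table string) string {
	return fmt.Sprintf("%s:%s.%s", project, dataset, table)
}

// parseTableID splits an ID of the assumed form "project:dataset.table".
func parseTableID(id string) (project, dataset, table string, err error) {
	parts := strings.FieldsFunc(id, func(r rune) bool { return r == ':' || r == '.' })
	if len(parts) != 3 {
		return "", "", "", fmt.Errorf("invalid ID %q, expected project:dataset.table", id)
	}
	return parts[0], parts[1], parts[2], nil
}

func main() {
	id := buildTableID("my-project", "my_dataset", "my_table")
	p, d, t, err := parseTableID(id)
	if err != nil {
		panic(err)
	}
	fmt.Println(p, d, t) // my-project my_dataset my_table
}
```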


* `type` - (Required) The only type supported is DAY, which will generate
one partition per day based on data loading time.
## Attributes Reference

@danawillow (Contributor):

nit: newline

@danawillow (Contributor):

Thanks, LGTM! Merging now so this gets in before 0.9.5 :)

@danawillow merged commit 9517d80 into hashicorp:master on May 10, 2017
@heimweh deleted the gcpbigquery-table branch on May 11, 2017 at 06:55

@heimweh (Contributor, author) commented May 11, 2017:

Awesome! :) thanks @danawillow!

@ghost commented Apr 12, 2020:

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost locked and limited the conversation to collaborators on Apr 12, 2020