The connector will ensure that the dataset ID (i.e. stream namespace) starts with an alphanumeric character - https://github.com/airbytehq/airbyte/blob/master/airbyte-integrations/connectors/destination-bigquery/src/main/java/io/airbyte/integrations/destination/bigquery/BigQuerySQLNameTransformer.java#L38 - but normalization doesn't do this. So if you configure a custom stream namespace that starts with an underscore (e.g. _foo), then destination-bigquery will create the raw tables in the n_foo dataset, but normalization will search for them in an _foo dataset.
We probably want to add the "n" prefix in normalization. (This stops being a problem once normalization runs in Java, since we can just pass around a single BigQueryWriteConfig - but that's not an immediate solution.)
(Dataset names are allowed to start with an underscore, but they won't show up in the UI - https://cloud.google.com/bigquery/docs/datasets#dataset-naming)
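To make the mismatch concrete, here is a minimal sketch of the kind of namespace rewriting the destination applies (the class and method names here are illustrative, not the actual implementation in BigQuerySQLNameTransformer): namespaces that don't start with an alphanumeric character get an "n" prepended, which is exactly the step normalization currently skips.

```java
// Hypothetical sketch of the destination-side namespace fix. Assumed
// behavior based on the issue description; not the real transformer code.
public class NamespaceSketch {
    // BigQuery hides datasets whose names start with an underscore, so the
    // destination prepends "n" to any namespace that doesn't start with an
    // alphanumeric character. Normalization does not apply this rule, so it
    // looks for "_foo" while the raw tables live in "n_foo".
    static String normalizeNamespace(String namespace) {
        if (!namespace.isEmpty() && !Character.isLetterOrDigit(namespace.charAt(0))) {
            return "n" + namespace;
        }
        return namespace;
    }

    public static void main(String[] args) {
        System.out.println(normalizeNamespace("_foo")); // n_foo
        System.out.println(normalizeNamespace("foo"));  // foo
    }
}
```

Any fix would need normalization to apply this same rule (or the rule to be removed from both sides) so the two agree on the dataset name.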
#3 won't be fixed by a platform change, so we still need to handle this.
Appending "n" is silly to begin with; we should at least append "Airbyte" or something meaningful.
The plan:
1. In Metabase, confirm that case #3 is rare.
2. If checking catalogs shows that case #3 really is rare, this fix belongs in the platform, so that "" namespaces are never left empty and we append something meaningful (the name of the source?). We make this change as part of this story.
3. Remove the "n_" hack the destination has now.
4. Add docs educating BigQuery users that datasets starting with an underscore are hidden in the UI (add to the destination docs).
5. Migration plan: decide what we tell existing users (use the output of step 1).
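A sketch of what step 2's platform-side rule might look like, combining the "meaningful prefix" suggestion from the comments with the empty-namespace fallback. All names here are assumptions for illustration, not the actual platform code:

```java
// Hypothetical sketch of the proposed platform rule: never emit an empty
// namespace, and use a meaningful "airbyte" prefix instead of the bare "n"
// when a namespace would otherwise start with a non-alphanumeric character.
public class MeaningfulPrefixSketch {
    static String resolveNamespace(String namespace) {
        if (namespace == null || namespace.isEmpty()) {
            // Never leave a namespace empty; fall back to a labeled default.
            // The exact default (e.g. the source name) is an open question.
            return "airbyte_default";
        }
        if (!Character.isLetterOrDigit(namespace.charAt(0))) {
            // Keep underscore-prefixed datasets visible in the BigQuery UI.
            return "airbyte" + namespace;
        }
        return namespace;
    }
}
```

Because this runs in the platform, both the destination and normalization would see the same resolved namespace, which removes the disagreement described above.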