What feature would you like to be added?
I am using the SparkKubernetesOperator to load data from S3 to BigQuery. I would like to create the ingestion job in BQ project A, but write the data to a table in BQ project B.
Why is this needed?
This is needed to have control over BQ slot allocation for the Spark job via project A, instead of sharing resources with other workloads in project B. However, the data must remain in project B.
Describe the solution you would like
When creating the Spark job, instead of taking the project ID from the destination table, add a feature that reads the project ID from one of:
(1) the environment
(2) the service account
(3) a configuration when the operator is called
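A minimal sketch of the resolution logic the three options above describe, kept independent of Airflow so it is easy to test. The helper name, the precedence order (explicit config first, then service-account key file, then environment), the key-file format, and the `BQ_BILLING_PROJECT` variable name are all assumptions for illustration, not existing API:

```python
import json
import os


def resolve_billing_project(explicit=None, sa_key_path=None,
                            env_var="BQ_BILLING_PROJECT"):
    """Resolve the BigQuery project used to run (and bill) the ingestion job.

    Precedence (one possible ordering):
      1. an explicit value passed when the operator is called,
      2. the project_id field of a service-account key file,
      3. an environment variable.
    Returns None if nothing is configured.
    """
    if explicit:
        return explicit
    if sa_key_path and os.path.exists(sa_key_path):
        with open(sa_key_path) as f:
            # GCP service-account key files carry a "project_id" field.
            return json.load(f).get("project_id")
    return os.environ.get(env_var)
```

If the job uses the spark-bigquery-connector, the resolved ID could then be passed as that connector's `parentProject` option, which already separates the project that runs and bills the job from the project that holds the destination table.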
Describe alternatives you have considered
Additional context
No response