Status of testing Providers that were prepared on May 19, 2023 #31322
Confirmed #31169 (Provider cncf.kubernetes: 6.2.0rc1)
Confirmed #30655 by verifying the priority on the UI when running the following DAG:

```python
from pendulum import datetime

from airflow.decorators import dag
from airflow.providers.common.sql.operators.sql import SQLExecuteQueryOperator

default_args = {
    "owner": "example",
    "retries": 0,
}


@dag(
    description="tmp",
    start_date=datetime(2023, 5, 1),
    schedule=None,
    default_args=default_args,
)
def testing():
    SQLExecuteQueryOperator(
        task_id="test",
        conn_id="bigquery",
        sql="SELECT 2",
        hook_params={
            "use_legacy_sql": False,
            "location": "us",
            "priority": "BATCH",  # this is the new part
            "api_resource_configs": {"query": {"useQueryCache": False}},
        },
    )


testing()
```
#30829 works as expected. Tested it by passing a shareIdentifier while submitting the Batch job.
I tested #30516; it works as expected.
#30968 works as expected.
Checked and marked all mine as ✅ (except #31080, which looks good but I have no Databricks account/config to check it with). @Stormhand I would appreciate it if you could check whether #31080 is solved for you - you'd need to install https://pypi.org/project/apache-airflow-providers-databricks/4.2.0rc1 (it should pull common-sql 1.5.0rc1 automatically).
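As a convenience for anyone checking an RC, here is a tiny helper that builds the pip command for a given provider and version. This is purely illustrative, my own sketch rather than part of any Airflow tooling; it only relies on the `apache-airflow-providers-<name>` PyPI naming convention used in the links above:

```python
def rc_install_command(provider: str, version: str) -> str:
    """Build the pip command for installing a provider release candidate."""
    return f"pip install apache-airflow-providers-{provider}=={version}"


# The Databricks RC mentioned above:
print(rc_install_command("databricks", "4.2.0rc1"))
# → pip install apache-airflow-providers-databricks==4.2.0rc1
```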
Confirmed #31063, #31042, and #31062 (all of them are for Provider amazon: 8.1.0rc1) are included in the RC and they ran fine on my example DAG. Thank you for the efforts!
Tested DynamoDBToS3Operator - Add a feature to export the table to a point in time (#31142) with the below DAG:

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.amazon.aws.transfers.dynamodb_to_s3 import DynamoDBToS3Operator

with DAG(
    dag_id="example_export_dynamodb",
    schedule_interval=None,
    start_date=datetime(2021, 1, 1),
    tags=["example"],
    catchup=False,
) as dag:
    dynamodb_to_s3_operator = DynamoDBToS3Operator(
        task_id="dynamodb_to_s3",
        dynamodb_table_name="test",
        s3_bucket_name="tmp9",
        file_size=4000,
        export_time=datetime.now(),
        s3_key_prefix="test1",
    )
```
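One caveat worth noting (my own assumption, not verified against the provider code): DynamoDB point-in-time exports generally expect a timestamp that falls inside the table's PITR window, and a naive `datetime.now()` can be ambiguous across timezones. A stdlib-only sketch of a timezone-aware value slightly in the past:

```python
from datetime import datetime, timedelta, timezone

# A timezone-aware export time a few minutes in the past, so it falls
# inside the table's point-in-time-recovery window rather than ahead of it.
export_time = datetime.now(tz=timezone.utc) - timedelta(minutes=5)
```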
Thanks,
Utkarsh Sharma
#31110 works as expected

👍 for me
Issue template has been updated for RC2 |
I didn't see breaking changes in that PR. Can you specify which change you mean?
That's right. We even documented it: https://github.com/apache/airflow#semantic-versioning
If there were no breaking changes in OUR API in the providers, dependency upgrades are not breaking. This is quite a common approach - for example
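The SemVer rule being referenced boils down to: a dependency upgrade only has to be treated as breaking when the major version number changes. A toy illustration (my own sketch, not any Airflow tooling, and it deliberately ignores pre-release suffixes like `rc1`):

```python
def is_breaking_upgrade(old: str, new: str) -> bool:
    """Under SemVer, only a major-version bump signals breaking API changes."""
    return int(new.split(".")[0]) > int(old.split(".")[0])


print(is_breaking_upgrade("6.2.0", "7.0.0"))  # True: major bump, read the changelog
print(is_breaking_upgrade("1.4.0", "1.5.0"))  # False: minor bump, additive only
```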
Everything looks good.
I tested the following DAG:

```python
create_job_flow = EmrCreateJobFlowOperator(
    task_id="create_job_flow",
    job_flow_overrides=JOB_FLOW_OVERRIDES,
    aws_conn_id=AWS_CONN_ID,
)
```

but encountered the following issue
Thank you everyone. Note: we decided to release the amazon and microsoft.azure providers despite the bug reports; the reasons are explained in the mailing list thread.
I have a kind request for all contributors to the latest provider packages release: could you please help us test the RC versions of the providers?
Let us know in a comment whether your issue is addressed.
These are the providers that require testing, as some substantial changes were introduced:
- Provider amazon: 8.1.0rc2
  - StepFunctionStartExecutionOperator: get logs in case of failure (#31072): @eladkal
  - get_key methods on S3Hook (#30923): @jonshea
  - shareIdentifier in BatchOperator (#30829): @phanikumv
- Provider apache.beam: 5.1.0rc2
- Provider apache.hdfs: 4.0.0rc2
- Provider apache.hive: 6.1.0rc2
  - get_key methods on S3Hook (#30923): @jonshea
- Provider apache.pinot: 4.1.0rc2
- Provider cncf.kubernetes: 7.0.0rc2
  - KubernetesPodOperator (#29498): @hussein-awala
- Provider common.sql: 1.5.0rc2
- Provider databricks: 4.2.0rc2
  - DatabricksPartitionSensor (#30980): @harishkesavarao
- Provider dbt.cloud: 3.2.0rc2
  - DbtCloudJobRunSensor (#30968): @phanikumv
  - DbtCloudRunJobOperator (#31188): @phanikumv
- Provider elasticsearch: 4.5.0rc2
- Provider google: 10.1.0rc2
  - GCSObjectUpdateSensor (#30579): @phanikumv
  - DataflowTemplatedJobStartOperator: fix overwriting of location with default value when a region is provided (#31082): @VVildVVolf
  - GCSObjectsWithPrefixExistenceSensor (#30939): @phanikumv
  - GCSObjectsWithPrefixExistenceSensor (#30618): @phanikumv
  - use_legacy_sql param to BigQueryGetDataOperator (#31190): @shahar1
  - as_dict param to BigQueryGetDataOperator (#30887): @shahar1
  - priority parameter to BigQueryHook (#30655): @ying-w
  - GCSObjectUpdateSensor (#30920): @phanikumv
  - GCSObjectExistenceSensor (#30901): @phanikumv
  - CreateBatchPredictionJobOperator: add batch_size param for Vertex AI BatchPredictionJob objects (#31118): @VVildVVolf
- Provider microsoft.azure: 6.1.0rc2
  - WasbPrefixSensor (#30252): @phanikumv
  - AzureDataFactoryPipelineRunStatusSensor (#30983): @phanikumv
  - AzureDataFactoryRunPipelineOperator (#31214): @phanikumv
- Provider mongo: 3.2.0rc2
- Provider neo4j: 3.3.0rc2
- Provider oracle: 3.7.0rc2
- Provider pagerduty: 3.2.0rc2
- Provider redis: 3.2.0rc2
- Provider slack: 7.3.0rc2
The guidelines on how to test providers can be found in "Verify providers by contributors".
All users involved in the PRs:
@ying-w @alextgu @RachitSharma2001 @hussein-awala @jonshea @vincbeck @dacort @VVildVVolf @pankajastro @shahar1 @potiuk @vandonr-amz @ferruzzi @eladkal @utkarsharma2 @harishkesavarao @bkossakowska
@dstandish @jon-evergreen @tnk-ysk @IAL32 @nsAstro @Owen-CH-Leung @eldar-eln-bigabid @ephraimbuddy @moiseenkov @jbbqqf @attilaszombati @ahidalgob @vchiapaikeo @JCoder01 @syedahsn @pankajkoti @lwyszomi
@phanikumv