Status of testing Providers that were prepared on August 10, 2022 #25640
Comments
All good regarding
Tested the below two and they work fine with microsoft.azure: 4.2.0rc3
Tested both Azure Service Bus (Update and Receive) Subscription Operators, working fine 👍
Hi! I've found an issue with @alexott I think it might be interesting for you. More info below:
I ran it on
Tell me if you need more detail.
@jgr-trackunit oh, this field was introduced in 2.3.0 :-( I think I need to fix it before releasing.
@potiuk unfortunately, the Databricks provider became incompatible with 2.2. I'm preparing a fix for it, but it will be a separate release. Sorry for adding more work for you :-(
No problem. Good to know :). This is what testing is about @alexott :).
Thanks @jgr-trackunit for spotting it.
@potiuk I am not sure if this is the right place to put it, but here is the deal: #24554 added
You did ask me, as a contributor of the feature, to test it for AWS provider amazon: 4.1.0rc1 in #25037 (comment). Unfortunately I did not have time to do so in time for the release. Taking advantage of this 5.0.0 release for Amazon Providers, I tested the feature.
What has been tested
Given
Result
read_from_queue = SqsSensor(
aws_conn_id="aws_sqs_test",
task_id="read_from_queue",
sqs_queue=sqs_queue,
)
# Retrieve multiple batches of messages from SQS.
# The SQS API only returns a maximum of 10 messages per poll.
read_from_queue_in_batch = SqsSensor(
aws_conn_id="aws_sqs_test",
task_id="read_from_queue_in_batch",
sqs_queue=sqs_queue,
# Get a maximum of 3 messages each poll
max_messages=3,
# Combine 3 polls before returning results
num_batches=3,
)
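As a rough illustration of what the batched sensor above is doing (an assumption about the semantics, not the provider's internal code), combining `num_batches` polls lets a single sensor run return more messages than the 10-per-call SQS limit:

```python
# Sketch: poll `num_batches` times and combine the results.
# `poll` stands in for a single SQS ReceiveMessage call (hypothetical helper).
def poll_in_batches(poll, max_messages: int, num_batches: int):
    messages = []
    for _ in range(num_batches):
        messages.extend(poll(max_messages))
    return messages

# Fake queue returning at most `n` messages per call, for demonstration only.
queue = list(range(7))
def fake_poll(n):
    batch, queue[:] = queue[:n], queue[n:]
    return batch

print(poll_in_batches(fake_poll, 3, 3))  # -> [0, 1, 2, 3, 4, 5, 6]
```

With `max_messages=3` and `num_batches=3`, a single poke can thus return up to 9 messages.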
Opened #25674 to fix the issue with the DB provider.
@LaPetiteSouris - thank you! This is cool to get it confirmed even now!
Found another issue with Databricks provider - DBSQL operator doesn't work anymore, most probably caused by #23971 - it looks like
No worries @alexott - I will anyhow have to wait with rc4 for Databricks till after this voting completes.
My concern is that this change in common-sql may affect other packages - I see it in the Drill, Exasol, Presto,
Hashicorp provider change appears to be working as expected for me.
If there is a dag/operator you want to verify with Presto, you can add it here and I'll check.
I don't have something to test, but I'm concerned that if it broke Databricks SQL, might it break others as well?
WebHDFS worked for me.
@alexott Can you make a PR fixing it in Databricks so that we can see how the problem manifests? I can take a look at the others and assess whether there is potential for breaking other providers.
Yes, will do, most probably on Saturday...
Just looked it up @alexott -> I do not think it is breaking other providers (@kazanzhy to confirm). Only Databricks and Snowflake hooks have BTW. Looking at the change, I think the problem might be when the query contains ";" followed by whitespace and EOL. The old regexp and .strip() would remove such an "empty" statement, where the new one would likely not do it. This is the method introduced:
Thank you for looking into it. I'll debug it. My query is just a single select without
Right - I see. I think the mistake is that it should be (@kazanzhy?):
That would actually make me think to remove common.sql and all the dependent packages and release rc4 together, because indeed any query without ";" passed with "split_statement" will not work, which makes it quite problematic. Update: added handling of whitespace that "potentially" might be returned (though this is just defensive -> sqlparse.split() should handle it, but better to be safe than sorry). Also, whether this is a bug or not depends a bit on sqlparse's behaviour.
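To make the failure mode being discussed concrete, here is a small self-contained sketch. Both functions are hypothetical reconstructions for illustration, not the actual common-sql code: the old-style splitter strips an "empty" trailing statement after a final ";", while a splitter that only emits ";"-terminated statements silently drops a query with no trailing semicolon:

```python
import re

def split_old(sql: str):
    # Sketch of the old behaviour: split on ";", strip, drop empty pieces,
    # so a trailing ";" plus whitespace leaves no "empty" statement behind.
    return [s.strip() for s in sql.split(";") if s.strip()]

def split_terminated_only(sql: str):
    # Hypothetical illustration of the regression: only ";"-terminated
    # statements are emitted, so a final statement without ";" is lost.
    return [m.group(1).strip() for m in re.finditer(r"([^;]+);", sql)]

assert split_old("SELECT 1;\n") == ["SELECT 1"]
assert split_terminated_only("SELECT 1") == []                # query lost
assert split_terminated_only("SELECT 1; SELECT 2") == ["SELECT 1"]
```

This is why a plain single SELECT with no ";" would return no statements at all under the second behaviour.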
Yep. Confirmed this looks like a bug for all SQL - probably safer to make rc4 for all of them. Thanks @alexott for being vigilant :) - @kazanzhy - will you have time to take a look and double-check my findings and fix it before Monday ?
I tested all my changes (mostly checking if the code I moved around is there). Looking for more tests :) |
@potiuk is the PR open for it? If yes, I can test it tomorrow morning...
Tested #25619, working as expected.
I'm currently testing my implementation for the Amazon Provider package change #25432. The one thing I have noticed so far is that the
I didn't notice any other issues. Another thing I learned while testing: MWAA (AWS managed Airflow service) locks to provider package version 2.4.0. It may be worth double-checking that the documentation clarifies that the requirement to not URL-encode is a new 5.0.0 feature. There may be a lot of overlap between people using the provider package and people using MWAA. They may find it confusing to see that the latest version of the documentation mentions you can do something that doesn't work in their environment.
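For context on the URL-encoding point (a generic illustration, not the provider's code): values embedded in an Airflow connection URI have traditionally needed URL-encoding, so a hypothetical value containing URI-special characters round-trips like this:

```python
from urllib.parse import quote, unquote

# Hypothetical connection value with URI-special characters.
password = "p@ss/word"
encoded = quote(password, safe="")   # 'p%40ss%2Fword'
assert unquote(encoded) == password

# Per the comment above, older provider versions expected such pre-encoded
# values, while 5.0.0 introduces the option not to URL-encode.
```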
The nice thing is that the docs in providers are nicely linked in the UI to the version that is installed. I think we also have a nice changelog describing the differences; I am not sure if we need to do more. But any docs clarifications are welcome :)
FYI. Vote is closed. I will be removing databricks and also other SQL providers (that depend on common-sql 1.1) from the vote and prepare RC4 after fixing the problems found.
Closing the issue - thanks for help Everyone! |
Body
I have a kind request for all the contributors to the latest provider packages release.
Could you please help us to test the RC versions of the providers?
Let us know in a comment whether the issue is addressed.
Those are providers that require testing as there were some substantial changes introduced:
Provider amazon: 5.0.0rc3
- `region_name` and `config` in wrapper (#25336): @Taragolis
- `extra[host]` in AWS's connection (#25494): @gmcrocetti
Provider apache.drill: 2.2.0rc3
Provider apache.druid: 3.2.0rc3
Provider apache.hdfs: 3.1.0rc3
Provider apache.hive: 4.0.0rc3
Provider apache.livy: 3.1.0rc3
Provider apache.pinot: 3.2.0rc3
Provider cncf.kubernetes: 4.3.0rc3
Provider common.sql: 1.1.0rc3
Provider databricks: 3.2.0rc3
Provider dbt.cloud: 2.1.0rc3
Provider elasticsearch: 4.2.0rc3
Provider exasol: 4.0.0rc3
Provider google: 8.3.0rc3
Provider hashicorp: 3.1.0rc3
Provider jdbc: 3.2.0rc3
Provider microsoft.azure: 4.2.0rc3
- `test_connection` method to AzureContainerInstanceHook (#25362): @phanikumv
Provider microsoft.mssql: 3.2.0rc3
Provider mysql: 3.2.0rc3
Provider neo4j: 3.1.0rc3
Provider odbc: 3.1.1rc3
Provider oracle: 3.3.0rc3
Provider postgres: 5.2.0rc3
Provider presto: 4.0.0rc3
- `PrestoToSlackOperator` (#25425): @eladkal
Provider qubole: 3.2.0rc3
- `results_parser_callable` parameter in Qubole operator docs (#25514): @josh-fell
Provider salesforce: 5.1.0rc3
Provider snowflake: 3.2.0rc3
Provider sqlite: 3.2.0rc3
Provider trino: 4.0.0rc3
Provider vertica: 3.2.0rc3
Provider yandex: 3.1.0rc3
The guidelines on how to test providers can be found in Verify providers by contributors.