Flink: backport PR #10777 from 1.19 to 1.18 for sink test refactoring. #10965
Conversation
1.20 porting is not needed as the 1.20 code was based on the hash that already included PR apache#10777.
@@ -70,6 +70,7 @@ public class FlinkCatalogFactory implements CatalogFactory {
  public static final String HADOOP_CONF_DIR = "hadoop-conf-dir";
  public static final String DEFAULT_DATABASE = "default-database";
  public static final String DEFAULT_DATABASE_NAME = "default";
+ public static final String DEFAULT_CATALOG_NAME = "default_catalog";
This is a change introduced by 1.19 and used by TestBase#dropDatabase. With the refactoring, it is needed for 1.18 now.
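For illustration, a minimal Flink-free sketch of how a test clean-up helper might use the default catalog constant. The class and method names here are hypothetical (the actual TestBase code is not shown in this thread), and the helper only builds the SQL strings, since newer Flink refuses to drop the catalog that is currently in use, so the test must first switch back to the built-in `default_catalog`:

```java
import java.util.Arrays;
import java.util.List;

public class DropDatabaseSketch {
  // Mirrors the constant added in the diff above.
  static final String DEFAULT_CATALOG_NAME = "default_catalog";

  // Hypothetical helper: returns the statements a dropDatabase clean-up
  // might execute. Switching to the default catalog first avoids Flink's
  // "cannot drop the current catalog" restriction mentioned in this thread.
  static List<String> dropDatabaseSql(String catalogName, String databaseName) {
    return Arrays.asList(
        String.format("USE CATALOG %s", DEFAULT_CATALOG_NAME),
        String.format("DROP DATABASE IF EXISTS %s.%s CASCADE", catalogName, databaseName),
        String.format("DROP CATALOG IF EXISTS %s", catalogName));
  }

  public static void main(String[] args) {
    dropDatabaseSql("test_catalog", "test_db").forEach(System.out::println);
  }
}
```

In a real test base these strings would be passed to something like the `sql(...)` helper seen later in this diff; here they are just returned so the ordering is visible.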
I would like to talk about the dropCatalog-related test changes with you.
It was introduced by the 1.19 migration - there is a Flink change there which prevents dropping the current catalog. At that time we introduced this change to 1.19 only, and planned to create a better clean-up policy around the tests (I think on another PR we already touched this topic).
This causes continuous "the backport is not clean, because..." issues.
Shall we:
- Accept things as they are, and wait for the Flink 1.21 release to solve the issue, or
- Backport the clean-up logic to Flink 1.18 branch (even if it is not needed there), or
- Do a generic cleanup in the Flink tests?
WDYT?
> Accept things as they are, and wait for the Flink 1.21 release to solve the issue

I didn't quite get this point. Are you talking about leaving 1.18 untouched, so that with the roll-in of 1.21, 1.18 will be removed?

> Backport the clean-up logic to Flink 1.18 branch

I would probably favor this, since it is not a big change and it is good to minimize the difference between versions.

> not dropping the current catalog

This seems like a reasonable behavior to me, although I am not opinionated on whether it should be allowed or not.
> > Accept things as they are, and wait for the Flink 1.21 release to solve the issue
>
> I didn't quite get this point. Are you talking about leaving 1.18 untouched, so that with the roll-in of 1.21, 1.18 will be removed?

Yes. Exactly my point.

> > Backport the clean-up logic to Flink 1.18 branch
>
> I would probably favor this, since it is not a big change and it is good to minimize the difference between versions.

My point is that until this change we skipped backporting the catalog-dropping test code. I need to collect the instances to see if we would like to change our approach here.

> > not dropping the current catalog
>
> This seems like a reasonable behavior to me, although I am not opinionated on whether it should be allowed or not.

That was an intentional change in Flink, so I think we should accept it and just decide on our own testing strategy.
> until this change we skipped backporting the catalog-dropping test code. I need to collect the instances to see if we would like to change our approach here.

Is the change in this backport all we need, or is it a much broader scope than the two places that I called out in comments?
In this PR: #10776 we decided differently, and removed the dropDatabase calls from the 1.18 TestIcebergSpeculativeExecutionSupport tests.
I haven't found any other backports where the question was raised.
I think we should handle this consistently between the backports. If we backport dropDatabase here, then in this PR, or in another PR, we need to make sure it is used in TestIcebergSpeculativeExecutionSupport too.
done
    sql("DROP CATALOG %s %s", ifExists ? "IF EXISTS" : "", catalogName);
  }
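For illustration, a self-contained sketch of the statement formatting in the hunk above. The standalone class and method name are hypothetical; in the actual test base, the `sql(...)` helper presumably forwards the formatted string to Flink's `TableEnvironment.executeSql`, which is not reproduced here:

```java
public class DropCatalogSketch {
  // Same format string and ternary as in the diff above. Note that when
  // ifExists is false, the "%s %s" template leaves a double space in the
  // statement ("DROP CATALOG  foo"); SQL parsers tolerate the extra space.
  static String dropCatalogSql(String catalogName, boolean ifExists) {
    return String.format("DROP CATALOG %s %s", ifExists ? "IF EXISTS" : "", catalogName);
  }

  public static void main(String[] args) {
    System.out.println(dropCatalogSql("test_catalog", true));
  }
}
```

Using `IF EXISTS` keeps the clean-up idempotent, so a test tear-down does not fail when an earlier assertion already aborted before the catalog was created.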
this is a change introduced by 1.19. With the refactoring, this is needed for 1.18 now.
1.20 porting is not needed as the 1.20 code was based on the hash that already included PR #10777.
This is a clean port except for two places (called out in comments) that were changed when 1.19 was introduced. With the refactoring and porting, they are now needed for 1.18.