Fix TableIdentifier in TaskFileIOSupplier #2304
Conversation
We can't just convert a `TaskEntity` to an `IcebergTableLikeEntity`, as the `getTableIdentifier()` method will not return a correct value: it builds one from the name of the task and its parent namespace (which is empty?). Task handlers instead need to pass in the `TableIdentifier` that they already inferred via `TaskEntity.readData`.
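To illustrate the failure mode, a minimal sketch (Polaris imports omitted; `IcebergTableLikeEntity.of` and the exact signatures here are assumptions for illustration, not the actual Polaris API):

```java
import org.apache.iceberg.catalog.TableIdentifier;

class TableIdFromTaskSketch {
  // WRONG (the bug this PR fixes): reinterpreting the TaskEntity itself as a
  // table-like entity makes getTableIdentifier() build an identifier from the
  // task's own name and its (empty) parent namespace, e.g. "cleanup_1234".
  static TableIdentifier wrong(TaskEntity task) {
    return IcebergTableLikeEntity.of(task).getTableIdentifier();
  }

  // RIGHT (this PR): the task handler already deserialized the task payload
  // via TaskEntity.readData, and that payload carries the real identifier.
  static TableIdentifier right(TaskEntity task) {
    ManifestCleanupTask cleanupTask = task.readData(ManifestCleanupTask.class);
    return cleanupTask.tableId();
  }
}
```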
```diff
  TableIdentifier tableId = cleanupTask.tableId();
  List<String> batchFiles = cleanupTask.batchFiles();
- try (FileIO authorizedFileIO = fileIOSupplier.apply(task, callContext)) {
+ try (FileIO authorizedFileIO = fileIOSupplier.apply(task, tableId, callContext)) {
```
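For context, the shape of the supplier change implied by this diff (a sketch inferred from the call sites; the actual `TaskFileIOSupplier` interface may differ):

```java
// Sketch of TaskFileIOSupplier's functional shape, inferred from the diff above.
interface TaskFileIOSupplierSketch {
  // before: the identifier had to be derived from the TaskEntity itself
  FileIO apply(TaskEntity task, CallContext callContext);

  // after: callers pass the TableIdentifier they already read from TASK_DATA
  FileIO apply(TaskEntity task, TableIdentifier tableId, CallContext callContext);
}
```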
Should the tableId just be a part of the TaskEntity?
Depends on whether there will ever be tasks that don't operate on a single Iceberg table... currently the `TaskEntity` design seems to leave this open.
Thinking about this again: since the `TableIdentifier` is downstream only used for logging, it would seem a disproportionate amount of effort to change the `TaskEntity` just to avoid passing in the `TableIdentifier` here (where it is already available in each task handler).
Why would the existence of a task that involves multiple tables mean that a given task can't include a table name in its properties?
Sorry, I might not be fully understanding what you have been asking for.
Could you please provide a more detailed design of how we would make the tableId part of the `TaskEntity`?
And also, why would that be a better solution than simply passing in the tableId that is already available in the task handlers? (It is taken from `TaskEntity.readData`, so it's already part of the `TaskEntity` in a way.)
Ah, sorry, I completely missed that. Anyway, here is a one-line fix that doesn't change any APIs. From my debugger:

This correctly extracts the TableIdentifier (via the TableLikeEntity) from the TASK_DATA like I mentioned above.
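The fix itself was shared inline and is not preserved in this transcript; the following is only a guess at its shape, based on the `PolarisEntity.of(task.readData(PolarisBaseEntity.class))` call quoted later in this thread (`IcebergTableLikeEntity.of` is an assumption):

```java
// Hypothetical reconstruction of the proposed one-line fix, not the actual
// posted code: deserialize TASK_DATA as a table-like entity and take the
// identifier from there instead of from the TaskEntity's own name.
TableIdentifier tableId =
    IcebergTableLikeEntity.of(PolarisEntity.of(task.readData(PolarisBaseEntity.class)))
        .getTableIdentifier();
```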
> Anyway, here is a one-line fix that doesn't change any APIs

AFAICT your "one-line fix" only works for table cleanup tasks, where we happen to store an `IcebergTableLikeEntity` into the TASK_DATA:
Lines 1433 to 1437 in 1277eff

```java
Map<String, String> properties = new HashMap<>();
properties.put(
    PolarisTaskConstants.TASK_TYPE,
    String.valueOf(AsyncTaskType.ENTITY_CLEANUP_SCHEDULER.typeCode()));
properties.put("data", PolarisObjectMapperUtil.serialize(refreshEntityToDrop));
```
For other tasks it would throw an error, as we store something else there; for example:
Lines 64 to 66 in 20febda

```java
ManifestCleanupTask cleanupTask = task.readData(ManifestCleanupTask.class);
TableIdentifier tableId = cleanupTask.tableId();
try (FileIO authorizedFileIO = fileIOSupplier.apply(task, callContext)) {
```
Yeah, that's what I was trying to say above -- if there is a TableIdentifier in the TASK_DATA we should be extracting and using that. The current "cast" of a TaskEntity into a TableLikeEntity is definitely wrong, but if we can fix this the easy way we should.
If there is a task that doesn't have a TableIdentifier in TASK_DATA we should either add it there or remove the need for it (i.e. pass in null to loadFileIO and mark it Nullable) if that's easier.
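A minimal sketch of that second option (the `loadFileIO` signature here is assumed from this discussion, not taken from the actual `FileIOFactory`):

```java
import jakarta.annotation.Nullable;
import java.util.Map;
import org.apache.iceberg.catalog.TableIdentifier;
import org.apache.iceberg.io.FileIO;

interface FileIOFactorySketch {
  // The identifier is only used for logging downstream, so a task that is
  // not tied to a single table could pass null instead of inventing a name.
  FileIO loadFileIO(@Nullable TableIdentifier identifier, Map<String, String> properties);
}
```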
> if there is a TableIdentifier in the TASK_DATA we should be extracting and using that.
It seems like we have circled back to a previous point in our discussion, so I am posting my reply again:
So I can just guess that you want to add something like an `Optional<TableIdentifier> getTableIdentifier()` method to `TaskEntity`?
But when we think about how it would need to be implemented, it would mean that `TaskEntity` has to contain full knowledge of ALL TYPES of tasks and of what kind of object each one stores in TASK_DATA, in order to de-serialize them and get the `TableIdentifier`. But that same knowledge ALREADY EXISTS in the task handlers (which is where we already have the `TableIdentifier` alongside the other task parameters), so again, just letting the task handlers pass that value into the `TaskFileIOSupplier` seems like the right approach to me; see the sketch below.
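To make that concrete, a hypothetical `Optional<TableIdentifier> getTableIdentifier()` on `TaskEntity` would have to dispatch on every task type (the task-type names and `getTaskType()` are assumptions for illustration):

```java
// Hypothetical sketch only: this method would duplicate per-task-type
// knowledge that already lives in each task handler.
Optional<TableIdentifier> getTableIdentifier() {
  switch (getTaskType()) {
    case MANIFEST_FILE_CLEANUP:
      // payload is a ManifestCleanupTask
      return Optional.of(readData(ManifestCleanupTask.class).tableId());
    case ENTITY_CLEANUP_SCHEDULER:
      // payload is a serialized table-like entity
      return Optional.of(
          IcebergTableLikeEntity.of(readData(PolarisBaseEntity.class)).getTableIdentifier());
    default:
      // every new task type needs a new branch here, in addition to its handler
      return Optional.empty();
  }
}
```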
I still don't have an answer as to why changing `TaskFileIOSupplier` is a no-go, when all callers have the `TableIdentifier` already and the underlying `FileIOFactory` (in its current form) requires it.
snazy left a comment:
LGTM
dimas-b left a comment:
LGTM 👍
```diff
- private boolean taskEntityIsTable(TaskEntity task) {
-   PolarisEntity entity = PolarisEntity.of((task.readData(PolarisBaseEntity.class)));
-   return entity.getType().equals(PolarisEntityType.TABLE_LIKE);
+ private Optional<IcebergTableLikeEntity> tryGetTableEntity(TaskEntity task) {
```
nit: the "try" prefix makes me wonder whether the method may or may not produce an "effect" (as in `Semaphore.tryAcquire()`), but in fact it always operates the same way, without side effects... I'd prefer something like `asTableEntity(TaskEntity task)` or `asOptionalTableEntity()`.
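i.e., roughly (a sketch of the suggested rename only; `IcebergTableLikeEntity.of` is assumed as the conversion factory):

```java
// "as..." signals a side-effect-free conversion that may simply be empty,
// unlike the "try..." prefix, which suggests an attempted effect.
private Optional<IcebergTableLikeEntity> asTableEntity(TaskEntity task) {
  PolarisEntity entity = PolarisEntity.of(task.readData(PolarisBaseEntity.class));
  return entity.getType().equals(PolarisEntityType.TABLE_LIKE)
      ? Optional.of(IcebergTableLikeEntity.of(entity))
      : Optional.empty();
}
```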