
Conversation

@SamWheating

Closes: #14953 - see this issue for a larger description and reproduction.

It's assumed that wap.id will be unique among snapshots, but this doesn't appear to be enforced anywhere, which can lead to unexpected results: only the first matching write is actually published.

This PR updates the publish_changes procedure to fail when multiple matching snapshots are identified.
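To illustrate the behavioral change, here is a minimal self-contained sketch. It uses a toy Snapshot record and plain IllegalArgumentExceptions rather than the actual Iceberg procedure code; only the summary key "wap.id" is taken from Iceberg's conventions.

```java
import java.util.List;
import java.util.Map;

public class WapPublishSketch {
  // Toy stand-in for an Iceberg snapshot: just an id and a summary map.
  record Snapshot(long snapshotId, Map<String, String> summary) {}

  // Old behavior: the first snapshot whose summary carries the WAP ID wins,
  // silently ignoring any later snapshots staged under the same ID.
  static long firstMatchWins(List<Snapshot> snapshots, String wapId) {
    return snapshots.stream()
        .filter(s -> wapId.equals(s.summary().get("wap.id")))
        .findFirst()
        .map(Snapshot::snapshotId)
        .orElseThrow(() -> new IllegalArgumentException(
            "Cannot apply unknown WAP ID '" + wapId + "'"));
  }

  // New behavior: fail loudly when the WAP ID is ambiguous.
  static long failOnDuplicate(List<Snapshot> snapshots, String wapId) {
    List<Snapshot> matches = snapshots.stream()
        .filter(s -> wapId.equals(s.summary().get("wap.id")))
        .toList();
    if (matches.isEmpty()) {
      throw new IllegalArgumentException("Cannot apply unknown WAP ID '" + wapId + "'");
    }
    if (matches.size() > 1) {
      throw new IllegalArgumentException(
          "Cannot apply non-unique WAP ID. Found " + matches.size()
              + " snapshots with WAP ID '" + wapId + "'");
    }
    return matches.get(0).snapshotId();
  }

  public static void main(String[] args) {
    List<Snapshot> snapshots = List.of(
        new Snapshot(1L, Map.of("wap.id", "wap-a")),
        new Snapshot(2L, Map.of("wap.id", "wap-a")), // duplicate, previously ignored
        new Snapshot(3L, Map.of("wap.id", "wap-b")));

    System.out.println(firstMatchWins(snapshots, "wap-a")); // old: silently picks snapshot 1
    try {
      failOnDuplicate(snapshots, "wap-a"); // new: rejects the ambiguous ID
    } catch (IllegalArgumentException e) {
      System.out.println(e.getMessage());
    }
    System.out.println(failOnDuplicate(snapshots, "wap-b")); // unique IDs still resolve
  }
}
```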

If this change is approved, I will backport it to the other Spark versions.

@SamWheating SamWheating force-pushed the sw-fail-publish-changes-on-duplicate-wap-id branch from 0699a5f to 7ed4b84 Compare January 2, 2026 22:27
@github-actions github-actions bot added the spark label Jan 2, 2026
@SamWheating SamWheating changed the title Fail publish_changes procedure if there's more than one matching snapshot Spark: Fail publish_changes procedure if there's more than one matching snapshot Jan 2, 2026
Comment on lines 116 to 117
throw new ValidationException(
    "Cannot apply non-unique WAP ID. Found %d snapshots with WAP ID '%s'",
    numMatchingSnapshots, wapId);
Contributor


I wonder if we should prevent this situation (two snapshots with the same WAP ID) from arising in the first place, at snapshot creation time?

Author

@SamWheating SamWheating Jan 3, 2026


I don't have a strong opinion here, but this might be considered a more significant / potentially breaking change? Technically having a duplicate WAP ID doesn't cause any problems until they are cherry-picked into main.

Do you think there might be legitimate uses for staging multiple changes under the same WAP ID? For example:

  • staging multiple changes, evaluating all of them separately and then deleting all but one before committing.
  • creating staged snapshots which are never intended to be published (for testing / evaluation / etc)

I am not super familiar with the original design behind WAP in Iceberg, so I'll look through older commits to see if there's any mention of a uniqueness constraint.

@SamWheating SamWheating force-pushed the sw-fail-publish-changes-on-duplicate-wap-id branch from 7ed4b84 to df314e9 Compare January 3, 2026 00:47
@manuzhang manuzhang changed the title Spark: Fail publish_changes procedure if there's more than one matching snapshot Spark 4.1: Fail publish_changes procedure if there's more than one matching snapshot Jan 5, 2026
@SamWheating SamWheating requested a review from singhpk234 January 6, 2026 17:33
Contributor

@singhpk234 singhpk234 left a comment


LGTM, thanks @SamWheating !

Let's give it some time before we check it in, in case other folks have feedback on this.

@SamWheating
Author

Thanks @singhpk234 !

What's the preferred approach for applying this change to previous Spark versions? Should I wait until this is approved and merged before creating a single backport PR for all of them?

-    if (!wapSnapshot.isPresent()) {
-      throw new ValidationException("Cannot apply unknown WAP ID '%s'", wapId);
+    Iterable<Snapshot> wapSnapshots =
+        Iterables.filter(
Contributor


nit: Instead of filtering all matching snapshots, could we scan table.snapshots() once and fail as soon as we see a 2nd match (avoid full-history scan)?

Author


This is a good point, I've rewritten the procedure to early-exit on the first conflicting snapshot.
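A rough sketch of that early-exit shape, under the same caveat that this is a simplified stand-in (a toy Snapshot record, not Iceberg's actual types or the PR's code): a single pass remembers the first match and fails as soon as a second one appears, instead of materializing the full filtered list.

```java
import java.util.List;
import java.util.Map;

public class EarlyExitScan {
  // Toy stand-in for an Iceberg snapshot.
  record Snapshot(long snapshotId, Map<String, String> summary) {}

  // One pass over the snapshot history: keep the first match, throw the
  // moment a second match is seen, and throw if no match was found at all.
  static Snapshot findWapSnapshot(Iterable<Snapshot> snapshots, String wapId) {
    Snapshot match = null;
    for (Snapshot snapshot : snapshots) {
      if (wapId.equals(snapshot.summary().get("wap.id"))) {
        if (match != null) {
          throw new IllegalStateException(
              "Cannot apply non-unique WAP ID '" + wapId + "'");
        }
        match = snapshot;
      }
    }
    if (match == null) {
      throw new IllegalStateException("Cannot apply unknown WAP ID '" + wapId + "'");
    }
    return match;
  }

  public static void main(String[] args) {
    List<Snapshot> history = List.of(
        new Snapshot(1L, Map.of("wap.id", "x")),
        new Snapshot(2L, Map.of()));
    System.out.println(findWapSnapshot(history, "x").snapshotId()); // unique match resolves
  }
}
```

Note that confirming uniqueness still requires visiting the whole history in the success case; the early exit only saves work when a duplicate actually exists.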

@huaxingao
Contributor

Since this changes publish_changes from “first match wins” to failing when wap.id is ambiguous, should we document this so upgrades don’t surprise users?

@SamWheating
Author

SamWheating commented Jan 19, 2026

> Since this changes publish_changes from “first match wins” to failing when wap.id is ambiguous, should we document this so upgrades don’t surprise users?

Definitely, but in that case we should also ensure that all of the different Spark distributions are updated to be consistent with the docs (not just 4.1). I will backport this change to the other versions, update the docs, and re-request review.

Actually, maybe I should get some feedback on this code before I replicate it into 4 different places 😂 If the code + doc changes look good I will add the backports.

@SamWheating
Author

SamWheating commented Jan 19, 2026

@huaxingao could you take a look at the updated procedure and let me know what you think? If this looks good I will make another commit to backport the procedure change to the other distributions.

Contributor

@huaxingao huaxingao left a comment


LGTM

@SamWheating SamWheating changed the title Spark 4.1: Fail publish_changes procedure if there's more than one matching snapshot Spark 4.1 | 4.0 | 3.5 | 3.4: Fail publish_changes procedure if there's more than one matching snapshot Jan 27, 2026
@SamWheating
Author

Thanks for the reviews @huaxingao and @singhpk234. I have backported the fix to the other Spark versions now, so everything should be consistent with the updated documentation.

Let me know if there's anything else I can do to help get this merged!



Successfully merging this pull request may close these issues.

publish_changes spark procedure only cherry-picks a single snapshot when there are multiple with the same wap.id
