Reading snapshot of table uses current schema #1501

@wypoon

Description

I am new to Iceberg. When I do

val df = spark.read.format("iceberg").option("snapshot-id", snapshotId).load(path)

where spark is a SparkSession, df has the current schema of the table. This can be seen when an action that causes df to be evaluated is performed, such as

df.show()

Is this the expected behavior?
In my case, I altered the table (either adding a column or removing a column) and then tried to read an old snapshot from before the table was altered. I expected to get the table as it existed at the time of the snapshot, with the columns it had then.
Is there some conceptual or technical reason why the behavior is the way it is?
I have tried out some changes that cause reading the snapshot from Spark to behave the way I expect (using the schema at the time of the snapshot rather than the current schema). I'd be happy to create a PR.
Or perhaps we could have different behaviors governed by a flag or option.
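
The reproduction above can be sketched as follows. This is only a sketch, not a verified test: `path` and `snapshotId` are placeholders for an existing Iceberg table location and a snapshot ID recorded before the schema change, and running it requires a live Spark session with the Iceberg runtime on the classpath.

```scala
import org.apache.spark.sql.SparkSession

// Assumes an Iceberg table at `path` and a snapshot ID captured *before*
// the table's schema was altered (a column added or removed).
val spark = SparkSession.builder().getOrCreate()
val path: String = "/path/to/table" // placeholder
val snapshotId: Long = 1234567890L  // placeholder

// Read the pre-alter snapshot.
val df = spark.read
  .format("iceberg")
  .option("snapshot-id", snapshotId)
  .load(path)

// Observed behavior: the printed schema reflects the table's *current*
// columns, not the columns as they were at snapshotId.
df.printSchema()
```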
