Throw TrinoException when Iceberg commit fails #7517
Annotations
6 errors
TestIcebergV2.testOptimizeDuringWriteOperations:
io/trino/plugin/iceberg/TestIcebergV2#L671
java.lang.RuntimeException: io.trino.testing.QueryFailedException: Failed to commit during optimize: Cannot commit, found new delete for replaced data file: GenericDataFile{content=data, file_path=local:///tpch/test_optimize_during_write_operationsqvx5g1uap7-b5bffde4ed6143dab17a22286c0ef3ed/data/20240821_225432_00167_43d6t-38f543db-6f2f-4775-aece-5304ff09e454.parquet, file_format=PARQUET, spec_id=0, partition=PartitionData{}, record_count=1, file_size_in_bytes=217, column_sizes=null, value_counts=null, null_value_counts=null, nan_value_counts=null, lower_bounds=null, upper_bounds=null, key_metadata=null, split_offsets=[4], equality_ids=null, sort_order_id=0, data_sequence_number=8, file_sequence_number=8}
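The optimize failure above surfaces Iceberg's commit-conflict error as a bare `java.lang.RuntimeException`, which is what this PR's title says should become a `TrinoException` instead. A minimal sketch of that wrapping pattern, using simplified stand-in classes in place of the real `org.apache.iceberg.exceptions.ValidationException` and `io.trino.spi.TrinoException` (the class bodies and the `ICEBERG_COMMIT_ERROR` code name here are illustrative assumptions, not the PR's actual code):

```java
public class CommitWrapSketch {
    // Stand-in for Iceberg's ValidationException, thrown on commit conflicts
    // such as "Cannot commit, found new delete for replaced data file".
    static class ValidationException extends RuntimeException {
        ValidationException(String message) { super(message); }
    }

    // Stand-in for TrinoException: carries an error-code name plus the cause,
    // so the engine can classify the failure instead of seeing a raw RuntimeException.
    static class TrinoException extends RuntimeException {
        final String errorCode;
        TrinoException(String errorCode, String message, Throwable cause) {
            super(message, cause);
            this.errorCode = errorCode;
        }
    }

    // Run the Iceberg commit and translate a conflict into a typed engine error.
    static void commitOptimize(Runnable icebergCommit) {
        try {
            icebergCommit.run();
        }
        catch (ValidationException e) {
            throw new TrinoException("ICEBERG_COMMIT_ERROR",
                    "Failed to commit during optimize: " + e.getMessage(), e);
        }
    }

    public static void main(String[] args) {
        try {
            // Simulate a concurrent delete landing on a data file being replaced.
            commitOptimize(() -> {
                throw new ValidationException("Cannot commit, found new delete for replaced data file");
            });
        }
        catch (TrinoException e) {
            System.out.println(e.errorCode + ": " + e.getMessage());
        }
    }
}
```

With this pattern the conflict reaches the client as a classified query error rather than the unwrapped `RuntimeException` shown in the annotation above.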
TestDeltaLakeDeleteCompatibility > testDeletionVectors(0: id) [groups: profile_specific_tests, delta-lake-exclude-91, delta-lake-databricks, delta-lake-oss]:
io/trino/tests/product/deltalake/TestDeltaLakeDeleteCompatibility#L243
Expected row count to be <2>, but was <3>; rows=[[1, 11], [30, -1], [2, -1]]