Flink: Document watermark generation feature #9179

Merged Dec 5, 2023 (3 commits)
Changes from 1 commit
docs/flink-queries.md: 52 additions & 0 deletions

@@ -277,6 +277,58 @@ DataStream<Row> stream = env.fromSource(source, WatermarkStrategy.noWatermarks()
"Iceberg Source as Avro GenericRecord", new GenericRecordAvroTypeInfo(avroSchema));
```

### Emitting watermarks
Emitting watermarks from the source itself could be beneficial for several purposes, like harnessing the
[Flink Watermark Alignment](https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/dev/datastream/event-time/generating_watermarks/#watermark-alignment)
feature to prevent runaway readers, or providing triggers for [Flink windowing](https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/dev/datastream/operators/windows/).

Contributor:
I don't think we should call out windows specifically as the benefit of emitting watermarks from the source itself. Any event-time and watermark strategy will have window triggers.

Contributor:
I would also remove "prevent runaway readers", as it may not be very clear to users what "runaway readers" means.

Contributor (Author):
I think it is very important to understand that windowing and watermark generation based on records could cause surprising results, especially with batch reads or in backfill situations. Without this feature there is no guarantee on the order in which the files are read. Window triggering only becomes reliable when the source controls the emitted watermarks.

I am not sure how detailed the description should be, but I think it is important to note it here, so I am open to suggestions if you think we should add more detail.

Contributor (stevenzwu, Nov 30, 2023):
This feature produces better watermarks. But I think we don't have to explicitly say windowing here. If someone uses low-level event-time timers for stateful processing, this also helps.


Enable watermark generation for an `IcebergSource` by setting the `watermarkColumn`.
The supported column types are `timestamp`, `timestamptz` and `long`.
Timestamp columns are automatically converted to milliseconds since the Java epoch of
1970-01-01T00:00:00Z. Use `watermarkTimeUnit` to configure the conversion for long columns.

The watermarks are generated based on column metrics stored for data files and emitted once per split.
When using watermarks for Flink watermark alignment, set `read.split.open-file-cost` to prevent
combining multiple files into a single split.
By default, the column metrics are collected for the first 100 columns of the table. Use [write properties](configuration.md#write-properties) starting with `write.metadata.metrics` when needed.
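If the watermark column is not covered by the collected metrics, one option is to enable full metrics for just that column. Below is a minimal sketch, reusing the hypothetical table path and the `timestamp_column` name from the examples that follow:

```java
// Make sure column metrics are collected for the watermark column by setting a
// column-level metrics mode on the table. The table path and column name are the
// placeholder values used in the examples below.
TableLoader tableLoader = TableLoader.fromHadoopTable("hdfs://nn:8020/warehouse/path");
tableLoader.open();
Table table = tableLoader.loadTable();
table
    .updateProperties()
    .set("write.metadata.metrics.column.timestamp_column", "full")
    .commit();
```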

```java
StreamExecutionEnvironment env = StreamExecutionEnvironment.createLocalEnvironment();
TableLoader tableLoader = TableLoader.fromHadoopTable("hdfs://nn:8020/warehouse/path");

// For windowing
DataStream<RowData> stream =
    env.fromSource(
        IcebergSource.forRowData()
            .tableLoader(tableLoader)
            // Watermark using timestamp column
            .watermarkColumn("timestamp_column")
            .build(),
        // Watermarks are generated by the source, no need to generate it manually
        WatermarkStrategy.<RowData>noWatermarks()
            // Extract event timestamp from records
            .withTimestampAssigner((record, eventTime) -> record.getTimestamp(pos, precision).getMillisecond()),
        SOURCE_NAME,
        TypeInformation.of(RowData.class));

// For watermark alignment
DataStream<RowData> stream =
    env.fromSource(
        IcebergSource.forRowData()
            .tableLoader(tableLoader)
            // Disable combining multiple files to a single split
            .set(FlinkReadOptions.SPLIT_FILE_OPEN_COST, String.valueOf(TableProperties.SPLIT_SIZE_DEFAULT))
            // Watermark using long column
            .watermarkColumn("long_column")
            .watermarkTimeUnit(TimeUnit.MILLI_SCALE)

Contributor:

I'd include this in the previous example. I read this as a more advanced example, as most users wouldn't need watermark alignment, and so withTimestampAssigner could also be moved down here.

Contributor (Author):
I would keep these 2 as separate examples.
If I understand correctly, @stevenzwu thinks that watermark alignment is the most important feature of this change, and @mas-chen thinks that the ordering / windowing is more important.

Probably this is a good indication that both benefits are important 😄

Contributor:
@mas-chen can you clarify your comment? I am not quite following.

@pvary it might be good to separate this into two code snippets. We can remove the two lines in the beginning:

    StreamExecutionEnvironment env = StreamExecutionEnvironment.createLocalEnvironment();
    TableLoader tableLoader = TableLoader.fromHadoopTable("hdfs://nn:8020/warehouse/path");

Contributor:
Oh, I just think

            .watermarkTimeUnit(TimeUnit.MILLI_SCALE)

should be advertised in the "basic" example. I think most people would just configure this, rather than the custom Timestamp assigner. This reduces code in the first example and keeps it simpler.

I consider the 2nd example a more "advanced" example where we can show how to do the custom Timestamp assigner (and, furthermore, watermark alignment from the Flink perspective is an advanced feature: it requires lots of tuning and understanding of how it interacts with the watermark strategy, e.g. out-of-orderness/idleness).

Contributor:
@mas-chen .watermarkTimeUnit() is only needed for a long type column, where we don't know the precision. The first example uses an Iceberg timestamp field, which carries the time unit inherently (currently only microseconds), and hence there is no need to ask the user to set the time unit as in the second example.

The timestamp assigner is for the Flink StreamRecord timestamp; it is not related to watermark generation / advancement at all.

Contributor:
Thanks for the explanation, makes sense. Please disregard my comment!

            .build(),
        // Watermarks are generated by the source, no need to generate it manually
        WatermarkStrategy.<RowData>noWatermarks()
            .withWatermarkAlignment(watermarkGroup, maxAllowedWatermarkDrift),
        SOURCE_NAME,
        TypeInformation.of(RowData.class));
```
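
The first snippet above only configures the source side of windowing. For illustration, a downstream event-time window on that `stream` could look like the following sketch; the key position, window size, and count aggregation are assumptions, not part of the documented API:

```java
// Count rows per key in 10-minute event-time windows. Window triggering relies on
// the watermarks emitted by the Iceberg source above. The key position (0) and the
// window size are illustrative assumptions.
DataStream<Long> counts =
    stream
        .keyBy(row -> row.getString(0).toString())
        .window(TumblingEventTimeWindows.of(Time.minutes(10)))
        .process(
            new ProcessWindowFunction<RowData, Long, String, TimeWindow>() {
              @Override
              public void process(
                  String key, Context context, Iterable<RowData> elements, Collector<Long> out) {
                long count = 0L;
                for (RowData ignored : elements) {
                  count++;
                }
                out.collect(count);
              }
            });
```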

## Options

Contributor:
We also need to update the read options section:
https://iceberg.apache.org/docs/1.3.0/flink-configuration/#read-options

Contributor (Author):
There is no corresponding read-option for this feature yet.

Contributor:
Oh, then we would need to add them. cc @mas-chen


### Read options