
Flink: Document watermark generation feature #9179

Merged · 3 commits · Dec 5, 2023

69 changes: 69 additions & 0 deletions docs/flink-queries.md
@@ -277,6 +277,75 @@ DataStream<Row> stream = env.fromSource(source, WatermarkStrategy.noWatermarks()
"Iceberg Source as Avro GenericRecord", new GenericRecordAvroTypeInfo(avroSchema));
```

### Emitting watermarks
Emitting watermarks from the source itself can be beneficial for several purposes, like harnessing
[Flink Watermark Alignment](https://nightlies.apache.org/flink/flink-docs-stable/docs/dev/datastream/event-time/generating_watermarks/#watermark-alignment)
or preventing [windows](https://nightlies.apache.org/flink/flink-docs-stable/docs/dev/datastream/operators/windows/)
from triggering too early when reading multiple data files concurrently.

Enable watermark generation for an `IcebergSource` by setting the `watermarkColumn`.
The supported column types are `timestamp`, `timestamptz` and `long`.
An Iceberg `timestamp` or `timestamptz` column inherently contains the time precision, so there is
no need to specify a time unit. A `long` column, however, carries no time unit information; use
`watermarkTimeUnit` to configure the conversion for long columns.
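
As a minimal sketch, assuming a hypothetical `event_ts` timestamp column (the `tableLoader` setup is shown in the full examples below), enabling watermark generation is a single builder call:
```java
IcebergSource<RowData> source =
    IcebergSource.forRowData()
        .tableLoader(tableLoader)
        // Watermarks are derived from the column metrics of this column
        .watermarkColumn("event_ts")
        .build();
```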

The watermarks are generated based on the column metrics stored for data files and are emitted once per split.
If multiple smaller files with different time ranges are combined into a single split, it can increase
out-of-orderness and require extra data buffering in the Flink state. Since the main purpose of watermark
alignment is to reduce out-of-orderness and excess data buffering in the Flink state, it is recommended to
set `read.split.open-file-cost` to a very large value to prevent combining multiple smaller files into a
single split. The downside of not combining small files into a single split is reduced read throughput,
especially when there are many small files. In typical stateful processing jobs, source read throughput is not
the bottleneck, so this is usually a reasonable tradeoff.
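
As a sketch, the open-file cost can be raised through the source builder; `FlinkReadOptions.SPLIT_FILE_OPEN_COST` is used the same way in the second full example below, and the value here is only an illustration:
```java
IcebergSource.Builder<RowData> builder =
    IcebergSource.forRowData()
        .tableLoader(tableLoader)
        // A very large open-file cost keeps each small file in its own split
        .set(FlinkReadOptions.SPLIT_FILE_OPEN_COST, String.valueOf(Long.MAX_VALUE))
        .watermarkColumn("timestamp_column");
```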

This feature requires column-level min-max stats. Make sure stats are generated for the watermark column
during the write phase. By default, column metrics are collected for the first 100 columns of the table.
If the watermark column doesn't have stats collected by default, use the
[write properties](configuration.md#write-properties) starting with `write.metadata.metrics` to enable them.
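
For example, here is a minimal sketch, assuming a hypothetical watermark column named `event_ts`, of enabling full min-max metrics through the Iceberg Java API (`updateProperties()` commits a table property change):
```java
// Load the table through the same loader used by the source
tableLoader.open();
Table table = tableLoader.loadTable();

// Collect full min-max metrics for the (hypothetical) watermark column
table.updateProperties()
    .set("write.metadata.metrics.column.event_ts", "full")
    .commit();
```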

The following example is useful when watermarks are used for windowing. The source reads Iceberg data files
in order, using a timestamp column, and emits watermarks:
```java
StreamExecutionEnvironment env = StreamExecutionEnvironment.createLocalEnvironment();
TableLoader tableLoader = TableLoader.fromHadoopTable("hdfs://nn:8020/warehouse/path");

DataStream<RowData> stream =
    env.fromSource(
        IcebergSource.forRowData()
            .tableLoader(tableLoader)
            // Watermark using timestamp column
            .watermarkColumn("timestamp_column")
            .build(),
        // Watermarks are generated by the source, no need to generate it manually
        WatermarkStrategy.<RowData>noWatermarks()
            // Extract event timestamp from records; `pos` and `precision` are the
            // position and precision of the timestamp column in the schema
            .withTimestampAssigner((record, eventTime) -> record.getTimestamp(pos, precision).getMillisecond()),
        SOURCE_NAME,
        TypeInformation.of(RowData.class));
```
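
For instance, a hypothetical downstream event-time window driven by these source-generated watermarks might look like the following (the key position and the window function are placeholders):
```java
stream
    // Key by an application-specific field (placeholder position 0)
    .keyBy(row -> row.getLong(0))
    // The source-generated watermarks trigger this event-time window
    .window(TumblingEventTimeWindows.of(Time.minutes(5)))
    .process(new MyProcessWindowFunction());
```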

Example of reading an Iceberg table using a long event column for watermark alignment:
```java
StreamExecutionEnvironment env = StreamExecutionEnvironment.createLocalEnvironment();
TableLoader tableLoader = TableLoader.fromHadoopTable("hdfs://nn:8020/warehouse/path");

DataStream<RowData> stream =
    env.fromSource(
        IcebergSource.forRowData()
            .tableLoader(tableLoader)
            // Disable combining multiple files into a single split
            .set(FlinkReadOptions.SPLIT_FILE_OPEN_COST, String.valueOf(TableProperties.SPLIT_SIZE_DEFAULT))
            // Watermark using long column
            .watermarkColumn("long_column")
            .watermarkTimeUnit(TimeUnit.MILLISECONDS)
```

Contributor: I'd include this in the previous example. I read this as a more advanced example, as most users wouldn't need watermark alignment, so `withTimestampAssigner` could also be moved down here.

Contributor Author: I would keep these 2 as separate examples.
If I understand correctly, @stevenzwu thinks that the watermark alignment is the most important feature of this change, and @mas-chen thinks that the ordering / windowing is more important.

Probably this is a good indication that both benefits are important 😄

Contributor: @mas-chen can you clarify your comment? I am not quite following.

@pvary it might be good to separate this into two code snippets. We can remove the two lines in the beginning:

```java
StreamExecutionEnvironment env = StreamExecutionEnvironment.createLocalEnvironment();
TableLoader tableLoader = TableLoader.fromHadoopTable("hdfs://nn:8020/warehouse/path");
```

Contributor: Oh, I just think

```java
.watermarkTimeUnit(TimeUnit.MILLI_SCALE)
```

should be advertised in the "basic" example. I think most people would just configure this rather than the custom timestamp assigner. This reduces code in the first example and keeps it simpler.

The 2nd example I consider a more "advanced" example where we can show how to do the custom timestamp assigner (furthermore, watermark alignment from the Flink perspective is an advanced feature: it requires lots of tuning and an understanding of how it interacts with the watermark strategy, e.g. out-of-orderness / idleness).

Contributor: @mas-chen, `.watermarkTimeUnit()` is only needed for a long type column, where we don't know the precision. The first example uses an Iceberg timestamp field, which carries the time unit inherently (currently only microseconds), hence there is no need to ask the user to set the time unit as in the second example.

The timestamp assigner is for the Flink StreamRecord timestamp; it is not related to watermark generation / advancement at all.

Contributor: Thanks for the explanation, makes sense. Please disregard my comment!

```java
            .build(),
        // Watermarks are generated by the source, no need to generate it manually
        WatermarkStrategy.<RowData>noWatermarks()
            .withWatermarkAlignment(watermarkGroup, maxAllowedWatermarkDrift),
        SOURCE_NAME,
        TypeInformation.of(RowData.class));
```
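
The alignment parameters in the example above are placeholders. As a sketch, assuming Flink's `WatermarkStrategy#withWatermarkAlignment(String, Duration)` API, they could be defined as:
```java
// Sources and splits sharing this group name are aligned together
String watermarkGroup = "iceberg-watermark-group";
// Maximum allowed watermark drift between the aligned sources
Duration maxAllowedWatermarkDrift = Duration.ofMinutes(1);
```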

## Options
Contributor: We also need to update the read options section:
https://iceberg.apache.org/docs/1.3.0/flink-configuration/#read-options

Contributor Author: There is no corresponding read option for this feature yet.

Contributor: Oh, then we would need to add them. cc @mas-chen


### Read options