
Flink: Document watermark generation feature #9179

Merged: 3 commits into apache:main on Dec 5, 2023

Conversation

@pvary (Contributor) commented Nov 29, 2023

Documentation for #8553

The github-actions bot added the docs label on Nov 29, 2023
@pvary requested a review from stevenzwu on November 29, 2023, 15:07
Review thread on docs/flink-queries.md (outdated):
### Emitting watermarks
Emitting watermarks from the source itself could be beneficial for several purposes, like harnessing the
[Flink Watermark Alignment](https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/dev/datastream/event-time/generating_watermarks/#watermark-alignment)
feature to prevent runaway readers, or providing triggers for [Flink windowing](https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/dev/datastream/operators/windows/).
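
For orientation, a minimal sketch (not taken from the PR) of what such a source-driven setup might look like, assuming the `watermarkColumn` builder option added by #8553 together with Flink's `withWatermarkAlignment`; the table path, column name, source name, alignment group, and drift values are illustrative placeholders:

```java
import java.time.Duration;
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.data.RowData;
import org.apache.iceberg.flink.TableLoader;
import org.apache.iceberg.flink.source.IcebergSource;

StreamExecutionEnvironment env = StreamExecutionEnvironment.createLocalEnvironment();
TableLoader tableLoader = TableLoader.fromHadoopTable("hdfs://nn:8020/warehouse/path");

DataStream<RowData> stream =
    env.fromSource(
        IcebergSource.forRowData()
            .tableLoader(tableLoader)
            // the source derives its watermarks from this column while reading splits
            .watermarkColumn("timestamp_column")
            .build(),
        // watermarks come from the source itself; alignment throttles readers that
        // run too far ahead of the slowest member of the group
        WatermarkStrategy.<RowData>noWatermarks()
            .withWatermarkAlignment("iceberg-group", Duration.ofSeconds(20), Duration.ofSeconds(1)),
        "iceberg-source",
        TypeInformation.of(RowData.class));
```

The `noWatermarks()` strategy generates nothing itself here; it only carries the alignment parameters, while watermark emission (and the related ordering of file reads discussed below) is handled by the source.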
Contributor:

I don't think we should call out windows specifically as a benefit of emitting watermarks from the source itself. Any event-time and watermark strategy will have window triggers.

Contributor:

I would also remove "prevent runaway readers", as it may not be very clear to users what "runaway readers" means.

@pvary (Contributor, Author):

I think it is very important to understand that windowing and watermark generation based on records could cause surprising results, especially with batch reads or in backfill situations. Without this feature there is no guarantee on the order in which the files are read. Window triggering only becomes reliable when the source controls the emitted watermarks.

I am not sure how detailed the description should be, but I think it is important to note this here, so I am open to suggestions if you think we should add more detail.
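
To make this concrete, a hypothetical downstream pipeline whose window triggering depends on those source-emitted watermarks. It assumes `stream` is the `DataStream<RowData>` read from the Iceberg source as sketched above and that record timestamps are assigned (for example via `withTimestampAssigner`, discussed later in this thread); the key field, positions, and window size are made up:

```java
import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;

stream
    // hypothetical key: a string field at position 0
    .keyBy(row -> row.getString(0).toString())
    // these windows only fire when the watermark passes the window end; with
    // source-controlled watermarks this stays correct even for batch/backfill reads
    .window(TumblingEventTimeWindows.of(Time.minutes(5)))
    .reduce((a, b) -> b)
    .print();
```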

@stevenzwu (Contributor) commented Nov 30, 2023:

This feature produces better watermarks. But I think we don't have to explicitly say windowing here. If someone uses low-level event-time timers for stateful processing, this also helps.
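
As a hypothetical illustration of that second case (not part of the PR), a keyed function whose event-time timer fires only once the source-emitted watermark passes the registered time; it assumes StreamRecord timestamps have been assigned, e.g. via `withTimestampAssigner`:

```java
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.table.data.RowData;
import org.apache.flink.util.Collector;

public class OneMinuteAfterEvent extends KeyedProcessFunction<String, RowData, RowData> {

  @Override
  public void processElement(RowData row, Context ctx, Collector<RowData> out) {
    // register a timer one minute of event time after the record's timestamp;
    // it only fires when the watermark emitted by the source passes that point
    ctx.timerService().registerEventTimeTimer(ctx.timestamp() + 60_000L);
  }

  @Override
  public void onTimer(long timestamp, OnTimerContext ctx, Collector<RowData> out) {
    // the watermark has advanced past `timestamp`; emit results, clean up state, etc.
  }
}
```

It would be wired in with something like `stream.keyBy(...).process(new OneMinuteAfterEvent())`.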

(Four more review threads on docs/flink-queries.md were marked outdated and resolved.)
```java
SOURCE_NAME,
TypeInformation.of(RowData.class));
```

## Options
Contributor:

we also need to update the read options section.
https://iceberg.apache.org/docs/1.3.0/flink-configuration/#read-options

@pvary (Contributor, Author):

There is no corresponding read-option for this feature yet.

Contributor:

oh. then we would need to add them. cc @mas-chen

@stevenzwu (Contributor):
@dchristle can you also help review this doc PR? your perspective can help improve the readability of the doc.

Review thread on docs/flink-queries.md (outdated):
```java
.set(FlinkReadOptions.SPLIT_FILE_OPEN_COST, String.valueOf(TableProperties.SPLIT_SIZE_DEFAULT))
// Watermark using long column
.watermarkColumn("long_column")
.watermarkTimeUnit(TimeUnit.MILLI_SCALE)
```
@mas-chen (Contributor):

I'd include this in the previous example. I read this as a more advanced example, since most users wouldn't need watermark alignment, so withTimestampAssigner could also be moved down here.

@pvary (Contributor, Author):

I would keep these two as separate examples.
If I understand correctly, @stevenzwu thinks that the watermark alignment is the most important feature of this change, and @mas-chen thinks that the ordering / windowing is more important.

Probably this is a good indication that both benefits are important 😄

@stevenzwu (Contributor):

@mas-chen can you clarify your comment? I am not quite following.

@pvary it might be good to separate this into two code snippets. we can remove the two lines in the beginning.

```java
StreamExecutionEnvironment env = StreamExecutionEnvironment.createLocalEnvironment();
TableLoader tableLoader = TableLoader.fromHadoopTable("hdfs://nn:8020/warehouse/path");
```

@mas-chen (Contributor):

Oh, I just think `.watermarkTimeUnit(TimeUnit.MILLI_SCALE)` should be advertised in the "basic" example. I think most people would just configure this, rather than the custom timestamp assigner. This reduces code in the first example and keeps it simpler.

The 2nd example I consider a more "advanced" example where we can show how to do the custom timestamp assigner (and furthermore, watermark alignment from the Flink perspective is an advanced feature: it requires lots of tuning and understanding of how it interacts with the watermark strategy, i.e. out-of-orderness / idleness / etc.).
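
For reference (generic Flink, not Iceberg-specific), alignment is one more knob on the `WatermarkStrategy` next to out-of-orderness and idleness, which is where the tuning interaction mentioned above comes from; the values below are placeholders:

```java
import java.time.Duration;
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.table.data.RowData;

WatermarkStrategy<RowData> strategy =
    WatermarkStrategy.<RowData>forBoundedOutOfOrderness(Duration.ofSeconds(30))
        // a reader with no data is marked idle so it does not hold the group back
        .withIdleness(Duration.ofMinutes(1))
        // limit how far sources in the same alignment group may drift apart
        .withWatermarkAlignment("alignment-group", Duration.ofSeconds(20), Duration.ofSeconds(1));
```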

@stevenzwu (Contributor):

@mas-chen `.watermarkTimeUnit()` is only needed for a long type column, where we don't know the precision. The first example uses an Iceberg timestamp field, which carries the time unit inherently (currently only microseconds), and hence there is no need to ask the user to set the time unit like in the second example.

The timestamp assigner is for the Flink StreamRecord timestamp; it is not related to watermark generation / advancement at all.
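
A small sketch of that last point (field position and time unit are hypothetical): `withTimestampAssigner` only attaches the per-record StreamRecord timestamp that downstream event-time operators read, while the watermarks themselves are still generated and advanced by the source:

```java
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.table.data.RowData;

// No watermark generation here: the Iceberg source emits the watermarks. The assigner
// only reads an event timestamp (a long in milliseconds at a hypothetical position 0)
// into each StreamRecord, for windows, timers, and similar event-time operators.
WatermarkStrategy<RowData> strategy =
    WatermarkStrategy.<RowData>noWatermarks()
        .withTimestampAssigner((row, previousTimestamp) -> row.getLong(0));
```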

@mas-chen (Contributor):

Thanks for the explanation, makes sense. Please disregard my comment!

Review thread on docs/flink-queries.md (outdated):
```java
StreamExecutionEnvironment env = StreamExecutionEnvironment.createLocalEnvironment();
TableLoader tableLoader = TableLoader.fromHadoopTable("hdfs://nn:8020/warehouse/path");

// Ordered data file reads with windowing, using a timestamp column
```
Contributor:

There is no windowing in the snippet below, so it may not be accurate to mention it in the comment here.

If I understand this part correctly, it tries to demonstrate emitting watermarks from the Iceberg source without enabling watermark alignment.

@pvary (Contributor, Author):

Changed the examples part... could you please check?

(Another review thread on docs/flink-queries.md was marked outdated and resolved.)
@stevenzwu merged commit 8519224 into apache:main on Dec 5, 2023
2 checks passed
@stevenzwu (Contributor):
thanks @pvary for the documentation. thanks @mas-chen for the review

@pvary (Contributor, Author) commented Dec 5, 2023:

Thanks for the review and the merge, @stevenzwu and @mas-chen!

@pvary deleted the water_doc branch on December 5, 2023, 16:49
lisirrx pushed a commit to lisirrx/iceberg that referenced this pull request Jan 4, 2024
devangjhabakh pushed a commit to cdouglas/iceberg that referenced this pull request Apr 22, 2024