Unlike the Spark catalog integration, where the `Table` object is only required on the client/driver side, Flink needs to obtain the `Table` object in the Job Manager or in a Task:

- For the writer (https://github.com/apache/iceberg/pull/1185): Flink needs to obtain the `Table` in the committer task for appending files.
- For the reader (https://github.com/apache/iceberg/pull/1293): Flink needs to obtain the `Table` in the Job Manager for planning tasks.

So we can introduce a `CatalogLoader` for the reader and writer, and users can define a custom catalog loader in `FlinkCatalogFactory`. A concrete loader sketch follows the interface below.

```
import java.io.Serializable;

import org.apache.hadoop.conf.Configuration;
import org.apache.iceberg.catalog.Catalog;

public interface CatalogLoader extends Serializable {
  Catalog loadCatalog(Configuration hadoopConf);
}
```
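To illustrate the idea, here is a minimal sketch of what a concrete loader might look like, assuming a Hadoop-based catalog. The class name `HadoopCatalogLoader` and its `warehouseLocation` argument are illustrative only and not part of the proposal; the key point is that only the small serializable loader is shipped to the Job Manager or committer task, which then re-creates the `Catalog` (and loads the `Table`) locally instead of serializing those objects across the wire.

```
import org.apache.hadoop.conf.Configuration;
import org.apache.iceberg.Table;
import org.apache.iceberg.catalog.Catalog;
import org.apache.iceberg.catalog.TableIdentifier;
import org.apache.iceberg.hadoop.HadoopCatalog;

// Illustrative implementation of the CatalogLoader interface defined above.
// Only the warehouse path (a plain String) is serialized with the loader.
public class HadoopCatalogLoader implements CatalogLoader {
  private final String warehouseLocation;

  public HadoopCatalogLoader(String warehouseLocation) {
    this.warehouseLocation = warehouseLocation;
  }

  @Override
  public Catalog loadCatalog(Configuration hadoopConf) {
    // Re-create the catalog on the executing side (Job Manager or committer task),
    // so the non-serializable Catalog/Table never needs to be shipped from the client.
    return new HadoopCatalog(hadoopConf, warehouseLocation);
  }
}
```

Illustrative usage on the executing side (identifiers are placeholders): the task holds a `CatalogLoader loader` and a `Configuration hadoopConf`, then calls `Catalog catalog = loader.loadCatalog(hadoopConf); Table table = catalog.loadTable(TableIdentifier.of("db", "tbl"));` when it needs to plan splits or commit appended files.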