Merged

68 commits
4904d91
[core] Fix invalidate tables with same tableNames in other db issue (…
XiaoHongbo-Hope Jan 13, 2025
c1d4616
[core] Add a http-report action to reporting partition done to remote…
LinMingQiang Jan 13, 2025
a4a1758
[rest] Add name to GetTableResponse and remove path (#4894)
JingsongLi Jan 13, 2025
b35bb3a
[hotfix] fix some typos in CombinedTableCompactorSink (#4890)
yangjf2019 Jan 13, 2025
97441c3
[core] Introduce conversion from parquet type to paimon type (#4888)
Zouxxyy Jan 13, 2025
9613c7b
[spark] Clean empty directory after removing orphan files (#4824)
wkang13579 Jan 13, 2025
a926deb
[docs] update link about Apache Doris (#4898)
morningman Jan 13, 2025
74a0894
[hotfix] fix Trino version to 440 (#4896)
yangjf2019 Jan 13, 2025
381e58b
[flink] Add a action/procedure to remove unexisting files from manife…
tsreaper Jan 14, 2025
5de4d95
[core] Support partition API and update get table (#4879)
jerry-024 Jan 14, 2025
107dfa0
[core] Extract loadTable in CatalogUtils (#4904)
JingsongLi Jan 14, 2025
74408a5
[flink] Introduce precommit compact for newly created files in unawar…
tsreaper Jan 14, 2025
4c2ba07
[doc] Add doc for precommit-compact
JingsongLi Jan 14, 2025
ee023b0
[core] Fix that sequence group fields are mistakenly aggregated by de…
yuzelin Jan 15, 2025
1a5915d
[core] Introduce SnapshotCommit to abstract atomically commit (#4911)
JingsongLi Jan 15, 2025
c4c8980
[core] Support view API in Rest catalog (#4908)
jerry-024 Jan 15, 2025
a3c0e32
[hotfix][doc] fix the url link in document (#4914)
yangjf2019 Jan 16, 2025
98c9f9a
[core] Add min_partition_stats and max_partition_stats columns to man…
tsreaper Jan 16, 2025
0df8b9d
[hotfix] Modify the type conversion method (#4928)
yangjf2019 Jan 16, 2025
3d81240
[test] Fix the unstable random tests in PrimaryKeyFileStoreTableITCas…
tsreaper Jan 16, 2025
d833074
[test] Fix the unstable testCloneWithSchemaEvolution (#4932)
tsreaper Jan 16, 2025
6a1e477
[hotfix][doc] Add quotes to the partition (#4931)
yangjf2019 Jan 16, 2025
7e768c0
[core] Remove unnecessary toString call in `DataFilePathFactoryTest` …
chenjian2664 Jan 16, 2025
107e5ec
[core] Make FileIOLoader extends Serializable
JingsongLi Jan 17, 2025
dbd129d
[spark] Introduce SparkV2FilterConverter (#4915)
Zouxxyy Jan 17, 2025
587fa28
[rest] Add http conf and ExponentialHttpRetryInterceptor to handle re…
jerry-024 Jan 17, 2025
b152608
[common] A FileIO API to list files iteratively (#4834)
smdsbz Jan 17, 2025
3915684
[core] Make CatalogContext implements Serializable (#4936)
JingsongLi Jan 17, 2025
d1f3d28
[core] Refactor the orphan clean and expire function for external pat…
neuyilan Jan 17, 2025
0a0b055
[core] Introduce DataFilePathFactories to unify cache factories
JingsongLi Jan 17, 2025
59c038a
[test] Fix the unstable testNoChangelogProducerStreamingRandom (#4940)
tsreaper Jan 17, 2025
43abe2d
[flink] Introduce scan bounded to force bounded in streaming job (#4941)
JingsongLi Jan 17, 2025
9a6c3a2
[docs] Fix typo of BIG_ENDIAN (#4945)
wgtmac Jan 20, 2025
c812c27
[hotfix] Minor fix for FileIO.listFilesIterative
JingsongLi Jan 20, 2025
316fa28
[refactor] Clean unused codes in Lock (#4948)
yuzelin Jan 20, 2025
19cafb0
[core] Refine CommitMessage toString (#4950)
smdsbz Jan 20, 2025
3031a9d
[rest] Refactor AuthProviders to remove credential concept (#4959)
JingsongLi Jan 20, 2025
cce5257
[core] Support data token in RESTCatalog (#4944)
jerry-024 Jan 20, 2025
e1669ce
[core] Clear cache when deleting the snapshot (#4966)
Zouxxyy Jan 21, 2025
0866ff8
[iceberg] Support skipping AWS Glue archive (#4962)
siadat Jan 21, 2025
835602c
[hotfix] remove_orphan_files action shouldn't check table argument (t…
yuzelin Jan 21, 2025
b7e63fa
[core] Fix NPE when retracting collect and merge-map (#4960)
yuzelin Jan 21, 2025
14befd5
[hotfix] Update the maven version requirements in the readme (#4955)
yangjf2019 Jan 21, 2025
00992c5
[rest] Refactor RESTTokenFileIO to cache FileIO in static cache (#4965)
JingsongLi Jan 21, 2025
4ef5882
[hotfix] Fix flaky test AppendOnlyFileStoreTableTest#testBatchOrderWi…
yuzelin Jan 21, 2025
878b99f
[doc][spark] Add read metadata columns (#4953)
Zouxxyy Jan 21, 2025
cfb0075
[core] Optimized iterative list implementations for FileIO (#4952)
smdsbz Jan 21, 2025
79154d4
[core] Remove Catalog.fileio method (#4973)
JingsongLi Jan 21, 2025
ed6de3e
[core] Fix that sequence fields are mistakenly aggregated by default …
yuzelin Jan 22, 2025
7b63b5e
[spark] Fix update table with char type (#4972)
Zouxxyy Jan 22, 2025
f3e0a0d
[spark] Fix rollback not correctly identify tag or snapshot (#4947)
xuzifu666 Jan 22, 2025
8821b04
[rest] Optimize partition methods to let rest return table not partit…
JingsongLi Jan 22, 2025
79939ed
[doc] Pypaimon api table_scan plan splits. (#4978)
LinMingQiang Jan 22, 2025
a4de5e7
[filesystem] Support Tencent COSN (#4854)
liujinhui1994 Jan 22, 2025
39a9f68
[core] ObjectRefresh with iterative list and batched commit (#4980)
smdsbz Jan 22, 2025
78cfc72
[doc] Add sys prefix to procedure (#4981)
Zouxxyy Jan 22, 2025
bfd0a0a
[core] Throw exception if increment query with rescale bucket (#4984)
yuzelin Jan 23, 2025
cd32438
[core] Populate more metadata to object table (#4987)
smdsbz Jan 23, 2025
7246435
[hotfix] [docs] Fix cdc doc url and some typos (#4968)
yangjf2019 Jan 23, 2025
4812fd2
[spark] Fallback to spark except query if increment query with rescal…
Zouxxyy Jan 23, 2025
9d6fe5f
[parquet] Refactory parquet reader using spark code. (#4982)
leaves12138 Jan 24, 2025
a779c95
[test][flink] Add tests back in PreAggregationITCase which deleted by…
JingsongLi Jan 24, 2025
8e00805
[parquet] Parquet ColumnarBatch should return ColumnarRowIterator for…
JingsongLi Jan 24, 2025
3290fcc
[parquet] Introduce LongIterator to Parquet RowIndexGenerator (#4991)
JingsongLi Jan 24, 2025
2a404f0
[hotfix] Fix NPE in ColumnarRowIterator.reset
JingsongLi Jan 24, 2025
6cffdef
[cdc] Add option to prevent logging of corrupted records (#4918)
atallahade Jan 24, 2025
f564400
[flink] Replace per record in ReadOperator to work with object reuse
JingsongLi Jan 24, 2025
aab1097
[core] Refactory ColumnarRowIterator using LongIterator. (#4992)
leaves12138 Jan 24, 2025
14 changes: 14 additions & 0 deletions LICENSE
@@ -273,6 +273,20 @@ from https://parquet.apache.org/ version 1.14.0
paimon-common/src/main/java/org/apache/paimon/data/variant/GenericVariant.java
paimon-common/src/main/java/org/apache/paimon/data/variant/GenericVariantBuilder.java
paimon-common/src/main/java/org/apache/paimon/data/variant/GenericVariantUtil.java
paimon-format/src/main/java/org/apache/paimon/format/parquet/newReader/ParquetColumnVector.java
paimon-format/src/main/java/org/apache/paimon/format/parquet/newReader/ParquetReadState.java
paimon-format/src/main/java/org/apache/paimon/format/parquet/newReader/ParquetVectorUpdater.java
paimon-format/src/main/java/org/apache/paimon/format/parquet/newReader/ParquetVectorUpdaterFactory.java
paimon-format/src/main/java/org/apache/paimon/format/parquet/newReader/RowIndexGenerator.java
paimon-format/src/main/java/org/apache/paimon/format/parquet/newReader/VectorizedColumnReader.java
paimon-format/src/main/java/org/apache/paimon/format/parquet/newReader/VectorizedDeltaBinaryPackedReader.java
paimon-format/src/main/java/org/apache/paimon/format/parquet/newReader/VectorizedDeltaByteArrayReader.java
paimon-format/src/main/java/org/apache/paimon/format/parquet/newReader/VectorizedDeltaLengthByteArrayReader.java
paimon-format/src/main/java/org/apache/paimon/format/parquet/newReader/VectorizedParquetRecordReader.java
paimon-format/src/main/java/org/apache/paimon/format/parquet/newReader/VectorizedPlainValuesReader.java
paimon-format/src/main/java/org/apache/paimon/format/parquet/newReader/VectorizedReaderBase.java
paimon-format/src/main/java/org/apache/paimon/format/parquet/newReader/VectorizedRleValuesReader.java
paimon-format/src/main/java/org/apache/paimon/format/parquet/newReader/VectorizedValuesReader.java
from https://spark.apache.org/ version 4.0.0-preview2

MIT License
2 changes: 1 addition & 1 deletion README.md
@@ -68,7 +68,7 @@ You can join the Paimon community on Slack. Paimon channel is in ASF Slack works

## Building

JDK 8/11 is required for building the project. Maven version >=3.3.1.
JDK 8/11 is required for building the project. Maven version >=3.6.3.

- Run the `mvn clean install -DskipTests` command to build the project.
- Run the `mvn spotless:apply` to format the project (both Java and Scala).
15 changes: 14 additions & 1 deletion docs/content/append-table/streaming.md
@@ -30,7 +30,20 @@ You can streaming write to the Append table in a very flexible way through Flink
Flink, using it like a queue. The only difference is that its latency is in minutes. Its advantages are very low cost
and the ability to push down filters and projection.
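
For illustration, a minimal Flink SQL sketch of this queue-like usage (the table name `append_events` and the source `kafka_source` are assumed):

```sql
-- Streaming write into the append table, then stream-read it like a queue.
SET 'execution.runtime-mode' = 'streaming';

INSERT INTO append_events SELECT * FROM kafka_source;

-- A downstream job can continuously consume newly committed files.
SELECT * FROM append_events /*+ OPTIONS('scan.mode' = 'latest') */;
```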

## Automatic small file merging
## Pre small files merging

Pre means that this compaction occurs before committing files to the snapshot.

If Flink's checkpoint interval is short (for example, 30 seconds), each snapshot may produce lots of small changelog
files. Too many files may put a burden on the distributed storage cluster.

To compact small changelog files into larger ones, you can set the table option `precommit-compact = true`.
The default value of this option is false. If set to true, a compact coordinator and a compact worker operator are added
after the writer operator to copy small changelog files into larger ones.
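
A minimal Flink SQL sketch of enabling this option (the table name is assumed):

```sql
-- Enable pre-commit compaction so small changelog files are merged before the commit.
ALTER TABLE my_append_table SET ('precommit-compact' = 'true');
```

The option can also be declared in the `WITH` clause when creating the table.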

## Post small files merging

Post means that this compaction occurs after committing files to the snapshot.

In a streaming write job without bucket definition, there is no compaction in the writer; instead, a
`Compact Coordinator` scans the small files and passes compaction tasks to a `Compact Worker`. In streaming mode, if you
4 changes: 2 additions & 2 deletions docs/content/cdc-ingestion/kafka-cdc.md
@@ -91,7 +91,7 @@ To use this feature through `flink run`, run the following shell command.
```bash
<FLINK_HOME>/bin/flink run \
/path/to/paimon-flink-action-{{< version >}}.jar \
kafka_sync_table
kafka_sync_table \
--warehouse <warehouse-path> \
--database <database-name> \
--table <table-name> \
@@ -195,7 +195,7 @@ To use this feature through `flink run`, run the following shell command.
```bash
<FLINK_HOME>/bin/flink run \
/path/to/paimon-flink-action-{{< version >}}.jar \
kafka_sync_database
kafka_sync_database \
--warehouse <warehouse-path> \
--database <database-name> \
[--table_mapping <table-name>=<paimon-table-name>] \
4 changes: 2 additions & 2 deletions docs/content/cdc-ingestion/mongo-cdc.md
@@ -42,7 +42,7 @@ To use this feature through `flink run`, run the following shell command.
```bash
<FLINK_HOME>/bin/flink run \
/path/to/paimon-flink-action-{{< version >}}.jar \
mongodb_sync_table
mongodb_sync_table \
--warehouse <warehouse-path> \
--database <database-name> \
--table <table-name> \
@@ -187,7 +187,7 @@ To use this feature through `flink run`, run the following shell command.
```bash
<FLINK_HOME>/bin/flink run \
/path/to/paimon-flink-action-{{< version >}}.jar \
mongodb_sync_database
mongodb_sync_database \
--warehouse <warehouse-path> \
--database <database-name> \
[--table_prefix <paimon-table-prefix>] \
2 changes: 1 addition & 1 deletion docs/content/cdc-ingestion/postgres-cdc.md
@@ -43,7 +43,7 @@ To use this feature through `flink run`, run the following shell command.
```bash
<FLINK_HOME>/bin/flink run \
/path/to/paimon-flink-action-{{< version >}}.jar \
postgres_sync_table
postgres_sync_table \
--warehouse <warehouse_path> \
--database <database_name> \
--table <table_name> \
4 changes: 2 additions & 2 deletions docs/content/cdc-ingestion/pulsar-cdc.md
@@ -85,7 +85,7 @@ To use this feature through `flink run`, run the following shell command.
```bash
<FLINK_HOME>/bin/flink run \
/path/to/paimon-flink-action-{{< version >}}.jar \
pulsar_sync_table
pulsar_sync_table \
--warehouse <warehouse-path> \
--database <database-name> \
--table <table-name> \
@@ -190,7 +190,7 @@ To use this feature through `flink run`, run the following shell command.
```bash
<FLINK_HOME>/bin/flink run \
/path/to/paimon-flink-action-{{< version >}}.jar \
pulsar_sync_database
pulsar_sync_database \
--warehouse <warehouse-path> \
--database <database-name> \
[--table_prefix <paimon-table-prefix>] \
22 changes: 11 additions & 11 deletions docs/content/concepts/spec/fileindex.md
@@ -69,15 +69,15 @@ File index file format. Put all column and offset in the header.
| BODY |
|_____________________________________| _____________________
*
magic: 8 bytes long, value is 1493475289347502L, BIT_ENDIAN
version: 4 bytes int, BIT_ENDIAN
head length: 4 bytes int, BIT_ENDIAN
column number: 4 bytes int, BIT_ENDIAN
column x name: 2 bytes short BIT_ENDIAN and Java modified-utf-8
index number: 4 bytes int (how many column items below), BIT_ENDIAN
index name x: 2 bytes short BIT_ENDIAN and Java modified-utf-8
start pos: 4 bytes int, BIT_ENDIAN
length: 4 bytes int, BIT_ENDIAN
magic: 8 bytes long, value is 1493475289347502L, BIG_ENDIAN
version: 4 bytes int, BIG_ENDIAN
head length: 4 bytes int, BIG_ENDIAN
column number: 4 bytes int, BIG_ENDIAN
column x name: 2 bytes short BIG_ENDIAN and Java modified-utf-8
index number: 4 bytes int (how many column items below), BIG_ENDIAN
index name x: 2 bytes short BIG_ENDIAN and Java modified-utf-8
start pos: 4 bytes int, BIG_ENDIAN
length: 4 bytes int, BIG_ENDIAN
redundant length: 4 bytes int (for compatibility with later versions, in this version, content is zero)
redundant bytes: var bytes (for compatibility with later version, in this version, is empty)
BODY: column index bytes + column index bytes + column index bytes + .......
@@ -88,7 +88,7 @@ BODY: column index bytes + column index bytes + colu
Define `'file-index.bloom-filter.columns'`.
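
For example, a minimal SQL sketch of declaring this option (table and column names are assumed):

```sql
-- Build bloom filter file indexes for columns c1 and c2.
CREATE TABLE t (
    c1 BIGINT,
    c2 STRING,
    v  STRING
) WITH (
    'file-index.bloom-filter.columns' = 'c1,c2'
);
```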

Content of bloom filter index is simple:
- numHashFunctions 4 bytes int, BIT_ENDIAN
- numHashFunctions 4 bytes int, BIG_ENDIAN
- bloom filter bytes

This class uses a 64-bit long hash. It stores only the number of hash functions (one integer) and the bit set bytes. Hash bytes type
@@ -135,7 +135,7 @@ offset: 4 bytes int (when it is negative, it represents t
and its position is the inverse of the negative value)
</pre>

Integer are all BIT_ENDIAN.
Integer are all BIG_ENDIAN.

## Index: Bit-Slice Index Bitmap

6 changes: 3 additions & 3 deletions docs/content/concepts/spec/tableindex.md
Original file line number Diff line number Diff line change
Expand Up @@ -36,7 +36,7 @@ Its structure is very simple, only storing hash values in the file:

HASH_VALUE | HASH_VALUE | HASH_VALUE | HASH_VALUE | ...

HASH_VALUE is the hash value of the primary-key. 4 bytes, BIT_ENDIAN.
HASH_VALUE is the hash value of the primary-key. 4 bytes, BIG_ENDIAN.

## Deletion Vectors

@@ -49,9 +49,9 @@ The deletion file is a binary file, and the format is as follows:

- First, record version by a byte. Current version is 1.
- Then, record <size of serialized bin, serialized bin, checksum of serialized bin> in sequence.
- Size and checksum are BIT_ENDIAN Integer.
- Size and checksum are BIG_ENDIAN Integer.

For each serialized bin:

- First, record a const magic number by an int (BIT_ENDIAN). Current the magic number is 1581511376.
- First, record a const magic number by an int (BIG_ENDIAN). Currently the magic number is 1581511376.
- Then, record serialized bitmap. Which is a [RoaringBitmap](https://github.com/RoaringBitmap/RoaringBitmap) (org.roaringbitmap.RoaringBitmap).
42 changes: 21 additions & 21 deletions docs/content/concepts/system-tables.md
@@ -279,45 +279,45 @@ You can query all manifest files contained in the latest snapshot or the specifi
SELECT * FROM my_table$manifests;

/*
+--------------------------------+-------------+------------------+-------------------+---------------+
| file_name | file_size | num_added_files | num_deleted_files | schema_id |
+--------------------------------+-------------+------------------+-------------------+---------------+
| manifest-f4dcab43-ef6b-4713... | 12365| 40 | 0 | 0 |
| manifest-f4dcab43-ef6b-4713... | 1648 | 1 | 0 | 0 |
+--------------------------------+-------------+------------------+-------------------+---------------+
+--------------------------------+-------------+------------------+-------------------+---------------+---------------------+---------------------+
| file_name | file_size | num_added_files | num_deleted_files | schema_id | min_partition_stats | max_partition_stats |
+--------------------------------+-------------+------------------+-------------------+---------------+---------------------+---------------------+
| manifest-f4dcab43-ef6b-4713... | 12365| 40 | 0 | 0 | {20230315, 00} | {20230315, 20} |
| manifest-f4dcab43-ef6b-4713... | 1648 | 1 | 0 | 0 | {20230115, 00} | {20230316, 23} |
+--------------------------------+-------------+------------------+-------------------+---------------+---------------------+---------------------+
2 rows in set
*/

-- You can also query the manifest with specified snapshot
SELECT * FROM my_table$manifests /*+ OPTIONS('scan.snapshot-id'='1') */;
/*
+--------------------------------+-------------+------------------+-------------------+---------------+
| file_name | file_size | num_added_files | num_deleted_files | schema_id |
+--------------------------------+-------------+------------------+-------------------+---------------+
| manifest-f4dcab43-ef6b-4713... | 12365| 40 | 0 | 0 |
+--------------------------------+-------------+------------------+-------------------+---------------+
+--------------------------------+-------------+------------------+-------------------+---------------+---------------------+---------------------+
| file_name | file_size | num_added_files | num_deleted_files | schema_id | min_partition_stats | max_partition_stats |
+--------------------------------+-------------+------------------+-------------------+---------------+---------------------+---------------------+
| manifest-f4dcab43-ef6b-4713... | 12365| 40 | 0 | 0 | {20230315, 00} | {20230315, 20} |
+--------------------------------+-------------+------------------+-------------------+---------------+---------------------+---------------------+
1 rows in set
*/

-- You can also query the manifest with specified tagName
SELECT * FROM my_table$manifests /*+ OPTIONS('scan.tag-name'='tag1') */;
/*
+--------------------------------+-------------+------------------+-------------------+---------------+
| file_name | file_size | num_added_files | num_deleted_files | schema_id |
+--------------------------------+-------------+------------------+-------------------+---------------+
| manifest-f4dcab43-ef6b-4713... | 12365| 40 | 0 | 0 |
+--------------------------------+-------------+------------------+-------------------+---------------+
+--------------------------------+-------------+------------------+-------------------+---------------+---------------------+---------------------+
| file_name | file_size | num_added_files | num_deleted_files | schema_id | min_partition_stats | max_partition_stats |
+--------------------------------+-------------+------------------+-------------------+---------------+---------------------+---------------------+
| manifest-f4dcab43-ef6b-4713... | 12365| 40 | 0 | 0 | {20230315, 00} | {20230315, 20} |
+--------------------------------+-------------+------------------+-------------------+---------------+---------------------+---------------------+
1 rows in set
*/

-- You can also query the manifest with specified timestamp in unix milliseconds
SELECT * FROM my_table$manifests /*+ OPTIONS('scan.timestamp-millis'='1678883047356') */;
/*
+--------------------------------+-------------+------------------+-------------------+---------------+
| file_name | file_size | num_added_files | num_deleted_files | schema_id |
+--------------------------------+-------------+------------------+-------------------+---------------+
| manifest-f4dcab43-ef6b-4713... | 12365| 40 | 0 | 0 |
+--------------------------------+-------------+------------------+-------------------+---------------+
+--------------------------------+-------------+------------------+-------------------+---------------+---------------------+---------------------+
| file_name | file_size | num_added_files | num_deleted_files | schema_id | min_partition_stats | max_partition_stats |
+--------------------------------+-------------+------------------+-------------------+---------------+---------------------+---------------------+
| manifest-f4dcab43-ef6b-4713... | 12365| 40 | 0 | 0 | {20230315, 00} | {20230315, 20} |
+--------------------------------+-------------+------------------+-------------------+---------------+---------------------+---------------------+
1 rows in set
*/
```
16 changes: 14 additions & 2 deletions docs/content/engines/doris.md
@@ -28,7 +28,7 @@ under the License.

This documentation is a guide for using Paimon in Doris.

> More details can be found in [Apache Doris Website](https://doris.apache.org/docs/lakehouse/datalake-analytics/paimon/)
> More details can be found in [Apache Doris Website](https://doris.apache.org/docs/dev/lakehouse/catalogs/paimon-catalog)

## Version

@@ -65,9 +65,21 @@ CREATE CATALOG `paimon_hms` PROPERTIES (
"hive.metastore.uris" = "thrift://172.21.0.44:7004",
"hadoop.username" = "hadoop"
);

-- Integrate with Aliyun DLF
CREATE CATALOG paimon_dlf PROPERTIES (
'type' = 'paimon',
'paimon.catalog.type' = 'dlf',
'warehouse' = 'oss://paimon-bucket/paimonoss/',
'dlf.proxy.mode' = 'DLF_ONLY',
'dlf.uid' = 'xxxxx',
'dlf.region' = 'cn-beijing',
'dlf.access_key' = 'ak',
'dlf.secret_key' = 'sk'
);
```

See [Apache Doris Website](https://doris.apache.org/docs/lakehouse/datalake-analytics/paimon/) for more examples.
See [Apache Doris Website](https://doris.apache.org/docs/dev/lakehouse/catalogs/paimon-catalog) for more examples.

## Access Paimon Catalog

2 changes: 1 addition & 1 deletion docs/content/engines/overview.md
@@ -33,7 +33,7 @@ under the License.
| Flink | 1.15 - 1.20 | ✅ | ✅ | ✅ | ✅(1.17+) | ✅ | ✅ | ✅ | ✅(1.17+) | ❌ | ✅ |
| Spark | 3.2 - 3.5 | ✅ | ✅ | ✅ | ✅ | ✅(3.3+) | ✅(3.3+) | ✅ | ✅ | ✅ | ✅(3.3+) |
| Hive | 2.1 - 3.1 | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ |
| Trino | 420 - 439 | ✅ | ✅(427+) | ✅(427+) | ✅(427+) | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ |
| Trino | 420 - 440 | ✅ | ✅(427+) | ✅(427+) | ✅(427+) | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ |
| Presto | 0.236 - 0.280 | ✅ | ❌ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ |
| [StarRocks](https://docs.starrocks.io/docs/data_source/catalog/paimon_catalog/) | 3.1+ | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ |
| [Doris](https://doris.apache.org/docs/lakehouse/datalake-analytics/paimon) | 2.0.6+ | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ |