Improve connector docs around Config and SQL support #22803

Merged: 2 commits, Jul 25, 2024
43 changes: 22 additions & 21 deletions docs/src/main/sphinx/connector/bigquery.md
@@ -127,7 +127,7 @@ a few caveats:
- The project ID of the Google Cloud project to bill for the export.
- Taken from the service account
* - `bigquery.views-enabled`
- Enables the connector to read from views and not only tables. Please read
[this section](bigquery-reading-from-views) before enabling this feature.
- `false`
* - `bigquery.view-expire-duration`
@@ -140,18 +140,18 @@ a few caveats:
- The dataset where the materialized view is going to be created.
- The view's project
* - `bigquery.skip-view-materialization`
- Use REST API to access views instead of Storage API. BigQuery `BIGNUMERIC`
and `TIMESTAMP` types are unsupported.
- `false`
* - `bigquery.view-materialization-with-filter`
- Use filter conditions when materializing views.
- `false`
* - `bigquery.views-cache-ttl`
- Duration for which the materialization of a view will be cached and reused.
Set to `0ms` to disable the cache.
- `15m`
* - `bigquery.metadata.cache-ttl`
- Duration for which metadata retrieved from BigQuery is cached and reused.
Set to `0ms` to disable the cache.
- `0ms`
* - `bigquery.max-read-rows-retries`
@@ -174,37 +174,44 @@ a few caveats:
- Enable [query results cache](https://cloud.google.com/bigquery/docs/cached-results).
- `false`
* - `bigquery.arrow-serialization.enabled`
- Enable using Apache Arrow serialization when reading data from BigQuery.
Please read this [section](bigquery-arrow-serialization-support) before using this feature.
- `true`
* - `bigquery.rpc-proxy.enabled`
- Use a proxy for communication with BigQuery.
- `false`
* - `bigquery.rpc-proxy.uri`
- Proxy URI to use if connecting through a proxy.
-
* - `bigquery.rpc-proxy.username`
- Proxy user name to use if connecting through a proxy.
-
* - `bigquery.rpc-proxy.password`
- Proxy password to use if connecting through a proxy.
-
* - `bigquery.rpc-proxy.keystore-path`
- Keystore containing client certificates to present to proxy if connecting
through a proxy. Only required if proxy uses mutual TLS.
-
* - `bigquery.rpc-proxy.keystore-password`
- Password of the keystore specified by `bigquery.rpc-proxy.keystore-path`.
-
* - `bigquery.rpc-proxy.truststore-path`
- Truststore containing certificates of the proxy server if connecting
through a proxy.
-
* - `bigquery.rpc-proxy.truststore-password`
- Password of the truststore specified by `bigquery.rpc-proxy.truststore-path`.
-
:::

(bigquery-fte-support)=
### Fault-tolerant execution support

The connector supports {doc}`/admin/fault-tolerant-execution` of query
processing. Read and write operations are both supported with any retry policy.


(bigquery-type-mapping)=
## Type mapping

@@ -379,19 +386,13 @@ the following features:
```{include} sql-delete-limitation.fragment
```

### Table functions

The connector provides specific {doc}`table functions </functions/table>` to
access BigQuery.

(bigquery-query-function)=
#### `query(varchar) -> table`

The `query` function allows you to query the underlying BigQuery directly. It
requires syntax native to BigQuery, because the full query is pushed down and
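As a sketch of the pass-through syntax (the `example` catalog name and the table and column names inside the native query are illustrative assumptions):

```sql
SELECT
  *
FROM
  TABLE(
    example.system.query(
      query => 'SELECT flight_date, airline
                FROM `bigquery-public-data.faa.us_flights`
                LIMIT 10'
    )
  );
```

The string passed as `query` must be valid GoogleSQL, since BigQuery executes it directly.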
8 changes: 4 additions & 4 deletions docs/src/main/sphinx/connector/cassandra.md
@@ -313,13 +313,13 @@ statements, the connector supports the following features:
- {doc}`/sql/create-table-as`
- {doc}`/sql/drop-table`

### Table functions

The connector provides specific {doc}`table functions </functions/table>` to
access Cassandra.

(cassandra-query-function)=
#### `query(varchar) -> table`

The `query` function allows you to query the underlying Cassandra directly. It
requires syntax native to Cassandra, because the full query is pushed down and
@@ -377,8 +377,8 @@ cassandra.allow-drop-table=true

The query text is not parsed by Trino, only passed through, and is therefore
only subject to the security and access control of the underlying data source.
For example, the following system call adds the `your_column` column to the
`your_table` table in the `example` catalog.

```sql
4 changes: 2 additions & 2 deletions docs/src/main/sphinx/connector/clickhouse.md
@@ -336,13 +336,13 @@ statements, the connector supports the following features:
```{include} jdbc-procedures-execute.fragment
```

### Table functions

The connector provides specific {doc}`table functions </functions/table>` to
access ClickHouse.

(clickhouse-query-function)=
#### `query(varchar) -> table`

The `query` function allows you to query the underlying database directly. It
requires syntax native to ClickHouse, because the full query is pushed down and
17 changes: 9 additions & 8 deletions docs/src/main/sphinx/connector/delta-lake.md
@@ -218,6 +218,13 @@ The following table describes {ref}`catalog session properties
- `true`
:::

(delta-lake-fte-support)=
### Fault-tolerant execution support

The connector supports {doc}`/admin/fault-tolerant-execution` of query
processing. Read and write operations are both supported with any retry policy.


(delta-lake-type-mapping)=
## Type mapping

@@ -832,17 +839,11 @@ directly or used in conditional statements.
- `$file_size`
: Size of the file for this row.

### Table functions

The connector provides the following table functions:

#### table_changes

Allows reading Change Data Feed (CDF) entries to expose row-level changes
between two versions of a Delta Lake table. When the `change_data_feed_enabled`
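A minimal sketch of invoking the function (the `example` catalog, schema, and table names are assumptions; `since_version` selects the first version to read changes from):

```sql
SELECT *
FROM TABLE(
  example.system.table_changes(
    schema_name => 'default',
    table_name => 'orders',
    since_version => 0
  )
);
```

Rows returned include the row data plus change-tracking metadata such as the type of change and the table version that produced it.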
4 changes: 2 additions & 2 deletions docs/src/main/sphinx/connector/druid.md
@@ -127,13 +127,13 @@ metadata in the Druid database.
```{include} jdbc-procedures-execute.fragment
```

### Table functions

The connector provides specific {doc}`table functions </functions/table>` to
access Druid.

(druid-query-function)=
#### `query(varchar) -> table`

The `query` function allows you to query the underlying database directly. It
requires syntax native to Druid, because the full query is pushed down and
4 changes: 2 additions & 2 deletions docs/src/main/sphinx/connector/elasticsearch.md
@@ -403,13 +403,13 @@ The connector provides [globally available](sql-globally-available) and [read
operation](sql-read-operations) statements to access data and metadata in the
Elasticsearch catalog.

### Table functions

The connector provides specific {doc}`table functions </functions/table>` to
access Elasticsearch.

(elasticsearch-raw-query-function)=
#### `raw_query(varchar) -> table`

The `raw_query` function allows you to query the underlying database directly.
This function requires [Elastic Query
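A sketch of the call shape, assuming an `example` catalog, an index named `page_views`, and an illustrative Query DSL payload:

```sql
SELECT *
FROM TABLE(
  example.system.raw_query(
    schema => 'default',
    index => 'page_views',
    query => '{"query": {"match": {"user_id": {"query": "user_1"}}}}'
  )
);
```

The `query` argument is passed verbatim to Elasticsearch, so it must be a valid Query DSL JSON document.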
4 changes: 2 additions & 2 deletions docs/src/main/sphinx/connector/exasol.md
@@ -144,13 +144,13 @@ metadata in the Exasol database.
```{include} jdbc-procedures-execute.fragment
```

### Table functions

The connector provides specific {doc}`table functions </functions/table>` to
access Exasol.

(exasol-query-function)=
#### `query(varchar) -> table`

The `query` function allows you to query the underlying database directly. It
requires syntax native to Exasol, because the full query is pushed down and
4 changes: 2 additions & 2 deletions docs/src/main/sphinx/connector/googlesheets.md
@@ -141,12 +141,12 @@ this connector supports the following features:

- {doc}`/sql/insert`

### Table functions

The connector provides specific {doc}`/functions/table` to access Google Sheets.

(google-sheets-sheet-function)=
#### `sheet(id, range) -> table`

The `sheet` function allows you to query a Google Sheet directly without
specifying it as a named table in the metadata sheet.
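As a sketch (the catalog name, sheet ID, and range are placeholders):

```sql
SELECT *
FROM TABLE(
  example.system.sheet(
    id => 'exampleSheetIdHere',
    range => 'TabName!A1:B4'
  )
);
```

The `range` argument uses standard A1 notation and can be omitted to read the default range of the sheet.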
23 changes: 12 additions & 11 deletions docs/src/main/sphinx/connector/hive.md
@@ -287,6 +287,17 @@ You must enable and configure the specific native file system access. If none is
activated, the [legacy support](file-system-legacy) is used and must be
configured.

(hive-fte-support)=
### Fault-tolerant execution support

The connector supports {doc}`/admin/fault-tolerant-execution` of query
processing. Read and write operations are both supported with any retry policy
on non-transactional tables.

Read operations are supported with any retry policy on transactional tables.
Write operations and `CREATE TABLE ... AS` operations are not supported with
any retry policy on transactional tables.

(hive-security)=
## Security

@@ -641,7 +652,7 @@ type conversions.
* - `TIMESTAMP`
- `VARCHAR`, `DATE`
* - `VARBINARY`
- `VARCHAR`
:::

Any conversion failure results in null, which is the same behavior
@@ -1103,16 +1114,6 @@ functionality:
- Support all Hive data types and correct mapping to Trino types
- Ability to process custom UDFs


## Performance

17 changes: 9 additions & 8 deletions docs/src/main/sphinx/connector/iceberg.md
@@ -178,6 +178,13 @@ implementation is used:
- `true`
:::

(iceberg-fte-support)=
### Fault-tolerant execution support

The connector supports {doc}`/admin/fault-tolerant-execution` of query
processing. Read and write operations are both supported with any retry policy.


(iceberg-file-system-configuration)=
## File system access configuration

@@ -1534,17 +1541,11 @@ use the data from the storage tables, even after the grace period expired.
Dropping a materialized view with {doc}`/sql/drop-materialized-view` removes
the definition and the storage table.

### Table functions

The connector supports the table functions described in the following sections.

#### table_changes

Allows reading row-level changes between two versions of an Iceberg table.
The following query shows an example of displaying the changes of the `t1`
16 changes: 8 additions & 8 deletions docs/src/main/sphinx/connector/mariadb.md
@@ -55,6 +55,12 @@ properties files.
```{include} non-transactional-insert.fragment
```

(mariadb-fte-support)=
### Fault-tolerant execution support

The connector supports {doc}`/admin/fault-tolerant-execution` of query
processing. Read and write operations are both supported with any retry policy.

## Querying MariaDB

The MariaDB connector provides a schema for every MariaDB *database*.
@@ -299,26 +305,20 @@ statements, the connector supports the following features:
```{include} sql-delete-limitation.fragment
```

### Procedures

```{include} jdbc-procedures-flush.fragment
```
```{include} jdbc-procedures-execute.fragment
```

### Table functions

The connector provides specific {doc}`table functions </functions/table>` to
access MariaDB.

(mariadb-query-function)=
#### `query(varchar) -> table`

The `query` function allows you to query the underlying database directly. It
requires syntax native to MariaDB, because the full query is pushed down and