Adds documentation for ClickHouse support
linghengqian committed Nov 24, 2024
1 parent f72ebcb commit d2cfa11
Showing 12 changed files with 384 additions and 42 deletions.
@@ -264,6 +264,7 @@ Caused by: java.io.UnsupportedEncodingException: Codepage Cp1252 is not supported
In principle, ShardingSphere's GraalVM Native Image integration does not intend to use `com.clickhouse:clickhouse-jdbc` with the classifier `all`,
because the Uber Jar would cause duplicate GraalVM Reachability Metadata to be collected.
A possible configuration example is as follows,

```xml
<project>
    <dependencies>
        <!-- ... -->
    </dependencies>
</project>
```

ClickHouse does not support local transactions, XA transactions, or Seata AT mode transactions at the ShardingSphere integration level. More discussion is at https://github.com/ClickHouse/clickhouse-docs/issues/2300 .

7. Affected by https://github.com/grpc/grpc-java/issues/10601 , if users introduce `org.apache.hive:hive-jdbc` into their project,
they need to create a file named `native-image.properties`, containing the following content, under the `META-INF/native-image/io.grpc/grpc-netty-shaded` folder of the project's classpath,

@@ -277,6 +277,7 @@ users need to manually introduce the relevant optional modules and the ClickHouse
In principle, ShardingSphere's GraalVM Native Image integration does not intend to use `com.clickhouse:clickhouse-jdbc` with the classifier `all`,
because the Uber Jar would cause duplicate GraalVM Reachability Metadata to be collected.
A possible configuration example is as follows,

```xml
<project>
    <dependencies>
        <!-- ... -->
    </dependencies>
</project>
```

ClickHouse does not support local transactions, XA transactions, or Seata AT mode transactions at the ShardingSphere integration level.
More discussion is at https://github.com/ClickHouse/clickhouse-docs/issues/2300 .

7. Affected by https://github.com/grpc/grpc-java/issues/10601 , if users introduce `org.apache.hive:hive-jdbc` into their project,
they need to create a file named `native-image.properties`, containing the following content, within the `META-INF/native-image/io.grpc/grpc-netty-shaded` directory of the classpath,
@@ -0,0 +1,172 @@
+++
title = "ClickHouse"
weight = 6
+++

## Background Information

By default, ShardingSphere does not provide support for a `driverClassName` of `com.clickhouse.jdbc.ClickHouseDriver`.
ShardingSphere's support for the ClickHouse JDBC Driver is in an optional module.

## Prerequisites

To use a `jdbcUrl` like `jdbc:ch://localhost:8123/demo_ds_0` for the data nodes in ShardingSphere's configuration file,
the possible Maven dependencies are as follows,

```xml
<dependencies>
    <dependency>
        <groupId>org.apache.shardingsphere</groupId>
        <artifactId>shardingsphere-jdbc</artifactId>
        <version>${shardingsphere.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.shardingsphere</groupId>
        <artifactId>shardingsphere-parser-sql-clickhouse</artifactId>
        <version>${shardingsphere.version}</version>
    </dependency>
    <dependency>
        <groupId>com.clickhouse</groupId>
        <artifactId>clickhouse-jdbc</artifactId>
        <classifier>http</classifier>
        <version>0.6.3</version>
    </dependency>
</dependencies>
```

## Configuration Example

### Start ClickHouse

Write a Docker Compose file to start ClickHouse.

```yaml
services:
  clickhouse-server:
    image: clickhouse/clickhouse-server:24.10.2.80
    ports:
      - "8123:8123"
```

### Create Business Tables

Create the business databases and business tables in ClickHouse with a third-party tool.
Taking DBeaver Community as an example, on Ubuntu 22.04.4 it can be installed quickly through Snapcraft,
```shell
sudo apt update && sudo apt upgrade -y
sudo snap install dbeaver-ce
snap run dbeaver-ce
```

In DBeaver Community, connect to ClickHouse with a `jdbcUrl` of `jdbc:ch://localhost:8123/default` and a `username` of `default`,
leaving `password` blank.
Execute the following SQL,

```sql
-- noinspection SqlNoDataSourceInspectionForFile
CREATE DATABASE demo_ds_0;
CREATE DATABASE demo_ds_1;
CREATE DATABASE demo_ds_2;
```

Connect to ClickHouse with `jdbcUrl`s of `jdbc:ch://localhost:8123/demo_ds_0`,
`jdbc:ch://localhost:8123/demo_ds_1` and `jdbc:ch://localhost:8123/demo_ds_2` respectively, and execute the following SQL,

```sql
-- noinspection SqlNoDataSourceInspectionForFile
create table IF NOT EXISTS t_order (
    order_id Int64 NOT NULL DEFAULT rand(),
    order_type Int32,
    user_id Int32 NOT NULL,
    address_id Int64 NOT NULL,
    status String
) engine = MergeTree
primary key (order_id)
order by (order_id);

TRUNCATE TABLE t_order;
```

### Create a ShardingSphere Data Source in the Business Project

After introducing the dependencies covered in the `Prerequisites` into the business project,
write the ShardingSphere data source configuration file `demo.yaml` on the classpath of the business project,

```yaml
dataSources:
  ds_0:
    dataSourceClassName: com.zaxxer.hikari.HikariDataSource
    driverClassName: com.clickhouse.jdbc.ClickHouseDriver
    jdbcUrl: jdbc:ch://localhost:8123/demo_ds_0
  ds_1:
    dataSourceClassName: com.zaxxer.hikari.HikariDataSource
    driverClassName: com.clickhouse.jdbc.ClickHouseDriver
    jdbcUrl: jdbc:ch://localhost:8123/demo_ds_1
  ds_2:
    dataSourceClassName: com.zaxxer.hikari.HikariDataSource
    driverClassName: com.clickhouse.jdbc.ClickHouseDriver
    jdbcUrl: jdbc:ch://localhost:8123/demo_ds_2
rules:
- !SHARDING
  tables:
    t_order:
      actualDataNodes:
      keyGenerateStrategy:
        column: order_id
        keyGeneratorName: snowflake
  defaultDatabaseStrategy:
    standard:
      shardingColumn: user_id
      shardingAlgorithmName: inline
  shardingAlgorithms:
    inline:
      type: INLINE
      props:
        algorithm-expression: ds_${user_id % 2}
  keyGenerators:
    snowflake:
      type: SNOWFLAKE
```

### Enjoy the Integration

Create a ShardingSphere data source to enjoy the integration,
```java
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class ExampleUtils {

    void test() throws SQLException {
        HikariConfig config = new HikariConfig();
        // The ShardingSphere JDBC driver loads demo.yaml from the classpath.
        config.setJdbcUrl("jdbc:shardingsphere:classpath:demo.yaml");
        config.setDriverClassName("org.apache.shardingsphere.driver.ShardingSphereDriver");
        try (HikariDataSource dataSource = new HikariDataSource(config);
             Connection connection = dataSource.getConnection();
             Statement statement = connection.createStatement()) {
            statement.execute("INSERT INTO t_order (user_id, order_type, address_id, status) VALUES (1, 1, 1, 'INSERT_TEST')");
            statement.executeQuery("SELECT * FROM t_order");
            // ClickHouse deletes rows through an ALTER TABLE ... DELETE mutation.
            statement.execute("alter table t_order delete where order_id=1");
        }
    }
}
```
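
The `executeQuery` call above discards its result. A minimal follow-up sketch for consuming it through plain JDBC, assuming the `t_order` table and the data source built above; the class and method names are illustrative,

```java
import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class QueryExampleUtils {

    // Illustrative helper: pass in the data source created from demo.yaml above.
    void printOrders(final DataSource dataSource) throws SQLException {
        try (Connection connection = dataSource.getConnection();
             Statement statement = connection.createStatement();
             ResultSet resultSet = statement.executeQuery("SELECT * FROM t_order")) {
            while (resultSet.next()) {
                // Column names follow the t_order definition created earlier on this page.
                System.out.printf("order_id=%d, user_id=%d, status=%s%n",
                        resultSet.getLong("order_id"), resultSet.getInt("user_id"), resultSet.getString("status"));
            }
        }
    }
}
```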

## Usage Limitations

### SQL Limitations

The ShardingSphere JDBC DataSource does not yet support executing ClickHouse's `CREATE TABLE` and `TRUNCATE TABLE` statements.
Users should consider submitting a PR with unit tests to ShardingSphere.
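
In the meantime, such DDL can be issued directly against each physical ClickHouse database through the ClickHouse JDBC driver rather than through the ShardingSphere data source. A minimal sketch, reusing the `demo_ds_0` URL, the `default` user with an empty password, and the `t_order` DDL from this page; the class and method names are illustrative,

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class SchemaInitUtils {

    // Illustrative helper: run DDL per physical database; repeat for demo_ds_1 and demo_ds_2.
    void initSchema() throws SQLException {
        try (Connection connection = DriverManager.getConnection("jdbc:ch://localhost:8123/demo_ds_0", "default", "");
             Statement statement = connection.createStatement()) {
            statement.execute("create table IF NOT EXISTS t_order ("
                    + "order_id Int64 NOT NULL DEFAULT rand(), order_type Int32, user_id Int32 NOT NULL, "
                    + "address_id Int64 NOT NULL, status String"
                    + ") engine = MergeTree primary key (order_id) order by (order_id)");
            statement.execute("TRUNCATE TABLE t_order");
        }
    }
}
```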

### Transaction Limitations

ClickHouse does not support ShardingSphere integration-level local transactions, XA transactions, or Seata AT mode transactions.
More discussion is at https://github.com/ClickHouse/clickhouse-docs/issues/2300 .
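
In practice this means code that goes through the ShardingSphere data source should rely on the JDBC auto-commit default and avoid explicit transaction demarcation for ClickHouse. A minimal sketch under that reading; the class and method names are illustrative,

```java
import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class AutoCommitExampleUtils {

    // Illustrative helper: each statement is applied in auto-commit mode.
    void insertWithoutTransaction(final DataSource dataSource) throws SQLException {
        try (Connection connection = dataSource.getConnection();
             Statement statement = connection.createStatement()) {
            // Do not call setAutoCommit(false), commit() or rollback(),
            // and do not configure XA or Seata AT transactions for ClickHouse data sources.
            statement.execute("INSERT INTO t_order (user_id, order_type, address_id, status) VALUES (1, 1, 1, 'INSERT_TEST')");
        }
    }
}
```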

### Embedded ClickHouse Limitations

Embedded ClickHouse has not yet released a Java client,
and ShardingSphere does not run integration tests against the SNAPSHOT version of the embedded ClickHouse `chDB`.
Refer to https://github.com/chdb-io/chdb-java .
@@ -0,0 +1,175 @@
+++
title = "ClickHouse"
weight = 6
+++

## Background Information

By default, ShardingSphere does not provide support for a `driverClassName` of `com.clickhouse.jdbc.ClickHouseDriver`.
ShardingSphere's support for the ClickHouse JDBC Driver is in an optional module.

## Prerequisites

To use a `jdbcUrl` like `jdbc:ch://localhost:8123/demo_ds_0` for the data node in the ShardingSphere configuration file,
the possible Maven dependencies are as follows,

```xml
<dependencies>
    <dependency>
        <groupId>org.apache.shardingsphere</groupId>
        <artifactId>shardingsphere-jdbc</artifactId>
        <version>${shardingsphere.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.shardingsphere</groupId>
        <artifactId>shardingsphere-parser-sql-clickhouse</artifactId>
        <version>${shardingsphere.version}</version>
    </dependency>
    <dependency>
        <groupId>com.clickhouse</groupId>
        <artifactId>clickhouse-jdbc</artifactId>
        <classifier>http</classifier>
        <version>0.6.3</version>
    </dependency>
</dependencies>
```

## Configuration Example

### Start ClickHouse

Write a Docker Compose file to start ClickHouse.

```yaml
services:
  clickhouse-server:
    image: clickhouse/clickhouse-server:24.10.2.80
    ports:
      - "8123:8123"
```

### Create Business Tables

Use a third-party tool to create the business databases and business tables in ClickHouse.
Taking DBeaver Community as an example, on Ubuntu 22.04.4 it can be installed quickly through Snapcraft,
```shell
sudo apt update && sudo apt upgrade -y
sudo snap install dbeaver-ce
snap run dbeaver-ce
```

In DBeaver Community, connect to ClickHouse with a `jdbcUrl` of `jdbc:ch://localhost:8123/default` and a `username` of `default`,
leaving `password` blank.
Execute the following SQL,

```sql
-- noinspection SqlNoDataSourceInspectionForFile
CREATE DATABASE demo_ds_0;
CREATE DATABASE demo_ds_1;
CREATE DATABASE demo_ds_2;
```

Connect to ClickHouse with `jdbcUrl`s of `jdbc:ch://localhost:8123/demo_ds_0`,
`jdbc:ch://localhost:8123/demo_ds_1` and `jdbc:ch://localhost:8123/demo_ds_2` respectively,
and execute the following SQL,

```sql
-- noinspection SqlNoDataSourceInspectionForFile
create table IF NOT EXISTS t_order (
    order_id Int64 NOT NULL DEFAULT rand(),
    order_type Int32,
    user_id Int32 NOT NULL,
    address_id Int64 NOT NULL,
    status String
) engine = MergeTree
primary key (order_id)
order by (order_id);

TRUNCATE TABLE t_order;
```

### Create a ShardingSphere Data Source in the Business Project

After introducing the dependencies covered in the `Prerequisites` into the business project,
write the ShardingSphere data source configuration file `demo.yaml` on the classpath of the business project,

```yaml
dataSources:
  ds_0:
    dataSourceClassName: com.zaxxer.hikari.HikariDataSource
    driverClassName: com.clickhouse.jdbc.ClickHouseDriver
    jdbcUrl: jdbc:ch://localhost:8123/demo_ds_0
  ds_1:
    dataSourceClassName: com.zaxxer.hikari.HikariDataSource
    driverClassName: com.clickhouse.jdbc.ClickHouseDriver
    jdbcUrl: jdbc:ch://localhost:8123/demo_ds_1
  ds_2:
    dataSourceClassName: com.zaxxer.hikari.HikariDataSource
    driverClassName: com.clickhouse.jdbc.ClickHouseDriver
    jdbcUrl: jdbc:ch://localhost:8123/demo_ds_2
rules:
- !SHARDING
  tables:
    t_order:
      actualDataNodes:
      keyGenerateStrategy:
        column: order_id
        keyGeneratorName: snowflake
  defaultDatabaseStrategy:
    standard:
      shardingColumn: user_id
      shardingAlgorithmName: inline
  shardingAlgorithms:
    inline:
      type: INLINE
      props:
        algorithm-expression: ds_${user_id % 2}
  keyGenerators:
    snowflake:
      type: SNOWFLAKE
```

### Enjoy the Integration

Create a ShardingSphere data source to enjoy the integration,
```java
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class ExampleUtils {

    void test() throws SQLException {
        HikariConfig config = new HikariConfig();
        // The ShardingSphere JDBC driver loads demo.yaml from the classpath.
        config.setJdbcUrl("jdbc:shardingsphere:classpath:demo.yaml");
        config.setDriverClassName("org.apache.shardingsphere.driver.ShardingSphereDriver");
        try (HikariDataSource dataSource = new HikariDataSource(config);
             Connection connection = dataSource.getConnection();
             Statement statement = connection.createStatement()) {
            statement.execute("INSERT INTO t_order (user_id, order_type, address_id, status) VALUES (1, 1, 1, 'INSERT_TEST')");
            statement.executeQuery("SELECT * FROM t_order");
            // ClickHouse deletes rows through an ALTER TABLE ... DELETE mutation.
            statement.execute("alter table t_order delete where order_id=1");
        }
    }
}
```
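
The `executeQuery` call above discards its result. A minimal follow-up sketch for consuming it through plain JDBC, assuming the `t_order` table and the data source built above; the class and method names are illustrative,

```java
import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class QueryExampleUtils {

    // Illustrative helper: pass in the data source created from demo.yaml above.
    void printOrders(final DataSource dataSource) throws SQLException {
        try (Connection connection = dataSource.getConnection();
             Statement statement = connection.createStatement();
             ResultSet resultSet = statement.executeQuery("SELECT * FROM t_order")) {
            while (resultSet.next()) {
                // Column names follow the t_order definition created earlier on this page.
                System.out.printf("order_id=%d, user_id=%d, status=%s%n",
                        resultSet.getLong("order_id"), resultSet.getInt("user_id"), resultSet.getString("status"));
            }
        }
    }
}
```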

## Usage Limitations

### SQL Limitations

ShardingSphere JDBC DataSource does not yet support the execution of ClickHouse's `CREATE TABLE` statement and `TRUNCATE TABLE` statement.
Users should consider submitting a PR containing unit tests for ShardingSphere.
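
In the meantime, such DDL can be issued directly against each physical ClickHouse database through the ClickHouse JDBC driver rather than through the ShardingSphere data source. A minimal sketch, reusing the `demo_ds_0` URL, the `default` user with an empty password, and the `t_order` DDL from this page; the class and method names are illustrative,

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class SchemaInitUtils {

    // Illustrative helper: run DDL per physical database; repeat for demo_ds_1 and demo_ds_2.
    void initSchema() throws SQLException {
        try (Connection connection = DriverManager.getConnection("jdbc:ch://localhost:8123/demo_ds_0", "default", "");
             Statement statement = connection.createStatement()) {
            statement.execute("create table IF NOT EXISTS t_order ("
                    + "order_id Int64 NOT NULL DEFAULT rand(), order_type Int32, user_id Int32 NOT NULL, "
                    + "address_id Int64 NOT NULL, status String"
                    + ") engine = MergeTree primary key (order_id) order by (order_id)");
            statement.execute("TRUNCATE TABLE t_order");
        }
    }
}
```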

### Transaction Limitations

ClickHouse does not support ShardingSphere integration-level local transactions, XA transactions, or Seata AT mode transactions.
More discussion is at https://github.com/ClickHouse/clickhouse-docs/issues/2300 .
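
In practice this means code that goes through the ShardingSphere data source should rely on the JDBC auto-commit default and avoid explicit transaction demarcation for ClickHouse. A minimal sketch under that reading; the class and method names are illustrative,

```java
import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class AutoCommitExampleUtils {

    // Illustrative helper: each statement is applied in auto-commit mode.
    void insertWithoutTransaction(final DataSource dataSource) throws SQLException {
        try (Connection connection = dataSource.getConnection();
             Statement statement = connection.createStatement()) {
            // Do not call setAutoCommit(false), commit() or rollback(),
            // and do not configure XA or Seata AT transactions for ClickHouse data sources.
            statement.execute("INSERT INTO t_order (user_id, order_type, address_id, status) VALUES (1, 1, 1, 'INSERT_TEST')");
        }
    }
}
```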

### Embedded ClickHouse Limitations

Embedded ClickHouse has not yet released a Java client,
and ShardingSphere does not do integration tests for the SNAPSHOT version of embedded ClickHouse `chDB`.
Refer to https://github.com/chdb-io/chdb-java .
@@ -75,6 +75,10 @@ ShardingSphere's support for the HiveServer2 JDBC Driver is in an optional module.
                <groupId>com.fasterxml.woodstox</groupId>
                <artifactId>woodstox-core</artifactId>
            </exclusion>
            <exclusion>
                <groupId>org.apache.commons</groupId>
                <artifactId>commons-text</artifactId>
            </exclusion>
        </exclusions>
    </dependency>
</dependencies>
