diff --git a/content/en/docs/user-guide/cluster_deployment/with_meta.md b/content/en/docs/user-guide/cluster_deployment/with_meta.md
index 1141a465..1023604c 100644
--- a/content/en/docs/user-guide/cluster_deployment/with_meta.md
+++ b/content/en/docs/user-guide/cluster_deployment/with_meta.md
@@ -187,9 +187,28 @@ timeout = "5s"
 server_addrs = ['http://{HoraeMetaAddr}:2379']
 ```
 
+### Compaction Offload
+
+Compaction offload is also supported. To enable it, configure the corresponding compaction mode with a node picker and an endpoint.
+
+- `node_picker`: There are two types of node picker: `Local` and `Remote` (WIP).
+  - When `Local` is set, local compaction tasks are offloaded to a specific remote compaction server, determined by `endpoint`.
+- `endpoint`: The endpoint, in the form `addr:port`, pointing at the _gRPC port_ of the remote compaction server.
+
+Here is an example:
+
+```toml
+[analytic.compaction_mode]
+compaction_mode = "Offload"
+node_picker = "Local"
+endpoint = "{RemoteCompactionServerAddr}:{RemoteCompactionServerGrpcPort}"
+```
+
+A compaction server, responsible for executing the offloaded compaction tasks, is also needed. Currently `horaedb-server` acts in this role; in the future it may be moved to an independent service.
+
 ### Complete Config of HoraeDB
 
-With all the parts of the configurations mentioned above, a runnable complete config for HoraeDB can be made. In order to make the HoraeDB cluster runnable, we can decide to adopt RocksDB-based WAL and local-disk-based Object Storage:
+With all the parts of the configurations mentioned above, a runnable complete config for HoraeDB can be made. In order to make the HoraeDB cluster runnable, we adopt the RocksDB-based WAL and local-disk-based Object Storage, without compaction offload:
 
 ```toml
 [server]