diff --git a/docs-2.0/2.quick-start/3.1add-storage-hosts.md b/docs-2.0/2.quick-start/3.1add-storage-hosts.md index 8199037ec6b..b3bd0e0b473 100644 --- a/docs-2.0/2.quick-start/3.1add-storage-hosts.md +++ b/docs-2.0/2.quick-start/3.1add-storage-hosts.md @@ -21,21 +21,30 @@ You have [connected to NebulaGraph](3.connect-to-nebula-graph.md). ADD HOSTS <ip>:<port> [,<ip>:<port> ...]; ``` - - Example: + {{ent.ent_begin}} + + If the [Zone](../../4.deployment-and-installation/5.zone.md) feature is enabled, you must also specify `INTO ZONE <zone_name>` when adding Storage hosts; otherwise, the Storage hosts will fail to be added. ```ngql - nebula> ADD HOSTS 192.168.10.100:9779, 192.168.10.101:9779, 192.168.10.102:9779; + ADD HOSTS <ip>:<port> [,<ip>:<port> ...] INTO ZONE <zone_name>; + ``` + + Example: + + ```ngql + nebula> ADD HOSTS 192.168.8.111:9779,192.168.8.112:9779 INTO ZONE az1; ``` + {{ent.ent_end}} !!! caution - Make sure that the IP you added is the same as the IP configured for `local_ip` in the `nebula-storaged.conf` file. Otherwise, the Storage service will fail to start. For information about configurations, see [Configurations](../5.configurations-and-logs/1.configurations/1.configurations.md). + Make sure that the IP you added is the same as the IP configured for `local_ip` in the `nebula-storaged.conf` file. Otherwise, the Storage service will fail to start. For information about configurations, see [Configurations](../5.configurations-and-logs/1.configurations/1.configurations.md). 2. Check the status of the hosts to make sure that they are all online. diff --git a/docs-2.0/2.quick-start/6.cheatsheet-for-ngql.md b/docs-2.0/2.quick-start/6.cheatsheet-for-ngql.md index b2a4df93a6f..ab3bef92392 100644 --- a/docs-2.0/2.quick-start/6.cheatsheet-for-ngql.md +++ b/docs-2.0/2.quick-start/6.cheatsheet-for-ngql.md @@ -435,10 +435,9 @@ |Syntax|Description| |-|-| |`BALANCE LEADER`| Starts a job to balance the distribution of all the storage leaders in graph spaces. It returns the job ID.| - + |`BALANCE DATA`| Starts a job to balance the distribution of all the storage partitions in graph spaces. It returns the job ID. **For enterprise edition only.**| + |`BALANCE DATA REMOVE <ip>:<port> [,<ip>:<port> ...]`| Starts a job to migrate the specified storage partitions. The default port is `9779`. **For enterprise edition only.**| + |`BALANCE DATA IN ZONE [REMOVE <ip>:<port> [,<ip>:<port> ...]]`| Starts a job to balance the distribution of storage partitions in each zone in the current graph space. It returns the job ID. You can use the `REMOVE` option to specify the partitions of storage services that you want to migrate to other storage services. **For enterprise edition only.**| * [Job statements](../3.ngql-guide/4.job-statements.md) @@ -448,14 +447,11 @@ | `SUBMIT JOB COMPACT` | Triggers the long-term RocksDB `compact` operation. | | `SUBMIT JOB FLUSH` | Writes the RocksDB memfile in the memory to the hard disk. | | `SUBMIT JOB STATS` | Starts a job that collects the statistics of the current graph space. Once this job succeeds, you can use the `SHOW STATS` statement to list the statistics. | + | `SUBMIT JOB BALANCE DATA IN ZONE`| Starts a job to balance partition replicas within each Zone. **For enterprise edition only.**| | `SHOW JOB <job_id>` | Shows the information about a specific job and all its tasks in the current graph space. The Meta Service parses a `SUBMIT JOB` request into multiple tasks and assigns them to the nebula-storaged processes. | | `SHOW JOBS` | Lists all the unexpired jobs in the current graph space. | | `STOP JOB` | Stops jobs that are not finished in the current graph space.
| | `RECOVER JOB` | Re-executes the failed jobs in the current graph space and returns the number of recovered jobs. | - * [Kill queries](../3.ngql-guide/17.query-tuning-statements/6.kill-query.md) diff --git a/docs-2.0/20.appendix/learning-path.md b/docs-2.0/20.appendix/learning-path.md index 936fbf96341..71a6e999efd 100644 --- a/docs-2.0/20.appendix/learning-path.md +++ b/docs-2.0/20.appendix/learning-path.md @@ -141,13 +141,14 @@ After completing the NebulaGraph learning path, taking [NebulaGraph Certificatio | Document | | ------------------------------------------------------------ | |[Backup&Restore](../backup-and-restore/nebula-br/1.what-is-br.md)| - + |[Zone](../4.deployment-and-installation/5.zone.md)| + {{ent.ent_end}} - SSL encryption diff --git a/docs-2.0/3.ngql-guide/4.job-statements.md b/docs-2.0/3.ngql-guide/4.job-statements.md index e865e44366e..a006f771fdf 100644 --- a/docs-2.0/3.ngql-guide/4.job-statements.md +++ b/docs-2.0/3.ngql-guide/4.job-statements.md @@ -52,6 +52,62 @@ nebula> SUBMIT JOB BALANCE DATA REMOVE 192.168.8.100:9779; +------------+ ``` +## SUBMIT JOB BALANCE DATA IN ZONE + +!!! enterpriseonly + + Only available for the NebulaGraph Enterprise Edition. + +The `SUBMIT JOB BALANCE DATA IN ZONE` statement starts a job to balance partition replicas within each Zone. It returns the job ID. + + + +For details on Zones, see [Manage Zones](../4.deployment-and-installation/5.zone.md). + +For example: + +```ngql +# Balance partition replicas within each Zone in the current space. +nebula> SUBMIT JOB BALANCE DATA IN ZONE; ++------------+ +| New Job Id | ++------------+ +| 25 | ++------------+ +``` + + + +## SUBMIT JOB BALANCE DATA IN ZONE REMOVE + +!!! enterpriseonly + + Only available for the NebulaGraph Enterprise Edition. + +The `SUBMIT JOB BALANCE DATA IN ZONE REMOVE` statement starts a job to clear the partitions on the specified Storage nodes in Zones in the current graph space. It returns the job ID. Before clearing the Storage nodes, make sure that the remaining Storage nodes in the Zones can meet the configured number of replicas. For example, if the number of replicas is set to 3, make sure that the number of remaining Storage nodes is greater than or equal to 3 before executing this command. + +For details on Zones, see [Manage Zones](../4.deployment-and-installation/5.zone.md). + +For example: + +```ngql +# Clear the partitions on the specified Storage nodes. +nebula> SUBMIT JOB BALANCE DATA IN ZONE REMOVE 192.168.10.101:9779,192.168.10.102:9779; ++------------+ +| New Job Id | ++------------+ +| 26 | ++------------+ +``` + {{ ent.ent_end }} ## SUBMIT JOB BALANCE LEADER @@ -70,14 +126,14 @@ nebula> SUBMIT JOB BALANCE LEADER; ``` !!! caution @@ -58,6 +55,17 @@ CREATE SPACE [IF NOT EXISTS] ( `graph_space_name`, `partition_num`, `replica_factor`, `vid_type`, and `comment` cannot be modified once set. To modify them, drop the current working graph space with [`DROP SPACE`](./5.drop-space.md) and create a new one with `CREATE SPACE`. +{{ent.ent_begin}} + +When creating a graph space, the system automatically recognizes the value of `--zone_list` in the Meta configuration file, which determines whether the Zone feature is enabled: + + - If the value is empty, it means the Zone feature is not enabled. In this case, the graph space will be created without specifying Zones.
- If the value is not empty, and the number of Zones in `--zone_list` is equal to the number of replicas specified by `replica_factor`, the replicas of each partition in the graph space will be evenly distributed across the Zones specified in `--zone_list`. If the specified number of replicas is not equal to the number of Zones, the creation of the graph space will fail. + +For more details on Zones, see [Manage Zones](../../4.deployment-and-installation/5.zone.md). + +{{ent.ent_end}} + ### Clone graph spaces ```ngql diff --git a/docs-2.0/3.ngql-guide/9.space-statements/4.describe-space.md b/docs-2.0/3.ngql-guide/9.space-statements/4.describe-space.md index b0ade55302d..63839b2ee63 100644 --- a/docs-2.0/3.ngql-guide/9.space-statements/4.describe-space.md +++ b/docs-2.0/3.ngql-guide/9.space-statements/4.describe-space.md @@ -23,14 +23,18 @@ nebula> DESCRIBE SPACE basketballplayer; +----+--------------------+------------------+----------------+---------+------------+--------------------+---------+ ``` - +{{ent.ent_end}} diff --git a/docs-2.0/4.deployment-and-installation/2.compile-and-install-nebula-graph/deploy-nebula-graph-cluster.md b/docs-2.0/4.deployment-and-installation/2.compile-and-install-nebula-graph/deploy-nebula-graph-cluster.md index 45109338d7b..4603060c98a 100644 --- a/docs-2.0/4.deployment-and-installation/2.compile-and-install-nebula-graph/deploy-nebula-graph-cluster.md +++ b/docs-2.0/4.deployment-and-installation/2.compile-and-install-nebula-graph/deploy-nebula-graph-cluster.md @@ -264,6 +264,18 @@ Users can refer to the content of the following configurations, which only show --port=9779 ``` +{{ent.ent_begin}} +### (Optional) Configure Zones + +!!! enterpriseonly + + This section is only applicable to NebulaGraph Enterprise. + +A Zone is a logical rack for Storage nodes. You can set up Zones and add specified Storage nodes into these Zones. By configuring the Graph service to preferentially access a specified Zone, you can achieve resource isolation and directed data access, thereby reducing traffic consumption and cutting costs. + +For details, see [Manage Zones](../../4.deployment-and-installation/5.zone.md). +{{ent.ent_end}} + ### Start the cluster Start the corresponding service on **each machine**. Descriptions are as follows. diff --git a/docs-2.0/4.deployment-and-installation/5.zone.md b/docs-2.0/4.deployment-and-installation/5.zone.md index ae98ed5200d..73d371c3b95 100644 --- a/docs-2.0/4.deployment-and-installation/5.zone.md +++ b/docs-2.0/4.deployment-and-installation/5.zone.md @@ -1,117 +1,243 @@ -# Manage zone +# Manage Zones + +!!! enterpriseonly + + This feature is only available in the NebulaGraph Enterprise Edition. NebulaGraph supports the Zone feature, which manages the Storage services in a cluster as logical racks to achieve resource isolation. ## Background -!!! compatibility +Within NebulaGraph, you can set up multiple Zones, with each Zone containing one or more Storage nodes. When creating a graph space, the system automatically recognizes the configured Zones and stores graph space data on the Storage nodes within these Zones. - From NebulaGraph version 3.0.0, the Storage services added in the configuration files **CANNOT** be read or written directly. The configuration files only register the Storage services into the Meta services. You must run the `ADD HOSTS` command to read and write data on Storage servers. In NebulaGraph Cloud clusters, the Storage services are added automatically.
+It's important to note that when creating a graph space, you need to specify the number of partition replicas. **The specified number of partition replicas must equal the number of configured Zones; otherwise, the graph space cannot be created**. This is because NebulaGraph evenly distributes the replicas of every graph space partition across the configured Zones. -Users can add the Storage services to a zone. Users specify a zone when creating a graph space, the graph space will be created in all the Storage services of the zone. Partitions and replicas are evenly stored in each zone. As shown in the figure below. +Since each Zone contains a complete set of graph space data partitions, at least one Storage node is required within each Zone to store these data partitions. -![Zone figure](https://docs-cdn.nebula-graph.com.cn/figures/zone1.png) +The partition replicas in NebulaGraph achieve strong consistency through the [Raft](../1.introduction/3.nebula-graph-architecture/4.storage-service.md#raft_1) protocol. It is recommended to use an odd number of partition replicas, and therefore it is also recommended to set an odd number of Zones. -Six machines with each machine having one Storage service are grouped in pairs and added to three zones. Specify these three zones to create graph space S1, the partitions and replicas are evenly stored in zone1~zone3. +Taking the following figure as an example, when creating a graph space (S1), the data is partitioned into 3 partitions, with 3 replicas for each partition, and the number of Zones is also 3. Six machines hosting the Storage service are paired up and added to these 3 Zones. When creating the graph space, NebulaGraph stores the 3 replicas of each partition evenly across zone1, zone2, and zone3, and each Zone contains a complete set of graph space data partitions (Part1, Part2, and Part3). -## Scenarios +example_for_zones + +To reduce the cost of cross-zone network traffic and increase data transfer speed (intra-zone networks usually have lower latency than inter-zone networks), you can configure the Graph service to prioritize intra-zone data access. Each Graphd then preferentially accesses partition replicas in its assigned zone, if any exist. As an example, suppose Graphd A and Graphd B are located in zone1, Graphd C and Graphd D are in zone2, and Graphd E is in zone3. You can configure Graphd A and Graphd B to prioritize accessing data in zone1, Graphd C and Graphd D to prioritize accessing data in zone2, and Graphd E to prioritize accessing data in zone3. This helps reduce the cost of cross-zone network traffic and improves data transfer speed. + +example_for_intra_zone -- Create a graph space in some specified Storage services to isolate resources. + + +## Scenarios + +- Resource isolation. You can create a graph space on specified storage nodes to achieve resource isolation. +- Rolling upgrade. Stop one or more servers, update them, and put them back into use, repeating this process until all servers in the cluster are updated to the new version. +- Cost saving. Allocate graph space data to different Zones, and direct clients to access replica data in a specified Zone to reduce traffic consumption and improve access efficiency. ## Syntax -### ADD HOSTS...INTO NEW ZONE +- Before enabling the Zone feature, clear any existing data in the cluster. See **Enable Zone** for details. +- Each Storage node must belong to one, and only one, Zone. However, a Zone can have multiple Storage nodes.
Storage nodes should outnumber or equal Zones. +- The number of Zones must equal the number of partition replicas; otherwise, the graph space cannot be created. +- The number of Zones is recommended to be odd. +- Adjusting the number of Zones isn't allowed. +- Zone name modifications are unsupported. Add the Storage services to a new zone. -```ngql -ADD HOSTS : [,: ...] [INTO NEW ZONE ""]; -``` +1. In the configuration file `nebula-metad.conf` of the Meta service, set `--zone_list` to Zone names to be added, such as `--zone_list=zone1, zone2, zone3`. Example: -```ngql -nebula> ADD HOSTS 192.168.10.100:9779, 192.168.10.101:9779, 192.168.10.102:9779; -``` + Once the value of `--zone_list` is configured and the Meta service is started, it cannot be modified, otherwise, the Meta service will fail to restart. + + !!! note + + - The number of Zones specified in `--zone_list` is recommended to be odd and must be less than or equal to the number of Storage nodes. When `--zone_list` is empty, it indicates that the Zone feature is disabled. + - Consider the replica settings when setting the number of Zones, since the number of Zones should match the replica count. For example, with 3 replicas, you must have 3 Zones. + - If the name of a Zone contains special characters (excluding underscores), reserved keywords, or starts with a number, you need to enclose the Zone name in backticks (`) when specifying the Zone name in a query statement; the Zone name cannot contain English periods (.); multiple Zone names must be separated by commas (,). + + For more information about the Meta configuration file, see [Meta service configuration](../5.configurations-and-logs/1.configurations/2.meta-config.md). + +2. Restart the Meta service. + + +## Specify intra Zone data access -If `INTO NEW ZONE ""` is not used, each Storage service is added to an independent zone created automatically, the name format of which is `default_zone__`. +1. Enable the Zone feature. For details, see **Enable Zone** above. +2. In the configuration file `nebula-graphd.conf` of the Graph service, add the following configuration: + + 1. Set the `--assigned_zone` to the name of the Zone where the Graphd is assigned, such as `--assigned_zone=zone1`. + + !!! note + + - Different Graph services can set different values for `--assigned_zone`, but the value of `--assigned_zone` must be one of the values in `--zone_list`. In production, it is recommended to use the actual zone that a Graphd locates to reduce management complexity. Of course, it must be within the `zone_list`. Otherwise, intra zone reading may not take effect. + - The value of `--assigned_zone` is a string and does not support English commas (,). + - When `--assigned_zone` is empty, it means reading from leader replicas. + + 2. Set `--prioritize_intra_zone_reading` to `true` to prioritize intra zone data reading. When reading fails in the Zone specified by `--assigned_zone`, an error occurs depending on the value of `stick_to_intra_zone_on_failure`. + + !!! caution + + It is recommended that the values of `--prioritize_intra_zone_reading` in different Graph services be consistent, otherwise, the load of Storage nodes will be unbalanced and unknown risks will occur. + + For details on the Graph configuration, see [Graph service configuration](../5.configurations-and-logs/1.configurations/3.graph-config.md). + +3. Restart the Graph service. + + +## Zone-related commands + +!!! 
note + + Make sure that the Zone feature is enabled and the `--zone_list` is configured before executing Zone-related commands. For details, see **Enable Zone** above. + +### View all Zone information ```ngql nebula> SHOW ZONES; -+------------------------------------+------------------+------+ -| Name | Host | Port | -+------------------------------------+------------------+------+ -| "default_zone_192.168.10.100_9779" | "192.168.10.100" | 9779 | -| "default_zone_192.168.10.101_9779" | "192.168.10.101" | 9779 | -| "default_zone_192.168.10.102_9779" | "192.168.10.102" | 9779 | -+------------------------------------+------------------+------+ ++--------+-----------------+------+ +| Name | Host | Port | ++--------+-----------------+------+ +| "az1" | "192.168.8.111" | 9779 | +| "az1" | "192.168.8.112" | 9779 | +| "az2" | "192.168.8.113" | 9779 | +| "az3" | "192.168.8.114" | 9779 | ++--------+-----------------+------+ ``` -### ADD HOSTS...INTO ZONE +Run `SHOW ZONES` in the current graph space to view all Zone information. The Zone information includes the name of the Zone, the IP address (or domain name) and the port number of the storage node in the Zone. -Add the Storage services to an existing zone. +### View the specified Zone - +For example: ```ngql -ADD HOSTS : [,: ...] INTO ZONE ""; +nebula> DESC ZONE az1 ++-----------------+------+ +| Hosts | Port | ++-----------------+------+ +| "192.168.8.111" | 7779 | +| "192.168.8.112" | 9779 | ++-----------------+------+ ``` -### DROP HOSTS +### Create a space in the specified Zones -Delete the Storage services from cluster. +The syntax for creating a graph space within a Zone is the same as in [Creating Graph Space](../3.ngql-guide/9.space-statements/1.create-space.md). -!!! note +However, during graph space creation, the system automatically recognizes the `--zone_list` value from the Meta configuration file. If this value is not empty and the number of Zones matches the partition replica count specified by `replica_factor`, the graph space's replicas will be evenly distributed across the Zones in `--zone_list`. If the specified replica count doesn't match the number of Zones, graph space creation will fail. + +If the value of `--zone_list` is empty, the Zone feature is not enabled, and the graph space will be created without specifying Zones. - You can not delete an in-use Storage service directly. You need to delete the associated graph space before deleting the Storage service. +### Check the Zones for the specified graph space ```ngql -DROP HOSTS : [,: ...]; +DESC SPACE ; ``` -### SHOW ZONES +For example: + +```ngql +nebula> DESC SPACE my_space_1 ++----+--------------+------------------+----------------+---------+------------+--------------------+---------+---------+ +| ID | Name | Partition Number | Replica Factor | Charset | Collate | Vid Type | Zones | Comment | ++----+--------------+------------------+----------------+---------+------------+--------------------+---------+---------+ +| 22 | "my_space_1" | 10 | 1 | "utf8" | "utf8_bin" | "FIXED_STRING(30)" | ["az1"] | | ++----+--------------+------------------+----------------+---------+------------+--------------------+---------+---------+ +``` -View all zones. +### Add Storage nodes to the specified Zone ```ngql -SHOW ZONES; +ADD HOSTS : [,: ...] INTO ZONE ; ``` -### DESC ZONE +- After enabling the Zone feature, you must include the `INTO ZONE` clause when executing the `ADD HOSTS` command; otherwise, adding a Storage node will fail. 
+- A Storage node can belong to only one Zone, but a single Zone can encompass multiple different Storage nodes. + + +For example: -View a specified zone. +```ngql +nebula> ADD HOSTS 192.168.8.111:9779,192.168.8.112:9779 INTO ZONE az1; +``` + +### Balance the Zone replicas ```ngql -DESCRIBE ZONE ""; -DESC ZONE ""; +BALANCE DATA IN ZONE; ``` -### RENAME ZONE +!!! note + + Specify a space before executing this command. + +After enabling the Zone feature, run `BALANCE DATA IN ZONE` to balance the partition replicas within each Zone. -Rename a zone. +For example: ```ngql -RENAME ZONE "" TO ""; +nebula> USE my_space_1; +nebula> BALANCE DATA IN ZONE; ``` -### DROP ZONE +### Migrate partitions from the Storage nodes in the specified Zones to other Storage nodes -Delete a zone. +```ngql +BALANCE DATA IN ZONE REMOVE : [,: ...] +``` !!! note - You can delete a zone only when there are no partitions in the zone. + - You must specify a space before executing this command. + - Make sure that the number of other Storage nodes is sufficient to meet the set number of partition replicas. When the number of Storage nodes is insufficient, the removal will fail. Run `SHOW JOBS ` to view the status of the removal task. When `FINISHED` is returned, the removal task is completed. + + +For example: + +```ngql +nebula> USE my_space_1; +nebula> BALANCE DATA IN ZONE REMOVE 192.168.8.111:9779; ++------------+ +| New Job Id | ++------------+ +| 34 | ++------------+ + +# To view the status of the removal task: +nebula> SHOW JOBS 34 ++--------+----------------+------------+----------------------------+----------------------------+ +| Job Id | Command | Status | Start Time | Stop Time | ++--------+----------------+------------+----------------------------+----------------------------+ +| 33 | "DATA_BALANCE" | "FINISHED" | 2023-09-01T08:03:16.000000 | 2023-09-01T08:03:16.000000 | ++--------+----------------+------------+----------------------------+----------------------------+ +``` + +### Drop Storage nodes from the specified Zone + +```ngql +DROP HOSTS : [,: ...]; +``` + +### SHOW ZONES + + - You cannot directly drop a Storage node that is in use. You need to first drop the associated graph space before dropping the Storage nodes. See [drop space](../3.ngql-guide/9.space-statements/5.drop-space.md) for details. + - Make sure the number of remaining Storage nodes outnumbers or equals that of Zones after removing a node, otherwise, the graph space will be unavailable. + +For example: ```ngql -DROP ZONE ""; +SHOW ZONES; ``` + ### MERGE ZONE...INTO Merge Storage services in multiple zones into a new zone. diff --git a/docs-2.0/4.deployment-and-installation/manage-storage-host.md b/docs-2.0/4.deployment-and-installation/manage-storage-host.md index 8d4a2950e42..3ae0b6400f1 100644 --- a/docs-2.0/4.deployment-and-installation/manage-storage-host.md +++ b/docs-2.0/4.deployment-and-installation/manage-storage-host.md @@ -29,6 +29,12 @@ nebula> ADD HOSTS "": [,"": ...]; - Ensure that the storage host to be added is not used by any other cluster, otherwise, the storage adding operation will fail. +{{ent.ent_begin}} + +When adding a Storage host to a cluster with the Zone feature enabled, you must specify the `INTO ZONE` option; otherwise, the addition of the Storage node will fail. For more details, see [Managing Zones](5.zone.md). + +{{ent.ent_end}} + ## Drop Storage hosts Delete the Storage hosts from cluster. 
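The Zone workflow spread across the files above can be hard to see end to end, so here is a minimal illustrative sketch; the Zone names (`az1`, `az2`, `az3`), the IP addresses, and the space name `zone_demo` are assumptions for the example, and `--zone_list=az1,az2,az3` is assumed to already be set in `nebula-metad.conf` before the Meta service starts.

```ngql
# Register one Storage host into each Zone listed in --zone_list (addresses are placeholders).
nebula> ADD HOSTS 192.168.8.111:9779 INTO ZONE az1;
nebula> ADD HOSTS 192.168.8.112:9779 INTO ZONE az2;
nebula> ADD HOSTS 192.168.8.113:9779 INTO ZONE az3;

# Create a space whose replica_factor equals the number of Zones (3),
# so that each Zone holds one full copy of every partition.
nebula> CREATE SPACE IF NOT EXISTS zone_demo (partition_num = 10, replica_factor = 3, vid_type = FIXED_STRING(30));

# Confirm that the new space is bound to the three Zones.
nebula> DESC SPACE zone_demo;
```

With this layout, the `BALANCE DATA IN ZONE` and `BALANCE DATA IN ZONE REMOVE` jobs described above operate on the replicas inside each of `az1`, `az2`, and `az3`.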
diff --git a/docs-2.0/5.configurations-and-logs/1.configurations/2.meta-config.md b/docs-2.0/5.configurations-and-logs/1.configurations/2.meta-config.md index eb72ccaab3e..4cf4505bbfb 100644 --- a/docs-2.0/5.configurations-and-logs/1.configurations/2.meta-config.md +++ b/docs-2.0/5.configurations-and-logs/1.configurations/2.meta-config.md @@ -88,7 +88,7 @@ For all parameters and their current values, see [Configurations](1.configuratio | Name | Predefined Value | Description |Whether supports runtime dynamic modifications| | :------------------------- | :-------------------- | :---------------------------------------------------------------------------- |:----------------- | -|`default_parts_num` | `100` | Specifies the default partition number when creating a new graph space. | No| +|`default_parts_num` | `10` | Specifies the default partition number when creating a new graph space. | No| |`default_replica_factor` | `1` | Specifies the default replica number when creating a new graph space. | No| ## RocksDB options configurations @@ -111,4 +111,10 @@ For all parameters and their current values, see [Configurations](1.configuratio |`ng_black_box_dump_period_seconds` |`5` |The time interval for Nebula-BBox to collect metric data. Unit: Second.| No| |`ng_black_box_file_lifetime_seconds` |`1800` |Storage time for Nebula-BBox files generated after collecting metric data. Unit: Second.| Yes| +## Zone configurations + +| Name | Predefined Value | Description |Whether supports runtime dynamic modifications| +| :-------------------- | :----- | :------------------- | :--------------------- | +| `zone_list` | Empty | A list of Zone names. When the value is not empty, the Zone feature is enabled. For details, see [Manage Zones](../../4.deployment-and-installation/5.zone.md).| No | + {{ ent.ent_end }} diff --git a/docs-2.0/5.configurations-and-logs/1.configurations/3.graph-config.md b/docs-2.0/5.configurations-and-logs/1.configurations/3.graph-config.md index 40e306a94ed..5dea83bba35 100644 --- a/docs-2.0/5.configurations-and-logs/1.configurations/3.graph-config.md +++ b/docs-2.0/5.configurations-and-logs/1.configurations/3.graph-config.md @@ -181,3 +181,23 @@ For more information about audit log, see [Audit log](../2.log-management/audit- |`min_batch_size` |`8192` | The minimum batch size for processing the dataset. Takes effect only when `max_job_size` is greater than 1.|Yes| |`optimize_appendvertices` |`false` | When enabled, the `MATCH` statement is executed without filtering dangling edges.|Yes| |`path_batch_size` |`10000` | The number of paths constructed per thread.|Yes| + +{{ ent.ent_begin }} + +## http2 configurations + +| Name | Predefined value | Description |Whether supports runtime dynamic modifications| +| :------------------- | :------------------------ | :------------------------------------------ |:------------------| +|`enable_http2_routing` |`false` |Whether to enable HTTP2 for RPC communications. Enabling it will slightly affect performance.|Yes| +|`stream_timeout_ms` |`30000` | The timeout for the HTTP stream. Unit: ms.|Yes| + + +## Zone configurations + +| Name | Predefined value | Description | Whether supports runtime dynamic modifications | +| :-------------------------------- | :------------ | :-------------------------------------------------------------------------------------------------------------------------------------- | :------------------------------------- | +| `assigned_zone` | Empty | When the Zone feature is enabled, set the Zone to which the graphd is assigned.
See [Managing Zones](../../4.deployment-and-installation/5.zone.md) for details. | No | | `prioritize_intra_zone_reading` | `false` | When set to `true`, queries are preferentially sent to the Storage services in the same Zone. If reading in that Zone fails, whether requests are sent to the leader partition replicas depends on the value of `stick_to_intra_zone_on_failure`.
When set to `false`, data is read from the leader partition replicas. | No | | `stick_to_intra_zone_on_failure` | `false` | When set to `true`, requests stick to intra-zone routing even if the storaged hosting the requested partition replica cannot be found in the same Zone.
When set to `false`, requests are sent to the leader partition replicas. | No | {{ ent.ent_end }} diff --git a/docs-2.0/8.service-tuning/load-balance.md b/docs-2.0/8.service-tuning/load-balance.md index 6a06ccbdc27..85f43007ade 100644 --- a/docs-2.0/8.service-tuning/load-balance.md +++ b/docs-2.0/8.service-tuning/load-balance.md @@ -9,15 +9,18 @@ You can use the `SUBMIT JOB BALANCE` statement to balance the distribution of pa {{ ent.ent_begin }} ## Balance partition distribution +The `SUBMIT JOB BALANCE DATA` command starts a job to balance the distribution of storage partitions in the current graph space by creating and executing a set of subtasks. + !!! enterpriseonly Only available for the NebulaGraph Enterprise Edition. !!! note - If the current graph space already has a `SUBMIT JOB BALANCE DATA` job in the `FAILED` status, you can restore the `FAILED` job, but cannot start a new `SUBMIT JOB BALANCE DATA` job. If the job continues to fail, manually stop it, and then you can start a new one. + - If the current graph space already has a `SUBMIT JOB BALANCE DATA` job in the `FAILED` status, you can restore the `FAILED` job, but cannot start a new `SUBMIT JOB BALANCE DATA` job. If the job continues to fail, manually stop it, and then you can start a new one. + - The following examples describe how to balance partition distribution across Storage nodes with the Zone feature disabled. When the Zone feature is enabled, partitions are balanced within each Zone by specifying the `IN ZONE` clause. For details, see [Manage Zones](../4.deployment-and-installation/5.zone.md). + -The `SUBMIT JOB BALANCE DATA` commands starts a job to balance the distribution of storage partitions in the current graph space by creating and executing a set of subtasks. ### Examples @@ -103,6 +106,8 @@ To restore a balance job in the `FAILED` or `STOPPED` status, run `RECOVER JOB <job_id>`. To migrate specified partitions and scale in the cluster, you can run `SUBMIT JOB BALANCE DATA REMOVE <ip>:<port> [,<ip>:<port> ...]`. +To migrate specified partitions in Zone-enabled clusters, you need to add the `IN ZONE` clause. For example, `SUBMIT JOB BALANCE DATA IN ZONE REMOVE <ip>:<port> [,<ip>:<port> ...]`. For details, see [Manage Zones](../4.deployment-and-installation/5.zone.md). + For example, to migrate the partitions in server `192.168.8.100:9779`, the command is as follows: ```ngql @@ -118,134 +123,11 @@ nebula> SHOW HOSTS; !!! note - This command migrates partitions to other storage hosts but does not delete the current storage host from the cluster. To delete the Storage hosts from cluster, see [Manage Storage hosts](../4.deployment-and-installation/manage-storage-host.md). + This command migrates partitions to other storage hosts but does not delete the current storage host from the cluster. To delete the Storage hosts from a cluster, see [Manage Storage hosts](../4.deployment-and-installation/manage-storage-host.md). {{ ent.ent_end }} - ## Balance leader distribution To balance the raft leaders, run `SUBMIT JOB BALANCE LEADER`. It will start a job to balance the distribution of all the storage leaders in all graph spaces.
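To complement the Graph service options documented above, the following is a hedged sketch of the corresponding `nebula-graphd.conf` entries; the Zone name `az1` is an assumption for the example and must match one of the names in the Meta service's `--zone_list`.

```
# Zone this graphd instance is assigned to; must be one of the names in --zone_list.
--assigned_zone=az1
# Prefer replicas in the assigned Zone when reading.
--prioritize_intra_zone_reading=true
# On a read failure within the assigned Zone, fall back to the leader partition replica.
--stick_to_intra_zone_on_failure=false
```

Since none of these flags support runtime dynamic modification, restart the Graph service after changing them.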
diff --git a/mkdocs.yml b/mkdocs.yml index 19a0c58aaee..5b24227e8df 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -107,6 +107,7 @@ plugins: - 3.ngql-guide/8.clauses-and-options/joins.md - 20.appendix/release-notes/dashboard-ent-release-note.md - 20.appendix/release-notes/explorer-release-note.md + - 4.deployment-and-installation/5.zone.md # ent.end # comm.begin @@ -305,13 +306,13 @@ nav: # - Manage licenses: 9.about-license/4.manage-license.md #ent - Quick start: - - Deploy NebulaGraph using Docker: 2.quick-start/1.quick-start-workflow.md - - Deploy NebulaGraph on-premise: - - Step 1 Install NebulaGraph: 2.quick-start/2.install-nebula-graph.md - - Step 2 Manage NebulaGraph Service: 2.quick-start/5.start-stop-service.md - - Step 3 Connect to NebulaGraph: 2.quick-start/3.connect-to-nebula-graph.md - - Step 4 Register the Storage Service: 2.quick-start/3.1add-storage-hosts.md - - Step 5 Use nGQL (CRUD): 2.quick-start/4.nebula-graph-crud.md +# - Deploy NebulaGraph using Docker: 2.quick-start/1.quick-start-workflow.md +# - Deploy NebulaGraph on-premise: + - Step 1 Install NebulaGraph: 2.quick-start/2.install-nebula-graph.md + - Step 2 Manage NebulaGraph Service: 2.quick-start/5.start-stop-service.md + - Step 3 Connect to NebulaGraph: 2.quick-start/3.connect-to-nebula-graph.md + - Step 4 Register the Storage Service: 2.quick-start/3.1add-storage-hosts.md + - Step 5 Use nGQL (CRUD): 2.quick-start/4.nebula-graph-crud.md - nGQL cheatsheet: 2.quick-start/6.cheatsheet-for-ngql.md - nGQL guide: @@ -479,10 +480,10 @@ nav: - Local multi-node installation: 4.deployment-and-installation/2.compile-and-install-nebula-graph/deploy-nebula-graph-cluster.md - Install using Docker Compose: 4.deployment-and-installation/2.compile-and-install-nebula-graph/3.deploy-nebula-graph-with-docker-compose.md - Install with ecosystem tools: 4.deployment-and-installation/2.compile-and-install-nebula-graph/6.deploy-nebula-graph-with-peripherals.md - - Manage Service: 4.deployment-and-installation/manage-service.md - - Connect to Service: 4.deployment-and-installation/connect-to-nebula-graph.md - - Manage Storage host: 4.deployment-and-installation/manage-storage-host.md -# - Manage zone: 4.deployment-and-installation/5.zone.md + - Manage services: 4.deployment-and-installation/manage-service.md + - Connect to services: 4.deployment-and-installation/connect-to-nebula-graph.md + - Manage Storage hosts: 4.deployment-and-installation/manage-storage-host.md +# - Manage Zones: 4.deployment-and-installation/5.zone.md - Upgrade: - Upgrade NebulaGraph Community to the latest version: 4.deployment-and-installation/3.upgrade-nebula-graph/upgrade-nebula-graph-to-latest.md # - Upgrade NebulaGraph from v3.x to v3.4 (Community Edition): 4.deployment-and-installation/3.upgrade-nebula-graph/upgrade-nebula-from-300-to-latest.md