Commit

Merge branch 'master' into QingZ11-patch-3
ChrisChen2023 authored Feb 1, 2024
2 parents 9e67476 + 3c9ec6a commit bf0a309
Showing 48 changed files with 432 additions and 167 deletions.
26 changes: 11 additions & 15 deletions docs-2.0-en/20.appendix/0.FAQ.md
@@ -351,33 +351,29 @@ $ ./nebula-graphd --version

#### Increase or decrease the number of Meta, Graph, or Storage nodes

- NebulaGraph {{ nebula.release }} does not provide any commands or tools to support automatic scale out/in. You can refer to the following steps:

  - Scale out and scale in metad: The metad process cannot be scaled out or in. The process cannot be moved to a new machine, and you cannot add a new metad process to the service.

    !!! note

        You can use the [Meta transfer script tool](https://github.com/vesoft-inc/nebula/blob/master/scripts/meta-transfer-tools.sh) to migrate Meta services. Note that the Meta-related settings in the configuration files of the Storage and Graph services need to be modified correspondingly.

  - Scale in graphd: Remove the IP of the graphd process from the code in the client, then close this graphd process.

  - Scale out graphd: Prepare the binary and config files of the graphd process on the new host, modify the config files to add all existing addresses of the metad processes, and then start the new graphd process (see the config sketch after this list).

  - Scale in storaged: See [Balance remove command](../8.service-tuning/load-balance.md). After the command finishes, stop this storaged process.

    !!! caution

        - Before executing this command to migrate the data in the specified Storage node, make sure that the number of other Storage nodes is sufficient to meet the configured replication factor. For example, if the replication factor is set to 3, make sure that the number of other Storage nodes is greater than or equal to 3 before executing this command.

        - If the Storage node to be migrated contains the partitions of multiple spaces, execute this command in each space to migrate all the partitions in that Storage node.

  - Scale out storaged: Prepare the binary and config files of the storaged process on the new host, modify the config files to add all existing addresses of the metad processes, then register the storaged process with the metad and start the new storaged process. For details, see [Register storaged services](../2.quick-start/3.1add-storage-hosts.md).

  You also need to run [Balance Data and Balance leader](../8.service-tuning/load-balance.md) after scaling in/out storaged.
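The following is a minimal sketch of the Meta-address setting that a newly added graphd or storaged process needs in its configuration file (the addresses are illustrative assumptions, not values from this commit):

```bash
# Sketch of nebula-graphd.conf (or nebula-storaged.conf) on the new host.
# List all existing Meta service addresses so the new process can join the cluster.
--meta_server_addrs=192.168.8.100:9559,192.168.8.101:9559,192.168.8.102:9559
# The address and port that the new process itself listens on.
--local_ip=192.168.8.103
--port=9669
```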

#### Add or remove disks in the Storage nodes

34 changes: 29 additions & 5 deletions docs-2.0-en/20.appendix/6.eco-tool-version.md
@@ -103,7 +103,7 @@ Docker Compose can quickly deploy NebulaGraph clusters. For how to use it, pleas
|:---|:---|
| {{ nebula.tag }} | {{bench.tag}} |

## API and SDK

!!! compatibility

@@ -117,9 +117,33 @@ Docker Compose can quickly deploy NebulaGraph clusters. For how to use it, pleas
| {{ nebula.tag }}| [Java](https://github.com/vesoft-inc/nebula-java/releases/tag/{{java.tag}}) |
| {{ nebula.tag }}| [HTTP](https://github.com/vesoft-inc/nebula-http-gateway/releases/tag/{{gateway.tag}}) |

## Other utilities and tools

The following are useful utilities and tools contributed and maintained by community users.
* ORM
* Spring Boot-based ORM: [NGBATIS](https://github.com/nebula-contrib/ngbatis)
* Swagger Springboot Demo: [nebula-swagger-demo](https://github.com/nebula-contrib/nebula-swagger-demo)
* Java ORM: [graph-ocean](https://github.com/nebula-contrib/graph-ocean)
* JDBC Connector: [nebula-jdbc](https://github.com/nebula-contrib/nebula-jdbc)
* Python ORM: [nebula-carina](https://github.com/nebula-contrib/nebula-carina)
* Golang ORM: [norm](https://github.com/nebula-contrib/norm)
* Data processing
* Stream ETL: [nebula-real-time-exchange](https://github.com/nebula-contrib/nebula-real-time-exchange)
* DataX Plugin: [nebula-datax-plugin](https://github.com/nebula-contrib/nebula-datax-plugin)
* Backend services
    * Infrastructure services: [graph-gateway](https://github.com/nebula-contrib/graph-gateway)
* Quick deployment
* Getting started with NebulaGraph in Docker Desktop: [nebulagraph-docker-ext](https://github.com/nebula-contrib/nebulagraph-docker-ext)
* Running NebulaGraph in a browser: [nebulagraph-lite](https://github.com/nebula-contrib/nebulagraph-lite)
* Testing
* Java testing library: [testcontainers-nebula](https://github.com/nebula-contrib/testcontainers-nebula)
* Clients
* Scala client: [zio-nebula](https://github.com/nebula-contrib/zio-nebula)
* Node.js client: [nebula-node](https://github.com/nebula-contrib/nebula-node)
* PHP client: [nebula-php](https://github.com/nebula-contrib/nebula-php)
* .NET client: [nebula-net](https://github.com/nebula-contrib/nebula-net)
* Terminal
* Nebula-console plugin for JetBrains IDEs: [nebula-console-intellij-plugin](https://github.com/nebula-contrib/nebula-console-intellij-plugin)



@@ -48,7 +48,7 @@ nGQL allows you to reference edge properties, including user-defined edge proper

| Parameter | Description |
| :---------- | :------------------ |
| `$-` | Used to get the output results of the statement before the pipe in the composite query. For more information, see [Pipe](../5.operators/4.pipe.md). |

## Examples

@@ -57,7 +57,7 @@ You can also use the property reference symbols (`$^` and `$$`) instead of the `

- `$$` represents the data of the end vertex at the end of exploration.

`properties($^)` and `properties($$)` are generally used in `GO` statements. For more information, see [Property reference](../4.variable-and-composite-queries/3.property-reference.md).
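For instance, a minimal sketch against the `basketballplayer` example dataset used throughout these docs (assuming the `player` tag has a `name` property):

```ngql
nebula> GO FROM "player100" OVER follow \
        YIELD properties($^).name AS src_name, properties($$).name AS dst_name;
```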

!!! caution

@@ -133,7 +133,7 @@ nebula> GO FROM "player100" OVER follow \

!!! note

    The semantics of the query for the starting vertex with src(edge) and [properties(`$^`)](../4.variable-and-composite-queries/3.property-reference.md) are different. src(edge) indicates the starting vertex ID of the edge in the graph database, while properties(`$^`) indicates the data of the starting vertex where you start to expand the graph, such as the data of the starting vertex `player100` in the above GO statement.

### dst(edge)

@@ -83,7 +83,7 @@ For more information, see [LOOKUP ON](5.lookup.md).
**Use case:** Complex graph traversals, such as finding friends of a vertex, friends' friends, etc.

**Note:**
- Use [property reference symbols](../4.variable-and-composite-queries/3.property-reference.md) (`$^` and `$$`) to return properties of the starting or target vertices, e.g., `YIELD $^.player.name`.
- Use the functions `properties($^)` and `properties($$)` to return all properties of the starting or target vertices. Specify property names in the function to return specific properties, e.g., `YIELD properties($^).name`.
- Use the functions `src(edge)` and `dst(edge)` to return the starting or destination vertex ID of an edge, e.g., `YIELD src(edge)`.
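A minimal sketch combining these functions, assuming the `basketballplayer` dataset:

```ngql
nebula> GO FROM "player100" OVER follow \
        YIELD src(edge) AS src_id, dst(edge) AS dst_id, properties($$).name AS dst_name;
```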

2 changes: 1 addition & 1 deletion docs-2.0-en/3.ngql-guide/8.clauses-and-options/where.md
@@ -15,7 +15,7 @@ The `WHERE` clause usually works in the following queries:
## Basic usage

!!! note
    In the following examples, `$$` and `$^` are reference operators. For more information, see [Operators](../4.variable-and-composite-queries/3.property-reference.md).
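For instance, a minimal sketch assuming the `basketballplayer` dataset:

```ngql
nebula> GO FROM "player100" OVER follow \
        WHERE $$.player.age >= 35 \
        YIELD dst(edge) AS id;
```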

### Define conditions with boolean operators

2 changes: 1 addition & 1 deletion docs-2.0-en/connector/nebula-spark-connector.md
@@ -44,7 +44,7 @@ NebulaGraph Spark Connector applies to the following scenarios:
- Read data from {{nebula.name}} for analysis and computation.
- Write data back to {{nebula.name}} after analysis and computation.
- Migrate the data of {{nebula.name}}.
- Graph computing with [NebulaGraph Algorithm](../graph-computing/nebula-algorithm.md).

## Benefits

@@ -16,6 +16,41 @@ Example:
```bash
java -cp nebula-exchange_spark_2.4-3.0-SNAPSHOT.jar com.vesoft.exchange.common.GenerateConfigTemplate -s csv -p /home/nebula/csv_application.conf
```

## Using an encrypted password

You can use either a plaintext password or an RSA-encrypted password when setting the password for connecting to {{nebula.name}} in the configuration file.

To use an RSA-encrypted password, configure the following settings in the configuration file:

- Set `nebula.pswd` to the RSA-encrypted password.
- Set `nebula.privateKey` to the key used for the RSA encryption.
- Set `nebula.enableRSA` to `true`.

Users can encrypt the password with their own tools, or use the encryption tool provided in Exchange's JAR package. For example:

```bash
spark-submit --master local --class com.vesoft.exchange.common.PasswordEncryption nebula-exchange_spark_2.4-3.0-SNAPSHOT.jar -p nebula
```

The results returned are as follows:

```bash
=================== public key begin ===================
MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCLl7LaNSEXlZo2hYiJqzxgyFBQdkxbQXYU/xQthsBJwjOPhkiY37nokzKnjNlp6mv5ZUomqxLsoNQHEJ6BZD4VPiaiElFAkTD+gyul1v8f3A446Fr2rnVLogWHnz8ECPt7X8jwmpiKOXkOPIhqU5E0Cua+Kk0nnVosbos/VShfiQIDAQAB
=================== public key end ===================


=================== private key begin ===================
MIICeAIBADANBgkqhkiG9w0BAQEFAASCAmIwggJeAgEAAoGBAIuXsto1IReVmjaFiImrPGDIUFB2TFtBdhT/FC2GwEnCM4+GSJjfueiTMqeM2Wnqa/llSiarEuyg1AcQnoFkPhU+JqISUUCRMP6DK6XW/x/cDjjoWvaudUuiBYefPwQI+3tfyPCamIo5eQ48iGpTkTQK5r4qTSedWixuiz9VKF+JAgMBAAECgYADWbfEPwQ1UbTq3Bej3kVLuWMcG0rH4fFYnaq5UQOqgYvFRR7W9H+80lOj6+CIB0ViLgkylmaU4WNVbBOx3VsUFFWSqIIIviKubg8m8ey7KAd9X2wMEcUHi4JyS2+/WSacaXYS5LOmMevvuaOwLEV0QmyM+nNGRIjUdzCLR1935QJBAM+IF8YD5GnoAPPjGIDS1Ljhu/u/Gj6/YBCQKSHQ5+HxHEKjQ/YxQZ/otchmMZanYelf1y+byuJX3NZ04/KSGT8CQQCsMaoFO2rF5M84HpAXPi6yH2chbtz0VTKZworwUnpmMVbNUojf4VwzAyOhT1U5o0PpFbpi+NqQhC63VUN5k003AkEArI8vnVGNMlZbvG7e5/bmM9hWs2viSbxdB0inOtv2g1M1OV+B2gp405ru0/PNVcRV0HQFfCuhVfTSxmspQoAihwJBAJW6EZa/FZbB4JVxreUoAr6Lo8dkeOhT9M3SZbGWZivaFxot/Cp/8QXCYwbuzrJxjqlsZUeOD6694Uk08JkURn0CQQC8V6aRa8ylMhLJFkGkMDHLqHcQCmY53Kd73mUu4+mjMJLZh14zQD9ydFtc0lbLXTeBAMWV3uEdeLhRvdAo3OwV
=================== private key end ===================


=================== encrypted password begin ===================
Io+3y3mLOMnZJJNUPHZ8pKb4VfTvg6wUh6jSu5xdmLAoX/59tK1HTwoN40aOOWJwa1a5io7S4JqcX/jEcAorw7pelITr+F4oB0AMCt71d+gJuu3/lw9bjUEl9tF4Raj82y2Dg39wYbagN84fZMgCD63TPiDIevSr6+MFKASpGrY=
=================== encrypted password end ===================
check: the real password decrypted by private key and encrypted password is: nebula
```
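A minimal sketch of the corresponding connection settings in the configuration file (the ciphertext and key below are truncated placeholders, not working values):

```conf
nebula: {
  user: root
  # The RSA-encrypted password produced by the encryption tool.
  pswd: "Io+3y3mLOMnZ...truncated..."
  enableRSA: true
  # The RSA key paired with the encrypted password.
  privateKey: "MIICeAIBADANBgkq...truncated..."
}
```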

## Configuration instructions

Before configuring the `application.conf` file, it is recommended to copy `application.conf` and rename the copy according to the file type of the data source. For example, change the file name to `csv_application.conf` if the file type of the data source is CSV.
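For instance, a minimal sketch of that copy-and-rename step:

```bash
cp application.conf csv_application.conf
```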
@@ -63,7 +98,9 @@ Users only need to configure parameters for connecting to Hive if Spark and Hive
|`nebula.address.graph`|list\[string\]|`["127.0.0.1:9669"]`|Yes|The addresses of all Graph services, including IPs and ports, separated by commas (,). Example: `["ip1:port1","ip2:port2","ip3:port3"]`.|
|`nebula.address.meta`|list\[string\]|`["127.0.0.1:9559"]`|Yes|The addresses of all Meta services, including IPs and ports, separated by commas (,). Example: `["ip1:port1","ip2:port2","ip3:port3"]`.|
|`nebula.user`|string|-|Yes|The username with write permissions for NebulaGraph.|
|`nebula.pswd`|string|-|Yes|The account password. The password can be plaintext or RSA-encrypted. To use an RSA-encrypted password, you need to set `enableRSA` and `privateKey`. For how to encrypt a password, see **Using an encrypted password** above.|
|`nebula.enableRSA`|bool|`false`|No|Whether to use an RSA-encrypted password.|
|`nebula.privateKey`|string|-|No|The key used to encrypt the password using RSA.|
|`nebula.space`|string|-|Yes|The name of the graph space where data needs to be imported.|
|`nebula.ssl.enable.graph`|bool|`false`|Yes|Enables the [SSL encryption](https://en.wikipedia.org/wiki/Transport_Layer_Security) between Exchange and Graph services. If the value is `true`, the SSL encryption is enabled and the following SSL parameters take effect. If Exchange is run on a multi-machine cluster, you need to store the corresponding files in the same path on each machine when setting the following SSL-related paths.|
|`nebula.ssl.sign`|string|`ca`|Yes|Specifies the SSL sign. Optional values are `ca` and `self`.|
@@ -229,6 +266,10 @@ For different data sources, the vertex configurations are different. There are m
|`tags.service`|string|-|Yes|The Kafka server address.|
|`tags.topic`|string|-|Yes|The message type.|
|`tags.interval.seconds`|int|`10`|Yes|The interval for reading messages. Unit: seconds.|
|`tags.securityProtocol`|string|-|No|The Kafka security protocol.|
|`tags.mechanism`|string|-|No|The SASL authentication mechanism provided by Kafka.|
|`tags.kerberos`|bool|`false`|No|Whether to enable Kerberos authentication. If `tags.mechanism` is `kerberos`, this parameter must be set to `true`.|
|`tags.kerberosServiceName`|string|-|No|The Kerberos service name. If `tags.kerberos` is `true`, this parameter must be set.|
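A minimal sketch of these fields inside a Kafka tag block (the protocol, mechanism, and addresses are illustrative assumptions; other required tag fields such as `fields`, `vertex`, `batch`, and `partition` are omitted):

```conf
tags: [
  {
    name: player
    type: {
      source: kafka
      sink: client
    }
    # Kafka connection settings (example address and topic).
    service: "192.168.8.100:9092"
    topic: "player-topic"
    interval.seconds: 10
    # SASL/Kerberos settings; the values depend on your Kafka deployment.
    securityProtocol: "SASL_PLAINTEXT"
    mechanism: "kerberos"
    kerberos: true
    kerberosServiceName: "kafka"
  }
]
```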

#### Specific parameters for generating SST files

@@ -112,6 +112,11 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf`
# The account entered must have write permission for the NebulaGraph space.
user: root
pswd: nebula
# Whether to use a password encrypted with RSA.
# enableRSA: true
# The key used to encrypt the password using RSA.
# privateKey: ""
# Fill in the name of the graph space you want to write data to in NebulaGraph.
space: basketballplayer
connection: {
@@ -317,7 +322,7 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf`
Run the following command to import ClickHouse data into NebulaGraph. For descriptions of the parameters, see [Options for import](../parameter-reference/ex-ug-para-import-command.md).

```bash
${SPARK_HOME}/bin/spark-submit --master "local" --class com.vesoft.nebula.exchange.Exchange <nebula-exchange.jar_path> -c <clickhouse_application.conf_path>
```

!!! note
@@ -327,7 +332,7 @@ ${SPARK_HOME}/bin/spark-submit --master "local" --class com.vesoft.nebula.exchan
For example:

```bash
${SPARK_HOME}/bin/spark-submit --master "local" --class com.vesoft.nebula.exchange.Exchange /root/nebula-exchange/nebula-exchange/target/nebula-exchange_spark_2.4-{{exchange.release}}.jar -c /root/nebula-exchange/nebula-exchange/target/classes/clickhouse_application.conf
```

You can search for `batchSuccess.<tag_name/edge_name>` in the command output to check the number of successes. For example, `batchSuccess.follow: 300`.
@@ -133,6 +133,10 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf`
# The account entered must have write permission for the NebulaGraph space.
user: root
pswd: nebula
# Whether to use a password encrypted with RSA.
# enableRSA: true
# The key used to encrypt the password using RSA.
# privateKey: ""
# Fill in the name of the graph space you want to write data to in NebulaGraph.
space: basketballplayer
@@ -346,7 +350,7 @@ After Exchange is compiled, copy the conf file `target/classes/application.conf`
Run the following command to import CSV data into NebulaGraph. For descriptions of the parameters, see [Options for import](../parameter-reference/ex-ug-para-import-command.md).

```bash
${SPARK_HOME}/bin/spark-submit --master "local" --class com.vesoft.nebula.exchange.Exchange <nebula-exchange.jar_path> -c <csv_application.conf_path>
```

!!! note
@@ -356,7 +360,7 @@ ${SPARK_HOME}/bin/spark-submit --master "local" --class com.vesoft.nebula.exchan
For example:

```bash
${SPARK_HOME}/bin/spark-submit --master "local" --class com.vesoft.nebula.exchange.Exchange /root/nebula-exchange/nebula-exchange/target/nebula-exchange_spark_2.4-{{exchange.release}}.jar -c /root/nebula-exchange/nebula-exchange/target/classes/csv_application.conf
```

You can search for `batchSuccess.<tag_name/edge_name>` in the command output to check the number of successes. For example, `batchSuccess.follow: 300`.