diff --git a/3.8.0/2.quick-start/6.cheatsheet-for-ngql/index.html b/3.8.0/2.quick-start/6.cheatsheet-for-ngql/index.html index a2569da402..854f072553 100644 --- a/3.8.0/2.quick-start/6.cheatsheet-for-ngql/index.html +++ b/3.8.0/2.quick-start/6.cheatsheet-for-ngql/index.html @@ -9941,7 +9941,7 @@

Query tuning statements PROFILE PROFILE [format="row" | "dot"] <your_nGQL_statement> -PROFILE format="row" SHOW TAGS
EXPLAIN format="dot" SHOW TAGS +PROFILE format="row" SHOW TAGS
PROFILE format="dot" SHOW TAGS Executes the statement, then outputs the execution plan as well as the execution profile. @@ -10067,7 +10067,7 @@

Operation and maintenance statemen Last update: - January 30, 2024 + June 18, 2024 diff --git a/3.8.0/index.html b/3.8.0/index.html index 537e0b0316..2233b0a426 100644 --- a/3.8.0/index.html +++ b/3.8.0/index.html @@ -8291,7 +8291,7 @@

Welcome to NebulaGraph 3.8.0 Documentation

Note

-

This manual is revised on 2024-6-13, with GitHub commit d6982edc39.

+

This manual is revised on 2024-6-18, with GitHub commit 3f6ae4e1ac.

NebulaGraph is a distributed, scalable, and lightning-fast graph database. It is the optimal solution in the world capable of hosting graphs with dozens of billions of vertices (nodes) and trillions of edges (relationships) with millisecond latency.

Getting started

diff --git a/3.8.0/pdf/NebulaGraph-EN.pdf b/3.8.0/pdf/NebulaGraph-EN.pdf index 28f5e6dc1f..3dbf7d5e34 100644 Binary files a/3.8.0/pdf/NebulaGraph-EN.pdf and b/3.8.0/pdf/NebulaGraph-EN.pdf differ diff --git a/3.8.0/search/search_index.json b/3.8.0/search/search_index.json index 4335570af8..b5ba53cb2d 100644 --- a/3.8.0/search/search_index.json +++ b/3.8.0/search/search_index.json @@ -1 +1 @@ -{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Welcome to NebulaGraph 3.8.0 Documentation","text":"

Note

This manual is revised on 2024-6-13, with GitHub commit d6982edc39.

NebulaGraph is a distributed, scalable, and lightning-fast graph database. It is the optimal solution in the world capable of hosting graphs with dozens of billions of vertices (nodes) and trillions of edges (relationships) with millisecond latency.

"},{"location":"#getting_started","title":"Getting started","text":""},{"location":"#release_notes","title":"Release notes","text":""},{"location":"#other_sources","title":"Other Sources","text":""},{"location":"#symbols_used_in_this_manual","title":"Symbols used in this manual","text":"

Note

Additional information or operation-related notes.

Caution

May have adverse effects, such as causing performance degradation or triggering known minor problems.

Warning

May lead to serious issues, such as data loss or system crash.

Danger

May lead to extremely serious issues, such as system damage or information leakage.

Compatibility

The compatibility notes between nGQL and openCypher, or between the current version of nGQL and its prior ones.

Enterpriseonly

Differences between the NebulaGraph Community and Enterprise editions.

"},{"location":"#modify_errors","title":"Modify errors","text":"

This NebulaGraph manual is written in Markdown. Users can click the pencil icon at the upper right of each document title to fix errors.

"},{"location":"nebula-bench/","title":"NebulaGraph Bench","text":"

NebulaGraph Bench is a performance test tool for NebulaGraph using the LDBC data set.

"},{"location":"nebula-bench/#scenario","title":"Scenario","text":" "},{"location":"nebula-bench/#release_note","title":"Release note","text":"

Release

"},{"location":"nebula-bench/#test_process","title":"Test process","text":"

For detailed usage instructions, see NebulaGraph Bench.

"},{"location":"nebula-console/","title":"NebulaGraph Console","text":"

NebulaGraph Console is a native CLI client for NebulaGraph. It can be used to connect to a NebulaGraph cluster and execute queries. It also supports special commands to manage parameters, export query results, import test datasets, and more.

"},{"location":"nebula-console/#compatibility_with_nebulagraph","title":"Compatibility with NebulaGraph","text":"

See github.

"},{"location":"nebula-console/#obtain_nebulagraph_console","title":"Obtain NebulaGraph Console","text":"

You can obtain NebulaGraph Console in the following ways:

"},{"location":"nebula-console/#nebulagraph_console_functions","title":"NebulaGraph Console functions","text":""},{"location":"nebula-console/#connect_to_nebulagraph","title":"Connect to NebulaGraph","text":"

To connect to NebulaGraph with the nebula-console file, use the following syntax:

<path_of_console> -addr <ip> -port <port> -u <username> -p <password>\n

For example:
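The concrete example was elided here; based on the syntax above, a typical invocation (the binary path, address, and credentials are illustrative) looks like:

```shell
./nebula-console -addr 127.0.0.1 -port 9669 -u root -p nebula
```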

Parameter descriptions are as follows:

-h/-help: Shows the help menu.
-addr/-address: Sets the IP address or hostname of the Graph service. The default address is 127.0.0.1.
-P/-port: Sets the port number of the graphd service. The default port number is 9669.
-u/-user: Sets the username of your NebulaGraph account. Before enabling authentication, you can use any existing username. The default username is root.
-p/-password: Sets the password of your NebulaGraph account. Before enabling authentication, you can use any characters as the password. If not specified, a prompt appears requesting the password.
-t/-timeout: Sets an integer-type timeout threshold of the connection. The unit is millisecond. The default value is 120.
-e/-eval: Sets a string-type nGQL statement. The nGQL statement is executed once the connection succeeds. The connection stops after the result is returned.
-f/-file: Sets the path of an nGQL file. The nGQL statements in the file are executed once the connection succeeds. The result is returned and the connection then stops.
-enable_ssl: Enables SSL encryption when connecting to NebulaGraph.
-ssl_root_ca_path: Sets the path to the root certificate signed by a private Certificate Authority (CA).
-ssl_cert_path: Sets the path to the certificate of the client.
-ssl_private_key_path: Sets the path to the private key of the client.
-ssl_insecure_skip_verify: Specifies whether the client skips verifying the server's certificate chain and hostname. The default is false. If set to true, any certificate chain and hostname provided by the server are accepted.

For information on more parameters, see the project repository.

"},{"location":"nebula-console/#manage_parameters","title":"Manage parameters","text":"

You can save parameters for parameterized queries.

Note

"},{"location":"nebula-console/#export_query_results","title":"Export query results","text":"

Query results can be exported and saved as a CSV file, a DOT file, or in Profile or Explain format.

Note

"},{"location":"nebula-console/#import_a_testing_dataset","title":"Import a testing dataset","text":"

The testing dataset is named basketballplayer. To view details about the schema and data, use the corresponding SHOW command.

The command to import a testing dataset is as follows:

nebula> :play basketballplayer\n
"},{"location":"nebula-console/#run_a_command_multiple_times","title":"Run a command multiple times","text":"

To run a command multiple times, use the following command:

nebula> :repeat N\n

The example is as follows:

nebula> :repeat 3\nnebula> GO FROM \"player100\" OVER follow YIELD dst(edge);\n+-------------+\n| dst(EDGE)   |\n+-------------+\n| \"player101\" |\n| \"player125\" |\n+-------------+\nGot 2 rows (time spent 2602/3214 us)\n\nFri, 20 Aug 2021 06:36:05 UTC\n\n+-------------+\n| dst(EDGE)   |\n+-------------+\n| \"player101\" |\n| \"player125\" |\n+-------------+\nGot 2 rows (time spent 583/849 us)\n\nFri, 20 Aug 2021 06:36:05 UTC\n\n+-------------+\n| dst(EDGE)   |\n+-------------+\n| \"player101\" |\n| \"player125\" |\n+-------------+\nGot 2 rows (time spent 496/671 us)\n\nFri, 20 Aug 2021 06:36:05 UTC\n\nExecuted 3 times, (total time spent 3681/4734 us), (average time spent 1227/1578 us)\n
"},{"location":"nebula-console/#sleep","title":"Sleep","text":"

This command makes NebulaGraph Console sleep for N seconds. Schema changes are applied asynchronously and take effect in the next heartbeat cycle, so this command is usually used after altering the schema. The command is as follows:

nebula> :sleep N\n
"},{"location":"nebula-console/#disconnect_nebulagraph_console_from_nebulagraph","title":"Disconnect NebulaGraph Console from NebulaGraph","text":"

You can use :EXIT or :QUIT to disconnect from NebulaGraph. For convenience, NebulaGraph Console supports using these commands in lower case without the colon (\":\"), such as quit.

The example is as follows:

nebula> :QUIT\n\nBye root!\n
"},{"location":"1.introduction/1.what-is-nebula-graph/","title":"What is NebulaGraph","text":"

NebulaGraph is an open-source, distributed, easily scalable, and native graph database. It is capable of hosting graphs with hundreds of billions of vertices and trillions of edges, and serving queries with millisecond-latency.

"},{"location":"1.introduction/1.what-is-nebula-graph/#what_is_a_graph_database","title":"What is a graph database","text":"

A graph database, such as NebulaGraph, is a database that specializes in storing vast graph networks and retrieving information from them. It efficiently stores data as vertices (nodes) and edges (relationships) in labeled property graphs. Properties can be attached to both vertices and edges. Each vertex can have one or multiple tags (labels).

Graph databases are well suited for storing most kinds of data models abstracted from reality. Things are connected in almost all fields in the world. Relational modeling flattens the relationships between entities into table columns, with their types and properties scattered across other columns or even other tables. This makes data management time-consuming and costly.

NebulaGraph, as a typical native graph database, allows you to store the rich relationships as edges with edge types and properties directly attached to them.
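The labeled property graph model described above can be sketched in plain Python. This is a toy in-memory illustration, not NebulaGraph's storage format; the vertex IDs follow the basketballplayer dataset used throughout this manual, while the team name is made up for the example:

```python
# Toy labeled property graph: each vertex carries one or more tags,
# each tag holds properties; each directed edge carries an edge type
# with its own properties.
vertices = {
    "player100": {"player": {"name": "Tim Duncan", "age": 42}},
    "team204":   {"team": {"name": "Spurs"}},  # team name is illustrative
}
edges = [
    # (src, edge_type, dst, properties) -- the direction matters
    ("player100", "serve", "team204", {"start_year": 1997, "end_year": 2016}),
]

def out_edges(src, edge_type):
    """Return destination vertices reachable from src over edge_type."""
    return [dst for s, t, dst, _ in edges if s == src and t == edge_type]

print(out_edges("player100", "serve"))  # ['team204']
```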

"},{"location":"1.introduction/1.what-is-nebula-graph/#advantages_of_nebulagraph","title":"Advantages of NebulaGraph","text":""},{"location":"1.introduction/1.what-is-nebula-graph/#open_source","title":"Open source","text":"

NebulaGraph is open source under the Apache 2.0 License. More and more people, such as database developers, data scientists, security experts, and algorithm engineers, are participating in the design and development of NebulaGraph. To join in and contribute code or ideas, visit the NebulaGraph GitHub page.

"},{"location":"1.introduction/1.what-is-nebula-graph/#outstanding_performance","title":"Outstanding performance","text":"

Written in C++ and born for graphs, NebulaGraph handles graph queries in milliseconds. Among most databases, NebulaGraph shows superior performance in providing graph data services, and the larger the data size, the greater its advantage. For more information, see NebulaGraph benchmarking.

"},{"location":"1.introduction/1.what-is-nebula-graph/#high_scalability","title":"High scalability","text":"

NebulaGraph is designed in a shared-nothing architecture and supports scaling in and out without interrupting the database service.

"},{"location":"1.introduction/1.what-is-nebula-graph/#developer_friendly","title":"Developer friendly","text":"

NebulaGraph supports clients in popular programming languages like Java, Python, C++, and Go, and more are under development. For more information, see NebulaGraph clients.

"},{"location":"1.introduction/1.what-is-nebula-graph/#reliable_access_control","title":"Reliable access control","text":"

NebulaGraph supports strict role-based access control and external authentication servers such as LDAP (Lightweight Directory Access Protocol) servers to enhance data security. For more information, see Authentication and authorization.

"},{"location":"1.introduction/1.what-is-nebula-graph/#diversified_ecosystem","title":"Diversified ecosystem","text":"

More and more native tools of NebulaGraph have been released, such as NebulaGraph Studio, NebulaGraph Console, and NebulaGraph Exchange. For more ecosystem tools, see Ecosystem tools overview.

Besides, NebulaGraph can be integrated with many cutting-edge technologies, such as Spark, Flink, and HBase, so that each strengthens the other.

"},{"location":"1.introduction/1.what-is-nebula-graph/#opencypher-compatible_query_language","title":"OpenCypher-compatible query language","text":"

The native NebulaGraph Query Language, also known as nGQL, is a declarative, openCypher-compatible textual query language. It is easy to understand and easy to use. For more information, see nGQL guide.

"},{"location":"1.introduction/1.what-is-nebula-graph/#future-oriented_hardware_with_balanced_reading_and_writing","title":"Future-oriented hardware with balanced reading and writing","text":"

Solid-state drives deliver extremely high performance and are getting cheaper. NebulaGraph is designed around SSDs. Compared with products based on HDDs and large memory, it is better aligned with future hardware trends and makes balanced reading and writing easier to achieve.

"},{"location":"1.introduction/1.what-is-nebula-graph/#easy_data_modeling_and_high_flexibility","title":"Easy data modeling and high flexibility","text":"

You can easily model the connected data into NebulaGraph for your business without forcing them into a structure such as a relational table, and properties can be added, updated, and deleted freely. For more information, see Data modeling.

"},{"location":"1.introduction/1.what-is-nebula-graph/#high_popularity","title":"High popularity","text":"

NebulaGraph is being used by tech leaders such as Tencent, Vivo, Meituan, and JD Digits. For more information, visit the NebulaGraph official website.

"},{"location":"1.introduction/1.what-is-nebula-graph/#use_cases","title":"Use cases","text":"

NebulaGraph can be used to support various graph-based scenarios. Use NebulaGraph to spare yourself the effort of forcing the kinds of data mentioned in this section into relational tables and wrestling with join queries.

"},{"location":"1.introduction/1.what-is-nebula-graph/#fraud_detection","title":"Fraud detection","text":"

Financial institutions have to traverse countless transactions to piece together potential crimes and understand how combinations of transactions and devices might be related to a single fraud scheme. This kind of scenario can be modeled in graphs, and with the help of NebulaGraph, fraud rings and other sophisticated scams can be easily detected.

"},{"location":"1.introduction/1.what-is-nebula-graph/#real-time_recommendation","title":"Real-time recommendation","text":"

NebulaGraph offers the ability to instantly process the real-time information produced by a visitor and make accurate recommendations on articles, videos, products, and services.

"},{"location":"1.introduction/1.what-is-nebula-graph/#intelligent_question-answer_system","title":"Intelligent question-answer system","text":"

Natural language can be transformed into knowledge graphs and stored in NebulaGraph. In an intelligent question-answer system, a question asked in natural language is resolved and restructured by a semantic parser. Possible answers are then retrieved from the knowledge graph and returned to the asker.

"},{"location":"1.introduction/1.what-is-nebula-graph/#social_networking","title":"Social networking","text":"

Information on people and their relationships is typical graph data. NebulaGraph can easily handle the social networking information of billions of people and trillions of relationships, and provide lightning-fast queries for friend recommendations and job promotions in the case of massive concurrency.

"},{"location":"1.introduction/1.what-is-nebula-graph/#related_links","title":"Related links","text":""},{"location":"1.introduction/2.1.path/","title":"Path types","text":"

In graph theory, a path in a graph is a finite or infinite sequence of edges which joins a sequence of vertices. Paths are fundamental concepts of graph theory.

Paths can be categorized into 3 types: walk, trail, and path. For more information, see Wikipedia.

The following figure is an example for a brief introduction.

"},{"location":"1.introduction/2.1.path/#walk","title":"Walk","text":"

A walk is a finite or infinite sequence of edges. Both vertices and edges can be repeatedly visited in graph traversal.

In the above figure, C, D, and E form a cycle. So, this figure contains an infinite number of walks, such as A->B->C->D->E, A->B->C->D->E->C, and A->B->C->D->E->C->D.

Note

GO statements use walk.

"},{"location":"1.introduction/2.1.path/#trail","title":"Trail","text":"

A trail is a finite sequence of edges. Only vertices can be repeatedly visited in graph traversal. The Seven Bridges of K\u00f6nigsberg is a typical trail.

In the above figure, edges cannot be repeatedly visited. So, this figure contains a finite number of trails. The longest trail in this figure consists of 5 edges: A->B->C->D->E->C.

Note

MATCH, FIND PATH, and GET SUBGRAPH statements use trail.

There are two special cases of trail, cycle and circuit. The following figure is an example for a brief introduction.

"},{"location":"1.introduction/2.1.path/#path","title":"Path","text":"

A path is a finite sequence of edges. Neither vertices nor edges can be repeatedly visited in graph traversal.

So, the above figure contains a finite number of paths. The longest path in this figure consists of 4 edges: A->B->C->D->E.

"},{"location":"1.introduction/2.data-model/","title":"Data modeling","text":"

A data model is a model that organizes data and specifies how the data elements relate to one another. This topic describes the NebulaGraph data model and provides suggestions for data modeling with NebulaGraph.

"},{"location":"1.introduction/2.data-model/#data_structures","title":"Data structures","text":"

The NebulaGraph data model uses six data structures to store data: graph spaces, vertices, edges, tags, edge types, and properties.

Note

Tags and edge types are similar to \"vertex tables\" and \"edge tables\" in relational databases.

"},{"location":"1.introduction/2.data-model/#directed_property_graph","title":"Directed property graph","text":"

NebulaGraph stores data in directed property graphs. A directed property graph has a set of vertices connected by directed edges. Both vertices and edges can have properties. A directed property graph is represented as:

G = < V, E, PV, PE >, where V is a set of vertices, E is a set of directed edges, PV represents the properties of the vertices, and PE represents the properties of the edges.

The following table is an example of the structure of the basketball player dataset. We have two types of vertices, that is player and team, and two types of edges, that is serve and follow.

Tag player: properties name (string) and age (int). Represents players in the team. The properties name and age indicate the player's name and age.
Tag team: property name (string). Represents the teams. The property name indicates the team name.
Edge type serve: properties start_year (int) and end_year (int). Represents the action of a player serving a team, directed from the player to the team. The properties start_year and end_year indicate the start year and end year of the service respectively.
Edge type follow: property degree (int). Represents the action of a player following another player on Twitter, directed from one player to the other. The property degree indicates the rating on how well the follower liked the followee.

Note

NebulaGraph supports only directed edges.

Compatibility

NebulaGraph 3.8.0 allows dangling edges. Therefore, when adding or deleting edges, you need to check by yourself whether the corresponding source vertex and destination vertex of an edge exist. For details, see INSERT VERTEX, DELETE VERTEX, INSERT EDGE, and DELETE EDGE.

The MERGE statement in openCypher is not supported.

"},{"location":"1.introduction/3.vid/","title":"VID","text":"

In a graph space, a vertex is uniquely identified by its ID, which is called a VID or a Vertex ID.

"},{"location":"1.introduction/3.vid/#features","title":"Features","text":" "},{"location":"1.introduction/3.vid/#vid_operation","title":"VID Operation","text":" "},{"location":"1.introduction/3.vid/#vid_generation","title":"VID Generation","text":"

VIDs can be generated via applications. Here are some tips:

"},{"location":"1.introduction/3.vid/#define_and_modify_a_vid_and_its_data_type","title":"Define and modify a VID and its data type","text":"

The data type of a VID must be defined when you create the graph space. Once defined, it cannot be modified.

A VID is set when you insert a vertex and cannot be modified.

"},{"location":"1.introduction/3.vid/#query_start_vid_and_global_scan","title":"Query start vid and global scan","text":"

In most cases, the execution plan of a query statement in NebulaGraph (MATCH, GO, and LOOKUP) must locate a start VID in a certain way.

There are only two ways to locate the start VID:

  1. The statement specifies it explicitly. For example, GO FROM \"player100\" OVER explicitly indicates that the start VID is \"player100\".

  2. An index locates it. For example, LOOKUP ON player WHERE player.name == \"Tony Parker\" or MATCH (v:player {name:\"Tony Parker\"}) locates the start VID by the index on the property player.name.

"},{"location":"1.introduction/3.nebula-graph-architecture/1.architecture-overview/","title":"Architecture overview","text":"

NebulaGraph consists of three services: the Graph Service, the Storage Service, and the Meta Service. It applies the separation of storage and computing architecture.

Each service has its executable binaries and processes launched from the binaries. Users can deploy a NebulaGraph cluster on a single machine or multiple machines using these binaries.

The following figure shows the architecture of a typical NebulaGraph cluster.

"},{"location":"1.introduction/3.nebula-graph-architecture/1.architecture-overview/#the_meta_service","title":"The Meta Service","text":"

The Meta Service in the NebulaGraph architecture is run by the nebula-metad processes. It is responsible for metadata management, such as schema operations, cluster administration, and user privilege management.

For details on the Meta Service, see Meta Service.

"},{"location":"1.introduction/3.nebula-graph-architecture/1.architecture-overview/#the_graph_service_and_the_storage_service","title":"The Graph Service and the Storage Service","text":"

NebulaGraph applies the separation of storage and computing architecture. The Graph Service is responsible for querying. The Storage Service is responsible for storage. They are run by different processes, i.e., nebula-graphd and nebula-storaged. The benefits of the separation of storage and computing architecture are as follows:

For details on the Graph Service and the Storage Service, see Graph Service and Storage Service.

"},{"location":"1.introduction/3.nebula-graph-architecture/2.meta-service/","title":"Meta Service","text":"

This topic introduces the architecture and functions of the Meta Service.

"},{"location":"1.introduction/3.nebula-graph-architecture/2.meta-service/#the_architecture_of_the_meta_service","title":"The architecture of the Meta Service","text":"

The architecture of the Meta Service is as follows:

The Meta Service is run by nebula-metad processes. Users can deploy nebula-metad processes according to the scenario:

All the nebula-metad processes form a Raft-based cluster, with one process as the leader and the others as the followers.

The leader is elected by a majority vote, and only the leader can provide service to the clients or other components of NebulaGraph. The followers run in standby mode, each keeping a replica of the leader's data. Once the leader fails, one of the followers is elected as the new leader.

Note

The data of the leader and the followers is kept consistent through Raft. Thus, a leader failure and re-election will not cause data inconsistency. For more information on Raft, see Storage service architecture.

"},{"location":"1.introduction/3.nebula-graph-architecture/2.meta-service/#functions_of_the_meta_service","title":"Functions of the Meta Service","text":""},{"location":"1.introduction/3.nebula-graph-architecture/2.meta-service/#manages_user_accounts","title":"Manages user accounts","text":"

The Meta Service stores the information of user accounts and the privileges granted to the accounts. When the clients send queries to the Meta Service through an account, the Meta Service checks the account information and whether the account has the right privileges to execute the queries or not.

For more information on NebulaGraph access control, see Authentication.

"},{"location":"1.introduction/3.nebula-graph-architecture/2.meta-service/#manages_partitions","title":"Manages partitions","text":"

The Meta Service stores and manages the locations of the storage partitions and helps balance the partitions.

"},{"location":"1.introduction/3.nebula-graph-architecture/2.meta-service/#manages_graph_spaces","title":"Manages graph spaces","text":"

NebulaGraph supports multiple graph spaces. Data stored in different graph spaces are securely isolated. The Meta Service stores the metadata of all graph spaces and tracks the changes of them, such as adding or dropping a graph space.

"},{"location":"1.introduction/3.nebula-graph-architecture/2.meta-service/#manages_schema_information","title":"Manages schema information","text":"

NebulaGraph is a strong-typed graph database. Its schema contains tags (i.e., the vertex types), edge types, tag properties, and edge type properties.

The Meta Service stores the schema information. Besides, it performs the addition, modification, and deletion of the schema, and logs the versions of them.

For more information on NebulaGraph schema, see Data model.

"},{"location":"1.introduction/3.nebula-graph-architecture/2.meta-service/#manages_ttl_information","title":"Manages TTL information","text":"

The Meta Service stores the definition of TTL (Time to Live) options which are used to control data expiration. The Storage Service takes care of the expiring and evicting processes. For more information, see TTL.

"},{"location":"1.introduction/3.nebula-graph-architecture/2.meta-service/#manages_jobs","title":"Manages jobs","text":"

The Job Management module in the Meta Service is responsible for the creation, queuing, querying, and deletion of jobs.

"},{"location":"1.introduction/3.nebula-graph-architecture/3.graph-service/","title":"Graph Service","text":"

The Graph Service processes queries. It has four submodules: Parser, Validator, Planner, and Executor. This topic describes the Graph Service accordingly.

"},{"location":"1.introduction/3.nebula-graph-architecture/3.graph-service/#the_architecture_of_the_graph_service","title":"The architecture of the Graph Service","text":"

After a query is sent to the Graph Service, it will be processed by the following four submodules:

  1. Parser: Performs lexical analysis and syntax analysis.

  2. Validator: Validates the statements.

  3. Planner: Generates and optimizes the execution plans.

  4. Executor: Executes the plans with operators.

"},{"location":"1.introduction/3.nebula-graph-architecture/3.graph-service/#parser","title":"Parser","text":"

After a request is received, the statement is parsed by the Parser, which is composed of Flex (a lexical analysis tool) and Bison (a syntax analysis tool), and the corresponding AST is generated. Statements with invalid syntax are rejected at this stage.

For example, the structure of the AST of GO FROM \"Tim\" OVER like WHERE properties(edge).likeness > 8.0 YIELD dst(edge) is shown in the following figure.

"},{"location":"1.introduction/3.nebula-graph-architecture/3.graph-service/#validator","title":"Validator","text":"

Validator performs a series of validations on the AST. It mainly works on these tasks:

When the validation succeeds, an execution plan will be generated. Its data structure will be stored in the src/planner directory.

"},{"location":"1.introduction/3.nebula-graph-architecture/3.graph-service/#planner","title":"Planner","text":"

In the nebula-graphd.conf file, when enable_optimizer is set to false, Planner does not optimize the execution plans generated by Validator; they are executed by Executor directly.

In the nebula-graphd.conf file, when enable_optimizer is set to true, Planner optimizes the execution plans generated by Validator. The structure is as follows.

"},{"location":"1.introduction/3.nebula-graph-architecture/3.graph-service/#executor","title":"Executor","text":"

The Executor module consists of Scheduler and Executor. The Scheduler generates the corresponding execution operators against the execution plan, starting from the leaf nodes and ending at the root node. The structure is as follows.

Each node of the execution plan has one execution operator node, whose input and output have been determined in the execution plan. Each operator only needs to get the values for the input variables, compute them, and finally put the results into the corresponding output variables. Therefore, it is only necessary to execute step by step from Start, and the result of the last operator is returned to the user as the final result.
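The variable-passing scheme described above can be sketched as follows. This is a simplified model of the idea, not NebulaGraph's actual C++ operator interface; the operator names and variable names are illustrative:

```python
# Each operator reads its input variable, computes, and writes its
# output variable; the scheduler runs operators from the leaf (Start)
# toward the root, and the root's output is the final result.
variables = {}

def start_op(out_var):
    variables[out_var] = [1, 2, 3, 4]          # seed data for the demo

def filter_op(in_var, out_var):
    # keep even values only
    variables[out_var] = [x for x in variables[in_var] if x % 2 == 0]

def project_op(in_var, out_var):
    # transform each row
    variables[out_var] = [x * 10 for x in variables[in_var]]

# Execution order determined by the plan: Start -> Filter -> Project
start_op("start")
filter_op("start", "filtered")
project_op("filtered", "result")
print(variables["result"])  # [20, 40]
```

Each operator only touches its named input and output variables, which is why execution can proceed step by step from Start without any operator knowing about the plan as a whole.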

"},{"location":"1.introduction/3.nebula-graph-architecture/3.graph-service/#source_code_hierarchy","title":"Source code hierarchy","text":"

The source code hierarchy under the nebula-graph repository is as follows.

|--src\n   |--graph\n      |--context    //contexts for validation and execution\n      |--executor   //execution operators\n      |--gc         //garbage collector\n      |--optimizer  //optimization rules\n      |--planner    //structure of the execution plans\n      |--scheduler  //scheduler\n      |--service    //external service management\n      |--session    //session management\n      |--stats      //monitoring metrics\n      |--util       //basic components\n      |--validator  //validation of the statements\n      |--visitor    //visitor expression\n
"},{"location":"1.introduction/3.nebula-graph-architecture/4.storage-service/","title":"Storage Service","text":"

The persistent data of NebulaGraph consist of two parts. One part is the meta-related data stored by the Meta Service.

The other is the graph data stored by the Storage Service, which is run by the nebula-storaged process. This topic describes the architecture of the Storage Service.

"},{"location":"1.introduction/3.nebula-graph-architecture/4.storage-service/#advantages","title":"Advantages","text":" "},{"location":"1.introduction/3.nebula-graph-architecture/4.storage-service/#the_architecture_of_the_storage_service","title":"The architecture of the Storage Service","text":"

The Storage Service is run by the nebula-storaged process. Users can deploy nebula-storaged processes according to the scenario. For example, users can deploy 1 nebula-storaged process in a test environment and 3 nebula-storaged processes in a production environment.

All the nebula-storaged processes form a Raft-based cluster. There are three layers in the Storage Service:

The following will describe some features of the Storage Service based on the above architecture.

"},{"location":"1.introduction/3.nebula-graph-architecture/4.storage-service/#storage_writing_process","title":"Storage writing process","text":""},{"location":"1.introduction/3.nebula-graph-architecture/4.storage-service/#kvstore","title":"KVStore","text":"

NebulaGraph develops and customizes its built-in KVStore for the following reasons.

Therefore, NebulaGraph develops its own KVStore with RocksDB as the local storage engine. The advantages are as follows.

"},{"location":"1.introduction/3.nebula-graph-architecture/4.storage-service/#data_storage_structure","title":"Data storage structure","text":"

Graphs consist of vertices and edges. NebulaGraph uses key-value pairs to store vertices, edges, and their properties. Vertices and edges are stored in keys and their properties are stored in values. Such structure enables efficient property filtering.

"},{"location":"1.introduction/3.nebula-graph-architecture/4.storage-service/#property_descriptions","title":"Property descriptions","text":"

NebulaGraph uses a strongly typed schema.

NebulaGraph encodes the properties of vertices and edges and stores them in order. Because fixed-length properties have known sizes, a property can be read instantly by its offset. Before decoding, NebulaGraph needs to get (and cache) the schema information from the Meta Service. In addition, when encoding properties, NebulaGraph adds the corresponding schema version to support online schema changes.
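The offset-based lookup can be illustrated with a small sketch. Note that this is illustrative only and not NebulaGraph's actual on-disk encoding; the schema, format strings, and function names below are made up for the example.

```python
import struct

# Hypothetical schema (illustrative, not NebulaGraph's real encoding):
# property name -> struct format, fixed-length types only.
SCHEMA = [('age', '<q'), ('height', '<d'), ('active', '<?')]

# Precompute each property's byte offset from the fixed sizes.
OFFSETS, pos = {}, 0
for name, fmt in SCHEMA:
    OFFSETS[name] = (pos, fmt)
    pos += struct.calcsize(fmt)

def encode(props):
    # Encode properties in schema order into one fixed-length value.
    return b''.join(struct.pack(fmt, props[name]) for name, fmt in SCHEMA)

def read_property(value, name):
    # Read one property directly by its offset, without decoding the rest.
    offset, fmt = OFFSETS[name]
    return struct.unpack_from(fmt, value, offset)[0]

value = encode({'age': 42, 'height': 1.75, 'active': True})
print(read_property(value, 'height'))  # 1.75
```

Because every offset is known from the schema alone, reading a single property costs one `unpack_from` call, which is why fixed-length properties can be queried "in no time".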

"},{"location":"1.introduction/3.nebula-graph-architecture/4.storage-service/#data_partitioning","title":"Data partitioning","text":"

In an ultra-large-scale relational network, there can be tens to hundreds of billions of vertices and more than a trillion edges. Even if only vertices and edges are stored, they exceed the storage capacity of an ordinary server. Therefore, NebulaGraph uses hashing to shard the graph elements and stores them in different partitions.

"},{"location":"1.introduction/3.nebula-graph-architecture/4.storage-service/#edge_partitioning_and_storage_amplification","title":"Edge partitioning and storage amplification","text":"

In NebulaGraph, an edge corresponds to two key-value pairs on the hard disk. When there are lots of edges and each has many properties, storage amplification will be obvious. The storage format of edges is shown in the figure below.

In this example, SrcVertex connects DstVertex via EdgeA, forming the path of (SrcVertex)-[EdgeA]->(DstVertex). SrcVertex, DstVertex, and EdgeA will all be stored in Partition x and Partition y as four key-value pairs in the storage layer. Details are as follows:

EdgeA_Out and EdgeA_In are stored in storage layer with opposite directions, constituting EdgeA logically. EdgeA_Out is used for traversal requests starting from SrcVertex, such as (a)-[]->(); EdgeA_In is used for traversal requests starting from DstVertex, such as ()-[]->(a).

Like EdgeA_Out and EdgeA_In, NebulaGraph stores the information of each edge redundantly, which doubles the capacity actually needed for edge storage. The key corresponding to an edge occupies little disk space, but the space occupied by the value is proportional to the length and number of the property values. Therefore, edges occupy relatively large disk space if their property values are large or numerous.
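The doubling can be sketched as follows. The key layout here is simplified for illustration; real NebulaGraph keys also encode the partition ID, edge rank, and other fields.

```python
def edge_kv_pairs(src, dst, edge_type, props):
    # One logical edge becomes two key-value pairs (simplified key layout):
    out_key = src + '|+' + edge_type + '|' + dst   # partitioned by src
    in_key = dst + '|-' + edge_type + '|' + src    # partitioned by dst
    # Both pairs carry the same property value, so property bytes are doubled.
    return [(out_key, props), (in_key, props)]

pairs = edge_kv_pairs('SrcVertex', 'DstVertex', 'EdgeA', b'...properties...')
print(len(pairs))  # 2
```

The out-key serves traversals starting from SrcVertex, and the in-key serves traversals starting from DstVertex; the property value is stored twice, once per direction.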

"},{"location":"1.introduction/3.nebula-graph-architecture/4.storage-service/#partition_algorithm","title":"Partition algorithm","text":"

NebulaGraph uses a static Hash strategy to shard data through a modulo operation on vertex ID. All the out-keys, in-keys, and tag data will be placed in the same partition. In this way, query efficiency is increased dramatically.

Note

The number of partitions must be determined when creating a graph space, because it cannot be changed afterward. Consider the demands of future business when setting it.

When data is inserted into NebulaGraph, vertices and edges are distributed across different partitions, and the partitions are located on different machines. The number of partitions is set in the CREATE SPACE statement and cannot be changed afterward.

If certain vertices need to be placed on the same partition (i.e., on the same machine), see Formula/code.

The following code briefly describes the relationship between VID and partition.

// If VertexID occupies 8 bytes, it will be stored in int64 to be compatible with the version 1.0.\nuint64_t vid = 0;\nif (id.size() == 8) {\n    memcpy(static_cast<void*>(&vid), id.data(), 8);\n} else {\n    MurmurHash2 hash;\n    vid = hash(id.data());\n}\nPartitionID pId = vid % numParts + 1;\n

Roughly speaking, a fixed-length string is hashed to an int64 (an int64 VID hashes to itself), the result is taken modulo the number of partitions, and one is added, namely:

pId = vid % numParts + 1;\n

Parameters and descriptions of the preceding formula are as follows:

Parameter Description % The modulo operation. numParts The number of partitions for the graph space where the VID is located, namely the value of partition_num in the CREATE SPACE statement. pId The ID for the partition where the VID is located.

Suppose there are 100 partitions; the vertices with VIDs 1, 101, and 1001 will be stored on the same partition. However, the mapping between partition IDs and machine addresses is random, so we cannot assume that any two partitions are located on the same machine.
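The C++ logic above can be mirrored in a few lines of Python. Python's built-in hash is used here as a stand-in for MurmurHash2, so string VIDs will hash to different values than in NebulaGraph; the modulo step is the same.

```python
def partition_id(vid, num_parts):
    # pId = vid % numParts + 1
    if isinstance(vid, int):   # int64 VIDs hash to themselves
        h = vid
    else:                      # string VIDs are hashed first
        h = hash(vid)          # stand-in for MurmurHash2
    return h % num_parts + 1

# With 100 partitions, VIDs 1, 101, and 1001 land on the same partition.
print([partition_id(v, 100) for v in (1, 101, 1001)])  # [2, 2, 2]
```

Python's floored modulo keeps the result in range even for negative hash values, so the returned partition ID is always between 1 and num_parts.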

"},{"location":"1.introduction/3.nebula-graph-architecture/4.storage-service/#raft","title":"Raft","text":""},{"location":"1.introduction/3.nebula-graph-architecture/4.storage-service/#raft_implementation","title":"Raft implementation","text":"

In a distributed system, a piece of data usually has multiple replicas, so that the system can still run normally even if a few replicas fail. Certain technical means are required to ensure consistency between the replicas.

Basic principle: Raft is designed to ensure consistency between replicas. Raft holds elections between replicas, and the (candidate) replica that wins more than half of the votes becomes the Leader, providing external services on behalf of all replicas. The remaining replicas become Followers and act as backups. When the Leader fails (due to communication failure, operation and maintenance commands, etc.), the Followers conduct a new round of elections and vote for a new Leader. The Leader and Followers detect each other's survival through heartbeats, which are written to the hard disk as Raft-wal. A replica that fails to respond to multiple heartbeats is considered faulty.

Note

Raft-wal needs to be written to the hard disk periodically. If disk writes become a bottleneck, Raft will fail to send heartbeats in time and will conduct a new round of elections. If the hard disk I/O is severely blocked, there will be no Leader for a long time.

Read and write: For every write request from a client, the Leader initiates a Raft-wal entry and synchronizes it with the Followers. Success is returned to the client only after more than half of the replicas have received the Raft-wal. Every read request from a client goes directly to the Leader, and the Followers are not involved.

Failure: Scenario 1: Take a (space) cluster with a single replica as an example. If the system has only one replica, the Leader is itself. If failure happens, the system becomes completely unavailable. Scenario 2: Take a (space) cluster with three replicas as an example. If the system has three replicas, one of them is the Leader and the others are Followers. If the Leader fails, the remaining two can still elect a new Leader (and a Follower), and the system remains available. But if either of the two remaining replicas fails as well, the system becomes completely unavailable due to inadequate voters.

Note

Raft and HDFS have different modes of duplication. Raft is based on a quorum vote, so the number of replicas cannot be even.
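The quorum arithmetic behind these scenarios is simple, and the sketch below also shows why an even replica count buys no extra fault tolerance:

```python
def quorum(n):
    # Smallest majority of n replicas.
    return n // 2 + 1

def tolerated_failures(n):
    # Replicas that can fail while a majority can still be formed.
    return n - quorum(n)

for n in (1, 2, 3, 4, 5):
    print(n, quorum(n), tolerated_failures(n))
# 3 and 4 replicas both tolerate only 1 failure, so the 4th replica is wasted.
```

This is why replica counts are chosen as odd numbers: moving from 3 to 4 replicas raises the quorum size without raising the number of tolerated failures.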

"},{"location":"1.introduction/3.nebula-graph-architecture/4.storage-service/#multi_group_raft","title":"Multi Group Raft","text":"

The Storage Service supports a distributed cluster architecture, so NebulaGraph implements Multi Group Raft according to the Raft protocol. Each partition corresponds to one Raft group that manages all of its replicas. One replica is the leader, while the others are followers. In this way, NebulaGraph achieves strong consistency and high availability. The functions of Raft are as follows.

NebulaGraph uses Multi Group Raft to improve performance when there are many partitions, because Raft-wal cannot be empty. However, when there are too many partitions, costs increase, such as the information stored per Raft group, the WAL files, and batch operations under low load.

There are two key points in implementing Multi Group Raft:

"},{"location":"1.introduction/3.nebula-graph-architecture/4.storage-service/#batch","title":"Batch","text":"

For each partition, batching is necessary to improve throughput because the WAL is written serially. As NebulaGraph uses the WAL to implement some special functions, batches need to be split into groups, which is a feature of NebulaGraph.

For example, a lock-free CAS operation can execute only after all the preceding WAL entries are committed. So if a batch contains several CAS-type WAL entries, the batch needs to be divided into several smaller groups that are guaranteed to be committed serially.
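The splitting rule can be sketched as follows. The entry representation ("normal"/"cas" tuples) is made up for illustration; the point is that every CAS entry must start a new group so it commits only after all earlier entries.

```python
def split_batch(entries):
    # Split a WAL batch so that each CAS-type entry starts its own group.
    # `entries` is a list of (kind, payload) tuples, kind in {'normal', 'cas'}.
    groups, current = [], []
    for entry in entries:
        kind, _ = entry
        if kind == 'cas' and current:
            groups.append(current)   # a CAS waits for all earlier entries
            current = []
        current.append(entry)
    if current:
        groups.append(current)
    return groups

batch = [('normal', 1), ('normal', 2), ('cas', 3), ('normal', 4), ('cas', 5)]
print(len(split_batch(batch)))  # 3 groups, committed serially
```

The resulting groups are committed one after another, preserving the invariant that a CAS entry observes all previously written WAL entries.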

"},{"location":"1.introduction/3.nebula-graph-architecture/4.storage-service/#transfer_leadership","title":"Transfer Leadership","text":"

Transferring leadership is extremely important for balance. When moving a partition from one machine to another, NebulaGraph first checks whether the replica on the source machine is the leader. If so, leadership is moved to another peer first. After data migration is completed, it is important to balance the leader distribution again.

When a transfer leadership command is committed, the leader will abandon its leadership and the followers will start a leader election.

"},{"location":"1.introduction/3.nebula-graph-architecture/4.storage-service/#peer_changes","title":"Peer changes","text":"

To avoid split-brain, when the membership of a Raft group changes, an intermediate state is required in which the quorums of the old group and the new group always overlap. This prevents either the old or the new group from making decisions unilaterally. To make it even simpler, Diego Ongaro suggests in his doctoral thesis adding or removing one peer at a time to ensure the overlap between the quorums of the new group and the old group. NebulaGraph also uses this approach, except that the way to add or remove a member is different. For details, please refer to addPeer/removePeer in the Raft Part class.
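The overlap guarantee for single-peer changes can be checked exhaustively for small groups. This is a toy verification of the quorum-overlap property, not NebulaGraph's Raft implementation:

```python
from itertools import combinations

def majorities(members):
    # All majority subsets of a member set.
    n = len(members)
    need = n // 2 + 1
    return [set(c) for k in range(need, n + 1)
            for c in combinations(sorted(members), k)]

def always_overlap(old, new):
    # True if every majority of `old` intersects every majority of `new`.
    return all(a & b for a in majorities(old) for b in majorities(new))

old = {'a', 'b', 'c'}
print(always_overlap(old, old | {'d'}))   # add one peer: True
print(always_overlap(old, old - {'c'}))   # remove one peer: True
```

Changing two peers at once breaks the property (for example, replacing two of three members lets disjoint majorities form), which is exactly why membership is changed one peer at a time.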

"},{"location":"1.introduction/3.nebula-graph-architecture/4.storage-service/#differences_with_hdfs","title":"Differences with HDFS","text":"

The Storage Service has a Raft-based distributed architecture, which differs from that of HDFS in certain ways. For example:

In short, the Storage Service is more lightweight, with some functions simplified, and its architecture is simpler than that of HDFS, which effectively improves the read and write performance of smaller blocks of data.

"},{"location":"14.client/1.nebula-client/","title":"Clients overview","text":"

NebulaGraph supports multiple types of clients for users to connect to and manage the NebulaGraph database.

Note

Only the following classes are thread-safe:

"},{"location":"14.client/3.nebula-cpp-client/","title":"NebulaGraph CPP","text":"

NebulaGraph CPP is a C++ client for connecting to and managing the NebulaGraph database.

"},{"location":"14.client/3.nebula-cpp-client/#prerequisites","title":"Prerequisites","text":"

You have installed GCC 4.8 or a later version.

"},{"location":"14.client/3.nebula-cpp-client/#compatibility_with_nebulagraph","title":"Compatibility with NebulaGraph","text":"

See github.

"},{"location":"14.client/3.nebula-cpp-client/#install_nebulagraph_cpp","title":"Install NebulaGraph CPP","text":"

This document describes how to install NebulaGraph CPP with the source code.

"},{"location":"14.client/3.nebula-cpp-client/#prerequisites_1","title":"Prerequisites","text":""},{"location":"14.client/3.nebula-cpp-client/#steps","title":"Steps","text":"
  1. Clone the NebulaGraph CPP source code to the host.

  2. Change the working directory to nebula-cpp.

    $ cd nebula-cpp\n
  3. Create a directory named build and change the working directory to it.

    $ mkdir build && cd build\n
  4. Generate the makefile file with CMake.

    Note

    The default installation path is /usr/local/nebula. To modify it, add the -DCMAKE_INSTALL_PREFIX=<installation_path> option while running the following command.

    $ cmake -DCMAKE_BUILD_TYPE=Release ..\n

    Note

    If G++ does not support C++ 11, add the option -DDISABLE_CXX11_ABI=ON.

  5. Compile NebulaGraph CPP.

    To speed up the compiling, use the -j option to set a concurrent number N. It should be \\(\\min(\\text{CPU core number},\\frac{\\text{the memory size(GB)}}{2})\\).

    $ make -j{N}\n
  6. Install NebulaGraph CPP.

    $ sudo make install\n
  7. Update the dynamic link library.

    $ sudo ldconfig\n
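The recommended -j concurrency from step 5 can be computed with a quick sketch (the function name is made up for illustration):

```python
def compile_jobs(cpu_cores, mem_gb):
    # Recommended `make -j` concurrency: min(CPU cores, memory_GB / 2).
    return max(1, min(cpu_cores, int(mem_gb // 2)))

print(compile_jobs(16, 8))   # 4: limited by 8 GB of memory
print(compile_jobs(4, 32))   # 4: limited by 4 cores
```

The memory bound exists because each parallel compile job can use roughly 2 GB of RAM; exceeding it risks swapping or out-of-memory failures during the build.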
"},{"location":"14.client/3.nebula-cpp-client/#use_nebulagraph_cpp","title":"Use NebulaGraph CPP","text":"

Compile the CPP file into an executable file before using it. The following steps use SessionExample.cpp as an example.

  1. Use the example code to create the SessionExample.cpp file.

  2. Run the following command to compile the file.

    $ LIBRARY_PATH=<library_folder_path>:$LIBRARY_PATH g++ -std=c++11 SessionExample.cpp -I<include_folder_path> -lnebula_graph_client -o session_example\n

For example:

$ LIBRARY_PATH=/usr/local/nebula/lib64:$LIBRARY_PATH g++ -std=c++11 SessionExample.cpp -I/usr/local/nebula/include -lnebula_graph_client -o session_example\n
"},{"location":"14.client/3.nebula-cpp-client/#api_reference","title":"API reference","text":"

Click here to check the classes and functions provided by the CPP Client.

"},{"location":"14.client/3.nebula-cpp-client/#core_of_the_example_code","title":"Core of the example code","text":"

Nebula CPP clients provide both Session Pool and Connection Pool methods to connect to NebulaGraph. Using the Connection Pool method requires users to manage session instances by themselves.

"},{"location":"14.client/4.nebula-java-client/","title":"NebulaGraph Java","text":"

NebulaGraph Java is a Java client for connecting to and managing the NebulaGraph database.

"},{"location":"14.client/4.nebula-java-client/#prerequisites","title":"Prerequisites","text":"

JDK 8 is installed.

"},{"location":"14.client/4.nebula-java-client/#compatibility_with_nebulagraph","title":"Compatibility with NebulaGraph","text":"

See github.

"},{"location":"14.client/4.nebula-java-client/#download_nebulagraph_java","title":"Download NebulaGraph Java","text":" "},{"location":"14.client/4.nebula-java-client/#use_nebulagraph_java","title":"Use NebulaGraph Java","text":"

Note

We recommend that each thread use one session. If multiple threads use the same session, the performance will be reduced.

When importing a Maven project with tools such as IDEA, set the following dependency in pom.xml.

Note

3.0.0-SNAPSHOT indicates the daily development version, which may have unknown issues. We recommend that you replace 3.0.0-SNAPSHOT with a released version number to use a stable version.

<dependency>\n  <groupId>com.vesoft</groupId>\n  <artifactId>client</artifactId>\n  <version>3.0.0-SNAPSHOT</version>\n</dependency>\n

If you cannot download the dependency for the daily development version, set the following content in pom.xml. Released versions have no such issue.

<repositories> \n  <repository> \n    <id>snapshots</id> \n    <url>https://oss.sonatype.org/content/repositories/snapshots/</url> \n  </repository> \n</repositories>\n

If there is no Maven to manage the project, manually download the JAR file to install NebulaGraph Java.

"},{"location":"14.client/4.nebula-java-client/#api_reference","title":"API reference","text":"

Click here to check the classes and functions provided by the Java Client.

"},{"location":"14.client/4.nebula-java-client/#core_of_the_example_code","title":"Core of the example code","text":"

The NebulaGraph Java client provides both Connection Pool and Session Pool modes. Using the Connection Pool mode requires users to manage session instances by themselves.

"},{"location":"14.client/5.nebula-python-client/","title":"NebulaGraph Python","text":"

NebulaGraph Python is a Python client for connecting to and managing the NebulaGraph database.

"},{"location":"14.client/5.nebula-python-client/#prerequisites","title":"Prerequisites","text":"

You have installed Python 3.6 or later versions.

"},{"location":"14.client/5.nebula-python-client/#compatibility_with_nebulagraph","title":"Compatibility with NebulaGraph","text":"

See github.

"},{"location":"14.client/5.nebula-python-client/#install_nebulagraph_python","title":"Install NebulaGraph Python","text":""},{"location":"14.client/5.nebula-python-client/#install_nebulagraph_python_with_pip","title":"Install NebulaGraph Python with pip","text":"
$ pip install nebula3-python==<version>\n
"},{"location":"14.client/5.nebula-python-client/#install_nebulagraph_python_from_the_source_code","title":"Install NebulaGraph Python from the source code","text":"
  1. Clone the NebulaGraph Python source code to the host.

  2. Change the working directory to nebula-python.

    $ cd nebula-python\n
  3. Run the following command to install NebulaGraph Python.

    $ pip install .\n
"},{"location":"14.client/5.nebula-python-client/#api_reference","title":"API reference","text":"

Click here to check the classes and functions provided by the Python Client.

"},{"location":"14.client/5.nebula-python-client/#core_of_the_example_code","title":"Core of the example code","text":"

The NebulaGraph Python client provides Connection Pool and Session Pool methods to connect to NebulaGraph. Using the Connection Pool method requires users to manage sessions by themselves.

"},{"location":"14.client/6.nebula-go-client/","title":"NebulaGraph Go","text":"

NebulaGraph Go is a Golang client for connecting to and managing the NebulaGraph database.

"},{"location":"14.client/6.nebula-go-client/#prerequisites","title":"Prerequisites","text":"

You have installed Golang 1.13 or later versions.

"},{"location":"14.client/6.nebula-go-client/#compatibility_with_nebulagraph","title":"Compatibility with NebulaGraph","text":"

See github.

"},{"location":"14.client/6.nebula-go-client/#download_nebulagraph_go","title":"Download NebulaGraph Go","text":" "},{"location":"14.client/6.nebula-go-client/#install_or_update","title":"Install or update","text":"

Run the following command to install or update NebulaGraph Go:

$ go get -u -v github.com/vesoft-inc/nebula-go/v3@v3.8.0\n
"},{"location":"14.client/6.nebula-go-client/#api_reference","title":"API reference","text":"

Click here to check the functions and types provided by the GO Client.

"},{"location":"14.client/6.nebula-go-client/#core_of_the_example_code","title":"Core of the example code","text":"

The NebulaGraph GO client provides both Connection Pool and Session Pool. Using the Connection Pool requires users to manage session instances by themselves.

"},{"location":"14.client/contributed-clients/","title":"Community contributed clients","text":"

You can use the following clients developed by community users to connect to and manage NebulaGraph:

"},{"location":"15.contribution/how-to-contribute/","title":"How to Contribute","text":""},{"location":"15.contribution/how-to-contribute/#before_you_get_started","title":"Before you get started","text":""},{"location":"15.contribution/how-to-contribute/#commit_an_issue_on_the_github_or_forum","title":"Commit an issue on the github or forum","text":"

You are welcome to contribute any code or files to the project. However, we suggest that you first raise an issue on GitHub or the forum to start a discussion with the community. Check through the existing topics on GitHub.

"},{"location":"15.contribution/how-to-contribute/#sign_the_contributor_license_agreement_cla","title":"Sign the Contributor License Agreement CLA","text":"
  1. Open the CLA sign-in page.
  2. Click the Sign in with GitHub button to sign in.
  3. Read and agree to the vesoft inc. Contributor License Agreement.

If you have any questions, submit an issue.

"},{"location":"15.contribution/how-to-contribute/#modify_a_single_document","title":"Modify a single document","text":"

This manual is written in the Markdown language. Click the pencil icon on the right of the document title to commit the modification.

This method applies to modifying a single document only.

"},{"location":"15.contribution/how-to-contribute/#batch_modify_or_add_files","title":"Batch modify or add files","text":"

This method applies to contributing code, modifying multiple documents in batches, or adding new documents.

"},{"location":"15.contribution/how-to-contribute/#step_1_fork_in_the_githubcom","title":"Step 1: Fork in the github.com","text":"

The NebulaGraph project has many repositories. Take the nebula repository for example:

  1. Visit https://github.com/vesoft-inc/nebula.

  2. Click the Fork button to establish an online fork.

"},{"location":"15.contribution/how-to-contribute/#step_2_clone_fork_to_local_storage","title":"Step 2: Clone Fork to Local Storage","text":"
  1. Define a local working directory.

    # Define the working directory.\nworking_dir=$HOME/Workspace\n
  2. Set user to match the Github profile name.

    user={the Github profile name}\n
  3. Create your clone.

    mkdir -p $working_dir\ncd $working_dir\ngit clone https://github.com/$user/nebula.git\n# or: git clone git@github.com:$user/nebula.git\n\ncd $working_dir/nebula\ngit remote add upstream https://github.com/vesoft-inc/nebula.git\n# or: git remote add upstream git@github.com:vesoft-inc/nebula.git\n\n# Never push to upstream master since you do not have write access.\ngit remote set-url --push upstream no_push\n\n# Confirm that the remote branch is valid.\n# The correct format is:\n# origin    git@github.com:$(user)/nebula.git (fetch)\n# origin    git@github.com:$(user)/nebula.git (push)\n# upstream  https://github.com/vesoft-inc/nebula (fetch)\n# upstream  no_push (push)\ngit remote -v\n
  4. (Optional) Define a pre-commit hook.

    Please link the NebulaGraph pre-commit hook into the .git directory.

    This hook checks the commits for formatting, building, doc generation, etc.

    cd $working_dir/nebula/.git/hooks\nln -s $working_dir/nebula/.linters/cpp/hooks/pre-commit.sh .\n

Sometimes, the pre-commit hook is not executable. In that case, you have to make it executable manually.

    cd $working_dir/nebula/.git/hooks\nchmod +x pre-commit\n
"},{"location":"15.contribution/how-to-contribute/#step_3_branch","title":"Step 3: Branch","text":"
  1. Get your local master up to date.

    cd $working_dir/nebula\ngit fetch upstream\ngit checkout master\ngit rebase upstream/master\n
  2. Checkout a new branch from master.

    git checkout -b myfeature\n

    Note

Because a PR often consists of several commits, which might be squashed when merged into upstream, we strongly suggest that you open a separate topic branch to make your changes on. After it is merged, this topic branch can simply be abandoned, so you can easily synchronize your master branch with upstream using a rebase as above. Otherwise, if you commit your changes directly to master, you will need a hard reset on the master branch. For example:

    git fetch upstream\ngit checkout master\ngit reset --hard upstream/master\ngit push --force origin master\n
"},{"location":"15.contribution/how-to-contribute/#step_4_develop","title":"Step 4: Develop","text":" "},{"location":"15.contribution/how-to-contribute/#step_5_bring_your_branch_update_to_date","title":"Step 5: Bring Your Branch Update to Date","text":"
# While on your myfeature branch.\ngit fetch upstream\ngit rebase upstream/master\n

Users need to bring the head branch up to date after other contributors merge PRs into the base branch.

"},{"location":"15.contribution/how-to-contribute/#step_6_commit","title":"Step 6: Commit","text":"

Commit your changes.

git commit -a\n

Users can run git commit --amend to re-edit the previous commit.

"},{"location":"15.contribution/how-to-contribute/#step_7_push","title":"Step 7: Push","text":"

When ready to review or just to establish an offsite backup, push your branch to your fork on github.com:

git push origin myfeature\n
"},{"location":"15.contribution/how-to-contribute/#step_8_create_a_pull_request","title":"Step 8: Create a Pull Request","text":"
  1. Visit your fork at https://github.com/$user/nebula (replace $user here).

  2. Click the Compare & pull request button next to your myfeature branch.

"},{"location":"15.contribution/how-to-contribute/#step_9_get_a_code_review","title":"Step 9: Get a Code Review","text":"

Once your pull request has been created, it will be assigned to at least two reviewers. Those reviewers will do a thorough code review to make sure that the changes meet the repository's contributing guidelines and other quality standards.

"},{"location":"15.contribution/how-to-contribute/#add_test_cases","title":"Add test cases","text":"

For detailed methods, see How to add test cases.

"},{"location":"15.contribution/how-to-contribute/#donation","title":"Donation","text":""},{"location":"15.contribution/how-to-contribute/#step_1_confirm_the_project_donation","title":"Step 1: Confirm the project donation","text":"

Contact the official NebulaGraph staff via email, WeChat, Slack, etc. to confirm the donation project. The project will be donated to the NebulaGraph Contrib organization.

Email address: info@vesoft.com

WeChat: NebulaGraphbot

Slack: Join Slack

"},{"location":"15.contribution/how-to-contribute/#step_2_get_the_information_of_the_project_recipient","title":"Step 2: Get the information of the project recipient","text":"

The NebulaGraph official staff will give the recipient ID of the NebulaGraph Contrib project.

"},{"location":"15.contribution/how-to-contribute/#step_3_donate_a_project","title":"Step 3: Donate a project","text":"

The user transfers the project to the recipient of this donation, and the recipient transfers the project to the NebulaGraph Contrib organization. After the donation, the user will continue to lead the development of community projects as a Maintainer.

For operations of transferring a repository on GitHub, see Transferring a repository owned by your user account.

"},{"location":"2.quick-start/1.quick-start-workflow/","title":"Quickly deploy NebulaGraph using Docker","text":"

You can quickly get started with NebulaGraph by deploying NebulaGraph with Docker Desktop or Docker Compose.

Using Docker DesktopUsing Docker Compose

NebulaGraph is available as a Docker Extension that you can easily install and run on your Docker Desktop. You can quickly deploy NebulaGraph using Docker Desktop with just one click.

  1. Install Docker Desktop.

    Caution

    We do not recommend you deploy NebulaGraph on Docker Desktop for Windows due to its subpar performance. For details, see #12401. If you must use Docker Desktop for Windows, install WSL 2 first.

  2. In the left sidebar of Docker Desktop, click Extensions or Add Extensions.

  3. On the Extensions Marketplace, search for NebulaGraph and click Install.

    Click Update to update NebulaGraph to the latest version when a new version is available.

  4. Click Open to navigate to the NebulaGraph extension page.

  5. At the top of the page, click Studio in Browser to use NebulaGraph.

For more information about how to use NebulaGraph with Docker Desktop, see the following video:

Using Docker Compose, you can quickly deploy NebulaGraph services based on the prepared configuration file. This method is recommended only for testing the functions of NebulaGraph.

"},{"location":"2.quick-start/1.quick-start-workflow/#prerequisites","title":"Prerequisites","text":" "},{"location":"2.quick-start/1.quick-start-workflow/#deploy_nebulagraph","title":"Deploy NebulaGraph","text":"
  1. Clone the 3.8.0 branch of the nebula-docker-compose repository to your host with Git.

    Danger

    The master branch contains the untested code for the latest NebulaGraph development release. DO NOT use this release in a production environment.

    $ git clone -b release-3.8 https://github.com/vesoft-inc/nebula-docker-compose.git\n

    Note

    The x.y version of Docker Compose aligns to the x.y version of NebulaGraph. For the NebulaGraph z version, Docker Compose does not publish the corresponding z version, but pulls the z version of the NebulaGraph image.

  2. Go to the nebula-docker-compose directory.

    $ cd nebula-docker-compose/\n
  3. Run the following command to start all the NebulaGraph services.

    Note

    [nebula-docker-compose]$ docker-compose up -d\nCreating nebula-docker-compose_metad0_1 ... done\nCreating nebula-docker-compose_metad2_1 ... done\nCreating nebula-docker-compose_metad1_1 ... done\nCreating nebula-docker-compose_graphd2_1   ... done\nCreating nebula-docker-compose_graphd_1    ... done\nCreating nebula-docker-compose_graphd1_1   ... done\nCreating nebula-docker-compose_storaged0_1 ... done\nCreating nebula-docker-compose_storaged2_1 ... done\nCreating nebula-docker-compose_storaged1_1 ... done\n

    Compatibility

    Starting from NebulaGraph version 3.1.0, nebula-docker-compose automatically starts a NebulaGraph Console docker container and adds the storage host to the cluster (i.e. ADD HOSTS command).

    Note

    For more information of the preceding services, see NebulaGraph architecture.

"},{"location":"2.quick-start/1.quick-start-workflow/#connect_to_nebulagraph","title":"Connect to NebulaGraph","text":"

There are two ways to connect to NebulaGraph:

  1. Run the following command to view the name of NebulaGraph Console docker container.

    $ docker-compose ps\n          Name                         Command             State                 Ports\n--------------------------------------------------------------------------------------------\nnebula-docker-compose_console_1     sh -c sleep 3 &&          Up\n                                  nebula-co ...\n......\n
  2. Run the following command to enter the NebulaGraph Console docker container.

    docker exec -it nebula-docker-compose_console_1 /bin/sh\n/ #\n
  3. Connect to NebulaGraph with NebulaGraph Console.

    / # ./usr/local/bin/nebula-console -u <user_name> -p <password> --address=graphd --port=9669\n

    Note

By default, authentication is off, so you can log in only with an existing username (the default is root) and any password. To turn it on, see Enable authentication.

  4. Run the following commands to view the cluster state.

    nebula> SHOW HOSTS;\n+-------------+------+----------+--------------+----------------------+------------------------+---------+\n| Host        | Port | Status   | Leader count | Leader distribution  | Partition distribution | Version |\n+-------------+------+----------+--------------+----------------------+------------------------+---------+\n| \"storaged0\" | 9779 | \"ONLINE\" | 0            | \"No valid partition\" | \"No valid partition\"   | \"3.8.0\" |\n| \"storaged1\" | 9779 | \"ONLINE\" | 0            | \"No valid partition\" | \"No valid partition\"   | \"3.8.0\" |\n| \"storaged2\" | 9779 | \"ONLINE\" | 0            | \"No valid partition\" | \"No valid partition\"   | \"3.8.0\" |\n+-------------+------+----------+--------------+----------------------+------------------------+---------+\n

Run exit twice to switch back to your terminal (shell).

"},{"location":"2.quick-start/1.quick-start-workflow/#check_the_nebulagraph_service_status_and_ports","title":"Check the NebulaGraph service status and ports","text":"

Run docker-compose ps to list all the services of NebulaGraph and their status and ports.

Note

NebulaGraph provides services to the clients through port 9669 by default. To use other ports, modify the docker-compose.yaml file in the nebula-docker-compose directory and restart the NebulaGraph services.

$ docker-compose ps\nnebula-docker-compose_console_1     sh -c sleep 3 &&                 Up\n                                  nebula-co ...\nnebula-docker-compose_graphd1_1     /usr/local/nebula/bin/nebu ...   Up      0.0.0.0:49174->19669/tcp,:::49174->19669/tcp, 0.0.0.0:49171->19670/tcp,:::49171->19670/tcp, 0.0.0.0:49177->9669/tcp,:::49177->9669/tcp\nnebula-docker-compose_graphd2_1     /usr/local/nebula/bin/nebu ...   Up      0.0.0.0:49175->19669/tcp,:::49175->19669/tcp, 0.0.0.0:49172->19670/tcp,:::49172->19670/tcp, 0.0.0.0:49178->9669/tcp,:::49178->9669/tcp\nnebula-docker-compose_graphd_1      /usr/local/nebula/bin/nebu ...   Up      0.0.0.0:49180->19669/tcp,:::49180->19669/tcp, 0.0.0.0:49179->19670/tcp,:::49179->19670/tcp, 0.0.0.0:9669->9669/tcp,:::9669->9669/tcp\nnebula-docker-compose_metad0_1      /usr/local/nebula/bin/nebu ...   Up      0.0.0.0:49157->19559/tcp,:::49157->19559/tcp, 0.0.0.0:49154->19560/tcp,:::49154->19560/tcp, 0.0.0.0:49160->9559/tcp,:::49160->9559/tcp, 9560/tcp\nnebula-docker-compose_metad1_1      /usr/local/nebula/bin/nebu ...   Up      0.0.0.0:49156->19559/tcp,:::49156->19559/tcp, 0.0.0.0:49153->19560/tcp,:::49153->19560/tcp, 0.0.0.0:49159->9559/tcp,:::49159->9559/tcp, 9560/tcp\nnebula-docker-compose_metad2_1      /usr/local/nebula/bin/nebu ...   Up      0.0.0.0:49158->19559/tcp,:::49158->19559/tcp, 0.0.0.0:49155->19560/tcp,:::49155->19560/tcp, 0.0.0.0:49161->9559/tcp,:::49161->9559/tcp, 9560/tcp\nnebula-docker-compose_storaged0_1   /usr/local/nebula/bin/nebu ...   Up      0.0.0.0:49166->19779/tcp,:::49166->19779/tcp, 0.0.0.0:49163->19780/tcp,:::49163->19780/tcp, 9777/tcp, 9778/tcp, 0.0.0.0:49169->9779/tcp,:::49169->9779/tcp, 9780/tcp\nnebula-docker-compose_storaged1_1   /usr/local/nebula/bin/nebu ...   
Up      0.0.0.0:49165->19779/tcp,:::49165->19779/tcp, 0.0.0.0:49162->19780/tcp,:::49162->19780/tcp, 9777/tcp, 9778/tcp, 0.0.0.0:49168->9779/tcp,:::49168->9779/tcp, 9780/tcp\nnebula-docker-compose_storaged2_1   /usr/local/nebula/bin/nebu ...   Up      0.0.0.0:49167->19779/tcp,:::49167->19779/tcp, 0.0.0.0:49164->19780/tcp,:::49164->19780/tcp, 9777/tcp, 9778/tcp, 0.0.0.0:49170->9779/tcp,:::49170->9779/tcp, 9780/tcp\n
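The host ports in the listing above (such as 49177 in front of graphd's 9669) are assigned randomly by Docker. A minimal shell sketch for pulling the host port out of one of those mapping strings; the mapping value is copied from the example output above and will differ on your machine:

```shell
# A mapping string as printed by `docker-compose ps` (example value from above).
MAPPING="0.0.0.0:49177->9669/tcp"
HOST_PORT=${MAPPING#*:}      # drop the bind address: "49177->9669/tcp"
HOST_PORT=${HOST_PORT%%-*}   # drop everything from "->": "49177"
echo "$HOST_PORT"
```

In practice, `docker-compose port graphd 9669` prints the same host address and port directly for a given service.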

If a service is abnormal, first identify the name of the abnormal container (such as nebula-docker-compose_graphd2_1).

Then run docker ps to find the corresponding CONTAINER ID (such as 2a6c56c405f5).

[nebula-docker-compose]$ docker ps\nCONTAINER ID   IMAGE                               COMMAND                  CREATED          STATUS                    PORTS                                                                                                  NAMES\n2a6c56c405f5   vesoft/nebula-graphd:nightly     \"/usr/local/nebula/b\u2026\"   36 minutes ago   Up 36 minutes (healthy)   0.0.0.0:49230->9669/tcp, 0.0.0.0:49229->19669/tcp, 0.0.0.0:49228->19670/tcp                            nebula-docker-compose_graphd2_1\n7042e0a8e83d   vesoft/nebula-storaged:nightly   \"./bin/nebula-storag\u2026\"   36 minutes ago   Up 36 minutes (healthy)   9777-9778/tcp, 9780/tcp, 0.0.0.0:49227->9779/tcp, 0.0.0.0:49226->19779/tcp, 0.0.0.0:49225->19780/tcp   nebula-docker-compose_storaged2_1\n18e3ea63ad65   vesoft/nebula-storaged:nightly   \"./bin/nebula-storag\u2026\"   36 minutes ago   Up 36 minutes (healthy)   9777-9778/tcp, 9780/tcp, 0.0.0.0:49219->9779/tcp, 0.0.0.0:49218->19779/tcp, 0.0.0.0:49217->19780/tcp   nebula-docker-compose_storaged0_1\n4dcabfe8677a   vesoft/nebula-graphd:nightly     \"/usr/local/nebula/b\u2026\"   36 minutes ago   Up 36 minutes (healthy)   0.0.0.0:49224->9669/tcp, 0.0.0.0:49223->19669/tcp, 0.0.0.0:49222->19670/tcp                            nebula-docker-compose_graphd1_1\na74054c6ae25   vesoft/nebula-graphd:nightly     \"/usr/local/nebula/b\u2026\"   36 minutes ago   Up 36 minutes (healthy)   0.0.0.0:9669->9669/tcp, 0.0.0.0:49221->19669/tcp, 0.0.0.0:49220->19670/tcp                             nebula-docker-compose_graphd_1\n880025a3858c   vesoft/nebula-storaged:nightly   \"./bin/nebula-storag\u2026\"   36 minutes ago   Up 36 minutes (healthy)   9777-9778/tcp, 9780/tcp, 0.0.0.0:49216->9779/tcp, 0.0.0.0:49215->19779/tcp, 0.0.0.0:49214->19780/tcp   nebula-docker-compose_storaged1_1\n45736a32a23a   vesoft/nebula-metad:nightly      \"./bin/nebula-metad \u2026\"   36 minutes ago   Up 36 minutes (healthy)   9560/tcp, 0.0.0.0:49213->9559/tcp, 
0.0.0.0:49212->19559/tcp, 0.0.0.0:49211->19560/tcp                  nebula-docker-compose_metad0_1\n3b2c90eb073e   vesoft/nebula-metad:nightly      \"./bin/nebula-metad \u2026\"   36 minutes ago   Up 36 minutes (healthy)   9560/tcp, 0.0.0.0:49207->9559/tcp, 0.0.0.0:49206->19559/tcp, 0.0.0.0:49205->19560/tcp                  nebula-docker-compose_metad2_1\n7bb31b7a5b3f   vesoft/nebula-metad:nightly      \"./bin/nebula-metad \u2026\"   36 minutes ago   Up 36 minutes (healthy)   9560/tcp, 0.0.0.0:49210->9559/tcp, 0.0.0.0:49209->19559/tcp, 0.0.0.0:49208->19560/tcp                  nebula-docker-compose_metad1_1\n

Use the CONTAINER ID to log in to the container and troubleshoot.

[nebula-docker-compose]$ docker exec -it 2a6c56c405f5 bash\n[root@2a6c56c405f5 nebula]#\n
"},{"location":"2.quick-start/1.quick-start-workflow/#check_the_service_data_and_logs","title":"Check the service data and logs","text":"

All the data and logs of NebulaGraph are stored persistently in the nebula-docker-compose/data and nebula-docker-compose/logs directories.

The structure of the directories is as follows:

nebula-docker-compose/\n  |-- docker-compose.yaml\n  \u251c\u2500\u2500 data\n  \u2502\u00a0\u00a0 \u251c\u2500\u2500 meta0\n  \u2502\u00a0\u00a0 \u251c\u2500\u2500 meta1\n  \u2502\u00a0\u00a0 \u251c\u2500\u2500 meta2\n  \u2502\u00a0\u00a0 \u251c\u2500\u2500 storage0\n  \u2502\u00a0\u00a0 \u251c\u2500\u2500 storage1\n  \u2502\u00a0\u00a0 \u2514\u2500\u2500 storage2\n  \u2514\u2500\u2500 logs\n      \u251c\u2500\u2500 graph\n      \u251c\u2500\u2500 graph1\n      \u251c\u2500\u2500 graph2\n      \u251c\u2500\u2500 meta0\n      \u251c\u2500\u2500 meta1\n      \u251c\u2500\u2500 meta2\n      \u251c\u2500\u2500 storage0\n      \u251c\u2500\u2500 storage1\n      \u2514\u2500\u2500 storage2\n
"},{"location":"2.quick-start/1.quick-start-workflow/#stop_the_nebulagraph_services","title":"Stop the NebulaGraph services","text":"

You can run the following command to stop the NebulaGraph services:

$ docker-compose down\n

The following information indicates you have successfully stopped the NebulaGraph services:

Stopping nebula-docker-compose_console_1   ... done\nStopping nebula-docker-compose_graphd1_1   ... done\nStopping nebula-docker-compose_graphd_1    ... done\nStopping nebula-docker-compose_graphd2_1   ... done\nStopping nebula-docker-compose_storaged1_1 ... done\nStopping nebula-docker-compose_storaged0_1 ... done\nStopping nebula-docker-compose_storaged2_1 ... done\nStopping nebula-docker-compose_metad2_1    ... done\nStopping nebula-docker-compose_metad0_1    ... done\nStopping nebula-docker-compose_metad1_1    ... done\nRemoving nebula-docker-compose_console_1   ... done\nRemoving nebula-docker-compose_graphd1_1   ... done\nRemoving nebula-docker-compose_graphd_1    ... done\nRemoving nebula-docker-compose_graphd2_1   ... done\nRemoving nebula-docker-compose_storaged1_1 ... done\nRemoving nebula-docker-compose_storaged0_1 ... done\nRemoving nebula-docker-compose_storaged2_1 ... done\nRemoving nebula-docker-compose_metad2_1    ... done\nRemoving nebula-docker-compose_metad0_1    ... done\nRemoving nebula-docker-compose_metad1_1    ... done\nRemoving network nebula-docker-compose_nebula-net\n

Danger

The parameter -v in the command docker-compose down -v deletes all your local NebulaGraph storage data. Try this command if you are using the nightly release and encounter compatibility issues.

"},{"location":"2.quick-start/1.quick-start-workflow/#modify_configurations","title":"Modify configurations","text":"

The configuration file of NebulaGraph deployed by Docker Compose is nebula-docker-compose/docker-compose.yaml. To make the new configuration take effect, modify the configuration in this file and restart the service.

For more instructions, see Configurations.

"},{"location":"2.quick-start/1.quick-start-workflow/#faq","title":"FAQ","text":""},{"location":"2.quick-start/1.quick-start-workflow/#how_to_fix_the_docker_mapping_to_external_ports","title":"How to fix the docker mapping to external ports?","text":"

To fix the mapping between the internal ports of the corresponding services and external ports, modify the docker-compose.yaml file in the nebula-docker-compose directory. For example:

graphd:\n    image: vesoft/nebula-graphd:release-3.6\n    ...\n    ports:\n      - 9669:9669\n      - 19669\n      - 19670\n

9669:9669 maps the internal port 9669 to the fixed external port 9669, while a bare 19669 maps the internal port 19669 to a random external port.

"},{"location":"2.quick-start/1.quick-start-workflow/#how_to_upgrade_or_update_the_docker_images_of_nebulagraph_services","title":"How to upgrade or update the docker images of NebulaGraph services","text":"
  1. In the nebula-docker-compose/docker-compose.yaml file, change all the image values to the required image version.

  2. In the nebula-docker-compose directory, run docker-compose pull to update the images of the Graph Service, Storage Service, Meta Service, and NebulaGraph Console.

  3. Run docker-compose up -d to start the NebulaGraph services again.

  4. After connecting to NebulaGraph with NebulaGraph Console, run SHOW HOSTS GRAPH, SHOW HOSTS STORAGE, or SHOW HOSTS META to check the version of the corresponding service.
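The tag change in step 1 can be scripted. Below is a hedged sketch that rewrites one sample image line; the v3.8.0 tag is only a hypothetical target version, and the in-place commands that act on a running deployment are shown as comments:

```shell
# Rewrite the tag of a sample image line from docker-compose.yaml.
# "v3.8.0" is a hypothetical target version -- use the release you need.
TARGET="v3.8.0"
LINE="    image: vesoft/nebula-graphd:nightly"
printf '%s\n' "$LINE" | sed -E "s|(image: vesoft/nebula-[a-z]+:).*|\1${TARGET}|"
# Applied in place, followed by steps 2-3:
#   sed -i.bak -E "s|(image: vesoft/nebula-[a-z]+:).*|\1v3.8.0|" docker-compose.yaml
#   docker-compose pull && docker-compose up -d
```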

"},{"location":"2.quick-start/1.quick-start-workflow/#error_toomanyrequests_when_docker-compose_pull","title":"ERROR: toomanyrequests when docker-compose pull","text":"

You may encounter the following error.

ERROR: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit.

You have reached the Docker Hub pull rate limit. Learn more in Understanding Docker Hub Rate Limiting.

"},{"location":"2.quick-start/1.quick-start-workflow/#how_to_update_the_nebulagraph_console_client","title":"How to update the NebulaGraph Console client","text":"

The command docker-compose pull updates both the NebulaGraph services and the NebulaGraph Console.

"},{"location":"2.quick-start/2.install-nebula-graph/","title":"Step 1: Install NebulaGraph","text":"

RPM and DEB are common package formats on Linux systems. This topic shows how to quickly install NebulaGraph with the RPM or DEB package.

Note

The console is not compiled or packaged with the NebulaGraph server binaries. You can install nebula-console by yourself.

"},{"location":"2.quick-start/2.install-nebula-graph/#prerequisites","title":"Prerequisites","text":""},{"location":"2.quick-start/2.install-nebula-graph/#step_1_download_the_package_from_cloud_service","title":"Step 1: Download the package from cloud service","text":"

Note

NebulaGraph can currently be installed only on Linux, and only the CentOS 7.x, CentOS 8.x, Ubuntu 16.04, Ubuntu 18.04, and Ubuntu 20.04 operating systems are supported.

"},{"location":"2.quick-start/2.install-nebula-graph/#step_2_install_nebulagraph","title":"Step 2: Install NebulaGraph","text":" "},{"location":"2.quick-start/2.install-nebula-graph/#next_to_do","title":"Next to do","text":" "},{"location":"2.quick-start/3.1add-storage-hosts/","title":"Register the Storage Service","text":"

When connecting to NebulaGraph for the first time, you have to add the Storage hosts, and confirm that all the hosts are online.

Compatibility

"},{"location":"2.quick-start/3.1add-storage-hosts/#prerequisites","title":"Prerequisites","text":"

You have connected to NebulaGraph.

"},{"location":"2.quick-start/3.1add-storage-hosts/#steps","title":"Steps","text":"
  1. Add the Storage hosts.

    Run the following command to add hosts:

    ADD HOSTS <ip>:<port> [,<ip>:<port> ...];\n

    Example:

    nebula> ADD HOSTS 192.168.10.100:9779, 192.168.10.101:9779, 192.168.10.102:9779;\n

    Caution

    Make sure that the IP you added is the same as the IP configured for local_ip in the nebula-storaged.conf file. Otherwise, the Storage service will fail to start. For information about configurations, see Configurations.

  2. Check the status of the hosts to make sure that they are all online.

    nebula> SHOW HOSTS;\n+------------------+------+----------+--------------+----------------------+------------------------+---------+\n| Host             | Port | Status   | Leader count | Leader distribution  | Partition distribution | Version |\n+------------------+------+----------+--------------+----------------------+------------------------+---------+\n| \"192.168.10.100\" | 9779 | \"ONLINE\" | 0            | \"No valid partition\" | \"No valid partition\"   | \"3.8.0\" |\n| \"192.168.10.101\" | 9779 | \"ONLINE\" | 0            | \"No valid partition\" | \"No valid partition\"   | \"3.8.0\" |\n| \"192.168.10.102\" | 9779 | \"ONLINE\" | 0            | \"No valid partition\" | \"No valid partition\"   | \"3.8.0\" |\n+------------------+------+----------+--------------+----------------------+------------------------+---------+\n

    The Status column of the result above shows that all Storage hosts are online.
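The caution in step 1 can be double-checked mechanically by reading local_ip back from the Storage configuration. A minimal sketch; the configuration line below is a sample with a hypothetical value, and the real file lives at etc/nebula-storaged.conf under your installation directory:

```shell
# Sample line as it appears in nebula-storaged.conf (value is hypothetical).
CONF_LINE="--local_ip=192.168.10.100"
LOCAL_IP=${CONF_LINE#*=}   # -> "192.168.10.100"
echo "ADD HOSTS should use: ${LOCAL_IP}:9779"
# Against a real installation:
#   grep -- '--local_ip' /usr/local/nebula/etc/nebula-storaged.conf
```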

"},{"location":"2.quick-start/3.connect-to-nebula-graph/","title":"Step 3: Connect to NebulaGraph","text":"

This topic provides basic instruction on how to use the native CLI client NebulaGraph Console to connect to NebulaGraph.

Caution

When connecting to NebulaGraph for the first time, you must register the Storage Service before querying data.

NebulaGraph supports multiple types of clients, including a CLI client, a GUI client, and clients developed in popular programming languages. For more information, see the client list.

"},{"location":"2.quick-start/3.connect-to-nebula-graph/#prerequisites","title":"Prerequisites","text":" "},{"location":"2.quick-start/3.connect-to-nebula-graph/#steps","title":"Steps","text":"
  1. On the NebulaGraph Console releases page, select a NebulaGraph Console version and click Assets.

    Note

    It is recommended to select the latest version.

  2. In the Assets area, find the correct binary file for the machine where you want to run NebulaGraph Console and download the file to the machine.

  3. (Optional) Rename the binary file to nebula-console for convenience.

    Note

    For Windows, rename the file to nebula-console.exe.

  4. On the machine to run NebulaGraph Console, grant the execute permission of the nebula-console binary file to the user.

    Note

    For Windows, skip this step.

    $ chmod 111 nebula-console\n
  5. In the command line interface, change the working directory to the one where the nebula-console binary file is stored.

  6. Run the following command to connect to NebulaGraph.

    $ ./nebula-console -addr <ip> -port <port> -u <username> -p <password>\n[-t 120] [-e \"nGQL_statement\" | -f filename.nGQL]\n
    > nebula-console.exe -addr <ip> -port <port> -u <username> -p <password>\n[-t 120] [-e \"nGQL_statement\" | -f filename.nGQL]\n

    Parameter descriptions are as follows:

    Parameter Description -h/-help Shows the help menu. -addr/-address Sets the IP (or hostname) of the Graph service. The default address is 127.0.0.1. -P/-port Sets the port number of the graphd service. The default port number is 9669. -u/-user Sets the username of your NebulaGraph account. Before enabling authentication, you can use any existing username. The default username is root. -p/-password Sets the password of your NebulaGraph account. Before enabling authentication, you can use any characters as the password. -t/-timeout Sets an integer-type timeout threshold of the connection. The unit is millisecond. The default value is 120. -e/-eval Sets a string-type nGQL statement. The nGQL statement is executed once the connection succeeds. The connection stops after the result is returned. -f/-file Sets the path of an nGQL file. The nGQL statements in the file are executed once the connection succeeds. The result will be returned and the connection stops then. -enable_ssl Enables SSL encryption when connecting to NebulaGraph. -ssl_root_ca_path Sets the storage path of the certification authority file. -ssl_cert_path Sets the storage path of the certificate file. -ssl_private_key_path Sets the storage path of the private key file.

    For information on more parameters, see the project repository.

"},{"location":"2.quick-start/4.nebula-graph-crud/","title":"Step 4: Use nGQL (CRUD)","text":"

This topic will describe the basic CRUD operations in NebulaGraph.

For more information, see nGQL guide.

"},{"location":"2.quick-start/4.nebula-graph-crud/#graph_space_and_nebulagraph_schema","title":"Graph space and NebulaGraph schema","text":"

A NebulaGraph instance consists of one or more graph spaces. Graph spaces are physically isolated from each other. You can use different graph spaces in the same instance to store different datasets.

To insert data into a graph space, define a schema for the graph database. NebulaGraph schema is based on the following components.

Schema component Description Vertex Represents an entity in the real world. A vertex can have zero to multiple tags. Tag The type of the same group of vertices. It defines a set of properties that describes the types of vertices. Edge Represents a directed relationship between two vertices. Edge type The type of an edge. It defines a group of properties that describes the types of edges.

For more information, see Data modeling.

In this topic, we will use the following dataset to demonstrate basic CRUD operations.

"},{"location":"2.quick-start/4.nebula-graph-crud/#async_implementation_of_create_and_alter","title":"Async implementation of CREATE and ALTER","text":"

Caution

In NebulaGraph, the following CREATE or ALTER commands are implemented asynchronously and take effect in the next heartbeat cycle; follow-up operations issued before then will return an error. To make sure they work as expected, wait for two heartbeat cycles, i.e., 20 seconds.

Note

The default heartbeat interval is 10 seconds. To change the heartbeat interval, modify the heartbeat_interval_secs parameter in the configuration files for all services.
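The waiting rule above is just two heartbeat cycles of whatever heartbeat_interval_secs is set to. A small sketch assuming the default of 10 seconds:

```shell
# Assumption: heartbeat_interval_secs keeps its default value of 10.
HEARTBEAT_INTERVAL_SECS=10
WAIT_SECS=$((2 * HEARTBEAT_INTERVAL_SECS))   # two heartbeat cycles
echo "wait ${WAIT_SECS}s after CREATE/ALTER before follow-up statements"
# e.g. sleep "$WAIT_SECS" between CREATE SPACE/TAG/EDGE and the first INSERT
```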

"},{"location":"2.quick-start/4.nebula-graph-crud/#create_and_use_a_graph_space","title":"Create and use a graph space","text":""},{"location":"2.quick-start/4.nebula-graph-crud/#ngql_syntax","title":"nGQL syntax","text":" "},{"location":"2.quick-start/4.nebula-graph-crud/#examples","title":"Examples","text":"
  1. Use the following statement to create a graph space named basketballplayer.

    nebula> CREATE SPACE basketballplayer(partition_num=15, replica_factor=1, vid_type=fixed_string(30));\n

    Note

    If the system returns the error [ERROR (-1005)]: Host not enough!, check whether you have registered the Storage Service.

  2. Check the partition distribution with SHOW HOSTS to make sure that the partitions are distributed in a balanced way.

    nebula> SHOW HOSTS;\n+-------------+------+----------+--------------+----------------------+------------------------+---------+\n| Host        | Port | Status   | Leader count | Leader distribution  | Partition distribution | Version |\n+-------------+------+----------+--------------+----------------------+------------------------+---------+\n| \"storaged0\" | 9779 | \"ONLINE\" | 5            | \"basketballplayer:5\" | \"basketballplayer:5\"   | \"3.8.0\" |\n| \"storaged1\" | 9779 | \"ONLINE\" | 5            | \"basketballplayer:5\" | \"basketballplayer:5\"   | \"3.8.0\" |\n| \"storaged2\" | 9779 | \"ONLINE\" | 5            | \"basketballplayer:5\" | \"basketballplayer:5\"   | \"3.8.0\" |\n+-------------+------+----------+--------------+----------------------+------------------------+---------+\n

    If the Leader distribution is uneven, use BALANCE LEADER to redistribute the partitions. For more information, see BALANCE.

  3. Use the basketballplayer graph space.

    nebula[(none)]> USE basketballplayer;\n

    You can use SHOW SPACES to check the graph space you created.

    nebula> SHOW SPACES;\n+--------------------+\n| Name               |\n+--------------------+\n| \"basketballplayer\" |\n+--------------------+\n
"},{"location":"2.quick-start/4.nebula-graph-crud/#create_tags_and_edge_types","title":"Create tags and edge types","text":""},{"location":"2.quick-start/4.nebula-graph-crud/#ngql_syntax_1","title":"nGQL syntax","text":"
CREATE {TAG | EDGE} [IF NOT EXISTS] {<tag_name> | <edge_type_name>}\n    (\n      <prop_name> <data_type> [NULL | NOT NULL] [DEFAULT <default_value>] [COMMENT '<comment>']\n      [{, <prop_name> <data_type> [NULL | NOT NULL] [DEFAULT <default_value>] [COMMENT '<comment>']} ...] \n    )\n    [TTL_DURATION = <ttl_duration>]\n    [TTL_COL = <prop_name>]\n    [COMMENT = '<comment>'];\n

For more information on parameters, see CREATE TAG and CREATE EDGE.

"},{"location":"2.quick-start/4.nebula-graph-crud/#examples_1","title":"Examples","text":"

Create tags player and team, and edge types follow and serve. Descriptions are as follows.

Component name Type Property player Tag name (string), age (int) team Tag name (string) follow Edge type degree (int) serve Edge type start_year (int), end_year (int)
nebula> CREATE TAG player(name string, age int);\n\nnebula> CREATE TAG team(name string);\n\nnebula> CREATE EDGE follow(degree int);\n\nnebula> CREATE EDGE serve(start_year int, end_year int);\n
"},{"location":"2.quick-start/4.nebula-graph-crud/#insert_vertices_and_edges","title":"Insert vertices and edges","text":"

You can use the INSERT statement to insert vertices or edges based on existing tags or edge types.

"},{"location":"2.quick-start/4.nebula-graph-crud/#ngql_syntax_2","title":"nGQL syntax","text":" "},{"location":"2.quick-start/4.nebula-graph-crud/#examples_2","title":"Examples","text":" "},{"location":"2.quick-start/4.nebula-graph-crud/#read_data","title":"Read data","text":" "},{"location":"2.quick-start/4.nebula-graph-crud/#ngql_syntax_3","title":"nGQL syntax","text":" "},{"location":"2.quick-start/4.nebula-graph-crud/#examples_of_go_statement","title":"Examples of GO statement","text":" "},{"location":"2.quick-start/4.nebula-graph-crud/#example_of_fetch_statement","title":"Example of FETCH statement","text":"

Use FETCH: Fetch the properties of the player with VID player100.

nebula> FETCH PROP ON player \"player100\" YIELD properties(vertex);\n+-------------------------------+\n| properties(VERTEX)            |\n+-------------------------------+\n| {age: 42, name: \"Tim Duncan\"} |\n+-------------------------------+\n

Note

The examples of LOOKUP and MATCH statements are in indexes.

"},{"location":"2.quick-start/4.nebula-graph-crud/#update_vertices_and_edges","title":"Update vertices and edges","text":"

Users can use the UPDATE or UPSERT statement to update existing data.

UPSERT is the combination of UPDATE and INSERT. If you update a vertex or an edge with UPSERT, the database will insert a new vertex or edge if it does not exist.

Note

UPSERT operates serially in a partition-based order, so it is slower than INSERT or UPDATE. UPSERT statements are concurrent only across different partitions.

"},{"location":"2.quick-start/4.nebula-graph-crud/#ngql_syntax_4","title":"nGQL syntax","text":" "},{"location":"2.quick-start/4.nebula-graph-crud/#examples_3","title":"Examples","text":" "},{"location":"2.quick-start/4.nebula-graph-crud/#delete_vertices_and_edges","title":"Delete vertices and edges","text":""},{"location":"2.quick-start/4.nebula-graph-crud/#ngql_syntax_5","title":"nGQL syntax","text":" "},{"location":"2.quick-start/4.nebula-graph-crud/#examples_4","title":"Examples","text":" "},{"location":"2.quick-start/4.nebula-graph-crud/#about_indexes","title":"About indexes","text":"

Users can add indexes to tags and edge types with the CREATE INDEX statement.

Must-read for using indexes

Both MATCH and LOOKUP statements depend on the indexes. But indexes can dramatically reduce the write performance. DO NOT use indexes in production environments unless you are fully aware of their influences on your service.

Users MUST rebuild indexes for pre-existing data. Otherwise, the pre-existing data cannot be indexed and therefore cannot be returned in MATCH or LOOKUP statements. For more information, see REBUILD INDEX.

"},{"location":"2.quick-start/4.nebula-graph-crud/#ngql_syntax_6","title":"nGQL syntax","text":"

Note

Define the index length when creating an index for a variable-length property. In UTF-8 encoding, a non-ASCII character occupies 3 bytes. Set an appropriate index length according to the property values; for example, the index length should be 30 bytes for 10 non-ASCII characters. For more information, see CREATE INDEX.
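The sizing rule in the note amounts to multiplying the character count by the worst-case bytes per character. A minimal sketch assuming 3 bytes per non-ASCII character, as stated above:

```shell
# 10 non-ASCII characters at 3 bytes each in UTF-8.
CHARS=10
BYTES_PER_CHAR=3
INDEX_LEN=$((CHARS * BYTES_PER_CHAR))
echo "index length: $INDEX_LEN"   # index length: 30
```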

"},{"location":"2.quick-start/4.nebula-graph-crud/#examples_of_lookup_and_match_index-based","title":"Examples of LOOKUP and MATCH (index-based)","text":"

Make sure there is an index for LOOKUP or MATCH to use. If there is none, create one first.

Find the vertex with the tag player whose name property is Tony Parker.

This example creates the index player_index_1 on the name property.

nebula> CREATE TAG INDEX IF NOT EXISTS player_index_1 ON player(name(20));\n

This example rebuilds the index to make sure it takes effect on pre-existing data.

nebula> REBUILD TAG INDEX player_index_1\n+------------+\n| New Job Id |\n+------------+\n| 31         |\n+------------+\n

This example uses the LOOKUP statement to retrieve the vertex property.

nebula> LOOKUP ON player WHERE player.name == \"Tony Parker\" \\\n        YIELD properties(vertex).name AS name, properties(vertex).age AS age;\n+---------------+-----+\n| name          | age |\n+---------------+-----+\n| \"Tony Parker\" | 36  |\n+---------------+-----+\n

This example uses the MATCH statement to retrieve the vertex property.

nebula> MATCH (v:player{name:\"Tony Parker\"}) RETURN v;\n+-----------------------------------------------------+\n| v                                                   |\n+-----------------------------------------------------+\n| (\"player101\" :player{age: 36, name: \"Tony Parker\"}) |\n+-----------------------------------------------------+\n
"},{"location":"2.quick-start/5.start-stop-service/","title":"Step 2: Manage NebulaGraph Service","text":"

NebulaGraph supports managing services with scripts.

"},{"location":"2.quick-start/5.start-stop-service/#manage_services_with_script","title":"Manage services with script","text":"

You can use the nebula.service script to start, stop, restart, terminate, and check the NebulaGraph services.

Note

nebula.service is stored in the /usr/local/nebula/scripts directory by default. If you have customized the path, use the actual path in your environment.

"},{"location":"2.quick-start/5.start-stop-service/#syntax","title":"Syntax","text":"
$ sudo /usr/local/nebula/scripts/nebula.service\n[-v] [-c <config_file_path>]\n<start | stop | restart | kill | status>\n<metad | graphd | storaged | all>\n
Parameter Description -v Display detailed debugging information. -c Specify the configuration file path. The default path is /usr/local/nebula/etc/. start Start the target services. stop Stop the target services. restart Restart the target services. kill Terminate the target services. status Check the status of the target services. metad Set the Meta Service as the target service. graphd Set the Graph Service as the target service. storaged Set the Storage Service as the target service. all Set all the NebulaGraph services as the target services."},{"location":"2.quick-start/5.start-stop-service/#start_nebulagraph","title":"Start NebulaGraph","text":"

Run the following command to start NebulaGraph.

$ sudo /usr/local/nebula/scripts/nebula.service start all\n[INFO] Starting nebula-metad...\n[INFO] Done\n[INFO] Starting nebula-graphd...\n[INFO] Done\n[INFO] Starting nebula-storaged...\n[INFO] Done\n
"},{"location":"2.quick-start/5.start-stop-service/#stop_nebulagraph","title":"Stop NebulaGraph","text":"

Danger

Do not run kill -9 to forcibly terminate the processes. Otherwise, there is a low probability of data loss.

Run the following command to stop NebulaGraph.

$ sudo /usr/local/nebula/scripts/nebula.service stop all\n[INFO] Stopping nebula-metad...\n[INFO] Done\n[INFO] Stopping nebula-graphd...\n[INFO] Done\n[INFO] Stopping nebula-storaged...\n[INFO] Done\n
"},{"location":"2.quick-start/5.start-stop-service/#check_the_service_status","title":"Check the service status","text":"

Run the following command to check the service status of NebulaGraph.

$ sudo /usr/local/nebula/scripts/nebula.service status all\n

The NebulaGraph services consist of the Meta Service, Graph Service, and Storage Service. The configuration files for all three services are stored in the /usr/local/nebula/etc/ directory by default. You can check the configuration files according to the returned result to troubleshoot problems.

"},{"location":"2.quick-start/5.start-stop-service/#next_to_do","title":"Next to do","text":"

Connect to NebulaGraph

"},{"location":"2.quick-start/6.cheatsheet-for-ngql/","title":"nGQL cheatsheet","text":""},{"location":"2.quick-start/6.cheatsheet-for-ngql/#functions","title":"Functions","text":" "},{"location":"2.quick-start/6.cheatsheet-for-ngql/#general_queries_statements","title":"General queries statements","text":" "},{"location":"2.quick-start/6.cheatsheet-for-ngql/#clauses_and_options","title":"Clauses and options","text":"Clause Syntax Example Description GROUP BY GROUP BY <var> YIELD <var>, <aggregation_function(var)> GO FROM \"player100\" OVER follow BIDIRECT YIELD $$.player.name as Name | GROUP BY $-.Name YIELD $-.Name as Player, count(*) AS Name_Count Finds all the vertices connected directly to vertex \"player100\", groups the result set by player names, and counts how many times the name shows up in the result set. LIMIT YIELD <var> [| LIMIT [<offset_value>,] <number_rows>] GO FROM \"player100\" OVER follow REVERSELY YIELD $$.player.name AS Friend, $$.player.age AS Age | ORDER BY $-.Age, $-.Friend | LIMIT 1, 3 Returns the 3 rows of data starting from the second row of the sorted output. SKIP RETURN <var> [SKIP <offset>] [LIMIT <number_rows>] MATCH (v:player{name:\"Tim Duncan\"}) --> (v2) RETURN v2.player.name AS Name, v2.player.age AS Age ORDER BY Age DESC SKIP 1 SKIP can be used alone to set the offset and return the data after the specified position. SAMPLE <go_statement> SAMPLE <sample_list>; GO 3 STEPS FROM \"player100\" OVER * YIELD properties($$).name AS NAME, properties($$).age AS Age SAMPLE [1,2,3]; Takes samples evenly in the result set and returns the specified amount of data. ORDER BY <YIELD clause> ORDER BY <expression> [ASC | DESC] [, <expression> [ASC | DESC] ...] FETCH PROP ON player \"player100\", \"player101\", \"player102\", \"player103\" YIELD player.age AS age, player.name AS name | ORDER BY $-.age ASC, $-.name DESC The ORDER BY clause specifies the order of the rows in the output. 
RETURN RETURN {<vertex_name>|<edge_name>|<vertex_name>.<property>|<edge_name>.<property>|...} MATCH (v:player) RETURN v.player.name, v.player.age LIMIT 3 Returns the first three rows with values of the vertex properties name and age. TTL CREATE TAG <tag_name>(<property_name_1> <property_value_1>, <property_name_2> <property_value_2>, ...) ttl_duration= <value_int>, ttl_col = <property_name> CREATE TAG t2(a int, b int, c string) ttl_duration= 100, ttl_col = \"a\" Create a tag and set the TTL options. WHERE WHERE {<vertex|edge_alias>.<property_name> {>|==|<|...} <value>...} MATCH (v:player) WHERE v.player.name == \"Tim Duncan\" XOR (v.player.age < 30 AND v.player.name == \"Yao Ming\") OR NOT (v.player.name == \"Yao Ming\" OR v.player.name == \"Tim Duncan\") RETURN v.player.name, v.player.age The WHERE clause filters the output by conditions. The WHERE clause usually works in Native nGQL GO and LOOKUP statements, and OpenCypher MATCH and WITH statements. YIELD YIELD [DISTINCT] <col> [AS <alias>] [, <col> [AS <alias>] ...] [WHERE <conditions>]; GO FROM \"player100\" OVER follow YIELD dst(edge) AS ID | FETCH PROP ON player $-.ID YIELD player.age AS Age | YIELD AVG($-.Age) as Avg_age, count(*)as Num_friends Finds the players that \"player100\" follows and calculates their average age. WITH MATCH $expressions WITH {nodes()|labels()|...} MATCH p=(v:player{name:\"Tim Duncan\"})--() WITH nodes(p) AS n UNWIND n AS n1 RETURN DISTINCT n1 The WITH clause can retrieve the output from a query part, process it, and pass it to the next query part as the input. 
UNWIND UNWIND <list> AS <alias> <RETURN clause> UNWIND [1,2,3] AS n RETURN n Splits a list into rows."},{"location":"2.quick-start/6.cheatsheet-for-ngql/#space_statements","title":"Space statements","text":"Statement Syntax Example Description CREATE SPACE CREATE SPACE [IF NOT EXISTS] <graph_space_name> ( [partition_num = <partition_number>,] [replica_factor = <replica_number>,] vid_type = {FIXED_STRING(<N>) | INT[64]} ) [COMMENT = '<comment>'] CREATE SPACE my_space_1 (vid_type=FIXED_STRING(30)) Creates a graph space with the given name. CREATE SPACE CREATE SPACE <new_graph_space_name> AS <old_graph_space_name> CREATE SPACE my_space_4 as my_space_3 Clones a graph space. USE USE <graph_space_name> USE space1 Specifies a graph space as the current working graph space for subsequent queries. SHOW SPACES SHOW SPACES SHOW SPACES Lists all the graph spaces in the NebulaGraph instance. DESCRIBE SPACE DESC[RIBE] SPACE <graph_space_name> DESCRIBE SPACE basketballplayer Returns the information about the specified graph space. CLEAR SPACE CLEAR SPACE [IF EXISTS] <graph_space_name> Deletes the vertices and edges in a graph space, but does not delete the graph space itself and the schema information. DROP SPACE DROP SPACE [IF EXISTS] <graph_space_name> DROP SPACE basketballplayer Deletes everything in the specified graph space."},{"location":"2.quick-start/6.cheatsheet-for-ngql/#tag_statements","title":"TAG statements","text":"Statement Syntax Example Description CREATE TAG CREATE TAG [IF NOT EXISTS] <tag_name> ( <prop_name> <data_type> [NULL | NOT NULL] [DEFAULT <default_value>] [COMMENT '<comment>'] [{, <prop_name> <data_type> [NULL | NOT NULL] [DEFAULT <default_value>] [COMMENT '<comment>']} ...] ) [TTL_DURATION = <ttl_duration>] [TTL_COL = <prop_name>] [COMMENT = '<comment>'] CREATE TAG woman(name string, age int, married bool, salary double, create_time timestamp) TTL_DURATION = 100, TTL_COL = \"create_time\" Creates a tag with the given name in a graph space. 
DROP TAG DROP TAG [IF EXISTS] <tag_name> DROP TAG test; Drops a tag with the given name in the current working graph space. ALTER TAG ALTER TAG <tag_name> <alter_definition> [, alter_definition] ...] [ttl_definition [, ttl_definition] ... ] [COMMENT = '<comment>'] ALTER TAG t1 ADD (p3 int, p4 string) Alters the structure of a tag with the given name in a graph space. You can add or drop properties, and change the data type of an existing property. You can also set a TTL (Time-To-Live) on a property, or change its TTL duration. SHOW TAGS SHOW TAGS SHOW TAGS Shows the name of all tags in the current graph space. DESCRIBE TAG DESC[RIBE] TAG <tag_name> DESCRIBE TAG player Returns the information about a tag with the given name in a graph space, such as field names, data type, and so on. DELETE TAG DELETE TAG <tag_name_list> FROM <VID> DELETE TAG test1 FROM \"test\" Deletes a tag with the given name on a specified vertex."},{"location":"2.quick-start/6.cheatsheet-for-ngql/#edge_type_statements","title":"Edge type statements","text":"Statement Syntax Example Description CREATE EDGE CREATE EDGE [IF NOT EXISTS] <edge_type_name> ( <prop_name> <data_type> [NULL | NOT NULL] [DEFAULT <default_value>] [COMMENT '<comment>'] [{, <prop_name> <data_type> [NULL | NOT NULL] [DEFAULT <default_value>] [COMMENT '<comment>']} ...] ) [TTL_DURATION = <ttl_duration>] [TTL_COL = <prop_name>] [COMMENT = '<comment>'] CREATE EDGE e1(p1 string, p2 int, p3 timestamp) TTL_DURATION = 100, TTL_COL = \"p2\" Creates an edge type with the given name in a graph space. DROP EDGE DROP EDGE [IF EXISTS] <edge_type_name> DROP EDGE e1 Drops an edge type with the given name in a graph space. ALTER EDGE ALTER EDGE <edge_type_name> <alter_definition> [, alter_definition] ...] [ttl_definition [, ttl_definition] ... ] [COMMENT = '<comment>'] ALTER EDGE e1 ADD (p3 int, p4 string) Alters the structure of an edge type with the given name in a graph space. 
SHOW EDGES SHOW EDGES SHOW EDGES Shows all edge types in the current graph space. DESCRIBE EDGE DESC[RIBE] EDGE <edge_type_name> DESCRIBE EDGE follow Returns the information about an edge type with the given name in a graph space, such as field names, data type, and so on."},{"location":"2.quick-start/6.cheatsheet-for-ngql/#vertex_statements","title":"Vertex statements","text":"Statement Syntax Example Description INSERT VERTEX INSERT VERTEX [IF NOT EXISTS] [tag_props, [tag_props] ...] VALUES <vid>: ([prop_value_list]) INSERT VERTEX t2 (name, age) VALUES \"13\":(\"n3\", 12), \"14\":(\"n4\", 8) Inserts one or more vertices into a graph space in NebulaGraph. DELETE VERTEX DELETE VERTEX <vid> [, <vid> ...] DELETE VERTEX \"team1\" Deletes vertices and the related incoming and outgoing edges of the vertices. UPDATE VERTEX UPDATE VERTEX ON <tag_name> <vid> SET <update_prop> [WHEN <condition>] [YIELD <output>] UPDATE VERTEX ON player \"player101\" SET age = age + 2 Updates properties on tags of a vertex. UPSERT VERTEX UPSERT VERTEX ON <tag> <vid> SET <update_prop> [WHEN <condition>] [YIELD <output>] UPSERT VERTEX ON player \"player667\" SET age = 31 The UPSERT statement is a combination of UPDATE and INSERT. You can use UPSERT VERTEX to update the properties of a vertex if it exists or insert a new vertex if it does not exist."},{"location":"2.quick-start/6.cheatsheet-for-ngql/#edge_statements","title":"Edge statements","text":"Statement Syntax Example Description INSERT EDGE INSERT EDGE [IF NOT EXISTS] <edge_type> ( <prop_name_list> ) VALUES <src_vid> -> <dst_vid>[@<rank>] : ( <prop_value_list> ) [, <src_vid> -> <dst_vid>[@<rank>] : ( <prop_value_list> ), ...] INSERT EDGE e2 (name, age) VALUES \"11\"->\"13\":(\"n1\", 1) Inserts an edge or multiple edges into a graph space from a source vertex (given by src_vid) to a destination vertex (given by dst_vid) with a specific rank in NebulaGraph. 
DELETE EDGE DELETE EDGE <edge_type> <src_vid> -> <dst_vid>[@<rank>] [, <src_vid> -> <dst_vid>[@<rank>] ...] DELETE EDGE serve \"player100\" -> \"team204\"@0 Deletes one edge or multiple edges at a time. UPDATE EDGE UPDATE EDGE ON <edge_type> <src_vid> -> <dst_vid> [@<rank>] SET <update_prop> [WHEN <condition>] [YIELD <output>] UPDATE EDGE ON serve \"player100\" -> \"team204\"@0 SET start_year = start_year + 1 Updates properties on an edge. UPSERT EDGE UPSERT EDGE ON <edge_type> <src_vid> -> <dst_vid> [@rank] SET <update_prop> [WHEN <condition>] [YIELD <properties>] UPSERT EDGE on serve \"player666\" -> \"team200\"@0 SET end_year = 2021 The UPSERT statement is a combination of UPDATE and INSERT. You can use UPSERT EDGE to update the properties of an edge if it exists or insert a new edge if it does not exist."},{"location":"2.quick-start/6.cheatsheet-for-ngql/#index","title":"Index","text":" "},{"location":"2.quick-start/6.cheatsheet-for-ngql/#subgraph_and_path_statements","title":"Subgraph and path statements","text":"Type Syntax Example Description GET SUBGRAPH GET SUBGRAPH [WITH PROP] [<step_count> {STEP|STEPS}] FROM {<vid>, <vid>...} [{IN | OUT | BOTH} <edge_type>, <edge_type>...] YIELD [VERTICES AS <vertex_alias>] [,EDGES AS <edge_alias>] GET SUBGRAPH 1 STEPS FROM \"player100\" YIELD VERTICES AS nodes, EDGES AS relationships Retrieves information of vertices and edges reachable from the source vertices of the specified edge types and returns information of the subgraph. FIND PATH FIND { SHORTEST | ALL | NOLOOP } PATH [WITH PROP] FROM <vertex_id_list> TO <vertex_id_list> OVER <edge_type_list> [REVERSELY | BIDIRECT] [<WHERE clause>] [UPTO <N> {STEP|STEPS}] YIELD path as <alias> [| ORDER BY $-.path] [| LIMIT <M>] FIND SHORTEST PATH FROM \"player102\" TO \"team204\" OVER * YIELD path as p Finds the paths between the selected source vertices and destination vertices. 
A returned path is like (<vertex_id>)-[:<edge_type_name>@<rank>]->(<vertex_id>)."},{"location":"2.quick-start/6.cheatsheet-for-ngql/#query_tuning_statements","title":"Query tuning statements","text":"Type Syntax Example Description EXPLAIN EXPLAIN [format=\"row\" | \"dot\"] <your_nGQL_statement> EXPLAIN format=\"row\" SHOW TAGS EXPLAIN format=\"dot\" SHOW TAGS Helps output the execution plan of an nGQL statement without executing the statement. PROFILE PROFILE [format=\"row\" | \"dot\"] <your_nGQL_statement> PROFILE format=\"row\" SHOW TAGS PROFILE format=\"dot\" SHOW TAGS Executes the statement, then outputs the execution plan as well as the execution profile."},{"location":"2.quick-start/6.cheatsheet-for-ngql/#operation_and_maintenance_statements","title":"Operation and maintenance statements","text":" "},{"location":"20.appendix/0.FAQ/","title":"FAQ","text":"

This topic lists the frequently asked questions for using NebulaGraph 3.8.0. You can use the search box in the help center or the search function of the browser to match the questions you are looking for.

If the solutions described in this topic cannot solve your problems, ask for help on the NebulaGraph forum or submit an issue on GitHub.

"},{"location":"20.appendix/0.FAQ/#about_manual_updates","title":"About manual updates","text":""},{"location":"20.appendix/0.FAQ/#why_is_the_behavior_in_the_manual_not_consistent_with_the_system","title":"\"Why is the behavior in the manual not consistent with the system?\"","text":"

NebulaGraph is still under development. Its behavior changes from time to time. Users can submit an issue to inform the team if the manual and the system are not consistent.

Note

If you find some errors in this topic:

  1. Click the pencil button at the top right side of this page.
  2. Use markdown to fix this error. Then click \"Commit changes\" at the bottom, which will start a GitHub pull request.
  3. Sign the CLA. This pull request will be merged after the acceptance of at least two reviewers.
"},{"location":"20.appendix/0.FAQ/#about_legacy_version_compatibility","title":"About legacy version compatibility","text":"

Compatibility

NebulaGraph 3.8.0 is not compatible with NebulaGraph 1.x or 2.0-RC in either data formats or RPC protocols, and vice versa. The service process may quit if a lower version client is used to connect to a higher version server.

To upgrade data formats, see Upgrade NebulaGraph to the current version. Users must upgrade all clients.

"},{"location":"20.appendix/0.FAQ/#about_execution_errors","title":"About execution errors","text":""},{"location":"20.appendix/0.FAQ/#how_to_resolve_the_error_-1005graphmemoryexceeded_-2600","title":"\"How to resolve the error -1005:GraphMemoryExceeded: (-2600)?\"","text":"

This error is issued by the Memory Tracker when it observes that memory usage has exceeded a set threshold. This mechanism helps prevent service processes from being terminated by the system's OOM (Out of Memory) killer. Steps to resolve:

  1. Check memory usage: First, you need to check the memory usage during the execution of the command. If the memory usage is indeed high, then this error might be expected.

  2. Check the configuration of the Memory Tracker: If the memory usage is not high, check the relevant configurations of the Memory Tracker. These include memory_tracker_untracked_reserved_memory_mb (untracked reserved memory in MB), memory_tracker_limit_ratio (memory limit ratio), and memory_purge_enabled (whether memory purge is enabled). For the configuration of the Memory Tracker, see memory tracker configuration.

  3. Optimize configurations: Adjust these configurations according to the actual situation. For example, if the available memory limit is too low, you can increase the value of memory_tracker_limit_ratio.
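
As a sketch, the three flags named above live in the Graph (or Storage) service configuration file; the values below are illustrative defaults, not recommendations, so adjust them to your actual memory budget:

```
--memory_tracker_untracked_reserved_memory_mb=50
--memory_tracker_limit_ratio=0.8
--memory_purge_enabled=true
```

Restart the corresponding service after changing the file so the new limits take effect.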

"},{"location":"20.appendix/0.FAQ/#how_to_resolve_the_error_semanticerror_missing_yield_clause","title":"\"How to resolve the error SemanticError: Missing yield clause.?\"","text":"

Starting with NebulaGraph 3.0.0, the statements LOOKUP, GO, and FETCH must output results with the YIELD clause. For more information, see YIELD.
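
A before/after sketch with GO, assuming the follow edge type from the example dataset used elsewhere in this FAQ:

```ngql
# Before 3.0.0, GO could omit YIELD (now rejected with this error):
# GO FROM "player100" OVER follow;

# From 3.0.0 on, state the output columns explicitly:
GO FROM "player100" OVER follow YIELD dst(edge) AS destination;
```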

"},{"location":"20.appendix/0.FAQ/#how_to_resolve_the_error_host_not_enough","title":"\"How to resolve the error Host not enough!?\"","text":"

From NebulaGraph version 3.0.0, the Storage services added in the configuration files CANNOT be read or written directly. The configuration files only register the Storage services into the Meta services. You must run the ADD HOSTS command to read and write data on Storage servers. For more information, see Manage Storage hosts.
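
A minimal sketch of the registration step (the addresses are placeholders for your own Storage hosts):

```ngql
# Register the Storage services listed in the configuration files:
ADD HOSTS 192.168.10.100:9779, 192.168.10.101:9779;
# Verify that the hosts show up as ONLINE:
SHOW HOSTS;
```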

"},{"location":"20.appendix/0.FAQ/#how_to_resolve_the_error_to_get_the_property_of_the_vertex_in_vage_should_use_the_format_vartagprop","title":"\"How to resolve the error To get the property of the vertex in 'v.age', should use the format 'var.tag.prop'?\"","text":"

From NebulaGraph version 3.0.0, patterns support matching multiple tags at the same time, so you need to specify a tag name when querying properties. The original statement RETURN variable_name.property_name is changed to RETURN variable_name.<tag_name>.property_name.
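
A before/after sketch, assuming the player tag from the example dataset:

```ngql
# Before 3.0.0 (now rejected with this error):
# MATCH (v:player) RETURN v.name;

# From 3.0.0 on, the tag name must be specified:
MATCH (v:player) RETURN v.player.name;
```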

"},{"location":"20.appendix/0.FAQ/#how_to_resolve_used_memory_hits_the_high_watermark0800000_of_total_system_memory","title":"\"How to resolve Used memory hits the high watermark(0.800000) of total system memory.?\"","text":"

The error may be caused if the system memory usage is higher than the threshold specified by system_memory_high_watermark_ratio, which defaults to 0.8. When the threshold is exceeded, an alarm is triggered and NebulaGraph stops processing queries.

Possible solutions are as follows:

However, the system_memory_high_watermark_ratio parameter is deprecated. It is recommended that you use the Memory Tracker feature instead to limit the memory usage of Graph and Storage services. For more information, see Memory Tracker for Graph service and Memory Tracker for Storage service.

"},{"location":"20.appendix/0.FAQ/#how_to_resolve_the_error_storage_error_e_rpc_failure","title":"\"How to resolve the error Storage Error E_RPC_FAILURE?\"","text":"

The reason for this error is usually that the storaged process returns too much data to the graphd process. Possible solutions are as follows:

"},{"location":"20.appendix/0.FAQ/#how_to_resolve_the_error_the_leader_has_changed_try_again_later","title":"\"How to resolve the error The leader has changed. Try again later?\"","text":"

It is a known issue. Just retry 1 to N times, where N is the partition number. The reason is that the meta client needs some heartbeats, or error responses, before it learns the new leader information.

If this error occurs when logging in to NebulaGraph, you can consider using df -h to view the disk space and check whether the local disk is full.

"},{"location":"20.appendix/0.FAQ/#how_to_resolve_schema_not_exist_xxx","title":"\"How to resolve Schema not exist: xxx?\"","text":"

If the system returns Schema not exist when querying, make sure that:

"},{"location":"20.appendix/0.FAQ/#unable_to_download_snapshot_packages_when_compiling_exchange_connectors_or_algorithm","title":"Unable to download SNAPSHOT packages when compiling Exchange, Connectors, or Algorithm","text":"

Problem description: The system reports Could not find artifact com.vesoft:client:jar:xxx-SNAPSHOT when compiling.

Cause: There is no local Maven repository for storing or downloading SNAPSHOT packages. The default central repository in Maven only stores official releases, not development versions (SNAPSHOTs).

Solution: Add the following configuration in the profiles scope of Maven's settings.xml file:

  <profile>\n     <activation>\n        <activeByDefault>true</activeByDefault>\n     </activation>\n     <repositories>\n        <repository>\n            <id>snapshots</id>\n            <url>https://oss.sonatype.org/content/repositories/snapshots/</url>\n            <snapshots>\n               <enabled>true</enabled>\n            </snapshots>\n      </repository>\n     </repositories>\n  </profile>\n
"},{"location":"20.appendix/0.FAQ/#how_to_resolve_error_-1004_syntaxerror_syntax_error_near","title":"\"How to resolve [ERROR (-1004)]: SyntaxError: syntax error near?\"","text":"

In most cases, a query statement requires a YIELD or a RETURN. Check your query statement to see if YIELD or RETURN is provided.

"},{"location":"20.appendix/0.FAQ/#how_to_resolve_the_error_cant_solve_the_start_vids_from_the_sentence","title":"\"How to resolve the error can\u2019t solve the start vids from the sentence?\"","text":"

The graphd process requires start vids to begin a graph traversal. The start vids can be specified by the user. For example:

> GO FROM ${vids} ...\n> MATCH (src) WHERE id(src) == ${vids}\n# The \"start vids\" are explicitly given by ${vids}.\n

It can also be found from a property index. For example:

# CREATE TAG INDEX IF NOT EXISTS i_player ON player(name(20));\n# REBUILD TAG INDEX i_player;\n\n> LOOKUP ON player WHERE player.name == \"abc\" | ... YIELD ...\n> MATCH (src) WHERE src.name == \"abc\" ...\n# The \"start vids\" are found from the property index \"name\".\n

Otherwise, an error like can\u2019t solve the start vids from the sentence will be returned.

"},{"location":"20.appendix/0.FAQ/#how_to_resolve_the_error_wrong_vertex_id_type_1001","title":"\"How to resolve the error Wrong vertex id type: 1001?\"","text":"

Check whether the VID type is INT64 or FIXED_STRING(N), as set by CREATE SPACE. For more information, see create space.
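
To inspect the setting, you can describe the space (a sketch against the basketballplayer example space):

```ngql
DESCRIBE SPACE basketballplayer;
# The vid_type column shows either INT64 or FIXED_STRING(N).
```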

"},{"location":"20.appendix/0.FAQ/#how_to_resolve_the_error_the_vid_must_be_a_64-bit_integer_or_a_string_fitting_space_vertex_id_length_limit","title":"\"How to resolve the error The VID must be a 64-bit integer or a string fitting space vertex id length limit.?\"","text":"

Check whether the length of the VID exceeds the limitation. For more information, see create space.

"},{"location":"20.appendix/0.FAQ/#how_to_resolve_the_error_edge_conflict_or_vertex_conflict","title":"\"How to resolve the error edge conflict or vertex conflict?\"","text":"

NebulaGraph may return such errors when the Storage service receives multiple requests to insert or update the same vertex or edge within milliseconds. Try the failed requests again later.

"},{"location":"20.appendix/0.FAQ/#how_to_resolve_the_error_rpc_failure_in_metaclient_connection_refused","title":"\"How to resolve the error RPC failure in MetaClient: Connection refused?\"","text":"

The reason for this error is usually that the metad service status is abnormal, or the network between the machines where the metad and graphd services are located is disconnected. Possible solutions are as follows:

"},{"location":"20.appendix/0.FAQ/#how_to_resolve_the_error_storageclientbaseinl214_request_to_xxxx9779_failed_n6apache6thrift9transport19ttransportexceptione_timed_out_in_nebula-graphinfo","title":"\"How to resolve the error StorageClientBase.inl:214] Request to \"x.x.x.x\":9779 failed: N6apache6thrift9transport19TTransportExceptionE: Timed Out in nebula-graph.INFO?\"","text":"

The reason for this error may be that the amount of data to be queried is too large, and the storaged process has timed out. Possible solutions are as follows:

"},{"location":"20.appendix/0.FAQ/#how_to_resolve_the_error_metaclientcpp65_heartbeat_failed_statuswrong_cluster_in_nebula-storagedinfo_or_hbprocessorcpp54_reject_wrong_cluster_host_xxxx9771_in_nebula-metadinfo","title":"\"How to resolve the error MetaClient.cpp:65] Heartbeat failed, status:Wrong cluster! in nebula-storaged.INFO, or HBProcessor.cpp:54] Reject wrong cluster host \"x.x.x.x\":9771! in nebula-metad.INFO?\"","text":"

The reason for this error may be that the user has modified the IP or the port information of the metad process, or the storage service has joined other clusters before. Possible solutions are as follows:

Delete the cluster.id file in the installation directory where the storage machine is deployed (the default installation directory is /usr/local/nebula), and restart the storaged service.

"},{"location":"20.appendix/0.FAQ/#how_to_resolve_the_error_storage_error_more_than_one_request_trying_to_addupdatedelete_one_edgevertex_at_he_same_time","title":"\"How to resolve the error Storage Error: More than one request trying to add/update/delete one edge/vertex at he same time.?\"","text":"

The reason for this error is that the current NebulaGraph version does not support concurrent requests to the same vertex or edge at the same time. To solve this error, re-execute your commands.

"},{"location":"20.appendix/0.FAQ/#about_design_and_functions","title":"About design and functions","text":""},{"location":"20.appendix/0.FAQ/#how_is_the_time_spent_value_at_the_end_of_each_return_message_calculated","title":"\"How is the time spent value at the end of each return message calculated?\"","text":"

Take the returned message of SHOW SPACES as an example:

nebula> SHOW SPACES;\n+--------------------+\n| Name               |\n+--------------------+\n| \"basketballplayer\" |\n+--------------------+\nGot 1 rows (time spent 1235/1934 us)\n
"},{"location":"20.appendix/0.FAQ/#why_does_the_port_number_of_the_nebula-storaged_process_keep_showing_red_after_connecting_to_nebulagraph","title":"\"Why does the port number of the nebula-storaged process keep showing red after connecting to NebulaGraph?\"","text":"

This is because the nebula-storaged process waits for nebula-metad to add the current Storage service during the startup process. The Storage service starts working only after it receives the ready signal. Starting from NebulaGraph 3.0.0, the Meta service cannot directly read or write data in the Storage service that you add in the configuration file. The configuration file only registers the Storage service to the Meta service. You must run the ADD HOSTS command to enable the Meta to read and write data in the Storage service. For more information, see Manage Storage hosts.

"},{"location":"20.appendix/0.FAQ/#why_is_there_no_line_separating_each_row_in_the_returned_result_of_nebulagraph_260","title":"\"Why is there no line separating each row in the returned result of NebulaGraph 2.6.0?\"","text":"

This is caused by the release of NebulaGraph Console 2.6.0, not the change of NebulaGraph core. And it will not affect the content of the returned data itself.

"},{"location":"20.appendix/0.FAQ/#about_dangling_edges","title":"About dangling edges","text":"

A dangling edge is an edge that connects to only one existing vertex; the vertex on the other end of the edge does not exist.

Dangling edges may appear in NebulaGraph 3.8.0 by design, and there is no openCypher MERGE statement. The guarantee for dangling edges depends entirely on the application level. For more information, see INSERT VERTEX, DELETE VERTEX, INSERT EDGE, DELETE EDGE.

"},{"location":"20.appendix/0.FAQ/#can_i_set_replica_factor_as_an_even_number_in_create_space_statements_eg_replica_factor_2","title":"\"Can I set replica_factor as an even number in CREATE SPACE statements, e.g., replica_factor = 2?\"","text":"

NO.

The Storage service guarantees its availability based on the Raft consensus protocol. The number of failed replicas must not exceed half of the total replica number.

When the number of machines is 1, replica_factor can only be set to 1.

When there are enough machines and replica_factor=2, the Storage service fails as soon as one replica fails. With replica_factor=3 or replica_factor=4, the Storage service fails if more than one replica fails. To prevent unnecessary waste of resources, we recommend that you set an odd replica number.

We suggest that you set replica_factor=3 for a production environment and replica_factor=1 for a test environment. Do not use an even number.
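
For example, a production space might be created with three replicas (the space name and partition number here are illustrative; the syntax follows the CREATE SPACE reference):

```ngql
CREATE SPACE IF NOT EXISTS my_space (partition_num=15, replica_factor=3, vid_type=FIXED_STRING(30));
```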

"},{"location":"20.appendix/0.FAQ/#is_stopping_or_killing_slow_queries_supported","title":"\"Is stopping or killing slow queries supported?\"","text":"

Yes. For more information, see Kill query.

"},{"location":"20.appendix/0.FAQ/#why_are_the_query_results_different_when_using_go_and_match_to_execute_the_same_semantic_query","title":"\"Why are the query results different when using GO and MATCH to execute the same semantic query?\"","text":"

The possible reasons are listed as follows.

The example is as follows.

All queries that start from A with 5 hops will end at C (A->B->C->D->E->C). If it is 6 hops, the GO statement will end at D (A->B->C->D->E->C->D), because the edge C->D can be visited repeatedly. However, the MATCH statement returns empty, because edges cannot be visited repeatedly.

Therefore, using GO and MATCH to execute the same semantic query may cause different query results.

For more information, see Wikipedia.

"},{"location":"20.appendix/0.FAQ/#how_to_count_the_verticesedges_number_of_each_tagedge_type","title":"\"How to count the vertices/edges number of each tag/edge type?\"","text":"

See show-stats.
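
As a sketch of the show-stats workflow: statistics must first be collected by a job, and a working graph space must be selected before submitting it:

```ngql
USE basketballplayer;
SUBMIT JOB STATS;
# After the job finishes:
SHOW STATS;
```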

"},{"location":"20.appendix/0.FAQ/#how_to_get_all_the_verticesedge_of_each_tagedge_type","title":"\"How to get all the vertices/edge of each tag/edge type?\"","text":"
  1. Create and rebuild the index.

    > CREATE TAG INDEX IF NOT EXISTS i_player ON player();\n> REBUILD TAG INDEX i_player;\n
  2. Use LOOKUP or MATCH. For example:

    > LOOKUP ON player;\n> MATCH (n:player) RETURN n;\n

For more information, see INDEX, LOOKUP, and MATCH.

"},{"location":"20.appendix/0.FAQ/#can_non-english_characters_be_used_as_identifiers_such_as_the_names_of_graph_spaces_tags_edge_types_properties_and_indexes","title":"\"Can non-English characters be used as identifiers, such as the names of graph spaces, tags, edge types, properties, and indexes?\"","text":"

Yes, for more information, see Keywords and reserved words.

"},{"location":"20.appendix/0.FAQ/#how_to_get_the_out-degreethe_in-degree_of_a_given_vertex","title":"\"How to get the out-degree/the in-degree of a given vertex?\"","text":"

The out-degree of a vertex refers to the number of edges starting from that vertex, while the in-degree refers to the number of edges pointing to that vertex.

nebula > MATCH (s)-[e]->() WHERE id(s) == \"given\" RETURN count(e); #Out-degree\nnebula > MATCH (s)<-[e]-() WHERE id(s) == \"given\" RETURN count(e); #In-degree\n

This is a very slow operation to get the out/in degree since no acceleration can be applied (no indices or caches). It may also run out of memory when hitting a super node.

"},{"location":"20.appendix/0.FAQ/#how_to_quickly_get_the_out-degree_and_in-degree_of_all_vertices","title":"\"How to quickly get the out-degree and in-degree of all vertices?\"","text":"

There is no such command.

You can use NebulaGraph Algorithm.

"},{"location":"20.appendix/0.FAQ/#about_operation_and_maintenance","title":"About operation and maintenance","text":""},{"location":"20.appendix/0.FAQ/#the_runtime_log_files_are_too_large_how_to_recycle_the_logs","title":"\"The runtime log files are too large. How to recycle the logs?\"","text":"

NebulaGraph uses glog for log printing, which does not support log recycling. You can manage runtime logs by using cron jobs or the log management tool logrotate. For operational details, see Log recycling.

"},{"location":"20.appendix/0.FAQ/#how_to_check_the_nebulagraph_version","title":"\"How to check the NebulaGraph version?\"","text":"

If the service is running: run the command SHOW HOSTS META in nebula-console. See SHOW HOSTS.

If the service is not running:

Different installation methods make the method of checking the version different. The instructions are as follows:

If the service is not running, run the command ./<binary_name> --version to get the version and the Git commit IDs of the NebulaGraph binary files. For example:

$ ./nebula-graphd --version\n
"},{"location":"20.appendix/0.FAQ/#how_to_scale_my_cluster_updown_or_outin","title":"\"How to scale my cluster up/down or out/in?\"","text":"

Warning

The cluster scaling function has not been officially released in the community edition. The operations involving SUBMIT JOB BALANCE DATA REMOVE and SUBMIT JOB BALANCE DATA are experimental features in the community edition and the functionality is not stable. Before using it in the community edition, make sure to back up your data first and set enable_experimental_feature and enable_data_balance to true in the Graph configuration file.

"},{"location":"20.appendix/0.FAQ/#increase_or_decrease_the_number_of_meta_graph_or_storage_nodes","title":"Increase or decrease the number of Meta, Graph, or Storage nodes","text":""},{"location":"20.appendix/0.FAQ/#add_or_remove_disks_in_the_storage_nodes","title":"Add or remove disks in the Storage nodes","text":"

Currently, Storage cannot dynamically recognize newly added disks. You can add or remove disks in the Storage nodes of the distributed cluster by following these steps:

  1. Execute SUBMIT JOB BALANCE DATA REMOVE <ip:port> to migrate data in the Storage node with the disk to be added or removed to other Storage nodes.

    Caution

  2. Execute DROP HOSTS <ip:port> to remove the Storage node with the disk to be added or removed.

  3. In the configuration file of all Storage nodes, configure the path of the new disk to be added or removed through --data_path, see Storage configuration file for details.

  4. Execute ADD HOSTS <ip:port> to re-add the Storage node with the disk to be added or removed.
  5. As needed, execute SUBMIT JOB BALANCE DATA to evenly distribute the shards of the current space to all Storage nodes and execute SUBMIT JOB BALANCE LEADER command to balance the leaders in all spaces. Before running the command, select a space.
"},{"location":"20.appendix/0.FAQ/#after_changing_the_name_of_the_host_the_old_one_keeps_displaying_offline_what_should_i_do","title":"\"After changing the name of the host, the old one keeps displaying OFFLINE. What should I do?\"","text":"

Hosts with the status of OFFLINE will be automatically deleted after one day.
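
If you prefer not to wait, the stale record can usually be removed by hand with DROP HOSTS (a sketch; the address is a placeholder, and the host must no longer hold any partitions):

```ngql
DROP HOSTS 192.168.10.100:9779;
```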

"},{"location":"20.appendix/0.FAQ/#how_do_i_view_the_dmp_file","title":"\"How do I view the dmp file?\"","text":"

The dmp file is an error report file detailing the exit of the process and can be viewed with the gdb utility. The core dump file is saved in the directory of the startup binary (by default it is /usr/local/nebula) and is generated automatically when the NebulaGraph service crashes.

  1. Check the Core file process name, pid is usually a numeric value.
    $ file core.<pid>\n
  2. Use gdb to debug.
    $ gdb <process.name> core.<pid>\n
  3. View the contents of the file.
    $(gdb) bt\n

For example:

$ file core.1316027\ncore.1316027: ELF 64-bit LSB core file, x86-64, version 1 (SYSV), SVR4-style, from '/home/workspace/fork/nebula-debug/bin/nebula-metad --flagfile /home/k', real uid: 1008, effective uid: 1008, real gid: 1008, effective gid: 1008, execfn: '/home/workspace/fork/nebula-debug/bin/nebula-metad', platform: 'x86_64'\n\n$ gdb /home/workspace/fork/nebula-debug/bin/nebula-metad core.1316027\n\n$(gdb) bt\n#0  0x00007f9de58fecf5 in __memcpy_ssse3_back () from /lib64/libc.so.6\n#1  0x0000000000eb2299 in void std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::_M_construct<char*>(char*, char*, std::forward_iterator_tag) ()\n#2  0x0000000000ef71a7 in nebula::meta::cpp2::QueryDesc::QueryDesc(nebula::meta::cpp2::QueryDesc const&) ()\n...\n

If you are not clear about the information that dmp prints out, you can post the printout with the OS version, hardware configuration, error logs before and after the Core file was created and actions that may have caused the error on the NebulaGraph forum.

"},{"location":"20.appendix/0.FAQ/#how_can_i_set_the_nebulagraph_service_to_start_automatically_on_boot_via_systemctl","title":"How can I set the NebulaGraph service to start automatically on boot via systemctl?","text":"
  1. Execute systemctl enable to start the metad, graphd and storaged services.

    [root]# systemctl enable nebula-metad.service\nCreated symlink from /etc/systemd/system/multi-user.target.wants/nebula-metad.service to /usr/lib/systemd/system/nebula-metad.service.\n[root]# systemctl enable nebula-graphd.service\nCreated symlink from /etc/systemd/system/multi-user.target.wants/nebula-graphd.service to /usr/lib/systemd/system/nebula-graphd.service.\n[root]# systemctl enable nebula-storaged.service\nCreated symlink from /etc/systemd/system/multi-user.target.wants/nebula-storaged.service to /usr/lib/systemd/system/nebula-storaged.service.\n
  2. Configure the service files for metad, graphd and storaged to set the service to pull up automatically.

    Caution

    The following points need to be noted when configuring the service file. - The paths of the PIDFile, ExecStart, ExecReload and ExecStop parameters need to be the same as those on the server. - RestartSec is the length of time (in seconds) to wait before restarting, which can be modified according to the actual situation. - (Optional) StartLimitInterval limits restart attempts; the default stops restarting the service if it fails more than 5 times within 10 seconds, and setting it to 0 means unlimited restarts. - (Optional) LimitNOFILE is the maximum number of open files for the service, the default is 1024 and can be changed according to the actual situation.

    Configure the service file for the metad service.

    $ vi /usr/lib/systemd/system/nebula-metad.service\n\n[Unit]\nDescription=Nebula Graph Metad Service\nAfter=network.target\n\n[Service]\nType=forking\nRestart=always\nRestartSec=15s\nPIDFile=/usr/local/nebula/pids/nebula-metad.pid\nExecStart=/usr/local/nebula/scripts/nebula.service start metad\nExecReload=/usr/local/nebula/scripts/nebula.service restart metad\nExecStop=/usr/local/nebula/scripts/nebula.service stop metad\nPrivateTmp=true\nStartLimitInterval=0\nLimitNOFILE=1024\n\n[Install]\nWantedBy=multi-user.target\n

    Configure the service file for the graphd service.

    $ vi /usr/lib/systemd/system/nebula-graphd.service\n[Unit]\nDescription=Nebula Graph Graphd Service\nAfter=network.target\n\n[Service]\nType=forking\nRestart=always\nRestartSec=15s\nPIDFile=/usr/local/nebula/pids/nebula-graphd.pid\nExecStart=/usr/local/nebula/scripts/nebula.service start graphd\nExecReload=/usr/local/nebula/scripts/nebula.service restart graphd\nExecStop=/usr/local/nebula/scripts/nebula.service stop graphd\nPrivateTmp=true\nStartLimitInterval=0\nLimitNOFILE=1024\n\n[Install]\nWantedBy=multi-user.target\n
    Configure the service file for the storaged service.

    $ vi /usr/lib/systemd/system/nebula-storaged.service\n[Unit]\nDescription=Nebula Graph Storaged Service\nAfter=network.target\n\n[Service]\nType=forking\nRestart=always\nRestartSec=15s\nPIDFile=/usr/local/nebula/pids/nebula-storaged.pid\nExecStart=/usr/local/nebula/scripts/nebula.service start storaged\nExecReload=/usr/local/nebula/scripts/nebula.service restart storaged\nExecStop=/usr/local/nebula/scripts/nebula.service stop storaged\nPrivateTmp=true\nStartLimitInterval=0\nLimitNOFILE=1024\n\n[Install]\nWantedBy=multi-user.target\n
  3. Reload the configuration file.

    [root]# sudo systemctl daemon-reload\n
  4. Restart the service.

    $ systemctl restart nebula-metad.service\n$ systemctl restart nebula-graphd.service\n$ systemctl restart nebula-storaged.service\n
"},{"location":"20.appendix/0.FAQ/#about_connections","title":"About connections","text":""},{"location":"20.appendix/0.FAQ/#which_ports_should_be_opened_on_the_firewalls","title":"\"Which ports should be opened on the firewalls?\"","text":"

If you have not modified the predefined ports in the Configurations, open the following ports for the NebulaGraph services:

Service Port Meta 9559, 9560, 19559 Graph 9669, 19669 Storage 9777 ~ 9780, 19779

If you have customized the configuration files and changed the predefined ports, find the port numbers in your configuration files and open them on the firewalls.

For more port information, see Port Guide for Company Products.
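As a sketch, the default ports above can also be collected programmatically, for example to feed them into firewall tooling. The numbers mirror the table; the helper itself is illustrative and not part of NebulaGraph:

```python
# Default NebulaGraph service ports, as listed in the table above.
# Adjust these if you have customized the configuration files.
DEFAULT_PORTS = {
    "meta": [9559, 9560, 19559],
    "graph": [9669, 19669],
    "storage": list(range(9777, 9781)) + [19779],  # 9777~9780 plus the HTTP port
}

def all_ports():
    """Return a sorted, de-duplicated list of every default port."""
    return sorted({p for ports in DEFAULT_PORTS.values() for p in ports})

print(all_ports())
# [9559, 9560, 9669, 9777, 9778, 9779, 9780, 19559, 19669, 19779]
```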

"},{"location":"20.appendix/0.FAQ/#how_to_test_whether_a_port_is_open_or_closed","title":"\"How to test whether a port is open or closed?\"","text":"

You can use telnet as follows to check for port status.

telnet <ip> <port>\n

Note

If you cannot use the telnet command, check if telnet is installed or enabled on your host.

For example:

// If the port is open:\n$ telnet 192.168.1.10 9669\nTrying 192.168.1.10...\nConnected to 192.168.1.10.\nEscape character is '^]'.\n\n// If the port is closed or blocked:\n$ telnet 192.168.1.10 9777\nTrying 192.168.1.10...\ntelnet: connect to address 192.168.1.10: Connection refused\n
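If telnet is not available, the same open/closed check can be done with Python's standard socket library. This is a minimal sketch; the address in the trailing comment is the hypothetical one from the telnet example above:

```python
import socket

def port_is_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds, else False."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, unreachable, ...
        return False

# e.g. port_is_open("192.168.1.10", 9669) -> True if the Graph port is open
```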
"},{"location":"20.appendix/6.eco-tool-version/","title":"Ecosystem tools overview","text":""},{"location":"20.appendix/6.eco-tool-version/#nebulagraph_studio","title":"NebulaGraph Studio","text":"

NebulaGraph Studio (Studio for short) is a graph database visualization tool that can be accessed through the Web. Used together with NebulaGraph DBMS, it provides one-stop services such as graph building, data import, nGQL query composition, and graph exploration. For details, see What is NebulaGraph Studio.

Note

The release of Studio is independent of the NebulaGraph core, and its version numbering scheme also differs from that of the core.

NebulaGraph version Studio version v3.8.0 v3.10.0"},{"location":"20.appendix/6.eco-tool-version/#nebulagraph_dashboard_community_edition","title":"NebulaGraph Dashboard Community Edition","text":"

NebulaGraph Dashboard Community Edition (Dashboard for short) is a visualization tool for monitoring the status of machines and services in the NebulaGraph cluster. For details, see What is NebulaGraph Dashboard.

NebulaGraph version Dashboard Community version v3.8.0 v3.4.0"},{"location":"20.appendix/6.eco-tool-version/#nebulagraph_exchange","title":"NebulaGraph Exchange","text":"

NebulaGraph Exchange (Exchange for short) is an Apache Spark™ application for batch migration of cluster data to NebulaGraph in a distributed environment. It supports migrating both batch data and streaming data in a variety of formats. For details, see What is NebulaGraph Exchange.

NebulaGraph version Exchange Community version v3.8.0 v3.8.0"},{"location":"20.appendix/6.eco-tool-version/#nebulagraph_operator","title":"NebulaGraph Operator","text":"

NebulaGraph Operator (Operator for short) is a tool to automate the deployment, operation, and maintenance of NebulaGraph clusters on Kubernetes. Building upon the excellent scalability mechanism of Kubernetes, NebulaGraph introduced its operation and maintenance knowledge into the Kubernetes system, which makes NebulaGraph a real cloud-native graph database. For more information, see What is NebulaGraph Operator.

NebulaGraph version Operator version v3.8.0 v1.8.0"},{"location":"20.appendix/6.eco-tool-version/#nebulagraph_importer","title":"NebulaGraph Importer","text":"

NebulaGraph Importer (Importer for short) is a CSV file import tool for NebulaGraph. The Importer can read the local CSV file, and then import the data into the NebulaGraph database. For details, see What is NebulaGraph Importer.

NebulaGraph version Importer version v3.8.0 v4.1.0"},{"location":"20.appendix/6.eco-tool-version/#nebulagraph_spark_connector","title":"NebulaGraph Spark Connector","text":"

NebulaGraph Spark Connector is a Spark connector that provides the ability to read and write NebulaGraph data in the Spark standard format. NebulaGraph Spark Connector consists of two parts, Reader and Writer. For details, see What is NebulaGraph Spark Connector.

NebulaGraph version Spark Connector version v3.8.0 v3.8.0"},{"location":"20.appendix/6.eco-tool-version/#nebulagraph_flink_connector","title":"NebulaGraph Flink Connector","text":"

NebulaGraph Flink Connector is a connector that helps Flink users quickly access NebulaGraph. It supports reading data from the NebulaGraph database or writing data read from other external data sources to the NebulaGraph database. For details, see What is NebulaGraph Flink Connector.

NebulaGraph version Flink Connector version v3.8.0 v3.8.0"},{"location":"20.appendix/6.eco-tool-version/#nebulagraph_algorithm","title":"NebulaGraph Algorithm","text":"

NebulaGraph Algorithm (Algorithm for short) is a Spark application based on GraphX. It provides a set of graph algorithms for analyzing data in the NebulaGraph database: you can either submit a Spark task to perform graph computing, or call the algorithms under the lib repository programmatically to run graph computing on DataFrames. For details, see What is NebulaGraph Algorithm.

NebulaGraph version Algorithm version v3.8.0 v3.2.0"},{"location":"20.appendix/6.eco-tool-version/#nebulagraph_console","title":"NebulaGraph Console","text":"

NebulaGraph Console is the native CLI client of NebulaGraph. For how to use it, see NebulaGraph Console.

NebulaGraph version Console version v3.8.0 v3.8.0"},{"location":"20.appendix/6.eco-tool-version/#nebulagraph_docker_compose","title":"NebulaGraph Docker Compose","text":"

Docker Compose can quickly deploy NebulaGraph clusters. For how to use it, please refer to Docker Compose Deployment NebulaGraph.

NebulaGraph version Docker Compose version v3.8.0 v3.8.0"},{"location":"20.appendix/6.eco-tool-version/#backup_restore","title":"Backup & Restore","text":"

Backup&Restore (BR for short) is a command line interface (CLI) tool that helps back up graph space data in NebulaGraph and restore data from backup files.

NebulaGraph version BR version v3.8.0 v3.6.0"},{"location":"20.appendix/6.eco-tool-version/#nebulagraph_bench","title":"NebulaGraph Bench","text":"

NebulaGraph Bench is used to test the baseline performance data of NebulaGraph. It uses the standard data set of LDBC.

NebulaGraph version Bench version v3.8.0 v1.2.0"},{"location":"20.appendix/6.eco-tool-version/#api_and_sdk","title":"API and SDK","text":"

Compatibility

Select the latest X.Y.* version that matches the X.Y of the core version.

NebulaGraph version Language v3.8.0 C++ v3.8.0 Go v3.8.0 Python v3.8.0 Java v3.8.0 HTTP"},{"location":"20.appendix/6.eco-tool-version/#community_contributed_tools","title":"Community contributed tools","text":"

The following are useful utilities and tools contributed and maintained by community users.

"},{"location":"20.appendix/error-code/","title":"Error code","text":"

NebulaGraph returns an error code when an error occurs. This topic describes the details of the error code returned.
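The codes listed below fall into numeric ranges (-1xxx for Graph, -2xxx for Meta, and so on). Purely as an illustration, a rough mapping from a code to its originating component might look like the following sketch; the grouping is inferred from the table and is not an official API:

```python
# Rough component buckets inferred from the error-code table; the mapping
# is illustrative and may not hold for every individual code.
BUCKETS = {
    0: "common",              # -1 .. -999: connection / not-found errors
    1: "Graph service",       # -1001 .. -1999
    2: "Meta service",        # -2001 .. -2999
    3: "Storage and Raft",    # -3001 .. -3999
    4: "Drainer",             # -4001 .. -4999
    5: "Cache",               # -5001 .. -5999
    7: "License / machine limit",
    8: "Unknown error",       # -8000
}

def error_category(code):
    """Map a NebulaGraph error code to its rough component bucket."""
    return BUCKETS.get(abs(code) // 1000, "unrecognized")
```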

Note

Error name Error Code Description E_DISCONNECTED -1 Lost connection E_FAIL_TO_CONNECT -2 Unable to establish connection E_RPC_FAILURE -3 RPC failure E_LEADER_CHANGED -4 Raft leader has been changed E_SPACE_NOT_FOUND -5 Graph space does not exist E_TAG_NOT_FOUND -6 Tag does not exist E_EDGE_NOT_FOUND -7 Edge type does not exist E_INDEX_NOT_FOUND -8 Index does not exist E_EDGE_PROP_NOT_FOUND -9 Edge type property does not exist E_TAG_PROP_NOT_FOUND -10 Tag property does not exist E_ROLE_NOT_FOUND -11 The current role does not exist E_CONFIG_NOT_FOUND -12 The current configuration does not exist E_MACHINE_NOT_FOUND -13 The current host does not exist E_LISTENER_NOT_FOUND -15 Listener does not exist E_PART_NOT_FOUND -16 The current partition does not exist E_KEY_NOT_FOUND -17 Key does not exist E_USER_NOT_FOUND -18 User does not exist E_STATS_NOT_FOUND -19 Statistics do not exist E_SERVICE_NOT_FOUND -20 No current service found E_DRAINER_NOT_FOUND -21 Drainer does not exist E_DRAINER_CLIENT_NOT_FOUND -22 Drainer client does not exist E_PART_STOPPED -23 The current partition has already been stopped E_BACKUP_FAILED -24 Backup failed E_BACKUP_EMPTY_TABLE -25 The backed-up table is empty E_BACKUP_TABLE_FAILED -26 Table backup failure E_PARTIAL_RESULT -27 MultiGet could not get all data E_REBUILD_INDEX_FAILED -28 Index rebuild failed E_INVALID_PASSWORD -29 Password is invalid E_FAILED_GET_ABS_PATH -30 Unable to get absolute path E_BAD_USERNAME_PASSWORD -1001 Authentication failed E_SESSION_INVALID -1002 Invalid session E_SESSION_TIMEOUT -1003 Session timeout E_SYNTAX_ERROR -1004 Syntax error E_EXECUTION_ERROR -1005 Execution error E_STATEMENT_EMPTY -1006 Statement is empty E_BAD_PERMISSION -1008 Permission denied E_SEMANTIC_ERROR -1009 Semantic error E_TOO_MANY_CONNECTIONS -1010 Maximum number of connections exceeded E_PARTIAL_SUCCEEDED -1011 Access to storage failed (only some requests succeeded) E_NO_HOSTS -2001 Host does not exist E_EXISTED -2002 Host already exists 
E_INVALID_HOST -2003 Invalid host E_UNSUPPORTED -2004 The current command, statement, or function is not supported E_NOT_DROP -2005 Not allowed to drop E_CONFIG_IMMUTABLE -2007 Configuration items cannot be changed E_CONFLICT -2008 Parameters conflict with meta data E_INVALID_PARM -2009 Invalid parameter E_WRONGCLUSTER -2010 Wrong cluster E_ZONE_NOT_ENOUGH -2011 Listener conflicts E_ZONE_IS_EMPTY -2012 Host not exist E_SCHEMA_NAME_EXISTS -2013 Schema name already exists E_RELATED_INDEX_EXISTS -2014 There are still indexes related to tag or edge, cannot drop it E_RELATED_SPACE_EXISTS -2015 There are still some space on the host, cannot drop it E_STORE_FAILURE -2021 Failed to store data E_STORE_SEGMENT_ILLEGAL -2022 Illegal storage segment E_BAD_BALANCE_PLAN -2023 Invalid data balancing plan E_BALANCED -2024 The cluster is already in the data balancing status E_NO_RUNNING_BALANCE_PLAN -2025 There is no running data balancing plan E_NO_VALID_HOST -2026 Lack of valid hosts E_CORRUPTED_BALANCE_PLAN -2027 A data balancing plan that has been corrupted E_IMPROPER_ROLE -2030 Failed to recover user role E_INVALID_PARTITION_NUM -2031 Number of invalid partitions E_INVALID_REPLICA_FACTOR -2032 Invalid replica factor E_INVALID_CHARSET -2033 Invalid character set E_INVALID_COLLATE -2034 Invalid character sorting rules E_CHARSET_COLLATE_NOT_MATCH -2035 Character set and character sorting rule mismatch E_SNAPSHOT_FAILURE -2040 Failed to generate a snapshot E_BLOCK_WRITE_FAILURE -2041 Failed to write block data E_ADD_JOB_FAILURE -2044 Failed to add new task E_STOP_JOB_FAILURE -2045 Failed to stop task E_SAVE_JOB_FAILURE -2046 Failed to save task information E_BALANCER_FAILURE -2047 Data balancing failed E_JOB_NOT_FINISHED -2048 The current task has not been completed E_TASK_REPORT_OUT_DATE -2049 Task report failed E_JOB_NOT_IN_SPACE -2050 The current task is not in the graph space E_JOB_NEED_RECOVER -2051 The current task needs to be resumed E_JOB_ALREADY_FINISH -2052 The job 
status has already been failed or finished E_JOB_SUBMITTED -2053 Job default status E_JOB_NOT_STOPPABLE -2054 The given job do not support stop E_JOB_HAS_NO_TARGET_STORAGE -2055 The leader distribution has not been reported, so can't send task to storage E_INVALID_JOB -2065 Invalid task E_BACKUP_BUILDING_INDEX -2066 Backup terminated (index being created) E_BACKUP_SPACE_NOT_FOUND -2067 Graph space does not exist at the time of backup E_RESTORE_FAILURE -2068 Backup recovery failed E_SESSION_NOT_FOUND -2069 Session does not exist E_LIST_CLUSTER_FAILURE -2070 Failed to get cluster information E_LIST_CLUSTER_GET_ABS_PATH_FAILURE -2071 Failed to get absolute path when getting cluster information E_LIST_CLUSTER_NO_AGENT_FAILURE -2072 Unable to get an agent when getting cluster information E_QUERY_NOT_FOUND -2073 Query not found E_AGENT_HB_FAILUE -2074 Failed to receive heartbeat from agent E_HOST_CAN_NOT_BE_ADDED -2082 The host can not be added for it's not a storage host E_ACCESS_ES_FAILURE -2090 Failed to access elasticsearch E_GRAPH_MEMORY_EXCEEDED -2600 Graph memory exceeded E_CONSENSUS_ERROR -3001 Consensus cannot be reached during an election E_KEY_HAS_EXISTS -3002 Key already exists E_DATA_TYPE_MISMATCH -3003 Data type mismatch E_INVALID_FIELD_VALUE -3004 Invalid field value E_INVALID_OPERATION -3005 Invalid operation E_NOT_NULLABLE -3006 Current value is not allowed to be empty E_FIELD_UNSET -3007 Field value must be set if the field value is NOT NULL or has no default value E_OUT_OF_RANGE -3008 The value is out of the range of the current type E_DATA_CONFLICT_ERROR -3010 Data conflict E_WRITE_STALLED -3011 Writes are delayed E_IMPROPER_DATA_TYPE -3021 Incorrect data type E_INVALID_SPACEVIDLEN -3022 Invalid VID length E_INVALID_FILTER -3031 Invalid filter E_INVALID_UPDATER -3032 Invalid field update E_INVALID_STORE -3033 Invalid KV storage E_INVALID_PEER -3034 Peer invalid E_RETRY_EXHAUSTED -3035 Out of retries E_TRANSFER_LEADER_FAILED -3036 Leader change failed 
E_INVALID_STAT_TYPE -3037 Invalid stat type E_INVALID_VID -3038 VID is invalid E_LOAD_META_FAILED -3040 Failed to load meta information E_FAILED_TO_CHECKPOINT -3041 Failed to generate checkpoint E_CHECKPOINT_BLOCKED -3042 Generating checkpoint is blocked E_FILTER_OUT -3043 Data is filtered E_INVALID_DATA -3044 Invalid data E_MUTATE_EDGE_CONFLICT -3045 Concurrent write conflicts on the same edge E_MUTATE_TAG_CONFLICT -3046 Concurrent write conflict on the same vertex E_OUTDATED_LOCK -3047 Lock is invalid E_INVALID_TASK_PARA -3051 Invalid task parameter E_USER_CANCEL -3052 The user canceled the task E_TASK_EXECUTION_FAILED -3053 Task execution failed E_PLAN_IS_KILLED -3060 Execution plan was cleared E_NO_TERM -3070 The heartbeat process was not completed when the request was received E_OUTDATED_TERM -3071 Out-of-date heartbeat received from the old leader (the new leader has been elected) E_WRITE_WRITE_CONFLICT -3073 Concurrent write conflicts with later requests E_RAFT_UNKNOWN_PART -3500 Unknown partition E_RAFT_LOG_GAP -3501 Raft logs lag behind E_RAFT_LOG_STALE -3502 Raft logs are out of date E_RAFT_TERM_OUT_OF_DATE -3503 Heartbeat messages are out of date E_RAFT_UNKNOWN_APPEND_LOG -3504 Unknown additional logs E_RAFT_WAITING_SNAPSHOT -3511 Waiting for the snapshot to complete E_RAFT_SENDING_SNAPSHOT -3512 There was an error sending the snapshot E_RAFT_INVALID_PEER -3513 Invalid receiver E_RAFT_NOT_READY -3514 Raft did not start E_RAFT_STOPPED -3515 Raft has stopped E_RAFT_BAD_ROLE -3516 Wrong role E_RAFT_WAL_FAIL -3521 Write to a WAL failed E_RAFT_HOST_STOPPED -3522 The host has stopped E_RAFT_TOO_MANY_REQUESTS -3523 Too many requests E_RAFT_PERSIST_SNAPSHOT_FAILED -3524 Persistent snapshot failed E_RAFT_RPC_EXCEPTION -3525 RPC exception E_RAFT_NO_WAL_FOUND -3526 No WAL logs found E_RAFT_HOST_PAUSED -3527 Host suspended E_RAFT_WRITE_BLOCKED -3528 Writes are blocked E_RAFT_BUFFER_OVERFLOW -3529 Cache overflow E_RAFT_ATOMIC_OP_FAILED -3530 Atomic operation failed 
E_LEADER_LEASE_FAILED -3531 Leader lease expired E_RAFT_CAUGHT_UP -3532 Data has been synchronized on Raft E_STORAGE_MEMORY_EXCEEDED -3600 Storage memory exceeded E_LOG_GAP -4001 Drainer logs lag behind E_LOG_STALE -4002 Drainer logs are out of date E_INVALID_DRAINER_STORE -4003 The drainer data storage is invalid E_SPACE_MISMATCH -4004 Graph space mismatch E_PART_MISMATCH -4005 Partition mismatch E_DATA_CONFLICT -4006 Data conflict E_REQ_CONFLICT -4007 Request conflict E_DATA_ILLEGAL -4008 Illegal data E_CACHE_CONFIG_ERROR -5001 Cache configuration error E_NOT_ENOUGH_SPACE -5002 Insufficient space E_CACHE_MISS -5003 No cache hit E_CACHE_WRITE_FAILURE -5005 Write cache failed E_NODE_NUMBER_EXCEED_LIMIT -7001 Number of machines exceeded the limit E_PARSING_LICENSE_FAILURE -7002 Failed to resolve certificate E_UNKNOWN -8000 Unknown error"},{"location":"20.appendix/history/","title":"History timeline for NebulaGraph","text":"
  1. 2018.9: dutor wrote and submitted the first line of NebulaGraph database code.

  2. 2019.5: NebulaGraph v0.1.0-alpha was released as open-source.

    NebulaGraph v1.0.0-beta, v1.0.0-rc1, v1.0.0-rc2, v1.0.0-rc3, and v1.0.0-rc4 were released one after another within a year thereafter.

  3. 2019.7: NebulaGraph's debut at HBaseCon1. @dangleptr

  4. 2020.3: Development of NebulaGraph v2.0 started, during the final stage of v1.0 development.

  5. 2020.6: The first major version of NebulaGraph v1.0.0 GA was released.

  6. 2021.3: The second major version of NebulaGraph v2.0 GA was released.

  7. 2021.8: NebulaGraph v2.5.0 was released.

  8. 2021.10: NebulaGraph v2.6.0 was released.

  9. 2022.2: NebulaGraph v3.0.0 was released.

  10. 2022.4: NebulaGraph v3.1.0 was released.

  11. 2022.7: NebulaGraph v3.2.0 was released.

  12. 2022.10: NebulaGraph v3.3.0 was released.

  13. 2023.2: NebulaGraph v3.4.0 was released.

  14. 2023.5: NebulaGraph v3.5.0 was released.

  15. 2023.8: NebulaGraph v3.6.0 was released.

  1. NebulaGraph v1.x supports both RocksDB and HBase as its storage engines. NebulaGraph v2.x removes HBase support. ↩

"},{"location":"20.appendix/port-guide/","title":"Port guide for company products","text":"

The following are the default ports used by NebulaGraph core and peripheral tools.

No. Product / Service Type Default Description 1 NebulaGraph TCP 9669 Graph service RPC daemon listening port. Commonly used for client connections to the Graph service. 2 NebulaGraph TCP 19669 Graph service HTTP port. 3 NebulaGraph TCP 19670 Graph service HTTP/2 port. (Deprecated after version 3.x) 4 NebulaGraph TCP 9559, 9560 9559 is the RPC daemon listening port for Meta service. Commonly used by Graph and Storage services for querying and updating metadata in the graph database. The neighboring +1 (9560) port is used for Raft communication between Meta services. 5 NebulaGraph TCP 19559 Meta service HTTP port. 6 NebulaGraph TCP 19560 Meta service HTTP/2 port. (Deprecated after version 3.x) 7 NebulaGraph TCP 9779, 9778, 9780 9779 is the RPC daemon listening port for Storage service. Commonly used by Graph services for data storage-related operations, such as reading, writing, or deleting data. The neighboring ports -1 (9778) and +1 (9780) are also used. 9778: The port used by the Admin service, which receives Meta commands for Storage. 9780: The port used for Raft communication between Storage services. 8 NebulaGraph TCP 19779 Storage service HTTP port. 9 NebulaGraph TCP 19780 Storage service HTTP/2 port. (Deprecated after version 3.x) 10 NebulaGraph TCP 8888 Backup and restore Agent service port. The Agent is a daemon running on each machine in the cluster, responsible for starting and stopping NebulaGraph services and uploading and downloading backup files. 11 NebulaGraph TCP 9789, 9788, 9790 9789 is the Raft Listener port for Full-text index, which reads data from Storage services and writes it to the Elasticsearch cluster. Also the port for Storage Listener in inter-cluster data synchronization, used for synchronizing Storage data from the primary cluster. The neighboring ports -1 (9788) and +1 (9790) are also used. 9788: An internal port. 9790: The port used for Raft communication. 
12 NebulaGraph TCP 9200 NebulaGraph uses this port for HTTP communication with Elasticsearch to perform full-text search queries and manage full-text indexes. 13 NebulaGraph TCP 9569, 9568, 9570 9569 is the Meta Listener port in inter-cluster data synchronization, used for synchronizing Meta data from the primary cluster. The neighboring ports -1 (9568) and +1 (9570) are also used. 9568: An internal port. 9570: The port used for Raft communication. 14 NebulaGraph TCP 9889, 9888, 9890 Drainer service port in inter-cluster data synchronization, used for synchronizing Storage and Meta data to the primary cluster. The neighboring ports -1 (9888) and +1 (9890) are also used. 9888: An internal port. 9890: The port used for Raft communication. 15 NebulaGraph Studio TCP 7001 Studio web service port. 16 NebulaGraph Dashboard TCP 8090 Nebula HTTP Gateway dependency service port. Provides an HTTP interface for cluster services to interact with the NebulaGraph database using nGQL statements. 17 NebulaGraph Dashboard TCP 9200 Nebula Stats Exporter dependency service port. Collects cluster performance metrics, including service IP addresses, versions, and monitoring metrics (such as query count, query latency, heartbeat latency, etc.). 18 NebulaGraph Dashboard TCP 9100 Node Exporter dependency service port. Collects resource information for machines in the cluster, including CPU, memory, load, disk, and traffic. 19 NebulaGraph Dashboard TCP 9090 Prometheus service port. Time-series database for storing monitoring data. 
20 NebulaGraph Dashboard TCP 7003 Dashboard Community Edition web service port."},{"location":"20.appendix/release-notes/dashboard-comm-release-note/","title":"NebulaGraph Dashboard Community Edition release notes","text":""},{"location":"20.appendix/release-notes/dashboard-comm-release-note/#community_edition_340","title":"Community Edition 3.4.0","text":" "},{"location":"20.appendix/release-notes/nebula-comm-release-note/","title":"NebulaGraph 3.8.0 release notes","text":" "},{"location":"20.appendix/release-notes/studio-release-note/","title":"NebulaGraph Studio release notes","text":""},{"location":"20.appendix/release-notes/studio-release-note/#v3100_20245","title":"v3.10.0 (2024.5)","text":" "},{"location":"20.appendix/release-notes/studio-release-note/#v391_20242","title":"v3.9.1 (2024.2)","text":""},{"location":"20.appendix/release-notes/studio-release-note/#v390_20241","title":"v3.9.0 (2024.1)","text":" "},{"location":"3.ngql-guide/4.job-statements/","title":"Job manager and the JOB statements","text":"

The long-term tasks run by the Storage Service are called jobs, such as COMPACT, FLUSH, and STATS. These jobs can be time-consuming if the data amount in the graph space is large. The job manager helps you run, show, stop, and recover jobs.

Note

All job management commands can be executed only after selecting a graph space.

"},{"location":"3.ngql-guide/4.job-statements/#submit_job_balance_leader","title":"SUBMIT JOB BALANCE LEADER","text":"

Starts a job to balance the distribution of all the storage leaders in all graph spaces. It returns the job ID.

For example:

nebula> SUBMIT JOB BALANCE LEADER;\n+------------+\n| New Job Id |\n+------------+\n| 33         |\n+------------+\n
"},{"location":"3.ngql-guide/4.job-statements/#submit_job_compact","title":"SUBMIT JOB COMPACT","text":"

The SUBMIT JOB COMPACT statement triggers the long-term RocksDB compact operation in the current graph space.

For more information about compact configuration, see Storage Service configuration.

For example:

nebula> SUBMIT JOB COMPACT;\n+------------+\n| New Job Id |\n+------------+\n| 40         |\n+------------+\n
"},{"location":"3.ngql-guide/4.job-statements/#submit_job_flush","title":"SUBMIT JOB FLUSH","text":"

The SUBMIT JOB FLUSH statement writes the RocksDB memtable from memory to the hard disk in the current graph space.

For example:

nebula> SUBMIT JOB FLUSH;\n+------------+\n| New Job Id |\n+------------+\n| 96         |\n+------------+\n
"},{"location":"3.ngql-guide/4.job-statements/#submit_job_stats","title":"SUBMIT JOB STATS","text":"

The SUBMIT JOB STATS statement starts a job that collects statistics for the current graph space. Once this job succeeds, you can use the SHOW STATS statement to list the statistics. For more information, see SHOW STATS.

Note

If the data stored in the graph space changes, in order to get the latest statistics, you have to run SUBMIT JOB STATS again.

For example:

nebula> SUBMIT JOB STATS;\n+------------+\n| New Job Id |\n+------------+\n| 9          |\n+------------+\n
"},{"location":"3.ngql-guide/4.job-statements/#submit_job_downloadingest","title":"SUBMIT JOB DOWNLOAD/INGEST","text":"

The SUBMIT JOB DOWNLOAD HDFS and SUBMIT JOB INGEST commands are used to import SST files into NebulaGraph. For details, see Import data from SST files.

The SUBMIT JOB DOWNLOAD HDFS command will download the SST file on the specified HDFS.

The SUBMIT JOB INGEST command will import the downloaded SST file into NebulaGraph.

For example:

nebula> SUBMIT JOB DOWNLOAD HDFS \"hdfs://192.168.10.100:9000/sst\";\n+------------+\n| New Job Id |\n+------------+\n| 10         |\n+------------+\nnebula> SUBMIT JOB INGEST;\n+------------+\n| New Job Id |\n+------------+\n| 11         |\n+------------+\n
"},{"location":"3.ngql-guide/4.job-statements/#show_job","title":"SHOW JOB","text":"

The Meta Service parses a SUBMIT JOB request into multiple tasks and assigns them to the nebula-storaged processes. The SHOW JOB <job_id> statement shows the information about a specific job and all its tasks in the current graph space.

job_id is returned when you run the SUBMIT JOB statement.

For example:

nebula> SHOW JOB 8;\n+----------------+-----------------+------------+----------------------------+----------------------------+-------------+\n| Job Id(TaskId) | Command(Dest)   | Status     | Start Time                 | Stop Time                  | Error Code  |\n+----------------+-----------------+------------+----------------------------+----------------------------+-------------+\n| 8              | \"STATS\"         | \"FINISHED\" | 2022-10-18T08:14:45.000000 | 2022-10-18T08:14:45.000000 | \"SUCCEEDED\" |\n| 0              | \"192.168.8.129\" | \"FINISHED\" | 2022-10-18T08:14:45.000000 | 2022-10-18T08:15:13.000000 | \"SUCCEEDED\" |\n| \"Total:1\"      | \"Succeeded:1\"   | \"Failed:0\" | \"In Progress:0\"            | \"\"                         | \"\"          |\n+----------------+-----------------+------------+----------------------------+----------------------------+-------------+\n

The descriptions are as follows.

Parameter Description Job Id(TaskId) The first row shows the job ID and the other rows show the task IDs and the last row shows the total number of job-related tasks. Command(Dest) The first row shows the command executed and the other rows show on which storaged processes the task is running. The last row shows the number of successful tasks related to the job. Status Shows the status of the job or task. The last row shows the number of failed tasks related to the job. For more information, see Job status. Start Time Shows a timestamp indicating the time when the job or task enters the RUNNING phase. The last row shows the number of ongoing tasks related to the job. Stop Time Shows a timestamp indicating the time when the job or task gets FINISHED, FAILED, or STOPPED. Error Code The error code of job."},{"location":"3.ngql-guide/4.job-statements/#job_status","title":"Job status","text":"

The descriptions are as follows.

Status Description QUEUE The job or task is waiting in a queue. The Start Time is empty in this phase. RUNNING The job or task is running. The Start Time shows the beginning time of this phase. FINISHED The job or task is successfully finished. The Stop Time shows the time when the job or task enters this phase. FAILED The job or task has failed. The Stop Time shows the time when the job or task enters this phase. STOPPED The job or task is stopped without running. The Stop Time shows the time when the job or task enters this phase. REMOVED The job or task is removed.

The status transitions are illustrated as follows.

Queue -- running -- finished -- removed\n     \\          \\                /\n      \\          \\ -- failed -- /\n       \\          \\            /\n        \\ ---------- stopped -/\n
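The diagram above can be modeled as a small transition table, which a client could use to validate status changes. This is purely an illustration of the documented lifecycle, not an interface of NebulaGraph:

```python
# Allowed job status transitions, following the diagram above.
# RECOVER JOB re-queues FAILED or STOPPED jobs, hence the edges back to QUEUE.
TRANSITIONS = {
    "QUEUE":    {"RUNNING", "STOPPED", "REMOVED"},
    "RUNNING":  {"FINISHED", "FAILED", "STOPPED"},
    "FINISHED": {"REMOVED"},
    "FAILED":   {"QUEUE", "REMOVED"},
    "STOPPED":  {"QUEUE", "REMOVED"},
    "REMOVED":  set(),
}

def can_transition(src, dst):
    """Return True if a job may move from status src to status dst."""
    return dst in TRANSITIONS.get(src, set())
```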
"},{"location":"3.ngql-guide/4.job-statements/#show_jobs","title":"SHOW JOBS","text":"

The SHOW JOBS statement lists all the unexpired jobs in the current graph space.

The default job expiration interval is one week. You can change it by modifying the job_expired_secs parameter of the Meta Service. For how to modify job_expired_secs, see Meta Service configuration.

For example:

nebula> SHOW JOBS;\n+--------+---------------------+------------+----------------------------+----------------------------+\n| Job Id | Command             | Status     | Start Time                 | Stop Time                  |\n+--------+---------------------+------------+----------------------------+----------------------------+\n| 34     | \"STATS\"             | \"FINISHED\" | 2021-11-01T03:32:27.000000 | 2021-11-01T03:32:27.000000 |\n| 33     | \"FLUSH\"             | \"FINISHED\" | 2021-11-01T03:32:15.000000 | 2021-11-01T03:32:15.000000 |\n| 32     | \"COMPACT\"           | \"FINISHED\" | 2021-11-01T03:32:06.000000 | 2021-11-01T03:32:06.000000 |\n| 31     | \"REBUILD_TAG_INDEX\" | \"FINISHED\" | 2021-10-29T05:39:16.000000 | 2021-10-29T05:39:17.000000 |\n| 10     | \"COMPACT\"           | \"FINISHED\" | 2021-10-26T02:27:05.000000 | 2021-10-26T02:27:05.000000 |\n+--------+---------------------+------------+----------------------------+----------------------------+\n
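Building on the job_expired_secs parameter mentioned above: to shorten the expiration interval to, say, one day, you might set the flag in the Meta Service configuration file. The file path below is the default installation path and is illustrative; verify the flag against your version's configuration reference:

```
# /usr/local/nebula/etc/nebula-metad.conf (fragment)
# Expire jobs after 86400 seconds (1 day); the default is 604800 (1 week).
--job_expired_secs=86400
```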
"},{"location":"3.ngql-guide/4.job-statements/#stop_job","title":"STOP JOB","text":"

The STOP JOB <job_id> statement stops jobs that are not finished in the current graph space.

For example:

nebula> STOP JOB 22;\n+---------------+\n| Result        |\n+---------------+\n| \"Job stopped\" |\n+---------------+\n
"},{"location":"3.ngql-guide/4.job-statements/#recover_job","title":"RECOVER JOB","text":"

The RECOVER JOB [<job_id>] statement re-executes jobs whose status is FAILED or STOPPED in the current graph space and returns the number of recovered jobs. If <job_id> is not specified, re-execution starts from the earliest job, and the number of recovered jobs is returned.

For example:

nebula> RECOVER JOB;\n+-------------------+\n| Recovered job num |\n+-------------------+\n| 5 job recovered   |\n+-------------------+\n
"},{"location":"3.ngql-guide/4.job-statements/#faq","title":"FAQ","text":""},{"location":"3.ngql-guide/4.job-statements/#how_to_troubleshoot_job_problems","title":"How to troubleshoot job problems?","text":"

The SUBMIT JOB operations use the HTTP port. Please check if the HTTP ports on the machines where the Storage Service is running are working well. You can use the following command to debug.

curl \"http://{storaged-ip}:19779/admin?space={space_name}&op=compact\"\n
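As a small illustration, the admin URL from the curl command above can be composed per storaged host; the host addresses and space name below are placeholders:

```python
def admin_url(host, space, op, http_port=19779):
    """Compose the HTTP admin endpoint of a storaged process,
    matching the curl example above."""
    return f"http://{host}:{http_port}/admin?space={space}&op={op}"

# One URL per storaged host (illustrative addresses):
urls = [admin_url(h, "basketballplayer", "compact")
        for h in ("192.168.8.111", "192.168.8.112")]
```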
"},{"location":"3.ngql-guide/1.nGQL-overview/1.overview/","title":"NebulaGraph Query Language (nGQL)","text":"

This topic gives an introduction to the query language of NebulaGraph, nGQL.

"},{"location":"3.ngql-guide/1.nGQL-overview/1.overview/#what_is_ngql","title":"What is nGQL","text":"

nGQL is a declarative graph query language for NebulaGraph. It allows expressive and efficient graph patterns. nGQL is designed for both developers and operations professionals. nGQL is an SQL-like query language, so it's easy to learn.

nGQL is a project in progress. New features and optimizations are made steadily, so there may be differences between the syntax and its implementation. Submit an issue to inform the NebulaGraph team if you find such an inconsistency. NebulaGraph 3.0 or later releases will support openCypher 9.

"},{"location":"3.ngql-guide/1.nGQL-overview/1.overview/#what_can_ngql_do","title":"What can nGQL do","text":""},{"location":"3.ngql-guide/1.nGQL-overview/1.overview/#example_data_basketballplayer","title":"Example data Basketballplayer","text":"

Users can download the example data Basketballplayer in NebulaGraph. After downloading the example data, you can import it to NebulaGraph by using the -f option in NebulaGraph Console.

Note

Ensure that you have executed the ADD HOSTS command to add the Storage service to your NebulaGraph cluster before importing the example data. For more information, see Manage Storage hosts.

"},{"location":"3.ngql-guide/1.nGQL-overview/1.overview/#placeholder_identifiers_and_values","title":"Placeholder identifiers and values","text":"

Refer to the following standards in nGQL:

In template code, any token that is not a keyword, a literal value, or punctuation is a placeholder identifier or a placeholder value.

For details of the symbols in nGQL syntax, see the following table:

Token Meaning < > name of a syntactic element : formula that defines an element [ ] optional elements { } explicitly specified elements | complete alternative elements ... may be repeated any number of times

For example, create vertices in nGQL syntax:

INSERT VERTEX [IF NOT EXISTS] [tag_props, [tag_props] ...]\nVALUES <vid>: ([prop_value_list])\ntag_props:\n  tag_name ([prop_name_list])\nprop_name_list:\n   [prop_name [, prop_name] ...]\nprop_value_list:\n   [prop_value [, prop_value] ...]  \n

Example statement:

nebula> CREATE TAG IF NOT EXISTS player(name string, age int);\n
"},{"location":"3.ngql-guide/1.nGQL-overview/1.overview/#about_opencypher_compatibility","title":"About openCypher compatibility","text":""},{"location":"3.ngql-guide/1.nGQL-overview/1.overview/#native_ngql_and_opencypher","title":"Native nGQL and openCypher","text":"

Native nGQL is the part of the graph query language designed and implemented by NebulaGraph. openCypher is a graph query language maintained by the openCypher Implementers Group.

The latest release is openCypher 9. The parts of openCypher that nGQL is compatible with are called openCypher compatible sentences (openCypher for short).

Note

nGQL = native nGQL + openCypher compatible sentences

"},{"location":"3.ngql-guide/1.nGQL-overview/1.overview/#is_ngql_compatible_with_opencypher_9_completely","title":"Is nGQL compatible with openCypher 9 completely?","text":"

NO.

Compatibility with openCypher

nGQL is designed to be compatible with part of the openCypher DQL (MATCH, OPTIONAL MATCH, WITH, etc.).

Users can search in this manual with the keyword compatibility to find major compatibility issues.

Multiple known incompatible items are listed in NebulaGraph Issues. Submit an issue with the incompatible tag if you find a new issue of this type.

"},{"location":"3.ngql-guide/1.nGQL-overview/1.overview/#what_are_the_major_differences_between_ngql_and_opencypher_9","title":"What are the major differences between nGQL and openCypher 9?","text":"

The following are some major differences (incompatible by design) between nGQL and openCypher.

Category openCypher 9 nGQL Schema Optional Schema Strong Schema Equality operator = == Math exponentiation ^ ^ is not supported. Use pow(x, y) instead. Edge rank No such concept. edge rank (reference by @) Statement - All DMLs (CREATE, MERGE, etc) of openCypher 9. Label and tag A label is used for searching a vertex, namely an index of vertex. A tag defines the type of a vertex and its corresponding properties. It cannot be used as an index. Pre-compiling and parameterized queries Support Parameterized queries are supported, but precompiling is not.

Compatibility

OpenCypher 9 and Cypher have some differences in grammar and licence. For example,

  1. Cypher requires that all Cypher statements be explicitly run within a transaction, while openCypher has no such requirement. nGQL does not support transactions.

  2. Cypher has a variety of constraints, including Unique node property constraints, Node property existence constraints, Relationship property existence constraints, and Node key constraints, while openCypher has no such constraints. As a strong-schema system, nGQL can enforce most of these constraints through schema definitions (including NOT NULL). The only constraint that cannot be supported is the UNIQUE constraint.

  3. Cypher has APOC, while openCypher 9 does not. Cypher requires support for the Bolt protocol, while openCypher 9 does not.

"},{"location":"3.ngql-guide/1.nGQL-overview/1.overview/#where_can_i_find_more_ngql_examples","title":"Where can I find more nGQL examples?","text":"

Users can find more than 2500 nGQL examples in the features directory on the NebulaGraph GitHub page.

The features directory consists of .feature files. Each file records scenarios that you can use as nGQL examples. Here is an example:

Feature: Basic match\n\n  Background:\n    Given a graph with space named \"basketballplayer\"\n\n  Scenario: Single node\n    When executing query:\n      \"\"\"\n      MATCH (v:player {name: \"Yao Ming\"}) RETURN v;\n      \"\"\"\n    Then the result should be, in any order, with relax comparison:\n      | v                                                |\n      | (\"player133\" :player{age: 38, name: \"Yao Ming\"}) |\n\n  Scenario: One step\n    When executing query:\n      \"\"\"\n      MATCH (v1:player{name: \"LeBron James\"}) -[r]-> (v2)\n      RETURN type(r) AS Type, v2.player.name AS Name\n      \"\"\"\n    Then the result should be, in any order:\n\n      | Type     | Name        |\n      | \"follow\" | \"Ray Allen\" |\n      | \"serve\"  | \"Lakers\"    |\n      | \"serve\"  | \"Heat\"      |\n      | \"serve\"  | \"Cavaliers\" |\n\nFeature:  Comparison of where clause\n\n  Background:\n    Given a graph with space named \"basketballplayer\"\n\n    Scenario: push edge props filter down\n      When profiling query:\n        \"\"\"\n        GO FROM \"player100\" OVER follow \n        WHERE properties(edge).degree IN [v IN [95,99] WHERE v > 0] \n        YIELD dst(edge), properties(edge).degree\n        \"\"\"\n      Then the result should be, in any order:\n        | follow._dst | follow.degree |\n        | \"player101\" | 95            |\n        | \"player125\" | 95            |\n      And the execution plan should be:\n        | id | name         | dependencies | operator info                                               |\n        | 0  | Project      | 1            |                                                             |\n        | 1  | GetNeighbors | 2            | {\"filter\": \"(properties(edge).degree IN [v IN [95,99] WHERE (v>0)])\"} |\n        | 2  | Start        |              |                                                             |\n

The keywords in the preceding example are described as follows.

Keyword Description Feature Describes the topic of the current .feature file. Background Describes the background information of the current .feature file. Given Describes the prerequisites for running the test statements in the current .feature file. Scenario Describes the scenarios. If @skip appears before a Scenario, the scenario may not work; do not use it as a working example in a production environment. When Describes the nGQL statement to be executed. It can be an executing query or a profiling query. Then Describes the expected return results of running the statement in the When clause. If the return results in your environment do not match the results described in the .feature file, submit an issue to inform the NebulaGraph team. And Describes the side effects of running the statement in the When clause. @skip This test case will be skipped. Commonly, the to-be-tested code is not ready.

You are welcome to add more TCK cases, which will be run automatically in CI/CD.

"},{"location":"3.ngql-guide/1.nGQL-overview/1.overview/#does_it_support_tinkerpop_gremlin","title":"Does it support TinkerPop Gremlin?","text":"

No, and there is no plan to support it.

"},{"location":"3.ngql-guide/1.nGQL-overview/1.overview/#does_nebulagraph_support_w3c_rdf_sparql_or_graphql","title":"Does NebulaGraph support W3C RDF (SPARQL) or GraphQL?","text":"

No, and there is no plan to support them.

The data model of NebulaGraph is the property graph. As a strong-schema system, NebulaGraph does not support RDF.

The NebulaGraph Query Language supports neither SPARQL nor GraphQL.

"},{"location":"3.ngql-guide/1.nGQL-overview/3.graph-patterns/","title":"Patterns","text":"

Patterns and graph pattern matching are the very heart of a graph query language. This topic will describe the patterns in NebulaGraph, some of which have not yet been implemented.

"},{"location":"3.ngql-guide/1.nGQL-overview/3.graph-patterns/#patterns_for_vertices","title":"Patterns for vertices","text":"

A vertex is described using a pair of parentheses and is typically given a name. For example:

(a)\n

This simple pattern describes a single vertex and names that vertex using the variable a.

"},{"location":"3.ngql-guide/1.nGQL-overview/3.graph-patterns/#patterns_for_related_vertices","title":"Patterns for related vertices","text":"

A more powerful construct is a pattern that describes multiple vertices and edges between them. Patterns describe an edge by employing an arrow between two vertices. For example:

(a)-[]->(b)\n

This pattern describes a very simple data structure: two vertices and a single edge from one to the other. In this example, the two vertices are named as a and b respectively and the edge is directed: it goes from a to b.

This manner of describing vertices and edges can be extended to cover an arbitrary number of vertices and the edges between them, for example:

(a)-[]->(b)<-[]-(c)\n

Such a series of connected vertices and edges is called a path.

Note that the naming of the vertices in these patterns is only necessary when one needs to refer to the same vertex again, either later in the pattern or elsewhere in the query. If not, the name may be omitted as follows:

(a)-[]->()<-[]-(c)\n
"},{"location":"3.ngql-guide/1.nGQL-overview/3.graph-patterns/#patterns_for_tags","title":"Patterns for tags","text":"

Note

The concept of tag in nGQL has a few differences from that of label in openCypher. For example, users must create a tag before using it. And a tag also defines the type of properties.

In addition to simply describing the vertices in the graphs, patterns can also describe the tags of the vertices. For example:

(a:User)-[]->(b)\n

Patterns can also describe a vertex that has multiple tags. For example:

(a:User:Admin)-[]->(b)\n
"},{"location":"3.ngql-guide/1.nGQL-overview/3.graph-patterns/#patterns_for_properties","title":"Patterns for properties","text":"

Vertices and edges are the fundamental elements in a graph. In nGQL, properties are added to them for richer models.

In patterns, properties are expressed as key-value pairs enclosed in curly brackets and separated by commas. The tag or edge type to which a property belongs must be specified.

For example, a vertex with two properties looks like this:

(a:player{name: \"Tim Duncan\", age: 42})\n

One of the edges connecting to this vertex can look like this:

(a)-[e:follow{degree: 95}]->(b)\n
"},{"location":"3.ngql-guide/1.nGQL-overview/3.graph-patterns/#patterns_for_edges","title":"Patterns for edges","text":"

The simplest way to describe an edge is by using the arrow between two vertices, as in the previous examples.

An edge and its direction can be described as in the preceding examples. If the direction of the edge does not matter, the arrowhead can be omitted. For example:

(a)-[]-(b)\n

Like vertices, edges can also be named. A pair of square brackets is inserted into the arrow, and the variable is placed between them. For example:

(a)-[r]->(b)\n

Like the tags on vertices, edges can also have types. To describe an edge with a specific type, use the pattern as follows:

(a)-[r:REL_TYPE]->(b)\n

An edge can only have one edge type. But to describe data where an edge could have one of a set of types, list them all in the pattern, separated by the pipe symbol | like this:

(a)-[r:TYPE1|TYPE2]->(b)\n

Like vertices, the name of an edge can be omitted. For example:

(a)-[:REL_TYPE]->(b)\n
"},{"location":"3.ngql-guide/1.nGQL-overview/3.graph-patterns/#variable-length_pattern","title":"Variable-length pattern","text":"

Rather than describing a long path using a sequence of many vertex and edge descriptions in a pattern, many edges (and the intermediate vertices) can be described by specifying a length in the edge description of a pattern. For example:

(a)-[*2]->(b)\n

This pattern describes a graph of three vertices and two edges, all in one path (a path of length 2). It is equivalent to:

(a)-[]->()-[]->(b)\n

The range of lengths can also be specified. Such edge patterns are called variable-length edges. For example:

(a)-[*3..5]->(b)\n

The preceding example defines a path with a minimum length of 3 and a maximum length of 5.

It describes a graph of either 4 vertices and 3 edges, 5 vertices and 4 edges, or 6 vertices and 5 edges, all connected in a single path.

You may specify either the upper limit or lower limit of the length range, or neither of them, for example:

(a)-[*..5]->(b)   // The minimum length is 1 and the maximum length is 5.\n(a)-[*3..]->(b)   // The minimum length is 3 and the maximum length is infinity.\n(a)-[*]->(b)      // The minimum length is 1 and the maximum length is infinity.\n
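The vertex and edge counts above follow directly from the path length: a path of length n always has n edges and n + 1 vertices. A quick sketch of that arithmetic in Python (illustrative only, not part of nGQL):

```python
# For a path pattern such as (a)-[*n]->(b), a path of length n
# always contains n edges and n + 1 vertices.
def path_shape(length: int) -> tuple[int, int]:
    """Return (vertex_count, edge_count) for a path of the given length."""
    return length + 1, length

# The range *3..5 therefore matches paths with these shapes:
shapes = [path_shape(n) for n in range(3, 6)]
print(shapes)  # [(4, 3), (5, 4), (6, 5)]
```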
"},{"location":"3.ngql-guide/1.nGQL-overview/3.graph-patterns/#assigning_to_path_variables","title":"Assigning to path variables","text":"

As described above, a series of connected vertices and edges is called a path. nGQL allows paths to be named using variables. For example:

p = (a)-[*3..5]->(b)\n

Users can do this in the MATCH statement.

"},{"location":"3.ngql-guide/1.nGQL-overview/comments/","title":"Comments","text":"

This topic will describe the comments in nGQL.

Legacy version compatibility

"},{"location":"3.ngql-guide/1.nGQL-overview/comments/#examples","title":"Examples","text":"
nebula> RETURN 1+1;     # This comment continues to the end of this line.\nnebula> RETURN 1+1;     // This comment continues to the end of this line.\nnebula> RETURN 1 /* This is an in-line comment. */ + 1 == 2;\nnebula> RETURN 11 +            \\\n/* Multi-line comment.       \\\nUse a backslash as a line break.   \\\n*/ 12;\n

Note

"},{"location":"3.ngql-guide/1.nGQL-overview/comments/#opencypher_compatibility","title":"OpenCypher compatibility","text":"
/* openCypher style:\nThe following comment\nspans more than\none line */\nMATCH (n:label)\nRETURN n;\n
/* nGQL style:  \\\nThe following comment       \\\nspans more than     \\\none line */       \\\nMATCH (n:tag) \\\nRETURN n;\n
"},{"location":"3.ngql-guide/1.nGQL-overview/identifier-case-sensitivity/","title":"Identifier case sensitivity","text":""},{"location":"3.ngql-guide/1.nGQL-overview/identifier-case-sensitivity/#identifiers_are_case-sensitive","title":"Identifiers are Case-Sensitive","text":"

The following statements will not work because they refer to two different spaces, i.e. my_space and MY_SPACE.

nebula> CREATE SPACE IF NOT EXISTS my_space (vid_type=FIXED_STRING(30));\nnebula> use MY_SPACE;\n[ERROR (-1005)]: SpaceNotFound:\n
"},{"location":"3.ngql-guide/1.nGQL-overview/identifier-case-sensitivity/#keywords_and_reserved_words_are_case-insensitive","title":"Keywords and Reserved Words are Case-Insensitive","text":"

The following statements are equivalent since show and spaces are keywords.

nebula> show spaces;  \nnebula> SHOW SPACES;\nnebula> SHOW spaces;\nnebula> show SPACES;\n
"},{"location":"3.ngql-guide/1.nGQL-overview/identifier-case-sensitivity/#functions_are_case-insensitive","title":"Functions are Case-Insensitive","text":"

Functions are case-insensitive. For example, count(), COUNT(), and couNT() are equivalent.

nebula> WITH [NULL, 1, 1, 2, 2] As a \\\n        UNWIND a AS b \\\n        RETURN count(b), COUNT(*), couNT(DISTINCT b);\n+----------+----------+-------------------+\n| count(b) | COUNT(*) | couNT(distinct b) |\n+----------+----------+-------------------+\n| 4        | 5        | 2                 |\n+----------+----------+-------------------+\n
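The three results above differ because count(expr) skips NULL values, COUNT(*) counts every row, and DISTINCT deduplicates. The same semantics can be mirrored in plain Python (NULL modeled as None; illustrative only, not part of nGQL):

```python
# Mirror of count(b), COUNT(*), and count(DISTINCT b) over [NULL, 1, 1, 2, 2],
# with NULL modeled as Python's None.
a = [None, 1, 1, 2, 2]

count_b = sum(1 for x in a if x is not None)             # count(b): skips NULL
count_star = len(a)                                      # COUNT(*): counts all rows
count_distinct_b = len({x for x in a if x is not None})  # count(DISTINCT b)

print(count_b, count_star, count_distinct_b)  # 4 5 2
```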
"},{"location":"3.ngql-guide/1.nGQL-overview/keywords-and-reserved-words/","title":"Keywords","text":"

Keywords in nGQL are words with particular meanings, such as CREATE and TAG in the CREATE TAG statement. Keywords that require special processing to be used as identifiers are referred to as reserved keywords, while keywords that can be used directly as identifiers are called non-reserved keywords.

It is not recommended to use keywords to identify schemas. If you must use keywords as identifiers, pay attention to the following restrictions:

Note

Keywords are case-insensitive.

nebula> CREATE TAG TAG(name string);\n[ERROR (-1004)]: SyntaxError: syntax error near `TAG'\n\nnebula> CREATE TAG `TAG` (name string);\nExecution succeeded\n\nnebula> CREATE TAG SPACE(name string);\nExecution succeeded\n\nnebula> CREATE TAG \u4e2d\u6587(\u7b80\u4f53 string);\nExecution succeeded\n\nnebula> CREATE TAG `\uffe5%special characters&*+-*/` (`q~\uff01\uff08\uff09=  wer` string);\nExecution succeeded\n
"},{"location":"3.ngql-guide/1.nGQL-overview/keywords-and-reserved-words/#reserved_keywords","title":"Reserved keywords","text":"
ACROSS\nADD\nALTER\nAND\nAS\nASC\nASCENDING\nBALANCE\nBOOL\nBY\nCASE\nCHANGE\nCOMPACT\nCREATE\nDATE\nDATETIME\nDELETE\nDESC\nDESCENDING\nDESCRIBE\nDISTINCT\nDOUBLE\nDOWNLOAD\nDROP\nDURATION\nEDGE\nEDGES\nEXISTS\nEXPLAIN\nFALSE\nFETCH\nFIND\nFIXED_STRING\nFLOAT\nFLUSH\nFROM\nGEOGRAPHY\nGET\nGO\nGRANT\nIF\nIGNORE_EXISTED_INDEX\nIN\nINDEX\nINDEXES\nINGEST\nINSERT\nINT\nINT16\nINT32\nINT64\nINT8\nINTERSECT\nIS\nJOIN\nLEFT\nLIST\nLOOKUP\nMAP\nMATCH\nMINUS\nNO\nNOT\nNULL\nOF\nON\nOR\nORDER\nOVER\nOVERWRITE\nPATH\nPROP\nREBUILD\nRECOVER\nREMOVE\nRESTART\nRETURN\nREVERSELY\nREVOKE\nSET\nSHOW\nSTEP\nSTEPS\nSTOP\nSTRING\nSUBMIT\nTAG\nTAGS\nTIME\nTIMESTAMP\nTO\nTRUE\nUNION\nUNWIND\nUPDATE\nUPSERT\nUPTO\nUSE\nVERTEX\nVERTICES\nWHEN\nWHERE\nWITH\nXOR\nYIELD\n
"},{"location":"3.ngql-guide/1.nGQL-overview/keywords-and-reserved-words/#non-reserved_keywords","title":"Non-reserved keywords","text":"
ACCOUNT\nADMIN\nAGENT\nALL\nALLSHORTESTPATHS\nANALYZER\nANY\nATOMIC_EDGE\nAUTO\nBASIC\nBIDIRECT\nBOTH\nCHARSET\nCLEAR\nCLIENTS\nCOLLATE\nCOLLATION\nCOMMENT\nCONFIGS\nCONTAINS\nDATA\nDBA\nDEFAULT\nDIVIDE\nDRAINER\nDRAINERS\nELASTICSEARCH\nELSE\nEND\nENDS\nES_QUERY\nFORCE\nFORMAT\nFULLTEXT\nGOD\nGRANTS\nGRAPH\nGROUP\nGROUPS\nGUEST\nHDFS\nHOST\nHOSTS\nHTTP\nHTTPS\nINTO\nIP\nJOB\nJOBS\nKILL\nLEADER\nLIMIT\nLINESTRING\nLISTENER\nLOCAL\nMERGE\nMETA\nNEW\nNOLOOP\nNONE\nOFFSET\nOPTIONAL\nOUT\nPART\nPARTITION_NUM\nPARTS\nPASSWORD\nPLAN\nPOINT\nPOLYGON\nPROFILE\nQUERIES\nQUERY\nREAD\nREDUCE\nRENAME\nREPLICA_FACTOR\nRESET\nROLE\nROLES\nS2_MAX_CELLS\nS2_MAX_LEVEL\nSAMPLE\nSEARCH\nSERVICE\nSESSION\nSESSIONS\nSHORTEST\nSHORTESTPATH\nSIGN\nSINGLE\nSKIP\nSNAPSHOT\nSNAPSHOTS\nSPACE\nSPACES\nSTARTS\nSTATS\nSTATUS\nSTORAGE\nSUBGRAPH\nSYNC\nTEXT\nTEXT_SEARCH\nTHEN\nTOP\nTTL_COL\nTTL_DURATION\nUSER\nUSERS\nUUID\nVALUE\nVALUES\nVARIABLES\nVID_TYPE\nWHITELIST\nWRITE\nZONE\nZONES\n
"},{"location":"3.ngql-guide/1.nGQL-overview/ngql-style-guide/","title":"nGQL style guide","text":"

nGQL does not have strict formatting requirements, but creating nGQL statements according to an appropriate and uniform style can improve readability and avoid ambiguity. Using the same nGQL style in the same organization or project helps reduce maintenance costs and avoid problems caused by format confusion or misunderstanding. This topic will provide a style guide for writing nGQL statements.

Compatibility

The styles of nGQL and Cypher Style Guide are different.

"},{"location":"3.ngql-guide/1.nGQL-overview/ngql-style-guide/#newline","title":"Newline","text":"
  1. Start a new line to write a clause.

    Not recommended:

    GO FROM \"player100\" OVER follow REVERSELY YIELD src(edge) AS id;\n

    Recommended:

    GO FROM \"player100\" \\\nOVER follow REVERSELY \\\nYIELD src(edge) AS id;\n
  2. Start a new line to write different statements in a composite statement.

    Not recommended:

    GO FROM \"player100\" OVER follow REVERSELY YIELD src(edge) AS id | GO FROM $-.id \\\nOVER serve WHERE properties($^).age > 20 YIELD properties($^).name AS FriendOf, properties($$).name AS Team;\n

    Recommended:

    GO FROM \"player100\" \\\nOVER follow REVERSELY \\\nYIELD src(edge) AS id | \\\nGO FROM $-.id OVER serve \\\nWHERE properties($^).age > 20 \\\nYIELD properties($^).name AS FriendOf, properties($$).name AS Team;\n
  3. If the clause exceeds 80 characters, start a new line at the appropriate place.

    Not recommended:

    MATCH (v:player{name:\"Tim Duncan\"})-[e]->(v2) \\\nWHERE (v2.player.name STARTS WITH \"Y\" AND v2.player.age > 35 AND v2.player.age < v.player.age) OR (v2.player.name STARTS WITH \"T\" AND v2.player.age < 45 AND v2.player.age > v.player.age) \\\nRETURN v2;\n

    Recommended:

    MATCH (v:player{name:\"Tim Duncan\"})-[e]->(v2) \\\nWHERE (v2.player.name STARTS WITH \"Y\" AND v2.player.age > 35 AND v2.player.age < v.player.age) \\\nOR (v2.player.name STARTS WITH \"T\" AND v2.player.age < 45 AND v2.player.age > v.player.age) \\\nRETURN v2;\n

Note

If needed, you can also start a new line for better understanding, even if the clause does not exceed 80 characters.

"},{"location":"3.ngql-guide/1.nGQL-overview/ngql-style-guide/#identifier_naming","title":"Identifier naming","text":"

In nGQL statements, characters other than keywords, punctuation marks, and blanks are all identifiers. Recommended methods to name the identifiers are as follows.

  1. Use singular nouns to name tags, and use the base form of verbs or verb phrases to name edge types.

    Not recommended:

    MATCH p=(v:players)-[e:are_following]-(v2) \\\nRETURN nodes(p);\n

    Recommended:

    MATCH p=(v:player)-[e:follow]-(v2) \\\nRETURN nodes(p);\n
  2. Use the snake case to name identifiers, and connect words with underscores (_) with all the letters lowercase.

    Not recommended:

    MATCH (v:basketballTeam) \\\nRETURN v;\n

    Recommended:

    MATCH (v:basketball_team) \\\nRETURN v;\n
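When migrating an existing schema toward the recommended snake_case style, a small helper can do the renaming mechanically. This is a hypothetical Python utility (not part of nGQL or any NebulaGraph tool) sketching the conversion:

```python
import re

# Hypothetical helper: convert a camelCase identifier to the
# snake_case style recommended above.
def to_snake_case(name: str) -> str:
    # Insert an underscore before each uppercase letter that follows
    # a lowercase letter or digit, then lowercase everything.
    return re.sub(r"(?<=[a-z0-9])([A-Z])", r"_\1", name).lower()

print(to_snake_case("basketballTeam"))  # basketball_team
```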
  3. Use uppercase keywords and lowercase variables.

    Not recommended:

    match (V:player) return V limit 5;\n

    Recommended:

    MATCH (v:player) RETURN v LIMIT 5;\n
"},{"location":"3.ngql-guide/1.nGQL-overview/ngql-style-guide/#pattern","title":"Pattern","text":"
  1. Start a new line on the right side of the arrow indicating an edge when writing patterns.

    Not recommended:

    MATCH (v:player{name: \"Tim Duncan\", age: 42}) \\\n-[e:follow]->()-[e:serve]->()<--(v2) \\\nRETURN v, e, v2;\n

    Recommended:

    MATCH (v:player{name: \"Tim Duncan\", age: 42})-[e:follow]-> \\\n()-[e:serve]->()<--(v2) \\\nRETURN v, e, v2;\n
  2. Anonymize the vertices and edges that do not need to be queried.

    Not recommended:

    MATCH (v:player)-[e:follow]->(v2) \\\nRETURN v;\n

    Recommended:

    MATCH (v:player)-[:follow]->() \\\nRETURN v;\n
  3. Place named vertices in front of anonymous vertices.

    Not recommended:

    MATCH ()-[:follow]->(v) \\\nRETURN v;\n

    Recommended:

    MATCH (v)<-[:follow]-() \\\nRETURN v;\n
"},{"location":"3.ngql-guide/1.nGQL-overview/ngql-style-guide/#string","title":"String","text":"

The strings should be surrounded by double quotes.

Not recommended:

RETURN 'Hello Nebula!';\n

Recommended:

RETURN \"Hello Nebula!\\\"123\\\"\";\n

Note

When single or double quotes need to be nested in a string, use a backslash (\\) to escape them. For example:

RETURN \"\\\"NebulaGraph is amazing,\\\" the user says.\";\n
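If you generate nGQL statements from application code, the escaping rule above can be applied programmatically. This is a hypothetical Python helper (not part of nGQL or any NebulaGraph client) sketching the idea:

```python
# Hypothetical helper: produce a double-quoted nGQL string literal,
# escaping backslashes first and then embedded double quotes.
def ngql_quote(s: str) -> str:
    return '"' + s.replace("\\", "\\\\").replace('"', '\\"') + '"'

print(ngql_quote('"NebulaGraph is amazing," the user says.'))
# "\"NebulaGraph is amazing,\" the user says."
```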
"},{"location":"3.ngql-guide/1.nGQL-overview/ngql-style-guide/#statement_termination","title":"Statement termination","text":"
  1. End the nGQL statements with an English semicolon (;).

    Not recommended:

    FETCH PROP ON player \"player100\" YIELD properties(vertex)\n

    Recommended:

    FETCH PROP ON player \"player100\" YIELD properties(vertex);\n
  2. Use a pipe (|) to separate a composite statement, and end the statement with an English semicolon at the end of the last line. Using an English semicolon before a pipe will cause the statement to fail.

    Not supported:

    GO FROM \"player100\" \\\nOVER follow \\\nYIELD dst(edge) AS id; | \\\nGO FROM $-.id \\\nOVER serve \\\nYIELD properties($$).name AS Team, properties($^).name AS Player;\n

    Supported:

    GO FROM \"player100\" \\\nOVER follow \\\nYIELD dst(edge) AS id | \\\nGO FROM $-.id \\\nOVER serve \\\nYIELD properties($$).name AS Team, properties($^).name AS Player;\n
  3. In a composite statement that contains user-defined variables, use an English semicolon to end the statements that define the variables. If you do not follow the rules to add a semicolon or use a pipe to end the composite statement, the execution will fail.

    Not supported:

    $var = GO FROM \"player100\" \\\nOVER follow \\\nYIELD follow._dst AS id \\\nGO FROM $var.id \\\nOVER serve \\\nYIELD $$.team.name AS Team, $^.player.name AS Player;\n

    Not supported:

    $var = GO FROM \"player100\" \\\nOVER follow \\\nYIELD follow._dst AS id | \\\nGO FROM $var.id \\\nOVER serve \\\nYIELD $$.team.name AS Team, $^.player.name AS Player;\n

    Supported:

    $var = GO FROM \"player100\" \\\nOVER follow \\\nYIELD follow._dst AS id; \\\nGO FROM $var.id \\\nOVER serve \\\nYIELD $$.team.name AS Team, $^.player.name AS Player;\n
"},{"location":"3.ngql-guide/10.tag-statements/1.create-tag/","title":"CREATE TAG","text":"

CREATE TAG creates a tag with the given name in a graph space.

"},{"location":"3.ngql-guide/10.tag-statements/1.create-tag/#opencypher_compatibility","title":"OpenCypher compatibility","text":"

Tags in nGQL are similar to labels in openCypher. But they are also quite different. For example, the ways to create them are different.

"},{"location":"3.ngql-guide/10.tag-statements/1.create-tag/#prerequisites","title":"Prerequisites","text":"

Running the CREATE TAG statement requires some privileges for the graph space. Otherwise, NebulaGraph throws an error.

"},{"location":"3.ngql-guide/10.tag-statements/1.create-tag/#syntax","title":"Syntax","text":"

To create a tag in a specific graph space, you must specify the current working space with the USE statement.

CREATE TAG [IF NOT EXISTS] <tag_name>\n    (\n      <prop_name> <data_type> [NULL | NOT NULL] [DEFAULT <default_value>] [COMMENT '<comment>']\n      [{, <prop_name> <data_type> [NULL | NOT NULL] [DEFAULT <default_value>] [COMMENT '<comment>']} ...] \n    )\n    [TTL_DURATION = <ttl_duration>]\n    [TTL_COL = <prop_name>]\n    [COMMENT = '<comment>'];\n
Parameter Description IF NOT EXISTS Detects if the tag that you want to create exists. If it does not exist, a new one will be created. The tag existence detection here only compares the tag names (excluding properties). <tag_name> 1. Each tag name in the graph space must be unique. 2. Tag names cannot be modified after they are set. 3. By default, the name only supports 1-4 byte UTF-8 encoded characters, including English letters (case sensitive), numbers, Chinese characters, etc. However, it cannot include special characters other than the underscore (_), and cannot start with a number. 4. To use special characters, reserved keywords, or start with a number, quote the entire name with backticks (`) and do not include periods (.) within the pair of backticks (`). For more information, see Keywords and reserved words. Note:1. If you name a tag in Chinese and encounter a SyntaxError, you need to quote the Chinese characters with backticks (`). 2. To include a backtick (`) in a tag name, use a backslash to escape the backtick, such as \\`; to include a backslash, the backslash itself also needs to be escaped, such as \\ . <prop_name> The name of the property. It must be unique for each tag. The rules for permitted property names are the same as those for tag names. <data_type> The data type of the property. The following data types are supported: Numeric, Boolean, String, Date and time, and Geography. NULL | NOT NULL Specifies if the property supports NULL | NOT NULL. The default value is NULL. DEFAULT Specifies a default value for a property. The default value can be a literal value or an expression supported by NebulaGraph. If no value is specified, the default value is used when inserting a new vertex. COMMENT The remarks of a certain property or the tag itself. The maximum length is 256 bytes. By default, there will be no comments on a tag. TTL_DURATION Specifies the life cycle for the property. The property that exceeds the specified TTL expires. 
The expiration threshold is the TTL_COL value plus the TTL_DURATION. The default value of TTL_DURATION is 0. It means the data never expires. TTL_COL Specifies the property to set a timeout on. The data type of the property must be int or timestamp. A tag can only specify one field as TTL_COL. For more information on TTL, see TTL options."},{"location":"3.ngql-guide/10.tag-statements/1.create-tag/#examples","title":"Examples","text":"
nebula> CREATE TAG IF NOT EXISTS player(name string, age int);\n\n# The following example creates a tag with no properties.\nnebula> CREATE TAG IF NOT EXISTS no_property();\n\n# The following example creates a tag with a default value.\nnebula> CREATE TAG IF NOT EXISTS player_with_default(name string, age int DEFAULT 20);\n\n# In the following example, the TTL of the create_time field is set to be 100 seconds.\nnebula> CREATE TAG IF NOT EXISTS woman(name string, age int, \\\n        married bool, salary double, create_time timestamp) \\\n        TTL_DURATION = 100, TTL_COL = \"create_time\";\n
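The TTL rule described above says a record expires once the TTL_COL value plus TTL_DURATION has passed, and a TTL_DURATION of 0 means the data never expires. A minimal Python sketch of that check (illustrative only; the actual expiration is performed inside the Storage Service):

```python
# Sketch of the TTL expiration rule: a record expires once
# ttl_col_value + ttl_duration is earlier than the current time.
def is_expired(ttl_col_value: int, ttl_duration: int, now: int) -> bool:
    # A TTL_DURATION of 0 (or less) means the data never expires.
    if ttl_duration <= 0:
        return False
    return now > ttl_col_value + ttl_duration

created = 1_700_000_000                          # example create_time (epoch seconds)
print(is_expired(created, 100, created + 50))    # False: still within the TTL
print(is_expired(created, 100, created + 101))   # True: past the threshold
```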
"},{"location":"3.ngql-guide/10.tag-statements/1.create-tag/#implementation_of_the_operation","title":"Implementation of the operation","text":"

Trying to use a newly created tag may fail because the creation of the tag is implemented asynchronously. To make sure the follow-up operations work as expected, wait for two heartbeat cycles, i.e., 20 seconds.

To change the heartbeat interval, modify the heartbeat_interval_secs parameter in the configuration files for all services.

"},{"location":"3.ngql-guide/10.tag-statements/2.drop-tag/","title":"DROP TAG","text":"

DROP TAG drops a tag with the given name in the current working graph space.

A vertex can have one or more tags.

This operation only deletes the Schema data. The files and directories on disk are not deleted until the next compaction.

Compatibility

In NebulaGraph 3.8.0, inserting a vertex without tags is not supported by default. If you want to use vertices without tags, add --graph_use_vertex_key=true to the configuration files (nebula-graphd.conf) of all Graph services in the cluster, and add --use_vertex_key=true to the configuration files (nebula-storaged.conf) of all Storage services in the cluster.

"},{"location":"3.ngql-guide/10.tag-statements/2.drop-tag/#prerequisites","title":"Prerequisites","text":" "},{"location":"3.ngql-guide/10.tag-statements/2.drop-tag/#syntax","title":"Syntax","text":"
DROP TAG [IF EXISTS] <tag_name>;\n
"},{"location":"3.ngql-guide/10.tag-statements/2.drop-tag/#example","title":"Example","text":"
nebula> CREATE TAG IF NOT EXISTS test(p1 string, p2 int);\nnebula> DROP TAG test;\n
"},{"location":"3.ngql-guide/10.tag-statements/3.alter-tag/","title":"ALTER TAG","text":"

ALTER TAG alters the structure of a tag with the given name in a graph space. You can add or drop properties, and change the data type of an existing property. You can also set a TTL (Time-To-Live) on a property, or change its TTL duration.

"},{"location":"3.ngql-guide/10.tag-statements/3.alter-tag/#notes","title":"Notes","text":" "},{"location":"3.ngql-guide/10.tag-statements/3.alter-tag/#syntax","title":"Syntax","text":"
ALTER TAG <tag_name>\n    <alter_definition> [[, alter_definition] ...]\n    [ttl_definition [, ttl_definition] ... ]\n    [COMMENT '<comment>'];\n\nalter_definition:\n| ADD    (prop_name data_type [NULL | NOT NULL] [DEFAULT <default_value>] [COMMENT '<comment>'])\n| DROP   (prop_name)\n| CHANGE (prop_name data_type [NULL | NOT NULL] [DEFAULT <default_value>] [COMMENT '<comment>'])\n\nttl_definition:\n    TTL_DURATION = ttl_duration, TTL_COL = prop_name\n
"},{"location":"3.ngql-guide/10.tag-statements/3.alter-tag/#examples","title":"Examples","text":"
nebula> CREATE TAG IF NOT EXISTS t1 (p1 string, p2 int);\nnebula> ALTER TAG t1 ADD (p3 int32, p4 fixed_string(10));\nnebula> ALTER TAG t1 TTL_DURATION = 2, TTL_COL = \"p2\";\nnebula> ALTER TAG t1 COMMENT = 'test1';\nnebula> ALTER TAG t1 ADD (p5 double NOT NULL DEFAULT 0.4 COMMENT 'p5') COMMENT='test2';\n// Change the data type of p3 in the TAG t1 from INT32 to INT64, and that of p4 from FIXED_STRING(10) to STRING.\nnebula> ALTER TAG t1 CHANGE (p3 int64, p4 string);\n[ERROR(-1005)]: Unsupported!\n
"},{"location":"3.ngql-guide/10.tag-statements/3.alter-tag/#implementation_of_the_operation","title":"Implementation of the operation","text":"

Trying to use a newly altered tag may fail because the alteration of the tag is implemented asynchronously. To make sure the follow-up operations work as expected, wait for two heartbeat cycles, i.e., 20 seconds.

To change the heartbeat interval, modify the heartbeat_interval_secs parameter in the configuration files for all services.

"},{"location":"3.ngql-guide/10.tag-statements/4.show-tags/","title":"SHOW TAGS","text":"

The SHOW TAGS statement shows the names of all tags in the current graph space.

You do not need any privileges for the graph space to run the SHOW TAGS statement. However, the returned results differ based on role privileges.

"},{"location":"3.ngql-guide/10.tag-statements/4.show-tags/#syntax","title":"Syntax","text":"
SHOW TAGS;\n
"},{"location":"3.ngql-guide/10.tag-statements/4.show-tags/#examples","title":"Examples","text":"
nebula> SHOW TAGS;\n+----------+\n| Name     |\n+----------+\n| \"player\" |\n| \"team\"   |\n+----------+\n
"},{"location":"3.ngql-guide/10.tag-statements/5.describe-tag/","title":"DESCRIBE TAG","text":"

DESCRIBE TAG returns information about the tag with the given name in a graph space, such as its field names and data types.

"},{"location":"3.ngql-guide/10.tag-statements/5.describe-tag/#prerequisite","title":"Prerequisite","text":"

Running the DESCRIBE TAG statement requires some privileges for the graph space. Otherwise, NebulaGraph throws an error.

"},{"location":"3.ngql-guide/10.tag-statements/5.describe-tag/#syntax","title":"Syntax","text":"
DESC[RIBE] TAG <tag_name>;\n

You can use DESC instead of DESCRIBE for short.

"},{"location":"3.ngql-guide/10.tag-statements/5.describe-tag/#example","title":"Example","text":"
nebula> DESCRIBE TAG player;\n+--------+----------+-------+---------+---------+\n| Field  | Type     | Null  | Default | Comment |\n+--------+----------+-------+---------+---------+\n| \"name\" | \"string\" | \"YES\" |         |         |\n| \"age\"  | \"int64\"  | \"YES\" |         |         |\n+--------+----------+-------+---------+---------+\n
"},{"location":"3.ngql-guide/10.tag-statements/6.delete-tag/","title":"DELETE TAG","text":"

DELETE TAG deletes a tag with the given name on a specified vertex.

"},{"location":"3.ngql-guide/10.tag-statements/6.delete-tag/#prerequisites","title":"Prerequisites","text":"

Running the DELETE TAG statement requires some privileges for the graph space. Otherwise, NebulaGraph throws an error.

"},{"location":"3.ngql-guide/10.tag-statements/6.delete-tag/#syntax","title":"Syntax","text":"
DELETE TAG <tag_name_list> FROM <VID_list>;\n
"},{"location":"3.ngql-guide/10.tag-statements/6.delete-tag/#example","title":"Example","text":"
nebula> CREATE TAG IF NOT EXISTS test1(p1 string, p2 int);\nnebula> CREATE TAG IF NOT EXISTS test2(p3 string, p4 int);\nnebula> INSERT VERTEX test1(p1, p2),test2(p3, p4) VALUES \"test\":(\"123\", 1, \"456\", 2);\nnebula> FETCH PROP ON * \"test\" YIELD vertex AS v;\n+------------------------------------------------------------+\n| v                                                          |\n+------------------------------------------------------------+\n| (\"test\" :test1{p1: \"123\", p2: 1} :test2{p3: \"456\", p4: 2}) |\n+------------------------------------------------------------+\nnebula> DELETE TAG test1 FROM \"test\";\nnebula> FETCH PROP ON * \"test\" YIELD vertex AS v;\n+-----------------------------------+\n| v                                 |\n+-----------------------------------+\n| (\"test\" :test2{p3: \"456\", p4: 2}) |\n+-----------------------------------+\nnebula> DELETE TAG * FROM \"test\";\nnebula> FETCH PROP ON * \"test\" YIELD vertex AS v;\n+---+\n| v |\n+---+\n+---+\n
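The DELETE TAG syntax takes a tag name list (or `*`) and a VID list. A short Python sketch (an illustrative helper under the assumption of string VIDs, not part of NebulaGraph) that composes such a statement:

```python
def delete_tag(tags, vids):
    """Compose a DELETE TAG statement.

    `tags` is a list of tag names, or the string "*" for all tags;
    `vids` is a list of string-typed vertex IDs.
    """
    tag_list = "*" if tags == "*" else ", ".join(tags)
    vid_list = ", ".join(f'"{v}"' for v in vids)
    return f"DELETE TAG {tag_list} FROM {vid_list};"
```

`delete_tag(["test1"], ["test"])` and `delete_tag("*", ["test"])` reproduce the two forms used in the console session above.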

Compatibility

"},{"location":"3.ngql-guide/10.tag-statements/improve-query-by-tag-index/","title":"Add and delete tags","text":"

OpenCypher has the features of SET label and REMOVE label to speed up the process of querying or labeling.

NebulaGraph achieves the same operations by creating tags and inserting them on an existing vertex, so that vertices can be quickly queried by tag name. Users can also run DELETE TAG to remove a tag that is no longer needed from a vertex.

"},{"location":"3.ngql-guide/10.tag-statements/improve-query-by-tag-index/#examples","title":"Examples","text":"

For example, in the basketballplayer data set, some basketball players are also team shareholders. Users can create an index on the shareholder tag for quick search. If a player is no longer a shareholder, users can delete the shareholder tag from the corresponding player with DELETE TAG.

//This example creates the shareholder tag and index.\nnebula> CREATE TAG IF NOT EXISTS shareholder();\nnebula> CREATE TAG INDEX IF NOT EXISTS shareholder_tag on shareholder();\n\n//This example adds a tag on the vertex.\nnebula> INSERT VERTEX shareholder() VALUES \"player100\":();\nnebula> INSERT VERTEX shareholder() VALUES \"player101\":();\n\n//This example queries all the shareholders.\nnebula> MATCH (v:shareholder) RETURN v;\n+--------------------------------------------------------------------+\n| v                                                                  |\n+--------------------------------------------------------------------+\n| (\"player100\" :player{age: 42, name: \"Tim Duncan\"} :shareholder{})  |\n| (\"player101\" :player{age: 36, name: \"Tony Parker\"} :shareholder{}) |\n+--------------------------------------------------------------------+\n\nnebula> LOOKUP ON shareholder YIELD id(vertex);\n+-------------+\n| id(VERTEX)  |\n+-------------+\n| \"player100\" |\n| \"player101\" |\n+-------------+\n\n//In this example, the \"player100\" is no longer a shareholder.\nnebula> DELETE TAG shareholder FROM \"player100\";\nnebula> LOOKUP ON shareholder YIELD id(vertex);\n+-------------+\n| id(VERTEX)  |\n+-------------+\n| \"player101\" |\n+-------------+\n

Note

If the index is created after inserting the test data, use the REBUILD TAG INDEX <index_name_list>; statement to rebuild the index.

"},{"location":"3.ngql-guide/11.edge-type-statements/1.create-edge/","title":"CREATE EDGE","text":"

CREATE EDGE creates an edge type with the given name in a graph space.

"},{"location":"3.ngql-guide/11.edge-type-statements/1.create-edge/#opencypher_compatibility","title":"OpenCypher compatibility","text":"

Edge types in nGQL are similar to relationship types in openCypher, but they also differ in several ways. For example, they are created differently.

"},{"location":"3.ngql-guide/11.edge-type-statements/1.create-edge/#prerequisites","title":"Prerequisites","text":"

Running the CREATE EDGE statement requires some privileges for the graph space. Otherwise, NebulaGraph throws an error.

"},{"location":"3.ngql-guide/11.edge-type-statements/1.create-edge/#syntax","title":"Syntax","text":"

To create an edge type in a specific graph space, you must specify the current working space with the USE statement.

CREATE EDGE [IF NOT EXISTS] <edge_type_name>\n    (\n      <prop_name> <data_type> [NULL | NOT NULL] [DEFAULT <default_value>] [COMMENT '<comment>']\n      [{, <prop_name> <data_type> [NULL | NOT NULL] [DEFAULT <default_value>] [COMMENT '<comment>']} ...] \n    )\n    [TTL_DURATION = <ttl_duration>]\n    [TTL_COL = <prop_name>]\n    [COMMENT = '<comment>'];\n
Parameter Description IF NOT EXISTS Detects if the edge type that you want to create exists. If it does not exist, a new one will be created. The edge type existence detection here only compares the edge type names (excluding properties). <edge_type_name> 1. The edge type name must be unique in a graph space. 2. Once the edge type name is set, it can not be altered. 3. By default, the name only supports 1-4 byte UTF-8 encoded characters, including English letters (case sensitive), numbers, Chinese characters, etc. However, it cannot include special characters other than the underscore (_), and cannot start with a number. 4. To use special characters, reserved keywords, or start with a number, quote the entire name with backticks (`) and do not include periods (.) within the pair of backticks (`). For more information, see Keywords and reserved words. Note:1. If you name an edge type in Chinese and encounter a SyntaxError, you need to quote the Chinese characters with backticks (`). 2. To include a backtick (`) in an edge type name, use a backslash to escape the backtick, such as \\`; to include a backslash, the backslash itself also needs to be escaped, such as \\ . <prop_name> The name of the property. It must be unique for each edge type. The rules for permitted property names are the same as those for edge type names. <data_type> The data type of the property. The following data types are supported: Numeric, Boolean, String, Date and time, and Geography. NULL | NOT NULL Specifies if the property supports NULL | NOT NULL. The default value is NULL. DEFAULT must be specified if NOT NULL is set. DEFAULT Specifies a default value for a property. The default value can be a literal value or an expression supported by NebulaGraph. If no value is specified, the default value is used when inserting a new edge. COMMENT The remarks of a certain property or the edge type itself. The maximum length is 256 bytes. By default, there will be no comments on an edge type. 
TTL_DURATION Specifies the life cycle for the property. The property that exceeds the specified TTL expires. The expiration threshold is the TTL_COL value plus the TTL_DURATION. The default value of TTL_DURATION is 0. It means the data never expires. TTL_COL Specifies the property to set a timeout on. The data type of the property must be int or timestamp. An edge type can only specify one field as TTL_COL. For more information on TTL, see TTL options."},{"location":"3.ngql-guide/11.edge-type-statements/1.create-edge/#examples","title":"Examples","text":"
nebula> CREATE EDGE IF NOT EXISTS follow(degree int);\n\n# The following example creates an edge type with no properties.\nnebula> CREATE EDGE IF NOT EXISTS no_property();\n\n# The following example creates an edge type with a default value.\nnebula> CREATE EDGE IF NOT EXISTS follow_with_default(degree int DEFAULT 20);\n\n# In the following example, the TTL of the p2 field is set to be 100 seconds.\nnebula> CREATE EDGE IF NOT EXISTS e1(p1 string, p2 int, p3 timestamp) \\\n        TTL_DURATION = 100, TTL_COL = \"p2\";\n
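DDL statements like these are often generated from a schema definition held in code. The following is a minimal Python sketch (an illustrative generator, not part of NebulaGraph) that builds a CREATE EDGE statement from a list of (property, type) pairs plus optional TTL settings:

```python
def create_edge(name, props, ttl_duration=None, ttl_col=None):
    """Compose a CREATE EDGE statement.

    `props` is a list of (prop_name, data_type) pairs; TTL clauses are
    appended only when both `ttl_duration` and `ttl_col` are given.
    """
    body = ", ".join(f"{p} {t}" for p, t in props)
    stmt = f"CREATE EDGE IF NOT EXISTS {name}({body})"
    if ttl_duration is not None and ttl_col is not None:
        stmt += f' TTL_DURATION = {ttl_duration}, TTL_COL = "{ttl_col}"'
    return stmt + ";"
```

Calling it with the `e1` schema from the last example above reproduces that statement on a single line.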
"},{"location":"3.ngql-guide/11.edge-type-statements/2.drop-edge/","title":"DROP EDGE","text":"

DROP EDGE drops an edge type with the given name in a graph space.

An edge can have only one edge type. After you drop an edge type, edges of that type CANNOT be accessed. They will be deleted in the next compaction.

This operation only deletes the Schema data. The files and directories on the disk are not deleted until the next compaction.

"},{"location":"3.ngql-guide/11.edge-type-statements/2.drop-edge/#prerequisites","title":"Prerequisites","text":" "},{"location":"3.ngql-guide/11.edge-type-statements/2.drop-edge/#syntax","title":"Syntax","text":"
DROP EDGE [IF EXISTS] <edge_type_name>;\n
"},{"location":"3.ngql-guide/11.edge-type-statements/2.drop-edge/#example","title":"Example","text":"
nebula> CREATE EDGE IF NOT EXISTS e1(p1 string, p2 int);\nnebula> DROP EDGE e1;\n
"},{"location":"3.ngql-guide/11.edge-type-statements/3.alter-edge/","title":"ALTER EDGE","text":"

ALTER EDGE alters the structure of an edge type with the given name in a graph space. You can add or drop properties, and change the data type of an existing property. You can also set a TTL (Time-To-Live) on a property, or change its TTL duration.

"},{"location":"3.ngql-guide/11.edge-type-statements/3.alter-edge/#notes","title":"Notes","text":" "},{"location":"3.ngql-guide/11.edge-type-statements/3.alter-edge/#syntax","title":"Syntax","text":"
ALTER EDGE <edge_type_name>\n    <alter_definition> [[, alter_definition] ...]\n    [ttl_definition [, ttl_definition] ... ]\n    [COMMENT = '<comment>'];\n\nalter_definition:\n| ADD    (prop_name data_type)\n| DROP   (prop_name)\n| CHANGE (prop_name data_type)\n\nttl_definition:\n    TTL_DURATION = ttl_duration, TTL_COL = prop_name\n
"},{"location":"3.ngql-guide/11.edge-type-statements/3.alter-edge/#example","title":"Example","text":"
nebula> CREATE EDGE IF NOT EXISTS e1(p1 string, p2 int);\nnebula> ALTER EDGE e1 ADD (p3 int, p4 string);\nnebula> ALTER EDGE e1 TTL_DURATION = 2, TTL_COL = \"p2\";\nnebula> ALTER EDGE e1 COMMENT = 'edge1';\n
"},{"location":"3.ngql-guide/11.edge-type-statements/3.alter-edge/#implementation_of_the_operation","title":"Implementation of the operation","text":"

Trying to use a newly altered edge type may fail because the alteration of the edge type is implemented asynchronously. To make sure the follow-up operations work as expected, wait for two heartbeat cycles, i.e., 20 seconds.

To change the heartbeat interval, modify the heartbeat_interval_secs parameter in the configuration files for all services.

"},{"location":"3.ngql-guide/11.edge-type-statements/4.show-edges/","title":"SHOW EDGES","text":"

SHOW EDGES shows all edge types in the current graph space.

You do not need any privileges for the graph space to run the SHOW EDGES statement. However, the returned results differ based on role privileges.

"},{"location":"3.ngql-guide/11.edge-type-statements/4.show-edges/#syntax","title":"Syntax","text":"
SHOW EDGES;\n
"},{"location":"3.ngql-guide/11.edge-type-statements/4.show-edges/#example","title":"Example","text":"
nebula> SHOW EDGES;\n+----------+\n| Name     |\n+----------+\n| \"follow\" |\n| \"serve\"  |\n+----------+\n
"},{"location":"3.ngql-guide/11.edge-type-statements/5.describe-edge/","title":"DESCRIBE EDGE","text":"

DESCRIBE EDGE returns information about the edge type with the given name in a graph space, such as its field names and data types.

"},{"location":"3.ngql-guide/11.edge-type-statements/5.describe-edge/#prerequisites","title":"Prerequisites","text":"

Running the DESCRIBE EDGE statement requires some privileges for the graph space. Otherwise, NebulaGraph throws an error.

"},{"location":"3.ngql-guide/11.edge-type-statements/5.describe-edge/#syntax","title":"Syntax","text":"
DESC[RIBE] EDGE <edge_type_name>;\n

You can use DESC instead of DESCRIBE for short.

"},{"location":"3.ngql-guide/11.edge-type-statements/5.describe-edge/#example","title":"Example","text":"
nebula> DESCRIBE EDGE follow;\n+----------+---------+-------+---------+---------+\n| Field    | Type    | Null  | Default | Comment |\n+----------+---------+-------+---------+---------+\n| \"degree\" | \"int64\" | \"YES\" |         |         |\n+----------+---------+-------+---------+---------+\n
"},{"location":"3.ngql-guide/12.vertex-statements/1.insert-vertex/","title":"INSERT VERTEX","text":"

The INSERT VERTEX statement inserts one or more vertices into a graph space in NebulaGraph.

"},{"location":"3.ngql-guide/12.vertex-statements/1.insert-vertex/#prerequisites","title":"Prerequisites","text":"

Running the INSERT VERTEX statement requires some privileges for the graph space. Otherwise, NebulaGraph throws an error.

"},{"location":"3.ngql-guide/12.vertex-statements/1.insert-vertex/#syntax","title":"Syntax","text":"
INSERT VERTEX [IF NOT EXISTS] [tag_props, [tag_props] ...]\nVALUES VID: ([prop_value_list])\n\ntag_props:\n  tag_name ([prop_name_list])\n\nprop_name_list:\n   [prop_name [, prop_name] ...]\n\nprop_value_list:\n   [prop_value [, prop_value] ...] \n

Caution

INSERT VERTEX in nGQL and CREATE in openCypher have different semantics.

Examples are as follows.

"},{"location":"3.ngql-guide/12.vertex-statements/1.insert-vertex/#examples","title":"Examples","text":"
# Insert a vertex without tag.\nnebula> INSERT VERTEX VALUES \"1\":();\n\n# The following examples create tag t1 with no property and inserts vertex \"10\" with no property.\nnebula> CREATE TAG IF NOT EXISTS t1();                   \nnebula> INSERT VERTEX t1() VALUES \"10\":(); \n
nebula> CREATE TAG IF NOT EXISTS t2 (name string, age int);                \nnebula> INSERT VERTEX t2 (name, age) VALUES \"11\":(\"n1\", 12);\n\n#  In the following example, the insertion fails because \"a13\" is not int.\nnebula> INSERT VERTEX t2 (name, age) VALUES \"12\":(\"n1\", \"a13\"); \n\n# The following example inserts two vertices at one time.\nnebula> INSERT VERTEX t2 (name, age) VALUES \"13\":(\"n3\", 12), \"14\":(\"n4\", 8); \n
nebula> CREATE TAG IF NOT EXISTS t3(p1 int);\nnebula> CREATE TAG IF NOT EXISTS t4(p2 string);\n\n# The following example inserts vertex \"21\" with two tags.\nnebula> INSERT VERTEX t3 (p1), t4(p2) VALUES \"21\": (321, \"hello\");\n

A vertex can be inserted (written) multiple times with new values. Only the last written values can be read.

# The following examples insert vertex \"11\" with new values for multiple times.\nnebula> INSERT VERTEX t2 (name, age) VALUES \"11\":(\"n2\", 13);\nnebula> INSERT VERTEX t2 (name, age) VALUES \"11\":(\"n3\", 14);\nnebula> INSERT VERTEX t2 (name, age) VALUES \"11\":(\"n4\", 15);\nnebula> FETCH PROP ON t2 \"11\" YIELD properties(vertex);\n+-----------------------+\n| properties(VERTEX)    |\n+-----------------------+\n| {age: 15, name: \"n4\"} |\n+-----------------------+\n
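The last-write-wins behavior shown above can be modeled with a trivial in-memory sketch (pure illustration, not how NebulaGraph stores data): repeated inserts on the same VID simply overwrite the stored properties rather than merging them.

```python
# Toy key-value model of "only the last written values can be read".
store = {}

def insert_vertex(vid, props):
    store[vid] = dict(props)  # full overwrite, not a merge

insert_vertex("11", {"name": "n2", "age": 13})
insert_vertex("11", {"name": "n3", "age": 14})
insert_vertex("11", {"name": "n4", "age": 15})
# After the three writes, only {"name": "n4", "age": 15} remains readable.
```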
nebula> CREATE TAG IF NOT EXISTS t5(p1 fixed_string(5) NOT NULL, p2 int, p3 int DEFAULT NULL);\nnebula> INSERT VERTEX t5(p1, p2, p3) VALUES \"001\":(\"Abe\", 2, 3);\n\n# In the following example, the insertion fails because the value of p1 cannot be NULL.\nnebula> INSERT VERTEX t5(p1, p2, p3) VALUES \"002\":(NULL, 4, 5);\n[ERROR (-1009)]: SemanticError: No schema found for `t5'\n\n# In the following example, the value of p3 is the default NULL.\nnebula> INSERT VERTEX t5(p1, p2) VALUES \"003\":(\"cd\", 5);\nnebula> FETCH PROP ON t5 \"003\" YIELD properties(vertex);\n+---------------------------------+\n| properties(VERTEX)              |\n+---------------------------------+\n| {p1: \"cd\", p2: 5, p3: __NULL__} |\n+---------------------------------+\n\n# In the following example, the allowed maximum length of p1 is 5.\nnebula> INSERT VERTEX t5(p1, p2) VALUES \"004\":(\"shalalalala\", 4);\nnebula> FETCH PROP on t5 \"004\" YIELD properties(vertex);\n+------------------------------------+\n| properties(VERTEX)                 |\n+------------------------------------+\n| {p1: \"shala\", p2: 4, p3: __NULL__} |\n+------------------------------------+\n

If you insert a vertex that already exists with IF NOT EXISTS, there will be no modification.

# The following example inserts vertex \"1\".\nnebula> INSERT VERTEX t2 (name, age) VALUES \"1\":(\"n2\", 13);\n# Modify vertex \"1\" with IF NOT EXISTS. But there will be no modification as vertex \"1\" already exists.\nnebula> INSERT VERTEX IF NOT EXISTS t2 (name, age) VALUES \"1\":(\"n3\", 14);\nnebula> FETCH PROP ON t2 \"1\" YIELD properties(vertex);\n+-----------------------+\n| properties(VERTEX)    |\n+-----------------------+\n| {age: 13, name: \"n2\"} |\n+-----------------------+\n
"},{"location":"3.ngql-guide/12.vertex-statements/2.update-vertex/","title":"UPDATE VERTEX","text":"

The UPDATE VERTEX statement updates properties on tags of a vertex.

In NebulaGraph, UPDATE VERTEX supports compare-and-set (CAS).

Note

An UPDATE VERTEX statement can only update properties on ONE TAG of a vertex.

"},{"location":"3.ngql-guide/12.vertex-statements/2.update-vertex/#syntax","title":"Syntax","text":"
UPDATE VERTEX ON <tag_name> <vid>\nSET <update_prop>\n[WHEN <condition>]\n[YIELD <output>]\n
Parameter Required Description Example ON <tag_name> Yes Specifies the tag of the vertex. The properties to be updated must be on this tag. ON player <vid> Yes Specifies the ID of the vertex to be updated. \"player100\" SET <update_prop> Yes Specifies the properties to be updated and how they will be updated. SET age = age +1 WHEN <condition> No Specifies the filter conditions. If <condition> evaluates to false, the SET clause will not take effect. WHEN name == \"Tim\" YIELD <output> No Specifies the output format of the statement. YIELD name AS Name"},{"location":"3.ngql-guide/12.vertex-statements/2.update-vertex/#example","title":"Example","text":"
// This query checks the properties of vertex \"player101\".\nnebula> FETCH PROP ON player \"player101\" YIELD properties(vertex);\n+--------------------------------+\n| properties(VERTEX)             |\n+--------------------------------+\n| {age: 36, name: \"Tony Parker\"} |\n+--------------------------------+\n\n// This query updates the age property and returns name and the new age.\nnebula> UPDATE VERTEX ON player \"player101\" \\\n        SET age = age + 2 \\\n        WHEN name == \"Tony Parker\" \\\n        YIELD name AS Name, age AS Age;\n+---------------+-----+\n| Name          | Age |\n+---------------+-----+\n| \"Tony Parker\" | 38  |\n+---------------+-----+\n
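Applications frequently assemble UPDATE VERTEX statements from the four clauses above. A short Python sketch (an illustrative builder, not a NebulaGraph client API) that does so, assuming a string-typed VID:

```python
def update_vertex(tag, vid, set_clause, when=None, yield_clause=None):
    """Compose an UPDATE VERTEX statement from its clauses.

    `when` and `yield_clause` are optional, matching the syntax table.
    """
    parts = [f'UPDATE VERTEX ON {tag} "{vid}"', f"SET {set_clause}"]
    if when:
        parts.append(f"WHEN {when}")
    if yield_clause:
        parts.append(f"YIELD {yield_clause}")
    return " ".join(parts) + ";"
```

Calling it with the clauses from the example above reproduces that query on one line.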
"},{"location":"3.ngql-guide/12.vertex-statements/3.upsert-vertex/","title":"UPSERT VERTEX","text":"

The UPSERT statement is a combination of UPDATE and INSERT. You can use UPSERT VERTEX to update the properties of a vertex if it exists or insert a new vertex if it does not exist.

Note

An UPSERT VERTEX statement can only update the properties on ONE TAG of a vertex.

The performance of UPSERT is much lower than that of INSERT because UPSERT is a read-modify-write serialization operation at the partition level.

Danger

Don't use UPSERT for scenarios with highly concurrent writes. You can use UPDATE or INSERT instead.

"},{"location":"3.ngql-guide/12.vertex-statements/3.upsert-vertex/#syntax","title":"Syntax","text":"
UPSERT VERTEX ON <tag> <vid>\nSET <update_prop>\n[WHEN <condition>]\n[YIELD <output>]\n
Parameter Required Description Example ON <tag> Yes Specifies the tag of the vertex. The properties to be updated must be on this tag. ON player <vid> Yes Specifies the ID of the vertex to be updated or inserted. \"player100\" SET <update_prop> Yes Specifies the properties to be updated and how they will be updated. SET age = age +1 WHEN <condition> No Specifies the filter conditions. WHEN name == \"Tim\" YIELD <output> No Specifies the output format of the statement. YIELD name AS Name"},{"location":"3.ngql-guide/12.vertex-statements/3.upsert-vertex/#insert_a_vertex_if_it_does_not_exist","title":"Insert a vertex if it does not exist","text":"

If a vertex does not exist, it is created whether or not the conditions in the WHEN clause are met, and the SET clause always takes effect. The property values of the new vertex depend on:

For example, if:

Then the property values in different cases are listed as follows:

Are WHEN conditions met If properties have default values Value of name Value of age Yes Yes The default value 30 Yes No NULL 30 No Yes The default value 30 No No NULL 30

Here are some examples:

// This query checks if the following three vertices exist. The result \"Empty set\" indicates that the vertices do not exist.\nnebula> FETCH PROP ON * \"player666\", \"player667\", \"player668\" YIELD properties(vertex);\n+--------------------+\n| properties(VERTEX) |\n+--------------------+\n+--------------------+\nEmpty set\n\nnebula> UPSERT VERTEX ON player \"player666\" \\\n        SET age = 30 \\\n        WHEN name == \"Joe\" \\\n        YIELD name AS Name, age AS Age;\n+----------+----------+\n| Name     | Age      |\n+----------+----------+\n| __NULL__ | 30       |\n+----------+----------+\n\nnebula> UPSERT VERTEX ON player \"player666\" \\\n        SET age = 31 \\\n        WHEN name == \"Joe\" \\\n        YIELD name AS Name, age AS Age;\n+----------+-----+\n| Name     | Age |\n+----------+-----+\n| __NULL__ | 30  |\n+----------+-----+\n\nnebula> UPSERT VERTEX ON player \"player667\" \\\n        SET age = 31 \\\n        YIELD name AS Name, age AS Age;\n+----------+-----+\n| Name     | Age |\n+----------+-----+\n| __NULL__ | 31  |\n+----------+-----+\n\nnebula> UPSERT VERTEX ON player \"player668\" \\\n        SET name = \"Amber\", age = age + 1 \\\n        YIELD name AS Name, age AS Age;\n+---------+----------+\n| Name    | Age      |\n+---------+----------+\n| \"Amber\" | __NULL__ |\n+---------+----------+\n

In the last query of the preceding examples, since age has no default value, when the vertex is created, age is NULL, and age = age + 1 does not take effect. But if age has a default value, age = age + 1 will take effect. For example:

nebula> CREATE TAG IF NOT EXISTS player_with_default(name string, age int DEFAULT 20);\nExecution succeeded\n\nnebula> UPSERT VERTEX ON player_with_default \"player101\" \\\n        SET age = age + 1 \\\n        YIELD name AS Name, age AS Age;\n\n+----------+-----+\n| Name     | Age |\n+----------+-----+\n| __NULL__ | 21  |\n+----------+-----+\n
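The behavior described above (SET always applies when the vertex is created; unmentioned properties fall back to their default or NULL; arithmetic on a NULL property yields NULL) can be captured in a pure-Python model (an illustrative sketch of the semantics, not an implementation of NebulaGraph). SET values are literals, or callables standing in for expressions such as age = age + 1:

```python
def upsert_vertex(store, vid, set_props, when=lambda props: True, defaults=None):
    """Toy model of UPSERT VERTEX semantics on a dict-backed store.

    `defaults` maps every schema property to its default value (None for
    NULL). A callable SET value receives the current property value.
    """
    if vid not in store:
        store[vid] = dict(defaults or {})  # new vertex: defaults or NULL
        apply = True                       # SET always applies on creation
    else:
        apply = when(store[vid])           # existing vertex: honor WHEN
    if apply:
        for prop, value in set_props.items():
            cur = store[vid].get(prop)
            store[vid][prop] = value(cur) if callable(value) else value
    return dict(store[vid])

# Expression-style SET value: NULL + 1 stays NULL, as in the examples above.
incr_age = lambda a: None if a is None else a + 1
```

Running this model against the `player666`, `player668`, and `player_with_default` cases above yields the same name/age outcomes as the console output.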
"},{"location":"3.ngql-guide/12.vertex-statements/3.upsert-vertex/#update_a_vertex_if_it_exists","title":"Update a vertex if it exists","text":"

If the vertex exists and the WHEN conditions are met, the vertex is updated.

nebula> FETCH PROP ON player \"player101\" YIELD properties(vertex);\n+--------------------------------+\n| properties(VERTEX)             |\n+--------------------------------+\n| {age: 36, name: \"Tony Parker\"} |\n+--------------------------------+\n\nnebula> UPSERT VERTEX ON player \"player101\" \\\n        SET age = age + 2 \\\n        WHEN name == \"Tony Parker\" \\\n        YIELD name AS Name, age AS Age;\n+---------------+-----+\n| Name          | Age |\n+---------------+-----+\n| \"Tony Parker\" | 38  |\n+---------------+-----+\n

If the vertex exists and the WHEN conditions are not met, the update does not take effect.

nebula> FETCH PROP ON player \"player101\" YIELD properties(vertex);\n+--------------------------------+\n| properties(VERTEX)             |\n+--------------------------------+\n| {age: 38, name: \"Tony Parker\"} |\n+--------------------------------+\n\nnebula> UPSERT VERTEX ON player \"player101\" \\\n        SET age = age + 2 \\\n        WHEN name == \"Someone else\" \\\n        YIELD name AS Name, age AS Age;\n+---------------+-----+\n| Name          | Age |\n+---------------+-----+\n| \"Tony Parker\" | 38  |\n+---------------+-----+\n
"},{"location":"3.ngql-guide/12.vertex-statements/4.delete-vertex/","title":"DELETE VERTEX","text":"

By default, the DELETE VERTEX statement deletes vertices but not the incoming and outgoing edges of the vertices.

Compatibility

The DELETE VERTEX statement deletes one vertex or multiple vertices at a time. You can use DELETE VERTEX together with pipes. For more information about pipe, see Pipe operator.

Note

"},{"location":"3.ngql-guide/12.vertex-statements/4.delete-vertex/#syntax","title":"Syntax","text":"
DELETE VERTEX <vid> [, <vid> ...] [WITH EDGE];\n
"},{"location":"3.ngql-guide/12.vertex-statements/4.delete-vertex/#examples","title":"Examples","text":"

This query deletes the vertex whose ID is \"team1\".

# Delete the vertex whose VID is `team1` but the related incoming and outgoing edges are not deleted.\nnebula> DELETE VERTEX \"team1\";\n\n# Delete the vertex whose VID is `team1` and the related incoming and outgoing edges.\nnebula> DELETE VERTEX \"team1\" WITH EDGE;\n
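The two forms above differ only in the optional WITH EDGE clause. A minimal Python sketch (an illustrative builder, not a NebulaGraph client API) that composes either form for string-typed VIDs:

```python
def delete_vertex(vids, with_edge=False):
    """Compose a DELETE VERTEX statement for one or more string VIDs.

    `with_edge=True` appends WITH EDGE so related edges are deleted too.
    """
    vid_list = ", ".join(f'"{v}"' for v in vids)
    suffix = " WITH EDGE" if with_edge else ""
    return f"DELETE VERTEX {vid_list}{suffix};"
```

`delete_vertex(["team1"])` and `delete_vertex(["team1"], with_edge=True)` reproduce the two console statements above.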

This query shows that you can use DELETE VERTEX together with pipe to delete vertices.

nebula> GO FROM \"player100\" OVER serve WHERE properties(edge).start_year == \"2021\" YIELD dst(edge) AS id | DELETE VERTEX $-.id;\n
"},{"location":"3.ngql-guide/12.vertex-statements/4.delete-vertex/#process_of_deleting_vertices","title":"Process of deleting vertices","text":"

Once NebulaGraph deletes vertices without WITH EDGE, all edges (incoming and outgoing) of the target vertices become dangling edges. When NebulaGraph deletes the vertices WITH EDGE, NebulaGraph traverses the incoming and outgoing edges related to the vertices and deletes them all, and then deletes the vertices.

Caution

"},{"location":"3.ngql-guide/13.edge-statements/1.insert-edge/","title":"INSERT EDGE","text":"

The INSERT EDGE statement inserts an edge or multiple edges into a graph space from a source vertex (given by src_vid) to a destination vertex (given by dst_vid) with a specific rank in NebulaGraph.

When inserting an edge that already exists, INSERT EDGE overrides the edge.

"},{"location":"3.ngql-guide/13.edge-statements/1.insert-edge/#syntax","title":"Syntax","text":"
INSERT EDGE [IF NOT EXISTS] <edge_type> ( <prop_name_list> ) VALUES \n<src_vid> -> <dst_vid>[@<rank>] : ( <prop_value_list> )\n[, <src_vid> -> <dst_vid>[@<rank>] : ( <prop_value_list> ), ...];\n\n<prop_name_list> ::=\n  [ <prop_name> [, <prop_name> ] ...]\n\n<prop_value_list> ::=\n  [ <prop_value> [, <prop_value> ] ...]\n