diff --git a/.circleci/config.yml b/.circleci/config.yml
index e8886cd9b5..23bd8011e4 100644
--- a/.circleci/config.yml
+++ b/.circleci/config.yml
@@ -87,7 +87,7 @@ jobs:
command: |
mkdir -p .circleci && cd .circleci
fetched=false
- for i in $(seq 1 6); do
+ for i in $(seq 1 5); do
echo ""
res=$(curl -fsS https://api.github.com/repos/arangodb/docs-hugo/contents/.circleci?ref=$CIRCLE_SHA1) || curlStatus=$?
if [[ -z "${curlStatus:-}" ]]; then
@@ -103,7 +103,7 @@ jobs:
fi
unset curlStatus
unset jqStatus
- sleep 10
+ sleep 60
done
if [[ "$fetched" = false ]]; then
echo "Failed to fetch download URLs"
diff --git a/.circleci/generate_config.py b/.circleci/generate_config.py
index 4d12e8194d..8548703ae6 100644
--- a/.circleci/generate_config.py
+++ b/.circleci/generate_config.py
@@ -18,7 +18,7 @@
## Load versions
versions = yaml.safe_load(open("versions.yaml", "r"))
-versions = sorted(versions, key=lambda d: d['name'])
+versions = sorted(versions["/arangodb/"], key=lambda d: d['name'])
print(f"Loaded versions {versions}")
diff --git a/README.md b/README.md
index 0466fbd353..530c4c1c7e 100644
--- a/README.md
+++ b/README.md
@@ -367,8 +367,8 @@ Inner shortcode
Tags let you display badges, usually below a headline.
This is mainly used for pointing out if a feature is only available in the
-ArangoDB Platform, the ArangoGraph Insights Platform, or both.
-See [Environment remarks](#environment-remarks) for details.
+GenAI Suite, the Data Platform, the Arango Managed Platform (AMP), or several
+of them. See [Environment remarks](#environment-remarks) for details.
It is also used for [Edition remarks](#edition-remarks) in content before
version 3.12.5.
@@ -570,7 +570,7 @@ The following shortcodes also exist but are rarely used:
- _DB-Server_, not ~~dbserver~~, ~~db-server~~, ~~DBserver~~ (unless it is a code value)
- _Coordinator_ (uppercase C)
- _Agent_, _Agency_ (uppercase A)
- - _ArangoGraph Insights Platform_ and _ArangoGraph_ for short, but not
+ - _Arango Managed Platform (AMP)_ and _ArangoGraph_ for short, but not
~~Oasis~~, ~~ArangoDB Oasis~~, or ~~ArangoDB Cloud~~
- _Deployment mode_ (single server, cluster, etc.), not ~~deployment type~~
@@ -586,7 +586,7 @@ For external links, use standard Markdown. Clicking these links automatically
opens them in a new tab:
```markdown
-[ArangoGraph Insights Platform](https://dashboard.arangodb.cloud)
+[Arango Managed Platform (AMP)](https://dashboard.arangodb.cloud)
```
For internal links, use relative paths to the Markdown files. Always link to
@@ -674,25 +674,24 @@ deprecated features in the same manner with `Deprecated in: ...`.
### Environment remarks
Pages and sections about features that are only available in certain environments
-such as the ArangoDB Platform, the ArangoGraph Insight Platform, or the
-ArangoDB Shell should indicate where they are available using the `tag` shortcode.
+such as the ArangoDB Shell should indicate where they are available using the
+`tag` shortcode.
-In the unified Platform and ArangoGraph but not in the Core:
+Features exclusive to the Data Platform, GenAI Data Platform,
+Arango Managed Platform (AMP), and ArangoDB generally don't need to be tagged
+because they are in dedicated parts of the documentation. However, if there are
+subsections with different procedures, each can be tagged accordingly.
-```markdown
-{{< tag "ArangoDB Platform" "ArangoGraph" >}}
-```
-
-In the unified Platform only:
+In the GenAI Data Platform only:
```markdown
-{{< tag "ArangoDB Platform" >}}
+{{< tag "GenAI Data Platform" >}}
```
-In ArangoGraph only:
+In the Arango Managed Platform only:
```markdown
-{{< tag "ArangoGraph" >}}
+{{< tag "AMP" >}}
```
In the ArangoDB Shell but not the server-side JavaScript API:
@@ -719,7 +718,7 @@ Enterprise Edition features should indicate that the Enterprise Edition is
required using a tag. Use the following include in the general case:
```markdown
-{{< tag "ArangoDB Enterprise Edition" "ArangoGraph" >}}
+{{< tag "ArangoDB Enterprise Edition" "AMP" >}}
```
### Add lead paragraphs
diff --git a/site/config/_default/config.yaml b/site/config/_default/config.yaml
index 67c2321b61..25e4930333 100644
--- a/site/config/_default/config.yaml
+++ b/site/config/_default/config.yaml
@@ -21,12 +21,18 @@ module:
# Version folders can be ignored temporarily for faster local builds
# of a single version (here: 3.12)
-# - excludeFiles:
-# - 3.10/*
-# - 3.11/*
-# - 3.13/*
-# source: content
-# target: content
+ - source: content
+ target: content
+ excludeFiles:
+# - arangodb/3.10/*
+# - arangodb/3.11/*
+# - arangodb/3.13/*
+
+ - source: content/arangodb/3.12
+ target: content/arangodb/stable
+
+ - source: content/arangodb/3.13
+ target: content/arangodb/devel
markup:
highlight:
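The added mounts publish the `3.12` and `3.13` folders a second time under `stable` and `devel`. A hypothetical sketch of the resulting source-to-content-path mapping (the helper is illustrative, not part of the repo):

```python
# Mount targets taken from the config above; the helper itself is hypothetical.
MOUNTS = {
    "content": "content",  # all versions (excludes are commented out above)
    "content/arangodb/3.12": "content/arangodb/stable",
    "content/arangodb/3.13": "content/arangodb/devel",
}

def published_paths(source_file: str) -> list:
    """Return every content path a source file is published under."""
    paths = []
    for src, dst in MOUNTS.items():
        if source_file == src or source_file.startswith(src + "/"):
            paths.append(dst + source_file[len(src):])
    return paths

# A 3.12 page is served both under its own path and under .../stable/:
print(published_paths("content/arangodb/3.12/_index.md"))
# -> ['content/arangodb/3.12/_index.md', 'content/arangodb/stable/_index.md']
```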
diff --git a/site/content/3.10/_index.md b/site/content/3.10/_index.md
deleted file mode 100644
index dee4818818..0000000000
--- a/site/content/3.10/_index.md
+++ /dev/null
@@ -1,42 +0,0 @@
----
-title: Recommended Resources
-menuTitle: '3.10'
-weight: 0
-layout: default
----
-{{< cloudbanner >}}
-
-{{< cards >}}
-
-{{% card title="What is ArangoDB?" link="about-arangodb/" %}}
-Get to know graphs, ArangoDB's use cases and features.
-{{% /card %}}
-
-{{% card title="Get started" link="get-started/" %}}
-Learn about ArangoDB's core concepts, how to interact with the database system,
-and get a server instance up and running.
-{{% /card %}}
-
-{{% card title="ArangoGraph Insights Platform" link="arangograph/" %}}
-Try out ArangoDB's fully-managed cloud offering for a faster time to value.
-{{% /card %}}
-
-{{% card title="AQL" link="aql/" %}}
-ArangoDB's Query Language AQL lets you use graphs, JSON documents, and search
-via a single, composable query language.
-{{% /card %}}
-
-{{% card title="Data Science" link="data-science/" %}}
-Discover the graph analytics and machine learning features of ArangoDB.
-{{% /card %}}
-
-{{% card title="Deploy" link="deploy/" %}}
-Find the right deployment mode and set up your ArangoDB instance.
-{{% /card %}}
-
-{{% card title="Develop" link="develop/" %}}
-See the in-depth feature and API documentation to start developing applications
-with ArangoDB as your backend.
-{{% /card %}}
-
-{{< /cards >}}
diff --git a/site/content/3.10/about-arangodb/_index.md b/site/content/3.10/about-arangodb/_index.md
deleted file mode 100644
index 9b96a70c37..0000000000
--- a/site/content/3.10/about-arangodb/_index.md
+++ /dev/null
@@ -1,75 +0,0 @@
----
-title: What is ArangoDB?
-menuTitle: About ArangoDB
-weight: 5
-description: >-
- ArangoDB is a scalable graph database system to drive value from connected
- data, faster
-aliases:
- - introduction
- - introduction/about-arangodb
----
-
-
-ArangoDB combines the analytical power of native graphs with an integrated
-search engine, JSON support, and a variety of data access patterns via a single,
-composable query language.
-
-ArangoDB is available in an open-source and a commercial [edition](features/_index.md).
-You can use it for on-premises deployments, as well as a fully managed
-cloud service, the [ArangoGraph Insights Platform](../arangograph/_index.md).
-
-## What are Graphs?
-
-Graphs are information networks comprised of nodes and relations.
-
-
-
-A social network is a common example of a graph. People are represented by nodes
-and their friendships by relations.
-
-
-
-Nodes are also called vertices (singular: vertex), and relations are edges that
-connect vertices.
-A vertex typically represents a specific entity (a person, a book, a sensor
-reading, etc.) and an edge defines how one entity relates to another.
-
-
-
-This paradigm of storing data feels natural because it closely matches the
-cognitive model of humans. It is an expressive data model that allows you to
-represent many problem domains and solve them with semantic queries and graph
-analytics.
-
-## Beyond Graphs
-
-Not everything is a graph use case. ArangoDB lets you equally work with
-structured, semi-structured, and unstructured data in the form of schema-free
-JSON objects, without having to connect these objects to form a graph.
-
-
-
-
-
-Depending on your needs, you may mix graphs and unconnected data.
-ArangoDB is designed from the ground up to support multiple data models with a
-single, composable query language.
-
-```aql
-FOR book IN Books
- FILTER book.title == "ArangoDB"
- FOR person IN 2..2 INBOUND book Sales, OUTBOUND People
- RETURN person.name
-```
-
-ArangoDB also comes with an integrated search engine for information retrieval,
-such as full-text search with relevance ranking.
-
-ArangoDB is written in C++ for high performance and built to work at scale, in
-the cloud or on-premises.
-
-
diff --git a/site/content/3.10/about-arangodb/features/_index.md b/site/content/3.10/about-arangodb/features/_index.md
deleted file mode 100644
index d33b990636..0000000000
--- a/site/content/3.10/about-arangodb/features/_index.md
+++ /dev/null
@@ -1,127 +0,0 @@
----
-title: Features and Capabilities
-menuTitle: Features
-weight: 20
-description: >-
- ArangoDB is a graph database with a powerful set of features for data management and analytics,
- supported by a rich ecosystem of integrations and drivers
-aliases:
- - ../introduction/features
----
-## On-premises versus Cloud
-
-### Fully managed cloud service
-
-The fully managed multi-cloud
-[ArangoGraph Insights Platform](https://dashboard.arangodb.cloud/home?utm_source=docs&utm_medium=cluster_pages&utm_campaign=docs_traffic)
-is the easiest and fastest way to get started. It runs the Enterprise Edition
-of ArangoDB, lets you deploy clusters with just a few clicks, and is operated
-by a dedicated team of ArangoDB engineers day and night. You can choose from a
-variety of support plans to meet your needs.
-
-- Supports many cloud deployment regions across the main cloud providers
- (AWS, Azure, GCP)
-- High availability featuring multi-region zone clusters, managed backups,
- and zero-downtime upgrades
-- Integrated monitoring, alerting, and log management
-- Highly secure with encryption in transit and at rest
-- Includes elastic scalability for all deployment models (OneShard and Sharded clusters)
-
-To learn more, go to the [ArangoGraph documentation](../../arangograph/_index.md).
-
-### Self-managed in the cloud
-
-ArangoDB can be self-deployed on AWS or other cloud platforms, too. However, when
-using a self-managed deployment, you take full control of managing the resources
-needed to run it in the cloud. This involves tasks such as configuring,
-provisioning, and monitoring the system. For more details, see
-[self-deploying ArangoDB in the cloud](../../deploy/in-the-cloud.md).
-
-ArangoDB supports Kubernetes through its official
-[Kubernetes Operator](../../deploy/kubernetes.md) that allows you to easily
-deploy and manage clusters within a Kubernetes environment.
-
-### On-premises
-
-Running ArangoDB on-premises means that ArangoDB is installed locally, on your
-organization's computers and servers, and involves managing all the necessary
-resources within the organization's environment, rather than using external
-services.
-
-You can install ArangoDB locally by downloading and running the
-[official packages](https://arangodb.com/download/) or run it using
-[Docker images](../../operations/installation/docker.md).
-
-You can deploy it on-premises as a
-[single server](../../deploy/single-instance/_index.md)
-or as a [cluster](../../deploy/cluster/_index.md)
-comprised of multiple nodes with synchronous replication and automatic failover
-for high availability and resilience. For the highest level of data safety,
-you can additionally set up off-site replication for your entire cluster
-([Datacenter-to-Datacenter Replication](../../deploy/arangosync/_index.md)).
-
-ArangoDB also integrates with Kubernetes, offering a
-[Kubernetes Operator](../../deploy/kubernetes.md) that lets you deploy in your
-Kubernetes cluster.
-
-## ArangoDB Editions
-
-### Community Edition
-
-ArangoDB is freely available in a **Community Edition** under the Apache 2.0
-open-source license. It is a fully-featured version without time or size
-restrictions and includes cluster support.
-
-- Open source under a permissive license
-- One database core for all graph, document, key-value, and search needs
-- A single composable query language for all data models
-- Extensible through microservices with custom REST APIs and user-definable
- query functions
-- Cluster deployments for high availability and resilience
-
-See all [Community Edition Features](community-edition.md).
-
-### Enterprise Edition
-
-ArangoDB is also available in a commercial version, called the
-**Enterprise Edition**. It includes additional features for performance and
-security, such as for scaling graphs and managing your data safely.
-
-- Includes all Community Edition features
-- Performance options to smartly shard and replicate graphs and datasets for
- optimal data locality
-- Multi-tenant deployment option for the transactional guarantees and
- performance of a single server
-- Enhanced data security with on-disk and backup encryption, key rotation,
- audit logging, and LDAP authentication
-- Incremental backups without downtime and off-site replication
-
-See all [Enterprise Edition Features](enterprise-edition.md).
-
-### Differences between the Editions
-
-| Community Edition | Enterprise Edition |
-|-------------------|--------------------|
-| Apache 2.0 License | Commercial License |
-| Sharding using consistent hashing on the default or custom shard keys | In addition, **smart sharding** for improved data locality |
-| Only hash-based graph sharding | **SmartGraphs** to intelligently shard large graph datasets and **EnterpriseGraphs** with an automatic sharding key selection |
-| Only regular collection replication without data locality optimizations | **SatelliteCollections** to replicate collections on all cluster nodes and data locality optimizations for queries |
-| No optimizations when querying sharded graphs and replicated collections together | **SmartGraphs using SatelliteCollections** to enable more local execution of graph queries |
-| Only regular graph replication without local execution optimizations | **SatelliteGraphs** to execute graph traversals locally on a cluster node |
-| Collections can be sharded alike but joins do not utilize co-location | **SmartJoins** for co-located joins in a cluster using identically sharded collections |
-| Graph traversals without parallel execution | **Parallel execution of traversal queries** with many start vertices |
-| Graph traversals always load full documents | **Traversal projections** optimize the data loading of AQL traversal queries if only a few document attributes are accessed |
-| Iterative graph processing (Pregel) for single servers | **Pregel graph processing for clusters** and single servers |
-| Inverted indexes and Views without support for search highlighting and nested search | **Search highlighting** for getting the substring positions of matches and **nested search** for matching arrays with all the conditions met by a single object |
-| Only standard Jaccard index calculation | **Jaccard similarity approximation** with MinHash for entity resolution, such as for finding duplicate records, based on how many common elements they have |{{% comment %}} Experimental feature
-| No fastText model support | Classification of text tokens and finding similar tokens using supervised **fastText word embedding models** |
-{{% /comment %}}
-| Only regular cluster deployments | **OneShard** deployment option to store all collections of a database on a single cluster node, to combine the performance of a single server and ACID semantics with a fault-tolerant cluster setup |
-| ACID transactions for multi-document / multi-collection queries on single servers, for single document operations in clusters, and for multi-document queries in clusters for collections with a single shard | In addition, ACID transactions for multi-collection queries using the OneShard feature |
-| Always read from leader shards in clusters | Optionally allow dirty reads to **read from followers** to scale reads |
-| TLS key and certificate rotation | In addition, **key rotation for JWT secrets** and **server name indication** (SNI) |
-| Built-in user management and authentication | Additional **LDAP authentication** option |
-| Only server logs | **Audit log** of server interactions |
-| No on-disk encryption | **Encryption at Rest** with hardware-accelerated on-disk encryption and key rotation |
-| Only regular backups | **Datacenter-to-Datacenter Replication** for disaster recovery |
-| Only unencrypted backups and basic data masking for backups | **Hot Backups**, **encrypted backups**, and **enhanced data masking** for backups |
diff --git a/site/content/3.10/about-arangodb/use-cases.md b/site/content/3.10/about-arangodb/use-cases.md
deleted file mode 100644
index fab9e86a90..0000000000
--- a/site/content/3.10/about-arangodb/use-cases.md
+++ /dev/null
@@ -1,164 +0,0 @@
----
-title: ArangoDB Use Cases
-menuTitle: Use Cases
-weight: 15
-description: >-
- ArangoDB is a database system with a large solution space because it combines
- graphs, documents, key-value, search engine, and machine learning all in one
-pageToc:
- maxHeadlineLevel: 2
-aliases:
- - ../introduction/use-cases
----
-## ArangoDB as a Graph Database
-
-ArangoDB as a graph database is a great fit for use cases like fraud detection,
-knowledge graphs, recommendation engines, identity and access management,
-network and IT operations, social media management, traffic management, and many
-more.
-
-### Fraud Detection
-
-{{< image src="../../images/icon-fraud-detection.png" alt="Fraud Detection icon" style="float: right; padding: 0 20px; margin-bottom: 20px;">}}
-
-Uncover illegal activities by discovering difficult-to-detect patterns.
-ArangoDB lets you look beyond individual data points in disparate data sources,
-allowing you to integrate and harmonize data to analyze activities and
-relationships all together, for a broader view of connection patterns, to detect
-complex fraudulent behavior such as fraud rings.
-
-### Recommendation Engine
-
-{{< image src="../../images/icon-recommendation-engine.png" alt="Recommendation Engine icon" style="float: right; padding: 0 20px; margin-bottom: 20px;">}}
-
-Suggest products, services, and information to users based on data relationships.
-For example, you can use ArangoDB together with PyTorch Geometric to build a
-[movie recommendation system](https://www.arangodb.com/2022/04/integrate-arangodb-with-pytorch-geometric-to-build-recommendation-systems/),
-by analyzing the movies users watched and then predicting links between the two
-with a graph neural network (GNN).
-
-### Network Management
-
-{{< image src="../../images/icon-network-management.png" alt="Network Management icon" style="float: right; padding: 0 20px; margin-bottom: 20px;">}}
-
-Reduce downtime by connecting and visualizing network, infrastructure, and code.
-Network devices and how they interconnect can naturally be modeled as a graph.
-Traversal algorithms let you explore the routes between different nodes, with the
-option to stop at subnet boundaries or to take things like the connection
-bandwidth into account when path-finding.
-
-### Customer 360
-
-{{< image src="../../images/icon-customer-360.png" alt="Customer 360 icon" style="float: right; padding: 0 20px; margin-bottom: 20px;">}}
-
-Gain a complete understanding of your customers by integrating multiple data
-sources and code. ArangoDB can act as the platform to merge and consolidate
-information in any shape, with the added ability to link related records and to
-track data origins using graph features.
-
-### Identity and Access Management
-
-{{< image src="../../images/icon-identity-management.png" alt="Identity Management icon" style="float: right; padding: 0 20px; margin-bottom: 20px;">}}
-
-Increase security and compliance by managing data access based on role and
-position. You can map out an organization chart as a graph and use ArangoDB to
-determine who is authorized to see which information. Put ArangoDB's graph
-capabilities to work to implement access control lists and permission
-inheritance.
-
-### Supply Chain
-
-{{< image src="../../images/icon-supply-chain.png" alt="Supply Chain icon" style="float: right; padding: 0 20px; margin-bottom: 20px;">}}
-
-Speed shipments by monitoring and optimizing the flow of goods through a
-supply chain. You can represent your inventory, supplier, and delivery
-information as a graph to understand what the possible sources of delays and
-disruptions are.
-
-## ArangoDB as a Document Database
-
-ArangoDB can be used as the backend for heterogeneous content management,
-e-commerce systems, Internet of Things applications, and more generally as a
-persistence layer for a broad range of services that benefit from an agile
-and scalable data store.
-
-### Content Management
-
-{{< image src="../../images/icon-content-management.png" alt="Content management icon" style="float: right; padding: 0 20px; margin-bottom: 20px;">}}
-
-Store information of any kind without upfront schema declaration. ArangoDB is
-schema-free, storing every data record as a self-contained document, allowing
-you to manage heterogeneous content with ease. Build the next (headless)
-content management system on top of ArangoDB.
-
-### E-Commerce Systems
-
-{{< image src="../../images/icon-e-commerce.png" alt="E-commerce icon" style="float: right; padding: 0 20px; margin-bottom: 20px;">}}
-
-ArangoDB combines data modeling freedom with strong consistency and resilience
-features to power online shops and ordering systems. Handle product catalog data
-with ease using any combination of free text and structured data, and process
-checkouts with the necessary transactional guarantees.
-
-### Internet of Things
-
-{{< image src="../../images/icon-internet-of-things.png" alt="Internet of things icon" style="float: right; padding: 0 20px; margin-bottom: 20px;">}}
-
-Collect sensor readings and other IoT data in ArangoDB for a single view of
-everything. Store all data points in the same system that also lets you run
-aggregation queries using sliding windows for efficient data analysis.
-
-## ArangoDB as a Key-Value Database
-
-{{< image src="../../images/icon-key-value.png" alt="Key value icon" style="float: right; padding: 0 20px; margin-bottom: 20px;">}}
-
-Key-value stores are the simplest kind of database systems. Each record is
-stored as a block of data under a key that uniquely identifies the record.
-The data is opaque, which means the system doesn't know anything about the
-contained information, it simply stores it and can retrieve it for you via
-the identifiers.
-
-This paradigm is used at the heart of ArangoDB and allows it to scale well,
-but without the limitations of a pure key-value store. Every document has a
-`_key` attribute, which is either user-provided or automatically generated.
-You can create additional indexes and work with subsets of attributes as
-needed, requiring the system to be aware of the stored data structures - unlike
-pure key-value stores.
-
-While ArangoDB can store binary data, it is not designed for
-binary large objects (BLOBs) and works best with small to medium-sized
-JSON objects.
-
-For more information about how ArangoDB persists data, see
-[Storage Engine](../components/arangodb-server/storage-engine.md).
-
-## ArangoDB as a Search Engine
-
-{{< image src="../../images/icon-search-engine.png" alt="Search engine icon" style="float: right; padding: 0 20px; margin-bottom: 20px;">}}
-
-ArangoDB has a natively integrated search engine for a broad range of
-information retrieval needs. It is powered by inverted indexes and can index
-full-text, GeoJSON, as well as arbitrary JSON data. It supports various
-kinds of search patterns (tokens, phrases, wildcard, fuzzy, geo-spatial, etc.)
-and it can rank results by relevance and similarity using popular
-scoring algorithms.
-
-It also features natural language processing (NLP) capabilities.
-{{% comment %}} Experimental feature
-and can classify or find similar terms using word embedding models.
-{{% /comment %}}
-
-For more information about the search engine, see [ArangoSearch](../index-and-search/arangosearch/_index.md).
-
-## ArangoDB for Machine Learning
-
-You can use ArangoDB as the foundation for machine learning based on graphs
-at enterprise scale. You can use it as a metadata store for model training
-parameters, run analytical algorithms in the database, or serve operative
-queries using data that you computed.
-
-ArangoDB integrates well into existing data infrastructures and provides
-connectors for popular machine learning frameworks and data processing
-ecosystems.
-
-
diff --git a/site/content/3.10/aql/examples-and-query-patterns/remove-vertex.md b/site/content/3.10/aql/examples-and-query-patterns/remove-vertex.md
deleted file mode 100644
index 60a845ad94..0000000000
--- a/site/content/3.10/aql/examples-and-query-patterns/remove-vertex.md
+++ /dev/null
@@ -1,81 +0,0 @@
----
-title: Remove vertices with AQL
-menuTitle: Remove vertex
-weight: 45
-description: >-
- Removing connected edges along with vertex documents directly in AQL is
- possible in a limited way
----
-Deleting vertices together with their associated edges is currently not handled
-directly by AQL, whereas the
-[graph management interface](../../graphs/general-graphs/management.md#remove-a-vertex)
-and the
-[REST API for the graph module](../../develop/http-api/graphs/named-graphs.md#remove-a-vertex)
-offer vertex deletion functionality.
-However, as shown in this example based on the
-[Knows Graph](../../graphs/example-graphs.md#knows-graph), a query for this
-use case can be created.
-
-
-
-When deleting vertex **eve** from the graph, we also want the edges
-`eve -> alice` and `eve -> bob` to be removed.
-The involved graph and its only edge collection have to be known. In this case it
-is the graph **knows_graph** and the edge collection **knows**.
-
-This query will delete **eve** with its adjacent edges:
-
-```aql
----
-name: GRAPHTRAV_removeVertex1
-description: ''
-dataset: knows_graph
----
-LET edgeKeys = (FOR v, e IN 1..1 ANY 'persons/eve' GRAPH 'knows_graph' RETURN e._key)
-LET r = (FOR key IN edgeKeys REMOVE key IN knows)
-REMOVE 'eve' IN persons
-```
-
-This query performs several actions:
-- use a graph traversal of depth 1 to get the `_key` of **eve's** adjacent edges
-- remove all of these edges from the `knows` collection
-- remove vertex **eve** from the `persons` collection
-
-The following query shows a different design to achieve the same result:
-
-```aql
----
-name: GRAPHTRAV_removeVertex2
-description: ''
-dataset: knows_graph
----
-LET edgeKeys = (FOR v, e IN 1..1 ANY 'persons/eve' GRAPH 'knows_graph'
- REMOVE e._key IN knows)
-REMOVE 'eve' IN persons
-```
-
-**Note**: The query has to be adjusted to match a graph with multiple vertex/edge collections.
-
-For example, the [City Graph](../../graphs/example-graphs.md#city-graph)
-contains several vertex collections (`germanCity` and `frenchCity`) and several
-edge collections (`frenchHighway`, `germanHighway`, and `internationalHighway`).
-
-
-
-To delete the city **Berlin**, all three edge collections (`frenchHighway`,
-`germanHighway`, and `internationalHighway`) have to be considered. The **REMOVE**
-operation has to be applied to all edge collections with
-`OPTIONS { ignoreErrors: true }`. Without this option, the query stops
-whenever it attempts to remove a non-existing key from a collection.
-
-```aql
----
-name: GRAPHTRAV_removeVertex3
-description: ''
-dataset: routeplanner
----
-LET edgeKeys = (FOR v, e IN 1..1 ANY 'germanCity/Berlin' GRAPH 'routeplanner' RETURN e._key)
-LET r = (FOR key IN edgeKeys REMOVE key IN internationalHighway
- OPTIONS { ignoreErrors: true } REMOVE key IN germanHighway
- OPTIONS { ignoreErrors: true } REMOVE key IN frenchHighway
- OPTIONS { ignoreErrors: true })
-REMOVE 'Berlin' IN germanCity
-```
diff --git a/site/content/3.10/aql/examples-and-query-patterns/traversals.md b/site/content/3.10/aql/examples-and-query-patterns/traversals.md
deleted file mode 100644
index 3b9452edbc..0000000000
--- a/site/content/3.10/aql/examples-and-query-patterns/traversals.md
+++ /dev/null
@@ -1,118 +0,0 @@
----
-title: Combining AQL Graph Traversals
-menuTitle: Traversals
-weight: 40
-description: >-
- You can combine graph queries with other AQL features like geo-spatial search
----
-## Finding the start vertex via a geo query
-
-Our first example will locate the start vertex for a graph traversal via [a geo index](../../index-and-search/indexing/working-with-indexes/geo-spatial-indexes.md).
-We use the [City Graph](../../graphs/example-graphs.md#city-graph) and its geo indexes:
-
-
-
-```js
----
-name: COMBINING_GRAPH_01_create_graph
-description: ''
----
-var examples = require("@arangodb/graph-examples/example-graph");
-var g = examples.loadGraph("routeplanner");
-~examples.dropGraph("routeplanner");
-```
-
-We search for all German cities within a range of 400 km around the former capital **Bonn** and find **Hamburg** and **Cologne**.
-We won't find **Paris** since it's in the `frenchCity` collection.
-
-```aql
----
-name: COMBINING_GRAPH_02_show_geo
-description: ''
-dataset: routeplanner
-bindVars:
- {
- "bonn": [7.0998, 50.7340],
- "radius": 400000
- }
----
-FOR startCity IN germanCity
- FILTER GEO_DISTANCE(@bonn, startCity.geometry) < @radius
- RETURN startCity._key
-```
-
-Let's verify that the geo indexes are actually used:
-
-```aql
----
-name: COMBINING_GRAPH_03_explain_geo
-description: ''
-dataset: routeplanner
-explain: true
-bindVars:
- {
- "bonn": [7.0998, 50.7340],
- "radius": 400000
- }
----
-FOR startCity IN germanCity
- FILTER GEO_DISTANCE(@bonn, startCity.geometry) < @radius
- RETURN startCity._key
-```
-
-And now combine this with a graph traversal:
-
-```aql
----
-name: COMBINING_GRAPH_04_combine
-description: ''
-dataset: routeplanner
-bindVars:
- {
- "bonn": [7.0998, 50.7340],
- "radius": 400000
- }
----
-FOR startCity IN germanCity
- FILTER GEO_DISTANCE(@bonn, startCity.geometry) < @radius
- FOR v, e, p IN 1..1 OUTBOUND startCity
- GRAPH 'routeplanner'
- RETURN {startcity: startCity._key, traversedCity: v._key}
-```
-
-The geo index query returns `startCity` (**Cologne** and **Hamburg**), which we then use as starting points for our graph traversal.
-For simplicity, we only return their direct neighbours. We format the result so we can see from which `startCity` each traversal came.
-
-Alternatively, we could use a `LET` statement with a subquery to efficiently group the traversals by their `startCity`:
-
-```aql
----
-name: COMBINING_GRAPH_05_combine_let
-description: ''
-dataset: routeplanner
-bindVars:
- {
- "bonn": [7.0998, 50.7340],
- "radius": 400000
- }
----
-FOR startCity IN germanCity
- FILTER GEO_DISTANCE(@bonn, startCity.geometry) < @radius
- LET oneCity = (
- FOR v, e, p IN 1..1 OUTBOUND startCity
- GRAPH 'routeplanner' RETURN v._key
- )
- RETURN {startCity: startCity._key, connectedCities: oneCity}
-```
-
-Finally, we clean up again:
-
-```js
----
-name: COMBINING_GRAPH_06_cleanup
-description: ''
----
-~var examples = require("@arangodb/graph-examples/example-graph");
-~var g = examples.loadGraph("routeplanner");
-examples.dropGraph("routeplanner");
-```
diff --git a/site/content/3.10/aql/functions/arangosearch.md b/site/content/3.10/aql/functions/arangosearch.md
deleted file mode 100644
index 0b0107c595..0000000000
--- a/site/content/3.10/aql/functions/arangosearch.md
+++ /dev/null
@@ -1,1371 +0,0 @@
----
-title: ArangoSearch functions in AQL
-menuTitle: ArangoSearch
-weight: 5
-description: >-
- ArangoSearch offers various AQL functions for search queries to control the search context, for filtering and scoring
-pageToc:
- maxHeadlineLevel: 3
----
-You can form search expressions by composing ArangoSearch function calls,
-logical operators and comparison operators. This allows you to filter Views
-as well as to utilize inverted indexes to filter collections.
-
-The AQL [`SEARCH` operation](../high-level-operations/search.md) accepts search expressions,
-such as `PHRASE(doc.text, "foo bar", "text_en")`, for querying Views. You can
-combine ArangoSearch filter and context functions as well as operators like
-`AND` and `OR` to form complex search conditions. Similarly, the
-[`FILTER` operation](../high-level-operations/filter.md) accepts such search expressions
-when using [inverted indexes](../../index-and-search/indexing/working-with-indexes/inverted-indexes.md).
-
-Scoring functions allow you to rank matches and to sort results by relevance.
-They are limited to Views.
-
-Search highlighting functions let you retrieve the string positions of matches.
-They are limited to Views.
-
-You can use most functions also without an inverted index or a View and the
-`SEARCH` keyword, but then they are not accelerated by an index.
-
-See [Information Retrieval with ArangoSearch](../../index-and-search/arangosearch/_index.md) for an
-introduction.
-
-## Context Functions
-
-### ANALYZER()
-
-`ANALYZER(expr, analyzer) → retVal`
-
-Sets the Analyzer for the given search expression.
-
-{{< info >}}
-The `ANALYZER()` function is only applicable for queries against `arangosearch` Views.
-
-In queries against `search-alias` Views and inverted indexes, you don't need to
-specify Analyzers because every field can be indexed with a single Analyzer only
-and they are inferred from the index definition.
-{{< /info >}}
-
-The default Analyzer is `identity` for any search expression that is used for
-filtering `arangosearch` Views. This utility function can be used
-to wrap a complex expression to set a particular Analyzer. It also sets it for
-all the nested functions which require such an argument to avoid repeating the
-Analyzer parameter. If an Analyzer argument is passed to a nested function
-regardless, then it takes precedence over the Analyzer set via `ANALYZER()`.
-
-The `TOKENS()` function is an exception. It requires the Analyzer name to be
-passed in all cases, even if wrapped in an `ANALYZER()` call, because it is
-not an ArangoSearch function but a regular string function that can be used
-outside of `SEARCH` operations.
-
-- **expr** (expression): any valid search expression
-- **analyzer** (string): name of an [Analyzer](../../index-and-search/analyzers.md).
-- returns **retVal** (any): the expression result that it wraps
-
-#### Example: Using a custom Analyzer
-
-Assuming a View definition with an Analyzer whose name and type is `delimiter`:
-
-```json
-{
- "links": {
- "coll": {
- "analyzers": [ "delimiter" ],
- "includeAllFields": true,
- }
- },
- ...
-}
-```
-
-… with the Analyzer properties `{ "delimiter": "|" }` and an example document
-`{ "text": "foo|bar|baz" }` in the collection `coll`, the following query would
-return the document:
-
-```aql
-FOR doc IN viewName
- SEARCH ANALYZER(doc.text == "bar", "delimiter")
- RETURN doc
-```
-
-The expression `doc.text == "bar"` has to be wrapped by `ANALYZER()` in order
-to set the Analyzer to `delimiter`. Otherwise the expression would be evaluated
-with the default `identity` Analyzer. `"foo|bar|baz" == "bar"` would not match,
-but the View does not even process the indexed fields with the `identity`
-Analyzer. The following query would also return an empty result because of
-the Analyzer mismatch:
-
-```aql
-FOR doc IN viewName
- SEARCH doc.text == "foo|bar|baz"
- //SEARCH ANALYZER(doc.text == "foo|bar|baz", "identity")
- RETURN doc
-```
-
-#### Example: Setting the Analyzer context with and without `ANALYZER()`
-
-In the query below, the search expression is wrapped by `ANALYZER()` to set the
-`text_en` Analyzer for both `PHRASE()` functions:
-
-```aql
-FOR doc IN viewName
- SEARCH ANALYZER(PHRASE(doc.text, "foo") OR PHRASE(doc.text, "bar"), "text_en")
- RETURN doc
-```
-
-Without the usage of `ANALYZER()`:
-
-```aql
-FOR doc IN viewName
- SEARCH PHRASE(doc.text, "foo", "text_en") OR PHRASE(doc.text, "bar", "text_en")
- RETURN doc
-```
-
-#### Example: Analyzer precedence and specifics of the `TOKENS()` function
-
-In the following example `ANALYZER()` is used to set the Analyzer `text_en`,
-but in the second call to `PHRASE()` a different Analyzer is set (`identity`)
-which overrules `ANALYZER()`. Therefore, the `text_en` Analyzer is used to find
-the phrase *foo* and the `identity` Analyzer to find *bar*:
-
-```aql
-FOR doc IN viewName
- SEARCH ANALYZER(PHRASE(doc.text, "foo") OR PHRASE(doc.text, "bar", "identity"), "text_en")
- RETURN doc
-```
-
-Despite the wrapping `ANALYZER()` function, the Analyzer name cannot be
-omitted in calls to the `TOKENS()` function. Both occurrences of `text_en`
-are required, to set the Analyzer for the expression `doc.text IN ...` and
-for the `TOKENS()` function itself. This is because the `TOKENS()` function
-is a regular string function that does not take the Analyzer context into
-account:
-
-```aql
-FOR doc IN viewName
- SEARCH ANALYZER(doc.text IN TOKENS("foo", "text_en"), "text_en")
- RETURN doc
-```
-
-### BOOST()
-
-`BOOST(expr, boost) → retVal`
-
-Override boost in the context of a search expression with a specified value,
-making it available for scorer functions. By default, the context has a boost
-value equal to `1.0`.
-
-- **expr** (expression): any valid search expression
-- **boost** (number): numeric boost value
-- returns **retVal** (any): the expression result that it wraps
-
-#### Example: Boosting a search sub-expression
-
-```aql
-FOR doc IN viewName
- SEARCH ANALYZER(BOOST(doc.text == "foo", 2.5) OR doc.text == "bar", "text_en")
- LET score = BM25(doc)
- SORT score DESC
- RETURN { text: doc.text, score }
-```
-
-Assuming a View with the following documents indexed and processed by the
-`text_en` Analyzer:
-
-```js
-{ "text": "foo bar" }
-{ "text": "foo" }
-{ "text": "bar" }
-{ "text": "foo baz" }
-{ "text": "baz" }
-```
-
-… the result of the above query would be:
-
-```json
-[
- {
- "text": "foo bar",
- "score": 2.787301540374756
- },
- {
- "text": "foo baz",
- "score": 1.6895781755447388
- },
- {
- "text": "foo",
- "score": 1.525835633277893
- },
- {
- "text": "bar",
- "score": 0.9913395643234253
- }
-]
-```
-
-## Filter Functions
-
-### EXISTS()
-
-{{< info >}}
-If you use `arangosearch` Views, the `EXISTS()` function only matches values if
-you set the **storeValues** link property to `"id"` in the View definition
-(the default is `"none"`).
-{{< /info >}}
-
-#### Testing for attribute presence
-
-`EXISTS(path)`
-
-Match documents where the attribute at `path` is present.
-
-- **path** (attribute path expression): the attribute to test in the document
-- returns nothing: the function evaluates to a boolean, but this value cannot be
- returned. The function can only be called in a search expression. It throws
- an error if used outside of a [`SEARCH` operation](../high-level-operations/search.md) or
- a `FILTER` operation that uses an inverted index.
-
-```aql
-FOR doc IN viewName
- SEARCH EXISTS(doc.text)
- RETURN doc
-```
-
-#### Testing for attribute type
-
-`EXISTS(path, type)`
-
-Match documents where the attribute at `path` is present _and_ is of the
-specified data type.
-
-- **path** (attribute path expression): the attribute to test in the document
-- **type** (string): data type to test for, can be one of:
- - `"null"`
- - `"bool"` / `"boolean"`
- - `"numeric"`
- - `"type"` (matches `null`, `boolean`, and `numeric` values)
- - `"string"`
- - `"analyzer"` (see below)
-- returns nothing: the function evaluates to a boolean, but this value cannot be
- returned. The function can only be called in a search expression. It throws
- an error if used outside of a [`SEARCH` operation](../high-level-operations/search.md) or
- a `FILTER` operation that uses an inverted index.
-
-```aql
-FOR doc IN viewName
- SEARCH EXISTS(doc.text, "string")
- RETURN doc
-```
-
-#### Testing for Analyzer index status
-
-`EXISTS(path, "analyzer", analyzer)`
-
-Match documents where the attribute at `path` is present _and_ was indexed
-by the specified `analyzer`.
-
-- **path** (attribute path expression): the attribute to test in the document
-- **type** (string): string literal `"analyzer"`
-- **analyzer** (string, _optional_): name of an [Analyzer](../../index-and-search/analyzers.md).
- Uses the Analyzer of a wrapping `ANALYZER()` call if not specified or
- defaults to `"identity"`
-- returns nothing: the function evaluates to a boolean, but this value cannot be
- returned. The function can only be called in a search expression. It throws
- an error if used outside of a [`SEARCH` operation](../high-level-operations/search.md) or
- a `FILTER` operation that uses an inverted index.
-
-```aql
-FOR doc IN viewName
- SEARCH EXISTS(doc.text, "analyzer", "text_en")
- RETURN doc
-```
-
-#### Testing for nested fields
-
-`EXISTS(path, "nested")`
-
-Match documents where the attribute at `path` is present _and_ is indexed
-as a nested field for [nested search with Views](../../index-and-search/arangosearch/nested-search.md)
-or [inverted indexes](../../index-and-search/indexing/working-with-indexes/inverted-indexes.md#nested-search-enterprise-edition).
-
-- **path** (attribute path expression): the attribute to test in the document
-- **type** (string): string literal `"nested"`
-- returns nothing: the function evaluates to a boolean, but this value cannot be
- returned. The function can only be called in a search expression. It throws
- an error if used outside of a [`SEARCH` operation](../high-level-operations/search.md) or
- a `FILTER` operation that uses an inverted index.
-
-**Examples**
-
-Only return documents from the View `viewName` whose `text` attribute is indexed
-as a nested field:
-
-```aql
-FOR doc IN viewName
- SEARCH EXISTS(doc.text, "nested")
- RETURN doc
-```
-
-Only return documents whose `attr` attribute and its nested `text` attribute are
-indexed as nested fields:
-
-```aql
-FOR doc IN viewName
- SEARCH doc.attr[? FILTER EXISTS(CURRENT.text, "nested")]
- RETURN doc
-```
-
-Only return documents from the collection `coll` whose `text` attribute is indexed
-as a nested field by an inverted index:
-
-```aql
-FOR doc IN coll OPTIONS { indexHint: "inv-idx", forceIndexHint: true }
- FILTER EXISTS(doc.text, "nested")
- RETURN doc
-```
-
-Only return documents whose `attr` attribute and its nested `text` attribute are
-indexed as nested fields:
-
-```aql
-FOR doc IN coll OPTIONS { indexHint: "inv-idx", forceIndexHint: true }
- FILTER doc.attr[? FILTER EXISTS(CURRENT.text, "nested")]
- RETURN doc
-```
-
-### IN_RANGE()
-
-`IN_RANGE(path, low, high, includeLow, includeHigh) → included`
-
-Match documents where the attribute at `path` is greater than (or equal to)
-`low` and less than (or equal to) `high`.
-
-You can use `IN_RANGE()` for searching more efficiently compared to an equivalent
-expression that combines two comparisons with a logical conjunction:
-
-- `IN_RANGE(path, low, high, true, true)` instead of `low <= value AND value <= high`
-- `IN_RANGE(path, low, high, true, false)` instead of `low <= value AND value < high`
-- `IN_RANGE(path, low, high, false, true)` instead of `low < value AND value <= high`
-- `IN_RANGE(path, low, high, false, false)` instead of `low < value AND value < high`
-
-`low` and `high` can be numbers or strings (technically also `null`, `true`
-and `false`), but the data type must be the same for both.
-
-{{< warning >}}
-The alphabetical order of characters is not taken into account by ArangoSearch,
-i.e. range queries in SEARCH operations against Views will not follow the
-language rules as per the defined Analyzer locale (except for the
-[`collation` Analyzer](../../index-and-search/analyzers.md#collation)) nor the server language
-(startup option `--default-language`)!
-Also see [Known Issues](../../release-notes/version-3.10/known-issues-in-3-10.md#arangosearch).
-{{< /warning >}}
-
-There is a corresponding [`IN_RANGE()` Miscellaneous Function](miscellaneous.md#in_range)
-that is used outside of `SEARCH` operations.
-
-- **path** (attribute path expression):
- the path of the attribute to test in the document
-- **low** (number\|string): minimum value of the desired range
-- **high** (number\|string): maximum value of the desired range
-- **includeLow** (bool): whether the minimum value shall be included in
- the range (left-closed interval) or not (left-open interval)
-- **includeHigh** (bool): whether the maximum value shall be included in
- the range (right-closed interval) or not (right-open interval)
-- returns **included** (bool): whether `value` is in the range
-
-If `low` and `high` are the same, but `includeLow` and/or `includeHigh` is set
-to `false`, then nothing will match. If `low` is greater than `high` nothing will
-match either.
-
-#### Example: Using numeric ranges
-
-To match documents with the attribute `value >= 3` and `value <= 5` using the
-default `"identity"` Analyzer you would write the following query:
-
-```aql
-FOR doc IN viewName
- SEARCH IN_RANGE(doc.value, 3, 5, true, true)
- RETURN doc.value
-```
-
-This will also match documents which have an array of numbers as `value`
-attribute where at least one of the numbers is in the specified boundaries.
-
-#### Example: Using string ranges
-
-Using string boundaries and a text Analyzer allows you to match documents that
-have at least one token within the specified character range:
-
-```aql
-FOR doc IN valView
- SEARCH ANALYZER(IN_RANGE(doc.value, "a","f", true, false), "text_en")
- RETURN doc
-```
-
-This will match `{ "value": "bar" }` and `{ "value": "foo bar" }` because the
-_b_ of _bar_ is in the range (`"a" <= "b" < "f"`), but not `{ "value": "foo" }`
-because the _f_ of _foo_ is excluded (`high` is "f" but `includeHigh` is false).
-
-### MIN_MATCH()
-
-`MIN_MATCH(expr1, ... exprN, minMatchCount) → fulfilled`
-
-Match documents where at least `minMatchCount` of the specified
-search expressions are satisfied.
-
-There is a corresponding [`MIN_MATCH()` Miscellaneous function](miscellaneous.md#min_match)
-that is used outside of `SEARCH` operations.
-
-- **expr** (expression, _repeatable_): any valid search expression
-- **minMatchCount** (number): minimum number of search expressions that should
- be satisfied
-- returns **fulfilled** (bool): whether at least `minMatchCount` of the
- specified expressions are `true`
-
-#### Example: Matching a subset of search sub-expressions
-
-Assuming a View with a text Analyzer, you may use it to match documents where
-the attribute contains at least two out of three tokens:
-
-```aql
-LET t = TOKENS("quick brown fox", "text_en")
-FOR doc IN viewName
- SEARCH ANALYZER(MIN_MATCH(doc.text == t[0], doc.text == t[1], doc.text == t[2], 2), "text_en")
- RETURN doc.text
-```
-
-This will match `{ "text": "the quick brown fox" }` and `{ "text": "some brown fox" }`,
-but not `{ "text": "snow fox" }` which only fulfills one of the conditions.
-
-Note that you can also use the `AT LEAST` [array comparison operator](../high-level-operations/search.md#array-comparison-operators)
-in the specific case of matching a subset of tokens against a single attribute:
-
-```aql
-FOR doc IN viewName
- SEARCH ANALYZER(TOKENS("quick brown fox", "text_en") AT LEAST (2) == doc.text, "text_en")
- RETURN doc.text
-```
-
-### MINHASH_MATCH()
-
-`MINHASH_MATCH(path, target, threshold, analyzer) → fulfilled`
-
-Match documents with an approximate Jaccard similarity of at least the
-`threshold`, approximated with the specified `minhash` Analyzer.
-
-To only compute the MinHash signatures, see the
-[`MINHASH()` Miscellaneous function](miscellaneous.md#minhash).
-
-- **path** (attribute path expression\|string): the path of the attribute in
- a document or a string
-- **target** (string): the string to hash with the specified Analyzer and to
- compare against the stored attribute
-- **threshold** (number, _optional_): a value between `0.0` and `1.0`.
-- **analyzer** (string): the name of a [`minhash` Analyzer](../../index-and-search/analyzers.md#minhash).
-- returns **fulfilled** (bool): `true` if the approximate Jaccard similarity
- is greater than or equal to the specified threshold, `false` otherwise
-
-#### Example: Find documents with a text similar to a target text
-
-Assuming a View with a `minhash` Analyzer, you can use the stored
-MinHash signature to find candidates for the more expensive Jaccard similarity
-calculation:
-
-```aql
-LET target = "the quick brown fox jumps over the lazy dog"
-LET targetSignature = TOKENS(target, "myMinHash")
-
-FOR doc IN viewName
- SEARCH MINHASH_MATCH(doc.text, target, 0.5, "myMinHash") // approximation
- LET jaccard = JACCARD(targetSignature, TOKENS(doc.text, "myMinHash"))
- FILTER jaccard > 0.75
- SORT jaccard DESC
- RETURN doc.text
-```
-
-### NGRAM_MATCH()
-
-Introduced in: v3.7.0
-
-`NGRAM_MATCH(path, target, threshold, analyzer) → fulfilled`
-
-Match documents whose attribute value has an
-[_n_-gram similarity](https://webdocs.cs.ualberta.ca/~kondrak/papers/spire05.pdf)
-higher than the specified threshold compared to the target value.
-
-The similarity is calculated by counting how long the longest sequence of
-matching _n_-grams is, divided by the target's total _n_-gram count.
-Only fully matching _n_-grams are counted.
-
-The _n_-grams for both attribute and target are produced by the specified
-Analyzer. Increasing the _n_-gram length will increase accuracy, but reduce
-error tolerance. In most cases a size of 2 or 3 will be a good choice.
-
-Also see the String Functions
-[`NGRAM_POSITIONAL_SIMILARITY()`](string.md#ngram_positional_similarity)
-and [`NGRAM_SIMILARITY()`](string.md#ngram_similarity)
-for calculating _n_-gram similarity that cannot be accelerated by a View index.
-
-- **path** (attribute path expression\|string): the path of the attribute in
- a document or a string
-- **target** (string): the string to compare against the stored attribute
-- **threshold** (number, _optional_): a value between `0.0` and `1.0`. Defaults
- to `0.7` if none is specified.
-- **analyzer** (string): the name of an [Analyzer](../../index-and-search/analyzers.md).
-- returns **fulfilled** (bool): `true` if the evaluated _n_-gram similarity value
- is greater than or equal to the specified threshold, `false` otherwise
-
-{{< info >}}
-Use an Analyzer of type `ngram` with `preserveOriginal: false` and `min` equal
-to `max`. Otherwise, the similarity score calculated internally will be lower
-than expected.
-
-The Analyzer must have the `"position"` and `"frequency"` features enabled or
-the `NGRAM_MATCH()` function will not find anything.
-{{< /info >}}
-
-#### Example: Using a custom bigram Analyzer
-
-Given a View indexing an attribute `text`, a custom _n_-gram Analyzer `"bigram"`
-(`min: 2, max: 2, preserveOriginal: false, streamType: "utf8"`) and a document
-`{ "text": "quick red fox" }`, the following query would match it (with a
-threshold of `1.0`):
-
-```aql
-FOR doc IN viewName
- SEARCH NGRAM_MATCH(doc.text, "quick fox", "bigram")
- RETURN doc.text
-```
-
-The following will also match (note the low threshold value):
-
-```aql
-FOR doc IN viewName
- SEARCH NGRAM_MATCH(doc.text, "quick blue fox", 0.4, "bigram")
- RETURN doc.text
-```
-
-The following will not match (note the high threshold value):
-
-```aql
-FOR doc IN viewName
- SEARCH NGRAM_MATCH(doc.text, "quick blue fox", 0.9, "bigram")
- RETURN doc.text
-```
-
-#### Example: Using constant values
-
-`NGRAM_MATCH()` can be called with constant arguments, but for such calls the
-`analyzer` argument is mandatory (even for calls inside of a `SEARCH` clause):
-
-```aql
-FOR doc IN viewName
- SEARCH NGRAM_MATCH("quick fox", "quick blue fox", 0.9, "bigram")
- RETURN doc.text
-```
-
-```aql
-RETURN NGRAM_MATCH("quick fox", "quick blue fox", "bigram")
-```
-
-### PHRASE()
-
-`PHRASE(path, phrasePart, analyzer)`
-
-`PHRASE(path, phrasePart1, skipTokens1, ... phrasePartN, skipTokensN, analyzer)`
-
-`PHRASE(path, [ phrasePart1, skipTokens1, ... phrasePartN, skipTokensN ], analyzer)`
-
-Search for a phrase in the referenced attribute. It only matches documents in
-which the tokens appear in the specified order. To search for tokens in any
-order use [`TOKENS()`](string.md#tokens) instead.
-
-The phrase can be expressed as an arbitrary number of `phraseParts` separated by
-*skipTokens* number of tokens (wildcards), either as separate arguments or as
-an array as the second argument.
-
-- **path** (attribute path expression): the attribute to test in the document
-- **phrasePart** (string\|array\|object): text to search for in the tokens.
- Can also be an [array](#example-using-phrase-with-an-array-of-tokens)
- comprised of string, array and [object tokens](#object-tokens), or tokens
- interleaved with numbers of `skipTokens`. The specified `analyzer` is applied
- to string and array tokens, but not for object tokens.
-- **skipTokens** (number, _optional_): amount of tokens to treat
- as wildcards
-- **analyzer** (string, _optional_): name of an [Analyzer](../../index-and-search/analyzers.md).
- Uses the Analyzer of a wrapping `ANALYZER()` call if not specified or
- defaults to `"identity"`
-- returns nothing: the function evaluates to a boolean, but this value cannot be
- returned. The function can only be called in a search expression. It throws
- an error if used outside of a [`SEARCH` operation](../high-level-operations/search.md) or
- a `FILTER` operation that uses an inverted index.
-
-{{< info >}}
-The selected Analyzer must have the `"position"` and `"frequency"` features
-enabled. The `PHRASE()` function will otherwise not find anything.
-{{< /info >}}
-
-#### Object tokens
-
-Introduced in: v3.7.0
-
-- `{IN_RANGE: [low, high, includeLow, includeHigh]}`:
- see [IN_RANGE()](#in_range). *low* and *high* can only be strings.
-- `{LEVENSHTEIN_MATCH: [token, maxDistance, transpositions, maxTerms, prefix]}`:
- - `token` (string): a string to search
- - `maxDistance` (number): maximum Levenshtein / Damerau-Levenshtein distance
- - `transpositions` (bool, _optional_): if set to `false`, a Levenshtein
- distance is computed, otherwise a Damerau-Levenshtein distance (default)
- - `maxTerms` (number, _optional_): consider only a specified number of the
- most relevant terms. One can pass `0` to consider all matched terms, but it may
- impact performance negatively. The default value is `64`.
- - `prefix` (string, _optional_): if defined, then a search for the exact
- prefix is carried out, using the matches as candidates. The Levenshtein /
- Damerau-Levenshtein distance is then computed for each candidate using the
- remainders of the strings. This option can improve performance in cases where
- there is a known common prefix. The default value is an empty string
- (introduced in v3.7.13, v3.8.1).
-- `{STARTS_WITH: [prefix]}`: see [STARTS_WITH()](#starts_with).
- Array brackets are optional
-- `{TERM: [token]}`: equal to `token` but without Analyzer tokenization.
- Array brackets are optional
-- `{TERMS: [token1, ..., tokenN]}`: one of `token1, ..., tokenN` can be found
- in specified position. Inside an array the object syntax can be replaced with
- the object field value, e.g., `[..., [token1, ..., tokenN], ...]`.
-- `{WILDCARD: [token]}`: see [LIKE()](#like).
- Array brackets are optional
-
-An array token inside an array can be used in the `TERMS` case only.
-
-Also see [Example: Using object tokens](#example-using-object-tokens).
-
-#### Example: Using a text Analyzer for a phrase search
-
-Given a View indexing an attribute `text` with the `"text_en"` Analyzer and a
-document `{ "text": "Lorem ipsum dolor sit amet, consectetur adipiscing elit" }`,
-the following query would match it:
-
-```aql
-FOR doc IN viewName
- SEARCH PHRASE(doc.text, "lorem ipsum", "text_en")
- RETURN doc.text
-```
-
-However, this search expression does not match, because the tokens `"ipsum"` and
-`"lorem"` do not appear in this order:
-
-```aql
-PHRASE(doc.text, "ipsum lorem", "text_en")
-```
-
-#### Example: Skip tokens for a proximity search
-
-To match `"ipsum"` and `"amet"` with any two tokens in between, you can use the
-following search expression:
-
-```aql
-PHRASE(doc.text, "ipsum", 2, "amet", "text_en")
-```
-
-The `skipTokens` value of `2` defines how many wildcard tokens have to appear
-between *ipsum* and *amet*. A `skipTokens` value of `0` means that the tokens
-must be adjacent. Negative values are allowed, but not very useful. These three
-search expressions are equivalent:
-
-```aql
-PHRASE(doc.text, "lorem ipsum", "text_en")
-PHRASE(doc.text, "lorem", 0, "ipsum", "text_en")
-PHRASE(doc.text, "ipsum", -1, "lorem", "text_en")
-```
-
-#### Example: Using `PHRASE()` with an array of tokens
-
-The `PHRASE()` function also accepts an array as second argument with
-`phrasePart` and `skipTokens` parameters as elements.
-
-```aql
-FOR doc IN myView SEARCH PHRASE(doc.title, ["quick brown fox"], "text_en") RETURN doc
-FOR doc IN myView SEARCH PHRASE(doc.title, ["quick", "brown", "fox"], "text_en") RETURN doc
-```
-
-This syntax variation enables the usage of computed expressions:
-
-```aql
-LET proximityCondition = [ "foo", ROUND(RAND()*10), "bar" ]
-FOR doc IN viewName
- SEARCH PHRASE(doc.text, proximityCondition, "text_en")
- RETURN doc
-```
-
-```aql
-LET tokens = TOKENS("quick brown fox", "text_en") // ["quick", "brown", "fox"]
-FOR doc IN myView SEARCH PHRASE(doc.title, tokens, "text_en") RETURN doc
-```
-
-The above example is equivalent to the more cumbersome and static form:
-
-```aql
-FOR doc IN myView SEARCH PHRASE(doc.title, "quick", 0, "brown", 0, "fox", "text_en") RETURN doc
-```
-
-You can optionally specify the number of `skipTokens` in the array form before
-every string element:
-
-```aql
-FOR doc IN myView SEARCH PHRASE(doc.title, ["quick", 1, "fox", "jumps"], "text_en") RETURN doc
-```
-
-It is the same as the following:
-
-```aql
-FOR doc IN myView SEARCH PHRASE(doc.title, "quick", 1, "fox", 0, "jumps", "text_en") RETURN doc
-```
-
-#### Example: Handling of arrays with no members
-
-Empty arrays are skipped:
-
-```aql
-FOR doc IN myView SEARCH PHRASE(doc.title, "quick", 1, [], 1, "jumps", "text_en") RETURN doc
-```
-
-The query is equivalent to:
-
-```aql
-FOR doc IN myView SEARCH PHRASE(doc.title, "quick", 2, "jumps", "text_en") RETURN doc
-```
-
-Providing only empty arrays is valid, but will yield no results.
-
-#### Example: Using object tokens
-
-Using object tokens `STARTS_WITH`, `WILDCARD`, `LEVENSHTEIN_MATCH`, `TERMS` and
-`IN_RANGE`:
-
-```aql
-FOR doc IN myView SEARCH PHRASE(doc.title,
- {STARTS_WITH: ["qui"]}, 0,
- {WILDCARD: ["b%o_n"]}, 0,
- {LEVENSHTEIN_MATCH: ["foks", 2]}, 0,
- {TERMS: ["jump", "run"]}, 0, // Analyzer not applied!
- {IN_RANGE: ["over", "through", true, false]},
- "text_en") RETURN doc
-```
-
-Note that the `text_en` Analyzer has stemming enabled, but for object tokens
-the Analyzer isn't applied. `{TERMS: ["jumps", "runs"]}` would not match the
-indexed (and stemmed!) attribute value. Therefore, the trailing `s` which would
-be stemmed away is removed from both words manually in the example.
-
-The above example is equivalent to:
-
-```aql
-FOR doc IN myView SEARCH PHRASE(doc.title,
-[
- {STARTS_WITH: "qui"}, 0,
- {WILDCARD: "b%o_n"}, 0,
- {LEVENSHTEIN_MATCH: ["foks", 2]}, 0,
- ["jumps", "runs"], 0, // Analyzer is applied using this syntax
- {IN_RANGE: ["over", "through", true, false]}
-], "text_en") RETURN doc
-```
-
-### STARTS_WITH()
-
-`STARTS_WITH(path, prefix) → startsWith`
-
-Match the value of the attribute that starts with `prefix`. If the attribute
-is processed by a tokenizing Analyzer (type `"text"` or `"delimiter"`) or if it
-is an array, then a single token/element starting with the prefix is sufficient
-to match the document.
-
-{{< warning >}}
-The alphabetical order of characters is not taken into account by ArangoSearch,
-i.e. range queries in SEARCH operations against Views will not follow the
-language rules as per the defined Analyzer locale (except for the
-[`collation` Analyzer](../../index-and-search/analyzers.md#collation)) nor the server language
-(startup option `--default-language`)!
-Also see [Known Issues](../../release-notes/version-3.10/known-issues-in-3-10.md#arangosearch).
-{{< /warning >}}
-
-There is a corresponding [`STARTS_WITH()` String function](string.md#starts_with)
-that is used outside of `SEARCH` operations.
-
-- **path** (attribute path expression): the path of the attribute to compare
- against in the document
-- **prefix** (string): a string to search at the start of the text
-- returns **startsWith** (bool): whether the specified attribute starts with
- the given prefix
-
----
-
-`STARTS_WITH(path, prefixes, minMatchCount) → startsWith`
-
-Introduced in: v3.7.1
-
-Match the value of the attribute that starts with one of the `prefixes`, or
-optionally with at least `minMatchCount` of the prefixes.
-
-- **path** (attribute path expression): the path of the attribute to compare
- against in the document
-- **prefixes** (array): an array of strings to search at the start of the text
-- **minMatchCount** (number, _optional_): minimum number of search prefixes
- that should be satisfied (see
- [example](#example-searching-for-one-or-multiple-prefixes)). The default is `1`
-- returns **startsWith** (bool): whether the specified attribute starts with at
- least `minMatchCount` of the given prefixes
-
-#### Example: Searching for an exact value prefix
-
-To match a document `{ "text": "lorem ipsum..." }` using a prefix and the
-`"identity"` Analyzer you can use it like this:
-
-```aql
-FOR doc IN viewName
- SEARCH STARTS_WITH(doc.text, "lorem ip")
- RETURN doc
-```
-
-#### Example: Searching for a prefix in text
-
-This query will match `{ "text": "lorem ipsum" }` as well as
-`{ "text": [ "lorem", "ipsum" ] }` given a View which indexes the `text`
-attribute and processes it with the `"text_en"` Analyzer:
-
-```aql
-FOR doc IN viewName
- SEARCH ANALYZER(STARTS_WITH(doc.text, "ips"), "text_en")
- RETURN doc.text
-```
-
-Note that it will not match `{ "text": "IPS (in-plane switching)" }` without
-modification to the query. The prefixes were passed to `STARTS_WITH()` as-is,
-but the built-in `text_en` Analyzer used for indexing has stemming enabled.
-So the indexed values are the following:
-
-```aql
-RETURN TOKENS("IPS (in-plane switching)", "text_en")
-```
-
-```json
-[
- [
- "ip",
- "in",
- "plane",
- "switch"
- ]
-]
-```
-
-The *s* is removed from *ips*, which leads to the prefix *ips* not matching
-the indexed token *ip*. You may either create a custom text Analyzer with
-stemming disabled to avoid this issue, or apply stemming to the prefixes:
-
-```aql
-FOR doc IN viewName
- SEARCH ANALYZER(STARTS_WITH(doc.text, TOKENS("ips", "text_en")), "text_en")
- RETURN doc.text
-```
-
-#### Example: Searching for one or multiple prefixes
-
-The `STARTS_WITH()` function accepts an array of prefix alternatives of which
-only one has to match:
-
-```aql
-FOR doc IN viewName
- SEARCH ANALYZER(STARTS_WITH(doc.text, ["something", "ips"]), "text_en")
- RETURN doc.text
-```
-
-It will match a document `{ "text": "lorem ipsum" }` but also
-`{ "text": "that is something" }`, as at least one of the words starts with a
-given prefix.
-
-The same query again, but with an explicit `minMatchCount`:
-
-```aql
-FOR doc IN viewName
- SEARCH ANALYZER(STARTS_WITH(doc.text, ["wrong", "ips"], 1), "text_en")
- RETURN doc.text
-```
-
-The number can be increased to require that at least this many prefixes must
-be present:
-
-```aql
-FOR doc IN viewName
- SEARCH ANALYZER(STARTS_WITH(doc.text, ["lo", "ips", "something"], 2), "text_en")
- RETURN doc.text
-```
-
-This will still match `{ "text": "lorem ipsum" }` because at least two prefixes
-(`lo` and `ips`) are found, but not `{ "text": "that is something" }` which only
-contains one of the prefixes (`something`).
-
-### LEVENSHTEIN_MATCH()
-
-Introduced in: v3.7.0
-
-`LEVENSHTEIN_MATCH(path, target, distance, transpositions, maxTerms, prefix) → fulfilled`
-
-Match documents with a [Damerau-Levenshtein distance](https://en.wikipedia.org/wiki/Damerau%E2%80%93Levenshtein_distance)
-lower than or equal to `distance` between the stored attribute value and
-`target`. It can optionally match documents using a pure Levenshtein distance.
-
-See [LEVENSHTEIN_DISTANCE()](string.md#levenshtein_distance)
-if you want to calculate the edit distance of two strings.
-
-- **path** (attribute path expression\|string): the path of the attribute to
- compare against in the document or a string
-- **target** (string): the string to compare against the stored attribute
-- **distance** (number): the maximum edit distance, which can be between
- `0` and `4` if `transpositions` is `false`, and between `0` and `3` if
- it is `true`
-- **transpositions** (bool, _optional_): if set to `false`, a Levenshtein
- distance is computed, otherwise a Damerau-Levenshtein distance (default)
-- **maxTerms** (number, _optional_): consider only a specified number of the
- most relevant terms. One can pass `0` to consider all matched terms, but it may
- impact performance negatively. The default value is `64`.
-- **prefix** (string, _optional_): if defined, then a search for the exact
-  prefix is carried out, using the matches as candidates. The Levenshtein /
-  Damerau-Levenshtein distance is then computed for each candidate using
-  the `target` value and the remainders of the strings, which means that the
-  **prefix needs to be removed from `target`** (see
-  [example](#example-matching-with-prefix-search)). This option can improve
-  performance in cases where there is a known common prefix. The default value
-  is an empty string (introduced in v3.7.13, v3.8.1).
-- returns **fulfilled** (bool): `true` if the calculated distance is less than
-  or equal to *distance*, `false` otherwise
-
-#### Example: Matching with and without transpositions
-
-The Levenshtein distance between _quick_ and _quikc_ is `2` because it requires
-two operations to go from one to the other (remove _k_, insert _k_ at a
-different position).
-
-```aql
-FOR doc IN viewName
- SEARCH LEVENSHTEIN_MATCH(doc.text, "quikc", 2, false) // matches "quick"
- RETURN doc.text
-```
-
-The Damerau-Levenshtein distance is `1` (move _k_ to the end).
-
-```aql
-FOR doc IN viewName
- SEARCH LEVENSHTEIN_MATCH(doc.text, "quikc", 1) // matches "quick"
- RETURN doc.text
-```
-
-#### Example: Matching with prefix search
-
-Match documents with a Levenshtein distance of 1 with the prefix `qui`. The edit
-distance is calculated using the search term `kc` (`quikc` with the prefix `qui`
-removed) and the stored value without the prefix (e.g. `ck`). The prefix `qui`
-is constant.
-
-```aql
-FOR doc IN viewName
- SEARCH LEVENSHTEIN_MATCH(doc.text, "kc", 1, false, 64, "qui") // matches "quick"
- RETURN doc.text
-```
-
-You may compute the prefix and suffix from the input string as follows:
-
-```aql
-LET input = "quikc"
-LET prefixSize = 3
-LET prefix = LEFT(input, prefixSize)
-LET suffix = SUBSTRING(input, prefixSize)
-FOR doc IN viewName
- SEARCH LEVENSHTEIN_MATCH(doc.text, suffix, 1, false, 64, prefix) // matches "quick"
- RETURN doc.text
-```
-
-#### Example: Basing the edit distance on string length
-
-You may want to pick the maximum edit distance based on string length.
-If the stored attribute is the string _quick_ and the target string is
-_quicksands_, then the Levenshtein distance is 5, with 50% of the
-characters mismatching. If the inputs are _q_ and _qu_, then the distance
-is only 1, although it is also a 50% mismatch.
-
-```aql
-LET target = "input"
-LET targetLength = LENGTH(target)
-LET maxDistance = (targetLength > 5 ? 2 : (targetLength >= 3 ? 1 : 0))
-FOR doc IN viewName
- SEARCH LEVENSHTEIN_MATCH(doc.text, target, maxDistance, true)
- RETURN doc.text
-```
-
-### LIKE()
-
-Introduced in: v3.7.2
-
-`LIKE(path, search) → bool`
-
-Check whether the pattern `search` is contained in the attribute denoted by `path`,
-using wildcard matching.
-
-- `_`: A single arbitrary character
-- `%`: Zero, one or many arbitrary characters
-- `\\_`: A literal underscore
-- `\\%`: A literal percent sign
-
-{{< info >}}
-Literal backslashes require different amounts of escaping depending on the
-context:
-- `\` in bind variables (_Table_ view mode) in the web interface (automatically
- escaped to `\\` unless the value is wrapped in double quotes and already
- escaped properly)
-- `\\` in bind variables (_JSON_ view mode) and queries in the web interface
-- `\\` in bind variables in arangosh
-- `\\\\` in queries in arangosh
-- Double the amount compared to arangosh in shells that use backslashes for
-  escaping (`\\\\` in bind variables and `\\\\\\\\` in queries)
-{{< /info >}}
-
-Searching with the `LIKE()` function in the context of a `SEARCH` operation
-is backed by View indexes. The [String `LIKE()` function](string.md#like),
-on the other hand, is used in other contexts such as `FILTER` operations and
-cannot be accelerated by any sort of index. Another difference is that the
-ArangoSearch variant does not accept a third argument to enable
-case-insensitive matching. This can be controlled with Analyzers instead.
-
-- **path** (attribute path expression): the path of the attribute to compare
- against in the document
-- **search** (string): a search pattern that can contain the wildcard characters
- `%` (meaning any sequence of characters, including none) and `_` (any single
- character). Literal `%` and `_` must be escaped with backslashes.
-- returns **bool** (bool): `true` if the pattern is contained in the value of
-  the attribute denoted by `path`, and `false` otherwise
-
-#### Example: Searching with wildcards
-
-```aql
-FOR doc IN viewName
- SEARCH ANALYZER(LIKE(doc.text, "foo%b_r"), "text_en")
- RETURN doc.text
-```
-
-`LIKE` can also be used in operator form:
-
-```aql
-FOR doc IN viewName
- SEARCH ANALYZER(doc.text LIKE "foo%b_r", "text_en")
- RETURN doc.text
-```
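-
-Literal wildcard characters need to be escaped as described in the info box
-above. As a minimal sketch, assuming the View indexes a `text` attribute with
-the default `identity` Analyzer and contains a document like
-`{ "text": "100% cotton" }` (hypothetical data), the following query written
-in the web interface matches it:
-
-```aql
-FOR doc IN viewName
-  SEARCH LIKE(doc.text, "100\\% %")
-  RETURN doc.text
-```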
-
-## Geo functions
-
-The following functions can be accelerated by View indexes. There are
-corresponding [Geo Functions](geo.md) for the regular geo index
-type, but also general purpose functions such as GeoJSON constructors that can
-be used in conjunction with ArangoSearch.
-
-### GEO_CONTAINS()
-
-Introduced in: v3.8.0
-
-`GEO_CONTAINS(geoJsonA, geoJsonB) → bool`
-
-Checks whether the [GeoJSON object](geo.md#geojson) `geoJsonA`
-fully contains `geoJsonB` (every point in B is also in A).
-
-- **geoJsonA** (object\|array): first GeoJSON object or coordinate array
- (in longitude, latitude order)
-- **geoJsonB** (object\|array): second GeoJSON object or coordinate array
- (in longitude, latitude order)
-- returns **bool** (bool): `true` when every point in B is also contained in A,
- `false` otherwise
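-
-For example, you can search for documents whose stored geometry lies within a
-given polygon. This is a minimal sketch, assuming a View `restaurantsView`
-that indexes a `location` attribute with a custom `geojson` Analyzer (both
-hypothetical names):
-
-```aql
-LET neighborhood = GEO_POLYGON([
-  [6.93, 50.92], [6.96, 50.92], [6.96, 50.94], [6.93, 50.94], [6.93, 50.92]
-])
-FOR doc IN restaurantsView
-  SEARCH ANALYZER(GEO_CONTAINS(neighborhood, doc.location), "geojson")
-  RETURN doc
-```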
-
-### GEO_DISTANCE()
-
-Introduced in: v3.8.0
-
-`GEO_DISTANCE(geoJsonA, geoJsonB) → distance`
-
-Return the distance between two [GeoJSON objects](geo.md#geojson),
-measured from the `centroid` of each shape.
-
-- **geoJsonA** (object\|array): first GeoJSON object or coordinate array
- (in longitude, latitude order)
-- **geoJsonB** (object\|array): second GeoJSON object or coordinate array
- (in longitude, latitude order)
-- returns **distance** (number): the distance between the centroid points of
- the two objects on the reference ellipsoid
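-
-For example, you can filter by a maximum distance from a reference point and
-sort the matches from closest to furthest. A minimal sketch under the same
-hypothetical setup (a View `restaurantsView` indexing a `location` attribute
-with a `geojson` Analyzer):
-
-```aql
-LET cologne = GEO_POINT(6.94, 50.93)
-FOR doc IN restaurantsView
-  SEARCH ANALYZER(GEO_DISTANCE(doc.location, cologne) < 2000, "geojson")
-  SORT GEO_DISTANCE(doc.location, cologne)
-  RETURN doc
-```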
-
-### GEO_IN_RANGE()
-
-Introduced in: v3.8.0
-
-`GEO_IN_RANGE(geoJsonA, geoJsonB, low, high, includeLow, includeHigh) → bool`
-
-Checks whether the distance between two [GeoJSON objects](geo.md#geojson)
-lies within a given interval. The distance is measured from the `centroid` of
-each shape.
-
-- **geoJsonA** (object\|array): first GeoJSON object or coordinate array
- (in longitude, latitude order)
-- **geoJsonB** (object\|array): second GeoJSON object or coordinate array
- (in longitude, latitude order)
-- **low** (number): minimum value of the desired range
-- **high** (number): maximum value of the desired range
-- **includeLow** (bool, optional): whether the minimum value shall be included
- in the range (left-closed interval) or not (left-open interval). The default
- value is `true`
-- **includeHigh** (bool): whether the maximum value shall be included in the
- range (right-closed interval) or not (right-open interval). The default value
- is `true`
-- returns **bool** (bool): whether the evaluated distance lies within the range
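-
-For example, to find documents whose stored location is between 500 and 2000
-meters away from a reference point (a minimal sketch, again assuming a View
-`restaurantsView` with a `geojson` Analyzer):
-
-```aql
-LET cologne = GEO_POINT(6.94, 50.93)
-FOR doc IN restaurantsView
-  SEARCH ANALYZER(GEO_IN_RANGE(doc.location, cologne, 500, 2000), "geojson")
-  RETURN doc
-```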
-
-### GEO_INTERSECTS()
-
-Introduced in: v3.8.0
-
-`GEO_INTERSECTS(geoJsonA, geoJsonB) → bool`
-
-Checks whether the [GeoJSON object](geo.md#geojson) `geoJsonA`
-intersects with `geoJsonB` (i.e. at least one point of B is in A or vice versa).
-
-- **geoJsonA** (object\|array): first GeoJSON object or coordinate array
- (in longitude, latitude order)
-- **geoJsonB** (object\|array): second GeoJSON object or coordinate array
- (in longitude, latitude order)
-- returns **bool** (bool): `true` if A and B intersect, `false` otherwise
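-
-For example, to find documents whose stored geometry crosses a given path
-(a minimal sketch, again assuming a View `restaurantsView` with a `geojson`
-Analyzer):
-
-```aql
-LET route = GEO_LINESTRING([
-  [6.93, 50.92], [6.96, 50.94]
-])
-FOR doc IN restaurantsView
-  SEARCH ANALYZER(GEO_INTERSECTS(route, doc.location), "geojson")
-  RETURN doc
-```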
-
-## Scoring Functions
-
-Scoring functions return a ranking value for the documents found by a
-[SEARCH operation](../high-level-operations/search.md). The better the documents match
-the search expression, the higher the returned number.
-
-The first argument to any scoring function is always the document emitted by
-a `FOR` operation over an `arangosearch` View.
-
-To sort the result set by relevance, with the more relevant documents coming
-first, sort in **descending order** by the score (e.g. `SORT BM25(...) DESC`).
-
-You may calculate custom scores based on a scoring function using document
-attributes and numeric functions (e.g. `TFIDF(doc) * LOG(doc.value)`):
-
-```aql
-FOR movie IN imdbView
- SEARCH PHRASE(movie.title, "Star Wars", "text_en")
- SORT BM25(movie) * LOG(movie.runtime + 1) DESC
- RETURN movie
-```
-
-Sorting by more than one score is allowed. You may also sort by a mix of
-scores and attributes from multiple Views as well as collections:
-
-```aql
-FOR a IN viewA
- FOR c IN coll
- FOR b IN viewB
- SORT TFIDF(b), c.name, BM25(a)
- ...
-```
-
-### BM25()
-
-`BM25(doc, k, b) → score`
-
-Sorts documents using the
-[**Best Matching 25** algorithm](https://en.wikipedia.org/wiki/Okapi_BM25)
-(Okapi BM25).
-
-- **doc** (document): must be emitted by `FOR ... IN viewName`
-- **k** (number, _optional_): calibrates the text term frequency scaling.
- The value needs to be non-negative (`0.0` or higher), or the returned
- score is an undefined value that may cause unpredictable results.
- The default is `1.2`. A `k` value of `0` corresponds to a binary model
- (no term frequency), and a large value corresponds to using raw term frequency
-- **b** (number, _optional_): determines the scaling by the total text length.
- The value needs to be between `0.0` and `1.0` (inclusive), or the returned
- score is an undefined value that may cause unpredictable results.
- The default is `0.75`. At the extreme values of the coefficient `b`, BM25
- turns into the ranking functions known as:
- - BM11 for `b` = `1` (corresponds to fully scaling the term weight by the
- total text length)
- - BM15 for `b` = `0` (corresponds to no length normalization)
-- returns **score** (number): computed ranking value
-
-{{< info >}}
-The Analyzers used for indexing document attributes must have the `"frequency"`
-feature enabled. The `BM25()` function will otherwise return a score of 0.
-The Analyzers should have the `"norm"` feature enabled, too, or normalization
-will be disabled, which is not meaningful for BM25 and BM11. BM15 does not need
-the `"norm"` feature as it has no length normalization.
-{{< /info >}}
-
-#### Example: Sorting by default `BM25()` score
-
-Sorting by relevance with BM25 at default settings:
-
-```aql
-FOR doc IN viewName
- SEARCH ...
- SORT BM25(doc) DESC
- RETURN doc
-```
-
-#### Example: Sorting with tuned `BM25()` ranking
-
-Sorting by relevance, with double-weighted term frequency and with full text
-length normalization:
-
-```aql
-FOR doc IN viewName
- SEARCH ...
- SORT BM25(doc, 2.4, 1) DESC
- RETURN doc
-```
-
-### TFIDF()
-
-`TFIDF(doc, normalize) → score`
-
-Sorts documents using the
-[**term frequency–inverse document frequency** algorithm](https://en.wikipedia.org/wiki/TF-IDF)
-(TF-IDF).
-
-- **doc** (document): must be emitted by `FOR ... IN viewName`
-- **normalize** (bool, _optional_): specifies whether scores should be
- normalized. The default is `false`.
-- returns **score** (number): computed ranking value
-
-{{< info >}}
-The Analyzers used for indexing document attributes must have the `"frequency"`
-feature enabled. The `TFIDF()` function will otherwise return a score of 0.
-The Analyzers need to have the `"norm"` feature enabled, too, if you want to use
-`TFIDF()` with the `normalize` parameter set to `true`.
-{{< /info >}}
-
-#### Example: Sorting by default `TFIDF()` score
-
-Sort by relevance using the TF-IDF score:
-
-```aql
-FOR doc IN viewName
- SEARCH ...
- SORT TFIDF(doc) DESC
- RETURN doc
-```
-
-#### Example: Sorting by `TFIDF()` score with normalization
-
-Sort by relevance using a normalized TF-IDF score:
-
-```aql
-FOR doc IN viewName
- SEARCH ...
- SORT TFIDF(doc, true) DESC
- RETURN doc
-```
-
-#### Example: Sort by value and `TFIDF()`
-
-Sort by the value of the `text` attribute in ascending order, then by the TFIDF
-score in descending order where the attribute values are equivalent:
-
-```aql
-FOR doc IN viewName
- SEARCH ...
- SORT doc.text, TFIDF(doc) DESC
- RETURN doc
-```
-
-## Search Highlighting Functions
-
-{{< tag "ArangoDB Enterprise Edition" "ArangoGraph" >}}
-
-### OFFSET_INFO()
-
-`OFFSET_INFO(doc, paths) → offsetInfo`
-
-Returns the attribute paths and substring offsets of matched terms, phrases, or
-_n_-grams for search highlighting purposes.
-
-- **doc** (document): must be emitted by `FOR ... IN viewName`
-- **paths** (string\|array): a string or an array of strings, each describing an
- attribute and array element path you want to get the offsets for. Use `.` to
- access nested objects, and `[n]` with `n` being an array index to specify array
- elements. The attributes need to be indexed by Analyzers with the `offset`
- feature enabled.
-- returns **offsetInfo** (array): an array of objects, limited to a default of
- 10 offsets per path. Each object has the following attributes:
- - **name** (array): the attribute and array element path as an array of
- strings and numbers. You can pass this name to the
- [`VALUE()` function](document-object.md) to dynamically look up the value.
- - **offsets** (array): an array of arrays with the matched positions. Each
- inner array has two elements with the start offset and the length of a match.
-
- {{< warning >}}
- The offsets describe the positions in bytes, not characters. You may need
- to account for characters encoded using multiple bytes.
- {{< /warning >}}
-
----
-
-`OFFSET_INFO(doc, rules) → offsetInfo`
-
-- **doc** (document): must be emitted by `FOR ... IN viewName`
-- **rules** (array): an array of objects with the following attributes:
- - **name** (string): an attribute and array element path
- you want to get the offsets for. Use `.` to access nested objects,
- and `[n]` with `n` being an array index to specify array elements. The
- attributes need to be indexed by Analyzers with the `offset` feature enabled.
- - **options** (object): an object with the following attributes:
- - **maxOffsets** (number, _optional_): the total number of offsets to
- collect per path. Default: `10`.
- - **limits** (object, _optional_): an object with the following attributes:
-      - **term** (number, _optional_): the total number of term offsets to
-        collect per path. Default: 2³².
-      - **phrase** (number, _optional_): the total number of phrase offsets to
-        collect per path. Default: 2³².
-      - **ngram** (number, _optional_): the total number of _n_-gram offsets to
-        collect per path. Default: 2³².
-- returns **offsetInfo** (array): an array of objects, each with the following
- attributes:
- - **name** (array): the attribute and array element path as an array of
- strings and numbers. You can pass this name to the
-    [`VALUE()` function](document-object.md) to dynamically look up the value.
- - **offsets** (array): an array of arrays with the matched positions, capped
- to the specified limits. Each inner array has two elements with the start
- offset and the length of a match.
-
- {{< warning >}}
- The start offsets and lengths describe the positions in bytes, not characters.
- You may need to account for characters encoded using multiple bytes.
- {{< /warning >}}
-
-**Examples**
-
-Search a View and get the offset information for the matches:
-
-```js
----
-name: aqlOffsetInfo
-description: ''
----
-~db._create("food");
-~db.food.save({ name: "avocado", description: { en: "The avocado is a medium-sized, evergreen tree, native to the Americas." } });
-~db.food.save({ name: "tomato", description: { en: "The tomato is the edible berry of the tomato plant." } });
-~var analyzers = require("@arangodb/analyzers");
-~var analyzer = analyzers.save("text_en_offset", "text", { locale: "en", stopwords: [] }, ["frequency", "norm", "position", "offset"]);
-~db._createView("food_view", "arangosearch", { links: { food: { fields: { description: { fields: { en: { analyzers: ["text_en_offset"] } } } } } } });
-~assert(db._query(`FOR d IN food_view COLLECT WITH COUNT INTO c RETURN c`).toArray()[0] === 2);
-db._query(`
- FOR doc IN food_view
- SEARCH ANALYZER(TOKENS("avocado tomato", "text_en_offset") ANY == doc.description.en, "text_en_offset")
- RETURN OFFSET_INFO(doc, ["description.en"])`);
-~db._dropView("food_view");
-~db._drop("food");
-~analyzers.remove(analyzer.name);
-```
-
-For full examples, see [Search Highlighting](../../index-and-search/arangosearch/search-highlighting.md).
diff --git a/site/content/3.10/aql/functions/geo.md b/site/content/3.10/aql/functions/geo.md
deleted file mode 100644
index b35d8c375b..0000000000
--- a/site/content/3.10/aql/functions/geo.md
+++ /dev/null
@@ -1,966 +0,0 @@
----
-title: Geo-spatial functions in AQL
-menuTitle: Geo
-weight: 35
-description: >-
- AQL supports functions for geo-spatial queries and a subset of calls can be
- accelerated by geo-spatial indexes
----
-## Geo-spatial data representations
-
-You can model geo-spatial information in different ways using the data types
-available in ArangoDB. The recommended way is to use objects with **GeoJSON**
-geometry but you can also use **longitude and latitude coordinate pairs**
-for points. Both models are supported by
-[Geo-Spatial Indexes](../../index-and-search/indexing/working-with-indexes/geo-spatial-indexes.md).
-
-### Coordinate pairs
-
-Longitude and latitude coordinates are numeric values and can be stored in the
-following ways:
-
-- Coordinates using an array with two numbers in `[longitude, latitude]` order,
- for example, in a user-chosen attribute called `location`:
-
- ```json
- {
- "location": [ -73.983, 40.764 ]
- }
- ```
-
-- Coordinates using an array with two numbers in `[latitude, longitude]` order,
- for example, in a user-chosen attribute called `location`:
-
- ```json
- {
- "location": [ 40.764, -73.983 ]
- }
- ```
-
-- Coordinates using two separate numeric attributes, for example, in two
- user-chosen attributes called `lat` and `lng` as sub-attributes of a `location`
- attribute:
-
- ```json
- {
- "location": {
- "lat": 40.764,
- "lng": -73.983
- }
- }
- ```
-
-### GeoJSON
-
-GeoJSON is a geospatial data format based on JSON. It defines several different
-types of JSON objects and the way in which they can be combined to represent
-data about geographic shapes on the Earth's surface.
-
-Example of a document with a GeoJSON Point stored in a user-chosen attribute
-called `location` (with coordinates in `[longitude, latitude]` order):
-
-```json
-{
- "location": {
- "type": "Point",
- "coordinates": [ -73.983, 40.764 ]
- }
-}
-```
-
-GeoJSON uses a geographic coordinate reference system,
-World Geodetic System 1984 (WGS 84), and units of decimal degrees.
-
-Internally ArangoDB maps all coordinate pairs onto a unit sphere. Distances are
-projected onto a sphere with the Earth's *Volumetric mean radius* of *6371
-km*. ArangoDB implements a useful subset of the GeoJSON format
-[(RFC 7946)](https://tools.ietf.org/html/rfc7946).
-Feature Objects and the GeometryCollection type are not supported.
-Supported geometry object types are:
-
-- Point
-- MultiPoint
-- LineString
-- MultiLineString
-- Polygon
-- MultiPolygon
-
-#### Point
-
-A [GeoJSON Point](https://tools.ietf.org/html/rfc7946#section-3.1.2) is a
-[position](https://tools.ietf.org/html/rfc7946#section-3.1.1) comprised of
-a longitude and a latitude:
-
-```json
-{
- "type": "Point",
- "coordinates": [100.0, 0.0]
-}
-```
-
-#### MultiPoint
-
-A [GeoJSON MultiPoint](https://tools.ietf.org/html/rfc7946#section-3.1.7) is
-an array of positions:
-
-```json
-{
- "type": "MultiPoint",
- "coordinates": [
- [100.0, 0.0],
- [101.0, 1.0]
- ]
-}
-```
-
-#### LineString
-
-A [GeoJSON LineString](https://tools.ietf.org/html/rfc7946#section-3.1.4) is
-an array of two or more positions:
-
-```json
-{
- "type": "LineString",
- "coordinates": [
- [100.0, 0.0],
- [101.0, 1.0]
- ]
-}
-```
-
-#### MultiLineString
-
-A [GeoJSON MultiLineString](https://tools.ietf.org/html/rfc7946#section-3.1.5) is
-an array of LineString coordinate arrays:
-
-```json
-{
- "type": "MultiLineString",
- "coordinates": [
- [
- [100.0, 0.0],
- [101.0, 1.0]
- ],
- [
- [102.0, 2.0],
- [103.0, 3.0]
- ]
- ]
-}
-```
-
-#### Polygon
-
-A [GeoJSON Polygon](https://tools.ietf.org/html/rfc7946#section-3.1.6) consists
-of a series of closed `LineString` objects (ring-like). These *Linear Ring*
-objects consist of four or more coordinate pairs with the first and last
-coordinate pair being equal. Coordinate pairs of a Polygon are an array of
-linear ring coordinate arrays. The first element in the array represents
-the exterior ring. Any subsequent elements represent interior rings
-(holes within the surface).
-
-The orientation of the first linear ring is crucial: the right-hand-rule
-is applied, so that the area to the left of the path of the linear ring
-(when walking on the surface of the Earth) is considered to be the
-"interior" of the polygon. All other linear rings must be contained
-within this interior. According to the GeoJSON standard, the subsequent
-linear rings must be oriented following the right-hand-rule, too,
-that is, they must run **clockwise** around the hole (viewed from
-above). However, ArangoDB is tolerant here (as suggested by the
-[GeoJSON standard](https://datatracker.ietf.org/doc/html/rfc7946#section-3.1.6)),
-all but the first linear ring are inverted if the orientation is wrong.
-
-In the end, a point is considered to be in the interior of the polygon,
-if and only if one has to cross an odd number of linear rings to reach the
-exterior of the polygon prescribed by the first linear ring.
-
-A number of additional rules apply (and are enforced by the GeoJSON
-parser):
-
-- A polygon must contain at least one linear ring, i.e., it must not be
- empty.
-- A linear ring may not be empty; it needs at least three _distinct_
-  coordinate pairs, that is, at least 4 coordinate pairs (since the first and
-  last must be the same).
-- No two edges of linear rings in the polygon may intersect; in
-  particular, no linear ring may be self-intersecting.
-- Within the same linear ring, consecutive coordinate pairs may be the same,
- otherwise all coordinate pairs need to be distinct (except the first and last one).
-- Linear rings of a polygon must not share edges, but they may share coordinate pairs.
-- A linear ring defines two regions on the sphere. ArangoDB always
- interprets the region that lies to the left of the boundary ring (in
- the direction of its travel on the surface of the Earth) as the
- interior of the ring. This is in contrast to earlier versions of
- ArangoDB before 3.10, which always took the **smaller** of the two
- regions as the interior. Therefore, from 3.10 on one can now have
- polygons whose outer ring encloses more than half the Earth's surface.
-- The interior rings must be contained in the interior of the outer ring.
-- Interior rings should follow the above rule for orientation
- (counterclockwise external rings, clockwise internal rings, interior
- always to the left of the line).
-
-Here is an example with no holes:
-
-```json
-{
- "type": "Polygon",
- "coordinates": [
- [
- [100.0, 0.0],
- [101.0, 0.0],
- [101.0, 1.0],
- [100.0, 1.0],
- [100.0, 0.0]
- ]
- ]
-}
-```
-
-Here is an example with a hole:
-
-```json
-{
- "type": "Polygon",
- "coordinates": [
- [
- [100.0, 0.0],
- [101.0, 0.0],
- [101.0, 1.0],
- [100.0, 1.0],
- [100.0, 0.0]
- ],
- [
- [100.8, 0.8],
- [100.8, 0.2],
- [100.2, 0.2],
- [100.2, 0.8],
- [100.8, 0.8]
- ]
- ]
-}
-```
-
-#### MultiPolygon
-
-A [GeoJSON MultiPolygon](https://tools.ietf.org/html/rfc7946#section-3.1.6) consists
-of multiple polygons. The "coordinates" member is an array of
-_Polygon_ coordinate arrays. See [above](#polygon) for the rules and
-the meaning of polygons.
-
-If the polygons in a MultiPolygon are disjoint, then a point is in the
-interior of the MultiPolygon if and only if it is
-contained in one of the polygons. If some polygon P2 in a MultiPolygon
-is contained in another polygon P1, then P2 is treated like a hole
-in P1 and containment of points is defined with the even-odd-crossings rule
-(see [Polygon](#polygon)).
-
-Additionally, the following rules apply and are enforced for
-MultiPolygons:
-
-- No two edges in the linear rings of the polygons of a MultiPolygon
- may intersect.
-- Polygons in the same MultiPolygon may not share edges, but they may share
- coordinate pairs.
-
-Example with two polygons, the second one with a hole:
-
-```json
-{
- "type": "MultiPolygon",
- "coordinates": [
- [
- [
- [102.0, 2.0],
- [103.0, 2.0],
- [103.0, 3.0],
- [102.0, 3.0],
- [102.0, 2.0]
- ]
- ],
- [
- [
- [100.0, 0.0],
- [101.0, 0.0],
- [101.0, 1.0],
- [100.0, 1.0],
- [100.0, 0.0]
- ],
- [
- [100.2, 0.2],
- [100.2, 0.8],
- [100.8, 0.8],
- [100.8, 0.2],
- [100.2, 0.2]
- ]
- ]
- ]
-}
-```
-
-## GeoJSON interpretation
-
-Note the following technical detail about GeoJSON: The
-[GeoJSON standard, Section 3.1.1 Position](https://datatracker.ietf.org/doc/html/rfc7946#section-3.1.1)
-prescribes that lines are **cartesian lines in cylindrical coordinates**
-(longitude/latitude). However, this definition is inconvenient in practice,
-since such lines are not geodesic on the surface of the Earth.
-Furthermore, the best available algorithms for geospatial computations on Earth
-typically use geodesic lines as the boundaries of polygons on Earth.
-
-Therefore, ArangoDB uses the **syntax of the GeoJSON** standard,
-but then interprets lines (and boundaries of polygons) as
-**geodesic lines (pieces of great circles) on Earth**. This is a
-violation of the GeoJSON standard, but it serves a practical purpose.
-
-Note in particular that this can sometimes lead to unexpected results.
-Consider the following polygon (remember that GeoJSON has
-**longitude before latitude** in coordinate pairs):
-
-```json
-{ "type": "Polygon", "coordinates": [[
- [4, 54], [4, 47], [16, 47], [16, 54], [4, 54]
-]] }
-```
-
-
-
-It does not contain the point `[10, 47]` since the shortest path (geodesic)
-from `[4, 47]` to `[16, 47]` lies North relative to the parallel of latitude at
-47 degrees. On the contrary, the polygon does contain the point `[10, 54]` as it
-lies South of the parallel of latitude at 54 degrees.
-
-{{< info >}}
-ArangoDB versions before 3.10 did an inconsistent special detection of
-"rectangle" polygons that versions from 3.10 onward no longer do, see
-[Legacy Polygons](../../index-and-search/indexing/working-with-indexes/geo-spatial-indexes.md#legacy-polygons).
-{{< /info >}}
-
-Furthermore, there is an issue with the interpretation of linear rings
-(boundaries of polygons) according to
-[GeoJSON standard, Section 3.1.6 Polygon](https://datatracker.ietf.org/doc/html/rfc7946#section-3.1.6).
-This section states explicitly:
-
-> A linear ring MUST follow the right-hand rule with respect to the
-> area it bounds, i.e., exterior rings are counter-clockwise, and
-> holes are clockwise.
-
-This rather misleading phrase means that when a linear ring is used as
-the boundary of a polygon, the "interior" of the polygon lies **to the left**
-of the boundary when one travels on the surface of the Earth and
-along the linear ring. For
-example, the polygon below travels **counter-clockwise** around the point
-`[10, 50]`, and thus the interior of the polygon contains this point and
-its surroundings, but not, for example, the North Pole and the South
-Pole.
-
-```json
-{ "type": "Polygon", "coordinates": [[
- [4, 54], [4, 47], [16, 47], [16, 54], [4, 54]
-]] }
-```
-
-
-
-On the other hand, the following polygon travels **clockwise** around the point
-`[10, 50]`, and thus its "interior" does not contain `[10, 50]`, but does
-contain the North Pole and the South Pole:
-
-```json
-{ "type": "Polygon", "coordinates": [[
- [4, 54], [16, 54], [16, 47], [4, 47], [4, 54]
-]] }
-```
-
-
-
-Remember that the "interior" is to the left of the given
-linear ring, so this second polygon is basically the complement on Earth
-of the previous polygon!
-
-ArangoDB versions before 3.10 did not follow this rule and always took the
-"smaller" connected component of the surface as the "interior" of the polygon.
-This made it impossible to specify polygons which covered more than half of the
-sphere. From version 3.10 onward, ArangoDB recognizes this correctly.
-See [Legacy Polygons](../../index-and-search/indexing/working-with-indexes/geo-spatial-indexes.md#legacy-polygons)
-for how to deal with this issue.
-
-## Geo utility functions
-
-The following helper functions **can** use geo indexes, but do not have to in
-all cases. You can use all of these functions in combination with each other,
-and if you have configured a geo index it may be utilized,
-see [Geo Indexing](../../index-and-search/indexing/working-with-indexes/geo-spatial-indexes.md).
-
-### DISTANCE()
-
-`DISTANCE(latitude1, longitude1, latitude2, longitude2) → distance`
-
-Calculate the distance between two arbitrary points in meters (as birds
-would fly). The value is computed using the haversine formula, which is based
-on a spherical Earth model. It's fast to compute and is accurate to around 0.3%,
-which is sufficient for most use cases such as location-aware services.
-
-- **latitude1** (number): the latitude of the first point
-- **longitude1** (number): the longitude of the first point
-- **latitude2** (number): the latitude of the second point
-- **longitude2** (number): the longitude of the second point
-- returns **distance** (number): the distance between both points in **meters**
-
-```aql
-// Distance from Brandenburg Gate (Berlin) to ArangoDB headquarters (Cologne)
-DISTANCE(52.5163, 13.3777, 50.9322, 6.94) // 476918.89688380965 (~477km)
-
-// Sort a small number of documents based on distance to Central Park (New York)
-FOR doc IN coll // e.g. documents returned by a traversal
- SORT DISTANCE(doc.latitude, doc.longitude, 40.78, -73.97)
- RETURN doc
-```
-
-### GEO_CONTAINS()
-
-`GEO_CONTAINS(geoJsonA, geoJsonB) → bool`
-
-Checks whether the [GeoJSON object](#geojson) `geoJsonA`
-fully contains `geoJsonB` (every point in B is also in A). The object `geoJsonA`
-has to be of type _Polygon_ or _MultiPolygon_. For other types containment is
-not well-defined because of numerical stability problems.
-
-- **geoJsonA** (object): first GeoJSON object
-- **geoJsonB** (object): second GeoJSON object, or a coordinate array in
- `[longitude, latitude]` order
-- returns **bool** (bool): `true` if every point in B is also contained in A,
- otherwise `false`
-
-{{< info >}}
-ArangoDB follows and exposes the same behavior as the underlying
-S2 geometry library. As stated in the S2 documentation:
-
-> Point containment is defined such that if the sphere is subdivided
-> into faces (loops), every point is contained by exactly one face.
-> This implies that linear rings do not necessarily contain their vertices.
-
-As a consequence, a linear ring or polygon does not necessarily contain its
-boundary edges!
-{{< /info >}}
-
-You can optimize queries that contain a `FILTER` expression of the following
-form with an S2-based [geospatial index](../../index-and-search/indexing/working-with-indexes/geo-spatial-indexes.md):
-
-```aql
-FOR doc IN coll
- FILTER GEO_CONTAINS(geoJson, doc.geo)
- ...
-```
-
-In this example, you would create the index for the collection `coll`, on the
-attribute `geo`. You need to set the `geoJson` index option to `true`.
-The `geoJson` variable needs to evaluate to a valid GeoJSON object. Also note
-the argument order: the stored document attribute `doc.geo` is passed as the
-second argument. Passing it as the first argument, like
-`FILTER GEO_CONTAINS(doc.geo, geoJson)` to test whether `doc.geo` contains
-`geoJson`, cannot utilize the index.
-
-### GEO_DISTANCE()
-
-`GEO_DISTANCE(geoJsonA, geoJsonB, ellipsoid) → distance`
-
-Return the distance between two GeoJSON objects in meters, measured from the
-**centroid** of each shape. For a list of supported types, see
-[GeoJSON](#geojson).
-
-- **geoJsonA** (object): first GeoJSON object, or a coordinate array in
- `[longitude, latitude]` order
-- **geoJsonB** (object): second GeoJSON object, or a coordinate array in
- `[longitude, latitude]` order
-- **ellipsoid** (string, *optional*): reference ellipsoid to use.
- Supported are `"sphere"` (default) and `"wgs84"`.
-- returns **distance** (number): the distance between the centroid points of
- the two objects on the reference ellipsoid in **meters**
-
-```aql
-LET polygon = {
- type: "Polygon",
- coordinates: [[[-11.5, 23.5], [-10.5, 26.1], [-11.2, 27.1], [-11.5, 23.5]]]
-}
-FOR doc IN collectionName
- LET distance = GEO_DISTANCE(doc.geometry, polygon) // calculates the distance
- RETURN distance
-```
-
-You can optimize queries that contain a `FILTER` expression of the following
-form with an S2-based [geospatial index](../../index-and-search/indexing/working-with-indexes/geo-spatial-indexes.md):
-
-```aql
-FOR doc IN coll
- FILTER GEO_DISTANCE(geoJson, doc.geo) <= limit
- ...
-```
-
-In this example, you would create the index for the collection `coll`, on the
-attribute `geo`. You need to set the `geoJson` index option to `true`.
-`geoJson` needs to evaluate to a valid GeoJSON object. `limit` must be a
-distance in meters; it cannot be an expression. An upper bound with `<` or
-`<=`, a lower bound with `>` or `>=`, or both, are equally supported.
-
-You can also optimize queries that use a `SORT` condition of the following form
-with a geospatial index:
-
-```aql
- SORT GEO_DISTANCE(geoJson, doc.geo)
-```
-
-The index covers returning matches from closest to furthest away, or vice versa.
-You may combine such a `SORT` with a `FILTER` expression that utilizes the
-geospatial index, too, via the [`GEO_DISTANCE()`](#geo_distance),
-[`GEO_CONTAINS()`](#geo_contains), and [`GEO_INTERSECTS()`](#geo_intersects)
-functions.
-
-### GEO_AREA()
-
-Introduced in: v3.5.1
-
-`GEO_AREA(geoJson, ellipsoid) → area`
-
-Return the area for a [Polygon](#polygon) or [MultiPolygon](#multipolygon)
-on a sphere with the average Earth radius, or an ellipsoid.
-
-- **geoJson** (object): a GeoJSON object
-- **ellipsoid** (string, *optional*): reference ellipsoid to use.
- Supported are `"sphere"` (default) and `"wgs84"`.
-- returns **area** (number): the area of the polygon in **square meters**
-
-```aql
-LET polygon = {
- type: "Polygon",
- coordinates: [[[-11.5, 23.5], [-10.5, 26.1], [-11.2, 27.1], [-11.5, 23.5]]]
-}
-RETURN GEO_AREA(polygon, "wgs84")
-```
-
-### GEO_EQUALS()
-
-`GEO_EQUALS(geoJsonA, geoJsonB) → bool`
-
-Checks whether two [GeoJSON objects](#geojson) are equal or not.
-
-- **geoJsonA** (object): first GeoJSON object.
-- **geoJsonB** (object): second GeoJSON object.
-- returns **bool** (bool): `true` if they are equal, otherwise `false`.
-
-```aql
-LET polygonA = GEO_POLYGON([
- [-11.5, 23.5], [-10.5, 26.1], [-11.2, 27.1], [-11.5, 23.5]
-])
-LET polygonB = GEO_POLYGON([
- [-11.5, 23.5], [-10.5, 26.1], [-11.2, 27.1], [-11.5, 23.5]
-])
-RETURN GEO_EQUALS(polygonA, polygonB) // true
-```
-
-```aql
-LET polygonA = GEO_POLYGON([
- [-11.1, 24.0], [-10.5, 26.1], [-11.2, 27.1], [-11.1, 24.0]
-])
-LET polygonB = GEO_POLYGON([
- [-11.5, 23.5], [-10.5, 26.1], [-11.2, 27.1], [-11.5, 23.5]
-])
-RETURN GEO_EQUALS(polygonA, polygonB) // false
-```
-
-### GEO_INTERSECTS()
-
-`GEO_INTERSECTS(geoJsonA, geoJsonB) → bool`
-
-Checks whether the [GeoJSON object](#geojson) `geoJsonA`
-intersects with `geoJsonB` (i.e. at least one point in B is also in A or vice-versa).
-
-- **geoJsonA** (object): first GeoJSON object
-- **geoJsonB** (object): second GeoJSON object, or a coordinate array in
- `[longitude, latitude]` order
-- returns **bool** (bool): `true` if B intersects A, `false` otherwise
-
-You can optimize queries that contain a `FILTER` expression of the following
-form with an S2-based [geospatial index](../../index-and-search/indexing/working-with-indexes/geo-spatial-indexes.md):
-
-```aql
-FOR doc IN coll
- FILTER GEO_INTERSECTS(geoJson, doc.geo)
- ...
-```
-
-In this example, you would create the index for the collection `coll`, on the
-attribute `geo`. You need to set the `geoJson` index option to `true`.
-`geoJson` needs to evaluate to a valid GeoJSON object. Also note
-the argument order: the stored document attribute `doc.geo` is passed as the
-second argument. Passing it as the first argument, like
-`FILTER GEO_INTERSECTS(doc.geo, geoJson)` to test whether `doc.geo` intersects
-`geoJson`, cannot utilize the index.
-
-### GEO_IN_RANGE()
-
-Introduced in: v3.8.0
-
-`GEO_IN_RANGE(geoJsonA, geoJsonB, low, high, includeLow, includeHigh) → bool`
-
-Checks whether the distance between two [GeoJSON objects](#geojson)
-lies within a given interval. The distance is measured from the **centroid** of
-each shape.
-
-- **geoJsonA** (object\|array): first GeoJSON object, or a coordinate array
- in `[longitude, latitude]` order
-- **geoJsonB** (object\|array): second GeoJSON object, or a coordinate array
- in `[longitude, latitude]` order
-- **low** (number): minimum value of the desired range
-- **high** (number): maximum value of the desired range
-- **includeLow** (bool, optional): whether the minimum value shall be included
- in the range (left-closed interval) or not (left-open interval). The default
- value is `true`
-- **includeHigh** (bool): whether the maximum value shall be included in the
- range (right-closed interval) or not (right-open interval). The default value
- is `true`
-- returns **bool** (bool): whether the evaluated distance lies within the range
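-
-For example, to find documents whose stored location is at least 500 and less
-than 2000 meters away from a reference point, excluding the upper bound. This
-is a minimal sketch, assuming a collection `coll` whose documents store a
-GeoJSON geometry in a `location` attribute (hypothetical names):
-
-```aql
-LET gate = GEO_POINT(13.3777, 52.5163)
-FOR doc IN coll
-  FILTER GEO_IN_RANGE(doc.location, gate, 500, 2000, true, false)
-  RETURN doc
-```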
-
-### IS_IN_POLYGON()
-
-Determine whether a point is inside a polygon.
-
-{{< warning >}}
-The `IS_IN_POLYGON()` AQL function is **deprecated** as of ArangoDB 3.4.0 in
-favor of the new [`GEO_CONTAINS()` AQL function](#geo_contains), which works with
-[GeoJSON](https://tools.ietf.org/html/rfc7946) Polygons and MultiPolygons.
-{{< /warning >}}
-
-`IS_IN_POLYGON(polygon, latitude, longitude) → bool`
-
-- **polygon** (array): an array of arrays with 2 elements each, representing the
- points of the polygon in the format `[latitude, longitude]`
-- **latitude** (number): the latitude of the point to search
-- **longitude** (number): the longitude of the point to search
-- returns **bool** (bool): `true` if the point (`[latitude, longitude]`) is
- inside the `polygon` or `false` if it's not. The result is undefined (can be
- `true` or `false`) if the specified point is exactly on a boundary of the
- polygon.
-
-```aql
-// checks if the point (latitude 4, longitude 7) is contained inside the polygon
-IS_IN_POLYGON( [ [ 0, 0 ], [ 0, 10 ], [ 10, 10 ], [ 10, 0 ] ], 4, 7 )
-```
-
----
-
-`IS_IN_POLYGON(polygon, coord, useLonLat) → bool`
-
-The 2nd parameter can alternatively be specified as an array with two values.
-
-By default, each array element in `polygon` is expected to be in the format
-`[latitude, longitude]`. This can be changed by setting the 3rd parameter to `true` to
-interpret the points as `[longitude, latitude]`. `coord` is then also interpreted in
-the same way.
-
-- **polygon** (array): an array of arrays with 2 elements each, representing the
- points of the polygon
-- **coord** (array): the point to search as a numeric array with two elements
-- **useLonLat** (bool, *optional*): if set to `true`, the coordinates in
- `polygon` and the coordinate pair `coord` are interpreted as
- `[longitude, latitude]` (like in GeoJSON). The default is `false` and the
- format `[latitude, longitude]` is expected.
-- returns **bool** (bool): `true` if the point `coord` is inside the `polygon`
- or `false` if it's not. The result is undefined (can be `true` or `false`) if
- the specified point is exactly on a boundary of the polygon.
-
-```aql
-// checks if the point (lat 4, lon 7) is contained inside the polygon
-IS_IN_POLYGON( [ [ 0, 0 ], [ 0, 10 ], [ 10, 10 ], [ 10, 0 ] ], [ 4, 7 ] )
-
-// checks if the point (lat 4, lon 7) is contained inside the polygon
-IS_IN_POLYGON( [ [ 0, 0 ], [ 10, 0 ], [ 10, 10 ], [ 0, 10 ] ], [ 7, 4 ], true )
-```
-
-## GeoJSON Constructors
-
-The following helper functions are available to easily create valid GeoJSON
-output. In all cases you can write equivalent JSON yourself, but these functions
-will help you to make all your AQL queries shorter and easier to read.
-
-### GEO_LINESTRING()
-
-`GEO_LINESTRING(points) → geoJson`
-
-Construct a GeoJSON LineString.
-Needs at least two longitude/latitude pairs.
-
-- **points** (array): an array of `[longitude, latitude]` pairs
-- returns **geoJson** (object): a valid GeoJSON LineString
-
-```aql
----
-name: aqlGeoLineString_1
-description: ''
----
-RETURN GEO_LINESTRING([
- [35, 10], [45, 45]
-])
-```
-
-### GEO_MULTILINESTRING()
-
-`GEO_MULTILINESTRING(points) → geoJson`
-
-Construct a GeoJSON MultiLineString.
-Needs at least two elements consisting of valid LineString coordinate arrays.
-
-- **points** (array): array of LineStrings
-- returns **geoJson** (object): a valid GeoJSON MultiLineString
-
-```aql
----
-name: aqlGeoMultiLineString_1
-description: ''
----
-RETURN GEO_MULTILINESTRING([
- [[100.0, 0.0], [101.0, 1.0]],
- [[102.0, 2.0], [101.0, 2.3]]
-])
-```
-
-### GEO_MULTIPOINT()
-
-`GEO_MULTIPOINT(points) → geoJson`
-
-Construct a GeoJSON MultiPoint. Needs at least two longitude/latitude pairs.
-
-- **points** (array): an array of `[longitude, latitude]` pairs
-- returns **geoJson** (object): a valid GeoJSON MultiPoint
-
-```aql
----
-name: aqlGeoMultiPoint_1
-description: ''
----
-RETURN GEO_MULTIPOINT([
- [35, 10], [45, 45]
-])
-```
-
-### GEO_POINT()
-
-`GEO_POINT(longitude, latitude) → geoJson`
-
-Construct a valid GeoJSON Point.
-
-- **longitude** (number): the longitude portion of the point
-- **latitude** (number): the latitude portion of the point
-- returns **geoJson** (object): a GeoJSON Point
-
-```aql
----
-name: aqlGeoPoint_1
-description: ''
----
-RETURN GEO_POINT(1.0, 2.0)
-```
-
-### GEO_POLYGON()
-
-`GEO_POLYGON(points) → geoJson`
-
-Construct a GeoJSON Polygon. Needs at least one array representing
-a linear ring. Each linear ring consists of an array with at least four
-longitude/latitude pairs. The first linear ring must be the outermost, while
-any subsequent linear ring is interpreted as a hole.
-
-For details about the rules, see [GeoJSON Polygon](#polygon).
-
-- **points** (array): an array of (arrays of) `[longitude, latitude]` pairs
-- returns **geoJson** (object\|null): a valid GeoJSON Polygon
-
-A validation step is performed using the S2 geometry library. If the
-validation is not successful, an AQL warning is issued and `null` is
-returned.
-
-Simple Polygon:
-
-```aql
----
-name: aqlGeoPolygon_1
-description: ''
----
-RETURN GEO_POLYGON([
- [0.0, 0.0], [7.5, 2.5], [0.0, 5.0], [0.0, 0.0]
-])
-```
-
-Advanced Polygon with a hole inside:
-
-```aql
----
-name: aqlGeoPolygon_2
-description: ''
----
-RETURN GEO_POLYGON([
- [[35, 10], [45, 45], [15, 40], [10, 20], [35, 10]],
- [[20, 30], [30, 20], [35, 35], [20, 30]]
-])
-```
-
-### GEO_MULTIPOLYGON()
-
-`GEO_MULTIPOLYGON(polygons) → geoJson`
-
-Construct a GeoJSON MultiPolygon. Needs at least two Polygons inside.
-See [GEO_POLYGON()](#geo_polygon) and [GeoJSON MultiPolygon](#multipolygon)
-for the rules of Polygon and MultiPolygon construction.
-
-- **polygons** (array): an array of arrays of arrays of `[longitude, latitude]` pairs
-- returns **geoJson** (object\|null): a valid GeoJSON MultiPolygon
-
-A validation step is performed using the S2 geometry library. If the
-validation is not successful, an AQL warning is issued and `null` is
-returned.
-
-MultiPolygon comprised of a simple Polygon and a Polygon with a hole:
-
-```aql
----
-name: aqlGeoMultiPolygon_1
-description: ''
----
-RETURN GEO_MULTIPOLYGON([
- [
- [[40, 40], [20, 45], [45, 30], [40, 40]]
- ],
- [
- [[20, 35], [10, 30], [10, 10], [30, 5], [45, 20], [20, 35]],
- [[30, 20], [20, 15], [20, 25], [30, 20]]
- ]
-])
-```
-
-## Geo Index Functions
-
-{{< warning >}}
-The AQL functions `NEAR()`, `WITHIN()` and `WITHIN_RECTANGLE()` are
-deprecated starting from version 3.4.0.
-Please use the [Geo utility functions](#geo-utility-functions) instead.
-{{< /warning >}}
-
-AQL offers the following functions to filter data based on
-[geo indexes](../../index-and-search/indexing/working-with-indexes/geo-spatial-indexes.md). These functions require the collection
-to have at least one geo index. If no geo index can be found, calling these
-functions fails with an error at runtime. There is no error when explaining
-the query, however.
-
-### NEAR()
-
-{{< warning >}}
-`NEAR()` is a deprecated AQL function from version 3.4.0 on.
-Use [`DISTANCE()`](#distance) in a query like this instead:
-
-```aql
-FOR doc IN coll
- SORT DISTANCE(doc.latitude, doc.longitude, paramLatitude, paramLongitude) ASC
- RETURN doc
-```
-
-Assuming there exists a geo-type index on `latitude` and `longitude`, the
-optimizer will recognize it and accelerate the query.
-{{< /warning >}}
-
-`NEAR(coll, latitude, longitude, limit, distanceName) → docArray`
-
-Return at most *limit* documents from collection *coll* that are near
-*latitude* and *longitude*, sorted by distance, with the closest
-distances returned first.
-Optionally, the distances in meters between the specified coordinate pair
-(*latitude* and *longitude*) and the stored coordinate pairs can be returned as
-well. To make use of that, the desired attribute name for the distance result
-has to be specified in the *distanceName* argument. The result documents will
-contain the distance value in an attribute of that name.
-
-- **coll** (collection): a collection
-- **latitude** (number): the latitude of the point to search
-- **longitude** (number): the longitude of the point to search
-- **limit** (number, *optional*): cap the result to at most this number of
- documents. The default is 100. If more documents than *limit* are found,
- it is undefined which ones will be returned.
-- **distanceName** (string, *optional*): include the distance (in meters)
- between the reference point and the stored point in the result, using the
- attribute name *distanceName*
-- returns **docArray** (array): an array of documents, sorted by distance
- (shortest distance first)
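-
-A usage sketch, assuming a collection `coll` with a geo index on `latitude`
-and `longitude` (hypothetical names; prefer the `DISTANCE()` approach shown
-in the warning above):
-
-```aql
-// The ten closest documents to the reference point, with the calculated
-// distance in meters returned in a "distance" attribute
-FOR doc IN NEAR(coll, 40.78, -73.97, 10, "distance")
-  RETURN doc.distance
-```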
-
-### WITHIN()
-
-{{< warning >}}
-`WITHIN()` is a deprecated AQL function from version 3.4.0 on.
-Use [`DISTANCE()`](#distance) in a query like this instead:
-
-```aql
-FOR doc IN coll
- LET d = DISTANCE(doc.latitude, doc.longitude, paramLatitude, paramLongitude)
- FILTER d <= radius
- SORT d ASC
- RETURN doc
-```
-
-Assuming there exists a geo-type index on `latitude` and `longitude`, the
-optimizer will recognize it and accelerate the query.
-{{< /warning >}}
-
-`WITHIN(coll, latitude, longitude, radius, distanceName) → docArray`
-
-Return all documents from collection *coll* that are within a radius of *radius*
-around the specified coordinate pair (*latitude* and *longitude*). The documents
-returned are sorted by distance to the reference point, with the closest
-distances being returned first. Optionally, the distance (in meters) between the
-reference point and the stored point can be returned as well. To make
-use of that, an attribute name for the distance result has to be specified in
-the *distanceName* argument. The result documents will contain the distance
-value in an attribute of that name.
-
-- **coll** (collection): a collection
-- **latitude** (number): the latitude of the point to search
-- **longitude** (number): the longitude of the point to search
-- **radius** (number): radius in meters
-- **distanceName** (string, *optional*): include the distance (in meters)
- between the reference point and stored point in the result, using the
- attribute name *distanceName*
-- returns **docArray** (array): an array of documents, sorted by distance
- (shortest distance first)
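-
-A usage sketch, assuming the same hypothetical collection `coll` with a geo
-index on `latitude` and `longitude`:
-
-```aql
-// All documents within a 1000 meter radius around the reference point,
-// sorted by distance, with the distance returned in a "distance" attribute
-FOR doc IN WITHIN(coll, 40.78, -73.97, 1000, "distance")
-  RETURN doc.distance
-```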
-
-### WITHIN_RECTANGLE()
-
-{{< warning >}}
-`WITHIN_RECTANGLE()` is a deprecated AQL function from version 3.4.0 on. Use
-[`GEO_CONTAINS()`](#geo_contains) and a GeoJSON polygon instead - but note that
-this uses geodesic lines from version 3.10.0 onward
-(see [GeoJSON interpretation](#geojson-interpretation)):
-
-```aql
-LET rect = GEO_POLYGON([ [
- [longitude1, latitude1], // bottom-left
- [longitude2, latitude1], // bottom-right
- [longitude2, latitude2], // top-right
- [longitude1, latitude2], // top-left
- [longitude1, latitude1], // bottom-left
-] ])
-FOR doc IN coll
- FILTER GEO_CONTAINS(rect, [doc.longitude, doc.latitude])
- RETURN doc
-```
-
-Assuming there exists a geo-type index on `latitude` and `longitude`, the
-optimizer will recognize it and accelerate the query.
-{{< /warning >}}
-
-`WITHIN_RECTANGLE(coll, latitude1, longitude1, latitude2, longitude2) → docArray`
-
-Return all documents from collection *coll* that are positioned inside the
-bounding rectangle with the points (*latitude1*, *longitude1*) and (*latitude2*,
-*longitude2*). There is no guaranteed order in which the documents are returned.
-
-- **coll** (collection): a collection
-- **latitude1** (number): the latitude of the bottom-left point to search
-- **longitude1** (number): the longitude of the bottom-left point to search
-- **latitude2** (number): the latitude of the top-right point to search
-- **longitude2** (number): the longitude of the top-right point to search
-- returns **docArray** (array): an array of documents, in random order
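-
-A minimal sketch of a call, with a made-up bounding rectangle and the same
-hypothetical `restaurants` collection:
-
-```aql
-FOR doc IN WITHIN_RECTANGLE(restaurants, 52.3, 13.0, 52.7, 13.8)
-  RETURN doc
-```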
diff --git a/site/content/3.10/aql/graphs/all-shortest-paths.md b/site/content/3.10/aql/graphs/all-shortest-paths.md
deleted file mode 100644
index 1dc67cc001..0000000000
--- a/site/content/3.10/aql/graphs/all-shortest-paths.md
+++ /dev/null
@@ -1,197 +0,0 @@
----
-title: All Shortest Paths in AQL
-menuTitle: All Shortest Paths
-weight: 20
-description: >-
- Find all paths of shortest length between a start and target vertex
----
-## General query idea
-
-This type of query finds all paths of shortest length between two given
-documents (*startVertex* and *targetVertex*) in your graph.
-
-Every returned path is a JSON object with two attributes:
-
-- An array containing the `vertices` on the path.
-- An array containing the `edges` on the path.
-
-**Example**
-
-A visual representation of the example graph:
-
-
-
-Each ellipse stands for a train station with the name of the city written inside
-of it. They are the vertices of the graph. Arrows represent train connections
-between cities and are the edges of the graph.
-
-Assuming that you want to go from **Carlisle** to **London** by train, the
-expected two shortest paths are:
-
-1. Carlisle – Birmingham – London
-2. Carlisle – York – London
-
-Another path that connects Carlisle and London is
-Carlisle – Glasgow – Edinburgh – York – London, but it has two more stops and
-is therefore not a path of the shortest length.
-
-## Syntax
-
-The syntax for All Shortest Paths queries is similar to the one for
-[Shortest Path](shortest-path.md), and there are likewise two options: using
-either a named graph or a set of edge collections. However, it only emits a
-path variable, whereas `SHORTEST_PATH` emits a vertex and an edge variable.
-
-### Working with named graphs
-
-```aql
-FOR path
- IN OUTBOUND|INBOUND|ANY ALL_SHORTEST_PATHS
- startVertex TO targetVertex
- GRAPH graphName
-```
-
-- `FOR`: emits the variable **path** which contains one shortest path as an
- object, with the `vertices` and `edges` of the path.
-- `IN` `OUTBOUND|INBOUND|ANY`: defines in which direction
- edges are followed (outgoing, incoming, or both)
-- `ALL_SHORTEST_PATHS`: the keyword to compute All Shortest Paths
-- **startVertex** `TO` **targetVertex** (both string\|object): the two vertices between
- which the paths will be computed. This can be specified in the form of
-  an ID string or in the form of a document with the attribute `_id`. All other
- values result in a warning and an empty result. If one of the specified
- documents does not exist, the result is empty as well and there is no warning.
-- `GRAPH` **graphName** (string): the name identifying the named graph. Its vertex and
- edge collections will be looked up.
-
-{{< info >}}
-All Shortest Paths traversals do not support edge weights.
-{{< /info >}}
-
-### Working with collection sets
-
-```aql
-FOR path
- IN OUTBOUND|INBOUND|ANY ALL_SHORTEST_PATHS
- startVertex TO targetVertex
- edgeCollection1, ..., edgeCollectionN
-```
-
-Instead of `GRAPH graphName` you can specify a list of edge collections.
-The involved vertex collections are determined by the edges of the given
-edge collections.
-
-### Traversing in mixed directions
-
-For All Shortest Paths with a list of edge collections, you can optionally specify the
-direction for some of the edge collections. Say, for example, you have three edge
-collections *edges1*, *edges2* and *edges3*, where in *edges2* the direction
-has no relevance, but in *edges1* and *edges3* the direction should be taken into
-account. In this case you can use `OUTBOUND` as a general search direction and `ANY`
-specifically for *edges2* as follows:
-
-```aql
-FOR path IN OUTBOUND ALL_SHORTEST_PATHS
- startVertex TO targetVertex
- edges1, ANY edges2, edges3
-```
-
-All collections in the list that do not specify their own direction will use the
-direction defined after `IN` (here: `OUTBOUND`). This allows using a different
-direction for each collection in your path search.
-
-## Examples
-
-Load an example graph to get a named graph that reflects some possible
-train connections in Europe and North America:
-
-
-
-```js
----
-name: GRAPHASP_01_create_graph
-description: ''
----
-~addIgnoreCollection("places");
-~addIgnoreCollection("connections");
-var examples = require("@arangodb/graph-examples/example-graph");
-var graph = examples.loadGraph("kShortestPathsGraph");
-db.places.toArray();
-db.connections.toArray();
-```
-
-Suppose you want to query a route from **Carlisle** to **London**, and
-compare the outputs of `SHORTEST_PATH`, `K_SHORTEST_PATHS` and `ALL_SHORTEST_PATHS`.
-Note that `SHORTEST_PATH` returns any of the shortest paths, whereas
-`ALL_SHORTEST_PATHS` returns all of them. `K_SHORTEST_PATHS` returns the
-shortest paths first but continues with longer paths, until it has found all
-routes or reaches the defined limit (the number of paths).
-
-Using `SHORTEST_PATH` to get one shortest path:
-
-```aql
----
-name: GRAPHASP_01_Carlisle_to_London
-description: ''
-dataset: kShortestPathsGraph
----
-FOR v, e IN OUTBOUND SHORTEST_PATH 'places/Carlisle' TO 'places/London'
-GRAPH 'kShortestPathsGraph'
- RETURN { place: v.label }
-```
-
-Using `ALL_SHORTEST_PATHS` to get both shortest paths:
-
-```aql
----
-name: GRAPHASP_02_Carlisle_to_London
-description: ''
-dataset: kShortestPathsGraph
----
-FOR p IN OUTBOUND ALL_SHORTEST_PATHS 'places/Carlisle' TO 'places/London'
-GRAPH 'kShortestPathsGraph'
- RETURN { places: p.vertices[*].label }
-```
-
-Using `K_SHORTEST_PATHS` without a limit to get all paths in order of
-increasing length:
-
-```aql
----
-name: GRAPHASP_03_Carlisle_to_London
-description: ''
-dataset: kShortestPathsGraph
----
-FOR p IN OUTBOUND K_SHORTEST_PATHS 'places/Carlisle' TO 'places/London'
-GRAPH 'kShortestPathsGraph'
- RETURN { places: p.vertices[*].label }
-```
-
-If you ask for routes that don't exist, you get an empty result
-(from **Carlisle** to **Toronto**):
-
-```aql
----
-name: GRAPHASP_04_Carlisle_to_Toronto
-description: ''
-dataset: kShortestPathsGraph
----
-FOR p IN OUTBOUND ALL_SHORTEST_PATHS 'places/Carlisle' TO 'places/Toronto'
-GRAPH 'kShortestPathsGraph'
- RETURN {
- places: p.vertices[*].label
- }
-```
-
-And finally clean up by removing the named graph:
-
-```js
----
-name: GRAPHASP_99_drop_graph
-description: ''
----
-var examples = require("@arangodb/graph-examples/example-graph");
-examples.dropGraph("kShortestPathsGraph");
-~removeIgnoreCollection("places");
-~removeIgnoreCollection("connections");
-```
diff --git a/site/content/3.10/aql/graphs/k-paths.md b/site/content/3.10/aql/graphs/k-paths.md
deleted file mode 100644
index d7e6aabe2a..0000000000
--- a/site/content/3.10/aql/graphs/k-paths.md
+++ /dev/null
@@ -1,237 +0,0 @@
----
-title: k Paths in AQL
-menuTitle: k Paths
-weight: 30
-description: >-
-  Determine all paths between a start and end vertex, limited by specified
-  path lengths
----
-## General query idea
-
-This type of query finds all paths between two given documents,
-*startVertex* and *targetVertex*, in your graph. The paths are restricted
-by a minimum and maximum length.
-
-Every such path will be returned as a JSON object with two components:
-
-- an array containing the `vertices` on the path
-- an array containing the `edges` on the path
-
-**Example**
-
-Let us take a look at a simple example to explain how it works.
-This is the graph that we are going to find some paths on:
-
-
-
-Each ellipse stands for a train station with the name of the city written inside
-of it. They are the vertices of the graph. Arrows represent train connections
-between cities and are the edges of the graph. The numbers near the arrows
-describe how long it takes to get from one station to another. They are used
-as edge weights.
-
-Let us assume that we want to go from **Aberdeen** to **London** by train.
-
-Here we have a couple of alternatives:
-
-a) Straight way
-
- 1. Aberdeen
- 2. Leuchars
- 3. Edinburgh
- 4. York
- 5. London
-
-b) Detour at York
-
- 1. Aberdeen
- 2. Leuchars
- 3. Edinburgh
- 4. York
- 5. **Carlisle**
- 6. **Birmingham**
- 7. London
-
-c) Detour at Edinburgh
-
- 1. Aberdeen
- 2. Leuchars
- 3. Edinburgh
- 4. **Glasgow**
- 5. **Carlisle**
- 6. **Birmingham**
- 7. London
-
-d) Detour at Edinburgh to York
-
- 1. Aberdeen
- 2. Leuchars
- 3. Edinburgh
- 4. **Glasgow**
- 5. **Carlisle**
- 6. York
- 7. London
-
-Note that only paths that do not contain the same vertex twice are considered
-valid. The following alternative visits Aberdeen twice and is therefore not
-returned by k Paths:
-
-1. Aberdeen
-2. **Inverness**
-3. **Aberdeen**
-4. Leuchars
-5. Edinburgh
-6. York
-7. London
-
-## Example Use Cases
-
-The use cases for k Paths are about the same as for unweighted k Shortest Paths.
-The main difference is that k Shortest Paths enumerates all paths with
-**increasing length** and stops as soon as a given limit is reached.
-k Paths instead enumerates **all paths** within a given range of path lengths
-and is thereby upper-bounded.
-
-The k Paths traversal can be used as foundation for several other algorithms:
-
-- **Transportation** of any kind (e.g. road traffic, network package routing)
-- **Flow problems**: We need to transfer items from A to B, which alternatives
- do we have? What is their capacity?
-
-## Syntax
-
-The syntax for k Paths queries is similar to the one for
-[k Shortest Paths](k-shortest-paths.md), with the addition that you define the
-minimum and maximum length of the path.
-
-{{< warning >}}
-It is highly recommended that you use a reasonable maximum path length or a
-**LIMIT** statement, as k Paths is a potentially expensive operation. On large
-connected graphs it can return a large number of paths.
-{{< /warning >}}
-
-### Working with named graphs
-
-```aql
-FOR path
- IN MIN..MAX OUTBOUND|INBOUND|ANY K_PATHS
- startVertex TO targetVertex
- GRAPH graphName
- [OPTIONS options]
-```
-
-- `FOR`: emits the variable **path** which contains one path as an object
- containing `vertices` and `edges` of the path.
-- `IN` `MIN..MAX`: the minimal and maximal depth for the traversal:
-  - **min** (number, *optional*): paths returned by this query have a
-    length of at least *min* edges.
-    If not specified, it defaults to 1. The minimal possible value is 0.
-  - **max** (number, *optional*): paths returned by this query have a
-    length of at most *max* edges.
- If omitted, *max* defaults to *min*. Thus only the vertices and edges in
- the range of *min* are returned. *max* cannot be specified without *min*.
-- `OUTBOUND|INBOUND|ANY`: defines in which direction
- edges are followed (outgoing, incoming, or both)
-- `K_PATHS`: the keyword to compute k Paths
-- **startVertex** `TO` **targetVertex** (both string\|object): the two vertices
- between which the paths will be computed. This can be specified in the form of
- a document identifier string or in the form of an object with the attribute
- `_id`. All other values will lead to a warning and an empty result. This is
- also the case if one of the specified documents does not exist.
-- `GRAPH` **graphName** (string): the name identifying the named graph.
- Its vertex and edge collections will be looked up.
-- `OPTIONS` **options** (object, *optional*): used to modify the execution of
- the search. Right now there are no options that trigger an effect.
- However, this may change in the future.
-
-### Working with collection sets
-
-```aql
-FOR path
- IN MIN..MAX OUTBOUND|INBOUND|ANY K_PATHS
- startVertex TO targetVertex
- edgeCollection1, ..., edgeCollectionN
- [OPTIONS options]
-```
-
-Instead of `GRAPH graphName` you can specify a list of edge collections.
-The involved vertex collections are determined by the edges of the given
-edge collections.
-
-### Traversing in mixed directions
-
-For k Paths with a list of edge collections, you can optionally specify the
-direction for some of the edge collections. Say, for example, you have three edge
-collections *edges1*, *edges2* and *edges3*, where in *edges2* the direction
-has no relevance, but in *edges1* and *edges3* the direction should be taken
-into account. In this case you can use `OUTBOUND` as general search direction
-and `ANY` specifically for *edges2* as follows:
-
-```aql
-FOR path IN MIN..MAX OUTBOUND K_PATHS
- startVertex TO targetVertex
- edges1, ANY edges2, edges3
-```
-
-All collections in the list that do not specify their own direction will use the
-direction defined after `IN` (here: `OUTBOUND`). This allows using a different
-direction for each collection in your path search.
-
-## Examples
-
-We load an example graph to get a named graph that reflects some possible
-train connections in Europe and North America.
-
-
-
-```js
----
-name: GRAPHKP_01_create_graph
-description: ''
----
-~addIgnoreCollection("places");
-~addIgnoreCollection("connections");
-var examples = require("@arangodb/graph-examples/example-graph");
-var graph = examples.loadGraph("kShortestPathsGraph");
-db.places.toArray();
-db.connections.toArray();
-```
-
-Suppose we want to query all routes from **Aberdeen** to **London**.
-
-```aql
----
-name: GRAPHKP_01_Aberdeen_to_London
-description: ''
-dataset: kShortestPathsGraph
----
-FOR p IN 1..10 OUTBOUND K_PATHS 'places/Aberdeen' TO 'places/London'
-GRAPH 'kShortestPathsGraph'
- RETURN { places: p.vertices[*].label, travelTimes: p.edges[*].travelTime }
-```
-
-If we ask for routes that don't exist we get an empty result
-(from **Aberdeen** to **Toronto**):
-
-```aql
----
-name: GRAPHKP_02_Aberdeen_to_Toronto
-description: ''
-dataset: kShortestPathsGraph
----
-FOR p IN 1..10 OUTBOUND K_PATHS 'places/Aberdeen' TO 'places/Toronto'
-GRAPH 'kShortestPathsGraph'
- RETURN { places: p.vertices[*].label, travelTimes: p.edges[*].travelTime }
-```
-
-And finally clean up by removing the named graph:
-
-```js
----
-name: GRAPHKP_99_drop_graph
-description: ''
----
-var examples = require("@arangodb/graph-examples/example-graph");
-examples.dropGraph("kShortestPathsGraph");
-~removeIgnoreCollection("places");
-~removeIgnoreCollection("connections");
-```
diff --git a/site/content/3.10/aql/graphs/k-shortest-paths.md b/site/content/3.10/aql/graphs/k-shortest-paths.md
deleted file mode 100644
index bb2ba93017..0000000000
--- a/site/content/3.10/aql/graphs/k-shortest-paths.md
+++ /dev/null
@@ -1,295 +0,0 @@
----
-title: k Shortest Paths in AQL
-menuTitle: k Shortest Paths
-weight: 25
-description: >-
- Determine a specified number of shortest paths in increasing path length or
- weight order
----
-## General query idea
-
-This type of query finds the first *k* paths in order of length
-(or weight) between two given documents, *startVertex* and *targetVertex*, in
-your graph.
-
-Every such path will be returned as a JSON object with three components:
-
-- an array containing the `vertices` on the path
-- an array containing the `edges` on the path
-- the `weight` of the path, that is the sum of all edge weights
-
-If no `weightAttribute` is given, the weight of the path is just its length.
-
-{{< youtube id="XdITulJFdVo" >}}
-
-**Example**
-
-Let us take a look at a simple example to explain how it works.
-This is the graph that we are going to find some shortest paths on:
-
-
-
-Each ellipse stands for a train station with the name of the city written inside
-of it. They are the vertices of the graph. Arrows represent train connections
-between cities and are the edges of the graph. The numbers near the arrows
-describe how long it takes to get from one station to another. They are used
-as edge weights.
-
-Let us assume that we want to go from **Aberdeen** to **London** by train.
-
-We expect to see the following vertices on *the* shortest path, in this order:
-
-1. Aberdeen
-2. Leuchars
-3. Edinburgh
-4. York
-5. London
-
-By the way, the weight of the path is: 1.5 + 1.5 + 3.5 + 1.8 = **8.3**.
-
-Let us look at alternative paths next, for example because we know that the
-direct connection between York and London does not operate currently.
-An alternative path, which is slightly longer, goes like this:
-
-1. Aberdeen
-2. Leuchars
-3. Edinburgh
-4. York
-5. **Carlisle**
-6. **Birmingham**
-7. London
-
-Its weight is: 1.5 + 1.5 + 3.5 + 2.0 + 1.5 = **10.0**.
-
-Another route goes via Glasgow. There are seven stations on this path as well;
-however, it is quicker if we compare the edge weights:
-
-1. Aberdeen
-2. Leuchars
-3. Edinburgh
-4. **Glasgow**
-5. Carlisle
-6. Birmingham
-7. London
-
-The path weight is lower: 1.5 + 1.5 + 1.0 + 1.0 + 2.0 + 1.5 = **8.5**.
-
-## Syntax
-
-The syntax for k Shortest Paths queries is similar to the one for
-[Shortest Path](shortest-path.md), and there are likewise two options: using
-either a named graph or a set of edge collections. However, it only emits a
-path variable, whereas `SHORTEST_PATH` emits a vertex and an edge variable.
-
-{{< warning >}}
-It is highly recommended that you use a **LIMIT** statement, as
-k Shortest Paths is a potentially expensive operation. On large connected
-graphs it can return a large number of paths, or perform an expensive
-(but unsuccessful) search for more short paths.
-{{< /warning >}}
-
-### Working with named graphs
-
-```aql
-FOR path
- IN OUTBOUND|INBOUND|ANY K_SHORTEST_PATHS
- startVertex TO targetVertex
- GRAPH graphName
- [OPTIONS options]
- [LIMIT offset, count]
-```
-
-- `FOR`: emits the variable **path** which contains one path as an object containing
- `vertices`, `edges`, and the `weight` of the path.
-- `IN` `OUTBOUND|INBOUND|ANY`: defines in which direction
- edges are followed (outgoing, incoming, or both)
-- `K_SHORTEST_PATHS`: the keyword to compute k Shortest Paths
-- **startVertex** `TO` **targetVertex** (both string\|object): the two vertices between
- which the paths will be computed. This can be specified in the form of
-  an ID string or in the form of a document with the attribute `_id`. All other
- values will lead to a warning and an empty result. If one of the specified
- documents does not exist, the result is empty as well and there is no warning.
-- `GRAPH` **graphName** (string): the name identifying the named graph. Its vertex and
- edge collections will be looked up.
-- `OPTIONS` **options** (object, *optional*): used to modify the execution of the
- traversal. Only the following attributes have an effect, all others are ignored:
- - **weightAttribute** (string): a top-level edge attribute that should be used
- to read the edge weight. If the attribute does not exist or is not numeric, the
- *defaultWeight* will be used instead. The attribute value must not be negative.
- - **defaultWeight** (number): this value will be used as fallback if there is
- no *weightAttribute* in the edge document, or if it's not a number. The value
- must not be negative. The default is `1`.
-- `LIMIT` (see [LIMIT operation](../high-level-operations/limit.md), *optional*):
- the maximal number of paths to return. It is highly recommended to use
- a `LIMIT` for `K_SHORTEST_PATHS`.
-
-{{< info >}}
-k Shortest Paths traversals do not support negative weights. If a document
-attribute (as specified by `weightAttribute`) with a negative value is
-encountered during traversal, or if `defaultWeight` is set to a negative
-number, then the query is aborted with an error.
-{{< /info >}}
-
-### Working with collection sets
-
-```aql
-FOR path
- IN OUTBOUND|INBOUND|ANY K_SHORTEST_PATHS
- startVertex TO targetVertex
- edgeCollection1, ..., edgeCollectionN
- [OPTIONS options]
- [LIMIT offset, count]
-```
-
-Instead of `GRAPH graphName` you can specify a list of edge collections.
-The involved vertex collections are determined by the edges of the given
-edge collections.
-
-### Traversing in mixed directions
-
-For k Shortest Paths with a list of edge collections, you can optionally specify
-the direction for some of the edge collections. Say, for example, you have three
-edge collections *edges1*, *edges2* and *edges3*, where in *edges2* the direction
-has no relevance, but in *edges1* and *edges3* the direction should be taken into
-account. In this case you can use `OUTBOUND` as general search direction and `ANY`
-specifically for *edges2* as follows:
-
-```aql
-FOR path IN OUTBOUND K_SHORTEST_PATHS
- startVertex TO targetVertex
- edges1, ANY edges2, edges3
-```
-
-All collections in the list that do not specify their own direction will use the
-direction defined after `IN` (here: `OUTBOUND`). This allows using a different
-direction for each collection in your path search.
-
-## Examples
-
-We load an example graph to get a named graph that reflects some possible
-train connections in Europe and North America.
-
-
-
-```js
----
-name: GRAPHKSP_01_create_graph
-description: ''
----
-~addIgnoreCollection("places");
-~addIgnoreCollection("connections");
-var examples = require("@arangodb/graph-examples/example-graph");
-var graph = examples.loadGraph("kShortestPathsGraph");
-db.places.toArray();
-db.connections.toArray();
-```
-
-Suppose we want to query a route from **Aberdeen** to **London**, and
-compare the outputs of `SHORTEST_PATH` and `K_SHORTEST_PATHS` with
-`LIMIT 1`. Note that while `SHORTEST_PATH` and `K_SHORTEST_PATHS` with
-`LIMIT 1` should return a path of the same length (or weight), they do
-not need to return the same path.
-
-Using `SHORTEST_PATH`:
-
-```aql
----
-name: GRAPHKSP_01_Aberdeen_to_London
-description: ''
-dataset: kShortestPathsGraph
----
-FOR v, e IN OUTBOUND SHORTEST_PATH 'places/Aberdeen' TO 'places/London'
-GRAPH 'kShortestPathsGraph'
- RETURN { place: v.label, travelTime: e.travelTime }
-```
-
-Using `K_SHORTEST_PATHS`:
-
-```aql
----
-name: GRAPHKSP_02_Aberdeen_to_London
-description: ''
-dataset: kShortestPathsGraph
----
-FOR p IN OUTBOUND K_SHORTEST_PATHS 'places/Aberdeen' TO 'places/London'
-GRAPH 'kShortestPathsGraph'
- LIMIT 1
- RETURN { places: p.vertices[*].label, travelTimes: p.edges[*].travelTime }
-```
-
-With `K_SHORTEST_PATHS` we can ask for more than one option for a route:
-
-```aql
----
-name: GRAPHKSP_03_Aberdeen_to_London
-description: ''
-dataset: kShortestPathsGraph
----
-FOR p IN OUTBOUND K_SHORTEST_PATHS 'places/Aberdeen' TO 'places/London'
-GRAPH 'kShortestPathsGraph'
- LIMIT 3
- RETURN {
- places: p.vertices[*].label,
- travelTimes: p.edges[*].travelTime,
- travelTimeTotal: SUM(p.edges[*].travelTime)
- }
-```
-
-If we ask for routes that don't exist we get an empty result
-(from **Aberdeen** to **Toronto**):
-
-```aql
----
-name: GRAPHKSP_04_Aberdeen_to_Toronto
-description: ''
-dataset: kShortestPathsGraph
----
-FOR p IN OUTBOUND K_SHORTEST_PATHS 'places/Aberdeen' TO 'places/Toronto'
-GRAPH 'kShortestPathsGraph'
- LIMIT 3
- RETURN {
- places: p.vertices[*].label,
- travelTimes: p.edges[*].travelTime,
- travelTimeTotal: SUM(p.edges[*].travelTime)
- }
-```
-
-We can use the *travelTime* attribute of the connections as edge weights to
-take into account which connections are quicker. A high default weight is set,
-to be used if an edge has no *travelTime* attribute (not the case with the
-example graph). This returns the top three routes with the fewest changes
-and the least travel time for the connection **Saint Andrews**
-to **Cologne**:
-
-```aql
----
-name: GRAPHKSP_05_StAndrews_to_Cologne
-description: ''
-dataset: kShortestPathsGraph
----
-FOR p IN OUTBOUND K_SHORTEST_PATHS 'places/StAndrews' TO 'places/Cologne'
-GRAPH 'kShortestPathsGraph'
-OPTIONS {
- weightAttribute: 'travelTime',
- defaultWeight: 15
-}
- LIMIT 3
- RETURN {
- places: p.vertices[*].label,
- travelTimes: p.edges[*].travelTime,
- travelTimeTotal: SUM(p.edges[*].travelTime)
- }
-```
-
-And finally clean up by removing the named graph:
-
-```js
----
-name: GRAPHKSP_99_drop_graph
-description: ''
----
-var examples = require("@arangodb/graph-examples/example-graph");
-examples.dropGraph("kShortestPathsGraph");
-~removeIgnoreCollection("places");
-~removeIgnoreCollection("connections");
-```
diff --git a/site/content/3.10/aql/graphs/shortest-path.md b/site/content/3.10/aql/graphs/shortest-path.md
deleted file mode 100644
index 29d689422b..0000000000
--- a/site/content/3.10/aql/graphs/shortest-path.md
+++ /dev/null
@@ -1,209 +0,0 @@
----
-title: Shortest Path in AQL
-menuTitle: Shortest Path
-weight: 15
-description: >-
- With the shortest path algorithm, you can find one shortest path between
- two vertices using AQL
----
-## General query idea
-
-This type of query finds the shortest path between two given documents
-(*startVertex* and *targetVertex*) in your graph. For every vertex on this
-shortest path, you get a result in the form of a set with two items:
-
-1. The vertex on this path.
-2. The edge pointing to it.
-
-### Example execution
-
-Let's take a look at a simple example to explain how it works.
-This is the graph that you are going to find a shortest path on:
-
-
-
-You can use the following parameters for the query:
-
-1. You start at the vertex **A**.
-2. You finish with the vertex **D**.
-
-So, obviously, you have the vertices **A**, **B**, **C** and **D** on the
-shortest path in exactly this order. Then, the shortest path statement
-returns the following pairs:
-
-| Vertex | Edge |
-|--------|-------|
-| A | null |
-| B | A → B |
-| C | B → C |
-| D | C → D |
-
-Note that the first edge is always `null` because there is no edge pointing
-to the *startVertex*.
-
-## Syntax
-
-The next step is to see how you can write a shortest path query.
-You have two options here: you can either use a named graph or a set of edge
-collections (anonymous graph).
-
-### Working with named graphs
-
-```aql
-FOR vertex[, edge]
- IN OUTBOUND|INBOUND|ANY SHORTEST_PATH
- startVertex TO targetVertex
- GRAPH graphName
- [OPTIONS options]
-```
-
-- `FOR`: emits up to two variables:
- - **vertex** (object): the current vertex on the shortest path
- - **edge** (object, *optional*): the edge pointing to the vertex
-- `IN` `OUTBOUND|INBOUND|ANY`: defines in which direction edges are followed
- (outgoing, incoming, or both)
-- **startVertex** `TO` **targetVertex** (both string\|object): the two vertices between
- which the shortest path will be computed. This can be specified in the form of
- an ID string or in the form of a document with the attribute `_id`. All other
- values will lead to a warning and an empty result. If one of the specified
- documents does not exist, the result is empty as well and there is no warning.
-- `GRAPH` **graphName** (string): the name identifying the named graph. Its vertex and
- edge collections will be looked up.
-- `OPTIONS` **options** (object, *optional*): used to modify the execution of the
- traversal. Only the following attributes have an effect, all others are ignored:
- - **weightAttribute** (string): a top-level edge attribute that should be used
-    to read the edge weight. If the attribute does not exist or is not numeric, the
- *defaultWeight* will be used instead. The attribute value must not be negative.
- - **defaultWeight** (number): this value will be used as fallback if there is
- no *weightAttribute* in the edge document, or if it is not a number.
- The value must not be negative. The default is `1`.
-
-{{< info >}}
-Shortest Path traversals do not support negative weights. If a document
-attribute (as specified by `weightAttribute`) with a negative value is
-encountered during traversal, or if `defaultWeight` is set to a negative
-number, then the query is aborted with an error.
-{{< /info >}}
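-
-A minimal sketch of a weighted query, assuming the `kShortestPathsGraph`
-example graph with its `travelTime` edge attribute that is used in the
-k Shortest Paths examples:
-
-```aql
-FOR v, e IN OUTBOUND SHORTEST_PATH 'places/Aberdeen' TO 'places/London'
-  GRAPH 'kShortestPathsGraph'
-  OPTIONS { weightAttribute: 'travelTime', defaultWeight: 15 }
-  RETURN v.label
-```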
-
-### Working with collection sets
-
-```aql
-FOR vertex[, edge]
- IN OUTBOUND|INBOUND|ANY SHORTEST_PATH
- startVertex TO targetVertex
- edgeCollection1, ..., edgeCollectionN
- [OPTIONS options]
-```
-
-Instead of `GRAPH graphName` you may specify a list of edge collections (anonymous
-graph). The involved vertex collections are determined by the edges of the given
-edge collections. The rest of the behavior is similar to the named version.
-
-### Traversing in mixed directions
-
-For Shortest Path with a list of edge collections, you can optionally specify the
-direction for some of the edge collections. Say, for example, you have three edge
-collections *edges1*, *edges2* and *edges3*, where in *edges2* the direction
-has no relevance, but in *edges1* and *edges3* the direction should be taken into
-account. In this case you can use `OUTBOUND` as general search direction and `ANY`
-specifically for *edges2* as follows:
-
-```aql
-FOR vertex IN OUTBOUND SHORTEST_PATH
- startVertex TO targetVertex
- edges1, ANY edges2, edges3
-```
-
-All collections in the list that do not specify their own direction will use the
-direction defined after `IN` (here: `OUTBOUND`). This allows using a different
-direction for each collection in your path search.
-
-## Conditional shortest path
-
-The `SHORTEST_PATH` computation only finds an unconditioned shortest path.
-With this construct it is not possible to define a condition like: "Find the
-shortest path where all edges are of type *X*". If you want to do this, use a
-normal [Traversal](traversals.md) instead with the option
-`{order: "bfs"}` in combination with `LIMIT 1`, as sketched below.
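-
-A minimal sketch of this pattern, assuming a hypothetical edge attribute
-`type`, an arbitrary maximum depth of 10, and the placeholder names from the
-syntax examples above:
-
-```aql
-FOR v, e, p IN 1..10 OUTBOUND startVertex GRAPH graphName
-  OPTIONS { order: "bfs" }
-  FILTER p.edges[*].type ALL == "X"
-  FILTER v._id == targetVertex
-  LIMIT 1
-  RETURN p
-```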
-
-Please also consider using [`WITH`](../high-level-operations/with.md) to specify the
-collections you expect to be involved.
-
-## Examples
-
-Creating a simple symmetric traversal demonstration graph:
-
-
-
-```js
----
-name: GRAPHSP_01_create_graph
-description: ''
----
-~addIgnoreCollection("circles");
-~addIgnoreCollection("edges");
-var examples = require("@arangodb/graph-examples/example-graph");
-var graph = examples.loadGraph("traversalGraph");
-db.circles.toArray();
-db.edges.toArray();
-```
-
-Start with the shortest path from **A** to **D** as above:
-
-```js
----
-name: GRAPHSP_02_A_to_D
-description: ''
----
-db._query(`
- FOR v, e IN OUTBOUND SHORTEST_PATH 'circles/A' TO 'circles/D' GRAPH 'traversalGraph'
- RETURN [v._key, e._key]
-`);
-
-db._query(`
- FOR v, e IN OUTBOUND SHORTEST_PATH 'circles/A' TO 'circles/D' edges
- RETURN [v._key, e._key]
-`);
-```
-
-You can see that the expectations are fulfilled. You find the vertices in the
-correct order and the first edge is *null*, because no edge points
-to the start vertex on this path.
-
-You can also compute shortest paths based on documents found in collections:
-
-```js
----
-name: GRAPHSP_03_A_to_D
-description: ''
----
-db._query(`
- FOR a IN circles
- FILTER a._key == 'A'
- FOR d IN circles
- FILTER d._key == 'D'
- FOR v, e IN OUTBOUND SHORTEST_PATH a TO d GRAPH 'traversalGraph'
- RETURN [v._key, e._key]
-`);
-
-db._query(`
- FOR a IN circles
- FILTER a._key == 'A'
- FOR d IN circles
- FILTER d._key == 'D'
- FOR v, e IN OUTBOUND SHORTEST_PATH a TO d edges
- RETURN [v._key, e._key]
-`);
-```
-
-And finally clean it up again:
-
-```js
----
-name: GRAPHSP_99_drop_graph
-description: ''
----
-var examples = require("@arangodb/graph-examples/example-graph");
-examples.dropGraph("traversalGraph");
-~removeIgnoreCollection("circles");
-~removeIgnoreCollection("edges");
-```
diff --git a/site/content/3.10/aql/graphs/traversals-explained.md b/site/content/3.10/aql/graphs/traversals-explained.md
deleted file mode 100644
index a211ae6087..0000000000
--- a/site/content/3.10/aql/graphs/traversals-explained.md
+++ /dev/null
@@ -1,85 +0,0 @@
----
-title: AQL graph traversals explained
-menuTitle: Traversals explained
-weight: 5
-description: >-
-  Traversing a graph means following the edges connected to a start vertex and
-  to neighboring vertices up to a specified depth
----
-## General query idea
-
-A traversal starts at one specific document (*startVertex*) and follows all
-edges connected to this document. For all documents (*vertices*) that are
-targeted by these edges, it again follows all edges connected to them, and
-so on. It is possible to define how many of these iterations should be
-executed at least (*min* depth) and at most (*max* depth).
-
-For all vertices that were visited during this process in the range between
-*min* depth and *max* depth iterations you will get a result in form of a
-set with three items:
-
-1. The visited vertex.
-2. The edge pointing to it.
-3. The complete path from startVertex to the visited vertex as object with an
- attribute *edges* and an attribute *vertices*, each a list of the corresponding
- elements. These lists are sorted, which means the first element in *vertices*
- is the *startVertex* and the last is the visited vertex, and the n-th element
- in *edges* connects the n-th element with the (n+1)-th element in *vertices*.
-
-## Example execution
-
-Let's take a look at a simple example to explain how it works.
-This is the graph that we are going to traverse:
-
-
-
-We use the following parameters for our query:
-
-1. We start at the vertex **A**.
-2. We use a *min* depth of 1.
-3. We use a *max* depth of 2.
-4. We follow only in `OUTBOUND` direction of edges
-
-
-
-The traversal now walks to one of the direct neighbors of **A**, say **B**
-(note: the order is not guaranteed!):
-
-
-
-The query remembers the state (red circle) and emits the first result
-**A** → **B** (black box). This also prevents the traverser from being trapped
-in cycles. It then visits one of the direct neighbors of **B**, say **E**:
-
-
-
-We have limited the query with a *max* depth of *2*, so it will not pick any
-neighbor of **E**, as the path from **A** to **E** already requires *2* steps.
-Instead, we will go back one level to **B** and continue with any other direct
-neighbor there:
-
-
-
-After producing this result, we again step back to **B**.
-But there is no neighbor of **B** left that we have not yet visited.
-Hence we go another step back to **A** and continue with any other neighbor there.
-
-
-
-And identical to the iterations before, we visit **H**:
-
-
-
-And **J**:
-
-
-
-After these steps, there is no further result left. So altogether, this query
-has returned the following paths:
-
-1. **A** → **B**
-2. **A** → **B** → **E**
-3. **A** → **B** → **C**
-4. **A** → **G**
-5. **A** → **G** → **H**
-6. **A** → **G** → **J**
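-
-A minimal sketch of this walkthrough as an actual query, assuming the
-`traversalGraph` example graph (with its `circles` vertex collection) that is
-used in other parts of this documentation:
-
-```aql
-FOR v, e, p IN 1..2 OUTBOUND 'circles/A' GRAPH 'traversalGraph'
-  RETURN CONCAT_SEPARATOR(" -> ", p.vertices[*]._key)
-```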
diff --git a/site/content/3.10/aql/graphs/traversals.md b/site/content/3.10/aql/graphs/traversals.md
deleted file mode 100644
index 283703f0b7..0000000000
--- a/site/content/3.10/aql/graphs/traversals.md
+++ /dev/null
@@ -1,847 +0,0 @@
----
-title: Graph traversals in AQL
-menuTitle: Traversals
-weight: 10
-description: >-
- You can traverse named graphs and anonymous graphs with a native AQL
- language construct
----
-## Syntax
-
-There are two slightly different syntaxes for traversals in AQL:
-- one for [named graphs](../../graphs/_index.md#named-graphs) and
-- one for a [set of edge collections](#working-with-collection-sets)
-  ([anonymous graph](../../graphs/_index.md#anonymous-graphs)).
-
-### Working with named graphs
-
-The syntax for AQL graph traversals using named graphs is as follows
-(square brackets denote optional parts and `|` denotes alternatives):
-
-```aql
-FOR vertex[, edge[, path]]
- IN [min[..max]]
- OUTBOUND|INBOUND|ANY startVertex
- GRAPH graphName
- [PRUNE [pruneVariable = ]pruneCondition]
- [OPTIONS options]
-```
-
-- `FOR`: emits up to three variables:
- - **vertex** (object): the current vertex in a traversal
- - **edge** (object, *optional*): the current edge in a traversal
- - **path** (object, *optional*): representation of the current path with
- two members:
- - `vertices`: an array of all vertices on this path
- - `edges`: an array of all edges on this path
-- `IN` `min..max`: the minimal and maximal depth for the traversal:
- - **min** (number, *optional*): edges and vertices returned by this query
- start at the traversal depth of *min* (thus edges and vertices below it are
- not returned). If not specified, it defaults to 1. The minimal
- possible value is 0.
-  - **max** (number, *optional*): paths of up to *max* edges in length are traversed.
- If omitted, *max* defaults to *min*. Thus only the vertices and edges in
- the range of *min* are returned. *max* cannot be specified without *min*.
-- `OUTBOUND|INBOUND|ANY`: follow outgoing, incoming, or edges pointing in either
- direction in the traversal. Note that this can't be replaced by a bind parameter.
-- **startVertex** (string\|object): a vertex where the traversal originates from.
- This can be specified in the form of an ID string or in the form of a document
- with the `_id` attribute. All other values lead to a warning and an empty
- result. If the specified document does not exist, the result is empty as well
- and there is no warning.
-- `GRAPH` **graphName** (string): the name identifying the named graph.
- Its vertex and edge collections are looked up. Note that the graph name
- is like a regular string, hence it must be enclosed by quote marks, like
- `GRAPH "graphName"`.
-- `PRUNE` **expression** (AQL expression, *optional*):
- An expression, like in a `FILTER` statement, which is evaluated in every step of
- the traversal, as early as possible. The semantics of this expression are as follows:
- - If the expression evaluates to `false`, the traversal continues on the current path.
- - If the expression evaluates to `true`, the traversal does not continue on the
- current path. However, the paths up to this point are considered as a result
- (they might still be post-filtered or ignored due to depth constraints).
- For example, a traversal over the graph `(A) -> (B) -> (C)` starting at `A`
- and pruning on `B` results in `(A)` and `(A) -> (B)` being valid paths,
- whereas `(A) -> (B) -> (C)` is not returned because it gets pruned on `B`.
-
- You can only use a single `PRUNE` clause per `FOR` traversal operation, but
- the prune expression can contain an arbitrary number of conditions using `AND`
- and `OR` statements for complex expressions. You can use the variables emitted
- by the `FOR` operation in the prune expression, as well as all variables
- defined before the traversal.
-
- You can optionally assign the prune expression to a variable like
- `PRUNE var = ` to use the evaluated result elsewhere in the query,
- typically in a `FILTER` expression.
-
- See [Pruning](#pruning) for details.
-- `OPTIONS` **options** (object, *optional*): used to modify the execution of the
- traversal. Only the following attributes have an effect, all others are ignored:
- - **order** (string): optionally specify which traversal algorithm to use
- - `"bfs"` – the traversal is executed breadth-first. The results
- first contain all vertices at depth 1, then all vertices at depth 2 and so on.
- - `"dfs"` (default) – the traversal is executed depth-first. It
- first returns all paths from *min* depth to *max* depth for one vertex at
- depth 1, then for the next vertex at depth 1 and so on.
- - `"weighted"` - the traversal is a weighted traversal
- (introduced in v3.8.0). Paths are enumerated with increasing cost.
- Also see `weightAttribute` and `defaultWeight`. A returned path has an
- additional attribute `weight` containing the cost of the path after every
- step. The order of paths having the same cost is non-deterministic.
- Negative weights are not supported and abort the query with an error.
- - **bfs** (bool): deprecated, use `order: "bfs"` instead.
- - **uniqueVertices** (string): optionally ensure vertex uniqueness
- - `"path"` – it is guaranteed that there is no path returned with a duplicate vertex
- - `"global"` – it is guaranteed that each vertex is visited at most once during
- the traversal, no matter how many paths lead from the start vertex to this one.
- If you start with a `min depth > 1` a vertex that was found before *min* depth
- might not be returned at all (it still might be part of a path).
- It is required to set `order: "bfs"` or `order: "weighted"` because with
- depth-first search the results would be unpredictable. **Note:**
- Using this configuration the result is not deterministic any more. If there
- are multiple paths from *startVertex* to *vertex*, one of those is picked.
- In case of a `weighted` traversal, the path with the lowest weight is
- picked, but in case of equal weights it is undefined which one is chosen.
- - `"none"` (default) – no uniqueness check is applied on vertices
- - **uniqueEdges** (string): optionally ensure edge uniqueness
- - `"path"` (default) – it is guaranteed that there is no path returned with a
- duplicate edge
- - `"none"` – no uniqueness check is applied on edges. **Note:**
- Using this configuration, the traversal follows edges in cycles.
- - **edgeCollections** (string\|array): Optionally restrict edge
- collections the traversal may visit (introduced in v3.7.0). If omitted,
- or an empty array is specified, then there are no restrictions.
- - A string parameter is treated as the equivalent of an array with a single
- element.
- - Each element of the array should be a string containing the name of an
- edge collection.
- - **vertexCollections** (string\|array): Optionally restrict vertex
- collections the traversal may visit (introduced in v3.7.0). If omitted,
- or an empty array is specified, then there are no restrictions.
- - A string parameter is treated as the equivalent of an array with a single
- element.
- - Each element of the array should be a string containing the name of a
- vertex collection.
- - The starting vertex is always allowed, even if it does not belong to one
- of the collections specified by a restriction.
- - **parallelism** (number, *optional*):
-
- {{< tag "ArangoDB Enterprise Edition" "ArangoGraph" >}}
-
- Optionally parallelize traversal execution. If omitted or set to a value of `1`,
- traversal execution is not parallelized. If set to a value greater than `1`,
- then up to that many worker threads can be used for concurrently executing
- the traversal. The value is capped by the number of available cores on the
- target machine.
-
- Parallelizing a traversal is normally useful when there are many inputs (start
- vertices) that the nested traversal can work on concurrently. This is often the
- case when a nested traversal is fed with several tens of thousands of start
- vertices, which can then be distributed randomly to worker threads for parallel
- execution.
- - **maxProjections** (number, *optional*):
-
- {{< tag "ArangoDB Enterprise Edition" "ArangoGraph" >}}
-
- Specifies the number of document attributes per `FOR` loop to be used as
- projections. The default value is `5`.
- - **weightAttribute** (string, *optional*): Specifies the name of an attribute
- that is used to look up the weight of an edge. If no attribute is specified
- or if it is not present in the edge document then the `defaultWeight` is used.
- The attribute value must not be negative.
- - **defaultWeight** (number, *optional*): Specifies the default weight of an edge.
- The value must not be negative. The default value is `1`.
-
-{{< info >}}
-Weighted traversals do not support negative weights. If a document
-attribute (as specified by `weightAttribute`) with a negative value is
-encountered during traversal, or if `defaultWeight` is set to a negative
-number, then the query is aborted with an error.
-{{< /info >}}
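-
-A minimal sketch of a weighted traversal, assuming a hypothetical `distance`
-edge attribute and the placeholder names from the syntax above:
-
-```aql
-FOR v, e, p IN 1..3 OUTBOUND startVertex GRAPH "graphName"
-  OPTIONS {
-    order: "weighted",
-    weightAttribute: "distance",
-    defaultWeight: 1
-  }
-  RETURN { path: p.vertices[*]._key, weight: p.weight }
-```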
-
-### Working with collection sets
-
-The syntax for AQL graph traversals using collection sets is as follows
-(square brackets denote optional parts and `|` denotes alternatives):
-
-```aql
-[WITH vertexCollection1[, vertexCollection2[, vertexCollectionN]]]
-FOR vertex[, edge[, path]]
- IN [min[..max]]
- OUTBOUND|INBOUND|ANY startVertex
- edgeCollection1[, edgeCollection2[, edgeCollectionN]]
- [PRUNE [pruneVariable = ]pruneCondition]
- [OPTIONS options]
-```
-
-- `WITH`: Declaration of collections. Optional for single server instances, but
- required for [graph traversals in a cluster](#graph-traversals-in-a-cluster).
- Needs to be placed at the very beginning of the query.
- - **collections** (collection, *repeatable*): list of vertex collections that
- are involved in the traversal
-- **edgeCollections** (collection, *repeatable*): One or more edge collections
- to use for the traversal (instead of using a named graph with `GRAPH graphName`).
- Vertex collections are determined by the edges in the edge collections.
-
- You can override the default traversal direction by setting `OUTBOUND`,
- `INBOUND`, or `ANY` before any of the edge collections.
-
- If the same edge collection is specified multiple times, it behaves as if it
- were specified only once. Specifying the same edge collection is only allowed
- when the collections do not have conflicting traversal directions.
-
- Views cannot be used as edge collections.
-- See the [named graph variant](#working-with-named-graphs) for the remaining
- traversal parameters. The `edgeCollections` restriction option is redundant in
- this case.
-
-### Traversing in mixed directions
-
-For traversals with a list of edge collections you can optionally specify the
-direction for some of the edge collections. Say for example you have three edge
-collections *edges1*, *edges2* and *edges3*, where in *edges2* the direction has
-no relevance but in *edges1* and *edges3* the direction should be taken into account.
-In this case you can use `OUTBOUND` as general traversal direction and `ANY`
-specifically for *edges2* as follows:
-
-```aql
-FOR vertex IN OUTBOUND
- startVertex
- edges1, ANY edges2, edges3
-```
-
-All collections in the list that do not specify their own direction use the
-direction defined after `IN`. This allows using a different direction for each
-collection in your traversal.
-
-### Graph traversals in a cluster
-
-Due to the nature of graphs, edges may reference vertices from arbitrary
-collections. Following the paths can thus involve documents from various
-collections, and it is not possible to predict which ones are visited in a
-traversal. Which collections need to be loaded by the graph engine can only be
-determined at run time.
-
-Use the [`WITH` statement](../high-level-operations/with.md) to specify the collections you
-expect to be involved. This is required for traversals using collection sets
-in cluster deployments.
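-
-A minimal sketch, assuming the `circles` vertex collection and the `edges`
-edge collection of the `traversalGraph` example used further below:
-
-```aql
-WITH circles
-FOR v, e, p IN 1..2 OUTBOUND 'circles/A' edges
-  RETURN v._key
-```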
-
-## Pruning
-
-You can define stop conditions for graph traversals to return specific data and
-to improve the query performance. This is called _pruning_ and works by checking
-conditions during the traversal as opposed to filtering the results afterwards
-(post-filtering). This reduces the amount of data to be checked by stopping the
-traversal down specific paths early.
-
-{{< youtube id="4LVeeC0ciCQ" >}}
-
-You can specify one `PRUNE` expression per graph traversal, but it can contain
-an arbitrary number of conditions. You can use the vertex, edge, and path
-variables emitted by the traversal in a prune expression, as well as all other
-variables defined before the `FOR` operation. Note that `PRUNE` is an optional
-clause of the `FOR` operation and that the `OPTIONS` clause needs to be placed
-after `PRUNE`.
-
-```aql
----
-name: GRAPHTRAV_graphPruneExample1
-description: ''
-dataset: kShortestPathsGraph
----
-FOR v, e, p IN 0..10 OUTBOUND "places/Toronto" GRAPH "kShortestPathsGraph"
- PRUNE v.label == "Edmonton"
- OPTIONS { uniqueVertices: "path" }
- RETURN CONCAT_SEPARATOR(" -- ", p.vertices[*].label)
-```
-
-The above example shows a graph traversal using a
-[train station and connections dataset](../../graphs/example-graphs.md#k-shortest-paths-graph):
-
-
-
-The traversal starts at **Toronto** (bottom left), the traversal depth is
-limited to 10, and every station is only visited once. The traversal could
-continue up to **Vancouver** (bottom right) at depth 5, but it is stopped early
-on this path (the only path in this example) at **Edmonton** because of the
-prune expression.
-
-The traversal along paths is stopped as soon as the prune expression evaluates
-to `true` for a given path. The current depth is still included in the result,
-however. This can be seen in the query result of the example which includes the
-Edmonton vertex at which it stopped.
-
-The following example starts a traversal at **London** (middle right), with a
-depth between 2 and 3, and every station is only visited once. The station names
-as well as the travel times are returned:
-
-```aql
----
-name: GRAPHTRAV_graphPruneExample2
-description: ''
-dataset: kShortestPathsGraph
----
-FOR v, e, p IN 2..3 OUTBOUND "places/London" GRAPH "kShortestPathsGraph"
- OPTIONS { uniqueVertices: "path" }
- RETURN CONCAT_SEPARATOR(" -- ", INTERLEAVE(p.vertices[*].label, p.edges[*].travelTime))
-```
-
-The same example with an added prune expression, with vertex and edge conditions:
-
-```aql
----
-name: GRAPHTRAV_graphPruneExample3
-description: ''
-dataset: kShortestPathsGraph
----
-FOR v, e, p IN 2..3 OUTBOUND "places/London" GRAPH "kShortestPathsGraph"
- PRUNE v.label == "Carlisle" OR e.travelTime > 3
- OPTIONS { uniqueVertices: "path" }
- RETURN CONCAT_SEPARATOR(" -- ", INTERLEAVE(p.vertices[*].label, p.edges[*].travelTime))
-```
-
-If either the **Carlisle** vertex or an edge with a travel time of over three
-hours is encountered, the subsequent paths are pruned. In the example, this
-removes the train connections to **Birmingham**, **Glasgow**, and **York**,
-which come after **Carlisle**, as well as the connections to and via
-**Edinburgh** because of the four hour duration for the section from **York**
-to **Edinburgh**.
-
-If your graph consists of multiple vertex or edge collections, you can
-also prune as soon as you reach a certain collection, using a condition like
-`PRUNE IS_SAME_COLLECTION("stopCollection", v)`.
-
-If you want to only return the results of the depth at which the traversal
-stopped due to the prune expression, you can use a `FILTER` in addition. You can
-assign the evaluated result of a prune expression to a variable
-(`PRUNE var = `) and use it for filtering:
-
-```aql
----
-name: GRAPHTRAV_graphPruneExample4
-description: ''
-dataset: kShortestPathsGraph
----
-FOR v, e, p IN 2..3 OUTBOUND "places/London" GRAPH "kShortestPathsGraph"
- PRUNE cond = v.label == "Carlisle" OR e.travelTime > 3
- OPTIONS { uniqueVertices: "path" }
- FILTER cond
- RETURN CONCAT_SEPARATOR(" -- ", INTERLEAVE(p.vertices[*].label, p.edges[*].travelTime))
-```
-
-Only paths that end at **Carlisle** or with the last edge having a travel time
-of over three hours are returned. This excludes the connection to **Cologne**
-from the results compared to the previous query.
-
-If you want to exclude the depth at which the prune expression stopped the
-traversal, you can assign the expression to a variable and use its negated value
-in a `FILTER`:
-
-```aql
----
-name: GRAPHTRAV_graphPruneExample5
-description: ''
-dataset: kShortestPathsGraph
----
-FOR v, e, p IN 2..3 OUTBOUND "places/London" GRAPH "kShortestPathsGraph"
- PRUNE cond = v.label == "Carlisle" OR e.travelTime > 3
- OPTIONS { uniqueVertices: "path" }
- FILTER NOT cond
- RETURN CONCAT_SEPARATOR(" -- ", INTERLEAVE(p.vertices[*].label, p.edges[*].travelTime))
-```
-
-This only returns the connection to **Cologne**, which is the opposite of the
-previous example.
-
-You may combine the prune variable with arbitrary other conditions in a `FILTER`
-operation. For example, you can remove results where the last edge has a lower
-travel time than the second to last edge of the path:
-
-```aql
----
-name: GRAPHTRAV_graphPruneExample6
-description: ''
-dataset: kShortestPathsGraph
----
-FOR v, e, p IN 2..5 OUTBOUND "places/London" GRAPH "kShortestPathsGraph"
- PRUNE cond = v.label == "Carlisle" OR e.travelTime > 3
- OPTIONS { uniqueVertices: "path" }
- FILTER cond AND p.edges[-1].travelTime >= p.edges[-2].travelTime
- RETURN CONCAT_SEPARATOR(" -- ", INTERLEAVE(p.vertices[*].label, p.edges[*].travelTime))
-```
-
-{{< info >}}
-The prune expression is **evaluated at every step of the traversal**. This
-includes any traversal depths below the specified minimum depth, despite not
-becoming part of the result. It also includes depth 0, which is the start vertex
-and a `null` edge.
-
-If you add prune conditions using the edge variable, make sure to account for
-the edge at depth 0 being `null`, as it may accidentally stop the traversal
-immediately. This may not be apparent due to the depth constraints.
-{{< /info >}}
-
-The following example shows a graph traversal starting at **London**, with a
-traversal depth between 2 and 3, and every station is only visited once:
-
-```aql
----
-name: GRAPHTRAV_graphPruneExample7
-description: ''
-dataset: kShortestPathsGraph
----
-FOR v, e, p IN 2..3 OUTBOUND "places/London" GRAPH "kShortestPathsGraph"
- OPTIONS { uniqueVertices: "path" }
- RETURN CONCAT_SEPARATOR(" -- ", INTERLEAVE(p.vertices[*].label, p.edges[*].travelTime))
-```
-
-If you add prune conditions to stop the traversal if the station is **Glasgow**
-or the travel time is less than some number, no results are returned. This is even the
-case for a value of `2.5`, for which two paths exist that fulfill the criterion
-– to **Cologne** and **Carlisle**:
-
-```aql
----
-name: GRAPHTRAV_graphPruneExample8
-description: ''
-dataset: kShortestPathsGraph
----
-FOR v, e, p IN 2..3 OUTBOUND "places/London" GRAPH "kShortestPathsGraph"
- PRUNE v.label == "Glasgow" OR e.travelTime < 2.5
- OPTIONS { uniqueVertices: "path" }
- RETURN CONCAT_SEPARATOR(" -- ", INTERLEAVE(p.vertices[*].label, p.edges[*].travelTime))
-```
-
-The problem is that `null`, `false`, and `true` are all less than any number (`< 2.5`)
-because of AQL's [Type and value order](../fundamentals/type-and-value-order.md), and
-because the edge at depth 0 is always `null`. The prune condition is accidentally
-fulfilled at the start vertex, stopping the traversal too early. This similarly
-happens if you check an edge attribute for inequality (`!=`) and compare it to
-a string, for instance, which evaluates to `true` for the `null` value.
-
-The depth at which a traversal is stopped by pruning is considered a result,
-but in the above example, the minimum depth of `2` filters the start vertex out.
-If you lower the minimum depth to `0`, you get **London** as the sole result.
-This confirms that the traversal stopped at the start vertex.
-
-To avoid this problem, exclude the `null` value. For example, you can use
-`e.travelTime > 0 AND e.travelTime < 2.5`, but more generic solutions are to
-exclude depth 0 from the check (`LENGTH(p.edges) > 0`) or to simply ignore the
-`null` edge (`e != null`):
-
-```aql
----
-name: GRAPHTRAV_graphPruneExample9
-description: ''
-dataset: kShortestPathsGraph
----
-FOR v, e, p IN 2..3 OUTBOUND "places/London" GRAPH "kShortestPathsGraph"
- PRUNE v.label == "Glasgow" OR (e != null AND e.travelTime < 2.5)
- OPTIONS { uniqueVertices: "path" }
- RETURN CONCAT_SEPARATOR(" -- ", INTERLEAVE(p.vertices[*].label, p.edges[*].travelTime))
-```
-
-{{< warning >}}
-You can use AQL functions in prune expressions but only those that can be
-executed on DB-Servers, regardless of your deployment mode. The following
-functions cannot be used in the expression:
-- `CALL()`
-- `APPLY()`
-- `DOCUMENT()`
-- `V8()`
-- `SCHEMA_GET()`
-- `SCHEMA_VALIDATE()`
-- `VERSION()`
-- `COLLECTIONS()`
-- `CURRENT_USER()`
-- `CURRENT_DATABASE()`
-- `COLLECTION_COUNT()`
-- `NEAR()`
-- `WITHIN()`
-- `WITHIN_RECTANGLE()`
-- `FULLTEXT()`
-- [User-defined functions (UDFs)](../user-defined-functions.md)
-{{< /warning >}}
-
-## Using filters
-
-All three variables emitted by a traversal can also be used in filter
-statements. For some of these filter statements, the optimizer can detect that it
-is possible to prune paths of traversals earlier, so filtered results are
-not emitted to the variables in the first place. This may significantly
-improve the performance of your query. Whenever a filter is not fulfilled,
-the complete set of `vertex`, `edge` and `path` is skipped. All paths
-with a length greater than the `max` depth are never computed.
-
-Filter conditions that are `AND`-combined can be optimized, but `OR`-combined
-conditions cannot.
-
-### Filtering on paths
-
-Filtering on paths allows for the second most powerful filtering and may have
-the second highest impact on performance. Using the path variable, you can
-filter on specific iteration depths. You can filter for absolute positions in
-the path by specifying a positive number (which then qualifies for the
-optimizations), or for positions relative to the end of the path by specifying
-a negative number.
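-
-For instance, a minimal sketch (using the `traversalGraph` dataset from the
-examples below): a negative index addresses positions from the end of the path,
-so `p.vertices[-1]` is always the last vertex of the path, i.e. the same
-document as `v`:
-
-```aql
-FOR v, e, p IN 1..5 OUTBOUND 'circles/A' GRAPH 'traversalGraph'
-  FILTER p.vertices[-1]._key != 'G'
-  RETURN v._key
-```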
-
-#### Filtering edges on the path
-
-This example traversal filters all paths where the start edge (index 0) has the
-attribute `theTruth` equal to `true`. The resulting paths are up to 5 items long:
-
-```aql
----
-name: GRAPHTRAV_graphFilterEdges
-description: ''
-dataset: traversalGraph
----
-FOR v, e, p IN 1..5 OUTBOUND 'circles/A' GRAPH 'traversalGraph'
- FILTER p.edges[0].theTruth == true
- RETURN { vertices: p.vertices[*]._key, edges: p.edges[*].label }
-```
-
-#### Filtering vertices on the path
-
-Similar to filtering the edges on the path, you can also filter the vertices:
-
-```aql
----
-name: GRAPHTRAV_graphFilterVertices
-description: ''
-dataset: traversalGraph
----
-FOR v, e, p IN 1..5 OUTBOUND 'circles/A' GRAPH 'traversalGraph'
- FILTER p.vertices[1]._key == "G"
- RETURN { vertices: p.vertices[*]._key, edges: p.edges[*].label }
-```
-
-#### Combining several filters
-
-You can combine filters in any way you like:
-
-```aql
----
-name: GRAPHTRAV_graphFilterCombine
-description: ''
-dataset: traversalGraph
----
-FOR v, e, p IN 1..5 OUTBOUND 'circles/A' GRAPH 'traversalGraph'
- FILTER p.edges[0].theTruth == true
- AND p.edges[1].theFalse == false
- FILTER p.vertices[1]._key == "G"
- RETURN { vertices: p.vertices[*]._key, edges: p.edges[*].label }
-```
-
-The query filters all paths where the first edge has the attribute
-`theTruth` equal to `true`, the first vertex is `"G"` and the second edge has
-the attribute `theFalse` equal to `false`. The resulting paths are up to
-5 items long.
-
-**Note**: Despite the `min` depth of 1, this only returns results of
-depth 2. This is because for all results in depth 1, the second edge does not
-exist and hence cannot fulfill the condition here.
-
-#### Filter on the entire path
-
-With the help of array comparison operators, filters can also be defined
-on the entire path, for example that `ALL` edges should have `theTruth == true`:
-
-```aql
----
-name: GRAPHTRAV_graphFilterEntirePath
-description: ''
-dataset: traversalGraph
----
-FOR v, e, p IN 1..5 OUTBOUND 'circles/A' GRAPH 'traversalGraph'
- FILTER p.edges[*].theTruth ALL == true
- RETURN { vertices: p.vertices[*]._key, edges: p.edges[*].label }
-```
-
-Or `NONE` of the edges should have `theTruth == true`:
-
-```aql
----
-name: GRAPHTRAV_graphFilterPathEdges
-description: ''
-dataset: traversalGraph
----
-FOR v, e, p IN 1..5 OUTBOUND 'circles/A' GRAPH 'traversalGraph'
- FILTER p.edges[*].theTruth NONE == true
- RETURN { vertices: p.vertices[*]._key, edges: p.edges[*].label }
-```
-
-Both examples above are recognized by the optimizer and can potentially use
-indexes other than the edge index.
-
-It is also possible to define that at least one edge on the path has to fulfill the condition:
-
-```aql
----
-name: GRAPHTRAV_graphFilterPathAnyEdge
-description: ''
-dataset: traversalGraph
----
-FOR v, e, p IN 1..5 OUTBOUND 'circles/A' GRAPH 'traversalGraph'
- FILTER p.edges[*].theTruth ANY == true
- RETURN { vertices: p.vertices[*]._key, edges: p.edges[*].label }
-```
-
-This guarantees that at least one edge fulfills the condition, but potentially
-more do. All of the above filters can be defined on vertices in the exact same
-way.
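-
-For example, a sketch that requires `NONE` of the vertices on the path to have
-the key `G`:
-
-```aql
-FOR v, e, p IN 1..5 OUTBOUND 'circles/A' GRAPH 'traversalGraph'
-  FILTER p.vertices[*]._key NONE == 'G'
-  RETURN { vertices: p.vertices[*]._key, edges: p.edges[*].label }
-```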
-
-### Filtering on the path vs. filtering on vertices or edges
-
-Filtering on the path influences the iteration over your graph. If certain
-conditions aren't met, the traversal may stop continuing along this path.
-
-In contrast, filters on the vertex or edge variables only express whether
-you're interested in the actual values of these documents. They influence the
-list of returned documents (if you return `v` or `e`), similar to specifying a
-non-zero `min` depth. If you specify a `min` depth of 2, the traversal over the
-first two nodes of these paths still has to be executed, you just won't see
-them in your result array.
-
-Filters on vertices or edges work the same way: the traverser still has to walk
-along these nodes, since you may be interested in documents further down the
-path.
-
-### Examples
-
-Create a simple symmetric traversal demonstration graph:
-
-
-
-```js
----
-name: GRAPHTRAV_01_create_graph
-description: ''
----
-~addIgnoreCollection("circles");
-~addIgnoreCollection("edges");
-var examples = require("@arangodb/graph-examples/example-graph");
-var graph = examples.loadGraph("traversalGraph");
-db.circles.toArray();
-db.edges.toArray();
-print("once you don't need them anymore, clean them up:");
-examples.dropGraph("traversalGraph");
-```
-
-To get started, we select the full graph. For a better overview, we only return
-the vertex keys:
-
-```aql
----
-name: GRAPHTRAV_02_traverse_all_a
-description: ''
-dataset: traversalGraph
----
-FOR v IN 1..3 OUTBOUND 'circles/A' GRAPH 'traversalGraph'
- RETURN v._key
-```
-
-```aql
----
-name: GRAPHTRAV_02_traverse_all_b
-description: ''
-dataset: traversalGraph
----
-FOR v IN 1..3 OUTBOUND 'circles/A' edges RETURN v._key
-```
-
-We can nicely see that the traversal heads for the first outer vertex, then goes
-back to the branch to descend into the next tree. After that, it returns to our
-start node to descend again. Both queries return the same result; the first one
-uses the named graph, the second uses the edge collection directly.
-
-Now we only want the elements of a specific depth (min = max = 2), the ones that
-are right behind the fork:
-
-```aql
----
-name: GRAPHTRAV_03_traverse_3a
-description: ''
-dataset: traversalGraph
----
-FOR v IN 2..2 OUTBOUND 'circles/A' GRAPH 'traversalGraph'
- RETURN v._key
-```
-
-```aql
----
-name: GRAPHTRAV_03_traverse_3b
-description: ''
-dataset: traversalGraph
----
-FOR v IN 2 OUTBOUND 'circles/A' GRAPH 'traversalGraph'
- RETURN v._key
-```
-
-As you can see, we can express this in two ways: with or without the `max` depth
-parameter.
-
-### Filter examples
-
-Now let's start to add some filters. We want to cut off the branch on the right
-side of the graph. We may filter in two ways:
-
-- we know the vertex at depth 1 has the `_key` value `G`
-- we know the `label` attribute of the edge connecting **A** to **G** is `right_foo`
-
-```aql
----
-name: GRAPHTRAV_04_traverse_4a
-description: ''
-dataset: traversalGraph
----
-FOR v, e, p IN 1..3 OUTBOUND 'circles/A' GRAPH 'traversalGraph'
- FILTER p.vertices[1]._key != 'G'
- RETURN v._key
-```
-
-```aql
----
-name: GRAPHTRAV_04_traverse_4b
-description: ''
-dataset: traversalGraph
----
-FOR v, e, p IN 1..3 OUTBOUND 'circles/A' GRAPH 'traversalGraph'
- FILTER p.edges[0].label != 'right_foo'
- RETURN v._key
-```
-
-As we can see, all vertices behind **G** are skipped in both queries.
-The first filters on the vertex `_key`, the second on an edge label.
-Note again, as soon as a filter is not fulfilled for any of the three elements
-`v`, `e` or `p`, the complete set of these is excluded from the result.
-
-We also may combine several filters, for instance to filter out the right branch
-(**G**), and the **E** branch:
-
-```aql
----
-name: GRAPHTRAV_05_traverse_5a
-description: ''
-dataset: traversalGraph
----
-FOR v,e,p IN 1..3 OUTBOUND 'circles/A' GRAPH 'traversalGraph'
- FILTER p.vertices[1]._key != 'G'
- FILTER p.edges[1].label != 'left_blub'
- RETURN v._key
-```
-
-```aql
----
-name: GRAPHTRAV_05_traverse_5b
-description: ''
-dataset: traversalGraph
----
-FOR v,e,p IN 1..3 OUTBOUND 'circles/A' GRAPH 'traversalGraph'
- FILTER p.vertices[1]._key != 'G' AND p.edges[1].label != 'left_blub'
- RETURN v._key
-```
-
-As you can see, combining two `FILTER` statements with an `AND` has the same result.
-
-## Comparing OUTBOUND / INBOUND / ANY
-
-All our previous examples traversed the graph in `OUTBOUND` edge direction.
-You may however want to also traverse in reverse direction (`INBOUND`) or
-both (`ANY`). Since `circles/A` only has outbound edges, we start our queries
-from `circles/E`:
-
-```aql
----
-name: GRAPHTRAV_06_traverse_6a
-description: ''
-dataset: traversalGraph
----
-FOR v IN 1..3 OUTBOUND 'circles/E' GRAPH 'traversalGraph'
- RETURN v._key
-```
-
-```aql
----
-name: GRAPHTRAV_06_traverse_6b
-description: ''
-dataset: traversalGraph
----
-FOR v IN 1..3 INBOUND 'circles/E' GRAPH 'traversalGraph'
- RETURN v._key
-```
-
-```aql
----
-name: GRAPHTRAV_06_traverse_6c
-description: ''
-dataset: traversalGraph
----
-FOR v IN 1..3 ANY 'circles/E' GRAPH 'traversalGraph'
- RETURN v._key
-```
-
-The first traversal only walks in the forward (`OUTBOUND`) direction.
-Therefore, from **E** we can only see **F**. Walking in the reverse direction
-(`INBOUND`), we see the path to **A**: **B** → **A**.
-
-Walking in the forward and reverse directions (`ANY`), we can see a more diverse
-result. First of all, we see the simple paths to **F** and **A**. However, these
-vertices have edges in other directions, and they are traversed as well.
-
-**Note**: The traverser may use identical edges multiple times. For instance,
-if it walks from **E** to **F**, it continues to walk from **F** to **E**
-using the same edge once again. Due to this, we see duplicate nodes in the result.
-
-Please note that the direction can't be passed in by a bind parameter.
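-
-Other parts of the traversal can be bound, though. For example, a sketch that
-binds the start vertex and the depth range (the direction keyword itself has to
-be a literal):
-
-```aql
-FOR v IN @min..@max OUTBOUND @start GRAPH 'traversalGraph'
-  RETURN v._key
-```
-
-executed with bind parameters such as `{ "min": 1, "max": 3, "start": "circles/E" }`.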
-
-## Use the AQL explainer for optimizations
-
-Now let's have a look what the optimizer does behind the curtain and inspect
-traversal queries using [the explainer](../execution-and-performance/query-optimization.md):
-
-```aql
----
-name: GRAPHTRAV_07_traverse_7
-description: ''
-dataset: traversalGraph
-explain: true
----
-FOR v,e,p IN 1..3 OUTBOUND 'circles/A' GRAPH 'traversalGraph'
- LET localScopeVar = RAND() > 0.5
- FILTER p.edges[0].theTruth != localScopeVar
- RETURN v._key
-```
-
-```aql
----
-name: GRAPHTRAV_07_traverse_8
-description: ''
-dataset: traversalGraph
-explain: true
----
-FOR v,e,p IN 1..3 OUTBOUND 'circles/A' GRAPH 'traversalGraph'
- FILTER p.edges[0].label == 'right_foo'
- RETURN v._key
-```
-
-We now see two queries: in the first, we add a `localScopeVar` variable, which
-is outside the scope of the traversal itself and thus not known inside of the
-traverser. Therefore, this filter can only be executed after the traversal,
-which may be undesirable for large graphs. The second query, on the other hand,
-only operates on the path, so this condition can be used during the execution
-of the traversal. Paths that are filtered out by this condition won't be
-processed at all.
-
-And finally clean it up again:
-
-```js
----
-name: GRAPHTRAV_99_drop_graph
-description: ''
----
-~examples.loadGraph("traversalGraph");
-var examples = require("@arangodb/graph-examples/example-graph");
-examples.dropGraph("traversalGraph");
-```
-
-If this traversal is not powerful enough for your needs, for example, because
-you cannot describe your conditions as AQL filter statements, then you might
-want to have a look at the
-[edge collection methods](../../develop/javascript-api/@arangodb/collection-object.md#edge-documents)
-in the JavaScript API.
-
-Also see how to [combine graph traversals](../examples-and-query-patterns/traversals.md).
diff --git a/site/content/3.10/aql/how-to-invoke-aql/with-arangosh.md b/site/content/3.10/aql/how-to-invoke-aql/with-arangosh.md
deleted file mode 100644
index a2a7a53b53..0000000000
--- a/site/content/3.10/aql/how-to-invoke-aql/with-arangosh.md
+++ /dev/null
@@ -1,726 +0,0 @@
----
-title: Executing AQL queries from _arangosh_
-menuTitle: with arangosh
-weight: 5
-description: >-
- How to run queries, set bind parameters, and obtain the resulting and
- additional information using the JavaScript API
----
-In the ArangoDB shell, you can use the `db._query()` and `db._createStatement()`
-methods to execute AQL queries. This chapter also describes
-how to use bind parameters, counting, statistics and cursors.
-
-## With `db._query()`
-
-`db._query(<queryString>) → cursor`
-
-You can execute queries with the `_query()` method of the `db` object.
-This runs the specified query in the context of the currently
-selected database and returns the query results in a cursor.
-You can print the results of the cursor using its `toArray()` method:
-
-```js
----
-name: 01_workWithAQL_all
-description: ''
----
-~addIgnoreCollection("mycollection")
-var coll = db._create("mycollection")
-var doc = db.mycollection.save({ _key: "testKey", Hello : "World" })
-db._query('FOR my IN mycollection RETURN my._key').toArray()
-```
-
-### `db._query()` bind parameters
-
-`db._query(<queryString>, <bindVars>) → cursor`
-
-To pass bind parameters into a query, you can specify a second argument when
-calling the `_query()` method:
-
-```js
----
-name: 02_workWithAQL_bindValues
-description: ''
----
-db._query('FOR c IN @@collection FILTER c._key == @key RETURN c._key', {
- '@collection': 'mycollection',
- 'key': 'testKey'
-}).toArray();
-```
-
-### ES6 template strings
-
-`` aql`<queryTemplateString>` ``
-
-It is also possible to use ES6 template strings for generating AQL queries. There is
-a template string generator function named `aql`.
-
-The following example demonstrates what the template string function generates:
-
-```js
----
-name: 02_workWithAQL_aqlTemplateString
-description: ''
----
-var key = 'testKey';
-aql`FOR c IN mycollection FILTER c._key == ${key} RETURN c._key`
-```
-
-The next example directly uses the generated result to execute a query:
-
-```js
----
-name: 02_workWithAQL_aqlQuery
-description: ''
----
-var key = 'testKey';
-db._query(
- aql`FOR c IN mycollection FILTER c._key == ${key} RETURN c._key`
-).toArray();
-```
-
-Arbitrary JavaScript expressions can be used in queries that are generated with the
-`aql` template string generator. Collection objects are handled automatically:
-
-```js
----
-name: 02_workWithAQL_aqlCollectionQuery
-description: ''
----
-var key = 'testKey';
-db._query(aql`FOR doc IN ${ db.mycollection } RETURN doc`).toArray();
-```
-
-Note: data-modification AQL queries normally do not return a result unless the
-AQL query contains a `RETURN` operation at the top-level. Without a `RETURN`
-operation, the `toArray()` method returns an empty array.
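-
-A minimal sketch of a data-modification query that does return values, using
-the `NEW` pseudo-variable to refer to the inserted document:
-
-```js
-db._query(
-  'INSERT { value: 1 } INTO mycollection RETURN NEW'
-).toArray(); // returns the inserted document instead of an empty array
-```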
-
-### Statistics and extra information
-
-`cursor.getExtra() → queryInfo`
-
-It is always possible to retrieve statistics for a query with the `getExtra()` method:
-
-```js
----
-name: 03_workWithAQL_getExtra
-description: ''
----
-db._query(`
- FOR i IN 1..100
- INSERT { _key: CONCAT('test', TO_STRING(i)) } INTO mycollection
-`).getExtra();
-```
-
-The meaning of the statistics values is described in
-[Query statistics](../execution-and-performance/query-statistics.md).
-
-Query warnings are also reported here. If you design queries on the shell,
-be sure to check for warnings.
-
-### Main query options
-
-`db._query(<queryString>, <bindVars>, <mainOptions>, <subOptions>) → cursor`
-
-You can pass the main options as the third argument to `db._query()` if you
-also pass a fourth argument with the sub options (can be an empty object `{}`).
-
-#### `count`
-
-Whether the number of documents in the result set should be calculated on the
-server side and returned in the `count` attribute of the result. Calculating the
-`count` attribute might have a performance impact for some queries so this
-option is turned off by default, and only returned when requested.
-
-If enabled, you can get the count by calling the `count()` method of the cursor.
-You can also count the number of results on the client side, for example, using
-`cursor.toArray().length`.
-
-```js
----
-name: 02_workWithAQL_count
-description: ''
----
-var cursor = db._query(
- 'FOR i IN 1..42 RETURN i',
- {},
- { count: true },
- {}
-);
-cursor.count();
-cursor.toArray().length;
-```
-
-#### `batchSize`
-
-The maximum number of result documents to be transferred from the server to the
-client in one roundtrip. If this attribute is not set, a server-controlled
-default value is used. A `batchSize` value of `0` is disallowed.
-
-```js
----
-name: 02_workWithAQL_batchSize
-description: ''
----
-db._query(
- 'FOR i IN 1..3 RETURN i',
- {},
- { batchSize: 2 },
- {}
-).toArray(); // full result retrieved in two batches
-```
-
-#### `ttl`
-
-The time-to-live for the cursor (in seconds). If the result set is small enough
-(less than or equal to `batchSize`), then results are returned right away.
-Otherwise, they are stored in memory and are accessible via the cursor with
-respect to the `ttl`. The cursor is removed on the server automatically after
-the specified amount of time. This is useful to ensure garbage collection of
-cursors that are not fully fetched by clients. If not set, a server-defined
-value is used (default: 30 seconds).
-
-```js
----
-name: 02_workWithAQL_ttl
-description: ''
----
-db._query(
- 'FOR i IN 1..20 RETURN i',
- {},
- { ttl: 5, batchSize: 10 },
- {}
-).toArray(); // Each batch needs to be fetched within 5 seconds
-```
-
-#### `cache`
-
-Whether the AQL query results cache shall be used. If set to `false`, then any
-query cache lookup is skipped for the query. If set to `true`, it leads to the
-query cache being checked for the query **if** the query cache mode is either
-set to `on` or `demand`.
-
-```js
----
-name: 02_workWithAQL_cache
-description: ''
----
-db._query(
- 'FOR i IN 1..20 RETURN i',
- {},
- { cache: true },
- {}
-); // result may get taken from cache
-```
-
-#### `memoryLimit`
-
-To set a memory limit for the query, pass `options` to the `_query()` method.
-The memory limit specifies the maximum number of bytes that the query is
-allowed to use. When a single AQL query reaches the specified limit value,
-the query will be aborted with a *resource limit exceeded* exception. In a
-cluster, the memory accounting is done per shard, so the limit value is
-effectively a memory limit per query per shard.
-
-```js
----
-name: 02_workWithAQL_memoryLimit
-description: ''
----
-db._query(
- 'FOR i IN 1..100000 SORT i RETURN i',
- {},
- { memoryLimit: 100000 }
-).toArray(); // xpError(ERROR_RESOURCE_LIMIT)
-```
-
-If no memory limit is specified, then the server default value (controlled by
-the `--query.memory-limit` startup option) is used for restricting the maximum amount
-of memory the query can use. A memory limit value of `0` means that the maximum
-amount of memory for the query is not restricted.
-
-### Query sub options
-
-`db._query(<queryString>, <bindVars>, <subOptions>) → cursor`
-
-`db._query(<queryString>, <bindVars>, <mainOptions>, <subOptions>) → cursor`
-
-You can pass the sub options as the third argument to `db._query()` if you don't
-provide main options, or as fourth argument if you do.
-
-#### `fullCount`
-
-If you set `fullCount` to `true` and if the query contains a `LIMIT` operation, then the
-result has an extra attribute with the sub-attributes `stats` and `fullCount`, like
-`{ ... , "extra": { "stats": { "fullCount": 123 } } }`. The `fullCount` attribute
-contains the number of documents in the result before the last top-level `LIMIT` in the
-query was applied. It can be used to count the number of documents that match certain
-filter criteria, but only return a subset of them, in one go. It is thus similar to
-MySQL's `SQL_CALC_FOUND_ROWS` hint. Note that setting the option disables a few
-`LIMIT` optimizations and may lead to more documents being processed, and thus make
-queries run longer. Note that the `fullCount` attribute may only be present in the
-result if the query has a top-level `LIMIT` operation and the `LIMIT` operation
-is actually used in the query.
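-
-A minimal sketch of requesting `fullCount` via the sub options and reading it
-from the cursor's extra information:
-
-```js
-var cursor = db._query(
-  'FOR i IN 1..1000 LIMIT 10 RETURN i',
-  {},
-  { fullCount: true } // sub options as the third argument
-);
-cursor.getExtra().stats.fullCount; // 1000, although only 10 results are returned
-```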
-
-#### `failOnWarning`
-
-If you set `failOnWarning` to `true`, this makes the query throw an exception and
-abort in case a warning occurs. You should use this option in development to catch
-errors early. If set to `false`, warnings don't propagate to exceptions and are
-returned with the query results. There is also a `--query.fail-on-warning`
-startup option for setting the default value for `failOnWarning`, so that you
-don't need to set it on a per-query level.
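-
-A minimal sketch of enabling it for a single query (in AQL, a division by zero
-produces a warning and evaluates to `null`):
-
-```js
-db._query(
-  'RETURN 1 / 0',
-  {},
-  { failOnWarning: true } // sub options as the third argument
-); // throws an exception instead of returning [ null ]
-```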
-
-#### `cache`
-
-If you set `cache` to `true`, this puts the query result into the query result cache
-if the query result is eligible for caching and the query cache is running in demand
-mode. If set to `false`, the query result is not inserted into the query result
-cache. Note that query results are never inserted into the query result cache if
-the query result cache is disabled, and that they are automatically inserted into
-the query result cache if it is active in non-demand mode.
-
-#### `fillBlockCache`
-
-If you set `fillBlockCache` to `true` or not specify it, this makes the query store
-the data it reads via the RocksDB storage engine in the RocksDB block cache. This is
-usually the desired behavior. You can set the option to `false` for queries that are
-known to either read a lot of data that would thrash the block cache, or for queries
-that read data known to be outside of the hot set. By setting the option
-to `false`, data read by the query does not make it into the RocksDB block cache if
-it is not already in there, thus leaving more room for the actual hot set.
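-
-For instance, a sketch of disabling it for a scan over presumably cold data:
-
-```js
-db._query(
-  'FOR doc IN mycollection RETURN doc._key',
-  {},
-  { fillBlockCache: false } // don't displace the hot set in the block cache
-).toArray();
-```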
-
-#### `profile`
-
-If you set `profile` to `true` or `1`, extra timing information is returned for
-the query. The timing information is accessible via the `getExtra()` method of
-the query result. If you set it to `2`, the query includes execution statistics
-per query plan execution node in the `stats.nodes` sub-attribute of the `extra`
-return attribute. Additionally, the query plan is returned in the `extra.plan`
-sub-attribute.
-
-#### `maxWarningCount`
-
-The `maxWarningCount` option limits the number of warnings that are returned by the query if
-`failOnWarning` is not set to `true`. The default value is `10`.
-
-#### `maxNumberOfPlans`
-
-The `maxNumberOfPlans` option limits the number of query execution plans the optimizer
-creates at most. Reducing the number of query execution plans may speed up query plan
-creation and optimization for complex queries, but normally there is no need to adjust
-this value.
-
-#### `optimizer`
-
-Options related to the query optimizer.
-
-- `rules`: A list of to-be-included or to-be-excluded optimizer rules can be put into
- this attribute, telling the optimizer to include or exclude specific rules. To disable
- a rule, prefix its name with a `-`, to enable a rule, prefix it with a `+`. There is also
- a pseudo-rule `all`, which matches all optimizer rules. `-all` disables all rules.
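-
-For example, a sketch that disables all optional rules and then re-enables a
-single one (the rule name is illustrative):
-
-```js
-db._query(
-  'FOR i IN 1..10 SORT i LIMIT 5 RETURN i',
-  {},
-  { optimizer: { rules: ["-all", "+sort-limit"] } }
-).toArray();
-```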
-
-#### `stream`
-
-Set `stream` to `true` to execute the query in a **streaming** fashion.
-The query result is not stored on the server, but calculated on the fly.
-
-{{< warning >}}
-Long-running queries need to hold the collection locks for as long as the query
-cursor exists. It is advisable to **only** use this option on short-running
-queries **or** without exclusive locks.
-{{< /warning >}}
-
-If set to `false`, the query is executed right away in its entirety.
-In that case, the query results are either returned right away (if the result
-set is small enough), or stored on the arangod instance and can be accessed
-via the cursor API.
-
-The default value is `false`.
-
-{{< info >}}
-The query options `cache`, `count` and `fullCount` don't work on streaming
-queries. Additionally, query statistics, profiling data, and warnings are only
-available after the query has finished and are delivered as part of the last batch.
-{{< /info >}}
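-
-A sketch of requesting a streaming cursor via the sub options:
-
-```js
-var cursor = db._query(
-  'FOR i IN 1..1000000 RETURN i',
-  {},
-  { stream: true } // results are calculated on the fly, not stored on the server
-);
-while (cursor.hasNext()) {
-  cursor.next(); // process each value as it is fetched
-}
-```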
-
-#### `maxRuntime`
-
-The query has to be executed within the given runtime or it is killed.
-The value is specified in seconds. The default value is `0.0` (no timeout).
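-
-For example, a sketch that limits a query to two seconds of runtime:
-
-```js
-db._query(
-  'FOR i IN 1..100000000 RETURN i',
-  {},
-  { maxRuntime: 2.0 } // abort the query if it runs longer than 2 seconds
-).toArray(); // fails with an error if the time limit is exceeded
-```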
-
-#### `maxNodesPerCallstack`
-
-The number of execution nodes in the query plan after
-which stack splitting is performed to avoid a potential stack overflow.
-Defaults to the configured value of the startup option
-`--query.max-nodes-per-callstack`.
-
-This option is only useful for testing and debugging and normally does not need
-any adjustment.
-
-#### `maxTransactionSize`
-
-The transaction size limit in bytes.
-
-#### `intermediateCommitSize`
-
-The maximum total size of operations after which an intermediate
-commit is performed automatically.
-
-#### `intermediateCommitCount`
-
-The maximum number of operations after which an intermediate
-commit is performed automatically.
-
-#### `spillOverThresholdMemoryUsage`
-
-Introduced in: v3.10.0
-
-This option allows queries to store intermediate and final results temporarily
-on disk if the amount of memory used (in bytes) exceeds the specified value.
-This is used for decreasing the memory usage during the query execution.
-
-This option only has an effect on queries that use the `SORT` operation but
-without a `LIMIT`, and if you enable the spillover feature by setting a path
-for the directory to store the temporary data in with the
-[`--temp.intermediate-results-path` startup option](../../components/arangodb-server/options.md#--tempintermediate-results-path).
-
-Default value: 128MB.
-
-{{< info >}}
-Spilling data from RAM onto disk is an experimental feature and is turned off
-by default. The query results are still built up entirely in RAM on Coordinators
-and single servers for non-streaming queries. To avoid the buildup of
-the entire query result in RAM, use a streaming query (see the
-[`stream`](#stream) option).
-{{< /info >}}
-
-#### `spillOverThresholdNumRows`
-
-Introduced in: v3.10.0
-
-This option allows queries to store intermediate and final results temporarily
-on disk if the number of rows produced by the query exceeds the specified value.
-This is used for decreasing the memory usage during the query execution. In a
-query that iterates over a collection that contains documents, each row is a
-document, and in a query that iterates over temporary values
-(e.g. `FOR i IN 1..100`), each row is one such temporary value.
-
-This option only has an effect on queries that use the `SORT` operation but
-without a `LIMIT`, and if you enable the spillover feature by setting a path
-for the directory to store the temporary data in with the
-[`--temp.intermediate-results-path` startup option](../../components/arangodb-server/options.md#--tempintermediate-results-path).
-
-Default value: `5000000` rows.
-
-{{< info >}}
-Spilling data from RAM onto disk is an experimental feature and is turned off
-by default. The query results are still built up entirely in RAM on Coordinators
-and single servers for non-streaming queries. To avoid the buildup of
-the entire query result in RAM, use a streaming query (see the
-[`stream`](#stream) option).
-{{< /info >}}
-
-#### `allowDirtyReads`
-
-{{< tag "ArangoDB Enterprise Edition" "ArangoGraph" >}}
-
-Introduced in: v3.10.0
-
-If you set this option to `true` and execute the query against a cluster
-deployment, then the Coordinator is allowed to read from any shard replica and
-not only from the leader. See [Read from followers](../../develop/http-api/documents.md#read-from-followers)
-for details.
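-
-A sketch of allowing dirty reads for a single query:
-
-```js
-db._query(
-  'FOR doc IN mycollection RETURN doc',
-  {},
-  { allowDirtyReads: true } // the Coordinator may read from shard followers
-).toArray();
-```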
-
-#### `skipInaccessibleCollections`
-
-{{< tag "ArangoDB Enterprise Edition" "ArangoGraph" >}}
-
-Let AQL queries (especially graph traversals) treat collections to which a
-user has **no access** rights as if these collections were empty.
-Instead of returning a *forbidden access* error, your queries execute normally.
-This is intended to help with certain use cases: a graph contains several
-collections, and different users execute AQL queries on that graph. You can
-naturally limit the accessible results by changing the access rights of users
-on collections.
-
-#### `satelliteSyncWait`
-
-{{< tag "ArangoDB Enterprise Edition" "ArangoGraph" >}}
-
-Configure how much time a DB-Server has to bring the SatelliteCollections
-involved in the query into sync. The default value is `60.0` seconds.
-When the maximal time is reached, the query is stopped.
-
-## With `db._createStatement()` (ArangoStatement)
-
-The `_query()` method is a shorthand for creating an `ArangoStatement` object,
-executing it and iterating over the resulting cursor. If more control over the
-result set iteration is needed, it is recommended to first create an
-`ArangoStatement` object as follows:
-
-```js
----
-name: 04_workWithAQL_statements1
-description: ''
----
-stmt = db._createStatement( { "query": "FOR i IN [ 1, 2 ] RETURN i * 2" } );
-```
-
-To execute the query, use the `execute()` method of the _statement_ object:
-
-```js
----
-name: 05_workWithAQL_statements2
-description: ''
----
-~var stmt = db._createStatement( { "query": "FOR i IN [ 1, 2 ] RETURN i * 2" } );
-cursor = stmt.execute();
-```
-
-You can pass a number to the `execute()` method to specify a batch size value.
-The server returns at most this many results in one roundtrip.
-The batch size cannot be adjusted after the query is first executed.
-
-**Note**: There is no need to explicitly call the execute method if another
-means of fetching the query results is chosen. The following two approaches
-lead to the same result:
-
-```js
----
-name: executeQueryNoBatchSize
-description: ''
----
-~db._create("users");
-~db.users.save({ name: "Gerhard" });
-~db.users.save({ name: "Helmut" });
-~db.users.save({ name: "Angela" });
-var result = db.users.all().toArray();
-print(result);
-
-var q = db._query("FOR x IN users RETURN x");
-result = [ ];
-while (q.hasNext()) {
- result.push(q.next());
-}
-print(result);
-~db._drop("users")
-```
-
-The following two alternatives both use a batch size and return the same
-result:
-
-```js
----
-name: executeQueryBatchSize
-description: ''
----
-~db._create("users");
-~db.users.save({ name: "Gerhard" });
-~db.users.save({ name: "Helmut" });
-~db.users.save({ name: "Angela" });
-var result = [ ];
-var q = db.users.all();
-q.execute(1);
-while(q.hasNext()) {
- result.push(q.next());
-}
-print(result);
-
-result = [ ];
-q = db._query("FOR x IN users RETURN x", {}, { batchSize: 1 });
-while (q.hasNext()) {
- result.push(q.next());
-}
-print(result);
-~db._drop("users")
-```
-
-### Cursors
-
-Once the query is executed, the query results are available in a cursor.
-The cursor can return all of its results at once using the `toArray()` method.
-This is a shortcut that you can use if you want to access the full result
-set without iterating over it yourself.
-
-```js
----
-name: 05_workWithAQL_statements3
-description: ''
----
-~var stmt = db._createStatement( { "query": "FOR i IN [ 1, 2 ] RETURN i * 2" } );
-~var cursor = stmt.execute();
-cursor.toArray();
-```
-
-Cursors can also be used to iterate over the result set document-by-document.
-To do so, use the `hasNext()` and `next()` methods of the cursor:
-
-```js
----
-name: 05_workWithAQL_statements4
-description: ''
----
-~var stmt = db._createStatement( { "query": "FOR i IN [ 1, 2 ] RETURN i * 2" } );
-~var c = stmt.execute();
-while (c.hasNext()) {
- require("@arangodb").print(c.next());
-}
-```
-
-Please note that you can iterate over the results of a cursor only once, and that
-the cursor will be empty when you have fully iterated over it. To iterate over
-the results again, the query needs to be re-executed.
-
-Additionally, the iteration can be done in a forward-only fashion. There is no
-backwards iteration or random access to elements in a cursor.
-
-### ArangoStatement parameters binding
-
-To execute an AQL query using bind parameters, you need to create a statement first
-and then bind the parameters to it before execution:
-
-```js
----
-name: 05_workWithAQL_statements5
-description: ''
----
-var stmt = db._createStatement( { "query": "FOR i IN [ @one, @two ] RETURN i * 2" } );
-stmt.bind("one", 1);
-stmt.bind("two", 2);
-cursor = stmt.execute();
-```
-
-The cursor results can then be dumped or iterated over as usual, e.g.:
-
-```js
----
-name: 05_workWithAQL_statements6
-description: ''
----
-~var stmt = db._createStatement( { "query": "FOR i IN [ @one, @two ] RETURN i * 2" } );
-~stmt.bind("one", 1);
-~stmt.bind("two", 2);
-~var cursor = stmt.execute();
-cursor.toArray();
-```
-
-or
-
-```js
----
-name: 05_workWithAQL_statements7
-description: ''
----
-~var stmt = db._createStatement( { "query": "FOR i IN [ @one, @two ] RETURN i * 2" } );
-~stmt.bind("one", 1);
-~stmt.bind("two", 2);
-~var cursor = stmt.execute();
-while (cursor.hasNext()) {
- require("@arangodb").print(cursor.next());
-}
-```
-
-Please note that bind parameters can also be passed into the `_createStatement()`
-method directly, making it a bit more convenient:
-
-```js
----
-name: 05_workWithAQL_statements8
-description: ''
----
-stmt = db._createStatement({
- "query": "FOR i IN [ @one, @two ] RETURN i * 2",
- "bindVars": {
- "one": 1,
- "two": 2
- }
-});
-```
-
-### Counting with a cursor
-
-Cursors also optionally provide the total number of results. By default, they do not.
-To make the server return the total number of results, you may set the `count` attribute to
-`true` when creating a statement:
-
-```js
----
-name: 05_workWithAQL_statements9
-description: ''
----
-stmt = db._createStatement( {
- "query": "FOR i IN [ 1, 2, 3, 4 ] RETURN i",
- "count": true } );
-```
-
-After executing this query, you can use the `count` method of the cursor to get the
-number of total results from the result set:
-
-```js
----
-name: 05_workWithAQL_statements10
-description: ''
----
-~var stmt = db._createStatement( { "query": "FOR i IN [ 1, 2, 3, 4 ] RETURN i", "count": true } );
-var cursor = stmt.execute();
-cursor.count();
-```
-
-Please note that the `count` method returns nothing if you did not specify the `count`
-attribute when creating the query.
-
-This is intentional so that the server may apply optimizations when executing the
-query and construct the result set incrementally. Incremental creation of the
-result sets is not possible if all of the results need to be shipped to the
-client anyway. Therefore, the client has the choice to specify `count` and
-retrieve the total number of results for a query (and disable potential
-incremental result set creation on the server), or to not retrieve the total
-number of results and allow the server to apply optimizations.
-
-Please note that at the moment the server will always create the full result set for each query so
-specifying or omitting the `count` attribute currently does not have any impact on query execution.
-This may change in the future. Future versions of ArangoDB may create result sets incrementally
-on the server-side and may be able to apply optimizations if a result set is not fully fetched by
-a client.
-
-### Using cursors to obtain additional information on internal timings
-
-Cursors can also optionally provide statistics about the internal execution
-phases. By default, they do not. To find out how long parsing, optimization,
-instantiation, and execution took, make the server return this information by
-setting the `profile` attribute to `true` when creating a statement:
-
-```js
----
-name: 06_workWithAQL_statements11
-description: ''
----
-stmt = db._createStatement({
- query: "FOR i IN [ 1, 2, 3, 4 ] RETURN i",
- options: {"profile": true}});
-```
-
-After executing this query, you can use the `getExtra()` method of the cursor to get the
-produced statistics:
-
-```js
----
-name: 06_workWithAQL_statements12
-description: ''
----
-~var stmt = db._createStatement( { "query": "FOR i IN [ 1, 2, 3, 4 ] RETURN i", options: {"profile": true}} );
-var cursor = stmt.execute();
-cursor.getExtra();
-```
-
-## Query validation with `db._parse()`
-
-The `_parse()` method of the `db` object can be used to parse and validate a
-query syntactically, without actually executing it.
-
-```js
----
-name: 06_workWithAQL_statements13
-description: ''
----
-db._parse( "FOR i IN [ 1, 2 ] RETURN i" );
-```
diff --git a/site/content/3.10/arangograph/_index.md b/site/content/3.10/arangograph/_index.md
deleted file mode 100644
index 9ba6efedf4..0000000000
--- a/site/content/3.10/arangograph/_index.md
+++ /dev/null
@@ -1,38 +0,0 @@
----
-title: ArangoGraph Insights Platform
-menuTitle: ArangoGraph
-weight: 65
-description: >-
- The ArangoGraph Insights Platform provides the entire functionality of
- ArangoDB as a service, without the need to run or manage databases yourself
-aliases:
- - arangograph/changelog
----
-The [ArangoGraph Insights Platform](https://dashboard.arangodb.cloud/home?utm_source=docs&utm_medium=cluster_pages&utm_campaign=docs_traffic),
-formerly called Oasis, provides ArangoDB databases as a Service (DBaaS).
-It enables you to use the entire functionality of an ArangoDB cluster
-deployment without the need to run or manage the system yourself.
-
-The ArangoGraph Insights Platform...
-
-- runs your databases in data centers of the cloud provider
- of your choice: Google Cloud Platform (GCP), Amazon Web Services (AWS),
- Microsoft Azure. This optimizes performance and reduces cost.
-
-- ensures that your databases are always available and
- healthy by monitoring them 24/7.
-
-- ensures that your databases are kept up to date by
- installing new versions without service interruption.
-
-- ensures that your data is safe by providing encryption &
- audit logs and making frequent data backups.
-
-- guarantees that your data always remains your property and
- access to it is protected with industry standard safeguards.
-
-For more information, see
-[dashboard.arangodb.cloud](https://dashboard.arangodb.cloud/home?utm_source=docs&utm_medium=cluster_pages&utm_campaign=docs_traffic).
-
-For a quick start guide, see
-[Use ArangoDB in the Cloud](../get-started/set-up-a-cloud-instance.md).
diff --git a/site/content/3.10/arangograph/api/_index.md b/site/content/3.10/arangograph/api/_index.md
deleted file mode 100644
index ee4f21371f..0000000000
--- a/site/content/3.10/arangograph/api/_index.md
+++ /dev/null
@@ -1,37 +0,0 @@
----
-title: The ArangoGraph API
-menuTitle: ArangoGraph API
-weight: 60
-description: >-
- Interface to control all resources inside ArangoGraph in a scriptable manner
-aliases:
- - arangograph-api
----
-The [ArangoGraph Insights Platform](https://dashboard.arangodb.cloud/home?utm_source=docs&utm_medium=cluster_pages&utm_campaign=docs_traffic)
-comes with its own API. This API enables you to control all
-resources inside ArangoGraph in a scriptable manner. Typical use cases are spinning
-up ArangoGraph deployments during continuous integration and infrastructure as code.
-
-The ArangoGraph API…
-
-- is a well-specified API that uses
- [Protocol Buffers](https://developers.google.com/protocol-buffers/)
- as interface definition and [gRPC](https://grpc.io/) as
- underlying protocol.
-
-- allows for automatic generation of clients for a large list of languages.
- A Go client is available out of the box.
-
-- uses API keys for authentication. API keys impersonate a user and inherit
- the permissions of that user.
-
-- is also available as a command-line tool called [oasisctl](../oasisctl/_index.md).
-
-- is also available as a
- [Terraform plugin](https://github.com/arangodb-managed/terraform-provider-oasis/).
- This plugin makes integration of ArangoGraph in infrastructure as code projects
- very simple. To learn more, refer to the [plugin documentation](https://registry.terraform.io/providers/arangodb-managed/oasis/latest/docs).
-
-Also see:
-- [github.com/arangodb-managed/apis](https://github.com/arangodb-managed/apis/)
-- [API definitions](https://arangodb-managed.github.io/apis/index.html)
diff --git a/site/content/3.10/arangograph/api/get-started.md b/site/content/3.10/arangograph/api/get-started.md
deleted file mode 100644
index ee72c989a8..0000000000
--- a/site/content/3.10/arangograph/api/get-started.md
+++ /dev/null
@@ -1,481 +0,0 @@
----
-title: Get started with the ArangoGraph API and Oasisctl
-menuTitle: Get started with Oasisctl
-weight: 10
-description: >-
- A tutorial that guides you through the ArangoGraph API as well as the Oasisctl
- command-line tool
-aliases:
- - ../arangograph-api/getting-started
----
-This tutorial shows you how to do the following:
-
-- Generate an API key and authenticate with Oasisctl
-- View information related to your organizations, projects, and deployments
-- Configure, create and delete a deployment
-
-With Oasisctl the general command structure is to execute commands such as:
-
-```bash
-oasisctl list deployments
-```
-
-This command lists all deployments available to the authenticated user; we
-will explore it in more detail later. Most commands also have associated
-`--flags` that are required or provide additional options, which aligns with
-how many command line utilities work. If you aren't already familiar with
-this, follow along: the many examples in this guide will familiarize you with
-this command structure and the use of flags, along with how to use Oasisctl
-to access the ArangoGraph API.
-
-Note: A good rule of thumb for all variables, resource names, and identifiers
-used with Oasisctl is to **assume they are all case sensitive**.
-
-## API Authentication
-
-### Generating an API Key
-
-The first step to using the ArangoGraph API is to generate an API key. To generate a
-key you will need to be signed into your account at
-[dashboard.arangodb.cloud](https://dashboard.arangodb.cloud/home?utm_source=docs&utm_medium=cluster_pages&utm_campaign=docs_traffic).
-Once you are signed in, hover over the profile icon in the top right corner.
-
-
-
-Click _My API keys_.
-
-This will bring you to your API key management screen. From this screen you can
-create, revoke, and delete API keys.
-
-Click the _New API key_ button.
-
-
-
-The pop-up box that follows has a few options for customizing the access level
-of this API key.
-
-The options you have available include:
-
-- Limit access to 1 organization or all organizations this user has access to
-- Set an expiration time, specified in number of hours
-- Limit key to read-only access
-
-Once you have configured the API key access options, you will be presented with
-your API key ID and API key secret. It is very important that you capture the
-API key secret before clicking the close button. There is no way to retrieve
-the API key secret after closing this pop-up window.
-
-
-
-Once you have securely stored your API key ID and secret, click close.
-
-That is all there is to setting up API access to your ArangoGraph organizations.
-
-### Authenticating with Oasisctl
-
-Now that you have API access it is time to login with Oasisctl.
-
-Running the Oasisctl utility without any arguments is the equivalent of
-including the `--help` flag. This shows all of the top-level commands available,
-and you can continue exploring each command by typing the command name
-followed by the `--help` flag to see the options available for that command.
-
-Let’s start with doing that for the login command:
-
-```bash
-oasisctl login --help
-```
-
-You should see an output similar to this:
-
-
-
-This shows two additional flags are available, aside from the help flag.
-
-- `--key-id`
-- `--key-secret`
-
-These require the values we received when creating the API key. Once you run
-this command you will receive an authentication token that can be used for the
-remainder of the session.
-
-```bash
-oasisctl login \
- --key-id cncApiKeyId \
- --key-secret 873-secret-key-id
-```
-
-Upon successful login you should receive an authentication token:
-
-
-
-Depending on your environment, you could instead store this token for easier
-access. For example:
-
-With Linux:
-
-```bash
-export OASIS_TOKEN=$(oasisctl login --key-id cncApiKeyId --key-secret 873-secret-key-id)
-```
-
-Or Windows Powershell:
-
-```powershell
-setx OASIS_TOKEN (oasisctl login --key-id cncApiKeyId --key-secret 873-secret-key-id)
-```
-
-In the coming sections you will see how to authenticate with this token when
-using other commands that require authentication.
-
-## Viewing and Managing Organizations and Deployments
-
-### Format
-
-This section covers the basics of retrieving information from the ArangoGraph API.
-Depending on the data you are requesting from the ArangoGraph API, being able to read
-it in the command line can start to become difficult. To make text easier to
-read for humans and your applications, Oasisctl offers two options for
-formatting the data received:
-
-- Table
-- JSON
-
-You can define the format of the data by supplying the `--format` flag along
-with your preferred format, like so:
-
-```bash
-oasisctl --format json
-```
-
-### Viewing Information with the List Command
-
-This section will cover the two main functions of retrieving data with the
-ArangoGraph API. These are:
-
-- `list` - List resources
-- `get` - Get information
-
-Before you can jump right into making new deployments, you need to be aware of
-what resources you have available. This is where the list command comes in.
-List serves as a way to retrieve general information; you can see all of the
-available list options by accessing its help output.
-
-```bash
-oasisctl list --help
-```
-
-This should output a screen similar to:
-
-
-
-As you can see you can get information on anything you would need about your
-ArangoGraph organizations, deployments, and access control. To start, let’s take a
-look at a few examples of listing information and then getting more details on
-our results.
-
-### List Organizations
-
-One of the first pieces of information you may be interested in is the
-organizations you have access to. This is useful to know because most commands
-require an explicit declaration of the organization you are interacting with.
-To find this, use list to list your available organizations:
-
-```bash
-oasisctl list organizations --format json
-```
-
-Once you have your available organizations you can refer to your desired
-organization using its name or id.
-
-
-
-Note: You may also notice the `url` attribute. It is for internal use only and
-should not be treated as a publicly accessible path.
-
-### List Projects
-
-Once you have the organization name that you wish to interact with, the next
-step is to list the available projects within that organization. Do this by
-following the same command structure as before and instead exchange
-organizations for projects, this time providing the desired organization name
-with the `--organization-id` flag.
-
-```bash
-oasisctl list projects \
- --organization-id "ArangoGraph Organization" \
- --format json
-```
-
-This will return information on all projects that the authenticated user has
-access to.
-
-
-
-### List Deployments
-
-Things start getting a bit more interesting with information related to
-deployments. Now that you have obtained an organization ID and a project ID,
-you can list all of the associated deployments for that project.
-
-```bash
-oasisctl list deployments \
- --organization-id "ArangoGraph Organization" \
- --project-id "Getting Started with ArangoGraph" \
- --format json
-```
-
-
-
-This provides some basic details for all of the deployments associated with the
-project. Namely, it provides a deployment ID which we can use to start making
-modifications to the deployment or to get more detailed information, with the
-`get` command.
-
-### Using the Get Command
-
-In Oasisctl, you use the get command to obtain more detailed information about
-any of your available resources. It follows the same command structure as the
-previous commands but typically requires a bit more information. For example,
-getting more information on a specific deployment means you need to know at
-least:
-
-- Organization ID
-- Project ID
-- Deployment ID
-
-To get more information about our example deployment we would need to execute
-the following command:
-
-```bash
-oasisctl get deployment \
- --organization-id "ArangoGraph Organization" \
- --project-id "Getting Started with ArangoGraph" \
- --deployment-id "abc123DeploymentID" \
- --format json
-```
-
-This returns quite a bit more information about the deployment including more
-detailed server information, the endpoint URL where you can access the web interface,
-and optionally the root user password.
-
-
-
-### Node Size ID
-
-We won't be exploring every flag available for creating a deployment, but it is
-a good idea to explore the concept of the node size ID value. This is an
-identifier that is unique to each provider (Google, Azure, AWS) and indicates
-the CPU and memory of a node. Depending on the provider and region, this can
-also determine the available disk sizes for your deployment. In other words, it
-is pretty important to know which `node-size-id` your deployment will be using.
-
-The command you execute depends on the available providers and regions
-for your organization, but here is an example command that lists the available
-options in the US West region for the Google Cloud Platform:
-
-```bash
-oasisctl list nodesizes \
- --organization-id "ArangoGraph Organization" \
- --provider-id "Google Cloud Platform" \
- --region-id gcp-us-west2
-```
-
-The output you will see will be similar to this:
-
-
-
-It is important to note that you can scale up to a larger disk size, but you are
-unable to scale down your deployment's disk size. The only way to revert to
-a smaller disk size is to destroy and recreate your deployment.
-
-Once you have decided what your starting deployment needs are, you can reference
-your decision with the ID value of the corresponding configuration. So, for
-our example, we will be choosing the c4-a4 configuration. The availability and
-options are different for each provider and region, so be sure to confirm the
-node size options before creating a new deployment.
-
-### Challenge
-
-You can use this combination of listing and getting to obtain all of the
-information you want for your ArangoGraph organizations. We only explored a few of
-the commands available but you can explore them all within the utility by
-utilizing the `--help` flag or you can see all of the available options
-in the [documentation](../oasisctl/options.md).
-
-Something that might be useful practice before moving on is getting the rest
-of the information that you need to create a deployment. Here is a list of
-items that won't have defaults available when you attempt to create your
-first deployment and that you will need to supply:
-
-- CA Certificate ID (name)
-- IP Allowlist ID (id) (optional)
-- Node Size ID (id)
-- Node Disk Size (GB disk size dependent on Node Size ID)
-- Organization ID (name)
-- Project ID (name)
-- Region ID (name)
-
-Try looking up that information to get more familiar with how to find
-information with Oasisctl. When in doubt use the `--help` flag with any
-command.
-
-## Creating Resources
-
-Now that you have seen how to obtain information about your available
-resources, it's time to use those skills to create your own deployment.
-To create resources with Oasisctl, you use the create command.
-To see all the possible options, you can start with the following command:
-
-```bash
-oasisctl create --help
-```
-
-
-
-### Create a Deployment
-
-To take a look at all of the options available when creating a deployment the
-best place to start is with our trusty help command.
-
-```bash
-oasisctl create deployment --help
-```
-
-
-
-As you can see there are a lot of default options but also a few that require
-some knowledge of our pre-existing resources. Attempting to create a deployment
-without one of the required options will return an error indicating which value
-is missing or invalid.
-
-Once you have collected all of the necessary information, creating a deployment
-is simply a matter of supplying the values along with the appropriate flags.
-This command will create a deployment:
-
-```bash
-oasisctl create deployment \
- --region-id gcp-us-west2 \
- --node-size-id c4-a4 \
- --node-disk-size 10 \
- --version 3.9.2 \
- --cacertificate-id OasisCert \
- --organization-id "ArangoGraph Organization" \
- --project-id "Getting Started with ArangoGraph" \
- --name "First Oasisctl Deployment" \
- --description "The first deployment created using the awesome Oasisctl utility!"
-```
-
-If everything went according to plan, you should see similar output:
-
-
-
-### Wait on Deployment Status
-
-When you create a deployment, it begins the process of _bootstrapping_, which
-gets the deployment ready for use. This should happen quickly. To see if it is
-ready for use, you can run the wait command using the ID of the newly
-created deployment, shown at the top of the information you received above.
-
-```bash
-oasisctl wait deployment \
- --deployment-id hmkuedzw9oavvjmjdo0i
-```
-
-Once you receive a response of _Deployment Ready_, your deployment is indeed
-ready to use. You can get some new details by running the get command.
-
-```bash
-oasisctl get deployment \
- --organization-id "ArangoGraph Organization" \
- --deployment-id hmkuedzw9oavvjmjdo0i
-```
-
-
-
-Once the deployment is ready, you will get two new pieces of information: the
-endpoint URL, and the Bootstrapped-At timestamp that indicates when the
-deployment became available. If you would like to log in to the web interface
-to verify that your server is in fact up and running, you need to supply the
-`--show-root-password` flag along with the get command. This flag does not
-take a value.
-
-### The Update Command
-
-The inevitable time comes when something about your deployment must change, and
-this is where the update command comes in. You can use update to change a
-number of things, including the groups, policies, and roles
-for user access control. You can also update some of your deployment
-information or, for our situation, add an IP Allowlist if you didn't add one
-during creation.
-
-There are, of course, many options available, and it is always recommended to
-start with the `--help` flag to read about all of them.
-
-### Update a Deployment
-
-This section will show an example of how to update a deployment to use a
-pre-existing allowlist. To add an IP Allowlist after the fact, we are really
-just updating the IP Allowlist value, which is currently empty. In order to
-update the IP Allowlist of a deployment, you must create an allowlist first,
-and then you can simply reference its ID like so:
-
-```bash
-oasisctl update deployment \
- --deployment-id hmkuedzw9oavvjmjdo0i \
- --ipallowlist-id abc123AllowlistID
-```
-
-You should receive a response with the deployment information and an indication
-at the top that the deployment was updated.
-
-You can use the update command to update everything about your deployments as
-well. If you run:
-
-```bash
-oasisctl update deployment --help
-```
-
-You will see the full list of options available that will allow you to scale
-your deployment as needed.
-
-
-
-## Delete a Deployment
-
-There may come a day when you need to delete a resource. The process for this
-follows right along with the conventions for the other commands detailed
-throughout this guide.
-
-### The Delete Command
-
-For the final example in this guide we will delete the deployment that has
-been created. This only requires the deployment ID and the permissions to
-delete the deployment.
-
-```bash
-oasisctl delete deployment \
- --deployment-id hmkuedzw9oavvjmjdo0i
-```
-
-Once the deployment has been deleted you can confirm it is gone by listing
-your deployments.
-
-```bash
-oasisctl list deployments \
- --organization-id "ArangoGraph Organization" \
- --format json
-```
-
-## Next Steps
-
-As promised, this guide covered the basics of using Oasisctl with the
-ArangoGraph API. While we primarily focused on viewing and managing deployments,
-there is also a lot more to explore, including:
-
-- Organization Invites Management
-- Backups
-- API Key Management
-- Certificate Management
-- User Access Control
-
-You can check out all these features and further details on the ones discussed
-in this guide in the documentation.
diff --git a/site/content/3.10/arangograph/backups.md b/site/content/3.10/arangograph/backups.md
deleted file mode 100644
index e4adcd0a0e..0000000000
--- a/site/content/3.10/arangograph/backups.md
+++ /dev/null
@@ -1,172 +0,0 @@
----
-title: Backups in ArangoGraph
-menuTitle: Backups
-weight: 50
-description: >-
- You can manually create backups or use a backup policy to schedule periodic
- backups, and both ways allow you to store your backups in multiple regions simultaneously
----
-## How to create backups
-
-To back up data in ArangoGraph for an ArangoDB installation, navigate to the
-**Backups** section of your previously created deployment.
-
-
-
-There are two ways to create backups. Create periodic backups using a
-**Backup policy**, or create a backup manually.
-Both ways allow you to create [backups in multiple regions](#multi-region-backups)
-as well.
-
-### Periodic backups
-
-Periodic backups are created on a given schedule. To see when the next backup
-is due, observe the schedule section.
-
-
-
-When a new deployment is created, a default **Backup policy** is created for it
-as well. This policy creates backups every two hours. To edit this policy
-(or any policy), highlight it in the row above and hit the pencil icon.
-
-
-
-These backups are not automatically uploaded. To enable this, use the
-**Upload backup to storage** option and choose a retention period that
-specifies how long backups are retained after creation.
-
-If the **Upload backup to storage** option is enabled for a backup policy,
-you can then create backups in different regions than the default one.
-The regions where the default backup is copied are shown in the
-**Additional regions** column in the **Policies** section.
-
-### Manual backups
-
-It's also possible to create a backup on demand. To do this, click **Back up now**.
-
-
-
-
-
-If you want to manually copy a backup to a different region than the default
-one, first ensure that the **Upload backup to storage** option is enabled.
-Then, highlight the backup row and use the
-**Copy backup to a different region** button from the **Actions** column.
-
-The source backup ID from
-which the copy is created is displayed in the **Copied from Backup** column.
-
-
-
-
-
-### Uploading backups
-
-By default, a backup is not uploaded to the cloud; instead, it remains on the
-servers of the deployment. To make a backup that is resilient against server
-(disk) failures, upload the backup to cloud storage.
-
-When the **Upload backup to cloud storage** option is enabled, the backup is
-preserved for a long time and does not occupy any disk space on the servers.
-This also allows copying the backup to different regions, which can be
-configured in the **Multiple region backup** section.
-
-Uploaded backups are
-required for [cloning](#how-to-clone-deployments-using-backups).
-
-#### Best practices for uploading backups
-
-When utilizing the **Upload backup to cloud storage** feature, a recommended
-approach is to implement a backup strategy that balances granularity and storage
-efficiency.
-
-One effective strategy involves creating a combination of backup intervals and
-retention periods. For instance, consider the following example:
-
-1. Perform a backup every 4 hours with a retention period of 24 hours. This
- provides frequent snapshots of your data, allowing you to recover recent
- changes.
-2. Perform a backup every day with a retention period of a week. Daily backups
- offer a broader time range for recovery, enabling you to restore data from
- any point within the past week.
-3. Perform a backup every week with a retention period of a month. Weekly
-   backups allow you to recover data from a more extensive time range.
-4. Perform a backup every month with a retention period of a year. Monthly
- backups provide a long-term perspective, enabling you to restore data from
- any month within the past year.
-
-This backup strategy offers good granularity, providing multiple recovery
-options for different timeframes. It also keeps the total number of stored
-backups considerably lower than alternatives such as hourly backups with a
-retention period of a year: roughly 6 + 7 + 4 + 12 = 29 backups are retained
-at any given time, compared to about 8760 for the hourly alternative.
-
-## Multi-region backups
-
-Using the multi-region backup feature, you can store backups in multiple regions
-simultaneously, either manually or automatically as part of a **Backup policy**.
-If the region where a backup is stored becomes unavailable, the backup is still
-available in the other regions, significantly improving reliability.
-
-Multiple region backup is only available when the
-**Upload backup to cloud storage** option is enabled.
-
-
-
-## How to restore backups
-
-To restore a database from a backup, highlight the desired backup and click the restore icon.
-
-{{< warning >}}
-All current data is lost when restoring. To make sure that data inserted after
-the backup was created can also be restored, create a new backup before using
-the **Restore Backup** feature.
-
-During the restore, the deployment is temporarily unavailable.
-{{< /warning >}}
-
-
-
-
-
-
-
-
-
-## How to clone deployments using backups
-
-Creating a deployment from a backup allows you to duplicate an existing
-deployment with all its data, for example, to create a test environment or to
-move to a different cloud provider or region within ArangoGraph.
-
-{{< info >}}
-This feature is only available if the backup you wish to clone has been
-uploaded to cloud storage.
-{{< /info >}}
-
-{{< info >}}
-The cloned deployment has the exact same features as the previous deployment,
-including node size and model. The cloud provider and the region can stay the
-same, or you can select different ones.
-To restore a deployment as quickly as possible, it is recommended to create the
-new deployment in the same region as where the backup resides, to avoid
-cross-region data transfer.
-The data contained in the backup is restored to this new deployment.
-
-The *root password* for this deployment will be different.
-{{< /info >}}
-
-1. Highlight the backup you wish to clone from and hit **Clone backup to new deployment**.
-
- 
-
-2. Choose whether the clone should be created using the current provider and in
- the same region as the backup or using a different provider, a different region,
- or both.
-
- 
-
-3. The view should navigate to the new deployment being bootstrapped.
-
- 
-
-This feature is also available through [oasisctl](oasisctl/_index.md).
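-
-For example, a sketch of listing a deployment's backups from the command line
-might look like the following; the flags are assumptions, so verify them with
-`oasisctl list backups --help`:
-
-```bash
-# Hypothetical example -- verify flags with: oasisctl list backups --help
-oasisctl list backups \
-  --deployment-id <deployment-id> \
-  --format json
-```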
diff --git a/site/content/3.10/arangograph/data-loader/_index.md b/site/content/3.10/arangograph/data-loader/_index.md
deleted file mode 100644
index 38f96ab442..0000000000
--- a/site/content/3.10/arangograph/data-loader/_index.md
+++ /dev/null
@@ -1,70 +0,0 @@
----
-title: Load your data into ArangoGraph
-menuTitle: Data Loader
-weight: 22
-description: >-
- Load your data into ArangoGraph and transform it into richly-connected graph
- structures, without needing to write any code or deploy any infrastructure
----
-
-ArangoGraph provides different ways of loading your data into the platform,
-based on your migration use case.
-
-## Transform data into a graph
-
-The ArangoGraph Data Loader allows you to transform existing data from CSV files
-into graph data that can be analyzed by the ArangoGraph platform.
-
-You provide your data in CSV format, a common format used for exports of data
-from various systems. Then, using a no-code editor, you can model the schema of
-this data and the relationships within it. This allows you to ingest your
-existing datasets into your ArangoGraph database, without the need for any
-development effort.
-
-You can get started in a few easy steps.
-
-{{< tabs "data-loader-steps" >}}
-
-{{< tab "1. Create database" >}}
-Choose an existing database or create a new one and enter a name for your new graph.
-{{< /tab >}}
-
-{{< tab "2. Add files" >}}
-Drag and drop your data files in CSV format.
-{{< /tab >}}
-
-{{< tab "3. Design your graph" >}}
-Model your graph schema by adding nodes and connecting them via edges.
-{{< /tab >}}
-
-{{< tab "4. Import data" >}}
-Once you are ready, save and start the import. The resulting graph is an
-[EnterpriseGraph](../../graphs/enterprisegraphs/_index.md) with its
-corresponding collections, available in your ArangoDB web interface.
-{{< /tab >}}
-
-{{< /tabs >}}
-
-Follow this [working example](../data-loader/example.md) to see how easy it is
-to transform existing data into a graph.
-
-## Import data to the cloud
-
-To import data from various files into collections **without creating a graph**,
-get the ArangoDB client tools for your operating system from the
-[download page](https://arangodb.com/download-major/).
-
-- To import data to ArangoGraph from an existing ArangoDB instance, see
- [arangodump](../../components/tools/arangodump/) and
- [arangorestore](../../components/tools/arangorestore/).
-- To import pre-existing data in JSON, CSV, or TSV format, see
-  [arangoimport](../../components/tools/arangoimport/) and the sketch below.
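-
-For instance, a minimal arangoimport invocation could look like the following
-sketch, where the endpoint, credentials, and file name are placeholders for
-your own deployment's values:
-
-```bash
-# Import a CSV file into a collection, creating the collection if needed
-arangoimport \
-  --server.endpoint http+ssl://<deployment-id>.arangodb.cloud:8529 \
-  --server.username root \
-  --server.database _system \
-  --collection airports \
-  --create-collection true \
-  --type csv \
-  --file airports.csv
-```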
-
-## How to access the Data Loader
-
-1. If you do not have a deployment yet, [create a deployment](../deployments/_index.md#how-to-create-a-new-deployment) first.
-2. Open the deployment you want to load data into.
-3. In the **Load Data** section, click the **Load your data** button.
-4. Select your migration use case.
-
-
\ No newline at end of file
diff --git a/site/content/3.10/arangograph/data-loader/add-files.md b/site/content/3.10/arangograph/data-loader/add-files.md
deleted file mode 100644
index 114b588e40..0000000000
--- a/site/content/3.10/arangograph/data-loader/add-files.md
+++ /dev/null
@@ -1,59 +0,0 @@
----
-title: Add files into Data Loader
-menuTitle: Add files
-weight: 5
-description: >-
- Provide your set of files in CSV format containing the data to be imported
----
-
-The Data Loader allows you to upload your data files in CSV format into
-ArangoGraph and then use these data sources to design a graph using the
-built-in graph designer.
-
-## Upload your files
-
-You can upload your CSV files in the following ways:
-
-- Drag and drop your files in the designated area.
-- Click the **Browse files** button and select the files you want to add.
-
-
-
-You can either upload several files together as a batch or add them
-individually. You can also add more files later on.
-After a file has been uploaded, you can expand it to preview both the header and
-the first row of data within the file.
-
-If you upload CSV files without any fields, they are not available for
-manipulation.
-
-Once the files are uploaded, you can start [designing your graph](../data-loader/design-graph.md).
-
-### File formatting limitations
-
-Ensure that the files you upload are correctly formatted. Otherwise, errors may
-occur, the upload may fail, or the data may not be correctly mapped.
-
-The following restrictions and limitations apply:
-
-- The only supported file format is CSV. If you submit an invalid file format,
- the upload of that specific file will be prevented.
-- All CSV files must have a header row. If you upload a file
-  without a header, the first row of data is treated as the header. To avoid
-  losing the first row of data, make sure to include headers in your files.
-- The CSV file should have unique header names. It is not possible to have two
- columns with the same name within the same file.
-
-For more details, see the [File validation](../data-loader/import.md#file-validation) section.
-
-### Upload limits
-
-Note that there is a cumulative file upload limit of 1 GB. This means that the
-combined size of all files you upload must not exceed 1 GB. If the total size
-surpasses this limit, the upload may fail.
-
-## Delete files
-
-You can remove uploaded files by clicking the **Delete file** button in the
-**Your files** panel. Keep in mind that in order to delete a file,
-you must first remove all graph associations linked to it.
\ No newline at end of file
diff --git a/site/content/3.10/arangograph/data-loader/design-graph.md b/site/content/3.10/arangograph/data-loader/design-graph.md
deleted file mode 100644
index b1c5eaf3af..0000000000
--- a/site/content/3.10/arangograph/data-loader/design-graph.md
+++ /dev/null
@@ -1,68 +0,0 @@
----
-title: Design your graph
-menuTitle: Design graph
-weight: 10
-description: >-
- Design your graph database schema using the integrated graph modeler in the Data Loader
----
-
-Based on the data you have uploaded, you can start designing your graph.
-The graph designer allows you to create a schema using nodes and edges.
-Once this is done, you can save and start the import. The resulting
-[EnterpriseGraph](../../graphs/enterprisegraphs/_index.md) and the
-corresponding collections are created in your ArangoDB database instance.
-
-## How to add a node
-
-Nodes are the main objects in your data model and include the attributes of the
-objects.
-
-1. To create a new node, click the **Add node** button.
-2. In the graph designer, click on the newly created node to view the **Node details**.
-3. In the **Node details** panel, fill in the following fields:
- - For **Node label**, enter a name you want to use for the node.
- - For **File**, select a file from the list to associate it with the node.
- - For **Primary Identifier**, select a field from the list. This is used to
- reference the nodes when you define relations with edges.
- - For **File Headers**, select one or more attributes from the list.
-
-
-
-## How to connect nodes
-
-Nodes can be connected by edges to express and categorize the relations between
-them. A relation always has a direction, going from one node to another. You can
-define this direction in the graph designer by dragging your cursor from one
-particular node to another.
-
-To connect two nodes, you can use the **Connect node(s)** button. Click on any
-node to self-reference it or drag it to connect it to another node. Alternatively,
-when you select a node, a plus sign will appear, allowing you to directly add a
-new node with an edge.
-
-{{< tip >}}
-To quickly recenter your elements on the canvas, you can use the **Center View**
-button located in the bottom right corner. This brings your nodes and edges back
-into focus.
-{{< /tip >}}
-
-The edge needs to be associated with a file and must have a label. Note that a
-node and an edge cannot have the same label.
-
-Follow the steps below to add details to an edge.
-
-1. Click on an edge in the graph designer.
-2. In the **Edit Edge** panel, fill in the following fields:
- - For **Edge label**, enter a name you want to use for the edge.
- - For **Relation file**, select a file from the list to associate it with the edge.
- - To define how the relation points from one node to another, select the
- corresponding relation file header for both the origin file (`_from`) and the
- destination file (`_to`).
- - For **File Headers**, select one or more attributes from the list.
-
-
-
-## How to delete elements
-
-To remove a node or an edge, simply select it in the graph designer and click the
-**Delete** icon.
\ No newline at end of file
diff --git a/site/content/3.10/arangograph/data-loader/example.md b/site/content/3.10/arangograph/data-loader/example.md
deleted file mode 100644
index 46fdd1b38e..0000000000
--- a/site/content/3.10/arangograph/data-loader/example.md
+++ /dev/null
@@ -1,103 +0,0 @@
----
-title: Data Loader Example
-menuTitle: Example
-weight: 20
-description: >-
- Follow this complete working example to see how easy it is to transform existing
- data into a graph and get insights from the connected entities
----
-
-To transform your data into a graph, you need to have CSV files with entities
-representing the nodes and a corresponding CSV file representing the edges.
-
-This example uses a sample data set of two files, `airports.csv` and `flights.csv`.
-These files are used to create a graph showing flights arriving at and departing
-from various cities.
-You can download the files from [GitHub](https://github.com/arangodb/example-datasets/tree/master/Data%20Loader).
-
-The `airports.csv` file contains rows of airport entries, which become the
-nodes in your graph. The `flights.csv` file contains rows of flight entries,
-which become the edges connecting the nodes.
-
-The whole process can be broken down into these steps:
-
-1. **Database and graph setup**: Begin by choosing an existing database or
- create a new one and enter a name for your new graph.
-2. **Add files**: Upload the CSV files to the Data Loader web interface. You can
- simply drag and drop them or upload them through the file browser window.
-3. **Design graph**: Design your graph schema by adding nodes and edges and map
- data from the uploaded files to them. This allows creating the corresponding
- documents and collections for your graph.
-4. **Import data**: Import the data and start using your newly created
- [EnterpriseGraph](../../graphs/enterprisegraphs/_index.md) and its
- corresponding collections.
-
-## Step 1: Create a database and choose the graph name
-
-Start by creating a new database and adding a name for your graph.
-
-
-
-## Step 2: Add files
-
-Upload your CSV files to the Data Loader web interface. You can drag and drop
-them or upload them via a file browser window.
-
-
-
-See also [Add files into Data Loader](../data-loader/add-files.md).
-
-## Step 3: Design graph schema
-
-Once the files are added, you can start designing the graph schema. This example
-uses a simple graph consisting of:
-- Two nodes (`origin_airport` and `destination_airport`)
-- One directed edge going from the origin airport to the destination one
- representing a flight
-
-Click **Add node** to create the nodes and connect them with edges.
-
-Next, for each of the nodes and edges, you need to create a mapping to the
-corresponding file and headers.
-
-For nodes, the **Node label** becomes the node collection name, and the
-**Primary identifier** is used to populate the `_key` attribute of the documents.
-You can also select any additional headers to be included as document attributes.
-
-In this example, two node collections have been created (`origin_airport` and
-`destination_airport`) and the `AirportID` header is used to create the `_key`
-attribute for documents in both node collections. The header preview makes it
-easy to select the headers you want to use.
-
-
-
-For edges, the **Edge label** becomes the edge collection name. Then, you
-need to specify how edges connect nodes. You can do this by selecting the
-*from* and *to* nodes to give a direction to the edge.
-In this example, the `source airport` header has been selected as a source and
-the `destination airport` header as a target for the edge.
-
-
-
-Note that the values of the source and target for the edge correspond to the
-**Primary identifier** (`_key` attribute) of the nodes. In this case, it is the
-airport code (e.g., GKA) used as the `_key` in the node documents and in the source
-and destination headers to configure the edges.
-
-See also [Design your graph in the Data Loader](../data-loader/design-graph.md).
-
-## Step 4: Import and see the resulting graph
-
-After all the mapping is done, all you need to do is click
-**Save and start import**. The report provides an overview of the files
-processed and the documents created, as well as a link to your new graph.
-See also [Start import](../data-loader/import.md).
-
-
-
-Finally, click **See your new graph** to open the ArangoDB web interface and
-explore your new collections and graph.
-
-
-
-Happy graphing!
\ No newline at end of file
diff --git a/site/content/3.10/arangograph/deployments/_index.md b/site/content/3.10/arangograph/deployments/_index.md
deleted file mode 100644
index b8dd98d490..0000000000
--- a/site/content/3.10/arangograph/deployments/_index.md
+++ /dev/null
@@ -1,301 +0,0 @@
----
-title: Deployments in ArangoGraph
-menuTitle: Deployments
-weight: 20
-description: >-
- How to create and manage deployments in ArangoGraph
----
-An ArangoGraph deployment is an ArangoDB cluster or single server, configured
-as you choose.
-
-Each deployment belongs to a project, which belongs to an organization in turn.
-You can have any number of deployments under one project.
-
-**Organizations → Projects → Deployments**
-
-
-
-## How to create a new deployment
-
-1. If you do not have a project yet,
- [create a project](../projects.md#how-to-create-a-new-project) first.
-2. In the main navigation, click __Deployments__.
-3. Click the __New deployment__ button.
-4. Select the project you want to create the deployment for.
-5. Set up your deployment. The configuration options are described below.
-
-{{< info >}}
-Deployments contain exactly **one policy**. Within that policy, you can define
-role bindings to regulate access control on a deployment level.
-{{< /info >}}
-
-### In the **General** section
-
-- Enter the __Name__ and optionally a __Short description__ for the deployment.
-- Select the __Provider__ and __Region__ of the provider.
- {{< warning >}}
- Once a deployment has been created, it is not possible to change the
- provider and region anymore.
- {{< /warning >}}
-
-
-
-### In the **Sizing** section
-
-- Choose a __Model__ for the deployment:
-
- - __OneShard__ deployments are suitable when your data set fits in a single node.
- They are ideal for graph use cases. This model has a fixed number of 3 nodes.
-
-  - __Sharded__ deployments are suitable when your data set does not fit on a
-    single node. The data is sharded across multiple nodes. You can select the
- __Number of nodes__ for this deployment model. The more nodes you have, the
- higher the replication factor can be.
-
- - __Single Server__ deployments are suitable when you want to try out ArangoDB without
- the need for high availability or scalability. The deployment will contain a
- single server only. Your data will not be replicated and your deployment can
- be restarted at any time.
-
-- Select a __NODE SIZE__ from the list of available options. Each option is a
- combination of vCPUs, memory, and disk space per node.
-
-
-
-### In the **Advanced** section
-
-- Select the __DB Version__.
- If you don't know which DB version to select, use the version selected by default.
-- Select the desired __Support Plan__. Click the link below the field to get
- more information about the different support plans.
-- In the __Certificate__ field:
- - The default certificate created for your project is selected automatically.
- - If you have no default certificate, or want to use a new certificate,
- create a new certificate by typing the desired name for it and hitting
-    enter or clicking __Create "\<name\>"__ when done.
- - Or, if you already have multiple certificates, select the desired one.
-- _Optional but strongly recommended:_ In the __IP allowlist__ field, select the
-  desired allowlist if you want to limit access to your deployment to certain
-  IP ranges. To create an allowlist, navigate to your project and select the
- __IP allowlists__ tab. See [How to manage IP allowlists](../projects.md#how-to-manage-ip-allowlists)
- for details.
- {{< security >}}
-    For any kind of production deployment, it is strongly advised to use an IP allowlist.
- {{< /security >}}
-- Select a __Deployment Profile__. Profile options are only available on request.
-
-
-
-### In the **Summary** panel
-
-1. Review the configuration, and if you're okay with the setup, press the
- __Create deployment__ button.
-2. You are taken to the deployment overview page.
- **Note:** Your deployment is being bootstrapped at that point. This process
- takes a few minutes. Once the deployment is ready, you receive a confirmation
- email.
-
-## How to access your deployment
-
-1. In the main navigation, click the __Dashboard__ icon and then click __Projects__.
-2. In the __Projects__ page, click the project for
- which you created a deployment earlier.
-3. Alternatively, you can access your deployment by clicking __Deployments__ in the
- dashboard navigation. This page shows all deployments from all projects.
- Click the name of the deployment you want to view.
-4. For each deployment in your project, you see the status. While your new
- deployment is being set up, it displays the __bootstrapping__ status.
-5. Press the __View__ button to show the deployment page.
-6. When a deployment displays a status of __OK__, you can access it.
-7. Click the __Open database UI__ button or the database UI link to open
- the dashboard of your new ArangoDB deployment.
-
-At this point your ArangoDB deployment is available for you to use — **Have fun!**
-
-If you have disabled the [auto-login option](#auto-login-to-database-ui) to the
-database web interface, you need to follow the additional steps outlined below
-to access your deployment:
-
-1. Click the copy icon next to the root password. This copies the deployment
- root password to your clipboard. You can also click the view icon to unmask
- the root password to see it.
- {{< security >}}
- Do not use the root username/password for everyday operations. It is recommended
- to use them only to create other user accounts with appropriate permissions.
- {{< /security >}}
-2. Click the __Open database UI__ button or the database UI link to open
- the dashboard of your new ArangoDB deployment.
-3. In the __username__ field type `root`, and in the __password__ field paste the
- password that you copied earlier.
-4. Press the __Login__ button.
-5. Press the __Select DB: \_system__ button.
-
-{{< info >}}
-Each deployment is accessible on two ports:
-
-- Port `8529` is the standard port recommended for use by web-browsers.
-- Port `18529` is the alternate port that is recommended for use by automated services.
-
-The difference between these ports is the certificate used. If you enable
-__Use well-known certificate__, the certificates used on port `8529` is well-known
-and automatically accepted by most web browsers. The certificate used on port
-`18529` is a self-signed certificate. For securing automated services, the use of
-a self-signed certificate is recommended. Read more on the
-[Certificates](../security-and-access-control/x-509-certificates.md) page.
-{{< /info >}}
-
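-As a quick connectivity check, you can query the version endpoint with curl.
-The hostname is a placeholder for your deployment's endpoint:
-
-```bash
-# Returns a small JSON document with the server version on success
-curl -u root:<password> https://<deployment-id>.arangodb.cloud:8529/_api/version
-```
-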
-## Password settings
-
-### How to enable the automatic root user password rotation
-
-Password rotation refers to changing passwords regularly - a security best
-practice that reduces the vulnerability to password-based attacks and exploits
-by limiting how long passwords are valid. The ArangoGraph Insights Platform
-can automatically change the `root` user password of an ArangoDB deployment
-periodically to improve security.
-
-1. Navigate to the __Deployment__ for which you want to enable an automatic
- password rotation for the root user.
-2. In the __Quick start__ section, click the button with the __gear__ icon next to the
- __ROOT PASSWORD__.
-3. In the __Password Settings__ dialog, turn the automatic password rotation on
- and click the __Confirm__ button.
-
- 
-4. You can expand the __Root password__ panel to see when the password was
- rotated last. The rotation takes place every three months.
-
-### Auto login to database UI
-
-ArangoGraph provides the ability to automatically login to your database using
-your existing ArangoGraph credentials. This not only provides a seamless
-experience, preventing you from having to manage multiple sets of credentials,
-but also improves the overall security of your database. As your credentials
-are shared between ArangoGraph and your database, you can benefit from
-end-to-end audit traceability for a given user, as well as integration with
-ArangoGraph SSO.
-
-You can enable this feature in the **Password Settings** dialog. Note
-that it may take a few minutes for it to become active.
-Once enabled, you no longer have to fill in the `root` user and password of
-your ArangoDB deployment.
-
-{{< info >}}
-If you use the auto login feature with AWS
-[private endpoints](../deployments/private-endpoints.md), it is recommended
-to switch off the `custom DNS` setting.
-{{< /info >}}
-
-This feature can be disabled at any time. You may wish to consider explicitly
-disabling this feature in the following situations:
-- Your workflow requires you to access the database UI using different accounts
- with differing permission sets, as you cannot switch database users when
- automatic login is enabled.
-- You need to give individuals access to a database's UI without giving them
- any access to ArangoGraph. Note, however, that it's possible to only give an
- ArangoGraph user database UI access, without other ArangoGraph permissions.
-
-{{< warning >}}
-When the auto login feature is enabled, users cannot edit their permissions on
-the ArangoDB database web interface as all permissions are managed by the
-ArangoGraph platform.
-{{< /warning >}}
-
-Before getting started, make sure you are signed into ArangoGraph as a user
-with one of the following permissions in your project:
-- `data.deployment.full-access`
-- `data.deployment.read-only-access`
-
-Organization owners have these permissions enabled by default.
-The `deployment-full-access-user` and `deployment-read-only-user` roles which
-contain these permissions can also be granted to other members of the
-organization. See how to create a
-[role binding](../security-and-access-control/_index.md#how-to-view-edit-or-remove-role-bindings-of-a-policy).
-
-{{< warning >}}
-This feature is only available on port `443`.
-{{< /warning >}}
-
-## How to edit a deployment
-
-You can modify a deployment's configuration: change the ArangoDB version
-that is being used, change the memory size, or even switch from
-a OneShard deployment to a Sharded one if your data set no longer fits in a
-single node.
-
-{{< tip >}}
-To edit an existing deployment, you must have the necessary set of permissions
-attached to your role. Read more about [roles and permissions](../security-and-access-control/_index.md#roles).
-{{< /tip >}}
-
-1. In the main navigation, click **Deployments** and select an existing
- deployment from the list, or click **Projects**, select a project, and then
- select a deployment.
-2. In the **Quick start** section, click the **Edit** button.
-3. In the **General** section, you can do the following:
- - Change the deployment name
- - Change the deployment description
-4. In the **Sizing** section, you can do the following:
- - Change **OneShard** deployments into **Sharded** deployments. To do so,
- select **Sharded** in the **Model** dropdown list. You can select the
- number of nodes for your deployment. This can also be modified later on.
- {{< warning >}}
- You cannot switch from **Sharded** back to **OneShard**.
- {{< /warning >}}
- - Change **Single Server** deployments into **OneShard** or **Sharded** deployments.
- {{< warning >}}
- You cannot switch from **Sharded** or **OneShard** back to **Single Server**.
- {{< /warning >}}
- - Scale up or down the node size.
- {{< warning >}}
- When scaling up or down the size in AWS deployments, the new value gets locked
- and cannot be changed again until the cloud provider rate limit is reset.
- {{< /warning >}}
-5. In the **Advanced** section, you can do the following:
- - Upgrade the ArangoDB version that is currently being used. See also
- [Upgrades and Versioning](upgrades-and-versioning.md)
- - Select a different certificate.
- - Add or remove an IP allowlist.
- - Select a deployment profile.
-6. All changes are reflected in the **Summary** panel. Review the new
- configuration and click **Save changes**.
-
-## How to connect a driver to your deployment
-
-[ArangoDB drivers](../../develop/drivers/_index.md) allow you to use your ArangoGraph
-deployment as a database system for your applications. Drivers act as interfaces
-between different programming languages and ArangoDB, enabling you to
-connect to and manipulate ArangoDB deployments from within compiled programs
-or using scripting languages.
-
-To get started, open a deployment.
-In the **Quick start** section, click on the **Connecting drivers** button and
-select your programming language. The code snippets provide examples of how to
-connect to your instance.
-
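-Independently of any driver, you can verify connectivity from the command line
-with arangosh, as in the following sketch (the endpoint is a placeholder):
-
-```bash
-arangosh \
-  --server.endpoint http+ssl://<deployment-id>.arangodb.cloud:8529 \
-  --server.username root \
-  --server.database _system
-```
-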
-{{< tip >}}
-Note that ArangoGraph Insights Platform runs deployments in a cluster
-configuration. To achieve the best possible availability, your client
-application has to handle connection failures by retrying operations if needed.
-{{< /tip >}}
-
-
-
-## How to delete a deployment
-
-{{< danger >}}
-Deleting a deployment deletes all its data and backups.
-This operation is **irreversible**. Please proceed with caution.
-{{< /danger >}}
-
-1. In the main navigation, in the __Projects__ section, click the project that
- holds the deployment you wish to delete.
-2. In the __Deployments__ page, click the deployment you wish to delete.
-3. Click the __Delete/Lock__ entry in the navigation.
-4. Click the __Delete deployment__ button.
-5. In the modal dialog, confirm the deletion by entering `Delete!` into the
- designated text field.
-6. Confirm the deletion by pressing the __Yes__ button.
-7. You will be taken back to the deployments page of the project.
- The deployment being deleted will display the __Deleting__ status until it has
- been successfully removed.
diff --git a/site/content/3.10/arangograph/deployments/private-endpoints.md b/site/content/3.10/arangograph/deployments/private-endpoints.md
deleted file mode 100644
index 39e42514fd..0000000000
--- a/site/content/3.10/arangograph/deployments/private-endpoints.md
+++ /dev/null
@@ -1,221 +0,0 @@
----
-title: Private endpoint deployments in ArangoGraph
-menuTitle: Private endpoints
-weight: 5
-description: >-
- Use the private endpoint feature to isolate your deployments and increase
- security
----
-This topic describes how to create a private endpoint deployment and
-securely deploy to various cloud providers such as Google Cloud Platform (GCP),
-Microsoft Azure, and Amazon Web Services (AWS). Follow the steps outlined below
-to get started.
-
-{{< tip >}}
-Private endpoints on Microsoft Azure can be cross-region; in AWS, they should be
-located in the same region.
-{{< /tip >}}
-
-{{< info >}}
-For more information about the certificates used for private endpoints, please
-refer to the [How to manage certificates](../security-and-access-control/x-509-certificates.md)
-section.
-{{< /info >}}
-
-## Google Cloud Platform (GCP)
-
-Google Cloud Platform (GCP) offers a feature called
-[Private Service Connect](https://cloud.google.com/vpc/docs/private-service-connect)
-that allows private consumption of services across VPC networks that belong to
-different groups, teams, projects, or organizations. You can publish and consume
-services using the defined IP addresses which are internal to your VPC network.
-
-In ArangoGraph, you can
-[create a regular deployment](_index.md#how-to-create-a-new-deployment)
-and change it to a private endpoint deployment afterwards.
-
-Such a deployment is no longer reachable from the internet, other than via
-the ArangoGraph dashboard for administration. To revert to a public deployment,
-please contact support via **Request help** in the help menu.
-
-To configure a private endpoint for GCP, you need to provide your Google project
-names. ArangoGraph then configures a **Private Endpoint Service** that
-automatically connects to private endpoints that are created for those projects.
-
-After the creation of the **Private Endpoint Service**, you should receive a
-service attachment that you need during the creation of your private endpoint(s).
-
-1. Open the deployment you want to change.
-2. In the **Quick start** section, click the **Edit** button with an ellipsis (`…`)
- icon.
-3. Click **Change to private endpoint** in the menu.
- 
-4. In the configuration wizard, click **Next** to enter your configuration details.
-5. Enter one or more Google project names. You can also add them later in the summary view.
- Click **Next**.
- 
-6. Configure custom DNS names. This step is optional and disabled by default.
- Note that, once enabled, this setting is immutable and cannot be reverted.
- Click **Next** to continue.
- {{< info >}}
- By default, your private endpoint is available to all VPCs that connect to it
-   at `https://<deployment-id>-pe.arangodb.cloud` with the
- [well-known certificate](../security-and-access-control/x-509-certificates.md#well-known-x509-certificates).
- If the custom DNS is enabled, you will be responsible for the DNS of your
- private endpoints.
- {{< /info >}}
- 
-7. Click **Confirm Settings** to change the deployment.
-8. Back in the **Overview** page, scroll down to the **Private Endpoint** section
- that is now displayed to see the connection status and to change the
- configuration.
-9. ArangoGraph configures a **Private Endpoint Service**. As soon as the
- **Service Attachment** is ready, you can use it to configure the Private
- Service Connect in your VPC.
-
-{{< tip >}}
-When you create a private endpoint in ArangoGraph, both endpoints (the regular
-one and the new private one) are available for two hours. During this time period,
-you can switch your application to the new private endpoint. After this period,
-the old endpoint is not available anymore.
-{{< /tip >}}
-
-## Microsoft Azure
-
-Microsoft Azure offers a feature called
-[Azure Private Link](https://docs.microsoft.com/en-us/azure/private-link)
-that allows you to limit communication between different Azure servers and
-services to Microsoft's backbone network without exposure to the internet.
-It can lower network latency and increase security.
-
-If you want to connect an ArangoGraph deployment running on Azure with other
-services you run on Azure using such a tunnel, then
-[create a regular deployment](_index.md#how-to-create-a-new-deployment)
-and change it to a private endpoint deployment afterwards.
-
-The deployment is no longer reachable from the internet, other than via
-the ArangoGraph dashboard for administration. To revert to a public deployment,
-please contact support via **Request help** in the help menu.
-
-1. Open the deployment you want to change.
-2. In the **Quick start** section, click the **Edit** button with an ellipsis (`…`)
- icon.
-3. Click **Change to private endpoint** in the menu.
- 
-4. In the configuration wizard, click **Next** to enter your configuration details.
-5. Enter one or more Azure Subscription IDs (GUIDs). They can no longer be
-   changed once a connection has been established.
- Proceed by clicking **Next**.
- 
-6. Configure custom DNS names. This step is optional and disabled by default,
- you can also add or change them later from the summary view.
- Click **Next** to continue.
- {{< info >}}
- When using custom DNS names on private endpoints running on Azure, you need
- to use the [self-signed certificate](../security-and-access-control/x-509-certificates.md#self-signed-x509-certificates).
- {{< /info >}}
-7. Click **Confirm Settings** to change the deployment.
-8. Back in the **Overview** page, scroll down to the **Private Endpoint** section
- that is now displayed to see the connection status and to change the
- configuration.
-9. ArangoGraph configures a **Private Endpoint Service**. As soon as the **Azure alias**
- becomes available, you can copy it and then go to your Microsoft Azure portal
- to create Private Endpoints using this alias. The number of established
- **Connections** increases and you can view the connection details by
- clicking it.
-
-{{< tip >}}
-When you create a private endpoint in ArangoGraph, both endpoints (the regular
-one and the new private one) are available for two hours. During this time period,
-you can switch your application to the new private endpoint. After this period,
-the old endpoint is not available anymore.
-{{< /tip >}}
-
-## Amazon Web Services (AWS)
-
-AWS offers a feature called [AWS PrivateLink](https://aws.amazon.com/privatelink)
-that enables you to privately connect your Virtual Private Cloud (VPC) to
-services, without exposure to the internet. You can control the specific API
-endpoints, sites, and services that are reachable from your VPC.
-
-Amazon VPC allows you to launch AWS resources into a
-virtual network that you have defined. It closely resembles a traditional
-network that you would normally operate, with the benefits of using the AWS
-scalable infrastructure.
-
-In ArangoGraph, you can
-[create a regular deployment](_index.md#how-to-create-a-new-deployment) and change it
-to a private endpoint deployment afterwards.
-
-The ArangoDB private endpoint deployment is no longer exposed to the public
-internet, other than via the ArangoGraph dashboard for administration. To revert
-it to a public deployment, please contact the support team via **Request help**
-in the help menu.
-
-To configure a private endpoint for AWS, you need to provide the AWS principals related
-to your VPC. The ArangoGraph Insights Platform configures a **Private Endpoint Service**
-that automatically connects to private endpoints that are created in those principals.
-
-1. Open the deployment you want to change.
-2. In the **Quick start** section, click the **Edit** button with an ellipsis (`…`)
- icon.
-3. Click **Change to private endpoint** in the menu.
- 
-4. In the configuration wizard, click **Next** to enter your configuration details.
-5. Click **Add Principal** to start configuring the AWS principal(s).
-   You need to enter a valid account, which is your 12-digit AWS account ID.
- Adding usernames or role names is optional. You can also
- skip this step and add them later from the summary view.
- {{< info >}}
-   Principals can no longer be changed once a connection has been established.
- {{< /info >}}
- {{< warning >}}
- To verify your endpoint service in AWS, you must use the same principal as
- configured in ArangoGraph. Otherwise, the service name cannot be verified.
- {{< /warning >}}
- 
-6. Configure custom DNS names. This step is optional and disabled by default,
- you can also add or change them later from the summary view.
- Click **Next** to continue.
- {{< info >}}
- By default, your private endpoint is available to all VPCs that connect to it
-  at `https://<deployment-id>-pe.arangodb.cloud` with the well-known certificate.
- If the custom DNS is enabled, you will be responsible for the DNS of your
- private endpoints.
- {{< /info >}}
- 
-7. Confirm that you want to use a private endpoint for your deployment by
- clicking **Confirm Settings**.
-8. Back in the **Overview** page, scroll down to the **Private Endpoint** section
- that is now displayed to see the connection status and change the
- configuration, if needed.
- 
- {{< info >}}
- Note that
- [Availability Zones](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#concepts-availability-zones)
- are independently mapped for each AWS account. The physical location of a
- zone may differ from one account to another account. To coordinate
- Availability Zones across AWS accounts, you must use the
- [Availability Zone ID](https://docs.aws.amazon.com/ram/latest/userguide/working-with-az-ids.html).
- {{< /info >}}
-
- {{< tip >}}
- To learn more or request help from the ArangoGraph support team, click **Help**
- in the top right corner of the **Private Endpoint** section.
- {{< /tip >}}
-9. ArangoGraph configures a **Private Endpoint Service**. As soon as this is available,
- you can use it in the AWS portal to create an interface endpoint to connect
-   to your endpoint service. For more details, see
-   [How to connect to an endpoint](https://docs.aws.amazon.com/vpc/latest/privatelink/create-endpoint-service.html#share-endpoint-service)
-   and the CLI sketch below.
-
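-For illustration, creating an interface endpoint with the AWS CLI could look
-like the following sketch. All IDs are placeholders, and the service name is
-the one provided by your **Private Endpoint Service**:
-
-```bash
-aws ec2 create-vpc-endpoint \
-  --vpc-endpoint-type Interface \
-  --vpc-id vpc-0123456789abcdef0 \
-  --service-name <service-name-from-arangograph> \
-  --subnet-ids subnet-0123456789abcdef0 \
-  --security-group-ids sg-0123456789abcdef0
-```
-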
-{{< tip >}}
-To establish connectivity and enable traffic flow, make sure you add a route
-from the originating machine to the interface endpoint.
-{{< /tip >}}
-
-{{< tip >}}
-When you create a private endpoint in ArangoGraph, both endpoints (the regular
-one and the new private one) are available for two hours. During this time period,
-you can switch your application to the new private endpoint. After this period,
-the old endpoint is not available anymore.
-{{< /tip >}}
diff --git a/site/content/3.10/arangograph/migrate-to-the-cloud.md b/site/content/3.10/arangograph/migrate-to-the-cloud.md
deleted file mode 100644
index 8a3f4a9802..0000000000
--- a/site/content/3.10/arangograph/migrate-to-the-cloud.md
+++ /dev/null
@@ -1,259 +0,0 @@
----
-title: Cloud Migration Tool
-menuTitle: Migrate to the cloud
-weight: 30
-description: >-
- Migrating data from bare metal servers to the cloud with minimal downtime
-draft: true
----
-The `arangosync-migration` tool allows you to easily move from on-premises to
-the cloud while ensuring a smooth transition with minimal downtime.
-Start the cloud migration, let the tool do the job and, at the same time,
-keep your local cluster up and running.
-
-Some of the key benefits of the cloud migration tool include:
-- Safety comes first - pre-checks and potential failures are carefully handled.
-- Your data is secure and fully encrypted.
-- Ease-of-use with a live migration while your local cluster is still in use.
-- Get access to what a cloud-based fully managed service has to offer:
- high availability and reliability, elastic scalability, and much more.
-
-## Downloading the tool
-
-The `arangosync-migration` tool is available to download for the following
-operating systems:
-
-**Linux**
-- [AMD64 (x86_64) architecture](https://download.arangodb.com/arangosync-migration/linux/amd64/arangosync-migration)
-- [ARM64 (AArch64) architecture](https://download.arangodb.com/arangosync-migration/linux/arm64/arangosync-migration)
-
-**macOS / Darwin**
-- [AMD64 (x86_64) architecture](https://download.arangodb.com/arangosync-migration/darwin/amd64/arangosync-migration)
-- [ARM64 (AArch64) architecture](https://download.arangodb.com/arangosync-migration/darwin/arm64/arangosync-migration)
-
-**Windows**
-- [AMD64 (x86_64) architecture](https://download.arangodb.com/arangosync-migration/windows/amd64/arangosync-migration.exe)
-- [ARM64 (AArch64) architecture](https://download.arangodb.com/arangosync-migration/windows/arm64/arangosync-migration.exe)
-
-For macOS as well as other Unix-based operating systems, run the following
-command to make sure you can execute the binary:
-
-```bash
-chmod 755 ./arangosync-migration
-```
-
-## Prerequisites
-
-Before getting started, make sure the following prerequisites are in place:
-
-- Go to the [ArangoGraph Insights Platform](https://dashboard.arangodb.cloud/home)
-  and sign in. If you don’t have an account yet, sign up to create one.
-
-- Generate an ArangoGraph API key and API secret. See a detailed guide on
- [how to create an API key](api/set-up-a-connection.md#creating-an-api-key).
-
-{{< info >}}
-The cloud migration tool is only available for clusters.
-{{< /info >}}
-
-### Setting up the target deployment in ArangoGraph
-
-Continue by [creating a new ArangoGraph deployment](deployments/_index.md#how-to-create-a-new-deployment)
-or choose an existing one.
-
-The target deployment in ArangoGraph requires specific configuration rules to be
-set up before the migration can start:
-
-- **Configuration settings**: The target deployment must be compatible with the
- source data cluster. This includes the ArangoDB version that is being used,
-  the number of DB-Servers, and the disk space.
-- **Deployment region and cloud provider**: Choose the region closest to your
-  data cluster. This can speed up your migration to the cloud.
-
-After setting up your ArangoGraph deployment, wait for a few minutes for it to become
-fully operational.
-
-{{< info >}}
-Note that Developer mode deployments are not supported.
-{{< /info >}}
-
-## Running the migration tool
-
-The `arangosync-migration` tool provides a set of commands that allow you to:
-- start the migration process
-- check whether your source and target clusters are fully compatible
-- get the current status of the migration process
-- stop or abort the migration process
-- switch the local cluster to read-only mode
-
-### Starting the migration process
-
-To start the migration process, run the following command:
-
-```bash
-arangosync-migration start
-```
-The `start` command runs some pre-checks. Among other things, it measures
-the disk space occupied by your ArangoDB cluster. If you are using the
-same data volume for ArangoDB servers and other data as well, the measurements
-can be incorrect. Provide the `--source.ignore-metrics` option to overcome this.
-
-You can also pass `--check-only` to run the checks without starting the actual
-migration. If specified, this checks whether your local cluster and target
-deployment are compatible without sending any data to ArangoGraph.
-
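-A minimal sketch, combining `--check-only` with the connection options that
-are documented later in this section:
-
-```bash
-./arangosync-migration start --check-only \
-  --source.endpoint=$COORDINATOR_ENDPOINT \
-  --source.jwt-secret=/path-to/jwt-secret.file \
-  --arango-graph.api-key=$ARANGO_GRAPH_API_KEY \
-  --arango-graph.api-secret=$ARANGO_GRAPH_API_SECRET \
-  --arango-graph.deployment-id=$ARANGO_GRAPH_DEPLOYMENT_ID
-```
-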
-Once the migration starts, the local cluster enters monitoring mode and the
-synchronization status is displayed in real time. If you don't want to see the
-status, you can terminate this process, as the underlying agent process
-continues to work. If something goes wrong, restarting the same command restores
-the replication state.
-
-To restart the migration, first `stop` or `stop --abort` the migration. Then,
-start it again using the `start` command.
-
-{{< warning >}}
-Starting the migration creates a full copy of all data from the source cluster
-to the target deployment in ArangoGraph. All data that has previously existed in the
-target deployment will be lost.
-{{< /warning >}}
-
-### During the migration
-
-The following takes place during an active migration:
-- The source data cluster remains usable.
-- The target deployment in ArangoGraph is switched to read-only mode.
-- Your root user password is not copied to the target deployment in ArangoGraph.
- To get your root password, select the target deployment from the ArangoGraph
- Dashboard and go to the **Overview** tab. All other users are fully synchronized.
-
-{{< warning >}}
-The migration tool increases the CPU and memory usage of the server you are
-running it on. Depending on your ArangoDB usage pattern, it may take a lot of CPU
-to handle the replication. You can stop the migration process at any time
-if you see any problems.
-{{< /warning >}}
-
-```bash
-./arangosync-migration start \
- --source.endpoint=$COORDINATOR_ENDPOINT \
- --source.jwt-secret=/path-to/jwt-secret.file \
- --arango-graph.api-key=$ARANGO_GRAPH_API_KEY \
- --arango-graph.api-secret=$ARANGO_GRAPH_API_SECRET \
- --arango-graph.deployment-id=$ARANGO_GRAPH_DEPLOYMENT_ID
-```
-
-### How long does it take?
-
-The total time required to complete the migration depends on how much data you
-have and how often write operations are executed during the process.
-
-You can also track the progress by checking the **Migration status** section of
-your target deployment in the ArangoGraph dashboard.
-
-
-
-### Getting the current status
-
-To print the current status of the migration, run the following command:
-
-```bash
-./arangosync-migration status \
- --arango-graph.api-key=$ARANGO_GRAPH_API_KEY \
- --arango-graph.api-secret=$ARANGO_GRAPH_API_SECRET \
- --arango-graph.deployment-id=$ARANGO_GRAPH_DEPLOYMENT_ID
-```
-
-You can also add the `--watch` option to start monitoring the status in real-time.
-
-### Stopping the migration process
-
-The `arangosync-migration stop` command stops the migration and terminates
-the migration agent process.
-
-If replication is running normally, the command waits until all shards are
-in sync. The local cluster is then switched into read-only mode.
-After all shards are in-sync and the migration stopped, the target deployment
-is switched into the mode specified in the `--source.server-mode` option. If no
-option is specified, it defaults to read/write mode.
-
-```bash
-./arangosync-migration stop \
- --arango-graph.api-key=$ARANGO_GRAPH_API_KEY \
- --arango-graph.api-secret=$ARANGO_GRAPH_API_SECRET \
- --arango-graph.deployment-id=$ARANGO_GRAPH_DEPLOYMENT_ID
-```
-
-The additional `--abort` option is supported. If specified, the `stop` command
-no longer checks whether both deployments are in sync and stops all
-migration-related processes as soon as possible.
-
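-For example, combining `--abort` with the options shown above:
-
-```bash
-./arangosync-migration stop --abort \
-  --arango-graph.api-key=$ARANGO_GRAPH_API_KEY \
-  --arango-graph.api-secret=$ARANGO_GRAPH_API_SECRET \
-  --arango-graph.deployment-id=$ARANGO_GRAPH_DEPLOYMENT_ID
-```
-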
-### Switching the local cluster to read-only mode
-
-The `arangosync-migration set-server-mode` command allows you to switch
-[read-only mode](../develop/http-api/administration.md#set-the-server-mode-to-read-only-or-default)
-on and off for your local cluster.
-
-In read-only mode, all write operations fail with error code
-`1004` (ERROR_READ_ONLY).
-Creating or dropping databases and collections also fails with
-error code `11` (ERROR_FORBIDDEN).
-
-```bash
-./arangosync-migration set-server-mode \
- --source.endpoint=$COORDINATOR_ENDPOINT \
- --source.jwt-secret=/path-to/jwt-secret.file \
- --source.server-mode=readonly
-```
-The `--source.server-mode` option allows you to specify the desired server mode.
-Allowed values are `readonly` or `default`.
-
-### Supported environment variables
-
-The `arangosync-migration` tool supports the following environment variables:
-
-- `$ARANGO_GRAPH_API_KEY`
-- `$ARANGO_GRAPH_API_SECRET`
-- `$ARANGO_GRAPH_DEPLOYMENT_ID`
-
-Using these environment variables is highly recommended to ensure a secure way
-of providing sensitive data to the application.
-
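-For example, you can export them in your shell before running the tool; the
-values are placeholders:
-
-```bash
-export ARANGO_GRAPH_API_KEY="<your-api-key>"
-export ARANGO_GRAPH_API_SECRET="<your-api-secret>"
-export ARANGO_GRAPH_DEPLOYMENT_ID="<your-deployment-id>"
-```
-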
-### Restrictions and limitations
-
-When running the migration, ensure that your target deployment has the same
-amount of resources (CPU, RAM) as your source cluster, or more. Otherwise, the
-migration process might get stuck or require manual intervention. This is closely
-connected to the type of data you have and how it is distributed between shards
-and collections.
-
-In general, the most important parameters are:
-- Total number of leader shards
-- The amount of data in bytes per collection
-
-Both parameters can be retrieved from the ArangoDB Web Interface.
-
-The `arangosync-migration` tool supports migrating large datasets of up to
-5 TB of data and 3800 leader shards, as well as collections as big as 250 GB.
-
-In case you have any questions, please
-[reach out to us](https://www.arangodb.com/contact).
-
-## Cloud migration workflow for minimal downtime
-
-1. Download and start the `arangosync-migration` tool. The target deployment
- is switched into read-only mode automatically.
-2. Wait until all shards are in sync. You can use the `status` or the `start`
- command with the same parameters to track that.
-3. Optionally, when all shards are in-sync, you can switch your applications
- to use the endpoint of the ArangoGraph deployment, but note that it stays in
- read-only mode until the migration process is fully completed.
-4. Stop the migration using the `stop` subcommand. The following steps are executed:
- - The source data cluster is switched into read-only mode.
- - It waits until all shards are synchronized.
- - The target deployment is switched into default read/write mode.
-
- {{< info >}}
- If you switched the source data cluster into read-only mode,
- you can switch it back to default (read/write) mode using the
- `set-server-mode` subcommand.
- {{< /info >}}
diff --git a/site/content/3.10/arangograph/monitoring-and-metrics.md b/site/content/3.10/arangograph/monitoring-and-metrics.md
deleted file mode 100644
index 2b9ede4b4a..0000000000
--- a/site/content/3.10/arangograph/monitoring-and-metrics.md
+++ /dev/null
@@ -1,137 +0,0 @@
----
-title: Monitoring & Metrics in ArangoGraph
-menuTitle: Monitoring & Metrics
-weight: 40
-description: >-
- ArangoGraph provides various built-in tools and integrations to help you
- monitor your deployment
----
-The ArangoGraph Insights Platform provides integrated charts, metrics, and logs
-to help you monitor your deployment. This allows you to track your deployment's
-performance, resource utilization, and its overall status.
-
-The key features include:
-- **Built-in monitoring**: Get immediate access to monitoring capabilities for
- your deployments without any additional setup.
-- **Chart-based metrics representation**: Visualize the usage of the DB-Servers
- and Coordinators over a selected timeframe.
-- **Integration with Prometheus and Grafana**: Connect your metrics to Prometheus
- and Grafana for in-depth visualization and analysis.
-
-To get started, select an existing deployment from within a project and
-click **Monitoring** in the navigation.
-
-
-
-## Built-in monitoring and metrics
-
-### In the **Servers** section
-
-The **Servers** section offers an overview of the DB-Servers, Coordinators,
-and Agents used in your deployment. It provides essential details such as each
-server's ID and type, the running ArangoDB version, as well as their memory,
-CPU, and disk usage.
-
-In case you need to perform a restart on a server, you can do so by using the
-**Gracefully restart this server** action button. This shuts down all services
-normally, allowing ongoing operations to finish gracefully before the restart
-occurs.
-
-Additionally, you can access detailed logs via the **Logs** button. This allows
-you to apply filters to obtain logs from all server types or select specific ones
-(e.g., only Coordinators or only DB-Servers) within a timeframe. To download the
-logs, click the **Save** button.
-
-
-
-### In the **Metrics** section
-
-The **Metrics** section displays a chart-based representation depicting the
-resource utilization of DB-Servers and Coordinators within a specified timeframe.
-
-You can select one or more DB-Servers and choose **CPU**, **Memory**, or **Disk**
-to visualize their respective usage. The search box enables you to easily find
-a server by its ID, which is particularly useful when you have a large number
-of servers.
-
-Similarly, you can repeat the process for Coordinators to see the **CPU** and
-**Memory** usage.
-
-
-
-## Connect with Prometheus and Grafana
-
-The ArangoGraph Insights Platform provides metrics for each deployment in a
-[Prometheus](https://prometheus.io/)-compatible format.
-You can use these metrics to gather detailed insights into the current
-and previous states of your deployment.
-Once metrics are collected by Prometheus, you can inspect them using tools
-such as [Grafana](https://grafana.com/oss/grafana/).
-
-
-
-### Metrics tokens
-
-The **Metrics tokens** section allows you to create a new metrics token,
-which is required for connecting to Prometheus.
-
-1. To create a metrics token, click **New metrics token**.
-2. For **Name**, enter a name for the metrics token.
-3. Optionally, you can also enter a **Short description**.
-4. Select the **Lifetime** of the metrics token.
-5. Click **Create**.
-
-
-
-### How to connect Prometheus
-
-1. In the **Metrics** section, click **Connect Prometheus**.
-2. Create the `prometheus.yml` file with the following content:
- ```yaml
- global:
- scrape_interval: 60s
- scrape_configs:
- - job_name: 'deployment'
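-        # paste the metrics token created above (from the Metrics tokens section)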
- bearer_token: ''
- scheme: 'https'
- static_configs:
- - targets: ['6775e7d48152.arangodb.cloud:8829']
- tls_config:
- insecure_skip_verify: true
- ```
-3. Start Prometheus with the following command:
- ```sh
- docker run -d \
- -p 9090:9090 -p 3000:3000 --name prometheus \
- -v $(pwd)/prometheus.yml:/etc/prometheus/prometheus.yml:ro \
- prom/prometheus
- ```
- {{< info >}}
-   This command also opens port 3000 for Grafana. In a production environment,
-   this port is not needed, and it is not recommended to leave it open.
- {{< /info >}}
-
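-To verify that the metrics endpoint is reachable with your metrics token, you
-can query it directly. This is a sketch: the host is the deployment-specific
-target from `prometheus.yml`, `/metrics` is assumed as the scrape path
-(Prometheus' default), and `-k` mirrors the `insecure_skip_verify` setting:
-
-```sh
-curl -k -H "Authorization: Bearer $METRICS_TOKEN" \
-  "https://6775e7d48152.arangodb.cloud:8829/metrics"
-```
-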
-### How to connect Grafana
-
-1. Start Grafana with the following command:
- ```sh
- docker run -d \
- --network container:prometheus \
- grafana/grafana
- ```
-2. Go to `localhost:3000` and log in with the following credentials:
- - For username, enter *admin*.
- - For password, enter *admin*.
-
- {{< tip >}}
- After the initial login, make sure to change your password.
- {{< /tip >}}
-
-3. To add a data source, click **Add your first data source** and then do the following:
- - Select **Prometheus**.
- - For **HTTP URL**, enter `http://localhost:9090`.
- - Click **Save & Test**.
-4. To add a dashboard, open the menu and click **Create** and then **Import**.
-5. Download the [Grafana dashboard for ArangoGraph](https://github.com/arangodb-managed/grafana-dashboards).
-6. Copy the contents of the `main.json` file into the **Import via panel json** field in Grafana.
-7. Click **Load**.
diff --git a/site/content/3.10/arangograph/my-account.md b/site/content/3.10/arangograph/my-account.md
deleted file mode 100644
index e79415060a..0000000000
--- a/site/content/3.10/arangograph/my-account.md
+++ /dev/null
@@ -1,171 +0,0 @@
----
-title: My Account in ArangoGraph
-menuTitle: My Account
-weight: 35
-description: >-
- How to manage your user account, your organizations, and your API keys in ArangoGraph
----
-You can access information related to your account via the __User Toolbar__.
-The toolbar is in the top right corner in the ArangoGraph dashboard and
-accessible from every view. There are two elements:
-
-- __Question mark icon__: Help
-- __User icon__: My Account
-
-
-
-## Overview
-
-### How to view my account
-
-1. Hover over or click the user icon of the __User Toolbar__ in the top right corner.
-2. Click __Overview__ in the __My account__ section.
-3. The __Overview__ displays your name, email address, and company, as well as
-   when the account was created.
-
-### How to edit the profile of my account
-
-1. Hover over or click the user icon in the __User Toolbar__ in the top right corner.
-2. Click __Overview__ in the __My account__ section.
-3. Click the __Edit__ button.
-4. Change your personal information and __Save__.
-
-
-
-## Organizations
-
-### How to view my organizations
-
-1. Hover over or click the user icon of the __User Toolbar__ in the top right corner.
-2. Click __My organizations__ in the __My account__ section.
-3. Your organizations are listed in a table.
- Click the organization name or the eye icon in the __Actions__ column to
- jump to the organization overview.
-
-### How to create a new organization
-
-1. Hover over or click the user icon of the __User Toolbar__ in the top right corner.
-2. Click __My organizations__ in the __My account__ section.
-3. Click the __New organization__ button.
-4. Enter a name and a description for the new organization and click the
-   __Create__ button.
-
-{{< info >}}
-The free-to-try tier is limited to a single organization.
-{{< /info >}}
-
-### How to delete an organization
-
-{{< danger >}}
-Removing an organization implies the deletion of projects and deployments.
-This operation cannot be undone and **all deployment data will be lost**.
-Please proceed with caution.
-{{< /danger >}}
-
-1. Hover over or click the user icon of the __User Toolbar__ in the top right corner.
-2. Click __My organizations__ in the __My account__ section.
-3. Click the __recycle bin__ icon in the __Actions__ column.
-4. Enter `Delete!` to confirm and click __Yes__.
-
-{{< info >}}
-If you are no longer a member of any organization, then a new organization is
-created for you when you log in again.
-{{< /info >}}
-
-
-
-## Invites
-
-Invitations are requests to join organizations. You can accept or reject
-pending invites.
-
-### How to create invites
-
-See [Users and Groups: How to add a new member to the organization](organizations/users-and-groups.md#how-to-add-a-new-member-to-the-organization)
-
-### How to respond to my invites
-
-#### I am not a member of an organization yet
-
-1. Once invited, you will receive an email asking to join your
- ArangoGraph organization.
- 
-2. Click the __View my organization invite__ link in the email. You will be
- asked to log in or to create a new account.
-3. To sign up for a new account, click the __Start Free__ button or the
- __Sign up__ link in the header navigation.
- 
-4. After successfully signing up, you will receive a verification email.
-5. Click the __Verify my email address__ link in the email. It takes you back
- to the ArangoGraph Insights Platform site.
- 
-6. After successfully logging in, you can accept or reject the invite to
- join your organization.
- 
-7. After accepting the invite, you become a member of your organization and
- will be granted access to the organization and its related projects and
- deployments.
-
-#### I am already a member of an organization
-
-1. Once invited, you will receive an email asking to join your
- ArangoGraph organization, as well as a notification in the ArangoGraph dashboard.
-2. Click the __View my organization invites__ link in the email, or hover over the
- user icon in the top right corner of the dashboard and click
- __My organization invites__.
- 
-3. On the __Invites__ tab of the __My account__ view, you can accept or reject
- pending invitations, as well as see past invitations that you accepted or
- rejected. Click the button with a checkmark icon to join the organization.
- 
-
-## API Keys
-
-API keys are authentication tokens intended to be used for scripting.
-They allow a script to authenticate on behalf of a user.
-
-An API key consists of a key and a secret. You need both to complete
-authentication.
-
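-For example, a script can exchange an API key for an access token using the
-`oasisctl` command-line tool. This is a sketch; the environment variables are
-placeholders for the key ID and secret shown when creating the API key:
-
-```sh
-export OASIS_TOKEN=$(oasisctl login \
-  --key-id "$ARANGOGRAPH_KEY_ID" \
-  --key-secret "$ARANGOGRAPH_KEY_SECRET")
-```
-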
-### How to view my API keys
-
-1. Hover over or click the user icon of the __User Toolbar__ in the top right corner.
-2. Click __My API keys__ in the __My account__ section.
-3. Information about your API keys is listed in the __My API keys__ section.
-
-
-
-### How to create a new API key
-
-1. Hover over or click the user icon of the __User Toolbar__ in the top right corner.
-2. Click __My API keys__ in the __My account__ section.
-3. Click the __New API key__ button.
-4. Optionally limit the API key to a specific organization.
-5. Optionally specify in the __Time to live__ field after how many hours the
-   API key should expire.
-6. Optionally limit the API key to read-only APIs.
-7. Click the __Create__ button.
-8. Copy the API key ID and Secret, then click the __Close__ button.
-
-{{< security >}}
-The secret is only shown once at creation time.
-You have to store it in a safe place.
-{{< /security >}}
-
-
-
-
-
-### How to revoke or delete an API key
-
-1. Hover over or click the user icon of the __User Toolbar__ in the top right corner.
-2. Click __My API keys__ in the __My account__ section.
-3. Click an icon in the __Actions__ column:
- - __Counter-clockwise arrow__ icon: Revoke API key
- - __Recycle bin__ icon: Delete API key
-4. Click the __Yes__ button to confirm.
-
-{{% comment %}}
-TODO: Copy to clipboard button
-Access token that should expire after 1 hour unless renewed, might get removed as it's confusing.
-{{% /comment %}}
diff --git a/site/content/3.10/arangograph/notebooks.md b/site/content/3.10/arangograph/notebooks.md
deleted file mode 100644
index b581dc44d8..0000000000
--- a/site/content/3.10/arangograph/notebooks.md
+++ /dev/null
@@ -1,170 +0,0 @@
----
-title: ArangoGraph Notebooks
-menuTitle: Notebooks
-weight: 25
-description: >-
- How to create and manage colocated Jupyter Notebooks within ArangoGraph
----
-{{< info >}}
-This documentation describes the beta version of the Notebooks feature and is
-subject to change. The beta version is free for all.
-{{< /info >}}
-
-The ArangoGraph Notebook is a JupyterLab notebook embedded in the ArangoGraph
-Insights Platform. The notebook integrates seamlessly with the platform,
-automatically connecting to ArangoGraph services, including ArangoDB and the
-ArangoML platform services. This makes it much easier to leverage these
-resources without having to download any data locally or to remember user IDs,
-passwords, and endpoint URLs.
-
-
-
-The ArangoGraph Notebook has built-in [ArangoGraph Magic Commands](#arangograph-magic-commands)
-that answer questions like:
-- What ArangoDB database am I connected to at the moment?
-- What data does the ArangoDB instance contain?
-- How can I access certain documents?
-- How do I create a graph?
-
-The ArangoGraph Notebook also pre-installs [python-arango](https://docs.python-arango.com/en/main/)
-and ArangoML connectors
-to [PyG](https://github.com/arangoml/pyg-adapter),
-[DGL](https://github.com/arangoml/dgl-adapter),
-[CuGraph](https://github.com/arangoml/cugraph-adapter), as well as the
-[FastGraphML](https://github.com/arangoml/fastgraphml)
-library, so you can get started right away, accessing data in ArangoDB to
-develop GraphML models using your favorite GraphML libraries with GPUs.
-
-## How to create a new notebook
-
-1. Open the deployment in which you want to create the notebook.
-2. Go to the **Data science** section and click the **Create Notebook** button.
-3. Enter a name and optionally a description for your new notebook.
-4. Select a configuration model from the dropdown menu. Click **Save**.
-5. The notebook's phase is set to **Initializing**. Once the phase changes to
- **Running**, the notebook's endpoint is accessible.
-6. Click the **Open notebook** button to access your notebook.
-7. To access your notebook, you need to be signed into ArangoGraph as a user with
- the `notebook.notebook.execute` permission in your project. Organization
- owners have this permission enabled by default. The `notebook-executor` role
- which contains the permission can also be granted to other members of the
- organization via roles. See how to create a
- [role binding](security-and-access-control/_index.md#how-to-view-edit-or-remove-role-bindings-of-a-policy).
-
-{{< info >}}
-Depending on the tier your organization belongs to, different limitations apply:
-- On-Demand and Committed: you can create up to three notebooks per deployment.
-- Free-to-try: you can only create one notebook per deployment.
-{{< /info >}}
-
-
-
-{{< info >}}
-Notebooks in beta version have a fixed configuration of 10 GB of disk size.
-{{< /info >}}
-
-## How to edit a notebook
-
-1. Select the notebook that you want to change from the **Notebooks** tab.
-2. Click **Edit notebook**. You can modify its name and description.
-3. To pause a notebook, click the **Pause notebook** button. You can resume it
-   at any time. The notebook's phase is updated accordingly.
-
-## How to delete a notebook
-
-1. Select the notebook that you want to remove from the **Notebooks** tab.
-2. Click the **Delete notebook** button.
-
-## Getting Started notebook
-
-To get a better understanding of how to interact with your ArangoDB database
-cluster, use the ArangoGraph Getting Started template.
-The ArangoGraph Notebook automatically connects to the ArangoDB service
-endpoint, so you can immediately start interacting with it.
-
-1. Log in to the notebook you have created by using your deployment's root password.
-2. Select the `GettingStarted.ipynb` template from the file browser.
-
-## ArangoGraph Magic Commands
-
-The following is a list of the available magic commands.
-Single-line commands have a `%` prefix and multi-line commands have a `%%`
-prefix. A short example session is shown after the AQL commands.
-
-**Database Commands**
-
-- `%listDatabases` - lists the databases on the database server.
-- `%whichDatabase` - returns the database name you are connected to.
-- `%createDatabase databaseName` - creates a database.
-- `%selectDatabase databaseName` - selects a database as the current database.
-- `%useDatabase databaseName` - uses a database as the current database;
- alias for `%selectDatabase`.
-- `%getDatabase databaseName` - gets a database. Used for assigning a database,
-  e.g. `studentDB` = `%getDatabase student_database`.
-- `%deleteDatabase databaseName` - deletes the database.
-
-**Graph Commands**
-
-- `%listGraphs` - lists the graphs defined in the currently selected database.
-- `%whichGraph` - returns the graph name that is currently selected.
-- `%createGraph graphName` - creates a named graph.
-- `%selectGraph graphName` - selects the graph as the current graph.
-- `%useGraph graphName` - uses the graph as the current graph;
- alias for `%selectGraph`.
-- `%getGraph graphName` - gets the graph for variable assignment,
- e.g. `studentGraph` = `%getGraph student-graph`.
-- `%deleteGraph graphName` - deletes a graph.
-
-**Collection Commands**
-
-- `%listCollections` - lists the collections in the currently selected database.
-- `%whichCollection` - returns the collection name that is currently selected.
-- `%createCollection collectionName` - creates a collection.
-- `%selectCollection collectionName` - selects a collection as the current collection.
-- `%useCollection collectionName` - uses the collection as the current collection;
- alias for `%selectCollection`.
-- `%getCollection collectionName` - gets a collection for variable assignment,
-  e.g. `student` = `%getCollection Student`.
-- `%createEdgeCollection` - creates an edge collection.
-- `%createVertexCollection` - creates a vertex collection.
-- `%createEdgeDefinition` - creates an edge definition.
-- `%deleteCollection collectionName` - deletes the collection.
-- `%truncateCollection collectionName` - truncates the collection.
-- `%sampleCollection collectionName` - returns a random document from the collection.
- If no collection is specified, then it uses the selected collection.
-
-**Document Commands**
-
-- `%insertDocument jsonDocument` - inserts the document into the currently selected collection.
-- `%replaceDocument jsonDocument` - replaces the document in the currently selected collection.
-- `%updateDocument jsonDocument` - updates the document in the currently selected collection.
-- `%deleteDocument jsonDocument` - deletes the document from the currently selected collection.
-- `%%importBulk jsonDocumentArray` - imports an array of documents into the currently selected collection.
-
-**AQL Commands**
-
-- `%aql single-line_aql_query` - executes a single line AQL query.
-- `%%aqlm multi-line_aql_query` - executes a multi-line AQL query.
-
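-A minimal example session combining these commands might look as follows
-(a sketch; the database, collection, and document are illustrative):
-
-```
-%createDatabase schoolDB
-%selectDatabase schoolDB
-%createCollection Student
-%selectCollection Student
-%insertDocument {"name": "Alice"}
-%aql FOR s IN Student RETURN s.name
-```
-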
-**Variables**
-
-- `_endpoint` - the endpoint (URL) of the ArangoDB Server.
-- `_system` - the system database used for creating, listing, and deleting databases.
-- `_db` - the selected (current) database. To select a different database, use `%selectDatabase`.
-- `_graph` - the selected (current) graph. To select a different graph, use `%selectGraph`.
-- `_collection` - the selected (current) collection. To select a different collection, use `%selectCollection`.
-- `_user` - the current user.
-
-You can use these variables directly, for example, `_db.collections()` to list
-collections or `_system.databases()` to list databases.
-
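-For instance, because the notebook pre-installs python-arango, `_db` can be
-used like a python-arango database object in a notebook cell (a sketch,
-assuming at least one collection exists):
-
-```py
-# Print the name of each collection in the currently selected database
-for collection in _db.collections():
-    print(collection["name"])
-```
-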
-You can also create your own variable assignments, such as:
-
-- `schoolDB` = `%getDatabase schoolDB`
-- `school_graph` = `%getGraph school_graph`
-- `student` = `%getCollection Student`
-
-**Reset environment**
-
-In the event that any of the above variables have been unintentionally changed,
-you can revert all of them to the default state with `reset_environment()`.
diff --git a/site/content/3.10/arangograph/organizations/_index.md b/site/content/3.10/arangograph/organizations/_index.md
deleted file mode 100644
index 85ee2c7656..0000000000
--- a/site/content/3.10/arangograph/organizations/_index.md
+++ /dev/null
@@ -1,111 +0,0 @@
----
-title: Organizations in ArangoGraph
-menuTitle: Organizations
-weight: 10
-description: >-
- How to manage organizations and what type of packages ArangoGraph offers
----
-An ArangoGraph organization is a container for projects. An organization
-typically represents a (commercial) entity such as a company, a company division,
-an institution, or a non-profit organization.
-
-**Organizations → Projects → Deployments**
-
-Users can be members of one or more organizations. However, you can only be a
-member of one _Free-to-try_ tier organization at a time.
-
-## How to switch between my organizations
-
-1. The first entry in the main navigation (with a double arrow icon) indicates
- the current organization.
-2. Click it to bring up a dropdown menu to select another organization of which you
- are a member.
-3. The overview opens for the selected organization, showing the number of
-   projects, the tier, and when it was created.
-
-
-
-
-
-## ArangoGraph Packages
-
-With the ArangoGraph Insights Platform, your organization can choose one of the
-following packages.
-
-### Free Trial
-
-ArangoGraph comes with a free-to-try tier that lets you test ArangoGraph for
-free for 14 days. You can get started quickly, without needing to enter a
-credit card.
-
-The free trial gives you access to:
-- One small deployment (4GB) in a region of your choice for 14 days
-- Local backups
-- One ArangoGraph Notebook for learning and data science
-
-After the trial period, your deployment will be deleted automatically.
-
-### On-Demand
-
-Add a payment method to gain access to ArangoGraph's full feature set.
-Pay monthly via a credit card for what you actually use.
-
-This package unlocks all ArangoGraph functionality, including:
-- Multiple and larger deployments
-- Backups to cloud storage, with multi-region support
-- Enhanced security features such as Private Endpoints
-
-### Committed
-
-Commit up-front for a year and pay via the Sales team. This package provides
-the same flexibility as On-Demand, but at a lower price.
-
-In addition, you gain access to:
-- 24/7 Premium Support
-- ArangoDB Professional Services Engagements
-- Ability to transact via the AWS and GCP marketplaces
-
-To take advantage of this, you need to get in touch with the ArangoDB
-team. [Contact us](https://www.arangodb.com/contact/) for more details.
-
-## How to unlock all features
-
-You can unlock all features in ArangoGraph at any time by adding your billing
-details and a payment method. As soon as you have added a payment method, all
-ArangoGraph functionalities are immediately unlocked. From that point on, your
-deployments will no longer expire and you can create more and larger deployments.
-
-See [Billing: How to add billing details / payment methods](billing.md)
-
-
-
-## How to create a new organization
-
-See [My Account: How to create a new organization](../my-account.md#how-to-create-a-new-organization)
-
-## How to restrict access to an organization
-
-If you want to restrict access to an organization, you can do so by specifying
-which authentication providers are accepted for users trying to access the
-organization. For more information, refer to the
-[Access Control](../security-and-access-control/_index.md#restricting-access-to-organizations) section.
-
-## How to delete the current organization
-
-{{< danger >}}
-Removing an organization implies the deletion of projects and deployments.
-This operation cannot be undone and **all deployment data will be lost**.
-Please proceed with caution.
-{{< /danger >}}
-
-1. Click **Overview** in the **Organization** section of the main navigation.
-2. Open the **Danger zone** tab.
-3. Click the **Delete organization** button.
-4. Enter `Delete!` to confirm and click **Yes**.
-
-{{< info >}}
-If you are no longer a member of any organization, then a new organization is
-created for you when you log in again.
-{{< /info >}}
-
-{{< tip >}}
-If the organization has a locked resource (a project or a deployment), you need to [unlock](../security-and-access-control/_index.md#locked-resources)
-that resource first to be able to delete the organization.
-{{< /tip >}}
diff --git a/site/content/3.10/arangograph/organizations/billing.md b/site/content/3.10/arangograph/organizations/billing.md
deleted file mode 100644
index 9b892b5500..0000000000
--- a/site/content/3.10/arangograph/organizations/billing.md
+++ /dev/null
@@ -1,36 +0,0 @@
----
-title: Billing in ArangoGraph
-menuTitle: Billing
-weight: 10
-description: >-
- How to manage billing details and payment methods in ArangoGraph
----
-## How to add billing details
-
-1. In the main navigation menu, click the **Organization** icon.
-2. Click **Billing** in the **Organization** section.
-3. In the **Billing Details** section, click **Edit**.
-4. Enter your company name, billing address, and EU VAT identification number (if applicable).
-5. Optionally, enter the email address(es) to which invoices should be emailed
- to automatically.
-6. Click **Save**.
-
-
-
-## How to add a payment method
-
-1. In the main navigation menu, click the **Organization** icon.
-2. Click **Billing** in the **Organization** section.
-3. In the **Payment methods** section, click **Add**.
-4. Fill out the form with your credit card details. Currently, a credit card
-   is the only available payment method.
-5. Click **Save**.
-
-
-
-{{% comment %}}
-TODO: Need screenshot with invoice
-
-### How to view invoices
-
-
-{{% /comment %}}
diff --git a/site/content/3.10/arangograph/organizations/credits-and-usage.md b/site/content/3.10/arangograph/organizations/credits-and-usage.md
deleted file mode 100644
index 34dafb8488..0000000000
--- a/site/content/3.10/arangograph/organizations/credits-and-usage.md
+++ /dev/null
@@ -1,147 +0,0 @@
----
-title: Credits & Usage in ArangoGraph
-menuTitle: Credits & Usage
-weight: 15
-description: >-
- Credits give you access to a flexible prepaid model, so you can allocate them
- across multiple deployments as needed
----
-{{< info >}}
-Credits are only available if your organization has signed up for
-ArangoGraph's [Committed](../organizations/_index.md#committed) package.
-{{< /info >}}
-
-The ArangoGraph credit model is a versatile prepaid model that allows you to
-purchase credits and use them in a flexible way, based on what you have running
-in ArangoGraph.
-
-Instead of purchasing a particular deployment for a year, you can purchase a
-number of ArangoGraph credits that expire a year after purchase. These credits
-are then consumed over that time period, based on the deployments you run
-in ArangoGraph.
-
-For example, a OneShard (three nodes) A64 deployment consumes more credits per
-hour than a smaller deployment such as A8. If you are running multiple
-deployments, such as pre-production environments or deployments for different
-use cases, each of them consumes from the same credit balance. However, if you
-are not running any deployments and do not have any backup storage, then none
-of your credits are consumed.
-
-{{< tip >}}
-To purchase credits for your organization, you need to get in touch with the
-ArangoDB team. [Contact us](https://www.arangodb.com/contact/) for more details.
-{{< /tip >}}
-
-There are a number of benefits that ArangoGraph credits provide:
-- **Adaptability**: The pre-paid credit model allows you to adapt your usage to
- changing project requirements or fluctuating workloads. By enabling the use of
- credits for various instance types and sizes, you can easily adjust your
- resource allocation.
-- **Efficient handling of resources**: With the ability to purchase credits in
- advance, you can better align your needs in terms of resources and costs.
- You can purchase credits in bulk and then allocate them as needed.
-- **Workload Optimization**: By having a clear view of credit consumption and
- remaining balance, you can identify inefficiencies to further optimize your
- infrastructure, resulting in cost savings and better performance.
-
-## How to view the credit usage
-
-1. In the main navigation, click the **Organization** icon.
-2. Click **Credits & Usage** in the **Organization** section.
-3. In the **Credits & Usage** page, you can:
- - See the remaining credit balance.
- - Track your total credit balance.
- - See a projection of when you will run out of credits, based on the last 30 days of usage.
- - Get a detailed consumption report in PDF format that shows:
- - The number of credits you had at the start of the month.
- - The number of credits consumed in the month.
- - The number of credits remaining.
- - The number of credits consumed for each deployment.
-
-
-
-## FAQs
-
-### Are there any configuration constraints for using the credits?
-
-No. Credits are designed to be used completely flexibly. You can use all of your
-credits for multiple small deployments (e.g. A8s) or you can use them for a single
-large deployment (e.g. A256), or even multiple large deployments, as long as you
-have enough credits remaining.
-
-### What is the flexibility of moving up or down in configuration size of the infrastructure?
-
-You can move up in configuration size at any point by editing your deployment
-within ArangoGraph. This is limited to once every 6 hours to allow for in-place
-disk expansion.
-
-### Is there a limit to how many deployments I can use my credits on?
-
-There is no specific limit to the number of deployments you can use your credits
-on. The credit model is designed to provide you with the flexibility to allocate
-credits across multiple deployments as needed. This enables you to effectively
-manage and distribute your resources according to your specific requirements and
-priorities. However, it is essential to monitor your credit consumption to ensure
-that you have sufficient credits to cover your deployments.
-
-### Do the credits I purchase expire?
-
-Yes, credits expire 1 year after purchase. You should ensure that you consume
-all of these credits within the year.
-
-### Can I make multiple purchases of credits within a year?
-
-As an organization’s usage of ArangoGraph grows, particularly in the initial
-phases of application development and early production release, it is common
-to purchase a smaller credit package that is later supplemented by a larger
-credit package part-way through the initial credit expiry term.
-In this case, all sets of credits will be available for ArangoGraph consumption
-as a single credit balance. The credits with the earlier expiry date are consumed
-first to avoid credit expiry where possible.
-
-### Can I purchase a specific number of credits (e.g. 3361, 4185)?
-
-ArangoGraph offers a variety of predefined credit packages designed to
-accommodate different needs and stages of the application lifecycle.
-For any credit purchasing needs, please [contact us](https://www.arangodb.com/contact/)
-and we are happy to help find an appropriate package for you.
-
-### How quickly will the credits I purchase be consumed?
-
-The rate at which your purchased credits will be consumed depends on several
-factors, including the type and size of instances you deploy, the amount of
-resources used, and the duration of usage. Each machine size has an hourly credit
-consumption rate, and the overall rate of credit consumption will increase for
-larger sizes or for more machines/deployments. Credits will also be consumed for
-any variable usage charges such as outbound network traffic and backup storage.
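-
-As an illustrative calculation (the rate is hypothetical; actual rates depend
-on the machine size), a deployment consuming 2 credits per hour that runs
-continuously for a 30-day month consumes about 2 × 24 × 30 = 1,440 credits.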
-
-### How can I see how many credits I have remaining?
-
-All details about credits, including how many credits have been purchased,
-how many remain, and how they are being consumed are available in the
-**Credits & Usage** page within the ArangoGraph web interface.
-
-### I have a large sharded deployment, how do I know how many credits it will consume?
-
-If you are using credits, then you will be able to see how many credits your
-configured deployment will consume when [creating](../deployments/_index.md#how-to-create-a-new-deployment)
-or [editing a deployment](../deployments/_index.md#how-to-edit-a-deployment).
-
-You can download a detailed consumption report in the
-[**Credits & Usage** section](#how-to-view-the-credit-usage). It shows you the
-number of credits consumed by any deployment you are creating or editing.
-
-All users can see the credit price of each node size in the **Pricing** section.
-
-### What happens if I run out of credits?
-
-If you run out of credits, your access to ArangoGraph's services and resources
-will be temporarily suspended until you purchase additional credits.
-
-### Can I buy credits for a short time period (e.g. 2 months)?
-
-No, you cannot buy credits with an expiry of less than 12 months.
-If you require credits for a shorter time frame, such as 2 months, you can still
-purchase one of the standard credit packages and consume the credits as needed
-during that time. You may opt for a smaller credit package that aligns with your
-expected usage during the desired period, rather than the full year’s expected usage.
-Although the credits will have a longer expiration period, this allows you to have
-the flexibility of utilizing the remaining credits for any future needs.
\ No newline at end of file
diff --git a/site/content/3.10/arangograph/organizations/users-and-groups.md b/site/content/3.10/arangograph/organizations/users-and-groups.md
deleted file mode 100644
index abed36697b..0000000000
--- a/site/content/3.10/arangograph/organizations/users-and-groups.md
+++ /dev/null
@@ -1,125 +0,0 @@
----
-title: Users and Groups in ArangoGraph
-menuTitle: Users & Groups
-weight: 5
-description: >-
- How to manage individual members and user groups in ArangoGraph
----
-## Users, groups & members
-
-When you use ArangoGraph, you are logged in as a user.
-A user has properties such as a name & email address.
-Most importantly, a user serves as the identity of a person.
-
-A user is a member of one or more organizations in ArangoGraph.
-You can become a member of an organization in the following ways:
-
-- Create a new organization. You will become the first member and owner of that
- organization.
-- Be invited to join an organization. Once accepted (by the invited user), this
- user becomes a member of the organization.
-
-If the number of members of an organization becomes large, it helps to group
-users. In ArangoGraph, a group is part of an organization and contains a list
-of users. All users of a group must be members of the owning organization.
-
-In the **People** section of the dashboard you can manage users, groups and
-invites for the organization.
-
-To edit the permissions of members, see [Access Control](../security-and-access-control/_index.md).
-
-## Members
-
-Members are a list of users that can access an organization.
-
-
-
-### How to add a new member to the organization
-
-1. In the main navigation, click the __Organization__ icon.
-2. Click __Members__ in the __People__ section.
-3. Optionally, click the __Invites__ entry.
-4. Click the __Invite new member__ button.
-5. In the form that appears, enter the email address of the person you want to
- invite.
-6. Click the __Create__ button.
-7. An email with an organization invite will now be sent to the specified
- email address.
-8. After accepting the invite, the person is added to the organization
-   [members](#members).
-
-
-
-### How to respond to an organization invite
-
-See [My Account: How to respond to my invites](../my-account.md#how-to-respond-to-my-invites)
-
-### How to remove a member from the organization
-
-1. Click __Members__ in the __People__ section of the main navigation.
-2. Delete a member by pressing the __recycle bin__ icon in the __Actions__ column.
-3. Confirm the deletion in the dialog that pops up.
-
-{{< info >}}
-You cannot delete members who are organization owners.
-{{< /info >}}
-
-### How to make a member an organization owner
-
-1. Click __Members__ in the __People__ section of the main navigation.
-2. You can convert a member to an organization owner by pressing the __Key__ icon
- in the __Actions__ column.
-3. You can convert a member back to a normal user by pressing the __User__ icon
- in the __Actions__ column.
-
-## Groups
-
-A group is a defined set of members. Groups can then be bound to roles. These
-bindings contribute to the respective organization, project or deployment policy.
-
-
-
-### How to create a new group
-
-1. Click __Groups__ in the __People__ section of the main navigation.
-2. Press the __New group__ button.
-3. Enter a name and optionally a description for your new group.
-4. Select the members you want to be part of the group.
-5. Press the __Create__ button.
-
-
-
-### How to view, edit or remove a group
-
-1. Click __Groups__ in the __People__ section of the main navigation.
-2. Click an icon in the __Actions__ column:
- - __Eye__: View group
- - __Pencil__: Edit group
- - __Recycle bin__: Delete group
-
-You can also click a group name to view it. There are buttons to __Edit__ and
-__Delete__ the currently viewed group.
-
-
-
-{{< info >}}
-The groups __Organization members__ and __Organization owners__ are virtual groups
-and cannot be changed. They always reflect the current set of organization
-members and owners.
-{{< /info >}}
-
-## Invites
-
-### How to create a new organization invite
-
-See [How to add a new member to the organization](#how-to-add-a-new-member-to-the-organization)
-
-### How to view the status of invitations
-
-1. Click __Invites__ in the __People__ section of the main navigation.
-2. The created invites are displayed, grouped by status __Pending__,
- __Accepted__ and __Rejected__.
-3. You may delete pending invites by clicking the __recycle bin__ icon in the
- __Actions__ column.
-
-
diff --git a/site/content/3.10/arangograph/projects.md b/site/content/3.10/arangograph/projects.md
deleted file mode 100644
index f4efd27833..0000000000
--- a/site/content/3.10/arangograph/projects.md
+++ /dev/null
@@ -1,82 +0,0 @@
----
-title: Projects in ArangoGraph
-menuTitle: Projects
-weight: 15
-description: >-
- How to manage projects and IP allowlists in ArangoGraph
----
-ArangoGraph projects can represent organizational units such as teams,
-product groups, or environments (e.g. staging vs. production). You can have any
-number of projects under one organization.
-
-**Organizations → Projects → Deployments**
-
-Projects are a container for related deployments, certificates & IP allowlists.
-Projects also come with their own policy for access control. You can have any
-number of deployments under one project.
-
-
-
-## How to create a new project
-
-1. In the main navigation, click the __Dashboard__ icon.
-2. Click __Projects__ in the __Dashboard__ section.
-3. Click the __New project__ button.
-4. Enter a name and optionally a description for your new project.
-5. Click the __Create__ button.
-6. You will be taken to the project page.
-7. To change the name or description, click either of them at the top of the page.
-
-
-
-
-
-{{< info >}}
-Projects contain exactly **one policy**. Within that policy, you can define
-role bindings to regulate access control on a project level.
-{{< /info >}}
-
-## How to create a new deployment
-
-See [Deployments: How to create a new deployment](deployments/_index.md#how-to-create-a-new-deployment)
-
-## How to delete a project
-
-{{< danger >}}
-Deleting a project will delete contained deployments, certificates & IP allowlists.
-This operation is **irreversible**.
-{{< /danger >}}
-
-1. Click __Projects__ in the __Dashboard__ section of the main navigation.
-2. Click the __recycle bin__ icon in the __Actions__ column of the project to be deleted.
-3. Enter `Delete!` to confirm and click __Yes__.
-
-{{< tip >}}
-If the project has a locked deployment, you need to [unlock](security-and-access-control/_index.md#locked-resources)
-it first to be able to delete the project.
-{{< /tip >}}
-
-## How to manage IP allowlists
-
-IP allowlists let you limit access to your deployment to certain IP ranges.
-Using an allowlist is optional, but strongly recommended.
-
-You can create an allowlist as part of a project.
-
-1. Click a project name in the __Projects__ section of the main navigation.
-2. Click the __Security__ entry.
-3. In the __IP allowlists__ section, click:
- - The __New IP allowlist__ button to create a new allowlist.
- When creating or editing a list, you can add comments
- in the __Allowed CIDR ranges (1 per line)__ section.
-      Everything after `//` or `#` is considered a comment until the end of
-      the line, as shown in the example after this list.
- - A name or the __eye__ icon in the __Actions__ column to view the allowlist.
- - The __pencil__ icon to edit the allowlist.
- You can also view the allowlist and click the __Edit__ button.
- - The __recycle bin__ icon to delete the allowlist.
-
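-For example, the CIDR ranges of an allowlist could look like this (the
-addresses are illustrative):
-
-```
-# Office network
-203.0.113.0/24
-10.0.0.0/8 // VPN range
-```
-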
-## How to manage role bindings
-
-See:
-- [Access Control: How to view, edit or remove role bindings of a policy](security-and-access-control/_index.md#how-to-view-edit-or-remove-role-bindings-of-a-policy)
-- [Access Control: How to add a role binding to a policy](security-and-access-control/_index.md#how-to-add-a-role-binding-to-a-policy)
diff --git a/site/content/3.10/arangograph/security-and-access-control/_index.md b/site/content/3.10/arangograph/security-and-access-control/_index.md
deleted file mode 100644
index 27742b57b3..0000000000
--- a/site/content/3.10/arangograph/security-and-access-control/_index.md
+++ /dev/null
@@ -1,698 +0,0 @@
----
-title: Security and access control in ArangoGraph
-menuTitle: Security and Access Control
-weight: 45
-description: >-
- This guide explains which access control concepts are available in
- ArangoGraph and how to use them
----
-The ArangoGraph Insights Platform has a structured set of resources that are subject to security and
-access control:
-
-- Organizations
-- Projects
-- Deployments
-
-For each of these resources, you can perform various operations.
-For example, you can create a project in an organization and create a deployment
-inside a project.
-
-## Locked resources
-
-In ArangoGraph, you can lock resources to prevent accidental deletion. When
-a resource is locked, it cannot be deleted and must be unlocked first.
-
-The hierarchical structure of the resources (organization-project-deployment)
-is used in the locking functionality: if a child resource is locked
-(for example, a deployment), you cannot delete the parent project without
-unlocking that deployment first.
-
-{{< info >}}
-If you lock a deployment's backup policy, or a project's IP allowlist,
-CA certificate, or IAM provider, it is still possible to delete the
-corresponding parent resource without unlocking those resources first.
-{{< /info >}}
-
-## Policy
-
-Various actions in ArangoGraph require different permissions, which can be
-granted to users via **roles**.
-
-The association of a member with a role is called a **role binding**.
-All role bindings of a resource comprise a **policy**.
-
-Roles can be bound at the organization, project, and deployment level (listed
-from the highest to the lowest level, with lower levels inheriting permissions
-from their parents). This means that there is a unique policy per resource
-(an organization, a project, or a deployment).
-
-For example, an organization has exactly one policy,
-which binds roles to members of the organization. These bindings are used to
-give the users permissions to perform operations in this organization.
-This is useful when, as an organization owner, you need to extend the permissions
-for an organization member.
-
-{{< info >}}
-Permissions linked to predefined roles vary between organization owners and
-organization members. If you need to extend permissions for an organization
-member, you can create a new role binding. The complete list of roles and
-their respective permissions for both organization owners and members can be
-viewed on the **Policy** page of an organization within the ArangoGraph dashboard.
-{{< /info >}}
-
-### How to view, edit, or remove role bindings of a policy
-
-Decide whether you want to edit the policy for an organization, a project,
-or a deployment:
-
-- **Organization**: In the main navigation, click the __Organization__ icon and
- then click __Policy__.
-- **Project**: In the main navigation, click the __Dashboard__ icon, then click
- __Projects__, click the name of the desired project, and finally click __Policy__.
-- **Deployment**: In the main navigation, click the __Dashboard__ icon, then
- click __Deployments__, click the name of the desired deployment, and finally
- click __Policy__.
-
-To delete a role binding, click the **Recycle Bin** icon in the **Actions** column.
-
-{{< info >}}
-Currently, you cannot edit a role binding, you can only delete it.
-{{< /info >}}
-
-
-
-### How to add a role binding to a policy
-
-1. Navigate to the **Policy** tab of an organization, a project or a deployment.
-2. Click the **New role binding** button.
-3. Select one or more users and/or groups.
-4. Select one or more roles you want to assign to the specified members.
-5. Click **Create**.
-
-
-
-## Roles
-
-Operations on resources in ArangoGraph require zero or more permissions
-(zero meaning that authentication alone is sufficient). Since the number of
-permissions is large and very fine-grained, it is not practical to assign
-permissions directly to users. Instead, ArangoGraph uses **roles**.
-
-A role is a set of permissions. Roles can be bound to groups (preferably)
-or individual users. You can create such bindings for the respective organization,
-project, or deployment policy.
-
-There are predefined roles, but you can also create custom ones.
-
-
-
-### Predefined roles
-
-Predefined roles are created by ArangoGraph and group related permissions together.
-An example of a predefined role is `deployment-viewer`. This role
-contains all permissions needed to view deployments in a project.
-
-Predefined roles cannot be deleted. Note that permissions linked to predefined
-roles vary between organization owners and organization members.
-
-{{% comment %}}
-Command to generate below list with (Git)Bash:
-
-export OASIS_TOKEN=''
-./oasisctl list roles --organization-id --format json | jq -r '.[] | select(.predefined == true) | "**\(.description)** (`\(.id)`):\n\(.permissions | split(", ") | map("- `\(.)`\n") | join(""))"'
-{{% /comment %}}
-
-{{< details summary="List of predefined roles and their permissions" >}}
-
-{{< tip >}}
-The roles below are described following this pattern:
-
-**Role description** (`role ID`):
-- `Permission`
-{{< /tip >}}
-
-**Audit Log Admin** (`auditlog-admin`):
-- `audit.auditlog.create`
-- `audit.auditlog.delete`
-- `audit.auditlog.get`
-- `audit.auditlog.list`
-- `audit.auditlog.set-default`
-- `audit.auditlog.test-https-post-destination`
-- `audit.auditlog.update`
-
-**Audit Log Archive Admin** (`auditlog-archive-admin`):
-- `audit.auditlogarchive.delete`
-- `audit.auditlogarchive.get`
-- `audit.auditlogarchive.list`
-
-**Audit Log Archive Viewer** (`auditlog-archive-viewer`):
-- `audit.auditlogarchive.get`
-- `audit.auditlogarchive.list`
-
-**Audit Log Attachment Admin** (`auditlog-attachment-admin`):
-- `audit.auditlogattachment.create`
-- `audit.auditlogattachment.delete`
-- `audit.auditlogattachment.get`
-
-**Audit Log Attachment Viewer** (`auditlog-attachment-viewer`):
-- `audit.auditlogattachment.get`
-
-**Audit Log Event Admin** (`auditlog-event-admin`):
-- `audit.auditlogevent.delete`
-- `audit.auditlogevents.get`
-
-**Audit Log Event Viewer** (`auditlog-event-viewer`):
-- `audit.auditlogevents.get`
-
-**Audit Log Viewer** (`auditlog-viewer`):
-- `audit.auditlog.get`
-- `audit.auditlog.list`
-
-**Backup Administrator** (`backup-admin`):
-- `backup.backup.copy`
-- `backup.backup.create`
-- `backup.backup.delete`
-- `backup.backup.download`
-- `backup.backup.get`
-- `backup.backup.list`
-- `backup.backup.restore`
-- `backup.backup.update`
-- `backup.feature.get`
-- `data.deployment.restore-backup`
-
-**Backup Viewer** (`backup-viewer`):
-- `backup.backup.get`
-- `backup.backup.list`
-- `backup.feature.get`
-
-**Backup Policy Administrator** (`backuppolicy-admin`):
-- `backup.backuppolicy.create`
-- `backup.backuppolicy.delete`
-- `backup.backuppolicy.get`
-- `backup.backuppolicy.list`
-- `backup.backuppolicy.update`
-- `backup.feature.get`
-
-**Backup Policy Viewer** (`backuppolicy-viewer`):
-- `backup.backuppolicy.get`
-- `backup.backuppolicy.list`
-- `backup.feature.get`
-
-**Billing Administrator** (`billing-admin`):
-- `billing.config.get`
-- `billing.config.set`
-- `billing.invoice.get`
-- `billing.invoice.get-preliminary`
-- `billing.invoice.get-statistics`
-- `billing.invoice.list`
-- `billing.organization.get`
-- `billing.paymentmethod.create`
-- `billing.paymentmethod.delete`
-- `billing.paymentmethod.get`
-- `billing.paymentmethod.get-default`
-- `billing.paymentmethod.list`
-- `billing.paymentmethod.set-default`
-- `billing.paymentmethod.update`
-- `billing.paymentprovider.list`
-
-**Billing Viewer** (`billing-viewer`):
-- `billing.config.get`
-- `billing.invoice.get`
-- `billing.invoice.get-preliminary`
-- `billing.invoice.get-statistics`
-- `billing.invoice.list`
-- `billing.organization.get`
-- `billing.paymentmethod.get`
-- `billing.paymentmethod.get-default`
-- `billing.paymentmethod.list`
-- `billing.paymentprovider.list`
-
-**CA Certificate Administrator** (`cacertificate-admin`):
-- `crypto.cacertificate.create`
-- `crypto.cacertificate.delete`
-- `crypto.cacertificate.get`
-- `crypto.cacertificate.list`
-- `crypto.cacertificate.set-default`
-- `crypto.cacertificate.update`
-
-**CA Certificate Viewer** (`cacertificate-viewer`):
-- `crypto.cacertificate.get`
-- `crypto.cacertificate.list`
-
-**Dataloader Administrator** (`dataloader-admin`):
-- `dataloader.deployment.import`
-
-**Deployment Administrator** (`deployment-admin`):
-- `data.cpusize.list`
-- `data.deployment.create`
-- `data.deployment.create-test-database`
-- `data.deployment.delete`
-- `data.deployment.get`
-- `data.deployment.list`
-- `data.deployment.pause`
-- `data.deployment.rebalance-shards`
-- `data.deployment.resume`
-- `data.deployment.rotate-server`
-- `data.deployment.update`
-- `data.deployment.update-scheduled-root-password-rotation`
-- `data.deploymentfeatures.get`
-- `data.deploymentmodel.list`
-- `data.deploymentprice.calculate`
-- `data.diskperformance.list`
-- `data.limits.get`
-- `data.nodesize.list`
-- `data.presets.list`
-- `monitoring.logs.get`
-- `monitoring.metrics.get`
-- `notification.deployment-notification.list`
-- `notification.deployment-notification.mark-as-read`
-- `notification.deployment-notification.mark-as-unread`
-
-**Deployment Content Administrator** (`deployment-content-admin`):
-- `data.cpusize.list`
-- `data.deployment.create-test-database`
-- `data.deployment.get`
-- `data.deployment.list`
-- `data.deploymentcredentials.get`
-- `data.deploymentfeatures.get`
-- `data.deploymentmodel.list`
-- `data.deploymentprice.calculate`
-- `data.diskperformance.list`
-- `data.limits.get`
-- `data.nodesize.list`
-- `data.presets.list`
-- `monitoring.logs.get`
-- `monitoring.metrics.get`
-- `notification.deployment-notification.list`
-- `notification.deployment-notification.mark-as-read`
-- `notification.deployment-notification.mark-as-unread`
-
-**Deployment Full Access User** (`deployment-full-access-user`):
-- `data.deployment.full-access`
-
-**Deployment Read Only User** (`deployment-read-only-user`):
-- `data.deployment.read-only-access`
-
-**Deployment Viewer** (`deployment-viewer`):
-- `data.cpusize.list`
-- `data.deployment.get`
-- `data.deployment.list`
-- `data.deploymentfeatures.get`
-- `data.deploymentmodel.list`
-- `data.deploymentprice.calculate`
-- `data.diskperformance.list`
-- `data.limits.get`
-- `data.nodesize.list`
-- `data.presets.list`
-- `monitoring.metrics.get`
-- `notification.deployment-notification.list`
-- `notification.deployment-notification.mark-as-read`
-- `notification.deployment-notification.mark-as-unread`
-
-**Deployment Profile Viewer** (`deploymentprofile-viewer`):
-- `deploymentprofile.deploymentprofile.list`
-
-**Example Datasets Viewer** (`exampledataset-viewer`):
-- `example.exampledataset.get`
-- `example.exampledataset.list`
-
-**Example Dataset Installation Administrator** (`exampledatasetinstallation-admin`):
-- `example.exampledatasetinstallation.create`
-- `example.exampledatasetinstallation.delete`
-- `example.exampledatasetinstallation.get`
-- `example.exampledatasetinstallation.list`
-- `example.exampledatasetinstallation.update`
-
-**Example Dataset Installation Viewer** (`exampledatasetinstallation-viewer`):
-- `example.exampledatasetinstallation.get`
-- `example.exampledatasetinstallation.list`
-
-**Group Administrator** (`group-admin`):
-- `iam.group.create`
-- `iam.group.delete`
-- `iam.group.get`
-- `iam.group.list`
-- `iam.group.update`
-
-**Group Viewer** (`group-viewer`):
-- `iam.group.get`
-- `iam.group.list`
-
-**IAM provider Administrator** (`iamprovider-admin`):
-- `security.iamprovider.create`
-- `security.iamprovider.delete`
-- `security.iamprovider.get`
-- `security.iamprovider.list`
-- `security.iamprovider.set-default`
-- `security.iamprovider.update`
-
-**IAM provider Viewer** (`iamprovider-viewer`):
-- `security.iamprovider.get`
-- `security.iamprovider.list`
-
-**IP allowlist Administrator** (`ipwhitelist-admin`):
-- `security.ipallowlist.create`
-- `security.ipallowlist.delete`
-- `security.ipallowlist.get`
-- `security.ipallowlist.list`
-- `security.ipallowlist.update`
-
-**IP allowlist Viewer** (`ipwhitelist-viewer`):
-- `security.ipallowlist.get`
-- `security.ipallowlist.list`
-
-**Metrics Administrator** (`metrics-admin`):
-- `metrics.endpoint.get`
-- `metrics.token.create`
-- `metrics.token.delete`
-- `metrics.token.get`
-- `metrics.token.list`
-- `metrics.token.revoke`
-- `metrics.token.update`
-
-**Migration Administrator** (`migration-admin`):
-- `replication.deploymentmigration.create`
-- `replication.deploymentmigration.delete`
-- `replication.deploymentmigration.get`
-
-**MLServices Admin** (`mlservices-admin`):
-- `ml.mlservices.get`
-
-**Notebook Administrator** (`notebook-admin`):
-- `notebook.model.list`
-- `notebook.notebook.create`
-- `notebook.notebook.delete`
-- `notebook.notebook.get`
-- `notebook.notebook.list`
-- `notebook.notebook.pause`
-- `notebook.notebook.resume`
-- `notebook.notebook.update`
-
-**Notebook Executor** (`notebook-executor`):
-- `notebook.notebook.execute`
-
-**Notebook Viewer** (`notebook-viewer`):
-- `notebook.model.list`
-- `notebook.notebook.get`
-- `notebook.notebook.list`
-
-**Organization Administrator** (`organization-admin`):
-- `billing.organization.get`
-- `resourcemanager.organization-invite.create`
-- `resourcemanager.organization-invite.delete`
-- `resourcemanager.organization-invite.get`
-- `resourcemanager.organization-invite.list`
-- `resourcemanager.organization-invite.update`
-- `resourcemanager.organization.delete`
-- `resourcemanager.organization.get`
-- `resourcemanager.organization.update`
-
-**Organization Viewer** (`organization-viewer`):
-- `billing.organization.get`
-- `resourcemanager.organization-invite.get`
-- `resourcemanager.organization-invite.list`
-- `resourcemanager.organization.get`
-
-**Policy Administrator** (`policy-admin`):
-- `iam.policy.get`
-- `iam.policy.update`
-
-**Policy Viewer** (`policy-viewer`):
-- `iam.policy.get`
-
-**Prepaid Deployment Viewer** (`prepaid-deployment-viewer`):
-- `prepaid.prepaiddeployment.get`
-- `prepaid.prepaiddeployment.list`
-
-**Private Endpoint Service Administrator** (`privateendpointservice-admin`):
-- `network.privateendpointservice.create`
-- `network.privateendpointservice.get`
-- `network.privateendpointservice.get-by-deployment-id`
-- `network.privateendpointservice.get-feature`
-- `network.privateendpointservice.update`
-
-**Private Endpoint Service Viewer** (`privateendpointservice-viewer`):
-- `network.privateendpointservice.get`
-- `network.privateendpointservice.get-by-deployment-id`
-- `network.privateendpointservice.get-feature`
-
-**Project Administrator** (`project-admin`):
-- `resourcemanager.project.create`
-- `resourcemanager.project.delete`
-- `resourcemanager.project.get`
-- `resourcemanager.project.list`
-- `resourcemanager.project.update`
-
-**Project Viewer** (`project-viewer`):
-- `resourcemanager.project.get`
-- `resourcemanager.project.list`
-
-**Replication Administrator** (`replication-admin`):
-- `replication.deployment.clone-from-backup`
-- `replication.deploymentreplication.get`
-- `replication.deploymentreplication.update`
-- `replication.migration-forwarder.upgrade-connection`
-
-**Role Administrator** (`role-admin`):
-- `iam.role.create`
-- `iam.role.delete`
-- `iam.role.get`
-- `iam.role.list`
-- `iam.role.update`
-
-**Role Viewer** (`role-viewer`):
-- `iam.role.get`
-- `iam.role.list`
-
-**SCIM Administrator** (`scim-admin`):
-- `scim.user.add`
-- `scim.user.delete`
-- `scim.user.get`
-- `scim.user.list`
-- `scim.user.update`
-
-**User Administrator** (`user-admin`):
-- `iam.user.get-personal-data`
-- `iam.user.update`
-
-{{< /details >}}
-
-### How to create a custom role
-
-1. In the main navigation menu, click **Access Control**.
-2. On the **Roles** tab, click **New role**.
-3. Enter a name and optionally a description for the new role.
-4. Select the required permissions.
-5. Click **Create**.
-
-
-
-### How to view, edit or remove a custom role
-
-1. In the main navigation menu, click **Access Control**.
-2. On the **Roles** tab, click:
- - A role name or the **eye** icon in the **Actions** column to view the role.
- - The **pencil** icon in the **Actions** column to edit the role.
- You can also view a role and click the **Edit** button in the detail view.
- - The **recycle bin** icon to delete the role.
- You can also view a role and click the **Delete** button in the detail view.
-
-## Permissions
-
-Each operation on a resource requires zero or more **permissions** (zero
-meaning that authentication alone is sufficient).
-A permission is a constant string such as `resourcemanager.project.create`,
-following the schema `<api>.<kind>.<verb>`. For example,
-`resourcemanager.project.create` consists of the API `resourcemanager`, the
-kind `project`, and the verb `create`.
-
-Permissions are solely defined by the ArangoGraph API.
-
-{{% comment %}}
-Retrieved with the below command, with manual adjustments:
-oasisctl list permissions
-
-Note that if the tier is "internal", there is an `internal-dashboard` API that should be excluded in below list!
-{{% /comment %}}
-
-| API | Kind | Verbs
-|:--------------------|:-----------------------------|:-------------------------------------------
-| `audit` | `auditlogarchive` | `delete`, `get`, `list`
-| `audit` | `auditlogattachment` | `create`, `delete`, `get`
-| `audit` | `auditlogevents` | `get`
-| `audit` | `auditlogevent` | `delete`
-| `audit` | `auditlog` | `create`, `delete`, `get`, `list`, `set-default`, `test-https-post-destination`, `update`
-| `backup` | `backuppolicy` | `create`, `delete`, `get`, `list`, `update`
-| `backup` | `backup` | `copy`, `create`, `delete`, `download`, `get`, `list`, `restore`, `update`
-| `backup` | `feature` | `get`
-| `billing` | `config` | `get`, `set`
-| `billing` | `invoice` | `get`, `get-preliminary`, `get-statistics`, `list`
-| `billing` | `organization` | `get`
-| `billing` | `paymentmethod` | `create`, `delete`, `get`, `get-default`, `list`, `set-default`, `update`
-| `billing` | `paymentprovider` | `list`
-| `crypto` | `cacertificate` | `create`, `delete`, `get`, `list`, `set-default`, `update`
-| `dataloader` | `deployment` | `import`
-| `data` | `cpusize` | `list`
-| `data` | `deploymentcredentials` | `get`
-| `data` | `deploymentfeatures` | `get`
-| `data` | `deploymentmodel` | `list`
-| `data` | `deploymentprice` | `calculate`
-| `data` | `deployment` | `create`, `create-test-database`, `delete`, `full-access`, `get`, `list`, `pause`, `read-only-access`, `rebalance-shards`, `restore-backup`, `resume`, `rotate-server`, `update`, `update-scheduled-root-password-rotation`
-| `data` | `diskperformance` | `list`
-| `data` | `limits` | `get`
-| `data` | `nodesize` | `list`
-| `data` | `presets` | `list`
-| `deploymentprofile` | `deploymentprofile` | `list`
-| `example` | `exampledatasetinstallation` | `create`, `delete`, `get`, `list`, `update`
-| `example` | `exampledataset` | `get`, `list`
-| `iam` | `group` | `create`, `delete`, `get`, `list`, `update`
-| `iam` | `policy` | `get`, `update`
-| `iam` | `role` | `create`, `delete`, `get`, `list`, `update`
-| `iam` | `user` | `get-personal-data`, `update`
-| `metrics` | `endpoint` | `get`
-| `metrics` | `token` | `create`, `delete`, `get`, `list`, `revoke`, `update`
-| `ml` | `mlservices` | `get`
-| `monitoring` | `logs` | `get`
-| `monitoring` | `metrics` | `get`
-| `network` | `privateendpointservice` | `create`, `get`, `get-by-deployment-id`, `get-feature`, `update`
-| `notebook` | `model` | `list`
-| `notebook` | `notebook` | `create`, `delete`, `execute`, `get`, `list`, `pause`, `resume`, `update`
-| `notification` | `deployment-notification` | `list`, `mark-as-read`, `mark-as-unread`
-| `prepaid` | `prepaiddeployment` | `get`, `list`
-| `replication` | `deploymentmigration` | `create`, `delete`, `get`
-| `replication` | `deploymentreplication` | `get`, `update`
-| `replication` | `deployment` | `clone-from-backup`
-| `replication` | `migration-forwarder` | `upgrade-connection`
-| `resourcemanager` | `organization-invite` | `create`, `delete`, `get`, `list`, `update`
-| `resourcemanager` | `organization` | `delete`, `get`, `update`
-| `resourcemanager` | `project` | `create`, `delete`, `get`, `list`, `update`
-| `scim` | `user` | `add`, `delete`, `get`, `list`, `update`
-| `security` | `iamprovider` | `create`, `delete`, `get`, `list`, `set-default`, `update`
-| `security` | `ipallowlist` | `create`, `delete`, `get`, `list`, `update`
-
-### Permission inheritance
-
-Each resource (organization, project, deployment) has its own policy, but this does not mean that you have to
-repeat role bindings in all these policies.
-
-Once you assign a role to a user (or group of users) in a policy at one level,
-all the permissions of this role are inherited in lower levels -
-permissions are inherited downwards from an organization to its projects and
-from a project to its deployments.
-
-For more general permissions, which you want to be propagated to other levels,
-add a role for a user/group at the organization level.
-For example, if you bind the `deployment-viewer` role to user `John` in the
-organization policy, `John` will have the role permissions in all projects of
-that organization and all deployments of the projects.
-
-For more restrictive permissions, which you don't necessarily want to be
-propagated to other levels, add a role at the project or even deployment level.
-For example, if you bind the `deployment-viewer` role to user `John`
-in a project, `John` will have the role permissions in
-this project as well as in all the deployments of it, but not
-in other projects of the parent organization.
-
-**Inheritance example**
-
-- Let's assume you have a group called "Deployers" which includes users who deal with deployments.
-- Then you create a role "Deployment Viewer", containing
- `data.deployment.get` and `data.deployment.list` permissions.
-- You can now add a role binding of the "Deployers" group to the "Deployment Viewer" role.
-- If you add the binding to an organization policy, members of this group
- will be granted the defined permissions for the organization, all its projects and all its deployments.
-- If you add the role binding to a policy of project ABC, members of this group will be granted
- the defined permissions for project ABC only and its deployments, but not for
- other projects and their deployments.
-- If you add the role binding to a policy of deployment X, members of this
- group will be granted the defined permissions for deployment X only, and not
- any other deployment of the parent project or any other project of the organization.
-
-The "Deployment Viewer" role is effective for the following entities depending
-on which policy the binding is added to:
-
-| Role binding added to → Role effective on ↓ | Organization policy | Project ABC's policy | Deployment X's policy of project ABC |
-|:--------------------------------------------|:-------------------:|:--------------------:|:------------------------------------:|
-| Organization, its projects and deployments  | ✓ | — | — |
-| Project ABC and its deployments             | ✓ | ✓ | — |
-| Project DEF and its deployments             | ✓ | — | — |
-| Deployment X of project ABC                 | ✓ | ✓ | ✓ |
-| Deployment Y of project ABC                 | ✓ | ✓ | — |
-| Deployment Z of project DEF                 | ✓ | — | — |
-
-## Restricting access to organizations
-
-To enhance security, you can implement the following restrictions via [Oasisctl](../oasisctl/_index.md):
-
-1. Limit allowed authentication providers.
-2. Specify an allowed domain list.
-
-{{< info >}}
-Note that users who do not meet the restrictions will not be granted permissions for any resource in
-the organization. These users can still be members of the organization.
-{{< /info >}}
-
-Using the first option, you can limit which **authentication providers** are
-accepted for users trying to access an organization in ArangoGraph.
-The following commands are available to configure this option:
-
-- `oasisctl get organization authentication providers` - allows you to see which
- authentication providers are enabled for accessing a specific organization
-- `oasisctl update organization authentication providers` - allows you to update
- a list of authentication providers for an organization to which the
- authenticated user has access
-  - `--enable-github` - if set, allow access from user accounts authenticated via GitHub
- - `--enable-google` - if set, allow access from user accounts authenticated via Google
- - `--enable-username-password` - if set, allow access from user accounts
- authenticated via a username/password
-
-Using the second option, you can configure a **list of domains**, and only users
-with email addresses from the specified domains will be able to access an
-organization. The following commands are available to configure this option:
-
-- `oasisctl get organization email domain restrictions -o <organization-id>` -
-  allows you to see which domains are in the allowed list for a specific organization
-- `oasisctl update organization email domain restrictions -o <organization-id> --allowed-domain=<domain1> --allowed-domain=<domain2>` -
-  allows you to update the list of allowed domains for a specific organization
-- `oasisctl update organization email domain restrictions -o <organization-id> --allowed-domain=` -
-  allows you to reset the list and accept any domain for accessing a specific organization
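-
-As a hedged sketch (the organization ID `org123` and the domain `example.com`
-are placeholder values), a combined restriction setup could look like this:
-
-```
-# Only allow Google-authenticated users for the organization
-oasisctl update organization authentication providers -o org123 --enable-google
-
-# Only grant permissions to users with an @example.com email address
-oasisctl update organization email domain restrictions -o org123 --allowed-domain=example.com
-```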
-
-## Using an audit log
-
-{{< info >}}
-To enable the audit log feature, get in touch with the ArangoGraph team via **Request Help**, available in the left sidebar menu of the ArangoGraph Dashboard.
-{{< /info >}}
-
-To have a better overview of the events happening in your ArangoGraph organization,
-you can set up an audit log, which will track and log auditing information for you.
-The audit log is created at the organization level; you can then use the log for
-projects belonging to that organization.
-
-***To create an audit log***
-
-1. In the main navigation menu, click **Access Control** in the **Organization** section.
-2. Open the **Audit logs** tab and click the **New audit log** button.
-3. In the dialog, fill out the following settings:
-
- - **Name** - enter a name for your audit log.
- - **Description** - enter an optional description for your audit log.
- - **Destinations** - specify one or several destinations to which you want to
- upload the audit log. If you choose **Upload to cloud**, the log will be
- available on the **Audit logs** tab of your organization. To send the log
- entries to your custom destination, specify a destination URL with
- authentication parameters (the **HTTP destination** option).
-
- {{< info >}}
- The **Upload to cloud** option is not available for the free-to-try tier.
- {{< /info >}}
-
- - **Excluded topics** - select topics that will not be included in the log.
-  Note that some topics are excluded by default (for example, `audit-document`).
-
- {{< warning >}}
- Enabling the audit log for all events will have a negative impact on performance.
- {{< /warning >}}
-
- - **Confirmation** - confirm that logging auditing events increases the price of your deployments.
-
- 
-
-4. Click **Create** to add the audit log. You can now use it in the projects
- belonging to your organization.
diff --git a/site/content/3.10/arangograph/security-and-access-control/single-sign-on/_index.md b/site/content/3.10/arangograph/security-and-access-control/single-sign-on/_index.md
deleted file mode 100644
index 1144d59ebd..0000000000
--- a/site/content/3.10/arangograph/security-and-access-control/single-sign-on/_index.md
+++ /dev/null
@@ -1,94 +0,0 @@
----
-title: Single Sign-On (SSO) in ArangoGraph
-menuTitle: Single Sign-On
-weight: 10
-description: >-
- ArangoGraph supports **Single Sign-On** (SSO) authentication using
-  **Security Assertion Markup Language 2.0** (SAML 2.0)
----
-{{< info >}}
-To enable the Single Sign-On (SSO) feature, get in touch with the ArangoGraph
-team via **Request Help**, available in the left sidebar menu of the
-ArangoGraph Dashboard.
-{{< /info >}}
-
-## About SAML 2.0
-
-The Security Assertion Markup Language 2.0 (SAML 2.0) is an open standard created
-to provide cross-domain single sign-on (SSO). It allows you to authenticate in
-multiple web applications by using a single set of login credentials.
-
-SAML SSO works by transferring user authentication data from the identity
-provider (IdP) to the service provider (SP) through an exchange of digitally
-signed XML documents.
-
-## Configure SAML 2.0 using Okta
-
-You can enable SSO for your ArangoGraph organization using Okta as an Identity
-Provider (IdP). For more information about Okta, please refer to the
-[Okta Documentation](https://help.okta.com/en-us/Content/index.htm?cshid=csh-index).
-
-### Create the SAML app integration in Okta
-
-1. Sign in to your Okta account and select **Applications** from the left sidebar menu.
-2. Click **Create App Integration**.
-3. In the **Create a new app integration** dialog, select **SAML 2.0**.
-
- 
-4. In the **General Settings**, specify a name for your integration and click **Next**.
-
- 
-5. Configure the SAML settings:
- - For **Single sign-on URL**, use `https://auth.arangodb.com/login/callback?connection=ORG_ID`
- - For **Audience URI (SP Entity ID)**, use `urn:auth0:arangodb:ORG_ID`
-
- 
-
-6. Replace **ORG_ID** with your organization identifier from the
- ArangoGraph Dashboard. To find your organization ID, go to the **User Toolbar**
- in the top right corner, which is accessible from every view of the Dashboard,
- and click **My organizations**.
-
- If, for example, your organization ID is 14587062, here are the values you
- would use when configuring the SAML settings:
- - `https://auth.arangodb.com/login/callback?connection=14587062`
- - `urn:auth0:arangodb:14587062`
-
- 
-7. In the **Attribute Statements** section, add custom attributes as seen in the image below:
- - email: `user.email`
- - given_name: `user.firstName`
- - family_name: `user.lastName`
- - picture: `user.profileUrl`
-
- This step consists of a mapping between the ArangoGraph attribute names and
- Okta attribute names. The values of these attributes are automatically filled
- in based on the users list that is defined in Okta.
-
- 
-8. Click **Next**.
-9. In the **Configure feedback** section, select **I'm an Okta customer adding an internal app**.
-10. Click **Finish**. The SAML app integration is now created.
-
-### SAML Setup
-
-After creating the app integration, you must perform the SAML setup to finalize
-the SSO configuration.
-
-1. Go to the **SAML Signing Certificates** section, displayed under the **Sign On** tab.
-2. Click **View SAML setup instructions**.
-
- 
-3. The setup instructions include the following items:
- - **Identity Provider Single Sign-On URL**
- - **Identity Provider Issuer**
- - **X.509 Certificate**
-4. Copy the IdP settings, download the certificate using the
- **Download X.509 certificate** button, and share them with the ArangoGraph
- team via an ArangoGraph Support Ticket in order to complete the SSO
- configuration.
-
-{{< info >}}
-If you would like to enable SCIM provisioning in addition to the SSO SAML
-configuration, please refer to the [SCIM](scim-provisioning.md) documentation.
-{{< /info >}}
diff --git a/site/content/3.10/arangograph/security-and-access-control/single-sign-on/scim-provisioning.md b/site/content/3.10/arangograph/security-and-access-control/single-sign-on/scim-provisioning.md
deleted file mode 100644
index 8cf40b8009..0000000000
--- a/site/content/3.10/arangograph/security-and-access-control/single-sign-on/scim-provisioning.md
+++ /dev/null
@@ -1,76 +0,0 @@
----
-title: SCIM Provisioning
-menuTitle: SCIM Provisioning
-weight: 5
-description: >-
- How to enable SCIM provisioning with Okta for your ArangoGraph project
----
-ArangoGraph provides support to control and manage member access in
-ArangoGraph organizations with the
-**System for Cross-domain Identity Management** (SCIM) provisioning.
-This enables you to propagate to ArangoGraph any user access changes by using
-the dedicated API.
-
-{{< info >}}
-To enable the SCIM feature, get in touch with the ArangoGraph team via
-**Request Help**, available in the left sidebar menu of the ArangoGraph Dashboard.
-{{< /info >}}
-
-## About SCIM
-
-[SCIM](https://www.rfc-editor.org/rfc/rfc7644), or the System
-for Cross-domain Identity Management [specification](http://www.simplecloud.info/),
-is an open standard designed to manage user identity information.
-SCIM provides a defined schema for representing users, and a RESTful
-API to run CRUD operations on these user resources.
-
-The SCIM specification expects the following operations so that the SSO system
-can sync the information about user resources in real time:
-
-- `GET /Users` - List all users.
-- `GET /Users/:user_id` - Get details for a given user ID.
-- `POST /Users` - Invite a new user to ArangoGraph.
-- `PUT /Users/:user_id` - Update a given user ID.
-- `DELETE /Users/:user_id` - Delete a specified user ID.
-
-ArangoGraph organization administrators can generate an API key for a specific organization.
-The API token consists of a key and a secret. Using this key and secret as the
-Basic Authentication Header (Basic Auth) in SCIM provisioning, you can access the APIs and
-manage the user resources.
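-
-As a hedged sketch (with `<key>` and `<secret>` standing in for an actual API
-key ID and secret), listing the users of an organization could look like this:
-
-```
-# List all users via the SCIM API, passing the API key as Basic Auth credentials
-curl -u "<key>:<secret>" https://dashboard.arangodb.cloud/api/scim/v1/Users
-```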
-
-To learn how to generate a new API key in the ArangoGraph Dashboard, see the
-[API Keys](../../my-account.md#api-keys) section.
-
-{{< info >}}
-When creating an API key, it is required to select an organization from the
-list.
-{{< /info >}}
-
-## Enable SCIM provisioning in Okta
-
-To enable SCIM provisioning, you first need to create an SSO integration that
-supports the SCIM provisioning feature.
-
-1. To enable SCIM provisioning for your integration, go to the **General** tab.
-2. In the **App Settings** section, select **Enable SCIM provisioning**.
-3. Navigate to the **Provisioning** tab. The SCIM connection settings are
- displayed under **Settings > Integration**.
-4. Fill in the following fields:
- - For **SCIM connector base URL**, use `https://dashboard.arangodb.cloud/api/scim/v1`
- - For **Unique identifier field for users**, use `userName`
-5. For **Supported provisioning actions**, enable the following:
- - **Import New Users and Profile Updates**
- - **Push New Users**
- - **Push Profile Updates**
-6. From the **Authentication Mode** menu, select the **Basic Auth** option.
- To authenticate using this mode, you need to provide the username and password
- for the account that handles the SCIM actions - in this case ArangoGraph.
-7. Go to the ArangoGraph Dashboard and create a new API key ID and Secret.
-
- 
-
- Make sure to select one organization from the list and do not set any
- value in the **Time to live** field. For more information,
- see [How to create a new API key](../../my-account.md#how-to-create-a-new-api-key).
-8. Use these authentication tokens as username and password when using the
- **Basic Auth** mode and click **Save**.
diff --git a/site/content/3.10/arangograph/security-and-access-control/x-509-certificates.md b/site/content/3.10/arangograph/security-and-access-control/x-509-certificates.md
deleted file mode 100644
index 1ef13ef4e0..0000000000
--- a/site/content/3.10/arangograph/security-and-access-control/x-509-certificates.md
+++ /dev/null
@@ -1,179 +0,0 @@
----
-title: X.509 Certificates in ArangoGraph
-menuTitle: X.509 Certificates
-weight: 5
-description: >-
- X.509 certificates in ArangoGraph are utilized for encrypted remote administration.
- The communication with and between the servers of an ArangoGraph deployment is
- encrypted using the TLS protocol
----
-X.509 certificates are digital certificates that are used to verify the
-authenticity of a website, user, or organization using a public key infrastructure
-(PKI). They are used in various applications, including SSL/TLS encryption,
-which is the basis for HTTPS - the primary protocol for securing communication
-and data transfer over a network.
-
-The X.509 certificate format is a standard defined by the
-[International Telecommunication Union (ITU)](https://www.itu.int/en/Pages/default.aspx)
-and contains information such as the name of the certificate holder, the public
-key associated with the certificate, the certificate's issuer, and the
-certificate's expiration date. An X.509 certificate can be signed by a
-certificate authority (CA) or self-signed.
-
-ArangoGraph uses:
-- **well-known X.509 certificates** created by
-[Let's Encrypt](https://letsencrypt.org/)
-- **self-signed X.509 certificates** created by the ArangoGraph platform
-
-## Certificate chains
-
-A certificate chain, also called the chain of trust, is a hierarchical structure
-that links together a series of digital certificates. The trust in the chain is
-established by verifying the identity of the issuer of each certificate in the
-chain. The root of the chain is a trusted third-party, such as a certificate
-authority (CA). The CA issues a certificate to an organization, which in turn
-can issue certificates to servers and other entities.
-
-For example, when you visit a website with an SSL/TLS certificate, the browser
-checks the chain of trust to verify the authenticity of the digital certificate.
-The browser checks to see if the root certificate is trusted, and if it is, it
-trusts the chain of certificates that lead to the end-entity certificate.
-If any of the certificates in the chain are invalid, expired, or revoked, the
-browser does not trust the digital certificate.
-
-## X.509 certificates in ArangoGraph
-
-Each ArangoGraph deployment is accessible on different port numbers:
-- default ports `8529` and `443`
-- high port `18529`
-
-Each ArangoGraph Notebook is accessible on different port numbers:
-- default ports `8840` and `443`
-- high port `18840`
-
-Metrics are accessible on different port numbers:
-- default ports `8829` and `443`
-- high port `18829`
-
-The distinction between these port numbers is in the certificate used for the
-TLS connection.
-
-{{< info >}}
-The default ports (`8529` and `443`) always serve the well-known certificate.
-The [auto login to database UI](../deployments/_index.md#auto-login-to-database-ui)
-feature is only available on the `443` port and is enabled by default.
-{{< /info >}}
-
-### Well-known X.509 certificates
-
-**Well-known X.509 certificates** created by
-[Let's Encrypt](https://letsencrypt.org/) are used on the
-default ports, `8529` and `443`.
-
-This type of certificate has a lifetime of 5 years and is rotated automatically.
-It is recommended to use well-known certificates, as this eases access to a
-deployment in your browser.
-
-{{< info >}}
-The well-known certificate is a wildcard certificate and cannot contain
-Subject Alternative Names (SANs). To include a SAN field, which is needed
-for private endpoints running on Azure, please use the self-signed certificate
-option.
-{{< /info >}}
-
-### Self-signed X.509 certificates
-
-**Self-signed X.509 certificates** are used on the high ports, e.g. `18529`.
-This type of certificate has a lifetime of 1 year, and it is created by the
-ArangoGraph platform. It is also rotated automatically before the expiration
-date.
-
-{{< info >}}
-If you switch off the **Use well-known certificate** option in the
-certificate generation, both the default and the high port serve the same
-self-signed certificate.
-{{< /info >}}
-
-### Subject Alternative Name (SAN)
-
-The Subject Alternative Name (SAN) is an extension to the X.509 specification
-that allows you to specify additional host names for a single SSL certificate.
-
-When using [private endpoints](../deployments/private-endpoints.md),
-you can specify custom domain names. Note that these are added **only** to
-the self-signed certificate as Subject Alternative Name (SAN).
-
-## How to create a new certificate
-
-1. Click a project name in the **Projects** section of the main navigation.
-2. Click **Security**.
-3. In the **Certificates** section, click:
- - The **New certificate** button to create a new certificate.
- - A name or the **eye** icon in the **Actions** column to view a certificate.
- The dialog that opens provides commands for installing and uninstalling
- the certificate through a console.
- - The **pencil** icon to edit a certificate.
- You can also view a certificate and click the **Edit** button.
- - The **tag** icon to make the certificate the new default.
- - The **recycle bin** icon to delete a certificate.
-
-
-
-## How to install a certificate
-
-Certificates that have the **Use well-known certificate** option enabled do
-not need any installation and are supported by almost all web browsers
-automatically.
-
-When creating a self-signed certificate that has the **Use well-known certificate**
-option disabled, the certificate needs to be installed on your local machine as
-well. This operation varies between operating systems. To install a self-signed
-certificate on your local machine, open the certificate and follow the
-installation instructions.
-
-
-
-
-
-You can also extract the information from all certificates in the chain using the
-`openssl` tool.
-
-- For **well-known certificates**, run the following command:
-  ```
-  openssl s_client -showcerts -servername <123456abcdef>.arangodb.cloud -connect <123456abcdef>.arangodb.cloud:8529
-  ```
-- For **self-signed certificates**, run the following command:
-  ```
-  openssl s_client -showcerts -servername <123456abcdef>.arangodb.cloud -connect <123456abcdef>.arangodb.cloud:18529
-  ```
-
-`<123456abcdef>` is a placeholder that needs to be replaced with the
-unique ID that is part of your ArangoGraph deployment endpoint URL.
-
-## How to connect to your application
-
-[ArangoDB drivers](../../develop/drivers/_index.md), also called connectors, allow you to
-easily connect ArangoGraph deployments to your application.
-
-1. Navigate to **Deployments** and click the **View** button to show the
- deployment page.
-2. In the **Quick start** section, click the **Connecting drivers** button.
-3. Select your programming language, such as Go, Java, or Python.
-4. Follow the examples to connect a driver to your deployment. They include
- code examples on how to use certificates in your application.
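-
-As a hedged sketch (the endpoint is a placeholder), you can also verify basic
-connectivity to your deployment from the command line before wiring up a driver:
-
-```
-# Query the server version of a deployment, prompting for the password
-# of the given database user
-curl --user root https://<123456abcdef>.arangodb.cloud:8529/_api/version
-```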
-
-
-
-## Certificate Rotation
-
-Every certificate has a self-signed root certificate that is going to expire.
-When certificates that are used in existing deployments are about to expire,
-an automatic rotation of the certificates is triggered. This means that the
-certificate is cloned (all existing settings are copied over to a new certificate)
-and all affected deployments then start using the cloned certificate.
-
-Based on the type of certificate used, you may also need to install the new
-certificate on your local machine. For example, self-signed certificates require
-installation. To prevent any downtime, it is recommended to manually create a
-new certificate and apply the required changes prior to the expiration date.
diff --git a/site/content/3.10/components/arangodb-server/_index.md b/site/content/3.10/components/arangodb-server/_index.md
deleted file mode 100644
index 82da2f3a5f..0000000000
--- a/site/content/3.10/components/arangodb-server/_index.md
+++ /dev/null
@@ -1,21 +0,0 @@
----
-title: ArangoDB Server
-menuTitle: ArangoDB Server
-weight: 170
-description: >-
- The ArangoDB daemon (arangod) is the central server binary that can run in
- different modes for a variety of setups like single server and clusters
----
-The ArangoDB server is the core component of ArangoDB. The executable file to
-run it is named `arangod`. The `d` stands for daemon. A daemon is a long-running
-background process that answers requests for services.
-
-The server process serves the various client connections to the server via the
-TCP/HTTP protocol. It also provides a [web interface](../web-interface/_index.md).
-
-_arangod_ can run in different modes for a variety of setups like single server
-and clusters. The feature set differs between the [Community Edition](../../about-arangodb/features/community-edition.md)
-and [Enterprise Edition](../../about-arangodb/features/enterprise-edition.md).
-
-See [Administration](../../operations/administration/_index.md) for server configuration
-and [Deploy](../../deploy/_index.md) for operation mode details.
diff --git a/site/content/3.10/components/arangodb-server/ldap.md b/site/content/3.10/components/arangodb-server/ldap.md
deleted file mode 100644
index b773edf61e..0000000000
--- a/site/content/3.10/components/arangodb-server/ldap.md
+++ /dev/null
@@ -1,563 +0,0 @@
----
-title: ArangoDB Server LDAP Options
-menuTitle: LDAP
-weight: 10
-description: >-
- LDAP authentication options in the ArangoDB server
----
-{{< tag "ArangoDB Enterprise Edition" "ArangoGraph" >}}
-
-## Basic Concepts
-
-The basic idea is that one can keep the user authentication setup for
-an ArangoDB instance (single or cluster) outside of ArangoDB in an LDAP
-server. A crucial feature of this is that one can add and withdraw users
-and permissions by only changing the LDAP server and in particular
-without touching the ArangoDB instance. Changes are effective in
-ArangoDB within a few minutes.
-
-Since there are many different possible LDAP setups, we must support a
-variety of possibilities for authentication and authorization. Here is
-a short overview:
-
-To map ArangoDB user names to LDAP users there are two authentication
-methods called "simple" and "search". In the "simple" method the LDAP bind
-user is derived from the ArangoDB user name by prepending a prefix and
-appending a suffix. For example, a user "alice" could be mapped to the
-distinguished name `uid=alice,dc=arangodb,dc=com` to perform the LDAP
-bind and authentication.
-See [Simple authentication method](#simple-authentication-method)
-below for details and configuration options.
-
-In the "search" method there are two phases. In Phase 1 a generic
-read-only admin LDAP user account is used to bind to the LDAP server
-first and search for an LDAP user matching the ArangoDB user name. In
-Phase 2, the actual authentication is then performed against the LDAP
-user that was found in Phase 1. Both methods are sensible and are
-recommended for production use.
-See [Search authentication method](#search-authentication-method)
-below for details and configuration options.
-
-Once the user is authenticated, there are now two methods for
-authorization: (a) "roles attribute" and (b) "roles search".
-
-In method (a) ArangoDB acquires a list of roles the authenticated LDAP
-user has from the LDAP server. The actual access rights to databases
-and collections for these roles are configured in ArangoDB itself.
-Users effectively have the union of all access rights of all roles
-they have. This method is probably the most common one for production use
-cases. It combines the advantages of managing users and roles outside of
-ArangoDB in the LDAP server with the fine-grained access control within
-ArangoDB for the individual roles. See [Roles attribute](#roles-attribute)
-below for details about method (a) and for the associated configuration
-options.
-
-Method (b) is very similar and only differs from (a) in the way the
-actual list of roles of a user is derived from the LDAP server.
-See [Roles search](#roles-search) below for details about method (b)
-and for the associated configuration options.
-
-## Fundamental options
-
-The fundamental options for specifying how to access the LDAP server are
-the following:
-
- - `--ldap.enabled` this is a boolean option which must be set to
- `true` to activate the LDAP feature
- - `--ldap.server` is a string specifying the host name or IP address
- of the LDAP server
- - `--ldap.port` is an integer specifying the port the LDAP server is
- running on, the default is `389`
- - `--ldap.basedn` specifies the base distinguished name under which
- the search takes place (can alternatively be set via `--ldap.url`)
- - `--ldap.binddn` and `--ldap.bindpasswd` are distinguished name and
- password for a read-only LDAP user to which ArangoDB can bind to
- search the LDAP server. Note that it is necessary to configure these
- for both the "simple" and "search" authentication methods, since
- even in the "simple" method, ArangoDB occasionally has to refresh
- the authorization information from the LDAP server
- even if the user session persists and no new authentication is
- needed! It is, however, allowed to leave both empty, but then the
- LDAP server must be readable with anonymous access.
- - `--ldap.refresh-rate` is a floating point value in seconds. The
- default is 300, which means that ArangoDB refreshes the
- authorization information for authenticated users after at most 5
- minutes. This means that changes in the LDAP server like removed
- users or added or removed roles for a user are effective after
- at most 5 minutes.
-
-Note that the `--ldap.server` and `--ldap.port` options can
-alternatively be specified in the `--ldap.url` string together with
-other configuration options. For details, see [LDAP URLs](#ldap-urls) below.
-
-Here is an example on how to configure the connection to the LDAP server,
-with anonymous bind:
-
-```
---ldap.enabled=true \
---ldap.server=ldap.arangodb.com \
---ldap.basedn=dc=arangodb,dc=com
-```
-
-With this configuration ArangoDB binds anonymously to the LDAP server
-on host `ldap.arangodb.com` on the default port 389 and executes all searches
-under the base distinguished name `dc=arangodb,dc=com`.
-
-If we need a dedicated user to read from LDAP, here is an example:
-
-```
---ldap.enabled=true \
---ldap.server=ldap.arangodb.com \
---ldap.basedn=dc=arangodb,dc=com \
---ldap.binddn=uid=arangoadmin,dc=arangodb,dc=com \
---ldap.bindpasswd=supersecretpassword
-```
-
-The connection is identical but the searches are executed with the
-given distinguished name in `binddn`.
-
-Note that the given user (or the anonymous one) needs at least read access on
-all user objects to find them and, in the case of the roles search method,
-also read access on the objects storing the roles.
-
-Up to this point ArangoDB can now connect to a given LDAP server
-but it is not yet able to authenticate users properly with it.
-For this, pick one of the following two authentication methods.
-
-### LDAP URLs
-
-As an alternative, you can specify the values of multiple LDAP-related configuration
-options at once by specifying a single LDAP URL. Here is an example:
-
-```
---ldap.url ldap://ldap.arangodb.com:1234/dc=arangodb,dc=com?uid?sub
-```
-
-This one option has the combined effect of setting the following:
-
-```
---ldap.server=ldap.arangodb.com \
---ldap.port=1234 \
---ldap.basedn=dc=arangodb,dc=com \
---ldap.searchAttribute=uid \
---ldap.searchScope=sub
-```
-
-That is, the LDAP URL consists of the LDAP `server` and `port`, a `basedn`, a
-`searchAttribute`, and a `searchScope` which can be one of `base`, `one`, or
-`sub`. There is also the possibility to use the `ldaps` protocol as in:
-
-```
---ldap.url ldaps://ldap.arangodb.com:636/dc=arangodb,dc=com?uid?sub
-```
-
-This does exactly the same as the one above, except that it uses the
-LDAP over TLS protocol. This is a non-standard method which does not
-involve using the STARTTLS protocol. Note that this does not work in the
-Windows version! We suggest using the `ldap` protocol and STARTTLS
-as described in the next section.
-
-### TLS options
-
-{{< warning >}}
-TLS is not supported in the Windows version of ArangoDB!
-{{< /warning >}}
-
-To configure the usage of encrypted TLS to communicate with the LDAP server
-the following options are available:
-
-- `--ldap.tls`: the main switch to activate TLS. It can either be
-  `true` (use TLS) or `false` (do not use TLS). It is switched
-  off by default. If you switch this on and do not use the `ldaps`
-  protocol via the [LDAP URL](#ldap-urls), then ArangoDB
-  uses the `STARTTLS` protocol to initiate TLS. This is the
-  recommended approach.
-- `--ldap.tls-version`: the minimal TLS version that ArangoDB should accept.
- Available versions are `1.0`, `1.1` and `1.2`. The default is `1.2`. If
- your LDAP server does not support Version 1.2, you have to change
- this setting.
-- `--ldap.tls-cert-check-strategy`: strategy to validate the LDAP server
- certificate. Available strategies are `never`, `hard`,
- `demand`, `allow` and `try`. The default is `hard`.
-- `--ldap.tls-cacert-file`: a file path to one or more (concatenated)
-  certificate authority certificates in PEM format.
-  By default, no file path is configured. This certificate
-  is used to validate the server response.
-- `--ldap.tls-cacert-dir`: a directory path to certificate authority certificates in
-  [c_rehash](https://www.openssl.org/docs/man3.0/man1/c_rehash.html)
-  format. By default, no directory path is configured.
-
-Assuming the TLS CA certificate file for the server is located at
-`/path/to/certificate.pem`, here is an example on how to configure TLS:
-
-```
---ldap.tls true \
---ldap.tls-cacert-file /path/to/certificate.pem
-```
-
-You can use TLS with any of the following authentication mechanisms.
-
-### Secondary server options (`ldap2`)
-
-The `ldap.*` options configure the primary LDAP server. It is possible to
-configure a secondary server with the `ldap2.*` options to use it as a
-fail-over in case the primary server is not reachable, but also to
-let the primary server handle some users and the secondary one the others.
-
-Instead of `--ldap.<option>` you need to specify `--ldap2.<option>`.
-Authentication / authorization first checks the primary LDAP server.
-If this server cannot authenticate a user, it tries the secondary one.
-
-It is possible to specify a file containing all users that the primary
-LDAP server is handling by specifying the option `--ldap.responsible-for`.
-This file must contain the usernames line-by-line. This is also supported for
-the secondary server, which can be used to exclude certain users completely.
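-
-As a hedged sketch (host names and file paths are placeholders), a fail-over
-setup with responsibility files could look like this:
-
-```
---ldap.server=ldap1.arangodb.com \
---ldap.responsible-for=/path/to/primary-users.txt \
---ldap2.server=ldap2.arangodb.com \
---ldap2.responsible-for=/path/to/secondary-users.txt
-```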
-
-### Esoteric options
-
-The following options can be used to configure advanced options for LDAP
-connectivity:
-
-- `--ldap.serialized`: whether or not calls into the underlying LDAP library should be serialized.
- This option can be used to work around thread-unsafe LDAP library functionality.
-- `--ldap.serialize-timeout`: sets the timeout value that is used when waiting to enter the
- LDAP library call serialization lock. This is only meaningful when `--ldap.serialized` has been
- set to `true`.
-- `--ldap.retries`: number of tries to attempt a connection. Setting this to values greater than
- one will make ArangoDB retry to contact the LDAP server in case no connection can be made
- initially.
-
-Please note that some of the following options are platform-specific and may not work
-with all LDAP servers reliably:
-
-- `--ldap.restart`: whether or not the LDAP library should implicitly restart connections
-- `--ldap.referrals`: whether or not the LDAP library should implicitly chase referrals
-
-The following options can be used to adjust the LDAP configuration on Linux and macOS
-platforms only, but do not work on Windows:
-
-- `--ldap.debug`: turn on internal OpenLDAP library output (warning: prints to stdout).
-- `--ldap.timeout`: timeout value (in seconds) for synchronous LDAP API calls (a value of 0
- means default timeout).
-- `--ldap.network-timeout`: timeout value (in seconds) after which network operations
- following the initial connection return in case of no activity (a value of 0 means default timeout).
-- `--ldap.async-connect`: whether or not the connection to the LDAP library is done
- asynchronously.
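-
-As a hedged sketch (the values are illustrative only), some of these advanced
-options could be combined as follows:
-
-```
---ldap.serialized=true \
---ldap.retries=3 \
---ldap.network-timeout=10
-```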
-
-## Authentication methods
-
-In order to authenticate users in LDAP we have two options available.
-We need to pick exactly one of them.
-
-### Simple authentication method
-
-The simple authentication method is used if and only if both the
-`--ldap.prefix` and `--ldap.suffix` configuration options are specified
-and are non-empty. In all other cases the
-["search" authentication method](#search-authentication-method) is used.
-
-In the "simple" method the LDAP bind user is derived from the ArangoDB
-user name by prepending the value of the `--ldap.prefix` configuration
-option and by appending the value of the `--ldap.suffix` configuration
-option. For example, an ArangoDB user "alice" would be mapped to the
-distinguished name `uid=alice,dc=arangodb,dc=com` to perform the LDAP
-bind and authentication, if `--ldap.prefix` is set to `uid=` and
-`--ldap.suffix` is set to `,dc=arangodb,dc=com`.
-
-ArangoDB binds to the LDAP server and authenticates with the
-distinguished name and the password provided by the client. If
-the LDAP server successfully verifies the password then the user is
-authenticated.
-
-If you want to use this method add the following example to your
-ArangoDB configuration together with the fundamental configuration:
-
-```
---ldap.prefix uid= \
---ldap.suffix ,dc=arangodb,dc=com
-```
-
-This method authenticates an LDAP user with the distinguished name
-`{PREFIX}{USERNAME}{SUFFIX}`, in this case for the ArangoDB user `alice`.
-It searches for: `uid=alice,dc=arangodb,dc=com`.
-This distinguished name is used as `{USER}` for the roles later on.
-
-### Search authentication method
-
-The search authentication method is used if at least one of the two
-options `--ldap.prefix` and `--ldap.suffix` is empty or not specified.
-ArangoDB uses the LDAP user credentials given by the `--ldap.binddn` and
-`--ldap.bindpasswd` to perform a search for LDAP users.
-In this case, the values of the options `--ldap.basedn`,
-`--ldap.search-attribute`, `--ldap.search-filter` and `--ldap.search-scope`
-are used in the following way:
-
-- `--ldap.search-scope` is an LDAP search scope with possible values
- `base` (just search the base distinguished name),
- `sub` (recursive search under the base distinguished name) or
- `one` (search the base's immediate children) (default: `sub`)
-- `--ldap.search-filter` is an LDAP filter expression which limits the
- set of LDAP users being considered (default: `objectClass=*` which
- means all objects). The placeholder `{USER}` is replaced by the
- supplied username.
-- `--ldap.search-attribute` specifies the attribute in the user objects
- which is used to match the ArangoDB user name (default: `uid`)
-
-Here is an example on how to configure the search method.
-Assume we have users like the following stored in LDAP:
-
-```
-dn: uid=alice,dc=arangodb,dc=com
-uid: alice
-objectClass: inetOrgPerson
-objectClass: organizationalPerson
-objectClass: top
-objectClass: person
-```
-
-Here, `uid` is the username used in ArangoDB. If we only search
-for objects of type `person`, then we can add the following to our
-fundamental LDAP configuration:
-
-```
---ldap.search-attribute=uid \
---ldap.search-filter=objectClass=person
-```
-
-This uses the `sub` search scope by default and finds
-all `person` objects where the `uid` is equal to the given username.
-From these, the `dn` is extracted and used as `{USER}` in
-the roles later on.
-
-## Fetching roles for a user
-
-After authentication, the next step is to derive authorization
-information from the authenticated LDAP user.
-In order to fetch the roles and thereby the access rights
-for a user we again have two possible options and need to pick
-one of them. We can combine each authentication method
-with each role method.
-In any case, a user can have no role, one role, or multiple roles.
-If a user has no role, the user does not get any access
-to ArangoDB at all.
-If a user has multiple roles with different rights,
-then the rights are combined and the *strongest*
-right wins. Example:
-
-- `alice` has the roles `project-a` and `project-b`.
-- `project-a` has no access to collection `BData`.
-- `project-b` has `rw` access to collection `BData`,
-- hence `alice` has `rw` access on `BData`.
-
-Note that the actual database and collection access rights
-are configured in ArangoDB itself by roles in the users module.
-The role name is always prefixed with `:role:`, e.g.: `:role:project-a`
-and `:role:project-b` respectively. You can use the normal user
-permissions tools in the Web interface or `arangosh` to configure these.
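-
-As a hedged sketch (the database name `mydb` is a placeholder), granting such a
-role access rights from the command line could look like this:
-
-```
-# Grant the LDAP role "project-a" read/write access to the database mydb
-arangosh --javascript.execute-string \
-  'require("@arangodb/users").grantDatabase(":role:project-a", "mydb", "rw");'
-```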
-
-### Roles attribute
-
-The most important method for this is to read off the roles an LDAP
-user is associated with from an attribute in the LDAP user object.
-If the
-
-```
---ldap.roles-attribute-name
-```
-
-configuration option is set, then the value of that
-option is the name of the attribute being used.
-
-Here is the example to add to the overall configuration:
-
-```
---ldap.roles-attribute-name=role
-```
-
-If we have the user stored like the following in LDAP:
-
-```
-dn: uid=alice,dc=arangodb,dc=com
-uid: alice
-objectClass: inetOrgPerson
-objectClass: organizationalPerson
-objectClass: top
-objectClass: person
-role: project-a
-role: project-b
-```
-
-Then the request grants the roles `project-a` and `project-b`
-for the user `alice` after successful authentication,
-as they are stored within the `role` attribute on the user object.
-
-### Roles search
-
-An alternative method for authorization is to conduct a search in the
-LDAP server for LDAP objects representing roles a user has. If the
-
-```
---ldap.roles-search=<search-expression>
-```
-
-configuration option
-is given, then the string `{USER}` in `<search-expression>` is replaced
-with the distinguished name of the authenticated LDAP user and the
-resulting search expression is used to match distinguished names of
-LDAP objects representing roles of that user.
-
-Example:
-
-```
---ldap.roles-search '(&(objectClass=groupOfUniqueNames)(uniqueMember={USER}))'
-```
-
-After an LDAP user is found and authenticated as described in the
-authentication section above, the `{USER}` in the search expression
-is replaced by its distinguished name, e.g. `uid=alice,dc=arangodb,dc=com`,
-and thus with the above search expression the actual search expression
-ends up being:
-
-```
-(&(objectClass=groupOfUniqueNames)(uniqueMember=uid=alice,dc=arangodb,dc=com))
-```
-
-This search finds all objects of `groupOfUniqueNames` where
-at least one `uniqueMember` has the `dn` of `alice`.
-The list of results of that search would be the list of roles given by
-the values of the `dn` attributes of the found role objects.
-
-### Role transformations and filters
-
-For both of the above authorization methods there are further
-configuration options to tune the role lookup. In this section we
-describe these further options:
-
-- `--ldap.roles-include` can be used to specify a regular expression
- that is used to filter roles. Only roles that match the regular
- expression are used.
-
-- `--ldap.roles-exclude` can be used to specify a regular expression
- that is used to filter roles. Only roles that do not match the regular
- expression are used.
-
-- `--ldap.roles-transformation` can be used to specify a regular
- expression and replacement text as `/re/text/`. This regular
- expression is applied to the role name found. This is especially
- useful in the roles-search variant to extract the real role name
- out of the `dn` value.
-
-- `--ldap.superuser-role` can be used to specify the role associated
- with the superuser. Any user belonging to this role gains superuser
- status. This role is checked after applying the roles-transformation
- expression.
-
-Example:
-
-```
---ldap.roles-include "^arangodb"
-```
-
-This setting only considers roles that start with `arangodb`.
-
-```
---ldap.roles-exclude=disabled
-```
-
-This setting only considers roles that do not contain the word `disabled`.
-
-```
---ldap.superuser-role "arangodb-admin"
-```
-
-Anyone belonging to the group `arangodb-admin` becomes a superuser.
-
-The roles-transformation deserves a larger example. Assume we are using
-roles search and have stored roles in the following way:
-
-```
-dn: cn=project-a,dc=arangodb,dc=com
-objectClass: top
-objectClass: groupOfUniqueNames
-uniqueMember: uid=alice,dc=arangodb,dc=com
-uniqueMember: uid=bob,dc=arangodb,dc=com
-cn: project-a
-description: Internal project A
-
-dn: cn=project-b,dc=arangodb,dc=com
-objectClass: top
-objectClass: groupOfUniqueNames
-uniqueMember: uid=alice,dc=arangodb,dc=com
-uniqueMember: uid=charlie,dc=arangodb,dc=com
-cn: project-b
-description: External project B
-```
-
-In this case, we find `cn=project-a,dc=arangodb,dc=com` as one
-role of `alice`. However, we actually want to configure a role name
-`:role:project-a`, which is easier to read and maintain for our
-administrators.
-
-If we now apply the following transformation:
-
-```
---ldap.roles-transformation=/^cn=([^,]*),.*$/$1/
-```
-
-The regex extracts `project-a` and `project-b`, respectively, from the
-`dn` attribute.
-
-In combination with the `superuser-role`, we could make all
-`project-a` members superusers by using:
-
-```
---ldap.roles-transformation=/^cn=([^,]*),.*$/$1/ \
---ldap.superuser-role=project-a
-```
-
-## Complete configuration examples
-
-In this section we would like to present complete examples
-for a successful LDAP configuration of ArangoDB.
-All of the following are just combinations of the details described above.
-
-**Simple authentication with role-search, using anonymous LDAP user**
-
-This example connects to the LDAP server with an anonymous read-only
-user. We use the simple authentication mode (`prefix` + `suffix`)
-to authenticate users and apply a role search for `groupOfUniqueNames` objects
-where the user is a `uniqueMember`. Furthermore we extract only the `cn`
-out of the distinguished role name.
-
-```
---ldap.enabled=true \
---ldap.server=ldap.arangodb.com \
---ldap.basedn=dc=arangodb,dc=com \
---ldap.prefix uid= \
---ldap.suffix ,dc=arangodb,dc=com \
---ldap.roles-search '(&(objectClass=groupOfUniqueNames)(uniqueMember={USER}))' \
---ldap.roles-transformation=/^cn=([^,]*),.*$/$1/ \
---ldap.superuser-role=project-a
-```
-
-**Search authentication with roles attribute using LDAP admin user having TLS enabled**
-
-This example connects to the LDAP server with a given distinguished name of an
-admin user + password.
-Furthermore we activate TLS and give the certificate file to validate server responses.
-We use the search authentication searching for the `uid` attribute of `person` objects.
-These `person` objects have `role` attribute(s) containing the role(s) of a user.
-
-```
---ldap.enabled=true \
---ldap.server=ldap.arangodb.com \
---ldap.basedn=dc=arangodb,dc=com \
---ldap.binddn=uid=arangoadmin,dc=arangodb,dc=com \
---ldap.bindpasswd=supersecretpassword \
---ldap.tls true \
---ldap.tls-cacert-file /path/to/certificate.pem \
---ldap.search-attribute=uid \
---ldap.search-filter=objectClass=person \
---ldap.roles-attribute-name=role
-```
diff --git a/site/content/3.10/components/tools/_index.md b/site/content/3.10/components/tools/_index.md
deleted file mode 100644
index a0d260bac0..0000000000
--- a/site/content/3.10/components/tools/_index.md
+++ /dev/null
@@ -1,34 +0,0 @@
----
-title: Tools
-menuTitle: Tools
-weight: 180
-description: >-
- ArangoDB ships with command-line tools like for accessing server instances
- programmatically, deploying clusters, creating backups, and importing data
----
-A full ArangoDB installation package contains the [ArangoDB server](../arangodb-server/_index.md)
-(`arangod`) as well as the following client tools:
-
-| Executable name | Brief description |
-|-----------------|-------------------|
-| `arangosh` | [ArangoDB shell](arangodb-shell/_index.md). A client that implements a read-eval-print loop (REPL) and provides functions to access and administrate the ArangoDB server.
-| `arangodb` | [ArangoDB Starter](arangodb-starter/_index.md) for easy deployment of ArangoDB instances.
-| `arangodump` | Tool to [create backups](arangodump/_index.md) of an ArangoDB database.
-| `arangorestore` | Tool to [load backups](arangorestore/_index.md) back into an ArangoDB database.
-| `arangobackup` | Tool to [perform hot backup operations](arangobackup/_index.md) on an ArangoDB installation.
-| `arangoimport` | [Bulk importer](arangoimport/_index.md) for the ArangoDB server. It supports JSON and CSV.
-| `arangoexport` | [Bulk exporter](arangoexport/_index.md) for the ArangoDB server. It supports JSON, CSV and XML.
-| `arangobench` | [Benchmark and test tool](arangobench/_index.md). It can be used for performance and server function testing.
-| `arangovpack` | Utility to validate and [convert VelocyPack](arangovpack/_index.md) and JSON data.
-| `arangoinspect` | [Inspection tool](arangoinspect/_index.md) that gathers server setup information.
-
-A client installation package comes without the `arangod` server executable and
-the ArangoDB Starter.
-
-Additional tools which are available separately:
-
-| Name | Brief description |
-|-----------------|-------------------|
-| [Foxx CLI](foxx-cli/_index.md) | Command line tool for managing and developing Foxx services
-| [kube-arangodb](../../deploy/kubernetes.md) | Operators to manage Kubernetes deployments
-| [Oasisctl](../../arangograph/oasisctl/_index.md) | Command-line tool for managing the ArangoGraph Insights Platform
diff --git a/site/content/3.10/components/tools/arangodump/examples.md b/site/content/3.10/components/tools/arangodump/examples.md
deleted file mode 100644
index 1c3b95c6f4..0000000000
--- a/site/content/3.10/components/tools/arangodump/examples.md
+++ /dev/null
@@ -1,317 +0,0 @@
----
-title: _arangodump_ Examples
-menuTitle: Examples
-weight: 5
-description: ''
----
-_arangodump_ can be invoked from the command line by executing the following command:
-
-```
-arangodump --output-directory "dump"
-```
-
-This connects to an ArangoDB server and dumps all non-system collections from
-the default database (`_system`) into an output directory named `dump`.
-Invoking _arangodump_ fails if the output directory already exists. This is
-an intentional security measure to prevent you from accidentally overwriting already
-dumped data. If you are positive that you want to overwrite data in the output
-directory, you can use the parameter `--overwrite true` to confirm this:
-
-```
-arangodump --output-directory "dump" --overwrite true
-```
-
-_arangodump_ connects to the `_system` database by default using the default
-endpoint. To override the endpoint, or specify a different user, use one of the
-following startup options:
-
-- `--server.endpoint <endpoint>`: endpoint to connect to
-- `--server.username <username>`: username
-- `--server.password <password>`: password to use (omit this and you'll be prompted for the
-  password)
-- `--server.authentication <bool>`: whether or not to use authentication
-
-If you want to connect to a different database or dump all databases you can additionally
-use the following startup options:
-
-- `--all-databases true`: dump all databases (the user must have access to all
-  databases, and `--server.database` must not be specified)
-- `--server.database <database-name>`: name of the database to connect to
-
-Note that the specified user must have access to the databases.
-
-Here's an example of dumping data from a non-standard endpoint, using a dedicated
-[database name](../../../concepts/data-structure/databases.md#database-names):
-
-```
-arangodump \
- --server.endpoint tcp://192.168.173.13:8531 \
- --server.username backup \
- --server.database mydb \
- --output-directory "dump"
-```
-
-In contrast to the above call, `--server.database` must not be specified when dumping
-all databases using `--all-databases true`:
-
-```
-arangodump \
- --server.endpoint tcp://192.168.173.13:8531 \
- --server.username backup \
- --all-databases true \
- --output-directory "dump-multiple"
-```
-
-When finished, _arangodump_ prints out a summary line with some aggregate
-statistics about what it did, e.g.:
-
-```
-Processed 43 collection(s), wrote 408173500 byte(s) into datafiles, sent 88 batch(es)
-```
-
-Also, more than one endpoint can be provided, such as:
-
-```
-arangodump \
- --server.endpoint tcp://192.168.173.13:8531 \
- --server.endpoint tcp://192.168.173.13:8532 \
- --server.username backup \
- --all-databases true \
- --output-directory "dump-multiple"
-```
-
-By default, _arangodump_ dumps both structural information and documents from all
-non-system collections. To adjust this, there are the following command-line
-arguments:
-
-- `--dump-data <bool>`: set to `true` to include documents in the dump. Set to `false`
-  to exclude documents. The default value is `true`.
-- `--include-system-collections <bool>`: whether or not to include system collections
-  in the dump. The default value is `false`. **Set to _true_ if you are using named
-  graphs that you are interested in restoring.**
-
-For example, to only dump structural information of all collections (including system
-collections), use:
-
-```
-arangodump --dump-data false --include-system-collections true --output-directory "dump"
-```
-
-To restrict the dump to just specific collections, use the `--collection` option.
-It can be specified multiple times if required:
-
-```
-arangodump --collection myusers --collection myvalues --output-directory "dump"
-```
-
-Structural information for a collection is saved in files with the name pattern
-`<collection-name>.structure.json`. Each structure file contains a JSON object
-with these attributes:
-- `parameters`: contains the collection properties
-- `indexes`: contains the collection indexes
-
-Document data for a collection is saved in files with the name pattern
-`<collection-name>.data.json`. Each line in a data file is a document insertion/update or
-deletion marker, along with some metadata.
-
-## Cluster Backup
-
-The _arangodump_ tool supports sharding and can be used to backup data from a Cluster.
-Simply point it to one of the _Coordinators_ and it
-behaves exactly as described above, working on sharded collections
-in the Cluster.
-
-Please see the [Limitations](limitations.md).
-
-As above, the output is one structure description file and one data
-file per sharded collection. Note that the data in the data file is
-sorted first by shards and within each shard by ascending timestamp. The
-structural information of the collection contains the number of shards
-and the shard keys.
-
-Note that the version of the arangodump client tool needs to match the
-version of the ArangoDB server it connects to.
-
-### Dumping collections with sharding prototypes
-
-Collections may be created with the shard distribution identical to an existing
-prototypical collection (see [`distributeShardsLike`](../../../develop/javascript-api/@arangodb/db-object.md#db_createcollection-name--properties--type--options));
-i.e. shards are distributed in the very same pattern as in the prototype collection.
-Such collections cannot be dumped without the referenced prototype collection;
-otherwise, arangodump yields an error.
-
-```
-arangodump --collection clonedCollection --output-directory "dump"
-
-ERROR [f7ff5] {dump} An error occurred: Collection clonedCollection's shard distribution is based on that of collection prototypeCollection, which is not dumped along.
-```
-
-You need to dump the prototype collection as well:
-
-```
-arangodump --collection clonedCollection --collection prototypeCollection --output-directory "dump"
-
-...
-INFO [66c0e] {dump} Processed 2 collection(s) from 1 database(s) in 0.132990 s total time. Wrote 0 bytes into datafiles, sent 6 batch(es) in total.
-```
-
-## Encryption
-
-{{< tag "ArangoDB Enterprise Edition" "ArangoGraph" >}}
-
-You can encrypt dumps using an encryption keyfile, which must contain exactly 32
-bytes of data (required by the AES block cipher).
-
-The keyfile can be created by an external program, or, on Linux, by using a command
-like the following:
-
-```
-dd if=/dev/random bs=1 count=32 of=yourSecretKeyFile
-```
-
-For security reasons, it is best to create these keys offline (away from your
-database servers) and directly store them in your secret management
-tool.
-
-In order to create an encrypted backup, add the `--encryption.keyfile`
-option when invoking _arangodump_, in addition to any other option you
-are already using. The following example assumes that your secret key
-is stored in ~/SECRET-KEY:
-
-```
-arangodump --collection "secret-collection" dump --encryption.keyfile ~/SECRET-KEY
-```
-
-Note that _arangodump_ does not store the key anywhere. It is the responsibility
-of the user to find a safe place for the key. However, _arangodump_ stores
-the used encryption method in a file named `ENCRYPTION` in the dump directory.
-That way _arangorestore_ can later find out whether it is dealing with an
-encrypted dump or not.
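-
-The `ENCRYPTION` file simply records the encryption method, for example:
-
-```
-aes-256-ctr
-```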
-
-Trying to restore the encrypted dump without specifying the key fails
-and _arangorestore_ reports an error:
-
-```
-arangorestore --collection "secret-collection" dump --create-collection true
-...
-the dump data seems to be encrypted with aes-256-ctr, but no key information was specified to decrypt the dump
-it is recommended to specify either `--encryption.keyfile` or `--encryption.key-generator` when invoking arangorestore with an encrypted dump
-```
-
-It is required to use the exact same key when restoring the data. Again, this
-is done by providing the `--encryption.keyfile` parameter:
-
-```
-arangorestore --collection "secret-collection" dump --create-collection true --encryption.keyfile ~/SECRET-KEY
-```
-
-Using a different key leads to the backup being non-recoverable.
-
-Note that encrypted backups can be used together with the already existing
-RocksDB encryption-at-rest feature.
-
-## Compression
-
-`--compress-output`
-
-Data can optionally be dumped in a compressed format to save space on disk.
-The `--compress-output` option cannot be used together with [Encryption](#encryption).
-
-If compression is enabled, no `.data.json` files are written. Instead, the
-collection data gets compressed using the Gzip algorithm and for each collection
-a `.data.json.gz` file is written. Metadata files such as `.structure.json` and
-`.view.json` do not get compressed.
-
-```
-arangodump --output-directory "dump" --compress-output
-```
-
-Compressed dumps can be restored with _arangorestore_, which automatically
-detects whether the data is compressed or not based on the file extension.
-
-```
-arangorestore --input-directory "dump"
-```
-
-## Dump output format
-
-Introduced in: v3.8.0
-
-Since its inception, _arangodump_ wrapped each dumped document into an extra
-JSON envelope, as follows:
-
-```json
-{"type":2300,"key":"test","data":{"_key":"test","_rev":..., ...}}
-```
-
-This original dump format was useful when the MMFiles storage engine still
-existed, as that engine could use different `type` values in its datafiles.
-However, the RocksDB storage engine only uses `"type":2300` (document) when
-dumping data, so the JSON wrapper provides no further benefit except
-compatibility with older versions of ArangoDB.
-
-In case a dump taken with v3.8.0 or higher is known to never be used in older
-ArangoDB versions, the JSON envelopes can be turned off. The startup option
-`--envelope` controls this. The option defaults to `true`, meaning dumped
-documents are wrapped in envelopes, which makes new dumps compatible with
-older versions of ArangoDB.
-
-If that is not needed, the `--envelope` option can be set to `false`.
-In this case, the dump files only contain the raw documents, without any
-envelopes around them:
-
-```json
-{"_key":"test","_rev":..., ...}
-```
-
-Disabling the envelopes can **reduce dump sizes** a lot, especially if documents
-are small on average and the relative cost of the envelopes is high. Omitting
-the envelopes can also help to **save a bit on memory usage and bandwidth** for
-building up the dump results and sending them over the wire.
-
-As a bonus, turning off the envelopes turns _arangodump_ into a fast, concurrent
-JSONL exporter for one or multiple collections:
-
-```
-arangodump --collection "collection" --threads 8 --envelope false --compress-output false dump
-```
-
-The JSONL format is also supported by _arangoimport_ natively.
-
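-For example, a dump file created with `--envelope false` can be re-imported
-into a collection with a call like the following (the file and collection
-names are placeholders for illustration):
-
-```
-arangoimport --collection "collection" --file "dump/collection.data.json" --type jsonl
-```
-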
-{{< warning >}}
-Dumps created with the `--envelope false` setting cannot be restored into any
-ArangoDB versions older than v3.8.0!
-{{< /warning >}}
-
-## Threads
-
-_arangodump_ can use multiple threads for dumping database data in
-parallel. To speed up the dump of a database with multiple collections, it is
-often beneficial to increase the number of _arangodump_ threads.
-The number of threads can be controlled via the `--threads` option. Its default
-value was changed from `2` to the maximum of `2` and the number of available CPU cores.
-
-That is, the default value is determined dynamically based on the number of
-available CPU cores. If fewer than `3` CPU cores are available, a
-threads value of `2` is used. Otherwise, the threads value is set to the
-number of available CPU cores.
-
-For example:
-
-- If a system has 8 cores, then max(2,8) = 8, i.e. 8 threads are used.
-- If it has 1 core, then max(2,1) = 2, i.e. 2 threads are used.
-
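-To override the automatically determined value, the number of threads can be
-set explicitly, for example:
-
-```
-arangodump --threads 4 --output-directory "dump"
-```
-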
-_arangodump_ versions prior to v3.8.0 distribute dump jobs for individual
-collections to concurrent worker threads, which is optimal for dumping many
-collections of approximately the same size, but does not help for dumping few
-large collections or few large collections with many shards.
-
-Since v3.8.0, _arangodump_ can also dispatch dump jobs for individual shards of
-each collection, allowing higher parallelism if there are many shards to dump
-but only few collections. Keep in mind that even when concurrently dumping the
-data from multiple shards of the same collection in parallel, the individual
-shards' results are still written into a single result file for the collection.
-With a massive number of concurrent dump threads, some contention on that shared
-file should be expected. Also note that when dumping the data of multiple shards
-from the same collection, each thread's results are written to the result
-file in a non-deterministic order. This should not be a problem when restoring
-such a dump, as _arangorestore_ does not assume any order of input.
diff --git a/site/content/3.10/components/tools/arangodump/maskings.md b/site/content/3.10/components/tools/arangodump/maskings.md
deleted file mode 100644
index 032ea149b4..0000000000
--- a/site/content/3.10/components/tools/arangodump/maskings.md
+++ /dev/null
@@ -1,1050 +0,0 @@
----
-title: _arangodump_ Data Maskings
-menuTitle: Maskings
-weight: 15
-description: >-
-  `arangodump` supports obfuscating and redacting information when dumping,
-  allowing you to share dumps with third parties without exposing sensitive data
----
-The masking feature allows you to define how sensitive data shall be dumped.
-It is possible to exclude collections entirely, limit the dump to the
-structural information of a collection (name, indexes, sharding, etc.),
-or to obfuscate certain fields in the dump.
-
-You can make use of the feature by specifying a configuration file using the
-`--maskings` startup option when invoking `arangodump`.
-
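-For example, assuming the masking definitions are stored in a file called
-`maskings.json` (a file name chosen for illustration):
-
-```
-arangodump --maskings maskings.json --output-directory "dump"
-```
-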
-A JSON configuration file is used to define which collections and fields to mask
-and how. The general structure of the configuration file looks like this:
-
-```js
-{
-  "<collection-name>": {
-    "type": "<masking-type>",
-    "maskings": [ // if masking-type is "masked"
-      { "path": "<attr-path>", "type": "<masking-function>", ... }, // rule 1
-      { "path": "<attr-path>", "type": "<masking-function>", ... }, // rule 2
-      ...
-    ]
-  },
-  "<collection-name>": { ... },
-  "<collection-name>": { ... },
-  "*": { ... }
-}
-```
-
-At the top level, there is an object with collection names. The masking to be
-applied to the respective collection is defined by the `type` sub-attribute.
-If the `type` is `"masked"`, then a sibling `maskings` attribute is available
-to define rules for obfuscating documents.
-
-Using `"*"` as collection name defines a default behavior for collections
-not listed explicitly.
-
-## Masking Types
-
-`type` is a string describing how to mask the given collection.
-Possible values are:
-
-- `"exclude"`: the collection is ignored completely and not even the
- structure data is dumped.
-
-- `"structure"`: only the collection structure is dumped, but no data at all
-  (the file `<collection-name>.data.json` or `<collection-name>.data.json.gz`
- respectively is still created, but will not contain data).
-
-- `"masked"`: the collection structure and all data is dumped. However, the data
- is subject to obfuscation defined in the attribute `maskings`. It is an array
- of objects, with one object per masking rule. Each object needs at least a
- `path` and a `type` attribute to [define which field to mask](#path) and which
- [masking function](#masking-functions) to apply. Depending on the
- masking type, there may exist additional attributes to control the masking
- function behavior.
-
-- `"full"`: the collection structure and all data is dumped. No masking is
- applied to this collection at all.
-
-**Example**
-
-```json
-{
- "private": {
- "type": "exclude"
- },
-
- "temperature": {
- "type": "full"
- },
-
- "log": {
- "type": "structure"
- },
-
- "person": {
- "type": "masked",
- "maskings": [
- {
- "path": "name",
- "type": "xifyFront",
- "unmaskedLength": 2
- },
- {
- "path": ".security_id",
- "type": "xifyFront",
- "unmaskedLength": 2
- }
- ]
- }
-}
-```
-
-- The collection called _private_ is completely ignored.
-- Only the structure of the collection _log_ is dumped, but not the data itself.
-- The structure and data of the _temperature_ collection are dumped without any
- obfuscation of document attributes.
-- The collection _person_ is dumped completely but with maskings applied:
- - The _name_ field is masked if it occurs on the top-level.
- - It also masks fields with the name _security_id_ anywhere in the document.
- - The masking function is of type [_xifyFront_](#xify-front) in both cases.
-  The additional setting `unmaskedLength` is specific to _xifyFront_.
-- All additional collections that might exist in the targeted database are
-  ignored (like the collection _private_), as there is no attribute key
- `"*"` to specify a different default type for the remaining collections.
-
-### Masking vs. dump-data option
-
-_arangodump_ also supports a very coarse masking with the option
-`--dump-data false`, which leaves out all data for the dump.
-
-You can either use `--maskings` or `--dump-data false`, but not both.
-
-### Masking vs. collection option
-
-_arangodump_ also supports a very coarse masking with the option
-`--collection`. This restricts the collections that are
-dumped to the ones explicitly listed.
-
-It is possible to combine `--maskings` and `--collection`.
-This takes the intersection of exportable collections.
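-
-For example, the following call (again with a hypothetical `maskings.json`
-file) dumps only the `person` collection and applies the masking rules
-defined for it:
-
-```
-arangodump --collection person --maskings maskings.json --output-directory "dump"
-```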
-
-## Path
-
-`path` defines which field to obfuscate. There can only be a single
-path per masking rule, but an unlimited number of maskings per collection.
-
-```json
-{
- "collection1": {
- "type": "masked",
- "maskings": [
- {
- "path": "attr1",
- "type": "random"
- },
- {
- "path": "attr2",
- "type": "randomString"
- },
- ...
- ]
- },
- "collection2": {
- "type": "masked",
- "maskings": [
- {
- "path": "attr3",
- "type": "random"
- }
- ]
- },
- ...
-}
-```
-
-Top-level **system attributes** (`_key`, `_id`, `_rev`, `_from`, `_to`) are
-never masked.
-
-To mask a top-level attribute value, the path is simply the attribute
-name, for instance `"name"` to mask the value `"foobar"`:
-
-```json
-{
- "_key": "1234",
- "name": "foobar"
-}
-```
-
-The path to a nested attribute `name` with a top-level attribute `person`
-as its parent is `"person.name"` (here: `"foobar"`):
-
-```json
-{
- "_key": "1234",
- "person": {
- "name": "foobar"
- }
-}
-```
-
-Example masking definition:
-
-```json
-{
- "": {
- "type": "masked",
- "maskings": [
- {
- "path": "person.name",
- "type": ""
- }
- ]
- }
-}
-```
-
-If the path starts with a `.` then it matches the given attribute name at any
-nesting level. For example, `.name` matches all leaf attributes called `name`
-in the document. Leaf attributes are attributes whose value is `null`,
-`true`, `false`, or of data type `string`, `number` or `array`.
-That means, it matches `name` at the top level as well as at any nested level
-(e.g. `foo.bar.name`), but not nested objects themselves.
-
-On the other hand, `name` only matches leaf attributes
-at the top level. `person.name` matches the attribute `name` of a leaf
-in the top-level object `person`. If the value of `person.name` was itself
-an object, then the masking settings for this path would be ignored, because
-it is not a leaf attribute.
-
-If the attribute value is an **array** then the masking is applied to
-**all array elements individually**.
-
-The special path `*` matches **all** leaf nodes of a document.
-
-If you have an attribute key that contains a dot (like `{ "name.with.dots": … }`)
-or a top-level attribute with a single asterisk as full name (`{ "*": … }`)
-then you need to quote the name in ticks or backticks:
-
-- `"path": "´name.with.dots´"`
-- `` "path": "`name.with.dots`" ``
-- `"path": "´*´"`
-- `` "path": "`*`" ``
-
-**Example**
-
-The following configuration replaces the value of the `name`
-attribute with an "xxxx"-masked string:
-
-```json
-{
- "type": "xifyFront",
- "path": ".name",
- "unmaskedLength": 2
-}
-```
-
-The document:
-
-```json
-{
- "name": "top-level-name",
- "age": 42,
- "nicknames" : [ { "name": "hugo" }, "egon" ],
- "other": {
- "name": [ "emil", { "secret": "superman" } ]
- }
-}
-```
-
-… is changed as follows:
-
-```json
-{
- "name": "xxxxxxxxxxxxme",
- "age": 42,
- "nicknames" : [ { "name": "xxgo" }, "egon" ],
- "other": {
- "name": [ "xxil", { "secret": "superman" } ]
- }
-}
-```
-
-The values `"egon"` and `"superman"` are not replaced, because they
-are not contained in an attribute value of which the attribute name is
-`name`.
-
-### Nested objects and arrays
-
-If you specify a path and the attribute value is an array then the
-masking decision is applied to each element of the array as if this
-was the value of the attribute. This applies to arrays inside the array too.
-
-If the attribute value is an object, then it is ignored and the attribute
-does not get masked. To mask nested fields, specify the full path for each
-leaf attribute.
-
-{{< tip >}}
-If some documents have an attribute `mail` with a string as value, but other
-documents store a nested object under the same attribute name, then make sure
-to set up proper masking for the latter case as well: sub-attributes are not
-masked if there is only a masking configured for the attribute `mail`
-but not for its nested attributes.
-
-You can use the special path `"*"` to **match all leaf attributes** in the
-document.
-{{< /tip >}}
-
-**Examples**
-
-Masking `mail` with the _Xify Front_ function:
-
-```json
-{
- "": {
- "type": "masked",
- "maskings": [
- {
- "path": "mail",
- "type": "xifyFront"
- }
- ]
- }
-}
-```
-
-… converts this document:
-
-```json
-{
- "mail" : "mail address"
-}
-```
-
-… into:
-
-```json
-{
- "mail" : "xxil xxxxxxss"
-}
-```
-
-because `mail` is a leaf attribute. The document:
-
-```json
-{
- "mail" : [
- "address one",
- "address two",
- [
- "address three"
- ]
- ]
-}
-```
-
-… is converted into:
-
-```json
-{
- "mail" : [
- "xxxxxss xne",
- "xxxxxss xwo",
- [
- "xxxxxss xxxee"
- ]
- ]
-}
-```
-
-… because the masking is applied to each array element individually
-including the elements of the sub-array. The document:
-
-```json
-{
- "mail" : {
- "address" : "mail address"
- }
-}
-```
-
-… is not masked because `mail` is not a leaf attribute.
-To mask the mail address, you could use the paths `mail.address`
-or `.address` in the masking definition:
-
-```json
-{
- "": {
- "type": "masked",
- "maskings": [
- {
- "path": ".address",
- "type": "xifyFront"
- }
- ]
- }
-}
-```
-
-A catch-all `"path": "*"` would apply to the nested `address` attribute too,
-but it would mask all other string attributes as well, which may not be what
-you want. A syntax `"path": "mail.*"` to only match the sub-attributes of the
-top-level `mail` attribute is not supported.
-
-### Rule precedence
-
-Masking rules may overlap, for instance if you specify the same path multiple
-times, or if you define a rule for a specific field but also one which matches
-all leaf attributes of the same name.
-
-The precedence is determined by the order in which the rules are defined in the
-masking configuration file in such cases, giving priority to the first matching
-rule (i.e. the rule above the other ambiguous ones).
-
-```json
-{
- "": {
- "type": "masked",
- "maskings": [
- {
- "path": "address",
- "type": "xifyFront"
- },
- {
- "path": ".address",
- "type": "random"
- }
- ]
- }
-}
-```
-
-The above masking definition obfuscates the top-level attribute `address` with
-the `xifyFront` function, whereas all nested attributes with name `address`
-will use the `random` masking function. If the rules are defined in reverse
-order however, then all attributes called `address` are obfuscated using
-`random`. The second, overlapping rule is effectively ignored:
-
-```json
-{
- "": {
- "type": "masked",
- "maskings": [
- {
- "path": ".address",
- "type": "random"
- },
- {
- "path": "address",
- "type": "xifyFront"
- }
- ]
- }
-}
-```
-
-This behavior also applies to the catch-all path `"*"`, which means it should
-generally be placed below all other rules for a collection so that it is used
-for all unspecified attribute paths. Otherwise, all document attributes are
-processed by a single masking function, ignoring any other rules below it.
-
-```json
-{
- "": {
- "type": "masked",
- "maskings": [
- {
- "path": "address",
- "type": "random"
- },
- {
- "path": ".address",
- "type": "xifyFront"
- },
- {
- "path": "*",
- "type": "email"
- }
- ]
- }
-}
-```
-
-## Masking Functions
-
-{{< tag "ArangoDB Enterprise Edition" "ArangoGraph" >}}
-
-- [Xify Front](#xify-front)
-- [Zip](#zip)
-- [Datetime](#datetime)
-- [Integer Number](#integer-number)
-- [Decimal Number](#decimal-number)
-- [Credit Card Number](#credit-card-number)
-- [Phone Number](#phone-number)
-- [Email Address](#email-address)
-
-The masking functions:
-
-- [Random String](#random-string)
-- [Random](#random)
-
-… are available in the Community Edition as well as the Enterprise Edition.
-
-### Random String
-
-This masking type replaces the string values of matched attributes
-with an anonymized string. It is not guaranteed that the
-replacement string is of the same length. Attributes whose values are not
-strings are not modified.
-
-A hash of the original string is computed. If the original string is
-shorter than the hash, then the hash is used. This results in a longer
-replacement string. If the string is longer than the hash, then
-characters of the hash are repeated as many times as needed to reach the full
-original string length.
-
-Masking settings:
-
-- `path` (string): which field to mask
-- `type` (string): masking function name `"randomString"`
-
-**Example**
-
-```json
-{
- "path": ".name",
- "type": "randomString"
-}
-```
-
-The above masking setting applies to all leaf attributes named `name` at any level.
-A document like:
-
-```json
-{
- "_key" : "1234",
- "name" : [
- "My Name",
- {
- "other" : "Hallo Name"
- },
- [
- "Name One",
- "Name Two"
- ],
- true,
- false,
- null,
- 1.0,
- 1234,
- "This is a very long name"
- ],
- "deeply": {
- "nested": {
- "name": "John Doe",
- "not-a-name": "Pizza"
- }
- }
-}
-```
-
-… is converted to:
-
-```json
-{
- "_key": "1234",
- "name": [
- "+y5OQiYmp/o=",
- {
- "other": "Hallo Name"
- },
- [
- "ihCTrlsKKdk=",
- "yo/55hfla0U="
- ],
- true,
- false,
- null,
- 1.0,
- 1234,
- "hwjAfNe5BGw=hwjAfNe5BGw="
- ],
- "deeply": {
- "nested": {
- "name": "55fHctEM/wY=",
- "not-a-name": "Pizza"
- }
- }
-}
-```
-
-### Random
-
-This masking type substitutes leaf attribute values of all data types with
-random values of the same kind:
-
-- Strings are replaced with [random strings](#random-string).
-- Numbers are replaced with random integer or decimal numbers, depending on
- the original value (but not keeping sign or scientific notation).
- The generated numbers are between -1000 and 1000.
-- Booleans are randomly replaced with `true` or `false`.
-- `null` values remain `null`.
-
-Masking settings:
-
-- `path` (string): which field to mask
-- `type` (string): masking function name `"random"`
-
-**Examples**
-
-```json
-{
- "collection": {
- "type": "masked",
- "maskings": [
- {
- "path": "*",
- "type": "random"
- }
- ]
- }
-}
-```
-
-Using the above masking configuration, all leaf attributes of the documents in
-_collection_ would be randomized. A possible input document:
-
-```json
-{
- "_key" : "1121535",
- "_id" : "coll/1121535",
- "_rev" : "_Z3AKGjW--_",
- "nullValue" : null,
- "bool" : true,
- "int" : 1,
- "decimal" : 2.34,
- "string" : "hello",
- "array" : [
- null,
- false,
- true,
- 0,
- -123,
- 0.45,
- 6e7,
- -0.8e-3,
- "nine",
- "Lorem ipsum sit dolor amet.",
- [
- false,
- false
- ],
- {
- "obj" : "nested"
- }
- ]
-}
-```
-
-… could result in an output like this:
-
-```json
-{
- "_key": "1121535",
- "_id": "coll/1121535",
- "_rev": "_Z3AKGjW--_",
- "nullValue": null,
- "bool": false,
- "int": -900,
- "decimal": -4.27,
- "string": "etxfOC+K0HM=",
- "array": [
- null,
- true,
- false,
- 754,
- -692,
- 2.64,
- 834,
- 1.69,
- "NGf7NKGrMYw=",
- "G0czIlvaGw4=G0czIlvaGw4=G0c",
- [
- false,
- true
- ],
- {
- "obj": "eCGe36xiRho="
- }
- ]
-}
-```
-
-### Xify Front
-
-This masking type replaces the front characters with `x` and
-blanks. Alphanumeric characters, `_` and `-` are replaced by `x`,
-everything else is replaced by a blank.
-
-Masking settings:
-
-- `path` (string): which field to mask
-- `type` (string): masking function name `"xifyFront"`
-- `unmaskedLength` (number, _default: `2`_): how many characters to
-  leave as-is on the right-hand side of each word (as an integer value)
-- `hash` (bool, _default: `false`_): whether to append a hash value to the
- masked string to avoid possible unique constraint violations caused by
- the obfuscation
-- `seed` (integer, _default: `0`_): used as secret for computing the hash.
- A value of `0` means a random seed
-
-**Examples**
-
-```json
-{
- "": {
- "type": "masked",
- "maskings": [
- {
- "path": ".name",
- "type": "xifyFront",
- "unmaskedLength": 2
- }
- ]
- }
-}
-```
-
-This affects attributes with key `"name"` at any level by masking all
-alphanumeric characters of a word except the last two characters. Words of
-length 1 and 2 remain unmasked. If the attribute value is not a string but
-boolean or numeric, then the result is `"xxxx"` (fixed length).
-`null` values remain `null`.
-
-```json
-{
- "name": "This is a test!Do you agree?",
- "bool": true,
- "number": 1.23,
- "null": null
-}
-```
-
-… becomes:
-
-```json
-{
- "name": "xxis is a xxst Do xou xxxee ",
- "bool": "xxxx",
- "number": "xxxx",
- "null": null
-}
-```
-
-There is a catch: if you have an index on the attribute, the masking
-might distort the index efficiency or even cause errors in the case of a
-unique index.
-
-```json
-{
- "path": ".name",
- "type": "xifyFront",
- "unmaskedLength": 2,
- "hash": true
-}
-```
-
-This adds a hash at the end of the string.
-
-```
-"This is a test!Do you agree?"
-```
-
-… becomes
-
-```
-"xxis is a xxst Do xou xxxee NAATm8c9hVQ="
-```
-
-Note that the hash is based on a random secret that is different for
-each run. This avoids dictionary attacks, which could be used to guess
-values based on pre-computed dictionaries.
-
-If you need reproducible results, i.e. hashes that do not change between
-different runs of _arangodump_, you need to specify a secret as seed,
-a number which must not be `0`.
-
-```json
-{
- "path": ".name",
- "type": "xifyFront",
- "unmaskedLength": 2,
- "hash": true,
- "seed": 246781478647
-}
-```
-
-### Zip
-
-This masking type replaces a zip code with a random one.
-It uses the following rules:
-
-- If a character of the original zip code is a digit, it is replaced
- by a random digit.
-- If a character of the original zip code is a letter, it
- is replaced by a random letter keeping the case.
-- If the attribute value is not a string then the default value is used.
-
-Note that this generates random zip codes. Therefore there is a
-chance that the same zip code value is generated multiple times, which can
-cause unique constraint violations if a unique index is or will be
-used on the zip code attribute.
-
-Masking settings:
-
-- `path` (string): which field to mask
-- `type` (string): masking function name `"zip"`
-- `default` (string, _default: `"12345"`_): if the input field is not of
- data type `string`, then this value is used
-
-**Examples**
-
-```json
-{
- "path": ".code",
- "type": "zip",
-}
-```
-
-This replaces real zip codes stored in fields called `code` at any level
-with random ones. `"12345"` is used as fallback value.
-
-```json
-{
- "path": ".code",
- "type": "zip",
- "default": "abcdef"
-}
-```
-
-If the original zip code is:
-
-```
-50674
-```
-
-… it is replaced by e.g.:
-
-```
-98146
-```
-
-If the original zip code is:
-
-```
-SA34-EA
-```
-
-… it is replaced by e.g.:
-
-```
-OW91-JI
-```
-
-If the original zip code is `null`, `true`, `false` or a number, then the
-user-defined default value of `"abcdef"` is used.
-
-### Datetime
-
-This masking type replaces the value of the attribute with a random
-date between two configured dates in a customizable format.
-
-Masking settings:
-
-- `path` (string): which field to mask
-- `type` (string): masking function name `"datetime"`
-- `begin` (string, _default: `"1970-01-01T00:00:00.000"`_):
- earliest point in time to return. Date time string in ISO 8601 format.
-- `end` (string, _default: now_):
- latest point in time to return. Date time string in ISO 8601 format.
- In case a partial date time string is provided (e.g. `2010-06` without day
- and time) the earliest date and time is assumed (`2010-06-01T00:00:00.000`).
- The default value is the current system date and time.
-- `format` (string, _default: `""`_): the formatting string format is
- described in [DATE_FORMAT()](../../../aql/functions/date.md#date_format).
- If no format is specified, then the result is an empty string.
-
-**Example**
-
-```json
-{
- "path": "eventDate",
- "type": "datetime",
- "begin" : "2019-01-01",
- "end": "2019-12-31",
- "format": "%yyyy-%mm-%dd",
-}
-```
-
-The above example masks the field `eventDate` by returning a random date time
-string in the range of January 1st to December 31st, 2019, using a format
-like `2019-06-17`.
-
-### Integer Number
-
-This masking type replaces the value of the attribute with a random
-integer number. It replaces the value even if it is a string,
-Boolean, or `null`.
-
-Masking settings:
-
-- `path` (string): which field to mask
-- `type` (string): masking function name `"integer"`
-- `lower` (number, _default: `-100`_): smallest integer value to return
-- `upper` (number, _default: `100`_): largest integer value to return
-
-**Example**
-
-```json
-{
- "path": "count",
- "type": "integer",
- "lower" : -100,
- "upper": 100
-}
-```
-
-This masks the field `count` with a random number between
--100 and 100 (inclusive).
-
-### Decimal Number
-
-This masking type replaces the value of the attribute with a random
-floating point number. It replaces the value even if it is a string,
-Boolean, or `null`.
-
-Masking settings:
-
-- `path` (string): which field to mask
-- `type` (string): masking function name `"decimal"`
-- `lower` (number, _default: `-1`_): smallest floating point value to return
-- `upper` (number, _default: `1`_): largest floating point value to return
-- `scale` (number, _default: `2`_): maximal amount of digits in the
- decimal fraction part
-
-**Examples**
-
-```json
-{
- "path": "rating",
- "type": "decimal",
- "lower" : -0.3,
- "upper": 0.3
-}
-```
-
-This masks the field `rating` with a random floating point number between
--0.3 and +0.3 (inclusive). By default, the decimal has a scale of 2.
-That means, it has at most 2 digits after the dot.
-
-The configuration:
-
-```json
-{
- "path": "rating",
- "type": "decimal",
- "lower" : -0.3,
- "upper": 0.3,
- "scale": 3
-}
-```
-
-… generates numbers with at most 3 decimal digits.
-
-### Credit Card Number
-
-This masking type replaces the value of the attribute with a random
-credit card number (as an integer number).
-See [Luhn algorithm](https://en.wikipedia.org/wiki/Luhn_algorithm)
-for details.
-
-Masking settings:
-
-- `path` (string): which field to mask
-- `type` (string): masking function name `"creditCard"`
-
-**Example**
-
-```json
-{
- "path": "ccNumber",
- "type": "creditCard"
-}
-```
-
-This generates a random credit card number to mask field `ccNumber`,
-e.g. `4111111414443302`.
-
-### Phone Number
-
-This masking type replaces a phone number with a random one.
-It uses the following rules:
-
-- If a character of the original number is a digit
- it is replaced by a random digit.
-- If it is a letter it is replaced by a random letter.
-- All other characters are left unchanged.
-- If the attribute value is not a string it is replaced by the
- default value.
-
-Masking settings:
-
-- `path` (string): which field to mask
-- `type` (string): masking function name `"phone"`
-- `default` (string, _default: `"+1234567890"`_): if the input field
- is not of data type `string`, then this value is used
-
-**Examples**
-
-```json
-{
- "path": "phone.landline",
- "type": "phone"
-}
-```
-
-This replaces an existing phone number with a random one, for instance
-`"+31 66-77-88-xx"` might get substituted by `"+75 10-79-52-sb"`.
-
-```json
-{
- "path": "phone.landline",
- "type": "phone",
- "default": "+49 12345 123456789"
-}
-```
-
-This masks a phone number as before, but falls back to a different default
-phone number in case the input value is not a string.
-
-### Email Address
-
-This masking type takes an email address, computes a hash value and
-splits it into three equal parts `AAAA`, `BBBB`, and `CCCC`. The
-resulting email address is in the format `AAAA.BBBB@CCCC.invalid`.
-The hash is based on a random secret that is different for each run.
-
-Masking settings:
-
-- `path` (string): which field to mask
-- `type` (string): masking function name `"email"`
-
-**Example**
-
-```json
-{
- "path": ".email",
- "type": "email"
-}
-```
-
-This masks every leaf attribute `email` with a random email address
-similar to `"EHwG.3AOg@hGU=.invalid"`.
diff --git a/site/content/3.10/components/web-interface/_index.md b/site/content/3.10/components/web-interface/_index.md
deleted file mode 100644
index 12863abcb6..0000000000
--- a/site/content/3.10/components/web-interface/_index.md
+++ /dev/null
@@ -1,16 +0,0 @@
----
-title: Web Interface
-menuTitle: Web Interface
-weight: 175
-description: >-
- ArangoDB has a graphical user interface you can access with your browser
----
-The ArangoDB server (*arangod*) comes with a built-in web interface for
-administration. It lets you manage databases, collections, documents,
-users, graphs and more. You can also run and explain queries in a
-convenient way. Statistics and server status are provided as well.
-
-The web interface (also referred to as Web UI, frontend or *Aardvark*) can be accessed with a
-browser under the URL `http://localhost:8529` with default server settings.
diff --git a/site/content/3.10/components/web-interface/cluster.md b/site/content/3.10/components/web-interface/cluster.md
deleted file mode 100644
index 91ae4bd075..0000000000
--- a/site/content/3.10/components/web-interface/cluster.md
+++ /dev/null
@@ -1,78 +0,0 @@
----
-title: Cluster
-menuTitle: Cluster
-weight: 10
-description: ''
----
-The web interface differs for cluster deployments and single-server instances.
-Instead of a single [Dashboard](dashboard.md), there
-is a **CLUSTER** and a **NODES** section.
-
-Furthermore, the **REPLICATION** and **LOGS** sections are not available.
-You can access the logs of individual Coordinators and DB-Servers via the
-**NODES** section.
-
-The cluster section displays statistics about the general cluster performance.
-
-
-
-Statistics:
-
- - Available and missing Coordinators
- - Available and missing DB-Servers
- - Memory usage (percent)
- - Current connections
- - Data (bytes)
- - HTTP (bytes)
- - Average request time (seconds)
-
-## Nodes
-
-### Overview
-
-The overview shows available and missing Coordinators and DB-Servers.
-
-
-
-Functions:
-
-- Coordinator Dashboard: Clicking on a Coordinator opens a statistics dashboard.
-
-Information (Coordinator / DB-Servers):
-
-- Name
-- Endpoint
-- Last Heartbeat
-- Status
-- Health
-
-### Shards
-
-The shard section displays all available sharded collections.
-
-
-
-Functions:
-
-- Move Shard Leader: Clicking on the leader database of a shard opens a move shard dialog. The shard can be
-  transferred to all available DB-Servers, except the leading DB-Server and the available followers.
-- Move Shard Follower: Clicking on a follower database of a shard opens a move shard dialog. The shard can be
-  transferred to all available DB-Servers, except the leading DB-Server and the available followers.
-
-Information (collection):
-
-- Shard
-- Leader (green state: sync is complete)
-- Followers
-
-### Rebalance Shards
-
-The rebalance shards section displays a button for rebalancing shards.
-A new DB-Server does not have any shards yet. With the rebalance functionality,
-the cluster starts to rebalance shards, including onto empty DB-Servers.
-You can specify the maximum number of shards that can be moved in each
-operation by using the `--cluster.max-number-of-move-shards` startup option
-of _arangod_ (the default value is `10`).
-When the button is clicked, the number of scheduled move shard operations is
-shown, or it is displayed that no move operations have been scheduled if they
-are not necessary.
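-
-For example, to allow up to `20` move shard operations per rebalance operation
-(a value chosen for illustration), _arangod_ could be started with:
-
-```
-arangod --cluster.max-number-of-move-shards 20
-```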
diff --git a/site/content/3.10/components/web-interface/collections.md b/site/content/3.10/components/web-interface/collections.md
deleted file mode 100644
index d53138f83e..0000000000
--- a/site/content/3.10/components/web-interface/collections.md
+++ /dev/null
@@ -1,72 +0,0 @@
----
-title: Collections
-menuTitle: Collections
-weight: 15
-description: ''
----
-The collections section displays all available collections. From here you can
-create new collections and jump into a collection for details (click on a
-collection tile).
-
-
-
-Functions:
-
- - A: Toggle filter properties
- - B: Search collection by name
- - C: Filter properties
- - D: Create collection
- - H: Show collection details (click tile)
-
-Information:
-
- - E: Collection type
- - F: Collection state (unloaded, loaded, ...)
- - G: Collection name
-
-## Collection
-
-There are four view categories:
-
-1. Content:
- - Create a document
- - Delete a document
- - Filter documents
- - Download documents
- - Upload documents
-
-2. Indexes:
- - Create indexes
- - Delete indexes
-
-3. Info:
- - Detailed collection information and statistics
-
-4. Settings:
- - Configure name, journal size, index buckets, wait for sync
- - Delete collection
- - Truncate collection
- - Unload/Load collection
- - Save modified properties (name, journal size, index buckets, wait for sync)
-
-Additional information:
-
-Upload format:
-
-I. Line-wise
-
-```js
-{ "_key": "key1", ... }
-{ "_key": "key2", ... }
-```
-
-II. JSON documents in a list
-
-```js
-[
- { "_key": "key1", ... },
- { "_key": "key2", ... }
-]
-```
diff --git a/site/content/3.10/components/web-interface/dashboard.md b/site/content/3.10/components/web-interface/dashboard.md
deleted file mode 100644
index aac4f439ae..0000000000
--- a/site/content/3.10/components/web-interface/dashboard.md
+++ /dev/null
@@ -1,37 +0,0 @@
----
-title: Dashboard
-menuTitle: Dashboard
-weight: 5
-description: ''
----
-The **DASHBOARD** section provides statistics which are polled regularly from the
-ArangoDB server.
-
-
-
-There is a different interface for [Cluster](cluster.md) deployments.
-
-Statistics:
-
- - Requests per second
- - Request types
- - Number of client connections
- - Transfer size
- - Transfer size (distribution)
- - Average request time
- - Average request time (distribution)
-
-System Resources:
-
-- Number of threads
-- Memory
-- Virtual size
-- Major page faults
-- Used CPU time
-
-Replication:
-
-- Replication state
-- Totals
-- Ticks
-- Progress
diff --git a/site/content/3.10/components/web-interface/document.md b/site/content/3.10/components/web-interface/document.md
deleted file mode 100644
index 2ff9addb44..0000000000
--- a/site/content/3.10/components/web-interface/document.md
+++ /dev/null
@@ -1,21 +0,0 @@
----
-title: Document
-menuTitle: Document
-weight: 20
-description: ''
----
-The document section offers an editor which lets you edit documents and edges of a collection.
-
-
-
-Functions:
-
- - Edit document
- - Save document
- - Delete document
- - Switch between Tree/Code - Mode
- - Create a new document
-
-Information:
-
- - Displays the `_id`, `_rev`, and `_key` properties
diff --git a/site/content/3.10/components/web-interface/graphs.md b/site/content/3.10/components/web-interface/graphs.md
deleted file mode 100644
index 85e3affcc9..0000000000
--- a/site/content/3.10/components/web-interface/graphs.md
+++ /dev/null
@@ -1,82 +0,0 @@
----
-title: Graphs in the web interface
-menuTitle: Graphs
-weight: 30
-description: >-
- You can create and manage named graphs in the web interface, as well as
- visually explore graphs with the graph viewer
----
-The *Graphs* tab provides a viewer facility for graph data stored in ArangoDB.
-It allows browsing ArangoDB graphs stored in the *_graphs* system collection or
-a graph consisting of an arbitrary vertex and [edge collection](../../concepts/data-models.md#graph-model).
-
-
-
-Please note that the graph viewer requires canvas (optionally WebGL) support
-in your browser. Internet Explorer browsers older than version 9, in
-particular, are likely to not support this.
-
-## Graph Viewer
-
-
-
-Top Toolbar Functions:
-
-- Load full graph (Also nodes without connections will be drawn. Useful during graph modeling setup)
-- Take a graph screenshot
-- Start full screen mode
-- Open graph options menu
-
-Default Context Menu (mouse-click background):
-
-- Add a new node
-- Close visible context menu(s)
-
-Node Context Menu (mouse-click node):
-
-- Delete node
-- Edit node
-- Expand node (Show all bound edges)
-- Draw edge (Connect with another node)
-- Set as startnode (the graph re-renders, starting from the selected node with the given options (graph options menu))
-
-Edge Context Menu (mouse-click edge):
-
-- Edit edge
-- Delete edge
-
-Edge Highlighting (right-mouse-click node):
-
-- Highlight all edges connected to the node (right-clicking the background removes the highlighting)
-
-
-
-### Graph Viewer Options
-
-Graph Options Menu:
-
-- Startnode (string - a valid node ID or a space-separated list of IDs): The heart of your graph. Rendering and traversing starts from here. If empty, a random starting point is used.
-- Layout: Different graph layouting algorithms. No overlap (optimal: big graphs), Force layout (optimal: medium graphs), Fruchtermann (optimal: small to medium graphs).
-- Renderer: Canvas mode allows editing. WebGL currently offers only a display mode (a lot faster with many nodes/edges).
-- Search depth (number): The search depth, starting from your start node.
-- Limit (number): Limits the node count. If empty or zero, no limit is set.
-
-Nodes Options Menu:
-
-- Label (string): Nodes will be labeled by this attribute. If the node attribute is not found, no label will be displayed.
-- Add Collection Name: This appends the collection name to the label, if it exists.
-- Color By Collections: Should nodes be colorized by their collection? If enabled, node color and node color attribute will be ignored.
-- Color: Default node color.
-- Color Attribute (string): If an attribute is given, nodes will then be colorized by the attribute. This setting ignores the default node color if set.
-- Size By Connections: Should nodes be sized by their edge count? If enabled, the node sizing attribute will be ignored.
-- Sizing Attribute (number): Default node size. Numeric value > 0.
-
-Edges Options Menu:
-
-- Label (string): Edges will be labeled by this attribute. If the edge attribute is not found, no label will be displayed.
-- Add Collection Name: This appends the collection name to the label, if it exists.
-- Color By Collections: Should edges be colorized by their collection? If enabled, edge color and edge color attribute will be ignored.
-- Color: Default edge color.
-- Color Attribute (string): If an attribute is given, edges will then be colorized by the attribute. This setting ignores the default edge color if set.
-- Type: The renderer offers multiple types of rendering. They only differ in their display style, except for the type 'curved'. The curved type
-allows displaying more than one edge between two nodes.
diff --git a/site/content/3.10/components/web-interface/logs.md b/site/content/3.10/components/web-interface/logs.md
deleted file mode 100644
index f9ddcc007b..0000000000
--- a/site/content/3.10/components/web-interface/logs.md
+++ /dev/null
@@ -1,20 +0,0 @@
----
-title: Logs
-menuTitle: Logs
-weight: 45
-description: ''
----
-The logs section displays all available log entries. Log entries can be
-filtered by their log level.
-
-
-
-Functions:
-
- - Filter log entries by log level (all, info, error, warning, debug)
-
-Information:
-
- - Loglevel
- - Date
- - Message
diff --git a/site/content/3.10/components/web-interface/queries.md b/site/content/3.10/components/web-interface/queries.md
deleted file mode 100644
index c263e2e6b0..0000000000
--- a/site/content/3.10/components/web-interface/queries.md
+++ /dev/null
@@ -1,117 +0,0 @@
----
-title: Query View
-menuTitle: Queries
-weight: 25
-description: ''
----
-The query view offers you three different subviews:
-
-- Editor
-- Running Queries
-- Slow Query History
-
-## AQL Query Editor
-
-The web interface offers an AQL Query Editor:
-
-
-
-The editor is split into two parts, the query editor pane and the bind
-parameter pane.
-
-The left pane is your regular query input field, where you can edit and then
-execute or explain your queries. By default, the entered bind parameters are
-automatically recognized and shown in the bind parameter table in the right
-pane, where you can easily edit them.
-
-The input fields are equipped with type detection. This means you don't have to
-use quote marks around strings, just write them as-is. Numbers will be treated
-as numbers, *true* and *false* as booleans, and *null* as a null-type value. Square
-brackets can be used to define arrays, and curly braces for objects (keys and
-values have to be surrounded by double quotes). This will mostly be what you want.
-But if you want to force something to be treated as string, use quotation marks
-for the value:
-
-```js
-123 // interpreted as number
-"123" // interpreted as string
-
-["foo", "bar", 123, true] // interpreted as array
-['foo', 'bar', 123, true] // interpreted as string
-```
-
-If you are used to working with JSON, you may want to switch the bind parameter
-editor to JSON mode by clicking on the upper right toggle button. You can then
-edit the bind parameters in raw JSON format.
-
-### Custom Queries
-
-To save the current query, use the *Save* button in the top left corner of
-the editor or use the shortcut (see below).
-
-
-
-By pressing the *Queries* button in the top left corner of the editor you
-activate the custom queries view. Here you can select a previously stored custom
-query or one of our query examples.
-
-Click on a query title to get a code preview. In addition, there are action
-buttons to:
-
-- Copy to editor
-- Explain query
-- Run query
-- Delete query
-
-For the built-in example queries, there is only *Copy to editor* available.
-
-To export or import queries to and from JSON you can use the buttons on the
-right-hand side.
-
-### Result
-
-
-
-Each query you execute or explain opens up a new result box, so you are able
-to fire up multiple queries and view their results at the same time. Every query
-result box gives you detailed query information and of course the query result
-itself. The result boxes can be dismissed individually, or altogether using the
-*Remove results* button. The toggle button in the top right corner of each box
-switches back and forth between the *Result* and *AQL* query with bind parameters.
-
-### Spotlight
-
-
-
-The spotlight feature opens up a modal view. There you can find all AQL keywords,
-AQL functions and collections (filtered by their type) to help you to be more
-productive in writing your queries. Spotlight can be opened by the magic wand icon
-in the toolbar or via shortcut (see below).
-
-### AQL Editor Shortcuts
-
-| Key combination | Action |
-|:----------------|:-------|
-| `Ctrl`/`Cmd` + `Return` | Execute query
-| `Ctrl`/`Cmd` + `Alt` + `Return` | Execute selected query
-| `Ctrl`/`Cmd` + `Shift` + `Return` | Explain query
-| `Ctrl`/`Cmd` + `Shift` + `S` | Save query
-| `Ctrl`/`Cmd` + `Shift` + `C` | Toggle comments
-| `Ctrl`/`Cmd` + `Z` | Undo last change
-| `Ctrl`/`Cmd` + `Shift` + `Z` | Redo last change
-| `Shift` + `Alt` + `Up` | Increase font size
-| `Shift` + `Alt` + `Down` | Decrease font size
-| `Ctrl` + `Space` | Open up the spotlight search
-
-## Running Queries
-
-
-
-The *Running Queries* tab gives you a compact overview of all running queries.
-By clicking the red minus button, you can abort the execution of a running query.
-
-## Slow Query History
-
-
-
-The *Slow Query History* tab gives you a compact overview of all past slow queries.
diff --git a/site/content/3.10/components/web-interface/services.md b/site/content/3.10/components/web-interface/services.md
deleted file mode 100644
index 3bae62eb84..0000000000
--- a/site/content/3.10/components/web-interface/services.md
+++ /dev/null
@@ -1,50 +0,0 @@
----
-title: Services
-menuTitle: Services
-weight: 35
-description: ''
----
-The services section displays all installed Foxx applications. You can create new services
-or go into a detailed view of a chosen service.
-
-
-
-## Create Service
-
-There are four different possibilities to create a new service:
-
-1. Create a service via a zip file
-2. Create a service via a GitHub repository
-3. Create a service via the official ArangoDB store
-4. Create a blank service from scratch
-
-
-
-## Service View
-
-This section offers detailed information about a specific service.
-
-
-
-There are four view categories:
-
-1. Info:
- - Displays name, short description, license, version, mode (production, development)
- - Offers a button to go to the services interface (if available)
-
-2. API:
- - Display API as SwaggerUI
- - Display API as RAW JSON
-
-3. Readme:
-   - Displays the service's manual (if available)
-
-4. Settings:
- - Download service as zip file
- - Run service tests (if available)
- - Run service scripts (if available)
- - Configure dependencies (if available)
- - Change service parameters (if available)
- - Change mode (production, development)
- - Replace the service
- - Delete the service
diff --git a/site/content/3.10/components/web-interface/users.md b/site/content/3.10/components/web-interface/users.md
deleted file mode 100644
index 3ecc4fc927..0000000000
--- a/site/content/3.10/components/web-interface/users.md
+++ /dev/null
@@ -1,40 +0,0 @@
----
-title: Managing Users in the Web Interface
-menuTitle: Users
-weight: 40
-description: ''
----
-ArangoDB users are globally stored in the `_system` database and can only be
-managed while logged on to this database. There you can find the *Users* section:
-
-
-
-## General
-
-Select a user to bring up the *General* tab with the username, name and active
-status, as well as options to delete the user or change the password.
-
-
-
-## Permissions
-
-Select a user and go to the *Permissions* tab. You will see a list of databases
-and their corresponding database access level for that user.
-
-
-
-Please note that the server access level follows from the access level on
-the `_system` database. Furthermore, the default database access level
-for this user appears in the artificial row with the database name `*`.
-
-Below this table is another one for the collection category access
-levels. At first, it shows the list of databases, too. If you click on a
-database, the list of collections in that database opens and you
-can see the defined collection access levels for each collection of that
-database (they can all be unselected, which means that nothing is
-explicitly set). The default access levels for this user and database
-appear in the artificial row with the collection name `*`.
-
-{{< info >}}
-Also see [Managing Users](../../operations/administration/user-management/_index.md) about access levels.
-{{< /info >}}
diff --git a/site/content/3.10/data-science/_index.md b/site/content/3.10/data-science/_index.md
deleted file mode 100644
index c6182c0cc9..0000000000
--- a/site/content/3.10/data-science/_index.md
+++ /dev/null
@@ -1,141 +0,0 @@
----
-title: Data Science
-menuTitle: Data Science
-weight: 115
-description: >-
- ArangoDB lets you apply analytics and machine learning to graph data at scale
-aliases:
- - data-science/overview
----
-ArangoDB's Graph Analytics and GraphML capabilities provide various solutions
-in data science and data analytics. Multiple data science personas within the
-engineering space can make use of ArangoDB's set of tools and technologies that
-enable analytics and machine learning on graph data.
-
-ArangoDB, as the foundation for GraphML, comes with the following key features:
-
-- **Scalable**: designed to support true scalability with high performance for
- enterprise use cases.
-- **Simple Ingestion**: easy integration in existing data infrastructure with
- connectors to all leading data processing and data ecosystems.
-- **Open Source**: extensibility and community.
-- **NLP Support**: built-in text processing, search, and similarity ranking.
-
-
-
-## Graph Analytics vs. GraphML
-
-This section classifies the complexity of the questions a graph can answer -
-from a simple query that shows the path that goes from one node
-to another, to more complex tasks like node classification,
-link prediction, and graph classification.
-
-### Graph Query
-
-When running a query with AQL on a graph, the query goes from a vertex to an edge,
-and then the edge indicates what the next connected vertex will be.
-
-Graph queries can answer questions like _**Who can introduce me to person X**_?
-
-
-
-### Graph Analytics
-
-Graph analytics or graph algorithms is what you run on a graph if you want to
-know aggregate information about your graph, while analyzing the entire graph.
-
-Graph analytics can answer questions like _**Who are the most connected persons**_?
-
-
-
-### GraphML
-
-When applying machine learning on a graph, you can predict connections, get
-better product recommendations, and also classify vertices, edges, and graphs.
-
-GraphML can answer questions like:
-- _**Is there a connection between person X and person Y?**_
-- _**Will a customer churn?**_
-- _**Is this particular transaction anomalous?**_
-
-
-
-## Use Cases
-
-This section contains an overview of different use cases where Graph Analytics
-and GraphML can be applied.
-
-### Graph Analytics
-
-Graph Analytics is applicable in various fields such as marketing, fraud detection, supply chain,
-product recommendations, drug development, law enforcement, and cybersecurity.
-
-Graph Analytics uses an unsupervised
-learning method based on algorithms that perform analytical processing
-directly on graphs stored in ArangoDB. The
-[Distributed Iterative Graph Processing (Pregel)](pregel/_index.md)
-is intended to help you gain analytical insights into
-your data, without having to use external processing systems.
-
-ArangoDB includes the following graph algorithms:
-- [Page Rank](pregel/algorithms.md#pagerank): used for ranking documents in a graph
- search/traversal.
-- [Single-Source Shortest Path](pregel/algorithms.md#single-source-shortest-path): calculates
- the shortest path length between the source and all other vertices.
- For example, _How to get from a to b_?
-- [Hyperlink-Induced Topic Search (HITS)](pregel/algorithms.md#hyperlink-induced-topic-search-hits):
- a link analysis algorithm that rates web pages.
-- [Vertex Centrality](pregel/algorithms.md#vertex-centrality): identifies the most important
- nodes in a graph. For example, _Who are the influencers in a social network?_
-- [Community Detection](pregel/algorithms.md#community-detection): identifies distinct subgroups
- within a community structure.
-
-### GraphML
-
-GraphML capabilities, by using more of your data, can outperform conventional
-deep learning methods and **solve high-computational-complexity graph problems**,
-such as:
-- Drug discovery, repurposing, and predicting adverse effects.
-- Personalized product/service recommendation.
-- Supply chain and logistics.
-
-With GraphML, you can also **predict relationships and structures**, such as:
-- Predict molecules for treating diseases (precision medicine).
-- Predict fraudulent behavior, credit risk, and purchases of products or services.
-- Predict relationships among customers and accounts.
-
-ArangoDB uses well-known GraphML frameworks like
-[Deep Graph Library](https://www.dgl.ai)
-and [PyTorch Geometric](https://pytorch-geometric.readthedocs.io/en/latest/)
-and connects to these external machine learning libraries. When coupled with
-ArangoDB, you are essentially integrating them with your graph dataset.
-
-## Example: ArangoFlix
-
-ArangoFlix is a complete movie recommendation application that predicts missing
-links between a user and the movies they have not watched yet.
-
-This [interactive tutorial](https://colab.research.google.com/github/arangodb/interactive_tutorials/blob/master/notebooks/Integrate_ArangoDB_with_PyG.ipynb)
-demonstrates how to integrate ArangoDB with PyTorch Geometric to
-build recommendation systems using Graph Neural Networks (GNNs).
-
-The full ArangoFlix demo website is accessible from the ArangoGraph Insights Platform,
-the managed cloud for ArangoDB. You can open the demo website that connects to
-your running database from the **Examples** tab of your deployment.
-
-{{< tip >}}
-You can try out the ArangoGraph Insights Platform free of charge for 14 days.
-Sign up at [dashboard.arangodb.cloud](https://dashboard.arangodb.cloud/home?utm_source=docs&utm_medium=cluster_pages&utm_campaign=docs_traffic).
-{{< /tip >}}
-
-The ArangoFlix demo uses five different recommendation methods:
-- Content-Based using AQL
-- Collaborative Filtering using AQL
-- Content-Based using ML
-- Matrix Factorization
-- Graph Neural Networks
-
-
-
-The ArangoFlix website not only offers an example of how the user recommendations might
-look in real life, but it also provides information on a recommendation method,
-an AQL query, a custom graph visualization for each movie, and more.
diff --git a/site/content/3.10/data-science/arangograph-notebooks.md b/site/content/3.10/data-science/arangograph-notebooks.md
deleted file mode 100644
index 34ca9529be..0000000000
--- a/site/content/3.10/data-science/arangograph-notebooks.md
+++ /dev/null
@@ -1,22 +0,0 @@
----
-title: ArangoGraph Notebooks
-menuTitle: ArangoGraph Notebooks
-weight: 130
-description: >-
- Colocated Jupyter Notebooks within the ArangoGraph Insights Platform
----
-{{< tip >}}
-ArangoGraph Notebooks don't include the ArangoGraphML services.
-To enable the ArangoGraphML services,
-[get in touch](https://www.arangodb.com/contact/)
-with the ArangoDB team.
-{{< /tip >}}
-
-The ArangoGraph Notebook is a JupyterLab notebook embedded in the
-[ArangoGraph Insights Platform](https://dashboard.arangodb.cloud/home?utm_source=docs&utm_medium=cluster_pages&utm_campaign=docs_traffic).
-The notebook integrates seamlessly with the platform,
-automatically connecting to ArangoGraph services and ArangoDB.
-This makes it much easier to leverage these resources without having
-to download any data locally or to remember user IDs, passwords, and endpoint URLs.
-
-For more information, see the [Notebooks](../arangograph/notebooks.md) documentation.
diff --git a/site/content/3.10/data-science/arangographml/_index.md b/site/content/3.10/data-science/arangographml/_index.md
deleted file mode 100644
index baa200deaa..0000000000
--- a/site/content/3.10/data-science/arangographml/_index.md
+++ /dev/null
@@ -1,115 +0,0 @@
----
-title: ArangoGraphML
-menuTitle: ArangoGraphML
-weight: 125
-description: >-
- Enterprise-ready, graph-powered machine learning as a cloud service or self-managed
-aliases:
- - graphml
----
-Traditional machine learning overlooks the connections and relationships
-between data points, which is where graph machine learning excels. However,
-accessibility to GraphML has been limited to sizable enterprises equipped with
-specialized teams of data scientists. ArangoGraphML, on the other hand,
-simplifies the utilization of GraphML, enabling a broader range of personas to
-extract profound insights from their data.
-
-## How GraphML works
-
-GraphML focuses on the utilization of neural networks specifically for
-graph-related tasks. It is well-suited for addressing vague or fuzzy problems
-and facilitating their resolution. The process involves incorporating a graph's
-topology (node and edge structure) and the node and edge characteristics and
-features to create a numerical representation known as an embedding.
-
-
-
-Graph Neural Networks (GNNs) are explicitly designed to learn meaningful
-numerical representations, or embeddings, for nodes and edges in a graph.
-
-By applying a series of steps, GNNs effectively create graph embeddings,
-which are numerical representations that encode the essential information
-about the nodes and edges in the graph. These embeddings can then be used
-for various tasks, such as node classification, link prediction, and
-graph-level classification, where the model can make predictions based on the
-learned patterns and relationships within the graph.
-
-
-
-It is no longer necessary to understand the complexities involved with graph
-machine learning, thanks to the accessibility of the ArangoML package.
-Solutions with ArangoGraphML only require input from a user about
-their data, and the ArangoGraphML managed service handles the rest.
-
-The platform comes preloaded with all the tools needed to prepare your graph
-for machine learning, high-accuracy training, and persisting predictions back
-to the database for application use.
-
-### Classification
-
-Node classification is a natural fit for graph databases as it can leverage
-existing graph analytics insights during model training. For instance, if you
-have performed some community detection, potentially using ArangoDB's built-in
-Pregel support, you can use these insights as inputs for graph machine learning.
-
-#### What is Node Classification?
-
-The goal of node classification is to categorize the nodes in a graph based on
-their neighborhood connections and characteristics in the graph. Based on the
-behaviors or patterns in the graph, the Graph Neural Network (GNN) will be able
-to learn what makes a node belong to a category.
-
-Node classification can be used to solve complex problems such as:
-- Entity Categorization
- - Email
- - Books
- - WebPage
- - Transaction
-- Social Networks
- - Events
- - Friends
- - Interests
-- BioPharmaceutical
- - Protein-protein interaction
- - Drug Categorization
- - Sequence grouping
-- Behavior
- - Fraud
- - Purchase/decision making
- - Anomaly
-
-Many use cases can be solved with node classification. Most challenges can be
-approached in multiple ways, which is why the ArangoGraphML node
-classification is only the first of many techniques to be introduced. You can
-sign up to get immediate access to our latest stable
-features and also try out other features included in the pipeline, such as
-embedding similarity or link prediction.
-
-For more information, [get in touch](https://www.arangodb.com/contact/)
-with the ArangoDB team.
-
-### Metrics and Compliance
-
-#### Training Performance
-
-Before using a model to provide predictions to your application, there needs
-to be a way to determine its level of accuracy. Additionally, a mechanism must
-be in place to ensure the experiments comply with auditor requirements.
-
-ArangoGraphML supports these objectives by storing all relevant training data
-and metrics in a metadata graph, which is only available to you and is never
-viewable by ArangoDB. This metagraph contains valuable training metrics such as
-average accuracy (the general metric for determining model performance), F1,
-Recall, Precision, and confusion matrix data. This graph links all experiments
-to the source data, feature generation activities, training runs, and prediction
-jobs. Having everything linked across the entire pipeline ensures that anything
-that could be considered associated with sensitive user data is logged and
-easily accessible at any time.
-
-### Security
-
-Each deployment that uses ArangoGraphML has an `arangopipe` database created,
-which houses all this information. Since the data lives with the deployment,
-it benefits from the ArangoGraph SOC 2 compliance and Enterprise security features.
-All ArangoGraphML services live alongside the ArangoGraph deployment and are only
-accessible within that organization.
\ No newline at end of file
diff --git a/site/content/3.10/data-science/arangographml/deploy.md b/site/content/3.10/data-science/arangographml/deploy.md
deleted file mode 100644
index 0d62cb12f6..0000000000
--- a/site/content/3.10/data-science/arangographml/deploy.md
+++ /dev/null
@@ -1,78 +0,0 @@
----
-title: Deploy ArangoGraphML
-menuTitle: Deploy
-weight: 5
-description: >-
- You can deploy ArangoGraphML in your own Kubernetes cluster or use the managed
- cloud service that comes with a ready-to-go, pre-configured environment
----
-
-## Managed cloud service versus self-managed
-
-ArangoDB offers two deployment options, tailored to suit diverse requirements
-and infrastructure preferences:
-- Managed cloud service via the [ArangoGraph Insights Platform](https://dashboard.arangodb.cloud/home?utm_source=docs&utm_medium=cluster_pages&utm_campaign=docs_traffic)
-- Self-managed solution via the [ArangoDB Kubernetes Operator](https://github.com/arangodb/kube-arangodb)
-
-### ArangoGraphML
-
-ArangoGraphML provides enterprise-ready Graph Machine Learning as a
-Cloud Service via Jupyter Notebooks that run on the
-[ArangoGraph Insights Platform](https://dashboard.arangodb.cloud/home?utm_source=docs&utm_medium=cluster_pages&utm_campaign=docs_traffic).
-
-{{< tip >}}
-To get access to ArangoGraphML services and packages,
-[get in touch](https://www.arangodb.com/contact/)
-with the ArangoDB team.
-{{< /tip >}}
-
-- **Accessible at all levels**
- - Low code UI
- - Notebooks
- - APIs
-- **Full usability**
- - MLOps lifecycle
- - Metrics
- - Metadata capture
- - Model management
-
-
-
-#### Setup
-
-The ArangoGraphML managed-service runs on the
-[ArangoGraph Insights Platform](https://dashboard.arangodb.cloud/home?utm_source=docs&utm_medium=cluster_pages&utm_campaign=docs_traffic).
-It offers a pre-configured environment where everything,
-including necessary components and configurations, comes preloaded. You don't
-need to set up or configure the infrastructure, and can immediately start using the
-GraphML functionalities.
-
-To summarize, all you need to do is:
-1. Sign up for an [ArangoGraph account](https://dashboard.arangodb.cloud/home?utm_source=docs&utm_medium=cluster_pages&utm_campaign=docs_traffic).
-2. Create a new [deployment in ArangoGraph](../../arangograph/deployments/_index.md#how-to-create-a-new-deployment).
-3. Start using the ArangoGraphML functionalities.
-
-### Self-managed ArangoGraphML
-
-{{< tag "ArangoDB Enterprise Edition" >}}
-
-The self-managed solution enables you to deploy and manage ArangoML within your
-Kubernetes cluster using the [ArangoDB Kubernetes Operator](https://github.com/arangodb/kube-arangodb).
-
-The self-managed package includes the same features as in ArangoGraphML.
-The primary distinction lies in the environment setup: with the self-managed
-solution, you have direct control over configuring your environment.
-
-#### Setup
-
-You can run ArangoGraphML in your Kubernetes
-cluster provided you already have a running `ArangoDeployment`. If you don't
-have one yet, consider checking the installation guide of the
-[ArangoDB Kubernetes Operator](https://arangodb.github.io/kube-arangodb/docs/using-the-operator.html)
-and the [ArangoDeployment Custom Resource](https://arangodb.github.io/kube-arangodb/docs/deployment-resource-reference.html)
-description.
-
-To start ArangoGraphML in your Kubernetes cluster, follow the instructions provided
-in the [ArangoMLExtension Custom Resource](https://arangodb.github.io/kube-arangodb/docs/mlextension-resource.html)
-description. Once the `CustomResource` has been created and the ArangoGraphML extension
-is ready, you can start using it.
\ No newline at end of file
diff --git a/site/content/3.10/data-science/arangographml/getting-started.md b/site/content/3.10/data-science/arangographml/getting-started.md
deleted file mode 100644
index 8742ec3aa0..0000000000
--- a/site/content/3.10/data-science/arangographml/getting-started.md
+++ /dev/null
@@ -1,816 +0,0 @@
----
-title: Getting Started with ArangoGraphML
-menuTitle: Getting Started
-weight: 10
-description: >-
- How to control all resources inside ArangoGraphML in a scriptable manner
-aliases:
- - getting-started-with-arangographml
----
-ArangoGraphML provides an easy-to-use & scalable interface to run Graph Machine Learning on ArangoDB data. Since all of the orchestration and ML logic is managed by ArangoGraph, all that is typically required is a set of JSON specifications outlining the individual processes to solve an ML task. If you are using the self-managed solution, additional configuration may be required.
-
-`arangoml` is a Python package that allows you to manage all of the necessary ArangoGraphML components, including:
-- **Project Management**: Projects are a metadata-tracking entity that sits at the top level of ArangoGraphML. All activities must link to a project.
-- **Featurization**: The step of converting human-understandable data to machine-understandable data (i.e., features), such that it can be used to train Graph Neural Networks (GNNs).
-- **Training**: Train a set of models based on the name of the generated/existing features, and a definition of the ML task to solve (e.g., Node Classification, Embedding Generation).
-- **Model Selection**: Select the best model based on the metrics generated during training.
-- **Predictions**: Generate predictions based on the selected model, and persist the results to the source graph (either in the source documents, or in a new collection).
-
-{{< tip >}}
-To enable the ArangoGraphML services in the ArangoGraph platform,
-[get in touch](https://www.arangodb.com/contact/)
-with the ArangoDB team. Regular notebooks in ArangoGraph don't include the
-`arangoml` package.
-{{< /tip >}}
-
-ArangoGraphML's suite of services and packages is driven by **"specifications"**. These specifications are standard Python dictionaries that describe the task being performed and the data being used. The ArangoGraphML services work closely together, with the previous task being used as the input for the next.
-
-Let's take a look at using the `arangoml` package to:
-
-1. Manage projects
-2. Featurize data
-3. Submit Training Jobs
-4. Evaluate Model Metrics
-5. Generate Predictions
-
-## Initialize ArangoML
-
-{{< tabs "arangoml" >}}
-
-{{< tab "ArangoGraphML" >}}
-
-**API Documentation: [arangoml.ArangoMLMagics.enable_arangoml](https://arangoml.github.io/arangoml/magics.html#arangoml.magic.ArangoMLMagics.enable_arangoml)**
-
-The `arangoml` package comes pre-loaded with every ArangoGraphML notebook environment.
-To start using it, simply import it, and enable it via a Jupyter Magic Command.
-
-```py
-arangoml = %enable_arangoml
-```
-
-{{< tip >}}
-ArangoGraphML comes with other ArangoDB Magic Commands! See the full list [here](https://arangoml.github.io/arangoml/magics.html).
-{{< /tip >}}
-
-{{< /tab >}}
-
-{{< tab "Self-managed" >}}
-
-**API Documentation: [arangoml.ArangoML](https://arangoml.github.io/arangoml/client.html#arangoml.main.ArangoML)**
-
-The `ArangoML` class is the main entry point for the `arangoml` package.
-It requires the following parameters:
-- `client`: An instance of arango.client.ArangoClient. Defaults to `None`. If not provided, the **hosts** argument must be provided.
-- `hosts`: The ArangoDB host(s) to connect to. This can be a single host, or a
- list of hosts.
-- `username`: The ArangoDB username to use for authentication.
-- `password`: The ArangoDB password to use for authentication.
-- `user_token`: (Optional) The ArangoDB user token to use for authentication.
-  This is an alternative to username/password authentication.
-- `ca_cert_file`: (Optional) The path to the CA certificate file to use for TLS
-  verification.
-- `api_endpoint`: The URL to the ArangoGraphML API Service.
-- `settings`: (Optional) A list of secrets files to be loaded as settings. Parameters provided as arguments will override those in the settings files (e.g `settings.toml`).
-- `version`: The ArangoML API date version. Defaults to the latest version.
-
-It is possible to instantiate an ArangoML object in multiple ways:
-
-1. Via parameters
-```py
-from arangoml import ArangoML
-
-arangoml = ArangoML(
-    hosts="http://localhost:8529",
- username="root",
- password="password",
- # ca_cert_file="/path/to/ca.pem",
- # user_token="..."
- api_endpoint="http://localhost:8501",
-)
-```
-
-2. Via parameters and a custom `ArangoClient` instance
-```py
-from arangoml import ArangoML
-from arango import ArangoClient
-
-client = ArangoClient(
- hosts="http://localhost:8529",
- verify_override="/path/to/ca.pem",
- hosts_resolver=...,
- ...
-)
-
-arangoml = ArangoML(
- client=client,
- username="root",
- password="password",
- # user_token="..."
- api_endpoint="http://localhost:8501",
-)
-```
-
-3. Via environment variables
-```py
-import os
-from arangoml import ArangoML
-
-os.environ["ARANGODB_HOSTS"] = "http://localhost:8529"
-os.environ["ARANGODB_CA_CERT_FILE"]="/path/to/ca.pem"
-os.environ["ARANGODB_USER"] = "root"
-os.environ["ARANGODB_PW"] = "password"
-# os.environ["ARANGODB_USER_TOKEN"] = "..."
-os.environ["ML_API_SERVICES_ENDPOINT"] = "http://localhost:8501"
-
-arangoml = ArangoML()
-```
-
-4. Via configuration files
-```py
-from arangoml import ArangoML
-
-arangoml = ArangoML(settings_files=["settings_1.toml", "settings_2.toml"])
-```
-
-5. Via a Jupyter Magic Command
-
-**API Documentation: [arangoml.ArangoMLMagics.enable_arangoml](https://arangoml.github.io/arangoml/magics.html#arangoml.magic.ArangoMLMagics.enable_arangoml)**
-
-```
-%load_ext arangoml
-%enable_arangoml
-```
-{{< info >}}
-This assumes you are working in a Jupyter Notebook environment and have set
-the environment variables there with user credentials that have **_system**
-access.
-{{< /info >}}
-
-{{< tip >}}
-Running `%load_ext arangoml` also provides access to other [ArangoGraphML
-Jupyter Magic Commands](https://arangoml.github.io/arangoml/magics.html).
-{{< /tip >}}
-
-{{< /tab >}}
-
-{{< /tabs >}}
-
-## Load the database
-
-This example is using ArangoML to predict the **class** of `Events` in a
-Knowledge Graph constructed from the [GDELT Project](https://www.gdeltproject.org/).
-
-> GDELT monitors the world's news media from nearly every corner of every
- country in print, broadcast, and web formats, in over 100 languages, every
- moment of every day. [...] Put simply, the GDELT Project is a realtime open
- data global graph over human society as seen through the eyes of the world's
- news media, reaching deeply into local events, reaction, discourse, and
- emotions of the most remote corners of the world in near-realtime and making
- all of this available as an open data firehose to enable research over human
- society.
-
-The events used range from peaceful protests to significant battles in Angola.
-The image below depicts the connections around an example event:
-
-
-
-You can also view a larger portion of this graph, showing how the events,
-actors, news sources, and locations are interconnected.
-
-
-
-Let's get started!
-
-{{< tabs "arangoml" >}}
-
-{{< tab "ArangoGraphML" >}}
-
-The [arango-datasets](https://github.com/arangoml/arangodb_datasets) package
-allows you to load pre-defined datasets into ArangoDB. It comes pre-installed in the
-ArangoGraphML notebook environment.
-
-```py
-DATASET_NAME = "OPEN_INTELLIGENCE_ANGOLA"
-
-%delete_database {DATASET_NAME}
-%create_database {DATASET_NAME}
-%use_database {DATASET_NAME}
-%load_dataset {DATASET_NAME}
-```
-
-{{< /tab >}}
-
-{{< tab "Self-managed" >}}
-
-The [arango-datasets](https://github.com/arangoml/arangodb_datasets) package
-allows you to load a dataset into ArangoDB. It can be installed with:
-
-```
-pip install arango-datasets
-```
-
-```py
-from arango_datasets.datasets import Datasets
-
-DATASET_NAME = "OPEN_INTELLIGENCE_ANGOLA"
-
-dataset_db = arangoml.client.db(
- name=DATASET_NAME,
- username=arangoml.settings.get("ARANGODB_USER"),
- password=arangoml.settings.get("ARANGODB_PW"),
- user_token=arangoml.settings.get("ARANGODB_USER_TOKEN"),
- verify=True
-)
-
-Datasets(dataset_db).load(DATASET_NAME)
-```
-{{< /tab >}}
-
-{{< /tabs >}}
-
-## Projects
-
-**API Documentation: [ArangoML.projects](https://arangoml.github.io/arangoml/api.html#projects)**
-
-Projects are an important reference used throughout the entire ArangoGraphML
-lifecycle. All activities link back to a project. The creation of the project
-is very simple.
-
-### Get/Create a project
-```py
-project = arangoml.get_or_create_project(DATASET_NAME)
-```
-
-### List projects
-
-```py
-arangoml.projects.list_projects()
-```
-
-## Featurization
-
-**API Documentation: [ArangoML.jobs.featurize](https://arangoml.github.io/arangoml/api.html#agml_api.jobs.v1.api.jobs_api.JobsApi.featurize)**
-
-**The Featurization Service depends on a `Featurization Specification` that contains**:
-- `featurizationName`: A name for the featurization task.
-
-- `projectName`: The associated project name. You can use `project.name` here
- if it was created or retrieved as described above.
-
-- `graphName`: The associated graph name that exists within the database.
-
-- `featureSetID` Optional: The ID of an existing Feature Set to re-use. If provided, the `metagraph` dictionary can be omitted. Defaults to `None`.
-
-- `featurizationConfiguration` Optional: The optional default configuration to be applied
- across all features. Individual collection feature settings override this option.
-
- - `featurePrefix`: The prefix to be applied to all individual features generated. Default is `feat_`.
-
- - `outputName`: Adjust the default feature name. This can be any valid ArangoDB attribute name. Defaults to `x`.
-
- - `dimensionalityReduction`: Object configuring dimensionality reduction.
- - `disabled`: Boolean for enabling or disabling dimensionality reduction. Default is `false`.
- - `size`: The number of dimensions to reduce the feature length to. Default is `512`.
-
-  - `defaultsPerFeatureType`: A dictionary mapping each feature to how missing or mismatched values should be handled. The keys of this dictionary are the features, and the values are sub-dictionaries with the following keys (see the short example after this list):
- - `missing`: A sub-dictionary detailing how missing values should be handled.
- - `strategy`: The strategy to use for missing values. Options include `REPLACE` or `RAISE`.
- - `replacement`: The value to replace missing values with. Only needed if `strategy` is `REPLACE`.
- - `mismatch`: A sub-dictionary detailing how mismatched values should be handled.
- - `strategy`: The strategy to use for mismatched values. Options include `REPLACE`, `RAISE`, `COERCE_REPLACE`, or `COERCE_RAISE`.
- - `replacement`: The value to replace mismatched values with. Only needed if `strategy` is `REPLACE`, or `COERCE_REPLACE`.
-
-- `jobConfiguration` Optional: A set of configurations that are applied to the job.
- - `batchSize`: The number of documents to process in a single batch. Default is `32`.
- - `runAnalysisChecks`: Boolean for enabling or disabling analysis checks. Default is `true`.
- - `skipLabels`: Boolean for enabling or disabling label skipping. Default is `false`.
- - `overwriteFSGraph`: Boolean for enabling or disabling overwriting the feature store graph. Default is `false`.
- - `writeToSourceGraph`: Boolean for enabling or disabling writing features to the source graph. Default is `true`.
- - `useFeatureStore`: Boolean for enabling or disabling the use of the feature store. Default is `false`.
-
-- `metagraph`: Metadata to represent the vertex & edge collections of the graph.
- - `vertexCollections`: A dictionary mapping the vertex collection names to the following values:
- - `features`: A dictionary mapping document properties to the following values:
- - `featureType`: The type of feature. Options include `text`, `category`, `numeric`, or `label`.
- - `config`: Collection-level configuration settings.
- - `featurePrefix`: Identical to global `featurePrefix` but for this collection.
- - `dimensionalityReduction`: Identical to global `dimensionalityReduction` but for this collection.
- - `outputName`: Identical to global `outputName` but for this collection.
- - `defaultsPerFeatureType`: Identical to global `defaultsPerFeatureType` but for this collection.
- - `edgeCollections`: A dictionary mapping the edge collection names to an empty dictionary, as edge attributes are not currently supported.
-
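-For illustration, a `defaultsPerFeatureType` value following the structure
-described above could look like this (the keys and replacement values are
-hypothetical and need to be adapted to your data):
-
-```py
-defaults_per_feature_type = {
-    "numeric": {
-        "missing": {"strategy": "REPLACE", "replacement": 0},
-        "mismatch": {"strategy": "COERCE_REPLACE", "replacement": 0},
-    },
-    "text": {
-        "missing": {"strategy": "REPLACE", "replacement": ""},
-        "mismatch": {"strategy": "RAISE"},
-    },
-}
-```
-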
-The following Featurization Specification example is used for the GDELT dataset:
-- It featurizes the `name` attribute of the `Actor`, `Country`, `Source`,
-  and `Location` collections as `text` features.
-- It featurizes the `description` attribute of the `Event` collection as a
- `text` feature.
-- It featurizes the `label` attribute of the `Event` collection as a `label`
- feature (this is the attribute you want to predict).
-- It featurizes the `sourceScale` attribute of the `Source` collection as a
- `category` feature.
-- It featurizes the `name` attribute of the `Region` collection as a
- `category` feature.
-
-```py
-# 1. Define the Featurization Specification
-
-featurization_spec = {
- "databaseName": dataset_db.name,
- "projectName": project.name,
- "graphName": graph.name,
- "featurizationName": f"{DATASET_NAME}_Featurization",
- "featurizationConfiguration": {
- "featurePrefix": "feat_",
- "dimensionalityReduction": { "size": 256 },
- "outputName": "x"
- },
- "jobConfiguration": {
- "batchSize": 512,
- "useFeatureStore": False,
- "runAnalysisChecks": False,
- },
- "metagraph": {
- "vertexCollections": {
- "Actor": {
- "features": {
- "name": {
- "featureType": "text",
- },
- }
- },
- "Country": {
- "features": {
- "name": {
- "featureType": "text",
- }
- }
- },
- "Event": {
- "features": {
- "description": {
- "featureType": "text",
- },
- "label": {
- "featureType": "label",
- },
- }
- },
- "Source": {
- "features": {
- "name": {
- "featureType": "text",
- },
- "sourceScale": {
- "featureType": "category",
- },
- }
- },
- "Location": {
- "features": {
- "name": {
- "featureType": "text",
- }
- }
- },
- "Region": {
- "features": {
- "name": {
- "featureType": "category",
- },
- }
- }
- },
- "edgeCollections": {
- "eventActor": {},
- "hasSource": {},
- "hasLocation": {},
- "inCountry": {},
- "inRegion": {},
- }
- }
-}
-```
-
-Once the specification has been defined, a Featurization Job can be triggered using the `arangoml.jobs.featurize` method:
-
-```py
-# 2. Submit a Featurization Job
-
-featurization_job = arangoml.jobs.featurize(featurization_spec)
-```
-
-Once a Featurization Job has been submitted, you can wait for it to complete using the `arangoml.wait_for_featurization` method:
-
-```py
-# 3. Wait for the Featurization Job to complete
-
-featurization_job_result = arangoml.wait_for_featurization(featurization_job.job_id)
-```
-
-
-**Example Output:**
-```py
-{
- "job_id": "16349541",
- "output_db_name": "OPEN_INTELLIGENCE_ANGOLA",
- "graph": "OPEN_INTELLIGENCE_ANGOLA",
- "feature_set_id": "16349537",
- "feature_set_ids": [
- "16349537"
- ],
- "vertexCollections": {
- "Actor": {
- "x": "OPEN_INTELLIGENCE_ANGOLA_x"
- },
- "Class": {
- "x": "OPEN_INTELLIGENCE_ANGOLA_x"
- },
- "Country": {
- "x": "OPEN_INTELLIGENCE_ANGOLA_x"
- },
- "Event": {
- "x": "OPEN_INTELLIGENCE_ANGOLA_x",
- "y": "OPEN_INTELLIGENCE_ANGOLA_y"
- },
- "Source": {
- "x": "OPEN_INTELLIGENCE_ANGOLA_x"
- },
- "Location": {
- "x": "OPEN_INTELLIGENCE_ANGOLA_x"
- },
- "Region": {
- "x": "OPEN_INTELLIGENCE_ANGOLA_x"
- }
- },
- "edgeCollections": {
- "eventActor": {},
- "hasSource": {},
- "hasLocation": {},
- "inCountry": {},
- "inRegion": {},
- "subClass": {},
- "type": {}
- },
- "label_field": "OPEN_INTELLIGENCE_ANGOLA_y",
- "input_field": "OPEN_INTELLIGENCE_ANGOLA_x",
- "feature_set_id_to_results": {
- "16349537": {
- "feature_set_id": "16349537",
- "output_db_name": "OPEN_INTELLIGENCE_ANGOLA",
- "graph": "OPEN_INTELLIGENCE_ANGOLA",
- "vertexCollections": {
- "Actor": {
- "x": "OPEN_INTELLIGENCE_ANGOLA_x"
- },
- "Class": {
- "x": "OPEN_INTELLIGENCE_ANGOLA_x"
- },
- "Country": {
- "x": "OPEN_INTELLIGENCE_ANGOLA_x"
- },
- "Event": {
- "x": "OPEN_INTELLIGENCE_ANGOLA_x",
- "y": "OPEN_INTELLIGENCE_ANGOLA_y"
- },
- "Source": {
- "x": "OPEN_INTELLIGENCE_ANGOLA_x"
- },
- "Location": {
- "x": "OPEN_INTELLIGENCE_ANGOLA_x"
- },
- "Region": {
- "x": "OPEN_INTELLIGENCE_ANGOLA_x"
- }
- },
- "edgeCollections": {
- "eventActor": {},
- "hasSource": {},
- "hasLocation": {},
- "inCountry": {},
- "inRegion": {},
- "subClass": {},
- "type": {}
- },
- "label_field": "OPEN_INTELLIGENCE_ANGOLA_y",
- "input_field": "OPEN_INTELLIGENCE_ANGOLA_x",
- "is_feature_store": false,
- "target_collection": "Event"
- }
- },
- "is_feature_store": false,
- "target_collection": "Event"
-}
-```
-
-You can also cancel a Featurization Job using the `arangoml.jobs.cancel_job` method:
-
-```python
-arangoml.jobs.cancel_job(featurization_job.job_id)
-```
-
-
-## Training
-
-**API Documentation: [ArangoML.jobs.train](https://arangoml.github.io/arangoml/api.html#agml_api.jobs.v1.api.jobs_api.JobsApi.train)**
-
-Training Graph Machine Learning Models with ArangoGraphML only requires two steps:
-1. Describe which data points should be included in the Training Job.
-2. Pass the Training Specification to the Training Service.
-
-**The Training Service depends on a `Training Specification` that contains**:
-- `featureSetID`: The feature set ID that was generated during the Featurization Job (if any). It replaces the need to provide the `metagraph`, `databaseName`, and `projectName` fields.
-
-- `databaseName`: The database name the source data is in. Can be omitted if `featureSetID` is provided.
-
-- `projectName`: The top-level project to which all the experiments will link back. Can be omitted if `featureSetID` is provided.
-
-- `useFeatureStore`: Boolean for enabling or disabling the use of the feature store. Default is `false`.
-
-- `mlSpec`: Describes the desired machine learning task, input features, and
- the attribute label to be predicted.
- - `classification`: Dictionary to describe the Node Classification Task Specification.
- - `targetCollection`: The ArangoDB collection name that contains the prediction label.
- - `inputFeatures`: The name of the feature to be used as input.
- - `labelField`: The name of the attribute to be predicted.
- - `batchSize`: The number of documents to process in a single batch. Default is `64`.
-
-- `metagraph`: Metadata to represent the vertex & edge collections of the graph. If `featureSetID` is provided, this can be omitted.
- - `graph`: The ArangoDB graph name.
- - `vertexCollections`: A dictionary mapping the collection names to the following values:
- - `x`: The name of the feature to be used as input.
- - `y`: The name of the attribute to be predicted. Can only be specified for one collection.
- - `edgeCollections`: A dictionary mapping the edge collection names to an empty dictionary, as edge features are not currently supported.
-
-A Training Specification allows for concisely defining your training task in a
-single object and then passing that object to the training service using the
-Python API client, as shown below.
-
-
-The ArangoGraphML Training Service is responsible for training a series of
-Graph Machine Learning Models using the data provided in the Training
-Specification. It assumes that the data has been featurized and is ready to be
-used for training.
-
-Given that we have run a Featurization Job, we can create the Training Specification using the `featurization_job_result` object returned from the Featurization Job:
-
-```py
-# 1. Define the Training Specification
-
-training_spec = {
- "featureSetID": featurization_job_result.result.feature_set_id,
- "mlSpec": {
- "classification": {
- "targetCollection": "Event",
- "inputFeatures": "OPEN_INTELLIGENCE_ANGOLA_x",
- "labelField": "OPEN_INTELLIGENCE_ANGOLA_y",
- }
- },
-}
-```
-
-Once the specification has been defined, a Training Job can be triggered using the `arangoml.jobs.train` method:
-
-```py
-# 2. Submit a Training Job
-
-training_job = arangoml.jobs.train(training_spec)
-```
-
-Once a Training Job has been submitted, you can wait for it to complete using the `arangoml.wait_for_training` method:
-
-```py
-# 3. Wait for the Training Job to complete
-
-training_job_result = arangoml.wait_for_training(training_job.job_id)
-```
-
-**Example Output:**
-```py
-{
- "job_id": "691ceb2f-1931-492a-b4eb-0536925a4697",
- "job_status": "COMPLETED",
- "project_name": "OPEN_INTELLIGENCE_ANGOLA_GraphML_Node_Classification",
- "project_id": "16832427",
- "database_name": "OPEN_INTELLIGENCE_ANGOLA",
- "metagraph": {
- "mlSpec": {
- "classification": {
- "targetCollection": "Event",
- "inputFeatures": "OPEN_INTELLIGENCE_ANGOLA_x",
- "labelField": "OPEN_INTELLIGENCE_ANGOLA_y",
- "metrics": None
- }
- },
- "graph": "OPEN_INTELLIGENCE_ANGOLA",
- "vertexCollections": {
- "Actor": {
- "x": "OPEN_INTELLIGENCE_ANGOLA_x"
- },
- "Class": {
- "x": "OPEN_INTELLIGENCE_ANGOLA_x"
- },
- "Country": {
- "x": "OPEN_INTELLIGENCE_ANGOLA_x"
- },
- "Event": {
- "x": "OPEN_INTELLIGENCE_ANGOLA_x",
- "y": "OPEN_INTELLIGENCE_ANGOLA_y"
- },
- "Source": {
- "x": "OPEN_INTELLIGENCE_ANGOLA_x"
- },
- "Location": {
- "x": "OPEN_INTELLIGENCE_ANGOLA_x"
- },
- "Region": {
- "x": "OPEN_INTELLIGENCE_ANGOLA_x"
- }
- },
- "edgeCollections": {
- "eventActor": {},
- "hasSource": {},
- "hasLocation": {},
- "inCountry": {},
- "inRegion": {},
- "subClass": {},
- "type": {}
- },
- "batch_size": 64
- },
- "time_submitted": "2024-01-12T02:19:19.686286",
- "time_started": "2024-01-12T02:19:29.403742",
- "time_ended": "2024-01-12T02:30:59.313038",
- "job_state": None,
- "job_conditions": None
-}
-```
-
-You can also cancel a Training Job using the `arangoml.jobs.cancel_job` method:
-
-```python
-arangoml.jobs.cancel_job(training_job.job_id)
-```
-
-## Model Selection
-
-Model Statistics can be observed upon completion of a Training Job.
-To select a Model, the ArangoGraphML Projects Service can be used to gather
-all relevant models and choose the preferred model for a Prediction Job.
-
-First, let's list all the trained models using [ArangoML.list_models](https://arangoml.github.io/arangoml/client.html#arangoml.main.ArangoML.list_models):
-
-```py
-# 1. List all trained Models
-
-models = arangoml.list_models(
- project_name=project.name,
- training_job_id=training_job.job_id
-)
-
-print(len(models))
-```
-
-
-The cell below selects the model with the highest **test accuracy** using [ArangoML.get_best_model](https://arangoml.github.io/arangoml/client.html#arangoml.main.ArangoML.get_best_model), but there may be other factors that motivate you to choose another model. See the `model_statistics` field in the example output below for the full list of available metrics.
-
-```py
-best_model = arangoml.get_best_model(
- project.name,
- training_job.job_id,
- sort_parent_key="test",
- sort_child_key="accuracy",
-)
-
-print(best_model)
-```
-
-**Example Output:**
-```py
-{
- "job_id": "691ceb2f-1931-492a-b4eb-0536925a4697",
- "model_id": "02297435-3394-4e7e-aaac-82e1d224f85c",
- "model_statistics": {
- "_id": "devperf/123",
- "_key": "123",
- "_rev": "_gkUc8By--_",
- "run_id": "123",
- "test": {
- "accuracy": 0.8891242216547955,
- "confusion_matrix": [[13271, 2092], [1276, 5684]],
- "f1": 0.9,
- "loss": 0.1,
- "precision": 0.9,
- "recall": 0.8,
- "roc_auc": 0.8,
- },
- "validation": {
- "accuracy": 0.9,
- "confusion_matrix": [[13271, 2092], [1276, 5684]],
- "f1": 0.85,
- "loss": 0.1,
- "precision": 0.86,
- "recall": 0.85,
- "roc_auc": 0.85,
- },
- },
- "target_collection": "Event",
- "target_field": "label",
-}
-```
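-
-If another metric matters more in your use case, you can also select a model
-manually. For example, picking by test F1 score might look like this (a sketch
-that assumes the `model_statistics` layout shown above):
-
-```py
-best_by_f1 = max(
-    models,
-    key=lambda model: model.model_statistics["test"]["f1"],
-)
-```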
-
-## Prediction
-
-**API Documentation: [ArangoML.jobs.predict](https://arangoml.github.io/arangoml/api.html#agml_api.jobs.v1.api.jobs_api.JobsApi.predict)**
-
-Final step!
-
-After selecting a model, a Prediction Job can be created. The Prediction Job
-will generate predictions and persist them to the source graph in a new
-collection, or within the source documents.
-
-**The Prediction Service depends on a `Prediction Specification` that contains**:
-- `projectName`: The top-level project to which all the experiments will link back.
-- `databaseName`: The database name the source data is in.
-- `modelID`: The model ID to use for generating predictions.
-- `featurizeNewDocuments`: Boolean for enabling or disabling the featurization of new documents. Useful if you don't want to re-train the model upon new data. Default is `false`.
-- `featurizeOutdatedDocuments`: Boolean for enabling or disabling the featurization of outdated documents. Outdated documents are those whose features have changed since the last featurization. Default is `false`.
-- `schedule`: A cron expression to schedule the prediction job (e.g., `0 0 * * *` for daily predictions). Default is `None`.
-
-
-```py
-# 1. Define the Prediction Specification
-
-prediction_spec = {
- "projectName": project.name,
- "databaseName": dataset_db.name,
- "modelID": best_model.model_id,
-}
-```
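-
-If you want recurring predictions that also featurize documents added after
-training, you can combine the optional fields listed above, for example
-(hypothetical values):
-
-```py
-prediction_spec_scheduled = {
-    **prediction_spec,
-    "featurizeNewDocuments": True,
-    "schedule": "0 0 * * *",  # run daily at midnight (cron syntax)
-}
-```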
-
-A Prediction Job updates all documents with the predictions derived from the trained model.
-Once the specification has been defined, the job can be triggered using the `arangoml.jobs.predict` method:
-
-```py
-# 2. Submit a Prediction Job
-prediction_job = arangoml.jobs.predict(prediction_spec)
-```
-
-Similar to the Training Service, we can wait for a Prediction Job to complete with the `arangoml.wait_for_prediction` method:
-
-```py
-# 3. Wait for the Prediction Job to complete
-
-prediction_job_result = arangoml.wait_for_prediction(prediction_job.job_id)
-```
-
-**Example Output:**
-```py
-{
- "job_id": "b2a422bb-5650-4fbc-ba6b-0578af0049d9",
- "job_status": "COMPLETED",
- "project_name": "OPEN_INTELLIGENCE_ANGOLA_GraphML_Node_Classification",
- "project_id": "16832427",
- "database_name": "OPEN_INTELLIGENCE_ANGOLA",
- "model_id": "1a365657-f5ed-4da9-948b-1ff60bc6e7de",
- "job_state_information": {
- "outputGraphName": "OPEN_INTELLIGENCE_ANGOLA",
- "outputCollectionName": "Event",
- "outputAttribute": "OPEN_INTELLIGENCE_ANGOLA_y_predicted",
- "numberOfPredictedDocuments": 3302,
- "outputEdgeCollectionName": None
- },
- "time_submitted": "2024-01-12T02:31:18.382625",
- "time_started": "2024-01-12T02:31:23.550469",
- "time_ended": "2024-01-12T02:31:40.021035"
-}
-```
-
-You can also cancel a Prediction Job using the `arangoml.jobs.cancel_job` method:
-
-```python
-arangoml.jobs.cancel_job(prediction_job.job_id)
-```
-
-### Viewing Predictions
-
-We can now access our predictions via AQL:
-
-```py
-import json
-
-collection_name = prediction_job_result.job_state_information['outputCollectionName']
-
-query = f"""
- FOR doc IN `{collection_name}`
- SORT RAND()
- LIMIT 3
- RETURN doc
-"""
-
-docs = list(dataset_db.aql.execute(query))
-
-print(json.dumps(docs, indent=2))
-```
\ No newline at end of file
diff --git a/site/content/3.10/data-science/llm-knowledge-graphs.md b/site/content/3.10/data-science/llm-knowledge-graphs.md
deleted file mode 100644
index 80d8be9666..0000000000
--- a/site/content/3.10/data-science/llm-knowledge-graphs.md
+++ /dev/null
@@ -1,73 +0,0 @@
----
-title: Large Language Models (LLMs) and Knowledge Graphs
-menuTitle: Large Language Models and Knowledge Graphs
-weight: 133
-description: >-
- Integrate large language models (LLMs) with knowledge graphs using ArangoDB
----
-Large language models (LLMs) and knowledge graphs are two prominent and
-contrasting concepts, each possessing unique characteristics and functionalities
-that significantly impact the methods we employ to extract valuable insights from
-constantly expanding and complex datasets.
-
-LLMs, exemplified by OpenAI's ChatGPT, represent a class of powerful language
-transformers. These models leverage advanced neural networks to exhibit a
-remarkable proficiency in understanding, generating, and participating in
-contextually-aware conversations.
-
-On the other hand, knowledge graphs contain carefully structured data and are
-designed to capture intricate relationships among discrete and seemingly
-unrelated information. With knowledge graphs, you can explore contextual
-insights and execute structured queries that reveal hidden connections within
-complex datasets.
-
-ArangoDB's unique capabilities and flexible integration of knowledge graphs and
-LLMs provide a powerful and efficient solution for anyone seeking to extract
-valuable insights from diverse datasets.
-
-## Knowledge Graphs
-
-A knowledge graph can be thought of as a dynamic and interconnected network of
-real-world entities and the intricate relationships that exist between them.
-
-Key aspects of knowledge graphs:
-- **Domain specific knowledge**: You can tailor knowledge graphs to specific
- domains and industries.
-- **Structured information**: Makes it easy to query, analyze, and extract
- meaningful insights from your data.
-- **Accessibility**: You can build a Semantic Web knowledge graph or one
-  using custom data.
-
-LLMs can help distill knowledge graphs from natural language by performing
-the following tasks:
-- Entity discovery
-- Relation extraction
-- Coreference resolution
-- End-to-end knowledge graph construction
-- (Text) Embeddings
-
-
-
-## ArangoDB and LangChain
-
-[LangChain](https://www.langchain.com/) is a framework for developing applications
-powered by language models.
-
-LangChain enables applications that are:
-- Data-aware (connect a language model to other sources of data)
-- Agentic (allow a language model to interact with its environment)
-
-The ArangoDB integration with LangChain provides you the ability to analyze
-data seamlessly via natural language, eliminating the need for query language
-design. By using LLM chat models such as OpenAI’s ChatGPT, you can "speak" to
-your data instead of querying it.
-
-### Get started with ArangoDB QA chain
-
-The [ArangoDB QA chain notebook](https://langchain-langchain.vercel.app/docs/use_cases/more/graph/graph_arangodb_qa.html)
-shows how to use LLMs to provide a natural language interface to an ArangoDB
-instance.
-
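-The basic pattern looks roughly like the following sketch (import paths and
-class names may differ between LangChain versions; refer to the notebook for a
-tested, complete version):
-
-```py
-from arango import ArangoClient
-from langchain.chains import ArangoGraphQAChain
-from langchain.chat_models import ChatOpenAI
-from langchain.graphs import ArangoGraph
-
-# Connect to ArangoDB and wrap the database for LangChain
-db = ArangoClient(hosts="http://localhost:8529").db(
-    "_system", username="root", password="password"
-)
-graph = ArangoGraph(db)
-
-# The chain translates a natural language question into AQL, runs the
-# query, and phrases the result as an answer
-chain = ArangoGraphQAChain.from_llm(ChatOpenAI(temperature=0), graph=graph)
-chain.run("Which characters have the most relationships?")
-```
-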
-Run the notebook directly in [Google Colab](https://colab.research.google.com/github/arangodb/interactive_tutorials/blob/master/notebooks/Langchain.ipynb).
-
-See also other [machine learning interactive tutorials](https://github.com/arangodb/interactive_tutorials#machine-learning).
\ No newline at end of file
diff --git a/site/content/3.10/data-science/pregel/algorithms.md b/site/content/3.10/data-science/pregel/algorithms.md
deleted file mode 100644
index b596d7669b..0000000000
--- a/site/content/3.10/data-science/pregel/algorithms.md
+++ /dev/null
@@ -1,369 +0,0 @@
----
-title: Pregel Algorithms
-menuTitle: Pregel Algorithms
-weight: 5
-description: >-
- You can use Pregel algorithms for graph exploration, path finding, analytics
- queries, and much more
-aliases:
- - pregel-algorithms
----
-Pregel algorithms are used in scenarios where you need to do an
-analysis of a graph stored in ArangoDB to get insights about its
-nature and structure - without having to use external processing systems.
-
-Pregel can solve numerous graph problems and offers solutions that are
-essential building blocks in the lifecycle of a real-world application.
-For example, in a network system, detecting the weaknesses of the network
-design and determining the times when the network is vulnerable may
-significantly reduce any downtime.
-
-In the section below you can find more details about all available
-Pregel algorithms in ArangoDB.
-
-## Available Algorithms
-
-### PageRank
-
-PageRank is a well-known algorithm to rank vertices in a graph: the more
-important a vertex, the higher its rank. It goes back to L. Page and S. Brin's
-[paper](http://infolab.stanford.edu/pub/papers/google.pdf) and
-is used to rank pages in search engines (hence the name). The algorithm runs
-until the execution converges. To specify a custom threshold, use the `threshold`
-parameter; to run for a fixed number of iterations, use the `maxGSS` parameter.
-
-The rank of a vertex is a positive real number. The algorithm starts with every
-vertex having the same rank (one divided by the number of vertices) and sends its
-rank to its out-neighbors. The computation proceeds in iterations. In each iteration,
-the new rank is computed according to the formula
-`(0.15/total number of vertices) + (0.85 * the sum of all incoming ranks)`.
-The value sent to each of the out-neighbors is the new rank divided by the number
-of those neighbors, thus every out-neighbor gets the same part of the new rank.
-
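-As an illustration of this update rule only (not of the distributed Pregel
-implementation), one synchronous iteration can be sketched in Python as:
-
-```py
-def pagerank_step(ranks, out_neighbors, damping=0.85):
-    # New rank = (1 - damping) / N + damping * (sum of incoming rank shares)
-    n = len(ranks)
-    incoming = {v: 0.0 for v in ranks}
-    for v, neighbors in out_neighbors.items():
-        for w in neighbors:
-            # each out-neighbor receives an equal share of v's rank
-            incoming[w] += ranks[v] / len(neighbors)
-    return {v: (1 - damping) / n + damping * incoming[v] for v in ranks}
-```
-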
-The algorithm stops when at least one of the two conditions is satisfied:
-- The maximum number of iterations is reached. This is the same `maxGSS`
- parameter as for the other algorithms.
-- Every vertex changes its rank in the last iteration by less than a certain
- threshold. The default threshold is 0.00001, a custom value can be set with
- the `threshold` parameter.
-
-```js
-var pregel = require("@arangodb/pregel");
-pregel.start("pagerank", "graphname", { maxGSS: 100, threshold: 0.00000001, resultField: "rank" })
-```
-
-#### Seeded PageRank
-
-It is possible to specify an initial distribution for the vertex documents in
-your graph. To define these seed ranks / centralities, you can specify a
-`sourceField` in the properties for this algorithm. If the specified field is
-set on a document _and_ the value is numeric, then it is used instead of
-the default initial rank of `1 / numVertices`.
-
-```js
-var pregel = require("@arangodb/pregel");
-pregel.start("pagerank", "graphname", { maxGSS: 20, threshold: 0.00000001, sourceField: "seed", resultField: "rank" })
-```
-
-### Single-Source Shortest Path
-
-Calculates the distances, that is, the lengths of shortest paths from the
-given source to all other vertices, called _targets_. The result is written
-to the specified property of the respective target.
-The distance to the source vertex itself is returned as `0` and a length above
-`9007199254740991` (max safe integer) means that there is no path from the
-source to the vertex in the graph.
-
-The algorithm runs until all distances are computed. The number of iterations is bounded by the
-diameter of your graph (the longest distance between two vertices).
-
-A call of the algorithm requires the `source` parameter whose value is the
-document ID of the source vertex. The result field needs to be
-specified in `_resultField` (note the underscore).
-
-```js
-var pregel = require("@arangodb/pregel");
-pregel.start("sssp", "graphname", { source: "vertices/1337", _resultField: "distance" });
-```
-
-### Connected Components
-
-There are three algorithms to find connected components in a graph:
-
-1. If your graph is effectively undirected (for every edge from vertex A to
- vertex B there is also an edge from B to A),
- then the simple **connected components** algorithm named
- `"connectedcomponents"` is suitable.
-
- It is a very simple and fast algorithm, but it only works correctly on
- undirected graphs. Your results on directed graphs may vary, depending on
- how connected your components are.
-
- In an undirected graph, a _connected component_ is a subgraph:
- - where there is a path between every pair of vertices from this component and
- - which is maximal with this property: adding any other vertex would destroy it.
- In other words, there is no path between any vertex from the component and
- any vertex not in the component.
-
-2. To find **weakly connected components** (WCC) you can use the algorithm named `"wcc"`.
- A _weakly connected component_ in a directed graph is a maximal subgraph such
- that there is a path between each pair of vertices
-  where _one can also walk against the direction of edges_. More formally, it is
- a connected component (see the definition above) in the
- _underlying undirected graph_, i.e., in the undirected graph obtained by
- adding an edge from vertex B to vertex A (if it does not already exist),
- if there is an edge from vertex A to vertex B.
-
- This algorithm works on directed graphs but, in general, requires a greater amount of
- traffic between DB-Servers.
-
-3. To find **strongly connected components** (SCC) you can use the algorithm named `"scc"`.
- A _strongly connected component_ is a maximal subgraph,
- where for every two vertices, there is a path from one of them to the other.
- It is thus defined as a weakly connected component,
- but one is not allowed to run against the edge directions.
-
- The algorithm is more complex than the WCC algorithm and, in general, requires more memory.
-
-All above algorithms assign a component ID to each vertex, a number which is
-written into the specified `resultField`. All vertices from the same component
-obtain the same component ID, every two vertices from different components
-obtain different IDs.
-
-```js
-var pregel = require("@arangodb/pregel");
-
-// connected components
-pregel.start("connectedcomponents", "graphname", { resultField: "component" });
-
-// weakly connected components
-pregel.start("wcc", "graphname", { resultField: "component_weak" });
-
-// strongly connected components
-pregel.start("scc", "graphname", { resultField: "component_strong" });
-```
-
-### Hyperlink-Induced Topic Search (HITS)
-
-HITS is a link analysis algorithm that rates Web pages, developed by
-Jon Kleinberg in J. Kleinberg,
-[Authoritative sources in a hyperlinked environment](http://www.cs.cornell.edu/home/kleinber/auth.pdf),
-Journal of the ACM. 46 (5): 604–632, 1999. The algorithm is also known as _Hubs and Authorities_.
-
-The idea behind hubs and authorities comes from the typical structure of the early web:
-certain websites, known as hubs, serve as large directories that are not actually
-authoritative on the information that they point to. These hubs are used as
-compilations of a broad catalog of information that leads users to other,
-authoritative, webpages.
-
-The algorithm assigns two scores to each vertex: the authority score and the
-hub score. The authority score of a vertex rates the total hub score of vertices
-pointing to that vertex; the hub score rates the total authority
-score of vertices pointed by it. Also see
-[en.wikipedia.org/wiki/HITS_algorithm](https://en.wikipedia.org/wiki/HITS_algorithm).
-Note, however, that this version of the algorithm is slightly different from that of the original paper.
-
-ArangoDB offers two versions of the algorithm: Kleinberg's original version and our own version,
-which has some advantages and disadvantages, as discussed below.
-
-Both versions keep two values for each vertex: the hub value and the authority value and update
-both of them in iterations until the corresponding sequences converge or until the maximum number of steps
-is reached. The hub value of a vertex is updated from the authority values of the vertices pointed by it;
-the authority value is updated from the hub values of the vertices pointing to it.
-
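-Stripped of normalization and the technical differences discussed below, this
-mutual update can be sketched as follows (edges are `(from, to)` pairs over
-the vertices in `hubs` and `auths`):
-
-```py
-def hits_step(hubs, auths, edges):
-    # Authority: sum of the hub values of vertices pointing to v
-    new_auths = {v: 0.0 for v in auths}
-    for u, v in edges:
-        new_auths[v] += hubs[u]
-    # Hub: sum of the (updated) authority values of vertices pointed to by u
-    new_hubs = {u: 0.0 for u in hubs}
-    for u, v in edges:
-        new_hubs[u] += new_auths[v]
-    return new_hubs, new_auths
-```
-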
-The differences between the two versions are technical (and we omit the tedious description here)
-but have some less technical implications:
-- The original version needs twice as many global super-steps as our version.
-- The original version is guaranteed to converge; our version may also converge, but there are examples
-  where it does not (for instance, on undirected stars).
-- In the original version, the output values are normed in the sense that the sum of their squared values
-  is 1; our version does not guarantee that.
-
-In a call of either version, the `threshold` parameter can be used to set a limit for the convergence
-(measured as the maximum absolute difference of the hub and authority scores
-between the current and last iteration).
-
-If the value of the result field is `score`, for example, then the hub score is stored in
-the `score_hub` field and the authority score in the `score_auth` field.
-
-The algorithm can be executed like this:
-
-```js
-var pregel = require("@arangodb/pregel");
-var jobId = pregel.start("hits", "graphname", { threshold:0.00001, resultField: "score" });
-```
-
-for ArangoDB's version and
-
-```js
-var pregel = require("@arangodb/pregel");
-var jobId = pregel.start("hitskleinberg", "graphname", { threshold:0.00001, resultField: "score" });
-```
-
-for the original version.
-
-### Vertex Centrality
-
-Centrality measures help identify the most important vertices in a graph.
-They can be used in a wide range of applications:
-to identify influencers in social networks, or middlemen in terrorist
-networks.
-
-There are various definitions for centrality, the simplest one being the
-vertex degree. These definitions were not designed with scalability in mind.
-It is probably impossible to discover an efficient algorithm which computes
-them in a distributed way. Fortunately there are scalable substitutions
-available, which should be equally usable for most use cases.
-
-
-
-#### Effective Closeness
-
-A common definition of centrality is the **closeness centrality**
-(or closeness). The closeness of a vertex in a graph is the inverse average
-length of the shortest path between the vertex and all other vertices.
-For vertices *x*, *y* and shortest distance `d(y, x)` it is defined as:
-
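-In a common formulation, with `n` being the number of vertices:
-
-```latex
-C(x) = \frac{n - 1}{\sum_{y \neq x} d(y, x)}
-```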
-
-
-Effective Closeness approximates the closeness measure. The algorithm works by
-iteratively estimating the number of shortest paths passing through each vertex.
-The score approximates the real closeness score, since it is not possible
-to actually count all shortest paths due to the horrendous `O(n^2 * d)` memory
-requirements. The algorithm is from the paper
-*Centralities in Large Networks: Algorithms and Observations (U Kang et.al. 2011)*.
-
-ArangoDB's implementation approximates the number of shortest paths in each
-iteration by using a HyperLogLog counter with 64 buckets. This should work well
-on large graphs and on smaller ones as well. The memory requirements should be
-**O(n * d)** where *n* is the number of vertices and *d* the diameter of your
-graph. Each vertex stores a counter for each iteration of the algorithm.
-
-The algorithm can be used like this:
-
-```js
-const pregel = require("@arangodb/pregel");
-const jobId = pregel.start("effectivecloseness", "graphname", { resultField: "closeness" });
-```
-
-#### LineRank
-
-Another common measure is the [*betweenness* centrality](https://en.wikipedia.org/wiki/Betweenness_centrality):
-It measures the number of times a vertex is part of shortest paths between any
-pairs of vertices. For a vertex *v* betweenness is defined as:
-
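-In a common formulation:
-
-```latex
-B(v) = \sum_{x \neq v \neq y} \frac{\sigma_{xy}(v)}{\sigma_{xy}}
-```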
-
-
-Where the σ represents the number of shortest paths between *x* and *y*,
-and σ(v) represents the number of paths also passing through a vertex *v*.
-By intuition a vertex with higher betweenness centrality has more
-information passing through it.
-
-**LineRank** approximates the random walk betweenness of every vertex in a
-graph. This is the probability that someone, starting on an arbitrary vertex,
-visits this node when they randomly choose edges to visit.
-
-The algorithm essentially builds a line graph out of your graph
-(switches the vertices and edges), and then computes a score similar to PageRank.
-This can be considered a scalable equivalent to vertex betweenness, which can
-be executed in a distributed manner in ArangoDB. The algorithm is from the paper
-*Centralities in Large Networks: Algorithms and Observations (U Kang et.al. 2011)*.
-
-```js
-const pregel = require("@arangodb/pregel");
-const jobId = pregel.start("linerank", "graphname", { resultField: "linerank" });
-```
-
-### Community Detection
-
-Graphs based on real world networks often have a community structure.
-This means it is possible to find groups of vertices such that each vertex
-group is internally more densely connected than outside the group.
-This has many applications when you want to analyze your networks. For example,
-social networks include community groups (the origin of the term, in fact)
-based on common location, interests, occupation, etc.
-
-#### Label Propagation
-
-*Label Propagation* can be used to implement community detection on large
-graphs. The algorithm assigns a community, more precisely, a Community ID
-(a natural number), to every vertex in the graph.
-The idea is that each vertex should be in the community that most of
-its neighbors are in.
-
-At first, the algorithm assigns unique initial Community IDs to the vertices.
-The assignment is deterministic given the graph and the distribution of vertices
-on the shards, but there is no guarantee that a vertex obtains
-the same initial ID in two different runs of the algorithm, even if the graph does not change
-(because the sharding may change). Moreover, there is no guarantee on a particular
-distribution of the initial IDs over the vertices.
-
-Then, in each iteration, a vertex sends its current Community
-ID to all its neighbor vertices. After that each vertex adopts the Community ID it
-received most frequently in the last step.
-
-Note that, in a usual implementation of Label Propagation, if there are
-multiple most frequently received Community IDs, one is chosen randomly.
-An advantage of our implementation is that this choice is deterministic.
-This comes at the price that the choice rules are somewhat involved:
-If a vertex obtains only one ID and the ID of the vertex from the previous step,
-its old ID, is less than the obtained ID, the old ID is kept.
-(IDs are numbers and thus comparable to each other.) If a vertex obtains
-more than one ID, its new ID is the lowest ID among the most frequently
-obtained IDs. (For example, if the obtained IDs are 1, 2, 2, 3, 3,
-then 2 is the new ID.) If, however, no ID arrives more than once, the new ID is
-the minimum of the lowest obtained IDs and the old ID. (For example, if the
-old ID is 5 and the obtained IDs are 3, 4, 6, then the new ID is 3.
-If the old ID is 2, it is kept.)
-
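-The following Python sketch mirrors these choice rules (for illustration only;
-it is not how the distributed Pregel implementation exchanges messages):
-
-```py
-from collections import Counter
-
-def next_community_id(old_id, received_ids):
-    if not received_ids:
-        return old_id  # nothing received: keep the current ID
-    if len(received_ids) == 1:
-        new_id = received_ids[0]
-        return old_id if old_id < new_id else new_id
-    counts = Counter(received_ids)
-    top = max(counts.values())
-    if top > 1:
-        # some ID arrived more than once: lowest among the most frequent ones
-        return min(i for i, c in counts.items() if c == top)
-    # all received IDs are distinct: minimum of lowest received ID and old ID
-    return min(min(received_ids), old_id)
-
-assert next_community_id(9, [1, 2, 2, 3, 3]) == 2  # examples from above
-assert next_community_id(5, [3, 4, 6]) == 3
-assert next_community_id(2, [3, 4, 6]) == 2
-```
-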
-If a vertex keeps its ID 20 times or more in a row, it does not send its ID.
-Vertices that did not obtain any IDs do not update their ID and do not send it.
-
-The algorithm runs until it converges, which likely never really happens on
-large graphs. Therefore you need to specify a maximum iteration bound.
-The default bound is 500 iterations, which is too large for
-common applications.
-
-The algorithm should work best on undirected graphs. On directed
-graphs, the resulting partition into communities might change, if the number
-of performed steps changes. How strong the dependence is
-may be influenced by the density of the graph.
-
-```js
-const pregel = require("@arangodb/pregel");
-const jobId = pregel.start("labelpropagation", "graphname", { maxGSS: 100, resultField: "community" });
-```
-
-#### Speaker-Listener Label Propagation
-
-The [Speaker-listener Label Propagation](https://arxiv.org/pdf/1109.5720.pdf)
-(SLPA) can be used to implement community detection. It works similarly to the
-label propagation algorithm, but now every vertex additionally accumulates a
-memory of observed labels (instead of forgetting all but one label).
-
-Before the algorithm runs, every vertex is initialized with a unique ID
-(the initial community label).
-During the run, three steps are executed for each vertex:
-
-1. The current vertex is the listener; all other vertices are speakers.
-2. Each speaker sends out a label from memory: a random label is sent with a
-   probability proportional to the number of times the vertex observed the label.
-3. The listener remembers one of the labels; the most frequently observed label
-   is always chosen.
-
-```js
-const pregel = require("@arangodb/pregel");
-const jobId = pregel.start("slpa", "graphname", { maxGSS:100, resultField: "community" });
-```
-
-You can also execute SLPA with the `maxCommunities` parameter to limit the
-number of output communities. Internally the algorithm still keeps the
-memory of all labels, but the output is reduced to just the `n` most frequently
-observed labels.
-
-```js
-const pregel = require("@arangodb/pregel");
-const jobId = pregel.start("slpa", "graphname", { maxGSS: 100, resultField: "community", maxCommunities: 1 });
-// check the status periodically for completion
-pregel.status(jobId);
-```
diff --git a/site/content/3.10/deploy/_index.md b/site/content/3.10/deploy/_index.md
deleted file mode 100644
index 2b049622fb..0000000000
--- a/site/content/3.10/deploy/_index.md
+++ /dev/null
@@ -1,144 +0,0 @@
----
-title: Deploy ArangoDB
-menuTitle: Deploy
-weight: 185
-description: >-
- ArangoDB supports multiple deployment modes to meet the exact needs of your
- project for resilience and performance
----
-For installation instructions, please refer to the
-[Installation](../operations/installation/_index.md) chapter.
-
-For _production_ deployments, please also carefully check the
-[ArangoDB Production Checklist](production-checklist.md).
-
-## Deployment Modes
-
-ArangoDB can be deployed in different configurations, depending on your needs.
-
-### Single Instance
-
-A [Single Instance deployment](single-instance/_index.md) is the simplest way
-to get started. Unlike other setups, which require some specific procedures,
-deploying a stand-alone instance is straightforward: it can be started manually
-or by using the ArangoDB Starter tool.
-
-### Active Failover
-
-[Active Failover deployments](active-failover/_index.md) use ArangoDB's
-multi-node technology to provide high availability for smaller projects with
-fast asynchronous replication from the leading node to multiple replicas.
-If the leader fails, then a follower takes over seamlessly.
-
-### Cluster
-
-[Cluster deployments](cluster/_index.md) are designed for large scale
-operations and analytics, allowing you to scale elastically with your
-applications and data models. ArangoDB's synchronously-replicating cluster
-technology runs on premises, on Kubernetes, and in the cloud on
-[ArangoGraph](https://dashboard.arangodb.cloud/home?utm_source=docs&utm_medium=cluster_pages&utm_campaign=docs_traffic) - ArangoDB's fully managed service.
-
-Clustering ArangoDB not only delivers better performance and capacity improvements,
-but it also provides resilience through replication and automatic failover.
-You can deploy systems that dynamically scale up and down according to demand.
-
-### OneShard
-
-{{< tag "ArangoDB Enterprise Edition" "ArangoGraph" >}}
-
-[OneShard deployments](oneshard.md) are cluster deployments but with the data of
-each database restricted to a single shard. This allows queries to run locally
-on a single DB-Server node for better performance and with transactional
-guarantees similar to a single server deployment. OneShard is primarily intended
-for multi-tenant use cases.
-
-### Datacenter-to-Datacenter
-
-{{< tag "ArangoDB Enterprise Edition" >}}
-
-For cluster deployments, ArangoDB supports
-[Datacenter-to-Datacenter Replication](arangosync/_index.md) (DC2DC). You can
-use it as an additional security feature to replicate your entire cluster
-off-site to another datacenter. The leading datacenter asynchronously replicates
-the data and configuration to the other datacenter for disaster recovery.
-
-## How to deploy
-
-There are different ways to set up and operate ArangoDB.
-
-- You can start all the needed server processes manually, locally or on different
- machines, bare-metal or in Docker containers. This gives you the most control
- but you also need to manually deal with upgrades, outages, and so on.
-
-- You can use the ArangoDB _Starter_ (the _arangodb_ executable) to mostly
- automatically create and keep deployments running, either bare-metal or in
- Docker containers.
-
-- If you want to deploy in your Kubernetes cluster, you can use the
- ArangoDB Kubernetes Operator (`kube-arangodb`).
-
-The fastest way to get ArangoDB up and running is to run it in the cloud - the
-[ArangoGraph Platform](https://dashboard.arangodb.cloud/home?utm_source=docs&utm_medium=cluster_pages&utm_campaign=docs_traffic) offers a fully managed
-cloud service, available on AWS, Microsoft Azure, and Google Cloud Platform.
-
-### Manual Deployment
-
-**Single Instance:**
-
-- [Manually created processes](single-instance/manual-start.md)
-- [Manually created Docker containers](single-instance/manual-start.md#manual-start-in-docker)
-
-**Active Failover:**
-
-- [Manually created processes](active-failover/manual-start.md)
-- [Manually created Docker containers](active-failover/manual-start.md#manual-start-in-docker)
-
-**Cluster:**
-
-- [Manually created processes](cluster/deployment/manual-start.md)
-- [Manually created Docker containers](cluster/deployment/manual-start.md#manual-start-in-docker)
-
-### Deploying using the ArangoDB Starter
-
-Setting up an ArangoDB cluster, for example, involves starting various nodes
-with different roles (Agents, DB-Servers, and Coordinators). The _Starter_
-simplifies this process.
-
-The Starter supports different deployment modes (single server, Active Failover,
-cluster) and it can either use Docker containers or processes (using the
-`arangod` executable).
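-
-For example, to start a cluster with the _Starter_ (a sketch; the addresses
-and data directories are placeholders for your machines):
-
-```bash
-# On the first machine:
-arangodb --starter.mode=cluster --starter.data-dir=./node1
-# On each further machine, join the first one:
-arangodb --starter.mode=cluster --starter.data-dir=./node2 --starter.join=<ip-of-first-machine>
-```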
-
-Besides starting and maintaining ArangoDB deployments, the Starter also provides
-various commands to create TLS certificates and JWT token secrets to secure your
-ArangoDB deployments.
-
-The ArangoDB Starter is an executable called `arangodb` and comes with all
-current distributions of ArangoDB.
-
-If you want a specific version, download the precompiled executable via the
-[GitHub releases page](https://github.com/arangodb-helper/arangodb/releases).
-
-**Single Instance:**
-
-- [_Starter_ using processes](single-instance/using-the-arangodb-starter.md)
-- [_Starter_ using Docker containers](single-instance/using-the-arangodb-starter.md#using-the-arangodb-starter-in-docker)
-
-**Active Failover:**
-
-- [_Starter_ using processes](active-failover/using-the-arangodb-starter.md)
-- [_Starter_ using Docker containers](active-failover/using-the-arangodb-starter.md#using-the-arangodb-starter-in-docker)
-
-**Cluster:**
-
-- [_Starter_ using processes](cluster/deployment/using-the-arangodb-starter.md)
-- [_Starter_ using Docker containers](cluster/deployment/using-the-arangodb-starter.md#using-the-arangodb-starter-in-docker)
-
-### Run in the cloud
-
-- [AWS and Azure](in-the-cloud.md)
-- [ArangoGraph Insights Platform](https://dashboard.arangodb.cloud/home?utm_source=docs&utm_medium=cluster_pages&utm_campaign=docs_traffic),
- fully managed, available on AWS, Azure & GCP
-
-### Run in Kubernetes
-
-- [ArangoDB Kubernetes Operator](kubernetes.md)
diff --git a/site/content/3.10/deploy/active-failover/_index.md b/site/content/3.10/deploy/active-failover/_index.md
deleted file mode 100644
index 1bbfdf8f2a..0000000000
--- a/site/content/3.10/deploy/active-failover/_index.md
+++ /dev/null
@@ -1,122 +0,0 @@
----
-title: Active Failover deployments
-menuTitle: Active Failover
-weight: 10
-description: >-
- You can set up multiple single server instances to have one leader and multiple
- asynchronously replicated followers with automatic failover
----
-An _Active Failover_ is defined as:
-
-- One ArangoDB Single-Server instance which is readable and writable by clients, called the **Leader**
-- One or more ArangoDB Single-Server instances, which are passive and not writable
- called **Followers**, which asynchronously replicate data from the Leader
-- At least one _Agency_ acting as a "witness" to determine which server becomes the _leader_
- in a _failure_ situation
-
-An _Active Failover_ behaves differently from an [ArangoDB Cluster](../cluster/_index.md),
-please see the [limitations section](#limitations) for more details.
-
-
-
-The advantage of the _Active Failover_ setup is that there is an active third party, the _Agency_,
-which observes and supervises all involved server processes.
-_Follower_ instances can rely on the _Agency_ to determine the correct _Leader_ server.
-From an operational point of view, one advantage is that
-the failover, in case the _Leader_ goes down, is automatic. An additional operational
-advantage is that there is no need to start a _replication applier_ manually.
-
-The _Active Failover_ setup is made **resilient** by the fact that all the official
-ArangoDB drivers can automatically determine the correct _leader_ server and
-redirect requests appropriately. Furthermore, Foxx services also automatically
-perform a failover: should the _leader_ instance (which is also the _Foxxmaster_)
-fail, the newly elected _leader_ reinstalls all Foxx services and resumes
-executing queued [Foxx tasks](../../develop/foxx-microservices/guides/scripts-and-scheduling.md).
-[Database users](../../operations/administration/user-management/_index.md)
-which were created on the _leader_ will also be valid on the newly elected _leader_
-(provided that they were already synced).
-
-Consider the case for two *arangod* instances. The two servers are connected via
-server wide (global) asynchronous replication. One of the servers is
-elected _Leader_, and the other one is made a _Follower_ automatically. At startup,
-the two servers race for the leadership position. This happens through the _Agency
-locking mechanism_ (which means that the _Agency_ needs to be available at server start).
-You can control which server becomes the _Leader_ by starting it earlier than
-the other server instances.
-
-The _Follower_ automatically starts replication from the _Leader_ for all
-available databases, using the server-level replication introduced in version 3.3.
-
-When the _Leader_ goes down, this is automatically detected by the _Agency_
-instance, which is also started in this mode. This instance will make the
-previous follower stop its replication and make it the new _Leader_.
-
-{{< info >}}
-The different instances participating in an Active Failover setup are supposed
-to be run in the same _Data Center_ (DC), with a reliable high-speed network
-connection between all the machines participating in the Active Failover setup.
-
-Multi-datacenter Active Failover setups are currently not supported.
-
-A multi-datacenter solution currently supported is the _Datacenter-to-Datacenter Replication_
-(DC2DC) among ArangoDB Clusters. See [DC2DC](../arangosync/deployment/_index.md) chapter for details.
-{{< /info >}}
-
-## Operative Behavior
-
-In contrast to the normal behavior of a single-server instance, the Active-Failover
-mode can change the behavior of ArangoDB in some situations.
-
-The _Follower_ will _always_ deny write requests from client applications. Starting from ArangoDB 3.4,
-read requests are _only_ permitted if the requests are marked with the `X-Arango-Allow-Dirty-Read: true` header,
-otherwise they are denied too.
-Only the replication itself is allowed to access the follower's data until the
-follower becomes a new _Leader_ (should a _failover_ happen).
-
-When sending a request to read or write data on a _Follower_, the _Follower_
-responds with `HTTP 503 (Service unavailable)` and provides the address of
-the current _Leader_. Client applications and drivers can use this information to
-then make a follow-up request to the proper _Leader_:
-
-```
-HTTP/1.1 503 Service Unavailable
-X-Arango-Endpoint: http://[::1]:8531
-....
-```
-
-Client applications can also detect who the current _Leader_ and the _Followers_
-are by calling the `/_api/cluster/endpoints` REST API. This API is accessible
-on _Leader_ and _Followers_ alike.
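-
-For example, from _arangosh_ (the `arango` connection object is available in
-the shell; the response lists the reachable endpoints, typically with the
-_Leader_ first):
-
-```js
-// Ask any instance of the Active Failover setup for the current endpoints.
-arango.GET("/_api/cluster/endpoints");
-```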
-
-## Reading from Followers
-
-Followers in the active-failover setup are in read-only mode. It is possible to read from these
-followers by adding an `X-Arango-Allow-Dirty-Read: true` header to each request. Responses then automatically
-contain the `X-Arango-Potential-Dirty-Read: true` header so that clients can reject accidental dirty reads.
-
-Depending on the driver support for your specific programming language, you should be able
-to enable this option.
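-
-For example, using the low-level HTTP helpers in _arangosh_ (the document path
-is illustrative):
-
-```js
-// Explicitly allow a dirty read when fetching a document from a follower.
-arango.GET_RAW("/_api/document/test/100069",
-  { "x-arango-allow-dirty-read": "true" });
-// The response then carries the x-arango-potential-dirty-read header.
-```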
-
-## How to deploy
-
-The tool _ArangoDB Starter_ supports starting two servers with asynchronous
-replication and failover [out of the box](using-the-arangodb-starter.md).
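-
-For example (a sketch; `A`, `B`, and `C` are placeholders for the addresses of
-your machines):
-
-```bash
-# Run on each of the machines to start an Active Failover setup:
-arangodb --starter.mode=activefailover --starter.join=A,B,C
-```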
-
-The _arangojs_ driver for JavaScript, the Go driver, the Java driver, and
-the PHP driver support active failover in case the currently accessed server endpoint
-responds with `HTTP 503`.
-
-You can also deploy an *Active Failover* environment [manually](manual-start.md).
-
-## Limitations
-
-The _Active Failover_ setup in ArangoDB has a few limitations.
-
-- In contrast to the [ArangoDB Cluster](../cluster/_index.md):
- - Active Failover has only asynchronous replication, and hence **no guarantee** on how many database operations may have been lost during a failover.
- - Active Failover has no global state, and hence a failover to a bad follower (see the example above) overrides all other followers with that state (including the previous leader, which might have more up-to-date data). In a Cluster setup, a global state is provided by the agency and hence ArangoDB is aware of the latest state.
-- Should you add more than one _follower_, be aware that during a _failover_ situation
- the failover attempts to pick the most up-to-date follower as the new leader on a **best-effort** basis.
-- Should you be using the [ArangoDB Starter](../../components/tools/arangodb-starter/_index.md)
- or the [Kubernetes Operator](../kubernetes.md) to manage your Active-Failover
- deployment, be aware that upgrading might trigger an unintentional failover between machines.
diff --git a/site/content/3.10/deploy/arangosync/_index.md b/site/content/3.10/deploy/arangosync/_index.md
deleted file mode 100644
index b660c58918..0000000000
--- a/site/content/3.10/deploy/arangosync/_index.md
+++ /dev/null
@@ -1,129 +0,0 @@
----
-title: Datacenter-to-Datacenter Replication
-menuTitle: Datacenter-to-Datacenter Replication
-weight: 25
-description: >-
- A detailed guide to Datacenter-to-Datacenter Replication (DC2DC) for clusters
- and the _arangosync_ tool
----
-{{< tag "ArangoDB Enterprise Edition" >}}
-
-At some point in the growth of a database, there comes a need for replicating it
-across multiple datacenters.
-
-Reasons for that can be:
-
-- Fallback in case of a disaster in one datacenter
-- Regional availability
-- Separation of concerns
-
-And many more.
-
-ArangoDB supports _Datacenter-to-Datacenter Replication_ via the _arangosync_ tool.
-
-ArangoDB's _Datacenter-to-Datacenter Replication_ is a solution that enables you
-to asynchronously replicate the entire structure and content in an ArangoDB Cluster
-in one place to a Cluster in another place. Typically it is used from one datacenter
-to another. It is possible to replicate to multiple other datacenters as well.
-It is not a solution for replicating single server instances.
-
-
-
-The replication done by _ArangoSync_ is **asynchronous**. That means that when
-a client is writing data into the source datacenter, it will consider the
-request finished before the data has been replicated to the other datacenter.
-The time needed to completely replicate changes to the other datacenter is
-typically in the order of seconds, but this can vary significantly depending on
-load, network & computer capacity.
-
-_ArangoSync_ performs replication in a **single direction** only. That means that
-you can replicate data from cluster _A_ to cluster _B_ or from cluster _B_ to
-cluster _A_, but never at the same time (one leader, one or more follower clusters).
-Data modified in the destination cluster **will be lost!**
-
-Replication is a completely **autonomous** process. Once configured, it is
-designed to run 24/7 without frequent manual intervention.
-This does not mean that it requires no maintenance or attention at all.
-As with any distributed system some attention is needed to monitor its operation
-and keep it secure (e.g. certificate & password rotation).
-
-In the event of an outage of the leader cluster, user intervention is required
-to either bring the leader back up or to decide on making a follower cluster the
-new leader. There is no automatic failover as follower clusters lag behind the leader
-because of network latency etc. and resuming operation with the state of a follower
-cluster can therefore result in the loss of recent writes. How much can be lost
-largely depends on the data rate of the leader cluster and the delay between
-the leader and the follower clusters. Followers will typically be behind the
-leader by a couple of seconds or minutes.
-
-Once configured, _ArangoSync_ will replicate both **structure and data** of an
-**entire cluster**. This means that there is no need to make additional configuration
-changes when adding/removing databases or collections.
-Metadata such as users, Foxx applications & jobs is also automatically replicated.
-
-A message queue developed by ArangoDB in Go and called **DirectMQ** is used for
-replication. It is tailored for DC2DC replication with efficient native
-networking routines.
-
-## When to use it... and when not
-
-The _Datacenter-to-Datacenter Replication_ is a good solution in all cases where
-you want to replicate data from one cluster to another without the requirement
-that the data is available immediately in the other cluster.
-
-The _Datacenter-to-Datacenter Replication_ is not a good solution when one of the
-following applies:
-
-- You want to replicate data from cluster A to cluster B and from cluster B
-  to cluster A at the same time.
-- You need synchronous replication between 2 clusters.
-- There is no network connection between cluster A and B.
-- You want complete control over which databases, collections & documents are replicated and which are not.
-
-## Requirements
-
-To use _Datacenter-to-Datacenter Replication_ you need the following:
-
-- Two datacenters, each running an ArangoDB Enterprise Edition cluster.
-- A network connection between both datacenters with accessible endpoints
- for several components (see individual components for details).
-- TLS certificates for ArangoSync master instances (can be self-signed).
-- Optional (but recommended) TLS certificates for ArangoDB clusters (can be self-signed).
-- Client certificates CA for _ArangoSync masters_ (typically self-signed).
-- Client certificates for _ArangoSync masters_ (typically self-signed).
-- At least 2 instances of the _ArangoSync master_ in each datacenter.
-- One instance of the _ArangoSync worker_ on every machine in each datacenter.
-
-{{< info >}}
-In several places you will need a (x509) certificate.
-The [Certificates](security.md#certificates) section provides more guidance for creating
-and renewing these certificates.
-{{< /info >}}
-
-Besides the above list, you probably want to use the following:
-
-- An orchestrator to keep all components running, e.g. `systemd`.
-- A log file collector for centralized collection & access to the logs of all components.
-- A metrics collector & viewing solution such as _Prometheus_ + _Grafana_.
-
-## Limitations
-
-The _Datacenter-to-Datacenter Replication_ setup in ArangoDB has a few limitations.
-Some of these limitations may be removed in later versions of ArangoDB:
-
-- All the machines where the ArangoDB Server processes run must run the Linux
- operating system using the AMD64 (x86-64) or ARM64 (AArch64) architecture. Clients can run from any platform.
-
-- All the machines where the ArangoSync Server processes run must run the Linux
- operating system using the AMD64 (x86-64) or ARM64 (AArch64) architecture.
- The ArangoSync command line tool is available for Linux, Windows & macOS.
-
-- The entire cluster is replicated. It is not possible to exclude specific
- databases or collections from replication.
-
-- In any DC2DC setup, the minor version of the target cluster must be equal to
- or greater than the minor version of the source cluster. Replication from a higher to a
- lower minor version (i.e., from 3.9.x to 3.8.x) is not supported.
- Syncing between different patch versions of the same minor version is possible, however.
- For example, you cannot sync from a 3.9.1 cluster to a 3.8.7 cluster, but
- you can sync from a 3.9.1 cluster to a 3.9.0 cluster.
diff --git a/site/content/3.10/deploy/architecture/data-sharding.md b/site/content/3.10/deploy/architecture/data-sharding.md
deleted file mode 100644
index d495f38981..0000000000
--- a/site/content/3.10/deploy/architecture/data-sharding.md
+++ /dev/null
@@ -1,192 +0,0 @@
----
-title: Sharding
-menuTitle: Data Sharding
-weight: 10
-description: >-
- ArangoDB can divide collections into multiple shards to distribute the data
- across multiple cluster nodes
----
-ArangoDB organizes its collection data in _shards_. Sharding allows using
-multiple machines to run a cluster of ArangoDB instances that together
-constitute a single database system.
-
-Sharding is used to distribute data across physical machines in an ArangoDB
-Cluster. It is a method to determine the optimal placement of documents on
-individual DB-Servers.
-
-This enables you to store much more data, since ArangoDB distributes the data
-automatically to the different servers. In many situations one can also reap a
-benefit in data throughput, again because the load can be distributed to
-multiple machines.
-
-Using sharding allows ArangoDB to support deployments with large amounts of
-data, which would not fit on a single machine. A high rate of write / read
-operations or AQL queries can also overwhelm a single server's RAM and disk
-capacity.
-
-There are two main ways of scaling a database system:
-- Vertical scaling
-- Horizontal scaling
-
-Vertical scaling means to upgrade to better server hardware (faster
-CPU, more RAM / disk). This can be a cost-effective way of scaling, because
-administration is easy and performance characteristics do not change much.
-Reasoning about the behavior of a single machine is also a lot easier than
-having multiple machines. However at a certain point larger machines are either
-not available anymore or the cost becomes prohibitive.
-
-Horizontal scaling is about increasing the number of servers. Servers are
-typically based on commodity hardware, which is readily available from many
-different Cloud providers. The capability of each single machine may not be
-high, but the combined computing power of these machines can be arbitrarily
-large. Adding more machines on-demand is also typically easier and more
-cost-effective than pre-provisioning a single large machine. Increased
-complexity in infrastructure can be managed using modern containerization and
-cluster orchestration tools like [Kubernetes](../kubernetes.md).
-
-
-
-To achieve this, ArangoDB splits your dataset into so-called _shards_. The number
-of shards is something you may choose according to your needs. Proper sharding
-is essential to achieve optimal performance. From the outside, the process of
-splitting the data and assembling it again is fully transparent and as such we
-achieve the goals of what other systems call "master-master replication".
-
-An application may talk to any _Coordinator_ and it automatically figures
-out where the data is currently stored when reading or is to be stored
-when writing. The information about the _shards_ is shared across all
-_Coordinators_ using the _Agency_.
-
-_Shards_ are configured per _collection_ so multiple _shards_ of data form the
-_collection_ as a whole. To determine in which _shard_ the data is to be stored
-ArangoDB performs a hash across the values. By default, this hash is
-created from the `_key` document attribute.
-
-Every shard is a local collection on a _DB-Server_ that houses such a shard,
-as depicted above for our example with 5 shards and 3 replicas. Here, every
-leading shard _S1_ through _S5_ is followed each by 2 replicas _R1_ through _R5_.
-The collection creation mechanism on ArangoDB _Coordinators_ tries to best
-distribute the shards of a collection among the _DB-Servers_. This suggests
-sharding the data in 5 parts, to make best use of all our
-machines. We further choose a replication factor of 3 as it is a reasonable
-compromise between performance and data safety. This means that the collection
-creation ideally distributes 15 shards, 5 of which are leaders to each 2
-replicas. This in turn implies that a complete pristine replication would
-involve 10 shards which need to catch up with their leaders.
-
-Not all use cases require horizontal scalability. In such cases, consider the
-[OneShard](../oneshard.md) feature as an alternative to flexible sharding.
-
-## Shard Keys
-
-ArangoDB uses the specified _shard key_ attributes to determine in which shard
-a given document is to be stored. Choosing the right shard key can have a
-significant impact on your performance: it can reduce network traffic and
-increase throughput.
-
-
-
-ArangoDB uses consistent hashing to compute the target shard from the given
-values (as specified by the `shardKeys` collection property). The ideal set
-of shard keys allows ArangoDB to distribute documents evenly across your shards
-and your _DB-Servers_. By default ArangoDB uses the `_key` field as a shard key.
-For a custom shard key you should consider a few different properties:
-
-- **Cardinality**: The cardinality of a set is the number of distinct values
- that it contains. A shard key with only _N_ distinct values cannot be hashed
- onto more than _N_ shards. Consider using multiple shard keys, if one of your
- values has a low cardinality.
-
-- **Frequency**: Consider how often a given shard key value may appear in
- your data. Having a lot of documents with identical shard keys leads
- to unevenly distributed data.
-
-This means that a single shard could become a bottleneck in your cluster.
-The effectiveness of horizontal scaling is reduced if most documents end up in
-a single shard. Shards are not divisible at this time, so paying attention to
-the size of shards is important.
-
-Consider both frequency and cardinality when picking a shard key; if necessary,
-consider picking multiple shard keys.
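-
-For example, combining two attributes increases the cardinality of the
-resulting shard key (the attribute names are illustrative):
-
-```js
-db._create("orders", { "numberOfShards": 8, "shardKeys": ["customerId", "country"] });
-```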
-
-### Configuring Shards
-
-The number of _shards_ can be configured at collection creation time, e.g. in
-the web interface or via _arangosh_:
-
-```js
-db._create("sharded_collection", {"numberOfShards": 4, "shardKeys": ["country"]});
-```
-
-In the example above, `country` is used as the shard key, which can be useful
-to keep the data of every country in one shard. This results in better
-performance for queries working on a per-country basis.
-
-It is also possible to specify multiple `shardKeys`.
-
-Note, however, that if you change the shard keys from their default `["_key"]`,
-then finding a document in the collection by its primary key involves a request
-to every single shard. This can be mitigated: All CRUD APIs and AQL
-support using the shard key values as lookup hints. Just send them as part
-of the update, replace, or removal operation, or in case of AQL, use a
-document reference or an object for the UPDATE, REPLACE, or REMOVE
-operation which includes the shard key attributes:
-
-```aql
-UPDATE { _key: "123", country: "…" } WITH { … } IN sharded_collection
-```
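-
-The same applies to the document CRUD operations, for example (the attribute
-values are illustrative):
-
-```js
-// Passing the shard key value lets the Coordinator contact only one shard.
-db.sharded_collection.update({ "_key": "123", "country": "DE" }, { "visited": true });
-```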
-
-If custom shard keys are used, one can no longer prescribe the primary key value of
-a new document but must use the automatically generated one. This latter
-restriction comes from the fact that ensuring uniqueness of the primary key
-would be very inefficient if the user could specify the primary key.
-
-On which DB-Server in a Cluster a particular _shard_ is kept is undefined.
-There is no option to configure an affinity based on certain _shard_ keys.
-
-For more information on shard rebalancing and administration topics, please have
-a look at the [Cluster Administration](../cluster/administration.md) section.
-
-### Indexes On Shards
-
-Unique indexes on sharded collections are only allowed if the fields used to
-determine the shard key are also included in the list of attribute paths for the index:
-
-| shardKeys | indexKeys | |
-|----------:|----------:|------------:|
-| a | a | allowed |
-| a | b | not allowed |
-| a | a, b | allowed |
-| a, b | a | not allowed |
-| a, b | b | not allowed |
-| a, b | a, b | allowed |
-| a, b | a, b, c | allowed |
-| a, b, c | a, b | not allowed |
-| a, b, c | a, b, c | allowed |
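-
-For example, with `country` as the only shard key, a unique index is allowed
-as long as it includes that attribute (sketch; the second attribute is
-illustrative):
-
-```js
-db.sharded_collection.ensureIndex({ "type": "persistent", "fields": ["country", "email"], "unique": true });
-```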
-
-## High Availability
-
-A cluster can still read from a collection if shards become unavailable for
-some reason. The data residing on the unavailable shard cannot be accessed,
-but reads on other shards can still succeed.
-
-If you enable data redundancy by setting a replication factor of `2` or higher
-for a collection, the collection data remains fully available for reading as
-long as at least one replica of every shard is available.
-In a production environment, you should always deploy your collections with a
-`replicationFactor` greater than `1` to ensure that the shards stay available
-even when a machine fails.
-
-Collection data also remains available for writing as long as a replica of every
-shard is available. You can optionally increase the write concern to require a
-higher number of in-sync shard replicas for writes. The `writeConcern` can be
-as high as the `replicationFactor`.
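-
-For example (the values are illustrative):
-
-```js
-// Keep 3 copies of each shard and require 2 in-sync copies for writes.
-db._create("important_data", { "numberOfShards": 4, "replicationFactor": 3, "writeConcern": 2 });
-```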
-
-## Storage Capacity
-
-The cluster distributes your data across multiple machines in your cluster.
-Every machine only contains a subset of your data. Thus, the cluster has
-the combined storage capacity of all your machines.
-
-Please note that increasing the replication factor also increases the space
-required to keep all your data in the cluster.
diff --git a/site/content/3.10/deploy/cluster/_index.md b/site/content/3.10/deploy/cluster/_index.md
deleted file mode 100644
index 4d10cec023..0000000000
--- a/site/content/3.10/deploy/cluster/_index.md
+++ /dev/null
@@ -1,395 +0,0 @@
----
-title: Cluster deployments
-menuTitle: Cluster
-weight: 15
-description: >-
- ArangoDB clusters are comprised of DB-Servers, Coordinators, and Agents, with
- synchronous data replication between DB-Servers and automatic failover
----
-The Cluster architecture of ArangoDB is a _CP_ master/master model with no
-single point of failure.
-
-With "CP" in terms of the [CAP theorem](https://en.wikipedia.org/wiki/CAP_theorem)
-we mean that in the presence of a
-network partition, the database prefers internal consistency over
-availability. With "master/master" we mean that clients can send their
-requests to an arbitrary node, and experience the same view on the
-database regardless. "No single point of failure" means that the cluster
-can continue to serve requests, even if one machine fails completely.
-
-In this way, ArangoDB has been designed as a distributed multi-model
-database. This section gives a short outline on the Cluster architecture and
-how the above features and capabilities are achieved.
-
-## Structure of an ArangoDB Cluster
-
-An ArangoDB Cluster consists of a number of ArangoDB instances
-which talk to each other over the network. They play different roles,
-which are explained in detail below.
-
-The current configuration
-of the Cluster is held in the _Agency_, which is a highly available,
-resilient key/value store based on an odd number of ArangoDB instances
-running the [Raft Consensus Protocol](https://raft.github.io/).
-
-For the various instances in an ArangoDB Cluster there are three distinct
-roles:
-
-- _Agents_
-- _Coordinators_
-- _DB-Servers_.
-
-
-
-### Agents
-
-One or multiple _Agents_ form the _Agency_ in an ArangoDB Cluster. The
-_Agency_ is the central place to store the configuration in a Cluster. It
-performs leader elections and provides other synchronization services for
-the whole Cluster. Without the _Agency_ none of the other components can
-operate.
-
-While generally invisible to the outside, the _Agency_ is the heart of the
-Cluster. As such, fault tolerance is of course a must-have for the
-_Agency_. To achieve that, the _Agents_ use the
-[Raft Consensus Algorithm](https://raft.github.io/).
-The algorithm formally guarantees
-conflict free configuration management within the ArangoDB Cluster.
-
-At its core the _Agency_ manages a big configuration tree. It supports
-transactional read and write operations on this tree, and other servers
-can subscribe to HTTP callbacks for all changes to the tree.
-
-### Coordinators
-
-_Coordinators_ should be accessible from the outside. These are the ones
-the clients talk to. They coordinate cluster tasks like
-executing queries and running Foxx services. They know where the
-data is stored and optimize where to run user-supplied queries or
-parts thereof. _Coordinators_ are stateless and can thus easily be shut down
-and restarted as needed.
-
-### DB-Servers
-
-_DB-Servers_ are the ones where the data is actually hosted. They
-host shards of data and, using synchronous replication, a _DB-Server_ may
-either be the _leader_ or a _follower_ for a shard. Document operations are first
-applied on the _leader_ and then synchronously replicated to
-all followers.
-
-Shards must not be accessed from the outside but only indirectly through the
-_Coordinators_. They may also execute queries in part or as a whole when
-asked by a _Coordinator_.
-
-See [Sharding](#sharding) below for more information.
-
-## Many sensible configurations
-
-This architecture is very flexible and thus allows many configurations,
-which are suitable for different usage scenarios:
-
- 1. The default configuration is to run exactly one _Coordinator_ and
- one _DB-Server_ on each machine. This achieves the classical
- master/master setup, since there is a perfect symmetry between the
- different nodes: clients can equally well talk to any one of the
- _Coordinators_ and all expose the same view to the data store. _Agents_
- can run on separate, less powerful machines.
- 2. One can deploy more _Coordinators_ than _DB-Servers_. This is a sensible
- approach if one needs a lot of CPU power for the Foxx services,
- because they run on the _Coordinators_.
- 3. One can deploy more _DB-Servers_ than _Coordinators_ if more data capacity
- is needed and the query performance is the lesser bottleneck.
- 4. One can deploy a _Coordinator_ on each machine where an application
- server (e.g. a node.js server) runs, and the _Agents_ and _DB-Servers_
- on a separate set of machines elsewhere. This avoids a network hop
- between the application server and the database and thus decreases
- latency. Essentially, this moves some of the database distribution
- logic to the machine where the client runs.
-
-As you can see, the _Coordinator_ layer can be scaled and deployed independently
-from the _DB-Server_ layer.
-
-{{< warning >}}
-It is a best practice and a recommended approach to run _Agent_ instances
-on different machines than _DB-Server_ instances.
-
-When deploying using the tool [_Starter_](../../components/tools/arangodb-starter/_index.md)
-this can be achieved by using the options `--cluster.start-dbserver=false` and
-`--cluster.start-coordinator=false` on the first three machines where the _Starter_
-is started, if the desired _Agency_ _size_ is 3, or on the first 5 machines
-if the desired _Agency_ _size_ is 5.
-{{< /warning >}}
-
-{{< info >}}
-The different instances that form a Cluster are supposed to be run in the same
-_Data Center_ (DC), with a reliable and high-speed network connection between
-all the machines participating in the Cluster.
-
-Multi-datacenter Clusters, where the entire structure and content of a Cluster located
-in a specific DC is replicated to other Clusters located in different DCs, are
-possible as well. See [Datacenter-to-Datacenter Replication](../arangosync/deployment/_index.md)
-(DC2DC) for further details.
-{{< /info >}}
-
-## Sharding
-
-Using the roles outlined above an ArangoDB Cluster is able to distribute
-data in so called _shards_ across multiple _DB-Servers_. Sharding
-allows to use multiple machines to run a cluster of ArangoDB
-instances that together constitute a single database. This enables
-you to store much more data, since ArangoDB distributes the data
-automatically to the different servers. In many situations one can
-also reap a benefit in data throughput, again because the load can
-be distributed to multiple machines.
-
-
-
-From the outside this process is fully transparent:
-An application may talk to any _Coordinator_ and
-it automatically figures out where the data is currently stored when reading
-or is to be stored when writing. The information about the _shards_
-is shared across all _Coordinators_ using the _Agency_.
-
-_Shards_ are configured per _collection_ so multiple _shards_ of data form
-the _collection_ as a whole. To determine in which _shard_ the data is to
-be stored, ArangoDB performs a hash across the values. By default, this
-hash is created from the document's `_key` attribute.
-
-For further information, please refer to the
-[_Cluster Sharding_](../architecture/data-sharding.md) section.
-
-## OneShard
-
-A OneShard deployment offers a practicable solution that enables significant
-performance improvements by massively reducing cluster-internal communication
-and allows running transactions with ACID guarantees on shard leaders.
-
-For more information, please refer to the [OneShard](../oneshard.md)
-chapter.
-
-## Synchronous replication
-
-In an ArangoDB Cluster, the replication among the data stored by the _DB-Servers_
-is synchronous.
-
-Synchronous replication works on a per-shard basis. Using the `replicationFactor`
-option, you can configure for each _collection_ how many copies of each _shard_
-are kept in the Cluster.
-
-{{< danger >}}
-If a collection has a _replication factor_ of `1`, its data is **not**
-replicated to other _DB-Servers_. This exposes you to a risk of data loss, if
-the machine running the _DB-Server_ with the only copy of the data fails permanently.
-
-You need to set the _replication factor_ to a value equal to or higher than `2`
-to achieve minimal data redundancy via the synchronous replication.
-
-You need to set a _replication factor_ equal to or higher than `2`
-**explicitly** when creating a collection, or you can adjust it later if you
-forgot to set it at creation time. You can also enforce a
-minimum replication factor for all collections by setting the
-[`--cluster.min-replication-factor` startup option](../../components/arangodb-server/options.md#--clustermin-replication-factor).
-
-When using a Cluster, please make sure all the collections that are important
-(and should not be lost in any case) have a _replication factor_ equal to or
-higher than `2`.
-{{< /danger >}}
-
-At any given time, one of the copies is declared to be the _leader_ and
-all other replicas are _followers_. Internally, write operations for this _shard_
-are always sent to the _DB-Server_ which happens to hold the _leader_ copy,
-which in turn replicates the changes to all _followers_ before the operation
-is considered to be done and reported back to the _Coordinator_.
-Internally, read operations are all served by the _DB-Server_ holding the _leader_ copy;
-this allows providing snapshot semantics for complex transactions.
-
-Using synchronous replication alone guarantees consistency and high availability
-at the cost of reduced performance: write requests have a higher latency
-(due to every write-request having to be executed on the _followers_) and
-read requests do not scale out as only the _leader_ is being asked.
-
-In a Cluster, synchronous replication is managed by the _Coordinators_ for the client.
-The data is always stored on the _DB-Servers_.
-
-The following example gives you an idea of how synchronous operation
-has been implemented in ArangoDB Cluster:
-
-1. Connect to a _Coordinator_ via [_arangosh_](../../components/tools/arangodb-shell/_index.md)
-2. Create a collection: `db._create("test", {"replicationFactor": 2});`
-3. The _Coordinator_ figures out a *leader* and one *follower* and creates
- one *shard* (as this is the default)
-4. Insert data: `db.test.insert({"foo": "bar"});`
-5. The _Coordinator_ writes the data to the _leader_, which in turn
- replicates it to the _follower_.
-6. Only when both are successful, the result is reported indicating success:
-
- ```json
- {
- "_id" : "test/7987",
- "_key" : "7987",
- "_rev" : "7987"
- }
- ```
-
-Synchronous replication comes at the cost of an increased latency for
-write operations, simply because there is one more network hop within the
-Cluster for every request. Therefore the user can set the _replicationFactor_
-to 1, which means that only one copy of each shard is kept, thereby
-switching off synchronous replication. This is a suitable setting for
-less important or easily recoverable data for which low latency write
-operations matter.
-
-## Automatic failover
-
-### Failure of a follower
-
-If a _DB-Server_ that holds a _follower_ copy of a _shard_ fails, then the _leader_
-can no longer synchronize its changes to that _follower_. After a short timeout
-(3 seconds), the _leader_ gives up on the _follower_ and declares it to be
-out of sync.
-
-One of the following two cases can happen:
-
-- **A**: If another _DB-Server_ (that does not hold a _replica_ for this _shard_ already)
- is available in the Cluster, a new _follower_ is automatically
- created on this other _DB-Server_ (so the _replication factor_ constraint is
- satisfied again).
-
-- **B**: If no other _DB-Server_ (that does not hold a _replica_ for this _shard_ already)
- is available, the service continues with one _follower_ less than the number
- prescribed by the _replication factor_.
-
-If the old _DB-Server_ with the _follower_ copy comes back, one of the following
-two cases can happen:
-
-- Following case **A**, the _DB-Server_ recognizes that there is a new
- _follower_ that was elected in the meantime, so it is no longer a _follower_
- for that _shard_.
-
-- Following case **B**, the _DB-Server_ automatically resynchronizes its
- data with the _leader_. The _replication factor_ constraint is now satisfied again
- and order is restored.
-
-### Failure of a leader
-
-If a _DB-Server_ that holds a _leader_ copy of a shard fails, then the _leader_
-can no longer serve any requests. It no longer sends a heartbeat to
-the _Agency_. Therefore, a _supervision_ process running in the _Raft_ _leader_
-of the Agency can take the necessary action (after 15 seconds of missing
-heartbeats), namely to promote one of the _DB-Servers_ that hold in-sync
-replicas of the _shard_ to _leader_ for that _shard_. This involves a
-reconfiguration in the _Agency_ and leads to the fact that _Coordinators_
-now contact a different _DB-Server_ for requests to this _shard_. Service
-resumes. The other surviving _replicas_ automatically resynchronize their
-data with the new _leader_.
-
-In addition to the above, one of the following two cases can happen:
-
-- **A**: If another _DB-Server_ (that does not hold a _replica_ for this _shard_ already)
- is available in the Cluster, a new _follower_ is automatically
- created on this other _DB-Server_ (so the _replication factor_ constraint is
- satisfied again).
-
-- **B**: If no other _DB-Server_ (that does not hold a _replica_ for this _shard_ already)
-  is available, the service continues with one _follower_ less than the number
- prescribed by the _replication factor_.
-
-When the _DB-Server_ with the original _leader_ copy comes back, it recognizes
-that a new _leader_ was elected in the meantime, and one of the following
-two cases can happen:
-
-- Following case **A**, since also a new _follower_ was created and
- the _replication factor_ constraint is satisfied, the _DB-Server_ is no
- longer a _follower_ for that _shard_.
-
-- Following case **B**, the _DB-Server_ notices that it now holds
- a _follower_ _replica_ of that _shard_ and it resynchronizes its data with the
- new _leader_. The _replication factor_ constraint is satisfied again,
- and order is restored.
-
-The following example gives you an idea of how _failover_
-has been implemented in ArangoDB Cluster:
-
-1. The _leader_ of a _shard_ (let's name it _DBServer001_) is going down.
-2. A _Coordinator_ is asked to return a document: `db.test.document("100069");`
-3. The _Coordinator_ determines which server is responsible for this document
- and finds _DBServer001_
-4. The _Coordinator_ tries to contact _DBServer001_ and times out because it is
- not reachable.
-5. After a short while, the _supervision_ (running in parallel on the _Agency_)
- sees that _heartbeats_ from _DBServer001_ are not coming in
-6. The _supervision_ promotes one of the _followers_ (say _DBServer002_), that
- is in sync, to be _leader_ and makes _DBServer001_ a _follower_.
-7. As the _Coordinator_ continues trying to fetch the document, it sees that
- the _leader_ changed to _DBServer002_
-8. The _Coordinator_ tries to contact the new _leader_ (_DBServer002_) and returns
- the result:
- ```json
- {
- "_key" : "100069",
- "_id" : "test/100069",
- "_rev" : "513",
- "foo" : "bar"
- }
- ```
-9. After a while the _supervision_ declares _DBServer001_ to be completely dead.
-10. A new _follower_ is determined from the pool of _DB-Servers_.
-11. The new _follower_ syncs its data from the _leader_ and order is restored.
-
-Please note that there may still be timeouts. Depending on when exactly
-the request was made (with regard to the _supervision_) and depending
-on the time needed to reconfigure the Cluster, the _Coordinator_ might fail
-with a timeout error.
-
-## Shard movement and resynchronization
-
-All _shard_ data synchronizations are done in an incremental way, such that
-resynchronizations are quick. This technology allows moving shards
-(_follower_ and _leader_ ones) between _DB-Servers_ without service interruptions.
-Therefore, an ArangoDB Cluster can move all the data on a specific _DB-Server_
-to other _DB-Servers_ and then shut down that server in a controlled way.
-This allows scaling down an ArangoDB Cluster without service interruption,
-loss of fault tolerance, or data loss. Furthermore, one can re-balance the
-distribution of the _shards_, either manually or automatically.
-
-All these operations can be triggered via a REST/JSON API or via the
-graphical web interface. All fail-over operations are completely handled within
-the ArangoDB Cluster.
-
-## Microservices and zero administration
-
-The design and capabilities of ArangoDB are geared towards usage in
-modern microservice architectures of applications. With the
-[Foxx services](../../develop/foxx-microservices/_index.md) it is very easy to deploy a data
-centric microservice within an ArangoDB Cluster.
-
-In addition, one can deploy multiple instances of ArangoDB within the
-same project. One part of the project might need a scalable document
-store, another might need a graph database, and yet another might need
-the full power of a multi-model database actually mixing the various
-data models. There are enormous efficiency benefits to be reaped by
-being able to use a single technology for various roles in a project.
-
-To simplify the life of the _devops_ in such a scenario, we try as much as
-possible to use a _zero administration_ approach for ArangoDB. A running
-ArangoDB Cluster is resilient against failures and essentially repairs
-itself in case of temporary failures.
-
-## Deployment
-
-An ArangoDB Cluster can be deployed in several ways, e.g. by manually
-starting all the needed instances, by using the tool
-[_Starter_](../../components/tools/arangodb-starter/_index.md), in Docker and in Kubernetes.
-
-See the [Cluster Deployment](deployment/_index.md)
-chapter for instructions.
-
-ArangoDB is also available as a cloud service, the
-[**ArangoGraph Insights Platform**](https://dashboard.arangodb.cloud/home?utm_source=docs&utm_medium=cluster_pages&utm_campaign=docs_traffic).
-
-## Cluster ID
-
-Every ArangoDB instance in a Cluster is assigned a unique
-ID during its startup. Using its ID, a node is identifiable
-throughout the Cluster. All cluster operations communicate
-via this ID.
diff --git a/site/content/3.10/deploy/cluster/deployment/_index.md b/site/content/3.10/deploy/cluster/deployment/_index.md
deleted file mode 100644
index 4bd6b1550d..0000000000
--- a/site/content/3.10/deploy/cluster/deployment/_index.md
+++ /dev/null
@@ -1,96 +0,0 @@
----
-title: Cluster Deployment
-menuTitle: Deployment
-weight: 5
-description: ''
----
-You can deploy an ArangoDB cluster in different ways:
-
-- [Using the ArangoDB Starter](using-the-arangodb-starter.md)
-- [Manual Start](manual-start.md)
-- [Kubernetes](../../kubernetes.md)
-- [ArangoGraph Insights Platform](https://dashboard.arangodb.cloud/home?utm_source=docs&utm_medium=cluster_pages&utm_campaign=docs_traffic),
- fully managed, available on AWS, Azure & GCP
-
-## Preliminary Information For Debian/Ubuntu Systems
-
-### Use a different configuration file for the Cluster instance
-
-The configuration file used for the standalone instance is
-`/etc/arangodb3/arangod.conf` (on Linux), and you should use a different one for
-the cluster instance(s). If you are using the _Starter_ binary `arangodb`, that is
-automatically the case. Otherwise, you might have to copy the configuration
-somewhere else and pass it to your `arangod` cluster instance via
-`--configuration`.
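-
-For example (the file path is illustrative):
-
-```bash
-# Start the cluster instance with its own configuration file.
-arangod --configuration /etc/arangodb3/arangod-cluster.conf
-```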
-
-### Use a different data directory for the standalone instance
-
-The data directory is configured in `arangod.conf`:
-
-```conf
-[database]
-directory = /var/lib/arangodb3
-```
-
-You have to make sure that the Cluster instance uses a different data directory
-than the standalone instance. If that is not already the case, change the
-`database.directory` entry in `arangod.conf` as seen above to a different
-directory
-
-```conf
-# in arangod.conf:
-[database]
-directory = /var/lib/arangodb3.standalone
-```
-
-and create it with the correct permissions:
-
-```bash
-$ mkdir -vp /var/lib/arangodb3.standalone
-$ chown -c arangodb:arangodb /var/lib/arangodb3.standalone
-$ chmod -c 0700 /var/lib/arangodb3.standalone
-```
-
-### Use a different socket for the standalone instance
-
-The standalone instance must use a different socket, i.e. it cannot use the
-same port on the same network interface as the Cluster. For that, change the
-standalone instance's port in `/etc/arangodb3/arangod.conf`
-
-```conf
-[server]
-endpoint = tcp://127.0.0.1:8529
-```
-
-to something unused, for example:
-
-```conf
-[server]
-endpoint = tcp://127.1.2.3:45678
-```
-
-### Use a different *init* script for the Cluster instance
-
-This section applies to SystemV-compatible init systems (e.g. sysvinit, OpenRC,
-upstart). The steps are different for systemd.
-
-The package install scripts use the default _init_ script `/etc/init.d/arangodb3`
-(on Linux) to stop and start ArangoDB during the installation. If you are using
-an _init_ script for your Cluster instance, make sure it is named differently.
-Otherwise, the installation might overwrite your _init_ script.
-
-If you have previously changed the default _init_ script, move it out of the way
-
-```bash
-$ mv -vi /etc/init.d/arangodb3 /etc/init.d/arangodb3.cluster
-```
-
-and add it to the _autostart_; how this is done depends on your distribution and
-_init_ system. On older Debian and Ubuntu systems, you can use `update-rc.d`:
-
-```bash
-$ update-rc.d arangodb3.cluster defaults
-```
-
-Make sure your _init_ script uses a different `PIDFILE` than the default script!
diff --git a/site/content/3.10/deploy/oneshard.md b/site/content/3.10/deploy/oneshard.md
deleted file mode 100644
index cd4eed572b..0000000000
--- a/site/content/3.10/deploy/oneshard.md
+++ /dev/null
@@ -1,320 +0,0 @@
----
-title: OneShard cluster deployments
-menuTitle: OneShard
-weight: 20
-description: >-
- The OneShard feature offers a practicable solution that enables significantly
- improved performance and transactional guarantees for cluster deployments
----
-{{< tag "ArangoDB Enterprise Edition" "ArangoGraph" >}}
-
-The OneShard option for ArangoDB clusters restricts all collections of a
-database to a single shard so that every collection has `numberOfShards` set to `1`,
-and all leader shards are placed on one DB-Server node. This way, whole queries
-can be pushed to and executed on that server, massively reducing cluster-internal
-communication. The Coordinator only gets back the final result.
-
-Queries are always limited to a single database, and with the data of a whole
-database on a single node, the OneShard option allows running transactions with
-ACID guarantees on shard leaders.
-
-Collections can have replicas by setting a `replicationFactor` greater than `1`
-as usual. For each replica, the follower shards are all placed on one DB-Server
-node when using the OneShard option. This allows for a quick failover in case
-the DB-Server with the leader shards fails.
-
-A OneShard setup is highly recommended for most graph use cases and join-heavy
-queries.
-
-{{< info >}}
-For graphs larger than what fits on a single DB-Server node, you can use the
-[SmartGraphs](../graphs/smartgraphs/_index.md) feature to efficiently limit the
-network hops between Coordinator and DB-Servers.
-{{< /info >}}
-
-Without the OneShard feature, query processing works as follows in a cluster:
-
-- The Coordinator accepts and analyzes the query.
-- If collections are accessed then the Coordinator distributes the accesses
- to collections to different DB-Servers that hold parts (shards) of the
- collections in question.
-- This distributed access requires network-traffic from Coordinator to
- DB-Servers and back from DB-Servers to Coordinators and is therefore
- expensive.
-
-Another cost factor is the memory and CPU time required on the Coordinator
-when it has to process several concurrent complex queries. In such
-situations, Coordinators may become a bottleneck in query processing,
-because they need to send and receive data on several connections, build up
-results for collection accesses from the received parts, and perform further
-processing.
-
-
-
-If the database involved in a query is a OneShard database,
-then the OneShard optimization can be applied to run the query on the
-responsible DB-Server node as if on a single server. However, since it is still
-a cluster setup, collections can be replicated synchronously to ensure
-resilience.
-
-### How to use the OneShard feature
-
-The OneShard feature is enabled by default if you use the ArangoDB
-Enterprise Edition and if the database is sharded as `"single"`. In this case the
-optimizer rule `cluster-one-shard` is applied automatically.
-There are two ways to achieve this:
-
-- If you want your entire cluster to be a OneShard deployment, use the
- [startup option](../components/arangodb-server/options.md#cluster)
- `--cluster.force-one-shard`. It sets the immutable `sharding` database
- property to `"single"` for all newly created databases, which in turn
- enforces the OneShard conditions for collections that are created in it.
- The `_graphs` system collection is used for `distributeShardsLike`.
-
-- For individual OneShard databases, set the `sharding` database property to `"single"`
- to enforce the OneShard condition. The `_graphs` system collection is used for
- `distributeShardsLike`. It is not possible to change the `sharding` database
- property afterwards or overwrite this setting for individual collections.
- For non-OneShard databases the value of the `sharding` database property is
- either `""` or `"flexible"`.
-
-{{< info >}}
-The prototype collection not only controls the sharding, but also the
-replication factor for all collections which follow its example. If the
-`_graphs` system collection is used for `distributeShardsLike`, then the
-replication factor can be adjusted by changing the `replicationFactor`
-property of the `_graphs` collection (affecting this and all following
-collections) or via the startup option `--cluster.system-replication-factor`
-(affecting all system collections and all following collections).
-{{< /info >}}
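-
-For example, to adjust the replication factor of the `_graphs` prototype
-collection for all collections created afterwards (a sketch):
-
-```js
-db._graphs.properties({ "replicationFactor": 2 });
-```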
-
-**Example**
-
-The easiest way to make use of the OneShard feature is to create a database
-with the extra option `{ sharding: "single" }`, as done in the following
-example:
-
-```js
-arangosh> db._createDatabase("oneShardDB", { sharding: "single" } )
-
-arangosh> db._useDatabase("oneShardDB")
-
-arangosh@oneShardDB> db._properties()
-{
- "id" : "6010005",
- "name" : "oneShardDB",
- "isSystem" : false,
- "sharding" : "single",
- "replicationFactor" : 1,
- "writeConcern" : 1,
- "path" : ""
-}
-```
-
-Now you can go ahead and create a collection as usual:
-
-```js
-arangosh@oneShardDB> db._create("example1")
-
-arangosh@oneShardDB> db.example1.properties()
-{
- "isSmart" : false,
- "isSystem" : false,
- "waitForSync" : false,
- "shardKeys" : [
- "_key"
- ],
- "numberOfShards" : 1,
- "keyOptions" : {
- "allowUserKeys" : true,
- "type" : "traditional"
- },
- "replicationFactor" : 2,
- "minReplicationFactor" : 1,
- "writeConcern" : 1,
- "distributeShardsLike" : "_graphs",
- "shardingStrategy" : "hash",
- "cacheEnabled" : false
-}
-```
-
-As you can see, the `numberOfShards` is set to `1` and `distributeShardsLike`
-is set to `_graphs`. These attributes have automatically been set
-because the `{ "sharding": "single" }` options object was
-specified when creating the database.
-
-To do this manually for individual collections, use `{ "sharding": "flexible" }`
-on the database level and then create a collection in the following way:
-
-```js
-db._create("example2", { "numberOfShards": 1 , "distributeShardsLike": "_graphs" })
-```
-
-Here, the `_graphs` collection is used again, but in a flexibly sharded
-database, you can use any other existing collection that has not itself been
-created with the `distributeShardsLike` option.
-
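-For example, a brief sketch using a hypothetical `proto` collection as the
-prototype instead of `_graphs` (the collection names are made up for
-illustration):
-
-```js
-db._create("proto", { "numberOfShards": 1 });
-db._create("example3", { "distributeShardsLike": "proto" });
-```
-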
-### Running Queries
-
-For this arangosh example, first insert a few documents into a collection,
-then create a query and explain it to inspect the execution plan.
-
-```js
-arangosh@oneShardDB> db._create("example")
-
-arangosh@oneShardDB> for (let i = 0; i < 10000; i++) { db.example.insert({ "value" : i }); }
-
-arangosh@oneShardDB> q = "FOR doc IN @@collection FILTER doc.value % 2 == 0 SORT doc.value ASC LIMIT 10 RETURN doc";
-
-arangosh@oneShardDB> db._explain(q, { "@collection" : "example" })
-
-Query String (88 chars, cacheable: true):
- FOR doc IN @@collection FILTER doc.value % 2 == 0 SORT doc.value ASC LIMIT 10 RETURN doc
-
-Execution plan:
- Id NodeType Site Est. Comment
- 1 SingletonNode DBS 1 * ROOT
- 2 EnumerateCollectionNode DBS 10000 - FOR doc IN example /* full collection scan, 1 shard(s) */ FILTER ((doc.`value` % 2) == 0) /* early pruning */
- 5 CalculationNode DBS 10000 - LET #3 = doc.`value` /* attribute expression */ /* collections used: doc : example */
- 6 SortNode DBS 10000 - SORT #3 ASC /* sorting strategy: constrained heap */
- 7 LimitNode DBS 10 - LIMIT 0, 10
- 9 RemoteNode COOR 10 - REMOTE
- 10 GatherNode COOR 10 - GATHER
- 8 ReturnNode COOR 10 - RETURN doc
-
-Indexes used:
- none
-
-Optimization rules applied:
- Id RuleName
- 1 move-calculations-up
- 2 move-filters-up
- 3 move-calculations-up-2
- 4 move-filters-up-2
- 5 cluster-one-shard
- 6 sort-limit
- 7 move-filters-into-enumerate
-
-```
-
-As can be seen in the explain output, almost the entire query is
-executed on the DB-Server (`DBS` for nodes 1-7) and only 10 documents are
-transferred to the Coordinator. If you do the same with a collection
-that consists of several shards, you get a different result:
-
-```js
-arangosh> db._createDatabase("shardedDB")
-
-arangosh> db._useDatabase("shardedDB")
-
-arangosh@shardedDB> db._properties()
-{
- "id" : "6010017",
- "name" : "shardedDB",
- "isSystem" : false,
- "sharding" : "flexible",
- "replicationFactor" : 1,
- "writeConcern" : 1,
- "path" : ""
-}
-
-arangosh@shardedDB> db._create("example", { numberOfShards : 5})
-
-arangosh@shardedDB> for (let i = 0; i < 10000; i++) { db.example.insert({ "value" : i }); }
-
-arangosh@shardedDB> db._explain(q, { "@collection" : "example" })
-
-Query String (88 chars, cacheable: true):
- FOR doc IN @@collection FILTER doc.value % 2 == 0 SORT doc.value ASC LIMIT 10 RETURN doc
-
-Execution plan:
- Id NodeType Site Est. Comment
- 1 SingletonNode DBS 1 * ROOT
- 2 EnumerateCollectionNode DBS 10000 - FOR doc IN example /* full collection scan, 5 shard(s) */ FILTER ((doc.`value` % 2) == 0) /* early pruning */
- 5 CalculationNode DBS 10000 - LET #3 = doc.`value` /* attribute expression */ /* collections used: doc : example */
- 6 SortNode DBS 10000 - SORT #3 ASC /* sorting strategy: constrained heap */
- 11 RemoteNode COOR 10000 - REMOTE
- 12 GatherNode COOR 10000 - GATHER #3 ASC /* parallel, sort mode: heap */
- 7 LimitNode COOR 10 - LIMIT 0, 10
- 8 ReturnNode COOR 10 - RETURN doc
-
-Indexes used:
- none
-
-Optimization rules applied:
- Id RuleName
- 1 move-calculations-up
- 2 move-filters-up
- 3 move-calculations-up-2
- 4 move-filters-up-2
- 5 scatter-in-cluster
- 6 distribute-filtercalc-to-cluster
- 7 distribute-sort-to-cluster
- 8 remove-unnecessary-remote-scatter
- 9 sort-limit
- 10 move-filters-into-enumerate
- 11 parallelize-gather
-```
-
-{{< tip >}}
-You can check whether the OneShard feature is active by inspecting the
-explain output. If the list of rules contains `cluster-one-shard`, then the
-feature is active for the given query.
-{{< /tip >}}
-
-Without the OneShard feature, all documents potentially have to be sent to
-the Coordinator for further processing. With this simple query, this is actually
-not true, because some other optimizations are performed that reduce the number
-of documents. But still, a considerable number of documents has to be
-transferred from the DB-Server to the Coordinator only to apply a `LIMIT` of 10
-documents there. The estimate for the *RemoteNode* is 10,000 in this example,
-whereas it is 10 in the OneShard case.
-
-### ACID Transactions on Leader Shards
-
-ArangoDB's transactional guarantees are tunable. For transactions to be ACID
-on the leader shards in a cluster, a few things need to be considered:
-
-- The AQL query or [Stream Transaction](../develop/http-api/transactions/stream-transactions.md)
- must be eligible for the OneShard optimization, so that it is executed on a
- single DB-Server node.
-- To ensure durability, enable `waitForSync` on query level to wait until data
-  modifications have been written to disk (see the sketch after this list).
-- The collection option `writeConcern: 2` makes sure that a transaction is only
- successful if at least one follower shard is in sync with the leader shard,
- for a total of two shard replicas.
-- The RocksDB storage engine uses intermediate commits for larger document
- operations carried out by standalone AQL queries
- (outside of JavaScript Transactions and Stream Transactions).
-  This potentially breaks the atomicity of transactions. To prevent
-  this for individual queries, you can increase `intermediateCommitSize`
-  (default 512 MB) and `intermediateCommitCount` accordingly as query options.
- Also see [Known limitations for AQL queries](../aql/fundamentals/limitations.md#storage-engine-properties).
-
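-A minimal arangosh sketch of the durability and write concern settings from
-this list (the collection name and document are made up for illustration):
-
-```js
-// writeConcern: 2 requires at least one in-sync follower, i.e. two replicas
-db._create("accounts", { "replicationFactor": 2, "writeConcern": 2 });
-
-// waitForSync at the operation level waits until the write is durable
-db._query(`
-  INSERT { _key: "alice", balance: 100 } INTO accounts
-    OPTIONS { waitForSync: true }
-`);
-```
-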
-### Limitations
-
-The OneShard optimization is used automatically for all eligible AQL queries
-and Stream Transactions.
-
-For AQL queries, any of the following factors currently makes a query
-unsuitable for the OneShard optimization (see the sketch after this list
-for an example):
-
-- The query accesses collections with more than a single shard, different leader
- DB-Servers, or different `distributeShardsLike` prototype collections
-- The query writes into a SatelliteCollection
-- The query accesses an edge collection of a SmartGraph
-- Usage of AQL functions that can only execute on Coordinators.
- These functions are:
- - `APPLY`
- - `CALL`
- - `COLLECTION_COUNT`
- - `COLLECTIONS`
- - `CURRENT_DATABASE`
- - `CURRENT_USER`
- - `FULLTEXT`
- - `NEAR`
- - `SCHEMA_GET`
- - `SCHEMA_VALIDATE`
- - `V8`
- - `VERSION`
- - `WITHIN`
- - `WITHIN_RECTANGLE`
- - User-defined AQL functions (UDFs)
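-
-For example, a query that calls one of these functions cannot run entirely on
-a DB-Server, so the `cluster-one-shard` rule is not applied. A sketch for
-verifying this in arangosh (assuming the `example` collection from above):
-
-```js
-db._explain("RETURN COLLECTION_COUNT('example')");
-// the list of applied optimizer rules does not contain `cluster-one-shard`
-```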
diff --git a/site/content/3.10/develop/http-api/_index.md b/site/content/3.10/develop/http-api/_index.md
deleted file mode 100644
index 3068e60f26..0000000000
--- a/site/content/3.10/develop/http-api/_index.md
+++ /dev/null
@@ -1,111 +0,0 @@
----
-title: HTTP API Documentation
-menuTitle: HTTP API
-weight: 275
-description: >-
- All functionality of ArangoDB servers is provided via a RESTful API over the
- HTTP protocol, and you can call the API endpoints directly, via database
- drivers, or other tools
----
-ArangoDB servers expose an application programming interface (API) for managing
-the database system. It is based on the HTTP protocol that powers the
-world wide web. All interactions with a server are ultimately carried out via
-this HTTP API.
-
-You can use the API by sending HTTP requests to the server directly, but the
-more common way of communicating with the server is via a [database driver](../drivers/_index.md).
-A driver abstracts the complexity of the API away by providing a simple
-interface for your programming language or environment and handling things like
-authentication, connection pooling, asynchronous requests, and multi-part replies
-in the background. You can also use ArangoDB's [web interface](../../components/web-interface/_index.md),
-the [_arangosh_](../../components/tools/arangodb-shell/_index.md) shell, or other tools.
-
-The API documentation is relevant for you in the following cases:
-
-- You want to build or extend a driver.
-- You want to utilize a feature that isn't exposed by your driver or tool.
-- You need to send many requests and avoid any overhead that a driver or tool might add.
-- You operate a server instance and need to perform administrative actions via the API.
-- You are interested in how the low-level communication works.
-
-## RESTful API
-
-The API adheres to the design principles of [REST](https://en.wikipedia.org/wiki/Representational_state_transfer)
-(Representational State Transfer). A REST API is a specific type of HTTP API
-that uses HTTP methods to represent operations on resources (mainly `GET`,
-`POST`, `PATCH`, `PUT`, and `DELETE`), and resources are identified by URIs.
-A resource can be a database record, a server log, or any other data entity or
-object. The communication between client and server is stateless.
-
-A request URL can look like this:
-`http://localhost:8529/_db/DATABASE/_api/document/COLLECTION/KEY?returnOld=true&keepNull=false`
-- `http` is the scheme, which is `https` if you use TLS encryption
-- `localhost` is the hostname, which can be an IP address or domain name including subdomains
-- `8529` is the port
-- `/_db/DATABASE/_api/document/COLLECTION/KEY` is the pathname
-- `?returnOld=true&keepNull=false` is the search string
-- `returnOld=true&keepNull=false` is the query string
-
-The HTTP API documentation mainly describes the available **endpoints**, like
-for updating a document, creating a graph, and so on. Each endpoint description
-starts with the HTTP method and the pathname, like `PATCH /_api/document/{collection}/{key}`.
-- The `PATCH` method is for updating, `PUT` for replacing, `POST` for creating
- (or triggering an action), `DELETE` for removing, `GET` for reading,
- `HEAD` for reading metadata only
-- `/_api/document/…` is the path of ArangoDB's HTTP API for handling documents
-  and can be preceded by `/_db/{database}` with `{database}` replaced by a
-  database name to select a database other than the default `_system` database
-- `{collection}` and `{key}` are placeholders called **Path Parameters** that
- you have to replace with a collection name and document key in this case
-- The pathname can be followed by a question mark and the so-called
- **Query Parameters**, which is a series of key/value pairs separated by
- ampersands to set options, like `/_api/document/COLLECTION/KEY?returnOld=true&keepNull=false`
-- Some endpoints allow you to specify **HTTP headers** in the request
- (not in the URL), like `If-Match: "REVISION"`
-- A **Request Body** is the payload you may need to send, typically JSON data
-- **Responses** are the possible HTTP responses in reply to your request in terms
- of the HTTP status code and typically a JSON payload with a result or error information
-
-On the wire, a simplified HTTP request can look like this:
-
-```
-PATCH /_api/document/coll1/docA?returnOld=true HTTP/1.1
-Host: localhost:8529
-Authorization: Basic cm9vdDo=
-If-Match: "_hV2oH9y---"
-Content-Type: application/json; charset=utf-8
-Content-Length: 20
-
-{"attr":"new value"}
-```
-
-And a simplified HTTP response can look like this:
-
-```
-HTTP/1.1 202 Accepted
-Etag: "_hV2r5XW---"
-Location: /_db/_system/_api/document/coll1/docA
-Server: ArangoDB
-Connection: Keep-Alive
-Content-Type: application/json; charset=utf-8
-Content-Length: 160
-
-{"_id":"coll1/docA","_key":"docA","_rev":"_hV2r5XW---","_oldRev":"_hV2oH9y---","old":{"_key":"docA","_id":"coll1/docA","_rev":"_hV2oH9y---","attr":"value"}}
-```
-
-## Swagger specification
-
-ArangoDB's RESTful HTTP API is documented using the industry-standard
-**OpenAPI Specification**, more specifically [OpenAPI version 3.1](https://swagger.io/specification/).
-You can explore the API with the interactive **Swagger UI** using the
-[ArangoDB web interface](../../components/web-interface/_index.md).
-
-1. Click **SUPPORT** in the main navigation of the web interface.
-2. Click the **Rest API** tab.
-3. Click a section and endpoint to view the description and parameters.
-
-Also see this blog post:
-[Using the ArangoDB Swagger.io Interactive API Documentation](https://www.arangodb.com/2018/03/using-arangodb-swaggerio-interactive-api-documentation/).
diff --git a/site/content/3.10/develop/http-api/documents.md b/site/content/3.10/develop/http-api/documents.md
deleted file mode 100644
index ec68f4fb1e..0000000000
--- a/site/content/3.10/develop/http-api/documents.md
+++ /dev/null
@@ -1,2616 +0,0 @@
----
-title: HTTP interface for documents
-menuTitle: Documents
-weight: 30
-description: >-
- The HTTP API for documents lets you create, read, update, and delete documents
- in collections, either one or multiple at a time
----
-The basic operations for documents are mapped to the standard HTTP methods:
-
-- Create: `POST`
-- Read: `GET`
-- Update: `PATCH` (partially modify)
-- Replace: `PUT`
-- Delete: `DELETE`
-- Check: `HEAD` (test for existence and get document metadata)
-
-## Addresses of documents
-
-Any document can be retrieved using its unique URI:
-
-```
-http://server:port/_api/document/<document-identifier>
-```
-
-For example, if the document identifier is `demo/362549736`, then the URL
-of that document is:
-
-```
-http://localhost:8529/_api/document/demo/362549736
-```
-
-The above URL schema does not specify a [database name](../../concepts/data-structure/databases.md#database-names)
-explicitly, so the default `_system` database is used. To explicitly specify the
-database context, use the following URL schema:
-
-```
-http://server:port/_db/<database-name>/_api/document/<document-identifier>
-```
-
-For example, using a database called `mydb`:
-
-```
-http://localhost:8529/_db/mydb/_api/document/demo/362549736
-```
-
-{{< tip >}}
-Many examples in the documentation use the short URL format (and thus the
-`_system` database) for brevity.
-{{< /tip >}}
-
-### Multiple documents in a single request
-
-The document API can handle not only single documents but multiple documents in
-a single request. This is crucial for performance, particularly in a cluster,
-where a single request can involve multiple network hops within the cluster.
-Another advantage is that it reduces the overhead of the HTTP protocol and
-individual network round trips between the client and the server. The general
-idea for performing multiple document operations in a single request is to use
-a JSON array of objects in place of a single document. As a consequence,
-document keys, identifiers, and revisions for preconditions have to be
-supplied embedded in the individual documents. Multiple document operations
-are restricted to a single collection
-(document collection or edge collection).
-
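-For example, a hypothetical payload for creating three documents in one
-request is simply a JSON array of document objects:
-
-```
-[
-  { "_key": "a", "value": 1 },
-  { "_key": "b", "value": 2 },
-  { "_key": "c", "value": 3 }
-]
-```
-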
-Note that the `GET`, `HEAD`, and `DELETE` HTTP operations generally do
-not allow passing a message body. Thus, they cannot be used to perform
-multiple document operations in one request. However, there are alternative
-endpoints to request and delete multiple documents in one request.
-
-### Single document operations
-
-#### Get a document
-
-```openapi
-paths:
- /_api/document/{collection}/{key}:
- get:
- operationId: getDocument
- description: |
- Returns the document identified by the collection name and document key.
- The returned document contains three special attributes:
-        - `_id`, containing the document identifier with the format `<collection-name>/<document-key>`.
- - `_key`, containing the document key that uniquely identifies a document within the collection.
- - `_rev`, containing the document revision.
- parameters:
- - name: collection
- in: path
- required: true
- description: |
- Name of the collection from which the document is to be read.
- schema:
- type: string
- - name: key
- in: path
- required: true
- description: |
- The document key.
- schema:
- type: string
- - name: If-None-Match
- in: header
- required: false
- description: |
-            If the "If-None-Match" header is given, then it must contain exactly one
-            ETag. The document is returned if it has a different revision than the
-            given ETag. Otherwise an *HTTP 304* is returned.
- schema:
- type: string
- - name: If-Match
- in: header
- required: false
- description: |
-            If the "If-Match" header is given, then it must contain exactly one
-            ETag. The document is returned if it has the same revision as the
-            given ETag. Otherwise an *HTTP 412* is returned.
- schema:
- type: string
- - name: x-arango-allow-dirty-read
- in: header
- required: false
- description: |
- Set this header to `true` to allow the Coordinator to ask any shard replica for
- the data, not only the shard leader. This may result in "dirty reads".
-
- The header is ignored if this operation is part of a Stream Transaction
- (`x-arango-trx-id` header). The header set when creating the transaction decides
- about dirty reads for the entire transaction, not the individual read operations.
- schema:
- type: boolean
- - name: x-arango-trx-id
- in: header
- required: false
- description: |
- To make this operation a part of a Stream Transaction, set this header to the
- transaction ID returned by the `POST /_api/transaction/begin` call.
- schema:
- type: string
- responses:
- '200':
- description: |
- is returned if the document was found
- '304':
- description: |
- is returned if the "If-None-Match" header is given and the document has
- the same version
- '404':
- description: |
- is returned if the document or collection was not found
- '412':
- description: |
- is returned if an "If-Match" header is given and the found
- document has a different version. The response will also contain the found
- document's current revision in the `_rev` attribute. Additionally, the
- attributes `_id` and `_key` will be returned.
- tags:
- - Documents
-```
-
-**Examples**
-
-```curl
----
-description: |-
- Use a document identifier:
-name: RestDocumentHandlerReadDocument
----
-var cn = "products";
-db._drop(cn);
-db._create(cn);
-
-var document = db.products.save({"hello":"world"});
-var url = "/_api/document/" + document._id;
-
-var response = logCurlRequest('GET', url);
-
-assert(response.code === 200);
-
-logJsonResponse(response);
-db._drop(cn);
-```
-
-```curl
----
-description: |-
- Use a document identifier and an ETag:
-name: RestDocumentHandlerReadDocumentIfNoneMatch
----
-var cn = "products";
-db._drop(cn);
-db._create(cn);
-
-var document = db.products.save({"hello":"world"});
-var url = "/_api/document/" + document._id;
-var headers = {"If-None-Match": "\"" + document._rev + "\""};
-
-var response = logCurlRequest('GET', url, "", headers);
-
-assert(response.code === 304);
-db._drop(cn);
-```
-
-```curl
----
-description: |-
- Unknown document identifier:
-name: RestDocumentHandlerReadDocumentUnknownHandle
----
-var url = "/_api/document/products/unknown-identifier";
-
-var response = logCurlRequest('GET', url);
-
-assert(response.code === 404);
-
-logJsonResponse(response);
-```
-
-#### Get a document header
-
-```openapi
-paths:
- /_api/document/{collection}/{key}:
- head:
- operationId: getDocumentHeader
- description: |
- Like `GET`, but only returns the header fields and not the body. You
- can use this call to get the current revision of a document or check if
- the document was deleted.
- parameters:
- - name: collection
- in: path
- required: true
- description: |
- Name of the `collection` from which the document is to be read.
- schema:
- type: string
- - name: key
- in: path
- required: true
- description: |
- The document key.
- schema:
- type: string
- - name: If-None-Match
- in: header
- required: false
- description: |
- If the "If-None-Match" header is given, then it must contain exactly one
- ETag. If the current document revision is not equal to the specified ETag,
- an *HTTP 200* response is returned. If the current document revision is
- identical to the specified ETag, then an *HTTP 304* is returned.
- schema:
- type: string
- - name: If-Match
- in: header
- required: false
- description: |
-            If the "If-Match" header is given, then it must contain exactly one
-            ETag. The document is returned if it has the same revision as the
-            given ETag. Otherwise an *HTTP 412* is returned.
- schema:
- type: string
- - name: x-arango-allow-dirty-read
- in: header
- required: false
- description: |
- Set this header to `true` to allow the Coordinator to ask any shard replica for
- the data, not only the shard leader. This may result in "dirty reads".
-
- The header is ignored if this operation is part of a Stream Transaction
- (`x-arango-trx-id` header). The header set when creating the transaction decides
- about dirty reads for the entire transaction, not the individual read operations.
- schema:
- type: boolean
- - name: x-arango-trx-id
- in: header
- required: false
- description: |
- To make this operation a part of a Stream Transaction, set this header to the
- transaction ID returned by the `POST /_api/transaction/begin` call.
- schema:
- type: string
- responses:
- '200':
- description: |
- is returned if the document was found
- '304':
- description: |
- is returned if the "If-None-Match" header is given and the document has
- the same version
- '404':
- description: |
- is returned if the document or collection was not found
- '412':
- description: |
- is returned if an "If-Match" header is given and the found
- document has a different version. The response will also contain the found
- document's current revision in the `ETag` header.
- tags:
- - Documents
-```
-
-**Examples**
-
-```curl
----
-description: ''
-name: RestDocumentHandlerReadDocumentHead
----
-var cn = "products";
-db._drop(cn);
-db._create(cn);
-
-var document = db.products.save({"hello":"world"});
-var url = "/_api/document/" + document._id;
-
-var response = logCurlRequest('HEAD', url);
-
-assert(response.code === 200);
-db._drop(cn);
-```
-
-#### Create a document
-
-```openapi
-paths:
- /_api/document/{collection}:
- post:
- operationId: createDocument
- description: |
- Creates a new document from the document given in the body, unless there
- is already a document with the `_key` given. If no `_key` is given, a
- new unique `_key` is generated automatically. The `_id` is automatically
- set in both cases, derived from the collection name and `_key`.
-
-        {{</* info */>}}
-        An `_id` or `_rev` attribute specified in the body is ignored.
-        {{</* /info */>}}
-
- If the document was created successfully, then the `Location` header
- contains the path to the newly created document. The `ETag` header field
- contains the revision of the document. Both are only set in the single
- document case.
-
- Unless `silent` is set to `true`, the body of the response contains a
- JSON object with the following attributes:
-        - `_id`, containing the document identifier with the format `<collection-name>/<document-key>`.
- - `_key`, containing the document key that uniquely identifies a document within the collection.
- - `_rev`, containing the document revision.
-
- If the collection parameter `waitForSync` is `false`, then the call
- returns as soon as the document has been accepted. It does not wait
- until the documents have been synced to disk.
-
- Optionally, the query parameter `waitForSync` can be used to force
- synchronization of the document creation operation to disk even in
- case that the `waitForSync` flag had been disabled for the entire
- collection. Thus, the `waitForSync` query parameter can be used to
-        force synchronization of just this specific operation. To use this,
- set the `waitForSync` parameter to `true`. If the `waitForSync`
- parameter is not specified or set to `false`, then the collection's
- default `waitForSync` behavior is applied. The `waitForSync` query
- parameter cannot be used to disable synchronization for collections
- that have a default `waitForSync` value of `true`.
-
- If the query parameter `returnNew` is `true`, then, for each
- generated document, the complete new document is returned under
- the `new` attribute in the result.
- parameters:
- - name: collection
- in: path
- required: true
- description: |
- Name of the `collection` in which the document is to be created.
- schema:
- type: string
- - name: collection
- in: query
- required: false
- description: |
- The name of the collection. This query parameter is only for backward compatibility.
- In ArangoDB versions < 3.0, the URL path was `/_api/document` and
- this query parameter was required. This combination still works, but
- the recommended way is to specify the collection in the URL path.
- schema:
- type: string
- - name: waitForSync
- in: query
- required: false
- description: |
- Wait until document has been synced to disk.
- schema:
- type: boolean
- - name: returnNew
- in: query
- required: false
- description: |
- Additionally return the complete new document under the attribute `new`
- in the result.
- schema:
- type: boolean
- - name: returnOld
- in: query
- required: false
- description: |
- Additionally return the complete old document under the attribute `old`
- in the result. Only available if the overwrite option is used.
- schema:
- type: boolean
- - name: silent
- in: query
- required: false
- description: |
- If set to `true`, an empty object is returned as response if the document operation
- succeeds. No meta-data is returned for the created document. If the
- operation raises an error, an error object is returned.
-
- You can use this option to save network traffic.
- schema:
- type: boolean
- - name: overwrite
- in: query
- required: false
- description: |
- If set to `true`, the insert becomes a replace-insert. If a document with the
-            same `_key` already exists, the new document is not rejected with a unique
-            constraint violation error but replaces the old document. Note that operations
-            with the `overwrite` parameter require a `_key` attribute in the request payload;
-            therefore, they can only be performed on collections sharded by `_key`.
- schema:
- type: boolean
- - name: overwriteMode
- in: query
- required: false
- description: |
- This option supersedes `overwrite` and offers the following modes:
- - `"ignore"`: if a document with the specified `_key` value exists already,
- nothing is done and no write operation is carried out. The
- insert operation returns success in this case. This mode does not
- support returning the old document version using `RETURN OLD`. When using
- `RETURN NEW`, `null` is returned in case the document already existed.
- - `"replace"`: if a document with the specified `_key` value exists already,
- it is overwritten with the specified document value. This mode is
- also used when no overwrite mode is specified but the `overwrite`
- flag is set to `true`.
- - `"update"`: if a document with the specified `_key` value exists already,
- it is patched (partially updated) with the specified document value.
- The overwrite mode can be further controlled via the `keepNull` and
- `mergeObjects` parameters.
- - `"conflict"`: if a document with the specified `_key` value exists already,
- return a unique constraint violation error so that the insert operation
- fails. This is also the default behavior in case the overwrite mode is
- not set, and the `overwrite` flag is `false` or not set either.
- schema:
- type: string
- - name: keepNull
- in: query
- required: false
- description: |
- If the intention is to delete existing attributes with the update-insert
- command, set the `keepNull` URL query parameter to `false`. This modifies the
- behavior of the patch command to remove top-level attributes and sub-attributes
- from the existing document that are contained in the patch document with an
- attribute value of `null` (but not attributes of objects that are nested inside
- of arrays). This option controls the update-insert behavior only.
- schema:
- type: boolean
- - name: mergeObjects
- in: query
- required: false
- description: |
-            Controls whether objects (not arrays) are merged if present in both the
-            existing and the update-insert document. If set to `false`, the value in the
- patch document overwrites the existing document's value. If set to `true`,
- objects are merged. The default is `true`.
- This option controls the update-insert behavior only.
- schema:
- type: boolean
- - name: refillIndexCaches
- in: query
- required: false
- description: |
- Whether to add a new entry to the in-memory edge cache if an edge document
- is inserted.
- schema:
- type: boolean
- - name: x-arango-trx-id
- in: header
- required: false
- description: |
- To make this operation a part of a Stream Transaction, set this header to the
- transaction ID returned by the `POST /_api/transaction/begin` call.
- schema:
- type: string
- requestBody:
- content:
- application/json:
- schema:
- type: object
- required:
- - document
- properties:
- document:
- description: |
- A JSON representation of a single document.
- type: object
- responses:
- '201':
- description: |
- is returned if the documents were created successfully and
- `waitForSync` was `true`.
- '202':
- description: |
- is returned if the documents were created successfully and
- `waitForSync` was `false`.
- '400':
- description: |
- is returned if the body does not contain a valid JSON representation
- of one document. The response body contains
- an error document in this case.
- '404':
- description: |
- is returned if the collection specified by `collection` is unknown.
- The response body contains an error document in this case.
- '409':
- description: |
- is returned in the single document case if a document with the
- same qualifiers in an indexed attribute conflicts with an already
- existing document and thus violates that unique constraint. The
- response body contains an error document in this case.
- tags:
- - Documents
-```
-
-**Examples**
-
-```curl
----
-description: |-
- Create a document in a collection named `products`. Note that the
-  revision identifier might or might not be equal to the auto-generated
- key.
-name: RestDocumentHandlerPostCreate1
----
-var cn = "products";
-db._drop(cn);
-db._create(cn, { waitForSync: true });
-
-var url = "/_api/document/" + cn;
-var body = '{ "Hello": "World" }';
-
-var response = logCurlRequest('POST', url, body);
-
-assert(response.code === 201);
-
-logJsonResponse(response);
-db._drop(cn);
-```
-
-```curl
----
-description: |-
- Create a document in a collection named `products` with a collection-level
- `waitForSync` value of `false`.
-name: RestDocumentHandlerPostAccept1
----
-var cn = "products";
-db._drop(cn);
-db._create(cn, { waitForSync: false });
-
-var url = "/_api/document/" + cn;
-var body = '{ "Hello": "World" }';
-
-var response = logCurlRequest('POST', url, body);
-
-assert(response.code === 202);
-
-logJsonResponse(response);
-db._drop(cn);
-```
-
-```curl
----
-description: |-
- Create a document in a collection with a collection-level `waitForSync`
- value of `false`, but using the `waitForSync` query parameter.
-name: RestDocumentHandlerPostWait1
----
-var cn = "products";
-db._drop(cn);
-db._create(cn, { waitForSync: false });
-
-var url = "/_api/document/" + cn + "?waitForSync=true";
-var body = '{ "Hello": "World" }';
-
-var response = logCurlRequest('POST', url, body);
-
-assert(response.code === 201);
-
-logJsonResponse(response);
-db._drop(cn);
-```
-
-```curl
----
-description: |-
- Unknown collection name
-name: RestDocumentHandlerPostUnknownCollection1
----
-var cn = "products";
-
-var url = "/_api/document/" + cn;
-var body = '{ "Hello": "World" }';
-
-var response = logCurlRequest('POST', url, body);
-
-assert(response.code === 404);
-
-logJsonResponse(response);
-```
-
-```curl
----
-description: |-
- Illegal document
-name: RestDocumentHandlerPostBadJson1
----
-var cn = "products";
-db._drop(cn);
-db._create(cn);
-
-var url = "/_api/document/" + cn;
-var body = '{ 1: "World" }';
-
-var response = logCurlRequest('POST', url, body);
-
-assert(response.code === 400);
-
-logJsonResponse(response);
-db._drop(cn);
-```
-
-```curl
----
-description: |-
- Use of returnNew:
-name: RestDocumentHandlerPostReturnNew
----
-var cn = "products";
-db._drop(cn);
-db._create(cn);
-
-var url = "/_api/document/" + cn + "?returnNew=true";
-var body = '{"Hello":"World"}';
-
-var response = logCurlRequest('POST', url, body);
-
-assert(response.code === 202);
-
-logJsonResponse(response);
-db._drop(cn);
-```
-
-```curl
----
-description: ''
-name: RestDocumentHandlerPostOverwrite
----
-var cn = "products";
-db._drop(cn);
-db._create(cn, { waitForSync: true });
-
-var url = "/_api/document/" + cn;
-var body = '{ "Hello": "World", "_key" : "lock" }';
-var response = logCurlRequest('POST', url, body);
-// insert
-assert(response.code === 201);
-logJsonResponse(response);
-
-body = '{ "Hello": "Universe", "_key" : "lock" }';
-url = "/_api/document/" + cn + "?overwrite=true";
-response = logCurlRequest('POST', url, body);
-// insert same key
-assert(response.code === 201);
-logJsonResponse(response);
-
-db._drop(cn);
-```
-
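-The `overwriteMode` query parameter works similarly. For example, a
-hypothetical request that patches the `lock` document from the previous
-example via an update-insert (a sketch, not a generated example):
-
-```
-POST /_api/document/products?overwriteMode=update&returnNew=true HTTP/1.1
-Content-Type: application/json
-
-{ "_key": "lock", "status": "open" }
-```
-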
-#### Replace a document
-
-```openapi
-paths:
- /_api/document/{collection}/{key}:
- put:
- operationId: replaceDocument
- description: |
- Replaces the specified document with the one in the body, provided there is
- such a document and no precondition is violated.
-
- The values of the `_key`, `_id`, and `_rev` system attributes as well as
- attributes used as sharding keys cannot be changed.
-
- If the `If-Match` header is specified and the revision of the
- document in the database is unequal to the given revision, the
- precondition is violated.
-
- If `If-Match` is not given and `ignoreRevs` is `false` and there
- is a `_rev` attribute in the body and its value does not match
- the revision of the document in the database, the precondition is
- violated.
-
- If a precondition is violated, an *HTTP 412* is returned.
-
- If the document exists and can be updated, then an *HTTP 201* or
- an *HTTP 202* is returned (depending on `waitForSync`, see below),
- the `ETag` header field contains the new revision of the document
- and the `Location` header contains a complete URL under which the
- document can be queried.
-
-        Cluster only: The replace document _may_ contain
-        values for the collection's pre-defined shard keys. Values for the shard keys
-        are treated as hints to improve performance. Should the shard key
-        values be incorrect, ArangoDB may answer with a *not found* error.
-
- Optionally, the query parameter `waitForSync` can be used to force
- synchronization of the document replacement operation to disk even in case
- that the `waitForSync` flag had been disabled for the entire collection.
- Thus, the `waitForSync` query parameter can be used to force synchronization
- of just specific operations. To use this, set the `waitForSync` parameter
- to `true`. If the `waitForSync` parameter is not specified or set to
- `false`, then the collection's default `waitForSync` behavior is
- applied. The `waitForSync` query parameter cannot be used to disable
- synchronization for collections that have a default `waitForSync` value
- of `true`.
-
- Unless `silent` is set to `true`, the body of the response contains a
- JSON object with the following attributes:
-        - `_id`, containing the document identifier with the format `<collection-name>/<document-key>`.
- - `_key`, containing the document key that uniquely identifies a document within the collection.
- - `_rev`, containing the new document revision.
-
- If the query parameter `returnOld` is `true`, then
- the complete previous revision of the document
- is returned under the `old` attribute in the result.
-
- If the query parameter `returnNew` is `true`, then
- the complete new document is returned under
- the `new` attribute in the result.
-
-        If the document does not exist, then an *HTTP 404* is returned and the
- body of the response contains an error document.
- requestBody:
- content:
- application/json:
- schema:
- type: object
- required:
- - document
- properties:
- document:
- description: |
- A JSON representation of a single document.
- type: object
- parameters:
- - name: collection
- in: path
- required: true
- description: |
- Name of the `collection` in which the document is to be replaced.
- schema:
- type: string
- - name: key
- in: path
- required: true
- description: |
- The document key.
- schema:
- type: string
- - name: waitForSync
- in: query
- required: false
- description: |
- Wait until document has been synced to disk.
- schema:
- type: boolean
- - name: ignoreRevs
- in: query
- required: false
- description: |
-            By default, or if this is set to `true`, the `_rev` attribute in
-            the given document is ignored. If this is set to `false`, then
- the `_rev` attribute given in the body document is taken as a
- precondition. The document is only replaced if the current revision
- is the one specified.
- schema:
- type: boolean
- - name: returnOld
- in: query
- required: false
- description: |
- Return additionally the complete previous revision of the changed
- document under the attribute `old` in the result.
- schema:
- type: boolean
- - name: returnNew
- in: query
- required: false
- description: |
- Return additionally the complete new document under the attribute `new`
- in the result.
- schema:
- type: boolean
- - name: silent
- in: query
- required: false
- description: |
- If set to `true`, an empty object is returned as response if the document operation
- succeeds. No meta-data is returned for the replaced document. If the
- operation raises an error, an error object is returned.
-
- You can use this option to save network traffic.
- schema:
- type: boolean
- - name: refillIndexCaches
- in: query
- required: false
- description: |
- Whether to update an existing entry in the in-memory edge cache if an
- edge document is replaced.
- schema:
- type: boolean
- - name: If-Match
- in: header
- required: false
- description: |
- You can conditionally replace a document based on a target revision id by
- using the `if-match` HTTP header.
- schema:
- type: string
- - name: x-arango-trx-id
- in: header
- required: false
- description: |
- To make this operation a part of a Stream Transaction, set this header to the
- transaction ID returned by the `POST /_api/transaction/begin` call.
- schema:
- type: string
- responses:
- '201':
- description: |
- is returned if the document was replaced successfully and
- `waitForSync` was `true`.
- '202':
- description: |
- is returned if the document was replaced successfully and
- `waitForSync` was `false`.
- '400':
- description: |
- is returned if the body does not contain a valid JSON representation
- of a document. The response body contains
- an error document in this case.
- '404':
- description: |
- is returned if the collection or the document was not found.
- '409':
- description: |
- is returned if the replace causes a unique constraint violation in
- a secondary index.
- '412':
- description: |
- is returned if the precondition is violated. The response also contains
- the found documents' current revisions in the `_rev` attributes.
- Additionally, the attributes `_id` and `_key` are returned.
- tags:
- - Documents
-```
-
-**Examples**
-
-```curl
----
-description: |-
- Using a document identifier
-name: RestDocumentHandlerUpdateDocument
----
-var cn = "products";
-db._drop(cn);
-db._create(cn);
-
-var document = db.products.save({"hello":"world"});
-var url = "/_api/document/" + document._id;
-
-var response = logCurlRequest('PUT', url, '{"Hello": "you"}');
-
-assert(response.code === 202);
-
-logJsonResponse(response);
-db._drop(cn);
-```
-
-```curl
----
-description: |-
- Unknown document identifier
-name: RestDocumentHandlerUpdateDocumentUnknownHandle
----
-var cn = "products";
-db._drop(cn);
-db._create(cn);
-
-var document = db.products.save({"hello":"world"});
-db.products.remove(document._id);
-var url = "/_api/document/" + document._id;
-
-var response = logCurlRequest('PUT', url, "{}");
-
-assert(response.code === 404);
-
-logJsonResponse(response);
-db._drop(cn);
-```
-
-```curl
----
-description: |-
- Produce a revision conflict
-name: RestDocumentHandlerUpdateDocumentIfMatchOther
----
-var cn = "products";
-db._drop(cn);
-db._create(cn);
-
-var document = db.products.save({"hello":"world"});
-var document2 = db.products.save({"hello2":"world"});
-var url = "/_api/document/" + document._id;
-var headers = {"If-Match": "\"" + document2._rev + "\""};
-
-var response = logCurlRequest('PUT', url, '{"other":"content"}', headers);
-
-assert(response.code === 412);
-
-logJsonResponse(response);
-db._drop(cn);
-```
-
-#### Update a document
-
-```openapi
-paths:
- /_api/document/{collection}/{key}:
- patch:
- operationId: updateDocument
- description: |
- Partially updates the document identified by the *document ID*.
- The body of the request must contain a JSON document with the
- attributes to patch (the patch document). All attributes from the
- patch document are added to the existing document if they do not
- yet exist, and overwritten in the existing document if they do exist
- there.
-
- The values of the `_key`, `_id`, and `_rev` system attributes as well as
- attributes used as sharding keys cannot be changed.
-
- Setting an attribute value to `null` in the patch document causes a
- value of `null` to be saved for the attribute by default.
-
- If the `If-Match` header is specified and the revision of the
- document in the database is unequal to the given revision, the
- precondition is violated.
-
- If `If-Match` is not given and `ignoreRevs` is `false` and there
- is a `_rev` attribute in the body and its value does not match
- the revision of the document in the database, the precondition is
- violated.
-
- If a precondition is violated, an *HTTP 412* is returned.
-
- If the document exists and can be updated, then an *HTTP 201* or
- an *HTTP 202* is returned (depending on `waitForSync`, see below),
- the `ETag` header field contains the new revision of the document
- (in double quotes) and the `Location` header contains a complete URL
- under which the document can be queried.
-
- Cluster only: The patch document _may_ contain
- values for the collection's pre-defined shard keys. Values for the shard keys
-        are treated as hints to improve performance. Should the shard key
-        values be incorrect, ArangoDB may answer with a `not found` error.
-
- Optionally, the query parameter `waitForSync` can be used to force
- synchronization of the updated document operation to disk even in case
- that the `waitForSync` flag had been disabled for the entire collection.
- Thus, the `waitForSync` query parameter can be used to force synchronization
- of just specific operations. To use this, set the `waitForSync` parameter
- to `true`. If the `waitForSync` parameter is not specified or set to
- `false`, then the collection's default `waitForSync` behavior is
- applied. The `waitForSync` query parameter cannot be used to disable
- synchronization for collections that have a default `waitForSync` value
- of `true`.
-
- Unless `silent` is set to `true`, the body of the response contains a
- JSON object with the following attributes:
-        - `_id`, containing the document identifier with the format `<collection-name>/<document-key>`.
- - `_key`, containing the document key that uniquely identifies a document within the collection.
- - `_rev`, containing the new document revision.
-
- If the query parameter `returnOld` is `true`, then
- the complete previous revision of the document
- is returned under the `old` attribute in the result.
-
- If the query parameter `returnNew` is `true`, then
- the complete new document is returned under
- the `new` attribute in the result.
-
-        If the document does not exist, then an *HTTP 404* is returned and the
- body of the response contains an error document.
- requestBody:
- content:
- application/json:
- schema:
- type: object
- required:
- - document
- properties:
- document:
- description: |
- A JSON representation of a document update as an object.
- type: object
- parameters:
- - name: collection
- in: path
- required: true
- description: |
- Name of the `collection` in which the document is to be updated.
- schema:
- type: string
- - name: key
- in: path
- required: true
- description: |
- The document key.
- schema:
- type: string
- - name: keepNull
- in: query
- required: false
- description: |
- If the intention is to delete existing attributes with the patch
- command, set the `keepNull` URL query parameter to `false`. This modifies the
- behavior of the patch command to remove top-level attributes and sub-attributes
- from the existing document that are contained in the patch document with an
- attribute value of `null` (but not attributes of objects that are nested inside
- of arrays).
- schema:
- type: boolean
- - name: mergeObjects
- in: query
- required: false
- description: |
- Controls whether objects (not arrays) are merged if present in
- both the existing and the patch document. If set to `false`, the
- value in the patch document overwrites the existing document's
- value. If set to `true`, objects are merged. The default is
- `true`.
- schema:
- type: boolean
- - name: waitForSync
- in: query
- required: false
- description: |
- Wait until document has been synced to disk.
- schema:
- type: boolean
- - name: ignoreRevs
- in: query
- required: false
- description: |
-            By default, or if this is set to `true`, the `_rev` attribute in
-            the given document is ignored. If this is set to `false`, then
- the `_rev` attribute given in the body document is taken as a
- precondition. The document is only updated if the current revision
- is the one specified.
- schema:
- type: boolean
- - name: returnOld
- in: query
- required: false
- description: |
- Return additionally the complete previous revision of the changed
- document under the attribute `old` in the result.
- schema:
- type: boolean
- - name: returnNew
- in: query
- required: false
- description: |
- Return additionally the complete new document under the attribute `new`
- in the result.
- schema:
- type: boolean
- - name: silent
- in: query
- required: false
- description: |
- If set to `true`, an empty object is returned as response if the document operation
- succeeds. No meta-data is returned for the updated document. If the
- operation raises an error, an error object is returned.
-
- You can use this option to save network traffic.
- schema:
- type: boolean
- - name: refillIndexCaches
- in: query
- required: false
- description: |
- Whether to update an existing entry in the in-memory edge cache if an
- edge document is updated.
- schema:
- type: boolean
- - name: If-Match
- in: header
- required: false
- description: |
- You can conditionally update a document based on a target revision id by
- using the `if-match` HTTP header.
- schema:
- type: string
- - name: x-arango-trx-id
- in: header
- required: false
- description: |
- To make this operation a part of a Stream Transaction, set this header to the
- transaction ID returned by the `POST /_api/transaction/begin` call.
- schema:
- type: string
- responses:
- '201':
- description: |
- is returned if the document was updated successfully and
- `waitForSync` was `true`.
- '202':
- description: |
- is returned if the document was updated successfully and
- `waitForSync` was `false`.
- '400':
- description: |
- is returned if the body does not contain a valid JSON representation
- of a document. The response body contains
- an error document in this case.
- '404':
- description: |
- is returned if the collection or the document was not found.
- '409':
- description: |
- is returned if the update causes a unique constraint violation in
- a secondary index.
- '412':
- description: |
- is returned if the precondition was violated. The response also contains
- the found documents' current revisions in the `_rev` attributes.
- Additionally, the attributes `_id` and `_key` are returned.
- tags:
- - Documents
-```
-
-**Examples**
-
-```curl
----
-description: |-
- Patches an existing document with new content.
-name: RestDocumentHandlerPatchDocument
----
-var cn = "products";
-db._drop(cn);
-db._create(cn);
-
-var document = db.products.save({"one":"world"});
-var url = "/_api/document/" + document._id;
-
-var response = logCurlRequest("PATCH", url, { "hello": "world" });
-
-assert(response.code === 202);
-
-logJsonResponse(response);
-var response2 = logCurlRequest("PATCH", url, { "numbers": { "one": 1, "two": 2, "three": 3, "empty": null } });
-assert(response2.code === 202);
-logJsonResponse(response2);
-var response3 = logCurlRequest("GET", url);
-assert(response3.code === 200);
-logJsonResponse(response3);
-var response4 = logCurlRequest("PATCH", url + "?keepNull=false", { "hello": null, "numbers": { "four": 4 } });
-assert(response4.code === 202);
-logJsonResponse(response4);
-var response5 = logCurlRequest("GET", url);
-assert(response5.code === 200);
-logJsonResponse(response5);
-db._drop(cn);
-```
-
-```curl
----
-description: |-
- Merging attributes of an object using `mergeObjects`:
-name: RestDocumentHandlerPatchDocumentMerge
----
-var cn = "products";
-db._drop(cn);
-db._create(cn);
-
-var document = db.products.save({"inhabitants":{"china":1366980000,"india":1263590000,"usa":319220000}});
-var url = "/_api/document/" + document._id;
-
-var response = logCurlRequest("GET", url);
-assert(response.code === 200);
-logJsonResponse(response);
-
-var response = logCurlRequest("PATCH", url + "?mergeObjects=true", { "inhabitants": {"indonesia":252164800,"brazil":203553000 }});
-assert(response.code === 202);
-
-var response2 = logCurlRequest("GET", url);
-assert(response2.code === 200);
-logJsonResponse(response2);
-
-var response3 = logCurlRequest("PATCH", url + "?mergeObjects=false", { "inhabitants": { "pakistan":188346000 }});
-assert(response3.code === 202);
-logJsonResponse(response3);
-
-var response4 = logCurlRequest("GET", url);
-assert(response4.code === 200);
-logJsonResponse(response4);
-db._drop(cn);
-```
-
-#### Remove a document
-
-```openapi
-paths:
- /_api/document/{collection}/{key}:
- delete:
- operationId: deleteDocument
- description: |
- Unless `silent` is set to `true`, the body of the response contains a
- JSON object with the following attributes:
-        - `_id`, containing the document identifier with the format `<collection-name>/<document-key>`.
- - `_key`, containing the document key that uniquely identifies a document within the collection.
- - `_rev`, containing the document revision.
-
- If the `waitForSync` parameter is not specified or set to `false`,
- then the collection's default `waitForSync` behavior is applied.
- The `waitForSync` query parameter cannot be used to disable
- synchronization for collections that have a default `waitForSync`
- value of `true`.
-
- If the query parameter `returnOld` is `true`, then
- the complete previous revision of the document
- is returned under the `old` attribute in the result.
- parameters:
- - name: collection
- in: path
- required: true
- description: |
- Name of the `collection` in which the document is to be deleted.
- schema:
- type: string
- - name: key
- in: path
- required: true
- description: |
- The document key.
- schema:
- type: string
- - name: waitForSync
- in: query
- required: false
- description: |
- Wait until deletion operation has been synced to disk.
- schema:
- type: boolean
- - name: returnOld
- in: query
- required: false
- description: |
- Return additionally the complete previous revision of the changed
- document under the attribute `old` in the result.
- schema:
- type: boolean
- - name: silent
- in: query
- required: false
- description: |
- If set to `true`, an empty object is returned as response if the document operation
- succeeds. No meta-data is returned for the deleted document. If the
- operation raises an error, an error object is returned.
-
- You can use this option to save network traffic.
- schema:
- type: boolean
- - name: refillIndexCaches
- in: query
- required: false
- description: |
- Whether to delete an existing entry from the in-memory edge cache and refill it
- with another edge if an edge document is removed.
- schema:
- type: boolean
- - name: If-Match
- in: header
- required: false
- description: |
- You can conditionally remove a document based on a target revision id by
- using the `if-match` HTTP header.
- schema:
- type: string
- - name: x-arango-trx-id
- in: header
- required: false
- description: |
- To make this operation a part of a Stream Transaction, set this header to the
- transaction ID returned by the `POST /_api/transaction/begin` call.
- schema:
- type: string
- responses:
- '200':
- description: |
- is returned if the document was removed successfully and
- `waitForSync` was `true`.
- '202':
- description: |
- is returned if the document was removed successfully and
- `waitForSync` was `false`.
- '404':
- description: |
- is returned if the collection or the document was not found.
- The response body contains an error document in this case.
- '412':
- description: |
-            is returned if an "If-Match" header or `rev` is given and the found
-            document has a different version. The response also contains the found
-            document's current revision in the `_rev` attribute. Additionally, the
- attributes `_id` and `_key` are returned.
- tags:
- - Documents
-```
-
-**Examples**
-
-```curl
----
-description: |-
-  Using a document identifier:
-name: RestDocumentHandlerDeleteDocument
----
-var cn = "products";
-db._drop(cn);
-db._create(cn, { waitForSync: true });
-var document = db.products.save({"hello":"world"});
-
-var url = "/_api/document/" + document._id;
-
-var response = logCurlRequest('DELETE', url);
-
-assert(response.code === 200);
-
-logJsonResponse(response);
-db._drop(cn);
-```
-
-```curl
----
-description: |-
- Unknown document identifier:
-name: RestDocumentHandlerDeleteDocumentUnknownHandle
----
-var cn = "products";
-db._drop(cn);
-db._create(cn, { waitForSync: true });
-var document = db.products.save({"hello":"world"});
-db.products.remove(document._id);
-
-var url = "/_api/document/" + document._id;
-
-var response = logCurlRequest('DELETE', url);
-
-assert(response.code === 404);
-
-logJsonResponse(response);
-db._drop(cn);
-```
-
-```curl
----
-description: |-
- Revision conflict:
-name: RestDocumentHandlerDeleteDocumentIfMatchOther
----
-var cn = "products";
-db._drop(cn);
-db._create(cn);
-
-var document = db.products.save({"hello":"world"});
-var document2 = db.products.save({"hello2":"world"});
-var url = "/_api/document/" + document._id;
-var headers = {"If-Match": "\"" + document2._rev + "\""};
-
-var response = logCurlRequest('DELETE', url, "", headers);
-
-assert(response.code === 412);
-
-logJsonResponse(response);
-db._drop(cn);
-```
-
-#### Document ETags
-
-ArangoDB tries to adhere to the existing HTTP standard as far as
-possible. To this end, results of single document queries have the `ETag`
-HTTP header set to the [document revision](../../concepts/data-structure/documents/_index.md#document-revisions)
-(the value of `_rev` document attribute) enclosed in double quotes.
-
-You can check the revision of a document using the `HEAD` HTTP method.
-
-If you want to query, replace, update, or delete a document, then you
-can use the `If-Match` header to detect conflicts. If the document has changed,
-the operation is aborted and an HTTP `412 Precondition failed` status code is
-returned.
-
-If you obtain a document using `GET` and you want to check whether a newer
-revision is available, then you can use the `If-None-Match` HTTP header. If the
-document is unchanged (same document revision), an HTTP `304 Not Modified`
-status code is returned.
-
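-On the wire, a hypothetical conditional read can look like this (the revision
-value is made up for illustration):
-
-```
-GET /_api/document/coll1/docA HTTP/1.1
-If-None-Match: "_hV2r5XW---"
-```
-
-```
-HTTP/1.1 304 Not Modified
-Etag: "_hV2r5XW---"
-```
-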
-### Multiple document operations
-
-ArangoDB supports working with documents in bulk. Bulk operations affect a
-*single* collection. Using this API variant allows clients to amortize the
-overhead of single requests over an entire batch of documents. Bulk operations
-are **not guaranteed** to be executed serially; ArangoDB _may_ execute the
-operations in parallel. This can translate into large performance improvements,
-especially in a cluster deployment.
-
-ArangoDB continues to process the remaining operations should an error
-occur during the processing of one operation. Errors are returned _inline_ in
-the response body as an error document (see below for more details).
-Additionally, the `X-Arango-Error-Codes` header is set. It contains a
-map of the error codes and how often each kind of error occurred. For
-example, `1200:17,1205:10` means that in 17 cases the error 1200
-("revision conflict") has happened, and in 10 cases the error 1205
-("illegal document handle").
-
-Generally, the bulk operations expect an input array and the result body
-contains a JSON array of the same length.
-
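-For example, a hypothetical bulk insert where the second document violates a
-unique constraint (error 1210) could produce a response like this (the
-revision value is made up):
-
-```
-HTTP/1.1 202 Accepted
-X-Arango-Error-Codes: 1210:1
-Content-Type: application/json; charset=utf-8
-
-[
-  { "_id": "products/a", "_key": "a", "_rev": "_hV2oH9y---" },
-  { "error": true, "errorNum": 1210, "errorMessage": "unique constraint violated" }
-]
-```
-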
-#### Get multiple documents
-
-```openapi
-paths:
- /_api/document/{collection}#get:
- put:
- operationId: getDocuments
- description: |
-        {{</* warning */>}}
- The endpoint for getting multiple documents is the same as for replacing
- multiple documents but with an additional query parameter:
- `PUT /_api/document/{collection}?onlyget=true`. This is because a lot of
- software does not support payload bodies in `GET` requests.
-        {{</* /warning */>}}
-
- Returns the documents identified by their `_key` in the body objects.
- The body of the request _must_ contain a JSON array of either
- strings (the `_key` values to look up) or search documents.
-
- A search document _must_ contain at least a value for the `_key` field.
- A value for `_rev` _may_ be specified to verify whether the document
- has the same revision value, unless `ignoreRevs` is set to `false`.
-
- Cluster only: The search document _may_ contain
- values for the collection's pre-defined shard keys. Values for the shard keys
- are treated as hints to improve performance. Should the shard key
- values be incorrect, ArangoDB may answer with a *not found* error.
-
- The returned array of documents contains three special attributes:
- - `_id`, containing the document identifier with the format `<collection-name>/<document-key>`.
- - `_key`, containing the document key that uniquely identifies a document within the collection.
- - `_rev`, containing the document revision.
- parameters:
- - name: collection
- in: path
- required: true
- description: |
- Name of the `collection` from which the documents are to be read.
- schema:
- type: string
- - name: onlyget
- in: query
- required: true
- description: |
- This parameter is required to be `true`, otherwise a replace
- operation is executed!
- schema:
- type: boolean
- - name: ignoreRevs
- in: query
- required: false
- description: |
- Should the value be `true` (the default):
- If a search document contains a value for the `_rev` field,
- then the document is only returned if it has the same revision value.
- Otherwise a precondition failed error is returned.
- schema:
- type: boolean
- - name: x-arango-allow-dirty-read
- in: header
- required: false
- description: |
- Set this header to `true` to allow the Coordinator to ask any shard replica for
- the data, not only the shard leader. This may result in "dirty reads".
-
- The header is ignored if this operation is part of a Stream Transaction
- (`x-arango-trx-id` header). The header set when creating the transaction decides
- about dirty reads for the entire transaction, not the individual read operations.
- schema:
- type: boolean
- - name: x-arango-trx-id
- in: header
- required: false
- description: |
- To make this operation a part of a Stream Transaction, set this header to the
- transaction ID returned by the `POST /_api/transaction/begin` call.
- schema:
- type: string
- requestBody:
- content:
- application/json:
- schema:
- type: object
- required:
- - documents
- properties:
- documents:
- description: |
- An array of documents to retrieve.
- type: json
- responses:
- '200':
- description: |
- is returned if no error happened
- '400':
- description: |
- is returned if the body does not contain a valid JSON representation
- of an array of documents. The response body contains
- an error document in this case.
- '404':
- description: |
- is returned if the collection was not found.
- tags:
- - Documents
-```
-
-**Examples**
-
-```curl
----
-description: |-
- Reading multiple documents by key:
-name: RestDocumentHandlerReadMultiDocument
----
-var cn = "products";
-db._drop(cn);
-db._create(cn);
-
-db.products.save({"_key":"doc1", "hello":"world"});
-db.products.save({"_key":"doc2", "say":"hi to mom"});
-var url = "/_api/document/products?onlyget=true";
-var body = '["doc1", {"_key":"doc2"}]';
-
-var response = logCurlRequest('PUT', url, body);
-
-assert(response.code === 200);
-
-logJsonResponse(response);
-db._drop(cn);
-```
-
-#### Create multiple documents
-
-```openapi
-paths:
- /_api/document/{collection}#multiple:
- post:
- operationId: createDocuments
- description: |
- Creates new documents from the documents given in the body, unless there
- is already a document with the `_key` given. If no `_key` is given, a new
- unique `_key` is generated automatically. The `_id` is automatically
- set in both cases, derived from the collection name and `_key`.
-
- The result body contains a JSON array of the
- same length as the input array, and each entry contains the result
- of the operation for the corresponding input. In case of an error,
- the entry is a document with the attribute `error` set to `true` and
- `errorCode` set to the error code that has occurred.
-
- {{</* info */>}}
- Any `_id` or `_rev` attribute specified in the body is ignored.
- {{</* /info */>}}
-
- Unless `silent` is set to `true`, the body of the response contains an
- array of JSON objects with the following attributes:
- - `_id`, containing the document identifier with the format `<collection-name>/<document-key>`.
- - `_key`, containing the document key that uniquely identifies a document within the collection.
- - `_rev`, containing the document revision.
-
- If the collection parameter `waitForSync` is `false`, then the call
- returns as soon as the documents have been accepted. It does not wait
- until the documents have been synced to disk.
-
- Optionally, the query parameter `waitForSync` can be used to force
- synchronization of the document creation operation to disk even in
- case that the `waitForSync` flag had been disabled for the entire
- collection. Thus, the `waitForSync` query parameter can be used to
- force synchronization of just this specific operation. To use this,
- set the `waitForSync` parameter to `true`. If the `waitForSync`
- parameter is not specified or set to `false`, then the collection's
- default `waitForSync` behavior is applied. The `waitForSync` query
- parameter cannot be used to disable synchronization for collections
- that have a default `waitForSync` value of `true`.
-
- If the query parameter `returnNew` is `true`, then, for each
- generated document, the complete new document is returned under
- the `new` attribute in the result.
-
- Should an error have occurred with some of the documents,
- the `X-Arango-Error-Codes` HTTP header is set. It contains a map of the
- error codes and how often each kind of error occurred. For example,
- `1200:17,1205:10` means that in 17 cases the error 1200 ("revision conflict")
- has happened, and in 10 cases the error 1205 ("illegal document handle").
- parameters:
- - name: collection
- in: path
- required: true
- description: |
- Name of the `collection` in which the documents are to be created.
- schema:
- type: string
- - name: collection
- in: query
- required: false
- description: |
- The name of the collection. This is only for backward compatibility.
- In ArangoDB versions < 3.0, the URL path was `/_api/document` and
- this query parameter was required. This combination still works, but
- the recommended way is to specify the collection in the URL path.
- schema:
- type: string
- - name: waitForSync
- in: query
- required: false
- description: |
- Wait until document has been synced to disk.
- schema:
- type: boolean
- - name: returnNew
- in: query
- required: false
- description: |
- Additionally return the complete new document under the attribute `new`
- in the result.
- schema:
- type: boolean
- - name: returnOld
- in: query
- required: false
- description: |
- Additionally return the complete old document under the attribute `old`
- in the result. Only available if the overwrite option is used.
- schema:
- type: boolean
- - name: silent
- in: query
- required: false
- description: |
- If set to `true`, an empty object is returned as the response if all document operations
- succeed. No meta-data is returned for the created documents. If any of the
- operations raises an error, an array with the error object(s) is returned.
-
- You can use this option to save network traffic but you cannot map any errors
- to the inputs of your request.
- schema:
- type: boolean
- - name: overwrite
- in: query
- required: false
- description: |
- If set to `true`, the insert becomes a replace-insert. If a document with the
- same `_key` already exists, the new document is not rejected with a unique
- constraint violation error but replaces the old document. Note that operations
- with the `overwrite` parameter require a `_key` attribute in the request payload,
- therefore they can only be performed on collections sharded by `_key`.
- schema:
- type: boolean
- - name: overwriteMode
- in: query
- required: false
- description: |
- This option supersedes `overwrite` and offers the following modes:
- - `"ignore"`: if a document with the specified `_key` value exists already,
- nothing is done and no write operation is carried out. The
- insert operation returns success in this case. This mode does not
- support returning the old document version using `RETURN OLD`. When using
- `RETURN NEW`, `null` is returned in case the document already existed.
- - `"replace"`: if a document with the specified `_key` value exists already,
- it is overwritten with the specified document value. This mode is
- also used when no overwrite mode is specified but the `overwrite`
- flag is set to `true`.
- - `"update"`: if a document with the specified `_key` value exists already,
- it is patched (partially updated) with the specified document value.
- The overwrite mode can be further controlled via the `keepNull` and
- `mergeObjects` parameters.
- - `"conflict"`: if a document with the specified `_key` value exists already,
- return a unique constraint violation error so that the insert operation
- fails. This is also the default behavior in case the overwrite mode is
- not set, and the `overwrite` flag is `false` or not set either.
- schema:
- type: string
- - name: keepNull
- in: query
- required: false
- description: |
- If the intention is to delete existing attributes with the update-insert
- command, set the `keepNull` URL query parameter to `false`. This modifies the
- behavior of the patch command to remove top-level attributes and sub-attributes
- from the existing document that are contained in the patch document with an
- attribute value of `null` (but not attributes of objects that are nested inside
- of arrays). This option controls the update-insert behavior only.
- schema:
- type: boolean
- - name: mergeObjects
- in: query
- required: false
- description: |
- Controls whether objects (not arrays) are merged if present in both the
- existing and the update-insert document. If set to `false`, the value in the
- patch document overwrites the existing document's value. If set to `true`,
- objects are merged. The default is `true`.
- This option controls the update-insert behavior only.
- schema:
- type: boolean
- - name: refillIndexCaches
- in: query
- required: false
- description: |
- Whether to add new entries to the in-memory edge cache if edge documents are
- inserted.
- schema:
- type: boolean
- - name: x-arango-trx-id
- in: header
- required: false
- description: |
- To make this operation a part of a Stream Transaction, set this header to the
- transaction ID returned by the `POST /_api/transaction/begin` call.
- schema:
- type: string
- requestBody:
- content:
- application/json:
- schema:
- type: object
- required:
- - documents
- properties:
- documents:
- description: |
- An array of documents to create.
- type: json
- responses:
- '201':
- description: |
- is returned if `waitForSync` was `true` and operations were processed.
- '202':
- description: |
- is returned if `waitForSync` was `false` and operations were processed.
- '400':
- description: |
- is returned if the body does not contain a valid JSON representation
- of an array of documents. The response body contains
- an error document in this case.
- '404':
- description: |
- is returned if the collection specified by `collection` is unknown.
- The response body contains an error document in this case.
- tags:
- - Documents
-```
-
-**Examples**
-
-```curl
----
-description: |-
- Insert multiple documents:
-name: RestDocumentHandlerPostMulti1
----
-var cn = "products";
-db._drop(cn);
-db._create(cn);
-
-var url = "/_api/document/" + cn;
-var body = '[{"Hello":"Earth"}, {"Hello":"Venus"}, {"Hello":"Mars"}]';
-
-var response = logCurlRequest('POST', url, body);
-
-assert(response.code === 202);
-
-logJsonResponse(response);
-db._drop(cn);
-```
-
-```curl
----
-description: |-
- Use of returnNew:
-name: RestDocumentHandlerPostMulti2
----
-var cn = "products";
-db._drop(cn);
-db._create(cn);
-
-var url = "/_api/document/" + cn + "?returnNew=true";
-var body = '[{"Hello":"Earth"}, {"Hello":"Venus"}, {"Hello":"Mars"}]';
-
-var response = logCurlRequest('POST', url, body);
-
-assert(response.code === 202);
-
-logJsonResponse(response);
-db._drop(cn);
-```
-
-```curl
----
-description: |-
- Partially illegal documents:
-name: RestDocumentHandlerPostBadJsonMulti
----
-var cn = "products";
-db._drop(cn);
-db._create(cn);
-
-var url = "/_api/document/" + cn;
-var body = '[{ "_key": 111 }, {"_key":"abc"}]';
-
-var response = logCurlRequest('POST', url, body);
-
-assert(response.code === 202);
-
-logJsonResponse(response);
-db._drop(cn);
-```
-
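-The `overwriteMode` option turns an insert into an upsert. The following is
-a minimal sketch using `overwriteMode=update` (the example name is
-hypothetical):
-
-```curl
----
-description: |-
- Insert-or-update using overwriteMode=update (hypothetical example):
-name: RestDocumentHandlerPostMultiOverwriteSketch
----
-var cn = "products";
-db._drop(cn);
-db._create(cn);
-
-db.products.save({ "_key": "abc", "a": 1 });
-
-// With overwriteMode=update, the existing document is patched instead of
-// the insert failing with a unique constraint violation.
-var url = "/_api/document/" + cn + "?overwriteMode=update&returnNew=true";
-var body = '[{ "_key": "abc", "b": 2 }]';
-
-var response = logCurlRequest('POST', url, body);
-
-assert(response.code === 202);
-// The patched document keeps "a" and gains "b".
-assert(response.parsedBody[0]["new"].a === 1);
-assert(response.parsedBody[0]["new"].b === 2);
-
-logJsonResponse(response);
-db._drop(cn);
-```
-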
-#### Replace multiple documents
-
-```openapi
-paths:
- /_api/document/{collection}:
- put:
- operationId: replaceDocuments
- description: |
- Replaces multiple documents in the specified collection with the
- ones in the body. The documents to replace are specified by the `_key`
- attributes in the body documents.
-
- The values of the `_key`, `_id`, and `_rev` system attributes as well as
- attributes used as sharding keys cannot be changed.
-
- If `ignoreRevs` is `false` and there is a `_rev` attribute in a
- document in the body and its value does not match the revision of
- the corresponding document in the database, the precondition is
- violated.
-
- Cluster only: The replace documents _may_ contain
- values for the collection's pre-defined shard keys. Values for the shard keys
- are treated as hints to improve performance. Should the shard key
- values be incorrect, ArangoDB may answer with a `not found` error.
-
- Optionally, the query parameter `waitForSync` can be used to force
- synchronization of the document replacement operation to disk even in case
- that the `waitForSync` flag had been disabled for the entire collection.
- Thus, the `waitForSync` query parameter can be used to force synchronization
- of just specific operations. To use this, set the `waitForSync` parameter
- to `true`. If the `waitForSync` parameter is not specified or set to
- `false`, then the collection's default `waitForSync` behavior is
- applied. The `waitForSync` query parameter cannot be used to disable
- synchronization for collections that have a default `waitForSync` value
- of `true`.
-
- The body of the response contains a JSON array of the same length
- as the input array with the information about the identifier and the
- revision of the replaced documents. Each element has the following
- attributes:
- - `_id`, containing the document identifier with the format `<collection-name>/<document-key>`.
- - `_key`, containing the document key that uniquely identifies a document within the collection.
- - `_rev`, containing the new document revision.
-
- In case of an error or violated precondition, an error
- object with the attribute `error` set to `true` and the attribute
- `errorCode` set to the error code is built.
-
- If the query parameter `returnOld` is `true`, then, for each
- generated document, the complete previous revision of the document
- is returned under the `old` attribute in the result.
-
- If the query parameter `returnNew` is `true`, then, for each
- generated document, the complete new document is returned under
- the `new` attribute in the result.
-
- Note that if any precondition is violated or an error occurred with
- some of the documents, the return code is still 201 or 202, but the
- `X-Arango-Error-Codes` HTTP header is set. It contains a map of the
- error codes and how often each kind of error occurred. For example,
- `1200:17,1205:10` means that in 17 cases the error 1200 ("revision conflict")
- has happened, and in 10 cases the error 1205 ("illegal document handle").
- requestBody:
- content:
- application/json:
- schema:
- type: object
- required:
- - documents
- properties:
- documents:
- description: |
- A JSON representation of an array of documents.
- Each element has to contain a `_key` attribute.
- type: json
- parameters:
- - name: collection
- in: path
- required: true
- description: |
- This URL parameter is the name of the collection in which the
- documents are replaced.
- schema:
- type: string
- - name: waitForSync
- in: query
- required: false
- description: |
- Wait until the new documents have been synced to disk.
- schema:
- type: boolean
- - name: ignoreRevs
- in: query
- required: false
- description: |
- By default, or if this is set to `true`, the `_rev` attributes in
- the given documents are ignored. If this is set to `false`, then
- any `_rev` attribute given in a body document is taken as a
- precondition. The document is only replaced if the current revision
- is the one specified.
- schema:
- type: boolean
- - name: returnOld
- in: query
- required: false
- description: |
- Return additionally the complete previous revision of the changed
- documents under the attribute `old` in the result.
- schema:
- type: boolean
- - name: returnNew
- in: query
- required: false
- description: |
- Return additionally the complete new documents under the attribute `new`
- in the result.
- schema:
- type: boolean
- - name: silent
- in: query
- required: false
- description: |
- If set to `true`, an empty object is returned as the response if all document operations
- succeed. No meta-data is returned for the replaced documents. If at least one
- operation raises an error, an array with the error object(s) is returned.
-
- You can use this option to save network traffic but you cannot map any errors
- to the inputs of your request.
- schema:
- type: boolean
- - name: refillIndexCaches
- in: query
- required: false
- description: |
- Whether to update existing entries in the in-memory edge cache if
- edge documents are replaced.
- schema:
- type: boolean
- - name: x-arango-trx-id
- in: header
- required: false
- description: |
- To make this operation a part of a Stream Transaction, set this header to the
- transaction ID returned by the `POST /_api/transaction/begin` call.
- schema:
- type: string
- responses:
- '201':
- description: |
- is returned if `waitForSync` was `true` and operations were processed.
- '202':
- description: |
- is returned if `waitForSync` was `false` and operations were processed.
- '400':
- description: |
- is returned if the body does not contain a valid JSON representation
- of an array of documents. The response body contains
- an error document in this case.
- '404':
- description: |
- is returned if the collection was not found.
- tags:
- - Documents
-```
-
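-**Examples**
-
-A minimal sketch of replacing two documents by key (the example name is
-hypothetical):
-
-```curl
----
-description: |-
- Replacing multiple documents (hypothetical example):
-name: RestDocumentHandlerReplaceMultiDocumentSketch
----
-var cn = "products";
-db._drop(cn);
-db._create(cn);
-
-db.products.save([
-  { "_key": "doc1", "hello": "world" },
-  { "_key": "doc2", "hello": "world" }
-]);
-
-var url = "/_api/document/" + cn;
-// Each body object must contain a _key attribute. The matching documents
-// are replaced entirely by the given objects.
-var body = '[{ "_key": "doc1", "greeting": "hi" }, { "_key": "doc2", "greeting": "hello" }]';
-
-var response = logCurlRequest('PUT', url, body);
-
-assert(response.code === 202);
-
-logJsonResponse(response);
-db._drop(cn);
-```
-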
-#### Update multiple documents
-
-```openapi
-paths:
- /_api/document/{collection}:
- patch:
- operationId: updateDocuments
- description: |
- Partially updates documents. The documents to update are specified
- by the `_key` attributes in the body objects. The body of the
- request must contain a JSON array of document updates with the
- attributes to patch (the patch documents). All attributes from the
- patch documents are added to the existing documents if they do
- not yet exist, and overwritten in the existing documents if they do
- exist there.
-
- The values of the `_key`, `_id`, and `_rev` system attributes as well as
- attributes used as sharding keys cannot be changed.
-
- Setting an attribute value to `null` in the patch documents causes a
- value of `null` to be saved for the attribute by default.
-
- If `ignoreRevs` is `false` and there is a `_rev` attribute in a
- document in the body and its value does not match the revision of
- the corresponding document in the database, the precondition is
- violated.
-
- Cluster only: The patch document _may_ contain
- values for the collection's pre-defined shard keys. Values for the shard keys
- are treated as hints to improve performance. Should the shard key
- values be incorrect, ArangoDB may answer with a *not found* error.
-
- Optionally, the query parameter `waitForSync` can be used to force
- synchronization of the document update operation to disk even in case
- that the `waitForSync` flag had been disabled for the entire collection.
- Thus, the `waitForSync` query parameter can be used to force synchronization
- of just specific operations. To use this, set the `waitForSync` parameter
- to `true`. If the `waitForSync` parameter is not specified or set to
- `false`, then the collection's default `waitForSync` behavior is
- applied. The `waitForSync` query parameter cannot be used to disable
- synchronization for collections that have a default `waitForSync` value
- of `true`.
-
- The body of the response contains a JSON array of the same length
- as the input array with the information about the identifier and the
- revision of the updated documents. Each element has the following
- attributes:
- - `_id`, containing the document identifier with the format `<collection-name>/<document-key>`.
- - `_key`, containing the document key that uniquely identifies a document within the collection.
- - `_rev`, containing the new document revision.
-
- In case of an error or violated precondition, an error
- object with the attribute `error` set to `true` and the attribute
- `errorCode` set to the error code is built.
-
- If the query parameter `returnOld` is `true`, then, for each
- generated document, the complete previous revision of the document
- is returned under the `old` attribute in the result.
-
- If the query parameter `returnNew` is `true`, then, for each
- generated document, the complete new document is returned under
- the `new` attribute in the result.
-
- Note that if any precondition is violated or an error occurred with
- some of the documents, the return code is still 201 or 202, but the
- `X-Arango-Error-Codes` HTTP header is set. It contains a map of the
- error codes and how often each kind of error occurred. For example,
- `1200:17,1205:10` means that in 17 cases the error 1200 ("revision conflict")
- has happened, and in 10 cases the error 1205 ("illegal document handle").
- requestBody:
- content:
- application/json:
- schema:
- type: object
- required:
- - documents
- properties:
- documents:
- description: |
- A JSON representation of an array of document updates as objects.
- Each element has to contain a `_key` attribute.
- type: json
- parameters:
- - name: collection
- in: path
- required: true
- description: |
- Name of the `collection` in which the documents are to be updated.
- schema:
- type: string
- - name: keepNull
- in: query
- required: false
- description: |
- If the intention is to delete existing attributes with the patch
- command, set the `keepNull` URL query parameter to `false`. This modifies the
- behavior of the patch command to remove top-level attributes and sub-attributes
- from the existing document that are contained in the patch document with an
- attribute value of `null` (but not attributes of objects that are nested inside
- of arrays).
- schema:
- type: boolean
- - name: mergeObjects
- in: query
- required: false
- description: |
- Controls whether objects (not arrays) are merged if present in
- both the existing and the patch document. If set to `false`, the
- value in the patch document overwrites the existing document's
- value. If set to `true`, objects are merged. The default is
- `true`.
- schema:
- type: boolean
- - name: waitForSync
- in: query
- required: false
- description: |
- Wait until the new documents have been synced to disk.
- schema:
- type: boolean
- - name: ignoreRevs
- in: query
- required: false
- description: |
- By default, or if this is set to `true`, the `_rev` attributes in
- the given documents are ignored. If this is set to `false`, then
- any `_rev` attribute given in a body document is taken as a
- precondition. The document is only updated if the current revision
- is the one specified.
- schema:
- type: boolean
- - name: returnOld
- in: query
- required: false
- description: |
- Return additionally the complete previous revision of the changed
- documents under the attribute `old` in the result.
- schema:
- type: boolean
- - name: returnNew
- in: query
- required: false
- description: |
- Return additionally the complete new documents under the attribute `new`
- in the result.
- schema:
- type: boolean
- - name: silent
- in: query
- required: false
- description: |
- If set to `true`, an empty object is returned as the response if all document operations
- succeed. No meta-data is returned for the updated documents. If at least one
- operation raises an error, an array with the error object(s) is returned.
-
- You can use this option to save network traffic but you cannot map any errors
- to the inputs of your request.
- schema:
- type: boolean
- - name: refillIndexCaches
- in: query
- required: false
- description: |
- Whether to update existing entries in the in-memory edge cache if
- edge documents are updated.
- schema:
- type: boolean
- - name: x-arango-trx-id
- in: header
- required: false
- description: |
- To make this operation a part of a Stream Transaction, set this header to the
- transaction ID returned by the `POST /_api/transaction/begin` call.
- schema:
- type: string
- responses:
- '201':
- description: |
- is returned if `waitForSync` was `true` and operations were processed.
- '202':
- description: |
- is returned if `waitForSync` was `false` and operations were processed.
- '400':
- description: |
- is returned if the body does not contain a valid JSON representation
- of an array of documents. The response body contains
- an error document in this case.
- '404':
- description: |
- is returned if the collection was not found.
- tags:
- - Documents
-```
-
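-**Examples**
-
-A minimal sketch of a partial update that also removes an attribute via
-`keepNull=false` (the example name is hypothetical):
-
-```curl
----
-description: |-
- Updating multiple documents (hypothetical example):
-name: RestDocumentHandlerPatchMultiDocumentSketch
----
-var cn = "products";
-db._drop(cn);
-db._create(cn);
-
-db.products.save({ "_key": "doc1", "a": 1, "b": 2 });
-
-// keepNull=false removes attributes that the patch sets to null.
-var url = "/_api/document/" + cn + "?keepNull=false&returnNew=true";
-var body = '[{ "_key": "doc1", "b": null, "c": 3 }]';
-
-var response = logCurlRequest('PATCH', url, body);
-
-assert(response.code === 202);
-// "a" is kept, "b" is removed, "c" is added.
-assert(response.parsedBody[0]["new"].a === 1);
-assert(response.parsedBody[0]["new"].b === undefined);
-assert(response.parsedBody[0]["new"].c === 3);
-
-logJsonResponse(response);
-db._drop(cn);
-```
-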
-#### Remove multiple documents
-
-```openapi
-paths:
- /_api/document/{collection}:
- delete:
- operationId: deleteDocuments
- description: |
- The body of the request is an array of selectors for documents. A
- selector can be a document key (string), a document identifier (string),
- or an object with a `_key` attribute. This API call removes all
- specified documents from `collection`.
- If the `ignoreRevs` query parameter is `false` and a selector is an
- object with a `_rev` attribute, it is a precondition that the actual
- revision of the removed document in the collection is the specified one.
-
- The body of the response is an array of the same length as the input
- array. For each input selector, the output contains a JSON object
- with the information about the outcome of the operation. If no error
- occurred, then such an object has the following attributes:
- - `_id`, containing the document identifier with the format `<collection-name>/<document-key>`.
- - `_key`, containing the document key that uniquely identifies a document within the collection.
- - `_rev`, containing the document revision.
-
- In case of an error, the object has the `error` attribute set to `true`
- and `errorCode` set to the error code.
-
- If the `waitForSync` parameter is not specified or set to `false`,
- then the collection's default `waitForSync` behavior is applied.
- The `waitForSync` query parameter cannot be used to disable
- synchronization for collections that have a default `waitForSync`
- value of `true`.
-
- If the query parameter `returnOld` is `true`, then
- the complete previous revision of the document
- is returned under the `old` attribute in the result.
-
- Note that if any precondition is violated or an error occurred with
- some of the documents, the return code is still 200 or 202, but the
- `X-Arango-Error-Codes` HTTP header is set. It contains a map of the
- error codes and how often each kind of error occurred. For example,
- `1200:17,1205:10` means that in 17 cases the error 1200 ("revision conflict")
- has happened, and in 10 cases the error 1205 ("illegal document handle").
- requestBody:
- content:
- application/json:
- schema:
- type: object
- required:
- - documents
- properties:
- documents:
- description: |
- An array of selectors for the documents to remove. Each selector is
- a document key, a document identifier, or an object with a `_key` attribute.
- type: json
- parameters:
- - name: collection
- in: path
- required: true
- description: |
- Collection from which documents are removed.
- schema:
- type: string
- - name: waitForSync
- in: query
- required: false
- description: |
- Wait until the deletion operation has been synced to disk.
- schema:
- type: boolean
- - name: returnOld
- in: query
- required: false
- description: |
- Return additionally the complete previous revision of the changed
- document under the attribute `old` in the result.
- schema:
- type: boolean
- - name: silent
- in: query
- required: false
- description: |
- If set to `true`, an empty object is returned as the response if all document operations
- succeed. No meta-data is returned for the deleted documents. If at least one of
- the operations raises an error, an array with the error object(s) is returned.
-
- You can use this option to save network traffic but you cannot map any errors
- to the inputs of your request.
- schema:
- type: boolean
- - name: ignoreRevs
- in: query
- required: false
- description: |
- If set to `true`, ignore any `_rev` attribute in the selectors. No
- revision check is performed. If set to `false` then revisions are checked.
- The default is `true`.
- schema:
- type: boolean
- - name: refillIndexCaches
- in: query
- required: false
- description: |
- Whether to delete existing entries from the in-memory edge cache and refill it
- with other edges if edge documents are removed.
- schema:
- type: boolean
- - name: x-arango-trx-id
- in: header
- required: false
- description: |
- To make this operation a part of a Stream Transaction, set this header to the
- transaction ID returned by the `POST /_api/transaction/begin` call.
- schema:
- type: string
- responses:
- '200':
- description: |
- is returned if `waitForSync` was `true`.
- '202':
- description: |
- is returned if `waitForSync` was `false`.
- '404':
- description: |
- is returned if the collection was not found.
- The response body contains an error document in this case.
- tags:
- - Documents
-```
-
-**Examples**
-
-```curl
----
-description: |-
- Using document keys:
-name: RestDocumentHandlerDeleteDocumentKeyMulti
----
-var assertEqual = require("jsunity").jsUnity.assertions.assertEqual;
- var cn = "products";
- db._drop(cn);
- db._create(cn, { waitForSync: true });
-
-var documents = db.products.save( [
- { "_key": "1", "type": "tv" },
- { "_key": "2", "type": "cookbook" }
- ] );
-
- var url = "/_api/document/" + cn;
-
- var body = [ "1", "2" ];
- var response = logCurlRequest('DELETE', url, body);
-
- assert(response.code === 200);
- assertEqual(response.parsedBody, documents);
-
- logJsonResponse(response);
-db._drop(cn);
-```
-
-```curl
----
-description: |-
- Using document identifiers:
-name: RestDocumentHandlerDeleteDocumentIdentifierMulti
----
-var assertEqual = require("jsunity").jsUnity.assertions.assertEqual;
- var cn = "products";
- db._drop(cn);
- db._create(cn, { waitForSync: true });
-
-var documents = db.products.save( [
- { "_key": "1", "type": "tv" },
- { "_key": "2", "type": "cookbook" }
- ] );
-
- var url = "/_api/document/" + cn;
-
- var body = [ "products/1", "products/2" ];
- var response = logCurlRequest('DELETE', url, body);
-
- assert(response.code === 200);
- assertEqual(response.parsedBody, documents);
-
- logJsonResponse(response);
-db._drop(cn);
-```
-
-```curl
----
-description: |-
- Using objects with document keys:
-name: RestDocumentHandlerDeleteDocumentObjectMulti
----
-var assertEqual = require("jsunity").jsUnity.assertions.assertEqual;
- var cn = "products";
- db._drop(cn);
- db._create(cn, { waitForSync: true });
-
-var documents = db.products.save( [
- { "_key": "1", "type": "tv" },
- { "_key": "2", "type": "cookbook" }
- ] );
-
- var url = "/_api/document/" + cn;
-
- var body = [ { "_key": "1" }, { "_key": "2" } ];
- var response = logCurlRequest('DELETE', url, body);
-
- assert(response.code === 200);
- assertEqual(response.parsedBody, documents);
-
- logJsonResponse(response);
-db._drop(cn);
-```
-
-```curl
----
-description: |-
- Unknown documents:
-name: RestDocumentHandlerDeleteDocumentUnknownMulti
----
-var cn = "products";
-db._drop(cn);
-db._drop("other");
-db._create(cn, { waitForSync: true });
-db._create("other", { waitForSync: true });
-
-var documents = db.products.save([
-  { "_key": "1", "type": "tv" },
-  { "_key": "2", "type": "cookbook" }
-]);
-db.products.remove(documents);
-db.other.save( { "_key": "2" } );
-
-var url = "/_api/document/" + cn;
-
-var body = [ "1", "other/2" ];
-var response = logCurlRequest('DELETE', url, body);
-
-assert(response.code === 202);
-response.parsedBody.forEach(function(doc) {
-  assert(doc.error === true);
-  assert(doc.errorNum === 1202);
-});
-
-logJsonResponse(response);
-db._drop(cn);
-db._drop("other");
-```
-
-```curl
----
-description: |-
- Check revisions:
-name: RestDocumentHandlerDeleteDocumentRevMulti
----
-var assertEqual = require("jsunity").jsUnity.assertions.assertEqual;
- var cn = "products";
- db._drop(cn);
- db._create(cn, { waitForSync: true });
-
-var documents = db.products.save( [
- { "_key": "1", "type": "tv" },
- { "_key": "2", "type": "cookbook" }
- ] );
-
- var url = "/_api/document/" + cn + "?ignoreRevs=false";
-var body = [
- { "_key": "1", "_rev": documents[0]._rev },
- { "_key": "2", "_rev": documents[1]._rev }
- ];
-
- var response = logCurlRequest('DELETE', url, body);
-
- assert(response.code === 200);
- assertEqual(response.parsedBody, documents);
-
- logJsonResponse(response);
-db._drop(cn);
-```
-
-```curl
----
-description: |-
- Revision conflict:
-name: RestDocumentHandlerDeleteDocumentRevConflictMulti
----
-var cn = "products";
-db._drop(cn);
-db._create(cn, { waitForSync: true });
-
-var documents = db.products.save([
-  { "_key": "1", "type": "tv" },
-  { "_key": "2", "type": "cookbook" }
-]);
-
-var url = "/_api/document/" + cn + "?ignoreRevs=false";
-var body = [
-  { "_key": "1", "_rev": "non-matching revision" },
-  { "_key": "2", "_rev": "non-matching revision" }
-];
-
-var response = logCurlRequest('DELETE', url, body);
-
-assert(response.code === 202);
-response.parsedBody.forEach(function(doc) {
-  assert(doc.error === true);
-  assert(doc.errorNum === 1200);
-});
-
-logJsonResponse(response);
-db._drop(cn);
-```
-
-### Read from followers
-
-{{< tag "ArangoDB Enterprise Edition" "ArangoGraph" >}}
-
-Introduced in: v3.10.0
-
-In an ArangoDB cluster, all reads and writes are performed via
-the shard leaders. Shard replicas replicate all operations, but are
-only on hot standby to take over in case of a failure. This ensures the
-consistency of reads and writes and allows for certain transactional
-guarantees.
-
-If high throughput is more important to you than consistency and
-transactional guarantees, then you may allow so-called "dirty reads" or
-"reading from followers" for certain read-only operations. In this case,
-Coordinators are allowed to read not only from leader shards but also from
-follower shards. This has a positive effect because the reads can scale out
-to all DB-Servers that have copies of the data, so the read throughput is
-higher. Note, however, that you still have to go through your Coordinators.
-To get the desired result, you need enough Coordinators, you have to load
-balance your client requests across all of them, and you have to allow
-reads from followers.
-
-You may observe the following data inconsistencies (dirty reads) when
-reading from followers:
-
-- It is possible to see old, **obsolete revisions** of documents. More
- exactly, it is possible that documents are already updated on the leader shard
- but the updates have not yet been replicated to the follower shard
- from which you are reading.
-
-- It is also possible to see changes to documents that
- **have already happened on a replica**, but are not yet officially
- committed on the leader shard.
-
-When no writes are happening, allowing reading from followers is generally safe.
-
-The following APIs support reading from followers:
-
-- Single document reads (`GET /_api/document` and `HEAD /_api/document`)
-- Batch document reads (`PUT /_api/document?onlyget=true`)
-- Read-only AQL queries (`POST /_api/cursor`)
-- The edge API (`GET /_api/edges`)
-- Read-only Stream Transactions and their sub-operations
- (`POST /_api/transaction/begin` etc.)
-
-The following APIs do not support reading from followers:
-
-- The graph API (`GET /_api/gharial` etc.)
-- JavaScript Transactions (`POST /_api/transaction`)
-
-You need to set the following HTTP header in API requests to ask for reads
-from followers:
-
-```
-x-arango-allow-dirty-read: true
-```
-
-This is in line with the older support for reading from followers in the
-[Active Failover](../../deploy/active-failover/_index.md#reading-from-followers)
-deployment mode.
-
-For single requests, you specify this header in the read request.
-For Stream Transactions, the header has to be set on the request that
-creates a read-only transaction.
-
-The `POST /_api/cursor` endpoint also supports a query option that you can set
-instead of the HTTP header:
-
-```json
-{ "query": "...", "options": { "allowDirtyReads": true } }
-```
-
-Every response to a request that could produce dirty reads has
-the following HTTP header:
-
-```
-x-arango-potential-dirty-read: true
-```
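-
-For example, a single-document read that permits dirty reads looks as
-follows. This is a minimal sketch with a hypothetical example name; on a
-single server, the header is accepted but has no effect:
-
-```curl
----
-description: |-
- Single document read that allows dirty reads (hypothetical example):
-name: RestDocumentHandlerDirtyReadSketch
----
-var cn = "products";
-db._drop(cn);
-db._create(cn);
-
-var document = db.products.save({"hello":"world"});
-var url = "/_api/document/" + document._id;
-// In a cluster, this header allows the Coordinator to read from a follower
-// shard; the response then carries the x-arango-potential-dirty-read header.
-var headers = {"x-arango-allow-dirty-read": "true"};
-
-var response = logCurlRequest('GET', url, null, headers);
-
-assert(response.code === 200);
-
-logJsonResponse(response);
-db._drop(cn);
-```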
diff --git a/site/content/3.10/develop/http-api/graphs/named-graphs.md b/site/content/3.10/develop/http-api/graphs/named-graphs.md
deleted file mode 100644
index e3bdcb3b8e..0000000000
--- a/site/content/3.10/develop/http-api/graphs/named-graphs.md
+++ /dev/null
@@ -1,7141 +0,0 @@
----
-title: HTTP interface for named graphs
-menuTitle: Named graphs
-weight: 5
-description: >-
- The HTTP API for named graphs lets you manage General Graphs, SmartGraphs,
- EnterpriseGraphs, and SatelliteGraphs
----
-The HTTP API for [named graphs](../../../graphs/_index.md#named-graphs) is called _Gharial_.
-
-You can manage all types of ArangoDB's named graphs with Gharial:
-- [General Graphs](../../../graphs/general-graphs/_index.md)
-- [SmartGraphs](../../../graphs/smartgraphs/_index.md)
-- [EnterpriseGraphs](../../../graphs/enterprisegraphs/_index.md)
-- [SatelliteGraphs](../../../graphs/satellitegraphs/_index.md)
-
-The examples use the following example graphs:
-
-[_Social Graph_](../../../graphs/example-graphs.md#social-graph):
-
-[_Knows Graph_](../../../graphs/example-graphs.md#knows-graph):
-
-## Management
-
-### List all graphs
-
-```openapi
-paths:
- /_api/gharial:
- get:
- operationId: listGraphs
- description: |
- Lists all graphs stored in this database.
- responses:
- '200':
- description: |
- Is returned if the module is available and the graphs can be listed.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - graphs
- properties:
- error:
- description: |
- A flag indicating that no error occurred.
- type: boolean
- example: false
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 200
- graphs:
- description: |
- A list of all named graphs.
- type: array
- items:
- type: object
- properties:
- graph:
- description: |
- The properties of the named graph.
- type: object
- required:
- - name
- - edgeDefinitions
- - orphanCollections
- - numberOfShards
- - _id
- - _rev
- - replicationFactor
- - isSmart
- - isDisjoint
- - isSatellite
- properties:
- name:
- description: |
- The name of the graph.
- type: string
- edgeDefinitions:
- description: |
- An array of definitions for the relations of the graph.
- Each has the following type:
- type: array
- items:
- type: object
- required:
- - collection
- - from
- - to
- properties:
- collection:
- description: |
- Name of the edge collection, where the edges are stored in.
- type: string
- from:
- description: |
- List of vertex collection names.
- Edges in this collection can only be inserted if their `_from` vertex is in any of the collections listed here.
- type: array
- items:
- type: string
- to:
- description: |
- List of vertex collection names.
-
- Edges in this collection can only be inserted if their `_to` vertex is in any of the collections listed here.
- type: array
- items:
- type: string
- orphanCollections:
- description: |
- An array of additional vertex collections.
- Documents in these collections do not have edges within this graph.
- type: array
- items:
- type: string
- numberOfShards:
- description: |
- Number of shards created for every new collection in the graph.
- type: integer
- _id:
- description: |
- The internal id value of this graph.
- type: string
- _rev:
- description: |
- The revision of this graph. Can be used to make sure to not override
- concurrent modifications to this graph.
- type: string
- replicationFactor:
- description: |
- The replication factor used for every new collection in the graph.
- For SatelliteGraphs, it is the string `"satellite"` (Enterprise Edition only).
- type: integer
- writeConcern:
- description: |
- The default write concern for new collections in the graph.
- It determines how many copies of each shard are required to be
- in sync on the different DB-Servers. If there are fewer than this many copies
- in the cluster, a shard refuses to write. Writes to shards with enough
- up-to-date copies succeed at the same time, however. The value of
- `writeConcern` cannot be greater than `replicationFactor`.
- For SatelliteGraphs, the `writeConcern` is automatically controlled to equal the
- number of DB-Servers and the attribute is not available. _(cluster only)_
- type: integer
- isSmart:
- description: |
- Whether the graph is a SmartGraph (Enterprise Edition only).
- type: boolean
- isDisjoint:
- description: |
- Whether the graph is a Disjoint SmartGraph (Enterprise Edition only).
- type: boolean
- smartGraphAttribute:
- description: |
- Name of the sharding attribute in the SmartGraph case (Enterprise Edition only).
- type: string
- isSatellite:
- description: |
- Whether the graph is a SatelliteGraph (Enterprise Edition only).
- type: boolean
- tags:
- - Graphs
-```
-
-**Examples**
-
-```curl
----
-description: ''
-name: HttpGharialList
----
-var examples = require("@arangodb/graph-examples/example-graph.js");
-examples.dropGraph("social");
-examples.loadGraph("social");
-examples.loadGraph("routeplanner");
-var url = "/_api/gharial";
-var response = logCurlRequest('GET', url);
-
-assert(response.code === 200);
-
-logJsonResponse(response);
-examples.dropGraph("social");
-examples.dropGraph("routeplanner");
-```
-
-### Create a graph
-
-```openapi
-paths:
- /_api/gharial:
- post:
- operationId: createGraph
- description: |
- The creation of a graph requires the name of the graph and a
- definition of its edges.
- parameters:
- - name: waitForSync
- in: query
- required: false
- description: |
- Define if the request should wait until everything is synced to disk.
- Changes the success HTTP response status code.
- schema:
- type: boolean
- requestBody:
- content:
- application/json:
- schema:
- type: object
- required:
- - name
- properties:
- name:
- description: |
- Name of the graph.
- type: string
- edgeDefinitions:
- description: |
- An array of definitions for the relations of the graph.
- Each has the following type:
- type: array
- items:
- type: object
- required:
- - collection
- - from
- - to
- properties:
- collection:
- description: |
- Name of the edge collection, where the edges are stored in.
- type: string
- from:
- description: |
- List of vertex collection names.
- Edges in this collection can only be inserted if their `_from` vertex is in any of the collections listed here.
- type: array
- items:
- type: string
- to:
- description: |
- List of vertex collection names.
-
- Edges in this collection can only be inserted if their `_to` vertex is in any of the collections listed here.
- type: array
- items:
- type: string
- orphanCollections:
- description: |
- An array of additional vertex collections.
- Documents in these collections do not have edges within this graph.
- type: array
- items:
- type: string
- isSmart:
- description: |
- Whether the created graph should be a SmartGraph (Enterprise Edition only).
- type: boolean
- isDisjoint:
- description: |
- Whether to create a Disjoint SmartGraph instead of a regular SmartGraph
- (Enterprise Edition only).
- type: boolean
- options:
- description: |
- A JSON object to define options for creating collections within this graph.
- It can contain the following attributes:
- type: object
- properties:
- smartGraphAttribute:
- description: |
- Only has an effect in the Enterprise Edition and is required if `isSmart` is `true`.
- The attribute name that is used to smartly shard the vertices of a graph.
- Every vertex in this SmartGraph has to have this attribute.
- Cannot be modified later.
- type: string
- satellites:
- description: |
- An array of collection names that is used to create SatelliteCollections
- for a (Disjoint) SmartGraph using SatelliteCollections (Enterprise Edition only).
- Each array element must be a string and a valid collection name.
- The collection type cannot be modified later.
- type: array
- items:
- type: string
- numberOfShards:
- description: |
- The number of shards that is used for every collection within this graph.
- Cannot be modified later.
- type: integer
- replicationFactor:
- description: |
- The replication factor used when initially creating collections for this graph.
- Can be set to `"satellite"` to create a SatelliteGraph, which then ignores
- `numberOfShards`, `minReplicationFactor`, and `writeConcern`
- (Enterprise Edition only).
- type: integer
- writeConcern:
- description: |
- Write concern for new collections in the graph.
- It determines how many copies of each shard are required to be
- in sync on the different DB-Servers. If there are fewer than this many copies
- in the cluster, a shard refuses to write. Writes to shards with enough
- up-to-date copies succeed at the same time, however. The value of
- `writeConcern` cannot be greater than `replicationFactor`.
- For SatelliteGraphs, the `writeConcern` is automatically controlled to equal the
- number of DB-Servers and the attribute is not available. _(cluster only)_
- type: integer
- required:
- - numberOfShards
- - replicationFactor
- responses:
- '201':
- description: |
- Is returned if the graph can be created and `waitForSync` is enabled
- for the `_graphs` collection, or given in the request.
- The response body contains the graph configuration that has been stored.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - graph
- properties:
- error:
- description: |
- A flag indicating that no error occurred.
- type: boolean
- example: false
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 201
- graph:
- description: |
- The information about the newly created graph.
- type: object
- required:
- - name
- - edgeDefinitions
- - orphanCollections
- - numberOfShards
- - _id
- - _rev
- - replicationFactor
- - isSmart
- - isDisjoint
- - isSatellite
- properties:
- name:
- description: |
- The name of the graph.
- type: string
- edgeDefinitions:
- description: |
- An array of definitions for the relations of the graph.
- Each has the following type:
- type: array
- items:
- type: object
- required:
- - collection
- - from
- - to
- properties:
- collection:
- description: |
- Name of the edge collection, where the edges are stored in.
- type: string
- from:
- description: |
- List of vertex collection names.
- Edges in this collection can only be inserted if their `_from` vertex is in any of the collections listed here.
- type: array
- items:
- type: string
- to:
- description: |
- List of vertex collection names.
-
- Edges in this collection can only be inserted if their `_to` vertex is in any of the collections listed here.
- type: array
- items:
- type: string
- orphanCollections:
- description: |
- An array of additional vertex collections.
- Documents in these collections do not have edges within this graph.
- type: array
- items:
- type: string
- numberOfShards:
- description: |
- Number of shards created for every new collection in the graph.
- type: integer
- _id:
- description: |
- The internal id value of this graph.
- type: string
- _rev:
- description: |
- The revision of this graph. Can be used to make sure to not override
- concurrent modifications to this graph.
- type: string
- replicationFactor:
- description: |
- The replication factor used for every new collection in the graph.
- For SatelliteGraphs, it is the string `"satellite"` (Enterprise Edition only).
- type: integer
- writeConcern:
- description: |
- The default write concern for new collections in the graph.
- It determines how many copies of each shard are required to be
- in sync on the different DB-Servers. If there are fewer than this many copies
- in the cluster, a shard refuses to write. Writes to shards with enough
- up-to-date copies succeed at the same time, however. The value of
- `writeConcern` cannot be greater than `replicationFactor`.
- For SatelliteGraphs, the `writeConcern` is automatically controlled to equal the
- number of DB-Servers and the attribute is not available. _(cluster only)_
- type: integer
- isSmart:
- description: |
- Whether the graph is a SmartGraph (Enterprise Edition only).
- type: boolean
- isDisjoint:
- description: |
- Whether the graph is a Disjoint SmartGraph (Enterprise Edition only).
- type: boolean
- smartGraphAttribute:
- description: |
- Name of the sharding attribute in the SmartGraph case (Enterprise Edition only).
- type: string
- isSatellite:
- description: |
- Whether the graph is a SatelliteGraph (Enterprise Edition only).
- type: boolean
- '202':
- description: |
- Is returned if the graph can be created and `waitForSync` is disabled
- for the `_graphs` collection and not given in the request.
- The response body contains the graph configuration that has been stored.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - graph
- properties:
- error:
- description: |
- A flag indicating that no error occurred.
- type: boolean
- example: false
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 202
- graph:
- description: |
- The information about the newly created graph.
- type: object
- required:
- - name
- - edgeDefinitions
- - orphanCollections
- - numberOfShards
- - _id
- - _rev
- - replicationFactor
- - isSmart
- - isDisjoint
- - isSatellite
- properties:
- name:
- description: |
- The name of the graph.
- type: string
- edgeDefinitions:
- description: |
- An array of definitions for the relations of the graph.
- Each has the following type:
- type: array
- items:
- type: object
- required:
- - collection
- - from
- - to
- properties:
- collection:
- description: |
- Name of the edge collection, where the edges are stored in.
- type: string
- from:
- description: |
- List of vertex collection names.
- Edges in this collection can only be inserted if their `_from` vertex is in any of the collections listed here.
- type: array
- items:
- type: string
- to:
- description: |
- List of vertex collection names.
-
- Edges in this collection can only be inserted if their `_to` vertex is in any of the collections listed here.
- type: array
- items:
- type: string
- orphanCollections:
- description: |
- An array of additional vertex collections.
- Documents in these collections do not have edges within this graph.
- type: array
- items:
- type: string
- numberOfShards:
- description: |
- Number of shards created for every new collection in the graph.
- type: integer
- _id:
- description: |
- The internal id value of this graph.
- type: string
- _rev:
- description: |
- The revision of this graph. Can be used to make sure to not override
- concurrent modifications to this graph.
- type: string
- replicationFactor:
- description: |
- The replication factor used for every new collection in the graph.
- For SatelliteGraphs, it is the string `"satellite"` (Enterprise Edition only).
- type: integer
- writeConcern:
- description: |
- The default write concern for new collections in the graph.
- It determines how many copies of each shard are required to be
- in sync on the different DB-Servers. If there are fewer than this many copies
- in the cluster, a shard refuses to write. Writes to shards with enough
- up-to-date copies succeed at the same time, however. The value of
- `writeConcern` cannot be greater than `replicationFactor`.
- For SatelliteGraphs, the `writeConcern` is automatically controlled to equal the
- number of DB-Servers and the attribute is not available. _(cluster only)_
- type: integer
- isSmart:
- description: |
- Whether the graph is a SmartGraph (Enterprise Edition only).
- type: boolean
- isDisjoint:
- description: |
- Whether the graph is a Disjoint SmartGraph (Enterprise Edition only).
- type: boolean
- smartGraphAttribute:
- description: |
- Name of the sharding attribute in the SmartGraph case (Enterprise Edition only).
- type: string
- isSatellite:
- description: |
- Whether the graph is a SatelliteGraph (Enterprise Edition only).
- type: boolean
- '400':
- description: |
- Returned if the request is in a wrong format.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - errorNum
- - errorMessage
- properties:
- error:
- description: |
- A flag indicating that an error occurred.
- type: boolean
- example: true
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 400
- errorNum:
- description: |
- ArangoDB error number for the error that occurred.
- type: integer
- errorMessage:
- description: |
- A descriptive error message.
- type: string
- '403':
- description: |
- Returned if your user has insufficient rights.
- In order to create a graph, you need to have at least the following privileges:
- - `Administrate` access on the database.
- - `Read Only` access on every collection used within this graph.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - errorNum
- - errorMessage
- properties:
- error:
- description: |
- A flag indicating that an error occurred.
- type: boolean
- example: true
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 403
- errorNum:
- description: |
- ArangoDB error number for the error that occurred.
- type: integer
- errorMessage:
- description: |
- A descriptive error message.
- type: string
- '409':
- description: |
- Returned if there is a conflict storing the graph. This can occur
- either if a graph with this name already exists, or if there is an
- edge definition with the same edge collection but different `from`
- and `to` vertex collections in any other graph.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - errorNum
- - errorMessage
- properties:
- error:
- description: |
- A flag indicating that an error occurred.
- type: boolean
- example: true
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 409
- errorNum:
- description: |
- ArangoDB error number for the error that occurred.
- type: integer
- errorMessage:
- description: |
- A descriptive error message.
- type: string
- tags:
- - Graphs
-```
-
-**Examples**
-
-```curl
----
-description: |-
- Create a General Graph. This graph type does not make use of any sharding
- strategy and is useful on the single server.
-name: HttpGharialCreate
----
-var graph = require("@arangodb/general-graph");
-if (graph._exists("myGraph")) {
- graph._drop("myGraph", true);
-}
-var url = "/_api/gharial";
-body = {
- name: "myGraph",
- edgeDefinitions: [{
- collection: "edges",
- from: [ "startVertices" ],
- to: [ "endVertices" ]
- }]
-};
-
-var response = logCurlRequest('POST', url, body);
-
-assert(response.code === 202);
-
-logJsonResponse(response);
-
-graph._drop("myGraph", true);
-```
-
-```curl
----
-description: |-
- Create a SmartGraph. This graph uses 9 shards and
- is sharded by the "region" attribute.
- Available in the Enterprise Edition only.
-name: HttpGharialCreateSmart
----
-var graph = require("@arangodb/general-graph");
-if (graph._exists("smartGraph")) {
- graph._drop("smartGraph", true);
-}
-var url = "/_api/gharial";
-body = {
- name: "smartGraph",
- edgeDefinitions: [{
- collection: "edges",
- from: [ "startVertices" ],
- to: [ "endVertices" ]
- }],
- orphanCollections: [ "orphanVertices" ],
- isSmart: true,
- options: {
- replicationFactor: 2,
- numberOfShards: 9,
- smartGraphAttribute: "region"
- }
-};
-
-var response = logCurlRequest('POST', url, body);
-
-assert(response.code === 202);
-
-logJsonResponse(response);
-
-graph._drop("smartGraph", true);
-```
-
-```curl
----
-description: |-
- Create a disjoint SmartGraph. This graph uses 9 shards and
- is sharded by the "region" attribute.
- Available in the Enterprise Edition only.
- Note that as you are using a disjoint version, you can only
- create edges between vertices sharing the same region.
-name: HttpGharialCreateDisjointSmart
----
-var graph = require("@arangodb/general-graph");
- if (graph._exists("disjointSmartGraph")) {
- graph._drop("disjointSmartGraph", true);
-}
-var url = "/_api/gharial";
-body = {
-name: "disjointSmartGraph",
-edgeDefinitions: [{
-collection: "edges",
-from: [ "startVertices" ],
-to: [ "endVertices" ]
-}],
-orphanCollections: [ "orphanVertices" ],
-isSmart: true,
-options: {
-isDisjoint: true,
-replicationFactor: 2,
-numberOfShards: 9,
-smartGraphAttribute: "region"
-}
-};
-
-var response = logCurlRequest('POST', url, body);
-
-assert(response.code === 202);
-
-logJsonResponse(response);
-
-graph._drop("disjointSmartGraph", true);
-```
-
-```curl
----
-description: |-
- Create a SmartGraph with a satellite vertex collection.
- It uses the collection "endVertices" as a satellite collection.
- This collection is cloned to all servers, all other vertex
- collections are split into 9 shards
- and are sharded by the "region" attribute.
- Available in the Enterprise Edition only.
-name: HttpGharialCreateSmartWithSatellites
----
-var graph = require("@arangodb/general-graph");
- if (graph._exists("smartGraph")) {
- graph._drop("smartGraph", true);
-}
-var url = "/_api/gharial";
-body = {
-name: "smartGraph",
-edgeDefinitions: [{
-collection: "edges",
-from: [ "startVertices" ],
-to: [ "endVertices" ]
-}],
-orphanCollections: [ "orphanVertices" ],
-isSmart: true,
-options: {
-replicationFactor: 2,
-numberOfShards: 9,
-smartGraphAttribute: "region",
-satellites: [ "endVertices" ]
-}
-};
-
-var response = logCurlRequest('POST', url, body);
-
-assert(response.code === 202);
-
-logJsonResponse(response);
-
-graph._drop("smartGraph", true);
-```
-
-```curl
----
-description: |-
- Create an EnterpriseGraph. This graph uses 9 shards and
- does not make use of a specific sharding attribute.
- Available in the Enterprise Edition only.
-name: HttpGharialCreateEnterprise
----
-var graph = require("@arangodb/general-graph");
- if (graph._exists("enterpriseGraph")) {
- graph._drop("enterpriseGraph", true);
-}
-var url = "/_api/gharial";
-body = {
-name: "enterpriseGraph",
-edgeDefinitions: [{
-collection: "edges",
-from: [ "startVertices" ],
-to: [ "endVertices" ]
-}],
-orphanCollections: [ ],
-isSmart: true,
-options: {
-replicationFactor: 2,
-numberOfShards: 9,
-}
-};
-
-var response = logCurlRequest('POST', url, body);
-
-assert(response.code === 202);
-
-logJsonResponse(response);
-
-graph._drop("enterpriseGraph", true);
-```
-
-```curl
----
-description: |-
- Create a SatelliteGraph. A SatelliteGraph does not use
- shards but uses "satellite" as the replicationFactor.
- Make sure to keep this graph small as it is cloned
- to every server.
- Available in the Enterprise Edition only.
-name: HttpGharialCreateSatellite
----
-var graph = require("@arangodb/general-graph");
- if (graph._exists("satelliteGraph")) {
- graph._drop("satelliteGraph", true);
-}
-var url = "/_api/gharial";
-body = {
-name: "satelliteGraph",
-edgeDefinitions: [{
-collection: "edges",
-from: [ "startVertices" ],
-to: [ "endVertices" ]
-}],
-orphanCollections: [ ],
-options: {
-replicationFactor: "satellite"
-}
-};
-
-var response = logCurlRequest('POST', url, body);
-
-assert(response.code === 202);
-
-logJsonResponse(response);
-
-graph._drop("satelliteGraph", true);
-```
-
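-The `numberOfShards`, `replicationFactor`, and `writeConcern` attributes shown
-in the graph objects below are controlled through the `options` object when
-creating the graph, as in the examples above. A minimal arangosh-style sketch,
-assuming `writeConcern` is accepted alongside `replicationFactor` in `options`
-(this is not one of the generated examples):
-
-```js
-// Sketch: every collection created for this graph gets 3 replicas, and a
-// shard only accepts writes while at least 2 copies are in sync.
-var body = {
- name: "reliableGraph",
- edgeDefinitions: [{
- collection: "edges",
- from: [ "startVertices" ],
- to: [ "endVertices" ]
- }],
- options: {
- replicationFactor: 3,
- writeConcern: 2 // must not exceed replicationFactor
- }
-};
-var response = logCurlRequest('POST', '/_api/gharial', body);
-```
-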
-### Get a graph
-
-```openapi
-paths:
- /_api/gharial/{graph}:
- get:
- operationId: getGraph
- description: |
- Retrieves information about a given graph.
- Returns the edge definitions as well as the orphan collections,
- or returns an error if the graph does not exist.
- parameters:
- - name: graph
- in: path
- required: true
- description: |
- The name of the graph.
- schema:
- type: string
- responses:
- '200':
- description: |
- Returns the graph if it can be found.
- The result has the following format:
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - graph
- properties:
- error:
- description: |
- A flag indicating that no error occurred.
- type: boolean
- example: false
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 200
- graph:
- description: |
- The information about the graph.
- type: object
- required:
- - name
- - edgeDefinitions
- - orphanCollections
- - numberOfShards
- - _id
- - _rev
- - replicationFactor
- - isSmart
- - isDisjoint
- - isSatellite
- properties:
- name:
- description: |
- The name of the graph.
- type: string
- edgeDefinitions:
- description: |
- An array of definitions for the relations of the graph.
- Each has the following type:
- type: array
- items:
- type: object
- required:
- - collection
- - from
- - to
- properties:
- collection:
- description: |
- The name of the edge collection where the edges are stored.
- type: string
- from:
- description: |
- A list of vertex collection names.
- Edges in this collection can only be inserted if their `_from` vertex is in one of these collections.
- type: array
- items:
- type: string
- to:
- description: |
- A list of vertex collection names.
- Edges in this collection can only be inserted if their `_to` vertex is in one of these collections.
- type: array
- items:
- type: string
- orphanCollections:
- description: |
- An array of additional vertex collections.
- Documents in these collections do not have edges within this graph.
- type: array
- items:
- type: string
- numberOfShards:
- description: |
- Number of shards created for every new collection in the graph.
- type: integer
- _id:
- description: |
- The internal id value of this graph.
- type: string
- _rev:
- description: |
- The revision of this graph. Can be used to make sure to not override
- concurrent modifications to this graph.
- type: string
- replicationFactor:
- description: |
- The replication factor used for every new collection in the graph.
- For SatelliteGraphs, it is the string `"satellite"` (Enterprise Edition only).
- type: integer
- writeConcern:
- description: |
- The default write concern for new collections in the graph.
- It determines how many copies of each shard are required to be
- in sync on the different DB-Servers. If there are fewer than this many copies
- in the cluster, a shard refuses to write. Writes to shards with enough
- up-to-date copies succeed at the same time, however. The value of
- `writeConcern` cannot be greater than `replicationFactor`.
- For SatelliteGraphs, the `writeConcern` is automatically controlled to equal the
- number of DB-Servers and the attribute is not available. _(cluster only)_
- type: integer
- isSmart:
- description: |
- Whether the graph is a SmartGraph (Enterprise Edition only).
- type: boolean
- isDisjoint:
- description: |
- Whether the graph is a Disjoint SmartGraph (Enterprise Edition only).
- type: boolean
- smartGraphAttribute:
- description: |
- Name of the sharding attribute in the SmartGraph case (Enterprise Edition only).
- type: string
- isSatellite:
- description: |
- Flag if the graph is a SatelliteGraph (Enterprise Edition only) or not.
- type: boolean
- '404':
- description: |
- Returned if no graph with this name can be found.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - errorNum
- - errorMessage
- properties:
- error:
- description: |
- A flag indicating that an error occurred.
- type: boolean
- example: true
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 404
- errorNum:
- description: |
- ArangoDB error number for the error that occurred.
- type: integer
- errorMessage:
- description: |
- A descriptive error message.
- type: string
- tags:
- - Graphs
-```
-
-**Examples**
-
-```curl
----
-description: ''
-name: HttpGharialGetGraph
----
-var graph = require("@arangodb/general-graph");
-if (graph._exists("myGraph")) {
- graph._drop("myGraph", true);
-}
-graph._create("myGraph", [{
- collection: "edges",
- from: [ "startVertices" ],
- to: [ "endVertices" ]
-}]);
-var url = "/_api/gharial/myGraph";
-
-var response = logCurlRequest('GET', url);
-
-assert(response.code === 200);
-
-logJsonResponse(response);
-
-graph._drop("myGraph", true);
-```
-
-### Drop a graph
-
-```openapi
-paths:
- /_api/gharial/{graph}:
- delete:
- operationId: deleteGraph
- description: |
- Drops an existing graph object by name.
- Optionally, all collections not used by other graphs
- can be dropped as well.
- parameters:
- - name: graph
- in: path
- required: true
- description: |
- The name of the graph.
- schema:
- type: string
- - name: dropCollections
- in: query
- required: false
- description: |
- Drop the collections of this graph as well. Collections are only
- dropped if they are not used in other graphs.
- schema:
- type: boolean
- responses:
- '202':
- description: |
- Is returned if the graph can be dropped.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - removed
- properties:
- error:
- description: |
- A flag indicating that no error occurred.
- type: boolean
- example: false
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 202
- removed:
- description: |
- Always `true`.
- type: boolean
- example: true
- '403':
- description: |
- Returned if your user has insufficient rights.
- In order to drop a graph, you need to have at least the following privileges:
- - `Administrate` access on the database.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - errorNum
- - errorMessage
- properties:
- error:
- description: |
- A flag indicating that an error occurred.
- type: boolean
- example: true
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 403
- errorNum:
- description: |
- ArangoDB error number for the error that occurred.
- type: integer
- errorMessage:
- description: |
- A descriptive error message.
- type: string
- '404':
- description: |
- Returned if no graph with this name can be found.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - errorNum
- - errorMessage
- properties:
- error:
- description: |
- A flag indicating that an error occurred.
- type: boolean
- example: true
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 404
- errorNum:
- description: |
- ArangoDB error number for the error that occurred.
- type: integer
- errorMessage:
- description: |
- A descriptive error message.
- type: string
- tags:
- - Graphs
-```
-
-**Examples**
-
-```curl
----
-description: ''
-name: HttpGharialDrop
----
-var examples = require("@arangodb/graph-examples/example-graph.js");
-examples.dropGraph("social");
-examples.loadGraph("social");
-var url = "/_api/gharial/social?dropCollections=true";
-var response = logCurlRequest('DELETE', url);
-
-assert(response.code === 202);
-
-logJsonResponse(response);
-examples.dropGraph("social");
-```
-
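-Without `dropCollections=true`, only the graph definition is removed and the
-underlying collections survive. A minimal sketch of this default behavior,
-reusing the `social` example graph (not one of the generated examples):
-
-```js
-var examples = require("@arangodb/graph-examples/example-graph.js");
-examples.dropGraph("social");
-examples.loadGraph("social");
-// No dropCollections parameter: the collections are kept
-var response = logCurlRequest('DELETE', '/_api/gharial/social');
-assert(response.code === 202);
-assert(db._collection("relation") !== null);
-db._drop("relation");
-db._drop("male");
-db._drop("female");
-```
-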
-### List vertex collections
-
-```openapi
-paths:
- /_api/gharial/{graph}/vertex:
- get:
- operationId: listVertexCollections
- description: |
- Lists all vertex collections within this graph, including orphan collections.
- parameters:
- - name: graph
- in: path
- required: true
- description: |
- The name of the graph.
- schema:
- type: string
- responses:
- '200':
- description: |
- Is returned if the collections can be listed.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - collections
- properties:
- error:
- description: |
- A flag indicating that no error occurred.
- type: boolean
- example: false
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 200
- collections:
- description: |
- The list of all vertex collections within this graph.
- Includes the vertex collections used in edge definitions
- as well as orphan collections.
- type: array
- items:
- type: string
- '404':
- description: |
- Returned if no graph with this name can be found.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - errorNum
- - errorMessage
- properties:
- error:
- description: |
- A flag indicating that an error occurred.
- type: boolean
- example: true
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 404
- errorNum:
- description: |
- ArangoDB error number for the error that occurred.
- type: integer
- errorMessage:
- description: |
- A descriptive error message.
- type: string
- tags:
- - Graphs
-```
-
-**Examples**
-
-```curl
----
-description: ''
-name: HttpGharialListVertex
----
-var examples = require("@arangodb/graph-examples/example-graph.js");
-examples.dropGraph("social");
-examples.loadGraph("social");
-var url = "/_api/gharial/social/vertex";
-var response = logCurlRequest('GET', url);
-
-assert(response.code === 200);
-
-logJsonResponse(response);
-examples.dropGraph("social");
-```
-
-### Add a vertex collection
-
-Adding a vertex collection on its own to a graph adds it as an orphan collection.
-If you want to use an additional vertex collection for graph relations, add it
-by [adding a new edge definition](#add-an-edge-definition) or
-[modifying an existing edge definition](#replace-an-edge-definition) instead.
-
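-As a brief sketch with illustrative collection names, the first request body
-below adds an orphan collection, whereas making a collection part of a
-relation requires an edge definition on the edge endpoint instead:
-
-```js
-// POST /_api/gharial/{graph}/vertex - adds "extraVertices" as an orphan
-var orphanBody = {
- collection: "extraVertices"
-};
-
-// POST /_api/gharial/{graph}/edge - uses "extraVertices" in a relation
-var edgeDefinitionBody = {
- collection: "newEdges",
- from: [ "extraVertices" ],
- to: [ "endVertices" ]
-};
-```
-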
-```openapi
-paths:
- /_api/gharial/{graph}/vertex:
- post:
- operationId: addVertexCollection
- description: |
- Adds a vertex collection to the set of orphan collections of the graph.
- If the collection does not exist, it is created.
- parameters:
- - name: graph
- in: path
- required: true
- description: |
- The name of the graph.
- schema:
- type: string
- requestBody:
- content:
- application/json:
- schema:
- type: object
- required:
- - collection
- properties:
- collection:
- description: |
- The name of the vertex collection to add to the graph definition.
- type: string
- options:
- description: |
- A JSON object to set options for creating vertex collections.
- type: object
- properties:
- satellites:
- description: |
- An array of collection names that is used to create SatelliteCollections
- for a (Disjoint) SmartGraph using SatelliteCollections (Enterprise Edition only).
- Each array element must be a string and a valid collection name.
- The collection type cannot be modified later.
- type: array
- items:
- type: string
- responses:
- '201':
- description: |
- Is returned if the collection can be created and `waitForSync` is enabled
- for the `_graphs` collection, or given in the request.
- The response body contains the graph configuration that has been stored.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - graph
- properties:
- error:
- description: |
- A flag indicating that no error occurred.
- type: boolean
- example: false
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 201
- graph:
- description: |
- The information about the modified graph.
- type: object
- required:
- - name
- - edgeDefinitions
- - orphanCollections
- - numberOfShards
- - _id
- - _rev
- - replicationFactor
- - isSmart
- - isDisjoint
- - isSatellite
- properties:
- name:
- description: |
- The name of the graph.
- type: string
- edgeDefinitions:
- description: |
- An array of definitions for the relations of the graph.
- Each has the following type:
- type: array
- items:
- type: object
- required:
- - collection
- - from
- - to
- properties:
- collection:
- description: |
- The name of the edge collection where the edges are stored.
- type: string
- from:
- description: |
- A list of vertex collection names.
- Edges in this collection can only be inserted if their `_from` vertex is in one of these collections.
- type: array
- items:
- type: string
- to:
- description: |
- A list of vertex collection names.
- Edges in this collection can only be inserted if their `_to` vertex is in one of these collections.
- type: array
- items:
- type: string
- orphanCollections:
- description: |
- An array of additional vertex collections.
- Documents in these collections do not have edges within this graph.
- type: array
- items:
- type: string
- numberOfShards:
- description: |
- Number of shards created for every new collection in the graph.
- type: integer
- _id:
- description: |
- The internal id value of this graph.
- type: string
- _rev:
- description: |
- The revision of this graph. Can be used to make sure to not override
- concurrent modifications to this graph.
- type: string
- replicationFactor:
- description: |
- The replication factor used for every new collection in the graph.
- For SatelliteGraphs, it is the string `"satellite"` (Enterprise Edition only).
- type: integer
- writeConcern:
- description: |
- The default write concern for new collections in the graph.
- It determines how many copies of each shard are required to be
- in sync on the different DB-Servers. If there are fewer than this many copies
- in the cluster, a shard refuses to write. Writes to shards with enough
- up-to-date copies succeed at the same time, however. The value of
- `writeConcern` cannot be greater than `replicationFactor`.
- For SatelliteGraphs, the `writeConcern` is automatically controlled to equal the
- number of DB-Servers and the attribute is not available. _(cluster only)_
- type: integer
- isSmart:
- description: |
- Whether the graph is a SmartGraph (Enterprise Edition only).
- type: boolean
- isDisjoint:
- description: |
- Whether the graph is a Disjoint SmartGraph (Enterprise Edition only).
- type: boolean
- smartGraphAttribute:
- description: |
- Name of the sharding attribute in the SmartGraph case (Enterprise Edition only).
- type: string
- isSatellite:
- description: |
- Flag if the graph is a SatelliteGraph (Enterprise Edition only) or not.
- type: boolean
- '202':
- description: |
- Is returned if the collection can be created and `waitForSync` is disabled
- for the `_graphs` collection, or given in the request.
- The response body contains the graph configuration that has been stored.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - graph
- properties:
- error:
- description: |
- A flag indicating that no error occurred.
- type: boolean
- example: false
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 202
- graph:
- description: |
- The information about the modified graph.
- type: object
- required:
- - name
- - edgeDefinitions
- - orphanCollections
- - numberOfShards
- - _id
- - _rev
- - replicationFactor
- - isSmart
- - isDisjoint
- - isSatellite
- properties:
- name:
- description: |
- The name of the graph.
- type: string
- edgeDefinitions:
- description: |
- An array of definitions for the relations of the graph.
- Each has the following type:
- type: array
- items:
- type: object
- required:
- - collection
- - from
- - to
- properties:
- collection:
- description: |
- The name of the edge collection where the edges are stored.
- type: string
- from:
- description: |
- A list of vertex collection names.
- Edges in this collection can only be inserted if their `_from` vertex is in one of these collections.
- type: array
- items:
- type: string
- to:
- description: |
- A list of vertex collection names.
- Edges in this collection can only be inserted if their `_to` vertex is in one of these collections.
- type: array
- items:
- type: string
- orphanCollections:
- description: |
- An array of additional vertex collections.
- Documents in these collections do not have edges within this graph.
- type: array
- items:
- type: string
- numberOfShards:
- description: |
- Number of shards created for every new collection in the graph.
- type: integer
- _id:
- description: |
- The internal id value of this graph.
- type: string
- _rev:
- description: |
- The revision of this graph. Can be used to make sure to not override
- concurrent modifications to this graph.
- type: string
- replicationFactor:
- description: |
- The replication factor used for every new collection in the graph.
- For SatelliteGraphs, it is the string `"satellite"` (Enterprise Edition only).
- type: integer
- writeConcern:
- description: |
- The default write concern for new collections in the graph.
- It determines how many copies of each shard are required to be
- in sync on the different DB-Servers. If there are fewer than this many copies
- in the cluster, a shard refuses to write. Writes to shards with enough
- up-to-date copies succeed at the same time, however. The value of
- `writeConcern` cannot be greater than `replicationFactor`.
- For SatelliteGraphs, the `writeConcern` is automatically controlled to equal the
- number of DB-Servers and the attribute is not available. _(cluster only)_
- type: integer
- isSmart:
- description: |
- Whether the graph is a SmartGraph (Enterprise Edition only).
- type: boolean
- isDisjoint:
- description: |
- Whether the graph is a Disjoint SmartGraph (Enterprise Edition only).
- type: boolean
- smartGraphAttribute:
- description: |
- Name of the sharding attribute in the SmartGraph case (Enterprise Edition only).
- type: string
- isSatellite:
- description: |
- Flag if the graph is a SatelliteGraph (Enterprise Edition only) or not.
- type: boolean
- '400':
- description: |
- Returned if the request is in an invalid format.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - errorNum
- - errorMessage
- properties:
- error:
- description: |
- A flag indicating that an error occurred.
- type: boolean
- example: true
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 400
- errorNum:
- description: |
- ArangoDB error number for the error that occurred.
- type: integer
- errorMessage:
- description: |
- A descriptive error message.
- type: string
- '403':
- description: |
- Returned if your user has insufficient rights.
- In order to modify a graph, you need to have at least the following privileges:
- - `Administrate` access on the database.
- - `Read Only` access on every collection used within this graph.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - errorNum
- - errorMessage
- properties:
- error:
- description: |
- A flag indicating that an error occurred.
- type: boolean
- example: true
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 403
- errorNum:
- description: |
- ArangoDB error number for the error that occurred.
- type: integer
- errorMessage:
- description: |
- A descriptive error message.
- type: string
- '404':
- description: |
- Returned if no graph with this name can be found.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - errorNum
- - errorMessage
- properties:
- error:
- description: |
- A flag indicating that an error occurred.
- type: boolean
- example: true
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 404
- errorNum:
- description: |
- ArangoDB error number for the error that occurred.
- type: integer
- errorMessage:
- description: |
- A descriptive error message.
- type: string
- tags:
- - Graphs
-```
-
-**Examples**
-
-```curl
----
-description: ''
-name: HttpGharialAddVertexCol
----
-var examples = require("@arangodb/graph-examples/example-graph.js");
-examples.dropGraph("social");
-examples.loadGraph("social");
-var url = "/_api/gharial/social/vertex";
-body = {
- collection: "otherVertices"
-};
-var response = logCurlRequest('POST', url, body);
-
-assert(response.code === 202);
-
-logJsonResponse(response);
-examples.dropGraph("social");
-```
-
-### Remove a vertex collection
-
-```openapi
-paths:
- /_api/gharial/{graph}/vertex/{collection}:
- delete:
- operationId: deleteVertexCollection
- description: |
- Removes a vertex collection from the list of the graph's
- orphan collections. It can optionally delete the collection if it is
- not used in any other graph.
-
- You cannot remove vertex collections that are used in one of the
- edge definitions of the graph. You need to modify or remove the
- edge definition first in order to fully remove a vertex collection from
- the graph.
- parameters:
- - name: graph
- in: path
- required: true
- description: |
- The name of the graph.
- schema:
- type: string
- - name: collection
- in: path
- required: true
- description: |
- The name of the vertex collection.
- schema:
- type: string
- - name: dropCollection
- in: query
- required: false
- description: |
- Drop the collection in addition to removing it from the graph.
- The collection is only dropped if it is not used in other graphs.
- schema:
- type: boolean
- responses:
- '200':
- description: |
- Returned if the vertex collection was removed from the graph successfully
- and `waitForSync` is `true`.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - graph
- properties:
- error:
- description: |
- A flag indicating that no error occurred.
- type: boolean
- example: false
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 200
- graph:
- description: |
- The information about the modified graph.
- type: object
- required:
- - name
- - edgeDefinitions
- - orphanCollections
- - numberOfShards
- - _id
- - _rev
- - replicationFactor
- - isSmart
- - isDisjoint
- - isSatellite
- properties:
- name:
- description: |
- The name of the graph.
- type: string
- edgeDefinitions:
- description: |
- An array of definitions for the relations of the graph.
- Each has the following type:
- type: array
- items:
- type: object
- required:
- - collection
- - from
- - to
- properties:
- collection:
- description: |
- The name of the edge collection where the edges are stored.
- type: string
- from:
- description: |
- A list of vertex collection names.
- Edges in this collection can only be inserted if their `_from` vertex is in one of these collections.
- type: array
- items:
- type: string
- to:
- description: |
- A list of vertex collection names.
- Edges in this collection can only be inserted if their `_to` vertex is in one of these collections.
- type: array
- items:
- type: string
- orphanCollections:
- description: |
- An array of additional vertex collections.
- Documents in these collections do not have edges within this graph.
- type: array
- items:
- type: string
- numberOfShards:
- description: |
- Number of shards created for every new collection in the graph.
- type: integer
- _id:
- description: |
- The internal id value of this graph.
- type: string
- _rev:
- description: |
- The revision of this graph. Can be used to make sure to not override
- concurrent modifications to this graph.
- type: string
- replicationFactor:
- description: |
- The replication factor used for every new collection in the graph.
- For SatelliteGraphs, it is the string `"satellite"` (Enterprise Edition only).
- type: integer
- writeConcern:
- description: |
- The default write concern for new collections in the graph.
- It determines how many copies of each shard are required to be
- in sync on the different DB-Servers. If there are fewer than this many copies
- in the cluster, a shard refuses to write. Writes to shards with enough
- up-to-date copies succeed at the same time, however. The value of
- `writeConcern` cannot be greater than `replicationFactor`.
- For SatelliteGraphs, the `writeConcern` is automatically controlled to equal the
- number of DB-Servers and the attribute is not available. _(cluster only)_
- type: integer
- isSmart:
- description: |
- Whether the graph is a SmartGraph (Enterprise Edition only).
- type: boolean
- isDisjoint:
- description: |
- Whether the graph is a Disjoint SmartGraph (Enterprise Edition only).
- type: boolean
- smartGraphAttribute:
- description: |
- Name of the sharding attribute in the SmartGraph case (Enterprise Edition only).
- type: string
- isSatellite:
- description: |
- Flag if the graph is a SatelliteGraph (Enterprise Edition only) or not.
- type: boolean
- '202':
- description: |
- Returned if the request was successful but `waitForSync` is `false`.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - graph
- properties:
- error:
- description: |
- A flag indicating that no error occurred.
- type: boolean
- example: false
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 202
- graph:
- description: |
- The information about the modified graph.
- type: object
- required:
- - name
- - edgeDefinitions
- - orphanCollections
- - numberOfShards
- - _id
- - _rev
- - replicationFactor
- - isSmart
- - isDisjoint
- - isSatellite
- properties:
- name:
- description: |
- The name of the graph.
- type: string
- edgeDefinitions:
- description: |
- An array of definitions for the relations of the graph.
- Each has the following type:
- type: array
- items:
- type: object
- required:
- - collection
- - from
- - to
- properties:
- collection:
- description: |
- The name of the edge collection where the edges are stored.
- type: string
- from:
- description: |
- A list of vertex collection names.
- Edges in this collection can only be inserted if their `_from` vertex is in one of these collections.
- type: array
- items:
- type: string
- to:
- description: |
- A list of vertex collection names.
- Edges in this collection can only be inserted if their `_to` vertex is in one of these collections.
- type: array
- items:
- type: string
- orphanCollections:
- description: |
- An array of additional vertex collections.
- Documents in these collections do not have edges within this graph.
- type: array
- items:
- type: string
- numberOfShards:
- description: |
- Number of shards created for every new collection in the graph.
- type: integer
- _id:
- description: |
- The internal id value of this graph.
- type: string
- _rev:
- description: |
- The revision of this graph. Can be used to make sure to not override
- concurrent modifications to this graph.
- type: string
- replicationFactor:
- description: |
- The replication factor used for every new collection in the graph.
- For SatelliteGraphs, it is the string `"satellite"` (Enterprise Edition only).
- type: integer
- writeConcern:
- description: |
- The default write concern for new collections in the graph.
- It determines how many copies of each shard are required to be
- in sync on the different DB-Servers. If there are fewer than this many copies
- in the cluster, a shard refuses to write. Writes to shards with enough
- up-to-date copies succeed at the same time, however. The value of
- `writeConcern` cannot be greater than `replicationFactor`.
- For SatelliteGraphs, the `writeConcern` is automatically controlled to equal the
- number of DB-Servers and the attribute is not available. _(cluster only)_
- type: integer
- isSmart:
- description: |
- Whether the graph is a SmartGraph (Enterprise Edition only).
- type: boolean
- isDisjoint:
- description: |
- Whether the graph is a Disjoint SmartGraph (Enterprise Edition only).
- type: boolean
- smartGraphAttribute:
- description: |
- Name of the sharding attribute in the SmartGraph case (Enterprise Edition only).
- type: string
- isSatellite:
- description: |
- Flag if the graph is a SatelliteGraph (Enterprise Edition only) or not.
- type: boolean
- '400':
- description: |
- Returned if the vertex collection is still used in an edge definition.
- In this case, it cannot be removed from the graph yet; it has to be
- removed from the edge definition first.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - errorNum
- - errorMessage
- properties:
- error:
- description: |
- A flag indicating that an error occurred.
- type: boolean
- example: true
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 400
- errorNum:
- description: |
- ArangoDB error number for the error that occurred.
- type: integer
- errorMessage:
- description: |
- A descriptive error message.
- type: string
- '403':
- description: |
- Returned if your user has insufficient rights.
- In order to remove a vertex collection, you need to have at least the following privileges:
- - `Administrate` access on the database.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - errorNum
- - errorMessage
- properties:
- error:
- description: |
- A flag indicating that an error occurred.
- type: boolean
- example: true
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 403
- errorNum:
- description: |
- ArangoDB error number for the error that occurred.
- type: integer
- errorMessage:
- description: |
- A descriptive error message.
- type: string
- '404':
- description: |
- Returned if no graph with this name can be found.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - errorNum
- - errorMessage
- properties:
- error:
- description: |
- A flag indicating that an error occurred.
- type: boolean
- example: true
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 404
- errorNum:
- description: |
- ArangoDB error number for the error that occurred.
- type: integer
- errorMessage:
- description: |
- A descriptive error message.
- type: string
- tags:
- - Graphs
-```
-
-**Examples**
-
-```curl
----
-description: |-
- You can remove vertex collections that are not used in any edge definition:
-name: HttpGharialRemoveVertexCollection
----
-var examples = require("@arangodb/graph-examples/example-graph.js");
-examples.dropGraph("social");
-var g = examples.loadGraph("social");
-g._addVertexCollection("otherVertices");
-var url = "/_api/gharial/social/vertex/otherVertices";
-var response = logCurlRequest('DELETE', url);
-
-assert(response.code === 202);
-
-logJsonResponse(response);
-examples.dropGraph("social");
-db._drop("otherVertices");
-```
-
-```curl
----
-description: |-
- You cannot remove vertex collections that are used in edge definitions:
-name: HttpGharialRemoveVertexCollectionFailed
----
-var examples = require("@arangodb/graph-examples/example-graph.js");
-examples.dropGraph("social");
-var g = examples.loadGraph("social");
-var url = "/_api/gharial/social/vertex/male";
-var response = logCurlRequest('DELETE', url);
-
-assert(response.code === 400);
-
-logJsonResponse(response);
-db._drop("male");
-db._drop("female");
-db._drop("relation");
-examples.dropGraph("social");
-```
-
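-To fully remove a vertex collection that is still used in an edge definition,
-first change the edge definition so that it no longer references the
-collection, which turns the collection into an orphan, and then remove it.
-A hedged sketch of this two-step procedure (not one of the generated examples):
-
-```js
-var examples = require("@arangodb/graph-examples/example-graph.js");
-examples.dropGraph("social");
-examples.loadGraph("social");
-// Step 1: narrow the edge definition so it no longer uses "male"
-var def = { collection: "relation", from: ["female"], to: ["female"] };
-logCurlRequest('PUT', '/_api/gharial/social/edge/relation', def);
-// Step 2: "male" is now an orphan collection and can be removed
-var response = logCurlRequest('DELETE', '/_api/gharial/social/vertex/male');
-assert(response.code === 202);
-examples.dropGraph("social");
-db._drop("male");
-```
-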
-### List edge collections
-
-```openapi
-paths:
- /_api/gharial/{graph}/edge:
- get:
- operationId: listEdgeCollections
- description: |
- Lists all edge collections within this graph.
- parameters:
- - name: graph
- in: path
- required: true
- description: |
- The name of the graph.
- schema:
- type: string
- responses:
- '200':
- description: |
- Is returned if the edge definitions can be listed.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - collections
- properties:
- error:
- description: |
- A flag indicating that no error occurred.
- type: boolean
- example: false
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 200
- collections:
- description: |
- A list of all edge collections used in the edge definitions
- of this graph.
- type: array
- items:
- type: string
- '404':
- description: |
- Returned if no graph with this name can be found.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - errorNum
- - errorMessage
- properties:
- error:
- description: |
- A flag indicating that an error occurred.
- type: boolean
- example: true
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 404
- errorNum:
- description: |
- ArangoDB error number for the error that occurred.
- type: integer
- errorMessage:
- description: |
- A descriptive error message.
- type: string
- tags:
- - Graphs
-```
-
-**Examples**
-
-```curl
----
-description: ''
-name: HttpGharialListEdge
----
-var examples = require("@arangodb/graph-examples/example-graph.js");
-examples.dropGraph("social");
-examples.loadGraph("social");
-var url = "/_api/gharial/social/edge";
-var response = logCurlRequest('GET', url);
-
-assert(response.code === 200);
-
-logJsonResponse(response);
-examples.dropGraph("social");
-```
-
-### Add an edge definition
-
-```openapi
-paths:
- /_api/gharial/{graph}/edge:
- post:
- operationId: createEdgeDefinition
- description: |
- Adds an additional edge definition to the graph.
-
- This edge definition has to contain a `collection` and arrays of
- `from` and `to` vertex collections. An edge definition can only
- be added if this definition is either not used in any other graph, or
- it is used with exactly the same definition. For example, it is not
- possible to store a definition "e" from "v1" to "v2" in one graph, and
- "e" from "v2" to "v1" in another graph, but both can have "e" from
- "v1" to "v2".
-
- Additionally, collection creation options can be set.
- parameters:
- - name: graph
- in: path
- required: true
- description: |
- The name of the graph.
- schema:
- type: string
- requestBody:
- content:
- application/json:
- schema:
- type: object
- required:
- - collection
- - from
- - to
- properties:
- collection:
- description: |
- The name of the edge collection to be used.
- type: string
- from:
- description: |
- One or many vertex collections that can contain source vertices.
- type: array
- items:
- type: string
- to:
- description: |
- One or many vertex collections that can contain target vertices.
- type: array
- items:
- type: string
- options:
- description: |
- A JSON object to set options for creating collections within this
- edge definition.
- type: object
- properties:
- satellites:
- description: |
- An array of collection names that is used to create SatelliteCollections
- for a (Disjoint) SmartGraph using SatelliteCollections (Enterprise Edition only).
- Each array element must be a string and a valid collection name.
- The collection type cannot be modified later.
- type: array
- items:
- type: string
- responses:
- '201':
- description: |
- Returned if the definition can be added successfully and
- `waitForSync` is enabled for the `_graphs` collection.
- The response body contains the graph configuration that has been stored.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - graph
- properties:
- error:
- description: |
- A flag indicating that no error occurred.
- type: boolean
- example: false
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 201
- graph:
- description: |
- The information about the modified graph.
- type: object
- required:
- - name
- - edgeDefinitions
- - orphanCollections
- - numberOfShards
- - _id
- - _rev
- - replicationFactor
- - isSmart
- - isDisjoint
- - isSatellite
- properties:
- name:
- description: |
- The name of the graph.
- type: string
- edgeDefinitions:
- description: |
- An array of definitions for the relations of the graph.
- Each has the following type:
- type: array
- items:
- type: object
- required:
- - collection
- - from
- - to
- properties:
- collection:
- description: |
- The name of the edge collection where the edges are stored.
- type: string
- from:
- description: |
- A list of vertex collection names.
- Edges in this collection can only be inserted if their `_from` vertex is in one of these collections.
- type: array
- items:
- type: string
- to:
- description: |
- A list of vertex collection names.
- Edges in this collection can only be inserted if their `_to` vertex is in one of these collections.
- type: array
- items:
- type: string
- orphanCollections:
- description: |
- An array of additional vertex collections.
- Documents in these collections do not have edges within this graph.
- type: array
- items:
- type: string
- numberOfShards:
- description: |
- Number of shards created for every new collection in the graph.
- type: integer
- _id:
- description: |
- The internal id value of this graph.
- type: string
- _rev:
- description: |
- The revision of this graph. Can be used to make sure to not override
- concurrent modifications to this graph.
- type: string
- replicationFactor:
- description: |
- The replication factor used for every new collection in the graph.
- For SatelliteGraphs, it is the string `"satellite"` (Enterprise Edition only).
- type: integer
- writeConcern:
- description: |
- The default write concern for new collections in the graph.
- It determines how many copies of each shard are required to be
- in sync on the different DB-Servers. If there are fewer than this many copies
- in the cluster, a shard refuses to write. Writes to shards with enough
- up-to-date copies succeed at the same time, however. The value of
- `writeConcern` cannot be greater than `replicationFactor`.
- For SatelliteGraphs, the `writeConcern` is automatically controlled to equal the
- number of DB-Servers and the attribute is not available. _(cluster only)_
- type: integer
- isSmart:
- description: |
- Whether the graph is a SmartGraph (Enterprise Edition only).
- type: boolean
- isDisjoint:
- description: |
- Whether the graph is a Disjoint SmartGraph (Enterprise Edition only).
- type: boolean
- smartGraphAttribute:
- description: |
- Name of the sharding attribute in the SmartGraph case (Enterprise Edition only).
- type: string
- isSatellite:
- description: |
- Flag if the graph is a SatelliteGraph (Enterprise Edition only) or not.
- type: boolean
- '202':
- description: |
- Returned if the definition can be added successfully and
- `waitForSync` is disabled for the `_graphs` collection.
- The response body contains the graph configuration that has been stored.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - graph
- properties:
- error:
- description: |
- A flag indicating that no error occurred.
- type: boolean
- example: false
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 202
- graph:
- description: |
- The information about the modified graph.
- type: object
- required:
- - name
- - edgeDefinitions
- - orphanCollections
- - numberOfShards
- - _id
- - _rev
- - replicationFactor
- - isSmart
- - isDisjoint
- - isSatellite
- properties:
- name:
- description: |
- The name of the graph.
- type: string
- edgeDefinitions:
- description: |
- An array of definitions for the relations of the graph.
- Each has the following type:
- type: array
- items:
- type: object
- required:
- - collection
- - from
- - to
- properties:
- collection:
- description: |
- The name of the edge collection where the edges are stored.
- type: string
- from:
- description: |
- A list of vertex collection names.
- Edges in this collection can only be inserted if their `_from` vertex is in one of these collections.
- type: array
- items:
- type: string
- to:
- description: |
- A list of vertex collection names.
- Edges in this collection can only be inserted if their `_to` vertex is in one of these collections.
- type: array
- items:
- type: string
- orphanCollections:
- description: |
- An array of additional vertex collections.
- Documents in these collections do not have edges within this graph.
- type: array
- items:
- type: string
- numberOfShards:
- description: |
- Number of shards created for every new collection in the graph.
- type: integer
- _id:
- description: |
- The internal id value of this graph.
- type: string
- _rev:
- description: |
- The revision of this graph. Can be used to make sure to not override
- concurrent modifications to this graph.
- type: string
- replicationFactor:
- description: |
- The replication factor used for every new collection in the graph.
- For SatelliteGraphs, it is the string `"satellite"` (Enterprise Edition only).
- type: integer
- writeConcern:
- description: |
- The default write concern for new collections in the graph.
- It determines how many copies of each shard are required to be
- in sync on the different DB-Servers. If there are fewer than this many copies
- in the cluster, a shard refuses to write. Writes to shards with enough
- up-to-date copies succeed at the same time, however. The value of
- `writeConcern` cannot be greater than `replicationFactor`.
- For SatelliteGraphs, the `writeConcern` is automatically controlled to equal the
- number of DB-Servers and the attribute is not available. _(cluster only)_
- type: integer
- isSmart:
- description: |
- Whether the graph is a SmartGraph (Enterprise Edition only).
- type: boolean
- isDisjoint:
- description: |
- Whether the graph is a Disjoint SmartGraph (Enterprise Edition only).
- type: boolean
- smartGraphAttribute:
- description: |
- Name of the sharding attribute in the SmartGraph case (Enterprise Edition only).
- type: string
- isSatellite:
- description: |
- Flag if the graph is a SatelliteGraph (Enterprise Edition only) or not.
- type: boolean
- '400':
- description: |
- Returned if the edge definition cannot be added.
- This can be because it is ill-formed, or if there is an
- edge definition with the same edge collection but different `from`
- and `to` vertex collections in any other graph.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - errorNum
- - errorMessage
- properties:
- error:
- description: |
- A flag indicating that an error occurred.
- type: boolean
- example: true
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 400
- errorNum:
- description: |
- ArangoDB error number for the error that occurred.
- type: integer
- errorMessage:
- description: |
- A descriptive error message.
- type: string
- '403':
- description: |
- Returned if your user has insufficient rights.
- In order to modify a graph, you need to have at least the following privileges:
- - `Administrate` access on the database.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - errorNum
- - errorMessage
- properties:
- error:
- description: |
- A flag indicating that an error occurred.
- type: boolean
- example: true
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 403
- errorNum:
- description: |
- ArangoDB error number for the error that occurred.
- type: integer
- errorMessage:
- description: |
- A descriptive error message.
- type: string
- '404':
- description: |
- Returned if no graph with this name can be found.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - errorNum
- - errorMessage
- properties:
- error:
- description: |
- A flag indicating that an error occurred.
- type: boolean
- example: true
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 404
- errorNum:
- description: |
- ArangoDB error number for the error that occurred.
- type: integer
- errorMessage:
- description: |
- A descriptive error message.
- type: string
- tags:
- - Graphs
-```
-
-**Examples**
-
-```curl
----
-description: ''
-name: HttpGharialAddEdgeCol
----
-var examples = require("@arangodb/graph-examples/example-graph.js");
-examples.dropGraph("social");
-examples.loadGraph("social");
-var url = "/_api/gharial/social/edge";
-body = {
- collection: "works_in",
- from: ["female", "male"],
- to: ["city"]
-};
-var response = logCurlRequest('POST', url, body);
-
-assert(response.code === 202);
-
-logJsonResponse(response);
-examples.dropGraph("social");
-```
-
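-The shared-definition constraint described above can be sketched as follows:
-once one graph defines the edge collection `e` from `v1` to `v2`, another
-graph can add the identical definition but not a conflicting one. The names
-are illustrative and this is not one of the generated examples:
-
-```js
-var graph = require("@arangodb/general-graph");
-if (graph._exists("g1")) { graph._drop("g1", true); }
-if (graph._exists("g2")) { graph._drop("g2", true); }
-graph._create("g1", [graph._relation("e", ["v1"], ["v2"])]);
-graph._create("g2", []);
-
-// Conflicting direction: rejected
-var bad = logCurlRequest('POST', '/_api/gharial/g2/edge',
- { collection: "e", from: ["v2"], to: ["v1"] });
-assert(bad.code === 400);
-
-// Identical definition: accepted
-var ok = logCurlRequest('POST', '/_api/gharial/g2/edge',
- { collection: "e", from: ["v1"], to: ["v2"] });
-assert(ok.code === 202);
-
-graph._drop("g2", true);
-graph._drop("g1", true);
-```
-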
-### Replace an edge definition
-
-```openapi
-paths:
- /_api/gharial/{graph}/edge/{collection}:
- put:
- operationId: replaceEdgeDefinition
- description: |
- Changes one specific edge definition.
- This modifies all occurrences of this definition in all graphs known to your database.
- parameters:
- - name: graph
- in: path
- required: true
- description: |
- The name of the graph.
- schema:
- type: string
- - name: collection
- in: path
- required: true
- description: |
- The name of the edge collection used in the edge definition.
- schema:
- type: string
- - name: waitForSync
- in: query
- required: false
- description: |
- Define if the request should wait until synced to disk.
- schema:
- type: boolean
- - name: dropCollections
- in: query
- required: false
- description: |
- Drop the edge collection in addition to removing it from the graph.
- The collection is only dropped if it is not used in other graphs.
- schema:
- type: boolean
- requestBody:
- content:
- application/json:
- schema:
- type: object
- required:
- - collection
- - from
- - to
- properties:
- collection:
- description: |
- The name of the edge collection to be used.
- type: string
- from:
- description: |
- One or many vertex collections that can contain source vertices.
- type: array
- items:
- type: string
- to:
- description: |
- One or many vertex collections that can contain target vertices.
- type: array
- items:
- type: string
- options:
- description: |
- A JSON object to set options for modifying collections within this
- edge definition.
- type: object
- properties:
- satellites:
- description: |
- An array of collection names that is used to create SatelliteCollections
- for a (Disjoint) SmartGraph using SatelliteCollections (Enterprise Edition only).
- Each array element must be a string and a valid collection name.
- The collection type cannot be modified later.
- type: array
- items:
- type: string
- responses:
- '201':
- description: |
- Returned if the request was successful and `waitForSync` is `true`.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - graph
- properties:
- error:
- description: |
- A flag indicating that no error occurred.
- type: boolean
- example: false
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 201
- graph:
- description: |
- The information about the modified graph.
- type: object
- required:
- - name
- - edgeDefinitions
- - orphanCollections
- - numberOfShards
- - _id
- - _rev
- - replicationFactor
- - isSmart
- - isDisjoint
- - isSatellite
- properties:
- name:
- description: |
- The name of the graph.
- type: string
- edgeDefinitions:
- description: |
- An array of definitions for the relations of the graph.
- Each has the following type:
- type: array
- items:
- type: object
- required:
- - collection
- - from
- - to
- properties:
- collection:
- description: |
- The name of the edge collection where the edges are stored.
- type: string
- from:
- description: |
- A list of vertex collection names.
- Edges in this collection can only be inserted if their `_from` vertex is in one of these collections.
- type: array
- items:
- type: string
- to:
- description: |
- A list of vertex collection names.
- Edges in this collection can only be inserted if their `_to` vertex is in one of these collections.
- type: array
- items:
- type: string
- orphanCollections:
- description: |
- An array of additional vertex collections.
- Documents in these collections do not have edges within this graph.
- type: array
- items:
- type: string
- numberOfShards:
- description: |
- Number of shards created for every new collection in the graph.
- type: integer
- _id:
- description: |
- The internal id value of this graph.
- type: string
- _rev:
- description: |
- The revision of this graph. Can be used to make sure to not override
- concurrent modifications to this graph.
- type: string
- replicationFactor:
- description: |
- The replication factor used for every new collection in the graph.
- For SatelliteGraphs, it is the string `"satellite"` (Enterprise Edition only).
- type: integer
- writeConcern:
- description: |
- The default write concern for new collections in the graph.
- It determines how many copies of each shard are required to be
- in sync on the different DB-Servers. If there are fewer than this many copies
- in the cluster, a shard refuses to write. Writes to shards with enough
- up-to-date copies succeed at the same time, however. The value of
- `writeConcern` cannot be greater than `replicationFactor`.
- For SatelliteGraphs, the `writeConcern` is automatically controlled to equal the
- number of DB-Servers and the attribute is not available. _(cluster only)_
- type: integer
- isSmart:
- description: |
- Whether the graph is a SmartGraph (Enterprise Edition only).
- type: boolean
- isDisjoint:
- description: |
- Whether the graph is a Disjoint SmartGraph (Enterprise Edition only).
- type: boolean
- smartGraphAttribute:
- description: |
- Name of the sharding attribute in the SmartGraph case (Enterprise Edition only).
- type: string
- isSatellite:
- description: |
- Flag if the graph is a SatelliteGraph (Enterprise Edition only) or not.
- type: boolean
- '202':
- description: |
- Returned if the request was successful but `waitForSync` is `false`.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - graph
- properties:
- error:
- description: |
- A flag indicating that no error occurred.
- type: boolean
- example: false
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 202
- graph:
- description: |
- The information about the modified graph.
- type: object
- required:
- - name
- - edgeDefinitions
- - orphanCollections
- - numberOfShards
- - _id
- - _rev
- - replicationFactor
- - isSmart
- - isDisjoint
- - isSatellite
- properties:
- name:
- description: |
- The name of the graph.
- type: string
- edgeDefinitions:
- description: |
- An array of definitions for the relations of the graph.
- Each has the following type:
- type: array
- items:
- type: object
- required:
- - collection
- - from
- - to
- properties:
- collection:
- description: |
- The name of the edge collection where the edges are stored.
- type: string
- from:
- description: |
- A list of vertex collection names.
- Edges in this collection can only be inserted if their `_from` vertex is in one of these collections.
- type: array
- items:
- type: string
- to:
- description: |
- A list of vertex collection names.
- Edges in this collection can only be inserted if their `_to` vertex is in one of these collections.
- type: array
- items:
- type: string
- orphanCollections:
- description: |
- An array of additional vertex collections.
- Documents in these collections do not have edges within this graph.
- type: array
- items:
- type: string
- numberOfShards:
- description: |
- Number of shards created for every new collection in the graph.
- type: integer
- _id:
- description: |
- The internal id value of this graph.
- type: string
- _rev:
- description: |
- The revision of this graph. Can be used to make sure to not override
- concurrent modifications to this graph.
- type: string
- replicationFactor:
- description: |
- The replication factor used for every new collection in the graph.
- For SatelliteGraphs, it is the string `"satellite"` (Enterprise Edition only).
- type: integer
- writeConcern:
- description: |
- The default write concern for new collections in the graph.
- It determines how many copies of each shard are required to be
- in sync on the different DB-Servers. If there are fewer than this many copies
- in the cluster, a shard refuses to write. Writes to shards with enough
- up-to-date copies succeed at the same time, however. The value of
- `writeConcern` cannot be greater than `replicationFactor`.
- For SatelliteGraphs, the `writeConcern` is automatically controlled to equal the
- number of DB-Servers and the attribute is not available. _(cluster only)_
- type: integer
- isSmart:
- description: |
- Whether the graph is a SmartGraph (Enterprise Edition only).
- type: boolean
- isDisjoint:
- description: |
- Whether the graph is a Disjoint SmartGraph (Enterprise Edition only).
- type: boolean
- smartGraphAttribute:
- description: |
- Name of the sharding attribute in the SmartGraph case (Enterprise Edition only).
- type: string
- isSatellite:
- description: |
- Flag if the graph is a SatelliteGraph (Enterprise Edition only) or not.
- type: boolean
- '400':
- description: |
- Returned if the new edge definition is ill-formed and cannot be used.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - errorNum
- - errorMessage
- properties:
- error:
- description: |
- A flag indicating that an error occurred.
- type: boolean
- example: true
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 400
- errorNum:
- description: |
- ArangoDB error number for the error that occurred.
- type: integer
- errorMessage:
- description: |
- A descriptive error message.
- type: string
- '403':
- description: |
- Returned if your user has insufficient rights.
- In order to modify a graph, you need to have at least the following privileges:
- - `Administrate` access on the database.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - errorNum
- - errorMessage
- properties:
- error:
- description: |
- A flag indicating that an error occurred.
- type: boolean
- example: true
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 403
- errorNum:
- description: |
- ArangoDB error number for the error that occurred.
- type: integer
- errorMessage:
- description: |
- A descriptive error message.
- type: string
- '404':
- description: |
- Returned if no graph with this name can be found, or if no edge definition
- with this name is found in the graph.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - errorNum
- - errorMessage
- properties:
- error:
- description: |
- A flag indicating that an error occurred.
- type: boolean
- example: true
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 404
- errorNum:
- description: |
- ArangoDB error number for the error that occurred.
- type: integer
- errorMessage:
- description: |
- A descriptive error message.
- type: string
- tags:
- - Graphs
-```
-
-**Examples**
-
-```curl
----
-description: ''
-name: HttpGharialReplaceEdgeCol
----
-var examples = require("@arangodb/graph-examples/example-graph.js");
-examples.dropGraph("social");
-examples.loadGraph("social");
-var url = "/_api/gharial/social/edge/relation";
-var body = {
- collection: "relation",
- from: ["female", "male", "animal"],
- to: ["female", "male", "animal"]
-};
-var response = logCurlRequest('PUT', url, body);
-
-assert(response.code === 202);
-
-logJsonResponse(response);
-examples.dropGraph("social");
-```
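-
-The same call can be made programmatically as well. The following is a minimal
-sketch, assuming an arangosh session connected to a deployment that has the
-`social` example graph, and using arangosh's generic `arango` HTTP helper:
-
-```js
-// Sketch: replace the "relation" edge definition of the "social" graph.
-var body = {
-  collection: "relation",
-  from: ["female", "male", "animal"],
-  to: ["female", "male", "animal"]
-};
-var result = arango.PUT("/_api/gharial/social/edge/relation", body);
-print(result.graph.edgeDefinitions);
-```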
-
-### Remove an edge definition
-
-```openapi
-paths:
- /_api/gharial/{graph}/edge/{collection}:
- delete:
- operationId: deleteEdgeDefinition
- description: |
- Remove one edge definition from the graph. This only removes the
- edge collection from the graph definition. The vertex collections of the
- edge definition become orphan collections but otherwise remain untouched
- and can still be used in your queries.
- parameters:
- - name: graph
- in: path
- required: true
- description: |
- The name of the graph.
- schema:
- type: string
- - name: collection
- in: path
- required: true
- description: |
- The name of the edge collection used in the edge definition.
- schema:
- type: string
- - name: waitForSync
- in: query
- required: false
- description: |
- Define if the request should wait until synced to disk.
- schema:
- type: boolean
- - name: dropCollections
- in: query
- required: false
- description: |
- Drop the edge collection in addition to removing it from the graph.
- The collection is only dropped if it is not used in other graphs.
- schema:
- type: boolean
- responses:
- '201':
- description: |
- Returned if the edge definition can be removed from the graph
- and `waitForSync` is `true`.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - graph
- properties:
- error:
- description: |
- A flag indicating that no error occurred.
- type: boolean
- example: false
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 201
- graph:
- description: |
- The information about the modified graph.
- type: object
- required:
- - name
- - edgeDefinitions
- - orphanCollections
- - numberOfShards
- - _id
- - _rev
- - replicationFactor
- - isSmart
- - isDisjoint
- - isSatellite
- properties:
- name:
- description: |
- The name of the graph.
- type: string
- edgeDefinitions:
- description: |
- An array of definitions for the relations of the graph.
- Each has the following type:
- type: array
- items:
- type: object
- required:
- - collection
- - from
- - to
- properties:
- collection:
- description: |
-                                Name of the edge collection where the edges are stored.
- type: string
- from:
- description: |
-                                List of vertex collection names.
-                                Edges in this collection can only be inserted if their `_from` is in any of the collections listed here.
- type: array
- items:
- type: string
- to:
- description: |
-                                List of vertex collection names.
-                                Edges in this collection can only be inserted if their `_to` is in any of the collections listed here.
- type: array
- items:
- type: string
- orphanCollections:
- description: |
- An array of additional vertex collections.
- Documents in these collections do not have edges within this graph.
- type: array
- items:
- type: string
- numberOfShards:
- description: |
- Number of shards created for every new collection in the graph.
- type: integer
- _id:
- description: |
- The internal id value of this graph.
- type: string
- _rev:
- description: |
-                          The revision of this graph. Can be used to make sure not to
-                          override concurrent modifications to this graph.
- type: string
- replicationFactor:
- description: |
- The replication factor used for every new collection in the graph.
- For SatelliteGraphs, it is the string `"satellite"` (Enterprise Edition only).
- type: integer
- writeConcern:
- description: |
- The default write concern for new collections in the graph.
- It determines how many copies of each shard are required to be
-                          in sync on the different DB-Servers. If there are fewer than this many copies
- in the cluster, a shard refuses to write. Writes to shards with enough
- up-to-date copies succeed at the same time, however. The value of
- `writeConcern` cannot be greater than `replicationFactor`.
- For SatelliteGraphs, the `writeConcern` is automatically controlled to equal the
- number of DB-Servers and the attribute is not available. _(cluster only)_
- type: integer
- isSmart:
- description: |
- Whether the graph is a SmartGraph (Enterprise Edition only).
- type: boolean
- isDisjoint:
- description: |
- Whether the graph is a Disjoint SmartGraph (Enterprise Edition only).
- type: boolean
- smartGraphAttribute:
- description: |
- Name of the sharding attribute in the SmartGraph case (Enterprise Edition only).
- type: string
- isSatellite:
- description: |
-                          Whether the graph is a SatelliteGraph (Enterprise Edition only).
- type: boolean
- '202':
- description: |
- Returned if the edge definition can be removed from the graph and
- `waitForSync` is `false`.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - graph
- properties:
- error:
- description: |
- A flag indicating that no error occurred.
- type: boolean
- example: false
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 202
- graph:
- description: |
- The information about the modified graph.
- type: object
- required:
- - name
- - edgeDefinitions
- - orphanCollections
- - numberOfShards
- - _id
- - _rev
- - replicationFactor
- - isSmart
- - isDisjoint
- - isSatellite
- properties:
- name:
- description: |
- The name of the graph.
- type: string
- edgeDefinitions:
- description: |
- An array of definitions for the relations of the graph.
- Each has the following type:
- type: array
- items:
- type: object
- required:
- - collection
- - from
- - to
- properties:
- collection:
- description: |
-                                Name of the edge collection where the edges are stored.
- type: string
- from:
- description: |
-                                List of vertex collection names.
-                                Edges in this collection can only be inserted if their `_from` is in any of the collections listed here.
- type: array
- items:
- type: string
- to:
- description: |
-                                List of vertex collection names.
-                                Edges in this collection can only be inserted if their `_to` is in any of the collections listed here.
- type: array
- items:
- type: string
- orphanCollections:
- description: |
- An array of additional vertex collections.
- Documents in these collections do not have edges within this graph.
- type: array
- items:
- type: string
- numberOfShards:
- description: |
- Number of shards created for every new collection in the graph.
- type: integer
- _id:
- description: |
- The internal id value of this graph.
- type: string
- _rev:
- description: |
-                          The revision of this graph. Can be used to make sure not to
-                          override concurrent modifications to this graph.
- type: string
- replicationFactor:
- description: |
- The replication factor used for every new collection in the graph.
- For SatelliteGraphs, it is the string `"satellite"` (Enterprise Edition only).
- type: integer
- writeConcern:
- description: |
- The default write concern for new collections in the graph.
- It determines how many copies of each shard are required to be
-                          in sync on the different DB-Servers. If there are fewer than this many copies
- in the cluster, a shard refuses to write. Writes to shards with enough
- up-to-date copies succeed at the same time, however. The value of
- `writeConcern` cannot be greater than `replicationFactor`.
- For SatelliteGraphs, the `writeConcern` is automatically controlled to equal the
- number of DB-Servers and the attribute is not available. _(cluster only)_
- type: integer
- isSmart:
- description: |
- Whether the graph is a SmartGraph (Enterprise Edition only).
- type: boolean
- isDisjoint:
- description: |
- Whether the graph is a Disjoint SmartGraph (Enterprise Edition only).
- type: boolean
- smartGraphAttribute:
- description: |
- Name of the sharding attribute in the SmartGraph case (Enterprise Edition only).
- type: string
- isSatellite:
- description: |
-                          Whether the graph is a SatelliteGraph (Enterprise Edition only).
- type: boolean
- '403':
- description: |
- Returned if your user has insufficient rights.
-            In order to remove an edge definition, you need to have at least the following privileges:
- - `Administrate` access on the database.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - errorNum
- - errorMessage
- properties:
- error:
- description: |
- A flag indicating that an error occurred.
- type: boolean
- example: true
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 403
- errorNum:
- description: |
- ArangoDB error number for the error that occurred.
- type: integer
- errorMessage:
- description: |
- A descriptive error message.
- type: string
- '404':
- description: |
- Returned if no graph with this name can be found,
- or if no edge definition with this name is found in the graph.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - errorNum
- - errorMessage
- properties:
- error:
- description: |
- A flag indicating that an error occurred.
- type: boolean
- example: true
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 404
- errorNum:
- description: |
- ArangoDB error number for the error that occurred.
- type: integer
- errorMessage:
- description: |
- A descriptive error message.
- type: string
- tags:
- - Graphs
-```
-
-**Examples**
-
-```curl
----
-description: ''
-name: HttpGharialEdgeDefinitionRemove
----
-var examples = require("@arangodb/graph-examples/example-graph.js");
-examples.dropGraph("social");
-examples.loadGraph("social");
-var url = "/_api/gharial/social/edge/relation";
-var response = logCurlRequest('DELETE', url);
-
-assert(response.code === 202);
-
-logJsonResponse(response);
-db._drop("relation");
-examples.dropGraph("social");
-```
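-
-If the edge collection is not needed anymore at all, the `dropCollections`
-query parameter saves the separate `db._drop()` call from the example above.
-A minimal arangosh sketch, under the same assumptions as before:
-
-```js
-// Sketch: remove the edge definition and drop the underlying "relation"
-// collection in one request (it is only dropped if no other graph uses it).
-var result = arango.DELETE(
-  "/_api/gharial/social/edge/relation?dropCollections=true");
-print(result.graph.orphanCollections); // former _from/_to collections
-```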
-
-## Vertices
-
-### Create a vertex
-
-```openapi
-paths:
- /_api/gharial/{graph}/vertex/{collection}:
- post:
- operationId: createVertex
- description: |
- Adds a vertex to the given collection.
- parameters:
- - name: graph
- in: path
- required: true
- description: |
- The name of the graph.
- schema:
- type: string
- - name: collection
- in: path
- required: true
- description: |
- The name of the vertex collection the vertex should be inserted into.
- schema:
- type: string
- - name: waitForSync
- in: query
- required: false
- description: |
- Define if the request should wait until synced to disk.
- schema:
- type: boolean
- - name: returnNew
- in: query
- required: false
- description: |
- Define if the response should contain the complete
- new version of the document.
- schema:
- type: boolean
- - name: x-arango-trx-id
- in: header
- required: false
- description: |
- To make this operation a part of a Stream Transaction, set this header to the
- transaction ID returned by the `POST /_api/transaction/begin` call.
- schema:
- type: string
- requestBody:
- content:
- application/json:
- schema:
- type: object
- required:
- - vertex
- properties:
- vertex:
- description: |
- The body has to be the JSON object to be stored.
- type: object
- responses:
- '201':
- description: |
- Returned if the vertex can be added and `waitForSync` is `true`.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - vertex
- properties:
- error:
- description: |
- A flag indicating that no error occurred.
- type: boolean
- example: false
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 201
- vertex:
- description: |
- The internal attributes for the vertex.
- type: object
- required:
- - _id
- - _key
- - _rev
- properties:
- _id:
- description: |
- The _id value of the stored data.
- type: string
- _key:
- description: |
- The _key value of the stored data.
- type: string
- _rev:
- description: |
- The _rev value of the stored data.
- type: string
- new:
- description: |
- The complete newly written vertex document.
- Includes all written attributes in the request body
- and all internal attributes generated by ArangoDB.
- Only present if `returnNew` is `true`.
- type: object
- required:
- - _id
- - _key
- - _rev
- properties:
- _id:
- description: |
- The _id value of the stored data.
- type: string
- _key:
- description: |
- The _key value of the stored data.
- type: string
- _rev:
- description: |
- The _rev value of the stored data.
- type: string
- '202':
- description: |
- Returned if the request was successful but `waitForSync` is `false`.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - vertex
- properties:
- error:
- description: |
- A flag indicating that no error occurred.
- type: boolean
- example: false
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 202
- vertex:
- description: |
-                      The internal attributes generated while storing the vertex.
-                      Does not include any attribute given in the request body.
- type: object
- required:
- - _id
- - _key
- - _rev
- properties:
- _id:
- description: |
- The _id value of the stored data.
- type: string
- _key:
- description: |
- The _key value of the stored data.
- type: string
- _rev:
- description: |
- The _rev value of the stored data.
- type: string
- new:
- description: |
- The complete newly written vertex document.
- Includes all written attributes in the request body
- and all internal attributes generated by ArangoDB.
- Only present if `returnNew` is `true`.
- type: object
- required:
- - _id
- - _key
- - _rev
- properties:
- _id:
- description: |
- The _id value of the stored data.
- type: string
- _key:
- description: |
- The _key value of the stored data.
- type: string
- _rev:
- description: |
- The _rev value of the stored data.
- type: string
- '403':
- description: |
- Returned if your user has insufficient rights.
- In order to insert vertices into the graph, you need to have at least the following privileges:
- - `Read Only` access on the database.
- - `Write` access on the given collection.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - errorNum
- - errorMessage
- properties:
- error:
- description: |
- A flag indicating that an error occurred.
- type: boolean
- example: true
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 403
- errorNum:
- description: |
- ArangoDB error number for the error that occurred.
- type: integer
- errorMessage:
- description: |
- A descriptive error message.
- type: string
- '404':
- description: |
-            Returned if no graph with this name can be found,
-            or if a graph is found but this collection is not part of the graph.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - errorNum
- - errorMessage
- properties:
- error:
- description: |
- A flag indicating that an error occurred.
- type: boolean
- example: true
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 404
- errorNum:
- description: |
- ArangoDB error number for the error that occurred.
- type: integer
- errorMessage:
- description: |
- A descriptive error message.
- type: string
- tags:
- - Graphs
-```
-
-**Examples**
-
-```curl
----
-description: ''
-name: HttpGharialAddVertex
----
-var examples = require("@arangodb/graph-examples/example-graph.js");
-examples.dropGraph("social");
-examples.loadGraph("social");
-var url = "/_api/gharial/social/vertex/male";
-var body = {
-  name: "Francis"
-};
-var response = logCurlRequest('POST', url, body);
-
-assert(response.code === 202);
-
-logJsonResponse(response);
-examples.dropGraph("social");
-```
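-
-To avoid a second lookup after inserting, `returnNew=true` includes the
-complete stored document in the response. A minimal arangosh sketch, assuming
-the `social` example graph and arangosh's generic `arango` HTTP helper:
-
-```js
-// Sketch: create a vertex and get the complete stored document back.
-var result = arango.POST(
-  "/_api/gharial/social/vertex/male?returnNew=true",
-  { name: "Francis" });
-print(result.new); // user attributes plus _id, _key, _rev
-```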
-
-### Get a vertex
-
-```openapi
-paths:
- /_api/gharial/{graph}/vertex/{collection}/{vertex}:
- get:
- operationId: getVertex
- description: |
- Gets a vertex from the given collection.
- parameters:
- - name: graph
- in: path
- required: true
- description: |
- The name of the graph.
- schema:
- type: string
- - name: collection
- in: path
- required: true
- description: |
- The name of the vertex collection the vertex belongs to.
- schema:
- type: string
- - name: vertex
- in: path
- required: true
- description: |
- The `_key` attribute of the vertex.
- schema:
- type: string
- - name: rev
- in: query
- required: false
- description: |
- Must contain a revision.
-          If this is set, the document is only returned if
-          it has exactly this revision.
-          Also see the if-match header as an alternative to this.
- schema:
- type: string
- - name: if-match
- in: header
- required: false
- description: |
-          If the "If-Match" header is given, then it must contain exactly one ETag. The document is returned
-          if it has the same revision as the given ETag. Otherwise, an HTTP 412 is returned. As an alternative,
-          you can supply the ETag in the query parameter `rev`.
- schema:
- type: string
- - name: if-none-match
- in: header
- required: false
- description: |
-          If the "If-None-Match" header is given, then it must contain exactly one ETag. The document is only returned
-          if it has a different revision than the given ETag. Otherwise, an HTTP 304 is returned.
- schema:
- type: string
- - name: x-arango-trx-id
- in: header
- required: false
- description: |
- To make this operation a part of a Stream Transaction, set this header to the
- transaction ID returned by the `POST /_api/transaction/begin` call.
- schema:
- type: string
- responses:
- '200':
- description: |
- Returned if the vertex can be found.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - vertex
- properties:
- error:
- description: |
- A flag indicating that no error occurred.
- type: boolean
- example: false
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 200
- vertex:
- description: |
- The complete vertex.
- type: object
- required:
- - _id
- - _key
- - _rev
- properties:
- _id:
- description: |
- The _id value of the stored data.
- type: string
- _key:
- description: |
- The _key value of the stored data.
- type: string
- _rev:
- description: |
- The _rev value of the stored data.
- type: string
- '304':
- description: |
- Returned if the if-none-match header is given and the
-            currently stored vertex still has this revision value,
-            i.e. there was no update since the vertex was last
-            fetched by the caller.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - errorNum
- - errorMessage
- properties:
- error:
- description: |
- A flag indicating that an error occurred.
- type: boolean
- example: true
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 304
- errorNum:
- description: |
- ArangoDB error number for the error that occurred.
- type: integer
- errorMessage:
- description: |
- A descriptive error message.
- type: string
- '403':
- description: |
- Returned if your user has insufficient rights.
-            In order to read vertices of the graph, you need to have at least the following privileges:
- - `Read Only` access on the database.
- - `Read Only` access on the given collection.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - errorNum
- - errorMessage
- properties:
- error:
- description: |
- A flag indicating that an error occurred.
- type: boolean
- example: true
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 403
- errorNum:
- description: |
- ArangoDB error number for the error that occurred.
- type: integer
- errorMessage:
- description: |
- A descriptive error message.
- type: string
- '404':
- description: |
- Returned in the following cases:
- - No graph with this name could be found.
- - This collection is not part of the graph.
- - The vertex does not exist.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - errorNum
- - errorMessage
- properties:
- error:
- description: |
- A flag indicating that an error occurred.
- type: boolean
- example: true
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 404
- errorNum:
- description: |
- ArangoDB error number for the error that occurred.
- type: integer
- errorMessage:
- description: |
- A descriptive error message.
- type: string
- '412':
- description: |
-            Returned if the if-match header is given, but the stored document's revision is different.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - errorNum
- - errorMessage
- properties:
- error:
- description: |
- A flag indicating that an error occurred.
- type: boolean
- example: true
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 412
- errorNum:
- description: |
- ArangoDB error number for the error that occurred.
- type: integer
- errorMessage:
- description: |
- A descriptive error message.
- type: string
- tags:
- - Graphs
-```
-
-**Examples**
-
-```curl
----
-description: ''
-name: HttpGharialGetVertex
----
-var examples = require("@arangodb/graph-examples/example-graph.js");
-examples.dropGraph("social");
-examples.loadGraph("social");
-var url = "/_api/gharial/social/vertex/female/alice";
-var response = logCurlRequest('GET', url);
-
-assert(response.code === 200);
-
-logJsonResponse(response);
-examples.dropGraph("social");
-```
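-
-A revision-guarded read only succeeds if the vertex is unchanged since the
-revision was obtained. A minimal arangosh sketch, under the same assumptions
-as above:
-
-```js
-// Sketch: fetch a vertex, then re-read it guarded by the known revision.
-var first = arango.GET("/_api/gharial/social/vertex/female/alice");
-var rev = first.vertex._rev;
-var second = arango.GET(
-  "/_api/gharial/social/vertex/female/alice?rev=" + encodeURIComponent(rev));
-// A concurrent modification would make the guarded read fail instead.
-print(second.vertex._rev === rev); // true
-```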
-
-### Update a vertex
-
-```openapi
-paths:
- /_api/gharial/{graph}/vertex/{collection}/{vertex}:
- patch:
- operationId: updateVertex
- description: |
- Updates the data of the specific vertex in the collection.
- parameters:
- - name: graph
- in: path
- required: true
- description: |
- The name of the graph.
- schema:
- type: string
- - name: collection
- in: path
- required: true
- description: |
- The name of the vertex collection the vertex belongs to.
- schema:
- type: string
- - name: vertex
- in: path
- required: true
- description: |
- The `_key` attribute of the vertex.
- schema:
- type: string
- - name: waitForSync
- in: query
- required: false
- description: |
- Define if the request should wait until synced to disk.
- schema:
- type: boolean
- - name: keepNull
- in: query
- required: false
- description: |
- Define if values set to `null` should be stored.
-          By default (`true`), the given document's attribute(s) are set to `null`.
-          If this parameter is set to `false`, top-level attributes and sub-attributes with
- a `null` value in the request are removed from the document (but not attributes
- of objects that are nested inside of arrays).
- schema:
- type: boolean
- - name: returnOld
- in: query
- required: false
- description: |
-          Define if a presentation of the old document should
- be returned within the response object.
- schema:
- type: boolean
- - name: returnNew
- in: query
- required: false
- description: |
- Define if a presentation of the new document should
- be returned within the response object.
- schema:
- type: boolean
- - name: if-match
- in: header
- required: false
- description: |
-          If the "If-Match" header is given, then it must contain exactly one ETag. The document is updated
-          if it has the same revision as the given ETag. Otherwise, an HTTP 412 is returned. As an alternative,
-          you can supply the ETag in the query parameter `rev`.
- schema:
- type: string
- - name: x-arango-trx-id
- in: header
- required: false
- description: |
- To make this operation a part of a Stream Transaction, set this header to the
- transaction ID returned by the `POST /_api/transaction/begin` call.
- schema:
- type: string
- requestBody:
- content:
- application/json:
- schema:
- type: object
- required:
- - vertex
- properties:
- vertex:
- description: |
-                    The body has to contain a JSON object with exactly the attributes that should be overwritten; all other attributes remain unchanged.
- type: object
- responses:
- '200':
- description: |
- Returned if the vertex can be updated, and `waitForSync` is `true`.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - vertex
- properties:
- error:
- description: |
- A flag indicating that no error occurred.
- type: boolean
- example: false
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 200
- vertex:
- description: |
- The internal attributes for the vertex.
- type: object
- required:
- - _id
- - _key
- - _rev
- properties:
- _id:
- description: |
- The _id value of the stored data.
- type: string
- _key:
- description: |
- The _key value of the stored data.
- type: string
- _rev:
- description: |
- The _rev value of the stored data.
- type: string
- new:
- description: |
- The complete newly written vertex document.
- Includes all written attributes in the request body
- and all internal attributes generated by ArangoDB.
- Only present if `returnNew` is `true`.
- type: object
- required:
- - _id
- - _key
- - _rev
- properties:
- _id:
- description: |
- The _id value of the stored data.
- type: string
- _key:
- description: |
- The _key value of the stored data.
- type: string
- _rev:
- description: |
- The _rev value of the stored data.
- type: string
- old:
- description: |
- The complete overwritten vertex document.
- Includes all attributes stored before this operation.
- Only present if `returnOld` is `true`.
- type: object
- required:
- - _id
- - _key
- - _rev
- properties:
- _id:
- description: |
- The _id value of the stored data.
- type: string
- _key:
- description: |
- The _key value of the stored data.
- type: string
- _rev:
- description: |
- The _rev value of the stored data.
- type: string
- '202':
- description: |
- Returned if the request was successful, and `waitForSync` is `false`.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - vertex
- properties:
- error:
- description: |
- A flag indicating that no error occurred.
- type: boolean
- example: false
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 202
- vertex:
- description: |
- The internal attributes for the vertex.
- type: object
- required:
- - _id
- - _key
- - _rev
- properties:
- _id:
- description: |
- The _id value of the stored data.
- type: string
- _key:
- description: |
- The _key value of the stored data.
- type: string
- _rev:
- description: |
- The _rev value of the stored data.
- type: string
- new:
- description: |
- The complete newly written vertex document.
- Includes all written attributes in the request body
- and all internal attributes generated by ArangoDB.
- Only present if `returnNew` is `true`.
- type: object
- required:
- - _id
- - _key
- - _rev
- properties:
- _id:
- description: |
- The _id value of the stored data.
- type: string
- _key:
- description: |
- The _key value of the stored data.
- type: string
- _rev:
- description: |
- The _rev value of the stored data.
- type: string
- old:
- description: |
- The complete overwritten vertex document.
- Includes all attributes stored before this operation.
- Only present if `returnOld` is `true`.
- type: object
- required:
- - _id
- - _key
- - _rev
- properties:
- _id:
- description: |
- The _id value of the stored data.
- type: string
- _key:
- description: |
- The _key value of the stored data.
- type: string
- _rev:
- description: |
- The _rev value of the stored data.
- type: string
- '403':
- description: |
- Returned if your user has insufficient rights.
- In order to update vertices in the graph, you need to have at least the following privileges:
- - `Read Only` access on the database.
- - `Write` access on the given collection.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - errorNum
- - errorMessage
- properties:
- error:
- description: |
- A flag indicating that an error occurred.
- type: boolean
- example: true
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 403
- errorNum:
- description: |
- ArangoDB error number for the error that occurred.
- type: integer
- errorMessage:
- description: |
- A descriptive error message.
- type: string
- '404':
- description: |
- Returned in the following cases:
- - No graph with this name can be found.
- - This collection is not part of the graph.
- - The vertex to update does not exist.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - errorNum
- - errorMessage
- properties:
- error:
- description: |
- A flag indicating that an error occurred.
- type: boolean
- example: true
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 404
- errorNum:
- description: |
- ArangoDB error number for the error that occurred.
- type: integer
- errorMessage:
- description: |
- A descriptive error message.
- type: string
- '412':
- description: |
-            Returned if the if-match header is given, but the stored document's revision is different.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - errorNum
- - errorMessage
- properties:
- error:
- description: |
- A flag indicating that an error occurred.
- type: boolean
- example: true
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 412
- errorNum:
- description: |
- ArangoDB error number for the error that occurred.
- type: integer
- errorMessage:
- description: |
- A descriptive error message.
- type: string
- tags:
- - Graphs
-```
-
-**Examples**
-
-```curl
----
-description: ''
-name: HttpGharialModifyVertex
----
-var examples = require("@arangodb/graph-examples/example-graph.js");
-examples.dropGraph("social");
-examples.loadGraph("social");
-var body = {
-  age: 26
-};
-var url = "/_api/gharial/social/vertex/female/alice";
-var response = logCurlRequest('PATCH', url, body);
-
-assert(response.code === 202);
-
-logJsonResponse(response);
-examples.dropGraph("social");
-```
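-
-The `keepNull` parameter decides whether updating an attribute to `null`
-stores the `null` or removes the attribute entirely. A minimal arangosh
-sketch of the latter, under the same assumptions as above:
-
-```js
-// Sketch: drop the "age" attribute by updating it to null with
-// keepNull=false (the default keepNull=true would store the null value).
-var result = arango.PATCH(
-  "/_api/gharial/social/vertex/female/alice?keepNull=false&returnNew=true",
-  { age: null });
-print(result.new.hasOwnProperty("age")); // false
-```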
-
-### Replace a vertex
-
-```openapi
-paths:
- /_api/gharial/{graph}/vertex/{collection}/{vertex}:
- put:
- operationId: replaceVertex
- description: |
- Replaces the data of a vertex in the collection.
- parameters:
- - name: graph
- in: path
- required: true
- description: |
- The name of the graph.
- schema:
- type: string
- - name: collection
- in: path
- required: true
- description: |
- The name of the vertex collection the vertex belongs to.
- schema:
- type: string
- - name: vertex
- in: path
- required: true
- description: |
- The `_key` attribute of the vertex.
- schema:
- type: string
- - name: waitForSync
- in: query
- required: false
- description: |
- Define if the request should wait until synced to disk.
- schema:
- type: boolean
- - name: keepNull
- in: query
- required: false
- description: |
- Define if values set to `null` should be stored.
-          By default (`true`), the given document's attribute(s) are set to `null`.
-          If this parameter is set to `false`, top-level attributes and sub-attributes with
- a `null` value in the request are removed from the document (but not attributes
- of objects that are nested inside of arrays).
- schema:
- type: boolean
- - name: returnOld
- in: query
- required: false
- description: |
-          Define if a presentation of the old document should
- be returned within the response object.
- schema:
- type: boolean
- - name: returnNew
- in: query
- required: false
- description: |
- Define if a presentation of the new document should
- be returned within the response object.
- schema:
- type: boolean
- - name: if-match
- in: header
- required: false
- description: |
-          If the "If-Match" header is given, then it must contain exactly one ETag. The document is updated
-          if it has the same revision as the given ETag. Otherwise, an HTTP 412 is returned. As an alternative,
-          you can supply the ETag in the query parameter `rev`.
- schema:
- type: string
- - name: x-arango-trx-id
- in: header
- required: false
- description: |
- To make this operation a part of a Stream Transaction, set this header to the
- transaction ID returned by the `POST /_api/transaction/begin` call.
- schema:
- type: string
- requestBody:
- content:
- application/json:
- schema:
- type: object
- required:
- - vertex
- properties:
- vertex:
- description: |
- The body has to be the JSON object to be stored.
- type: object
- responses:
- '200':
- description: |
- Returned if the vertex can be replaced, and `waitForSync` is `true`.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - vertex
- properties:
- error:
- description: |
- A flag indicating that no error occurred.
- type: boolean
- example: false
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 200
- vertex:
- description: |
- The internal attributes for the vertex.
- type: object
- required:
- - _id
- - _key
- - _rev
- properties:
- _id:
- description: |
- The _id value of the stored data.
- type: string
- _key:
- description: |
- The _key value of the stored data.
- type: string
- _rev:
- description: |
- The _rev value of the stored data.
- type: string
- new:
- description: |
- The complete newly written vertex document.
- Includes all written attributes in the request body
- and all internal attributes generated by ArangoDB.
- Only present if `returnNew` is `true`.
- type: object
- required:
- - _id
- - _key
- - _rev
- properties:
- _id:
- description: |
- The _id value of the stored data.
- type: string
- _key:
- description: |
- The _key value of the stored data.
- type: string
- _rev:
- description: |
- The _rev value of the stored data.
- type: string
- old:
- description: |
- The complete overwritten vertex document.
- Includes all attributes stored before this operation.
- Only present if `returnOld` is `true`.
- type: object
- required:
- - _id
- - _key
- - _rev
- properties:
- _id:
- description: |
- The _id value of the stored data.
- type: string
- _key:
- description: |
- The _key value of the stored data.
- type: string
- _rev:
- description: |
- The _rev value of the stored data.
- type: string
- '202':
- description: |
- Returned if the vertex can be replaced, and `waitForSync` is `false`.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - vertex
- properties:
- error:
- description: |
- A flag indicating that no error occurred.
- type: boolean
- example: false
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 202
- vertex:
- description: |
- The internal attributes for the vertex.
- type: object
- required:
- - _id
- - _key
- - _rev
- properties:
- _id:
- description: |
- The _id value of the stored data.
- type: string
- _key:
- description: |
- The _key value of the stored data.
- type: string
- _rev:
- description: |
- The _rev value of the stored data.
- type: string
- new:
- description: |
- The complete newly written vertex document.
- Includes all written attributes in the request body
- and all internal attributes generated by ArangoDB.
- Only present if `returnNew` is `true`.
- type: object
- required:
- - _id
- - _key
- - _rev
- properties:
- _id:
- description: |
- The _id value of the stored data.
- type: string
- _key:
- description: |
- The _key value of the stored data.
- type: string
- _rev:
- description: |
- The _rev value of the stored data.
- type: string
- old:
- description: |
- The complete overwritten vertex document.
- Includes all attributes stored before this operation.
- Only present if `returnOld` is `true`.
- type: object
- required:
- - _id
- - _key
- - _rev
- properties:
- _id:
- description: |
- The _id value of the stored data.
- type: string
- _key:
- description: |
- The _key value of the stored data.
- type: string
- _rev:
- description: |
- The _rev value of the stored data.
- type: string
- '403':
- description: |
- Returned if your user has insufficient rights.
- In order to replace vertices in the graph, you need to have at least the following privileges:
- - `Read Only` access on the database.
- - `Write` access on the given collection.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - errorNum
- - errorMessage
- properties:
- error:
- description: |
- A flag indicating that an error occurred.
- type: boolean
- example: true
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 403
- errorNum:
- description: |
- ArangoDB error number for the error that occurred.
- type: integer
- errorMessage:
- description: |
- A descriptive error message.
- type: string
- '404':
- description: |
- Returned in the following cases:
- - No graph with this name can be found.
- - This collection is not part of the graph.
- - The vertex to replace does not exist.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - errorNum
- - errorMessage
- properties:
- error:
- description: |
- A flag indicating that an error occurred.
- type: boolean
- example: true
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 404
- errorNum:
- description: |
- ArangoDB error number for the error that occurred.
- type: integer
- errorMessage:
- description: |
- A descriptive error message.
- type: string
- '412':
- description: |
-            Returned if the if-match header is given, but the stored document's revision is different.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - errorNum
- - errorMessage
- properties:
- error:
- description: |
- A flag indicating that an error occurred.
- type: boolean
- example: true
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 412
- errorNum:
- description: |
- ArangoDB error number for the error that occurred.
- type: integer
- errorMessage:
- description: |
- A descriptive error message.
- type: string
- tags:
- - Graphs
-```
-
-**Examples**
-
-```curl
----
-description: ''
-name: HttpGharialReplaceVertex
----
-var examples = require("@arangodb/graph-examples/example-graph.js");
-examples.dropGraph("social");
-examples.loadGraph("social");
-var body = {
-  name: "Alice Cooper",
-  age: 26
-};
-var url = "/_api/gharial/social/vertex/female/alice";
-var response = logCurlRequest('PUT', url, body);
-
-assert(response.code === 202);
-
-logJsonResponse(response);
-examples.dropGraph("social");
-```
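-
-Combining `returnOld` and `returnNew` shows exactly what a replace did,
-without any extra reads. A minimal arangosh sketch, under the same
-assumptions as above:
-
-```js
-// Sketch: replace a vertex and compare its old and new state.
-var result = arango.PUT(
-  "/_api/gharial/social/vertex/female/alice?returnOld=true&returnNew=true",
-  { name: "Alice Cooper", age: 26 });
-print(result.old); // the document as stored before the replace
-print(result.new); // only name and age, plus the system attributes
-```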
-
-### Remove a vertex
-
-```openapi
-paths:
- /_api/gharial/{graph}/vertex/{collection}/{vertex}:
- delete:
- operationId: deleteVertex
- description: |
- Removes a vertex from the collection.
- parameters:
- - name: graph
- in: path
- required: true
- description: |
- The name of the graph.
- schema:
- type: string
- - name: collection
- in: path
- required: true
- description: |
- The name of the vertex collection the vertex belongs to.
- schema:
- type: string
- - name: vertex
- in: path
- required: true
- description: |
- The `_key` attribute of the vertex.
- schema:
- type: string
- - name: waitForSync
- in: query
- required: false
- description: |
- Define if the request should wait until synced to disk.
- schema:
- type: boolean
- - name: returnOld
- in: query
- required: false
- description: |
- Define if a presentation of the deleted document should
- be returned within the response object.
- schema:
- type: boolean
- - name: if-match
- in: header
- required: false
- description: |
-          If the "If-Match" header is given, then it must contain exactly one ETag. The document is removed
-          if it has the same revision as the given ETag. Otherwise, an HTTP 412 is returned. As an alternative,
-          you can supply the ETag in the query parameter `rev`.
- schema:
- type: string
- - name: x-arango-trx-id
- in: header
- required: false
- description: |
- To make this operation a part of a Stream Transaction, set this header to the
- transaction ID returned by the `POST /_api/transaction/begin` call.
- schema:
- type: string
- responses:
- '200':
- description: |
- Returned if the vertex can be removed.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - removed
- properties:
- error:
- description: |
- A flag indicating that no error occurred.
- type: boolean
- example: false
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 200
- removed:
- description: |
-                      Is set to `true` if the removal was successful.
- type: boolean
- old:
- description: |
- The complete deleted vertex document.
- Includes all attributes stored before this operation.
- Only present if `returnOld` is `true`.
- type: object
- required:
- - _id
- - _key
- - _rev
- properties:
- _id:
- description: |
- The _id value of the stored data.
- type: string
- _key:
- description: |
- The _key value of the stored data.
- type: string
- _rev:
- description: |
- The _rev value of the stored data.
- type: string
- '202':
- description: |
- Returned if the request was successful but `waitForSync` is `false`.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - removed
- properties:
- error:
- description: |
- A flag indicating that no error occurred.
- type: boolean
- example: false
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 202
- removed:
- description: |
-                      Is set to `true` if the removal was successful.
- type: boolean
- old:
- description: |
- The complete deleted vertex document.
- Includes all attributes stored before this operation.
- Only present if `returnOld` is `true`.
- type: object
- required:
- - _id
- - _key
- - _rev
- properties:
- _id:
- description: |
- The _id value of the stored data.
- type: string
- _key:
- description: |
- The _key value of the stored data.
- type: string
- _rev:
- description: |
- The _rev value of the stored data.
- type: string
- '403':
- description: |
- Returned if your user has insufficient rights.
- In order to delete vertices in the graph, you need to have at least the following privileges:
- - `Read Only` access on the database.
- - `Write` access on the given collection.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - errorNum
- - errorMessage
- properties:
- error:
- description: |
- A flag indicating that an error occurred.
- type: boolean
- example: true
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 403
- errorNum:
- description: |
- ArangoDB error number for the error that occurred.
- type: integer
- errorMessage:
- description: |
- A descriptive error message.
- type: string
- '404':
- description: |
- Returned in the following cases:
- - No graph with this name can be found.
- - This collection is not part of the graph.
- - The vertex to remove does not exist.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - errorNum
- - errorMessage
- properties:
- error:
- description: |
- A flag indicating that an error occurred.
- type: boolean
- example: true
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 404
- errorNum:
- description: |
- ArangoDB error number for the error that occurred.
- type: integer
- errorMessage:
- description: |
- A descriptive error message.
- type: string
- '412':
- description: |
-            Returned if the if-match header is given, but the stored document's revision is different.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - errorNum
- - errorMessage
- properties:
- error:
- description: |
- A flag indicating that an error occurred.
- type: boolean
- example: true
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 412
- errorNum:
- description: |
- ArangoDB error number for the error that occurred.
- type: integer
- errorMessage:
- description: |
- A descriptive error message.
- type: string
- tags:
- - Graphs
-```
-
-**Examples**
-
-```curl
----
-description: ''
-name: HttpGharialDeleteVertex
----
-var examples = require("@arangodb/graph-examples/example-graph.js");
-examples.dropGraph("social");
-examples.loadGraph("social");
-var url = "/_api/gharial/social/vertex/female/alice";
-var response = logCurlRequest('DELETE', url);
-
-assert(response.code === 202);
-
-logJsonResponse(response);
-examples.dropGraph("social");
-```
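-
-With `returnOld=true`, the response preserves the last state of the removed
-vertex, which can be useful for logging or undo-style workflows. A minimal
-arangosh sketch, under the same assumptions as above:
-
-```js
-// Sketch: remove a vertex but keep its last state in the response.
-var result = arango.DELETE(
-  "/_api/gharial/social/vertex/female/alice?returnOld=true");
-print(result.removed); // true
-print(result.old);     // last stored state of the removed vertex
-```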
-
-## Edges
-
-### Create an edge
-
-```openapi
-paths:
- /_api/gharial/{graph}/edge/{collection}:
- post:
- operationId: createEdge
- description: |
- Creates a new edge in the specified collection.
-        Within the body, the edge has to contain a `_from` and `_to` value referencing valid vertices in the graph.
- Furthermore, the edge has to be valid according to the edge definitions.
- parameters:
- - name: graph
- in: path
- required: true
- description: |
- The name of the graph.
- schema:
- type: string
- - name: collection
- in: path
- required: true
- description: |
- The name of the edge collection the edge belongs to.
- schema:
- type: string
- - name: waitForSync
- in: query
- required: false
- description: |
- Define if the request should wait until synced to disk.
- schema:
- type: boolean
- - name: returnNew
- in: query
- required: false
- description: |
- Define if the response should contain the complete
- new version of the document.
- schema:
- type: boolean
- - name: x-arango-trx-id
- in: header
- required: false
- description: |
- To make this operation a part of a Stream Transaction, set this header to the
- transaction ID returned by the `POST /_api/transaction/begin` call.
- schema:
- type: string
- requestBody:
- content:
- application/json:
- schema:
- type: object
- required:
- - _from
- - _to
- properties:
- _from:
- description: |
- The source vertex of this edge. Has to be valid within
- the used edge definition.
- type: string
- _to:
- description: |
- The target vertex of this edge. Has to be valid within
- the used edge definition.
- type: string
- responses:
- '201':
- description: |
- Returned if the edge can be created and `waitForSync` is `true`.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - edge
- properties:
- error:
- description: |
- A flag indicating that no error occurred.
- type: boolean
- example: false
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 201
- edge:
- description: |
- The internal attributes for the edge.
- type: object
- required:
- - _id
- - _key
- - _rev
- - _from
- - _to
- properties:
- _id:
- description: |
- The _id value of the stored data.
- type: string
- _key:
- description: |
- The _key value of the stored data.
- type: string
- _rev:
- description: |
- The _rev value of the stored data.
- type: string
- _from:
- description: |
- The _from value of the stored data.
- type: string
- _to:
- description: |
- The _to value of the stored data.
- type: string
- new:
- description: |
- The complete newly written edge document.
- Includes all written attributes in the request body
- and all internal attributes generated by ArangoDB.
- Only present if `returnNew` is `true`.
- type: object
- required:
- - _id
- - _key
- - _rev
- - _from
- - _to
- properties:
- _id:
- description: |
- The _id value of the stored data.
- type: string
- _key:
- description: |
- The _key value of the stored data.
- type: string
- _rev:
- description: |
- The _rev value of the stored data.
- type: string
- _from:
- description: |
- The _from value of the stored data.
- type: string
- _to:
- description: |
- The _to value of the stored data.
- type: string
- '202':
- description: |
- Returned if the request was successful but `waitForSync` is `false`.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - edge
- properties:
- error:
- description: |
- A flag indicating that no error occurred.
- type: boolean
- example: false
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 202
- edge:
- description: |
- The internal attributes for the edge.
- type: object
- required:
- - _id
- - _key
- - _rev
- - _from
- - _to
- properties:
- _id:
- description: |
- The _id value of the stored data.
- type: string
- _key:
- description: |
- The _key value of the stored data.
- type: string
- _rev:
- description: |
- The _rev value of the stored data.
- type: string
- _from:
- description: |
- The _from value of the stored data.
- type: string
- _to:
- description: |
- The _to value of the stored data.
- type: string
- new:
- description: |
- The complete newly written edge document.
- Includes all written attributes in the request body
- and all internal attributes generated by ArangoDB.
- Only present if `returnNew` is `true`.
- type: object
- required:
- - _id
- - _key
- - _rev
- - _from
- - _to
- properties:
- _id:
- description: |
- The _id value of the stored data.
- type: string
- _key:
- description: |
- The _key value of the stored data.
- type: string
- _rev:
- description: |
- The _rev value of the stored data.
- type: string
- _from:
- description: |
- The _from value of the stored data.
- type: string
- _to:
- description: |
- The _to value of the stored data.
- type: string
- '400':
- description: |
- Returned if the input document is invalid.
- This can for instance be the case if the `_from` or `_to` attribute is missing
- or malformed.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - errorNum
- - errorMessage
- properties:
- error:
- description: |
- A flag indicating that an error occurred.
- type: boolean
- example: true
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 400
- errorNum:
- description: |
- ArangoDB error number for the error that occurred.
- type: integer
- errorMessage:
- description: |
- A descriptive error message.
- type: string
- '403':
- description: |
- Returned if your user has insufficient rights.
- In order to insert edges into the graph, you need to have at least the following privileges:
- - `Read Only` access on the database.
- - `Write` access on the given collection.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - errorNum
- - errorMessage
- properties:
- error:
- description: |
- A flag indicating that an error occurred.
- type: boolean
- example: true
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 403
- errorNum:
- description: |
- ArangoDB error number for the error that occurred.
- type: integer
- errorMessage:
- description: |
- A descriptive error message.
- type: string
- '404':
- description: |
- Returned in any of the following cases:
- - No graph with this name can be found.
- - The edge collection is not part of the graph.
- - The vertex collection referenced in the `_from` or `_to` attribute is not part of the graph.
- - The vertex collection is part of the graph, but does not exist.
-            - The `_from` or `_to` vertex does not exist.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - errorNum
- - errorMessage
- properties:
- error:
- description: |
- A flag indicating that an error occurred.
- type: boolean
- example: true
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 404
- errorNum:
- description: |
- ArangoDB error number for the error that occurred.
- type: integer
- errorMessage:
- description: |
- A descriptive error message.
- type: string
- tags:
- - Graphs
-```
-
-**Examples**
-
-```curl
----
-description: ''
-name: HttpGharialAddEdge
----
-var examples = require("@arangodb/graph-examples/example-graph.js");
-examples.dropGraph("social");
-require("internal").db._drop("relation");
-require("internal").db._drop("female");
-require("internal").db._drop("male");
-examples.loadGraph("social");
-var url = "/_api/gharial/social/edge/relation";
-var body = {
- type: "friend",
- _from: "female/alice",
- _to: "female/diana"
-};
-var response = logCurlRequest('POST', url, body);
-
-assert(response.code === 202);
-
-logJsonResponse(response);
-examples.dropGraph("social");
-```
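-
-Remember that `_from` and `_to` must reference existing vertices in
-collections covered by the edge definition, otherwise the request fails with
-an HTTP 404 error. A minimal arangosh sketch with `returnNew=true`, under the
-same assumptions as above:
-
-```js
-// Sketch: create an edge between two existing vertices and return
-// the complete stored edge document.
-var result = arango.POST(
-  "/_api/gharial/social/edge/relation?returnNew=true",
-  { type: "friend", _from: "female/alice", _to: "female/diana" });
-print(result.new._from + " -> " + result.new._to);
-```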
-
-### Get an edge
-
-```openapi
-paths:
- /_api/gharial/{graph}/edge/{collection}/{edge}:
- get:
- operationId: getEdge
- description: |
- Gets an edge from the given collection.
- parameters:
- - name: graph
- in: path
- required: true
- description: |
- The name of the graph.
- schema:
- type: string
- - name: collection
- in: path
- required: true
- description: |
- The name of the edge collection the edge belongs to.
- schema:
- type: string
- - name: edge
- in: path
- required: true
- description: |
- The `_key` attribute of the edge.
- schema:
- type: string
- - name: rev
- in: query
- required: false
- description: |
- Must contain a revision.
-          If this is set, the document is only returned if
-          it has exactly this revision.
-          Also see the if-match header as an alternative to this.
- schema:
- type: string
- - name: if-match
- in: header
- required: false
- description: |
-          If the "If-Match" header is given, then it must contain exactly one ETag. The document is returned
-          if it has the same revision as the given ETag. Otherwise, an HTTP 412 is returned. As an alternative,
-          you can supply the ETag in the query parameter `rev`.
- schema:
- type: string
- - name: if-none-match
- in: header
- required: false
- description: |
-          If the "If-None-Match" header is given, then it must contain exactly one ETag. The document is only returned
-          if it has a different revision than the given ETag. Otherwise, an HTTP 304 is returned.
- schema:
- type: string
- - name: x-arango-trx-id
- in: header
- required: false
- description: |
- To make this operation a part of a Stream Transaction, set this header to the
- transaction ID returned by the `POST /_api/transaction/begin` call.
- schema:
- type: string
- responses:
- '200':
- description: |
- Returned if the edge can be found.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - edge
- properties:
- error:
- description: |
- A flag indicating that no error occurred.
- type: boolean
- example: false
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 200
- edge:
- description: |
- The complete edge.
- type: object
- required:
- - _id
- - _key
- - _rev
- - _from
- - _to
- properties:
- _id:
- description: |
- The _id value of the stored data.
- type: string
- _key:
- description: |
- The _key value of the stored data.
- type: string
- _rev:
- description: |
- The _rev value of the stored data.
- type: string
- _from:
- description: |
- The _from value of the stored data.
- type: string
- _to:
- description: |
- The _to value of the stored data.
- type: string
- '304':
- description: |
- Returned if the if-none-match header is given and the
-            currently stored edge still has this revision value,
-            i.e. there was no update since the edge was last
-            fetched by the caller.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - errorNum
- - errorMessage
- properties:
- error:
- description: |
- A flag indicating that an error occurred.
- type: boolean
- example: true
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 304
- errorNum:
- description: |
- ArangoDB error number for the error that occurred.
- type: integer
- errorMessage:
- description: |
- A descriptive error message.
- type: string
- '403':
- description: |
- Returned if your user has insufficient rights.
-            In order to read edges of the graph, you need to have at least the following privileges:
- - `Read Only` access on the database.
- - `Read Only` access on the given collection.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - errorNum
- - errorMessage
- properties:
- error:
- description: |
- A flag indicating that an error occurred.
- type: boolean
- example: true
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 403
- errorNum:
- description: |
- ArangoDB error number for the error that occurred.
- type: integer
- errorMessage:
- description: |
- A descriptive error message.
- type: string
- '404':
- description: |
- Returned in the following cases:
- - No graph with this name can be found.
- - This collection is not part of the graph.
- - The edge does not exist.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - errorNum
- - errorMessage
- properties:
- error:
- description: |
- A flag indicating that an error occurred.
- type: boolean
- example: true
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 404
- errorNum:
- description: |
- ArangoDB error number for the error that occurred.
- type: integer
- errorMessage:
- description: |
- A descriptive error message.
- type: string
- '412':
- description: |
- Returned if the `if-match` header is given, but the stored document's revision is different.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - errorNum
- - errorMessage
- properties:
- error:
- description: |
- A flag indicating that an error occurred.
- type: boolean
- example: true
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 412
- errorNum:
- description: |
- ArangoDB error number for the error that occurred.
- type: integer
- errorMessage:
- description: |
- A descriptive error message.
- type: string
- tags:
- - Graphs
-```
-
-**Examples**
-
-```curl
----
-description: ''
-name: HttpGharialGetEdge
----
-var examples = require("@arangodb/graph-examples/example-graph.js");
-examples.dropGraph("social");
-examples.loadGraph("social");
-var any = require("@arangodb").db.relation.any();
-var url = "/_api/gharial/social/edge/relation/" + any._key;
-var response = logCurlRequest('GET', url);
-
-assert(response.code === 200);
-
-logJsonResponse(response);
-examples.dropGraph("social");
-```
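-
-The `rev` query parameter and the `If-Match`/`If-None-Match` headers described
-above make the read conditional on the edge's revision. The following is a
-hedged, illustrative sketch (not one of the generated examples) that fetches
-the edge only if it still has the revision just obtained:
-
-```js
-var examples = require("@arangodb/graph-examples/example-graph.js");
-examples.dropGraph("social");
-examples.loadGraph("social");
-var any = require("@arangodb").db.relation.any();
-// Ask for the edge only if it still has the known revision.
-// A revision mismatch would result in an HTTP 412 instead.
-var url = "/_api/gharial/social/edge/relation/" + any._key + "?rev=" + any._rev;
-var response = logCurlRequest('GET', url);
-
-assert(response.code === 200);
-
-logJsonResponse(response);
-examples.dropGraph("social");
-```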
-
-### Update an edge
-
-```openapi
-paths:
- /_api/gharial/{graph}/edge/{collection}/{edge}:
- patch:
- operationId: updateEdge
- description: |
- Partially modify the data of the specific edge in the collection.
- parameters:
- - name: graph
- in: path
- required: true
- description: |
- The name of the graph.
- schema:
- type: string
- - name: collection
- in: path
- required: true
- description: |
- The name of the edge collection the edge belongs to.
- schema:
- type: string
- - name: edge
- in: path
- required: true
- description: |
- The `_key` attribute of the edge.
- schema:
- type: string
- - name: waitForSync
- in: query
- required: false
- description: |
- Define if the request should wait until synced to disk.
- schema:
- type: boolean
- - name: keepNull
- in: query
- required: false
- description: |
- Define if values set to `null` should be stored.
- By default (`true`), the given document attributes are set to `null`.
- If this parameter is set to `false`, top-level attribute and sub-attributes with
- a `null` value in the request are removed from the document (but not attributes
- of objects that are nested inside of arrays).
- schema:
- type: boolean
- - name: returnOld
- in: query
- required: false
- description: |
- Define if a representation of the old document should
- be returned within the response object.
- schema:
- type: boolean
- - name: returnNew
- in: query
- required: false
- description: |
- Define if a representation of the new document should
- be returned within the response object.
- schema:
- type: boolean
- - name: if-match
- in: header
- required: false
- description: |
- If the "If-Match" header is given, then it must contain exactly one ETag. The document is updated,
- if it has the same revision as the given ETag. Otherwise a HTTP 412 is returned. As an alternative
- you can supply the ETag in an attribute rev in the URL.
- schema:
- type: string
- - name: x-arango-trx-id
- in: header
- required: false
- description: |
- To make this operation a part of a Stream Transaction, set this header to the
- transaction ID returned by the `POST /_api/transaction/begin` call.
- schema:
- type: string
- requestBody:
- content:
- application/json:
- schema:
- type: object
- required:
- - edge
- properties:
- edge:
- description: |
- The body has to contain a JSON object with exactly the attributes that should be overwritten; all other attributes remain unchanged.
- type: object
- responses:
- '200':
- description: |
- Returned if the edge can be updated, and `waitForSync` is `true`.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - edge
- properties:
- error:
- description: |
- A flag indicating that no error occurred.
- type: boolean
- example: false
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 200
- edge:
- description: |
- The internal attributes for the edge.
- type: object
- required:
- - _id
- - _key
- - _rev
- - _from
- - _to
- properties:
- _id:
- description: |
- The _id value of the stored data.
- type: string
- _key:
- description: |
- The _key value of the stored data.
- type: string
- _rev:
- description: |
- The _rev value of the stored data.
- type: string
- _from:
- description: |
- The _from value of the stored data.
- type: string
- _to:
- description: |
- The _to value of the stored data.
- type: string
- new:
- description: |
- The complete newly written edge document.
- Includes all written attributes in the request body
- and all internal attributes generated by ArangoDB.
- Only present if `returnNew` is `true`.
- type: object
- required:
- - _id
- - _key
- - _rev
- - _from
- - _to
- properties:
- _id:
- description: |
- The _id value of the stored data.
- type: string
- _key:
- description: |
- The _key value of the stored data.
- type: string
- _rev:
- description: |
- The _rev value of the stored data.
- type: string
- _from:
- description: |
- The _from value of the stored data.
- type: string
- _to:
- description: |
- The _to value of the stored data.
- type: string
- old:
- description: |
- The complete overwritten edge document.
- Includes all attributes stored before this operation.
- Only present if `returnOld` is `true`.
- type: object
- required:
- - _id
- - _key
- - _rev
- - _from
- - _to
- properties:
- _id:
- description: |
- The _id value of the stored data.
- type: string
- _key:
- description: |
- The _key value of the stored data.
- type: string
- _rev:
- description: |
- The _rev value of the stored data.
- type: string
- _from:
- description: |
- The _from value of the stored data.
- type: string
- _to:
- description: |
- The _to value of the stored data.
- type: string
- '202':
- description: |
- Returned if the request was successful but `waitForSync` is `false`.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - edge
- properties:
- error:
- description: |
- A flag indicating that no error occurred.
- type: boolean
- example: false
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 202
- edge:
- description: |
- The internal attributes for the edge.
- type: object
- required:
- - _id
- - _key
- - _rev
- - _from
- - _to
- properties:
- _id:
- description: |
- The _id value of the stored data.
- type: string
- _key:
- description: |
- The _key value of the stored data.
- type: string
- _rev:
- description: |
- The _rev value of the stored data.
- type: string
- _from:
- description: |
- The _from value of the stored data.
- type: string
- _to:
- description: |
- The _to value of the stored data.
- type: string
- new:
- description: |
- The complete newly written edge document.
- Includes all written attributes in the request body
- and all internal attributes generated by ArangoDB.
- Only present if `returnNew` is `true`.
- type: object
- required:
- - _id
- - _key
- - _rev
- - _from
- - _to
- properties:
- _id:
- description: |
- The _id value of the stored data.
- type: string
- _key:
- description: |
- The _key value of the stored data.
- type: string
- _rev:
- description: |
- The _rev value of the stored data.
- type: string
- _from:
- description: |
- The _from value of the stored data.
- type: string
- _to:
- description: |
- The _to value of the stored data.
- type: string
- old:
- description: |
- The complete overwritten edge document.
- Includes all attributes stored before this operation.
- Only present if `returnOld` is `true`.
- type: object
- required:
- - _id
- - _key
- - _rev
- - _from
- - _to
- properties:
- _id:
- description: |
- The _id value of the stored data.
- type: string
- _key:
- description: |
- The _key value of the stored data.
- type: string
- _rev:
- description: |
- The _rev value of the stored data.
- type: string
- _from:
- description: |
- The _from value of the stored data.
- type: string
- _to:
- description: |
- The _to value of the stored data.
- type: string
- '403':
- description: |
- Returned if your user has insufficient rights.
- In order to update edges in the graph, you need to have at least the following privileges:
- - `Read Only` access on the database.
- - `Write` access on the given collection.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - errorNum
- - errorMessage
- properties:
- error:
- description: |
- A flag indicating that an error occurred.
- type: boolean
- example: true
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 403
- errorNum:
- description: |
- ArangoDB error number for the error that occurred.
- type: integer
- errorMessage:
- description: |
- A descriptive error message.
- type: string
- '404':
- description: |
- Returned in the following cases:
- - No graph with this name can be found.
- - This collection is not part of the graph.
- - The edge to update does not exist.
- - Either `_from` or `_to` vertex does not exist (if updated).
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - errorNum
- - errorMessage
- properties:
- error:
- description: |
- A flag indicating that an error occurred.
- type: boolean
- example: true
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 404
- errorNum:
- description: |
- ArangoDB error number for the error that occurred.
- type: integer
- errorMessage:
- description: |
- A descriptive error message.
- type: string
- '412':
- description: |
- Returned if the `if-match` header is given, but the stored document's revision is different.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - errorNum
- - errorMessage
- properties:
- error:
- description: |
- A flag indicating that an error occurred.
- type: boolean
- example: true
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 412
- errorNum:
- description: |
- ArangoDB error number for the error that occurred.
- type: integer
- errorMessage:
- description: |
- A descriptive error message.
- type: string
- tags:
- - Graphs
-```
-
-**Examples**
-
-```curl
----
-description: ''
-name: HttpGharialPatchEdge
----
-var examples = require("@arangodb/graph-examples/example-graph.js");
-examples.dropGraph("social");
-examples.loadGraph("social");
-var any = require("@arangodb").db.relation.any();
-var url = "/_api/gharial/social/edge/relation/" + any._key;
-var body = {
- since: "01.01.2001"
-}
-var response = logCurlRequest('PATCH', url, body);
-assert(response.code === 202);
-
-logJsonResponse(response);
-examples.dropGraph("social");
-```
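-
-To illustrate the `keepNull` query parameter described above, the following
-hedged sketch (not a generated example) patches the `since` attribute to
-`null` with `keepNull=false`, which removes the attribute from the stored edge
-instead of keeping it with a `null` value:
-
-```js
-var examples = require("@arangodb/graph-examples/example-graph.js");
-examples.dropGraph("social");
-examples.loadGraph("social");
-var any = require("@arangodb").db.relation.any();
-// With keepNull=false, top-level attributes patched to null are removed.
-var url = "/_api/gharial/social/edge/relation/" + any._key + "?keepNull=false";
-var response = logCurlRequest('PATCH', url, { since: null });
-
-assert(response.code === 202);
-
-logJsonResponse(response);
-examples.dropGraph("social");
-```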
-
-### Replace an edge
-
-```openapi
-paths:
- /_api/gharial/{graph}/edge/{collection}/{edge}:
- put:
- operationId: replaceEdge
- description: |
- Replaces the data of an edge in the collection.
- parameters:
- - name: graph
- in: path
- required: true
- description: |
- The name of the graph.
- schema:
- type: string
- - name: collection
- in: path
- required: true
- description: |
- The name of the edge collection the edge belongs to.
- schema:
- type: string
- - name: edge
- in: path
- required: true
- description: |
- The `_key` attribute of the edge.
- schema:
- type: string
- - name: waitForSync
- in: query
- required: false
- description: |
- Define if the request should wait until synced to disk.
- schema:
- type: boolean
- - name: keepNull
- in: query
- required: false
- description: |
- Define if values set to `null` should be stored.
- By default (`true`), the given document attributes are set to `null`.
- If this parameter is set to `false`, top-level attribute and sub-attributes with
- a `null` value in the request are removed from the document (but not attributes
- of objects that are nested inside of arrays).
- schema:
- type: boolean
- - name: returnOld
- in: query
- required: false
- description: |
- Define if a representation of the old document should
- be returned within the response object.
- schema:
- type: boolean
- - name: returnNew
- in: query
- required: false
- description: |
- Define if a representation of the new document should
- be returned within the response object.
- schema:
- type: boolean
- - name: if-match
- in: header
- required: false
- description: |
- If the "If-Match" header is given, then it must contain exactly one ETag. The document is updated,
- if it has the same revision as the given ETag. Otherwise a HTTP 412 is returned. As an alternative
- you can supply the ETag in an attribute rev in the URL.
- schema:
- type: string
- - name: x-arango-trx-id
- in: header
- required: false
- description: |
- To make this operation a part of a Stream Transaction, set this header to the
- transaction ID returned by the `POST /_api/transaction/begin` call.
- schema:
- type: string
- requestBody:
- content:
- application/json:
- schema:
- type: object
- required:
- - _from
- - _to
- properties:
- _from:
- description: |
- The source vertex of this edge. Has to be valid within
- the used edge definition.
- type: string
- _to:
- description: |
- The target vertex of this edge. Has to be valid within
- the used edge definition.
- type: string
- responses:
- '201':
- description: |
- Returned if the request was successful and `waitForSync` is `true`.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - edge
- properties:
- error:
- description: |
- A flag indicating that no error occurred.
- type: boolean
- example: false
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 201
- edge:
- description: |
- The internal attributes for the edge.
- type: object
- required:
- - _id
- - _key
- - _rev
- - _from
- - _to
- properties:
- _id:
- description: |
- The _id value of the stored data.
- type: string
- _key:
- description: |
- The _key value of the stored data.
- type: string
- _rev:
- description: |
- The _rev value of the stored data.
- type: string
- _from:
- description: |
- The _from value of the stored data.
- type: string
- _to:
- description: |
- The _to value of the stored data.
- type: string
- new:
- description: |
- The complete newly written edge document.
- Includes all written attributes in the request body
- and all internal attributes generated by ArangoDB.
- Only present if `returnNew` is `true`.
- type: object
- required:
- - _id
- - _key
- - _rev
- - _from
- - _to
- properties:
- _id:
- description: |
- The _id value of the stored data.
- type: string
- _key:
- description: |
- The _key value of the stored data.
- type: string
- _rev:
- description: |
- The _rev value of the stored data.
- type: string
- _from:
- description: |
- The _from value of the stored data.
- type: string
- _to:
- description: |
- The _to value of the stored data.
- type: string
- old:
- description: |
- The complete overwritten edge document.
- Includes all attributes stored before this operation.
- Only present if `returnOld` is `true`.
- type: object
- required:
- - _id
- - _key
- - _rev
- - _from
- - _to
- properties:
- _id:
- description: |
- The _id value of the stored data.
- type: string
- _key:
- description: |
- The _key value of the stored data.
- type: string
- _rev:
- description: |
- The _rev value of the stored data.
- type: string
- _from:
- description: |
- The _from value of the stored data.
- type: string
- _to:
- description: |
- The _to value of the stored data.
- type: string
- '202':
- description: |
- Returned if the request was successful but `waitForSync` is `false`.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - edge
- properties:
- error:
- description: |
- A flag indicating that no error occurred.
- type: boolean
- example: false
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 202
- edge:
- description: |
- The internal attributes for the edge.
- type: object
- required:
- - _id
- - _key
- - _rev
- - _from
- - _to
- properties:
- _id:
- description: |
- The _id value of the stored data.
- type: string
- _key:
- description: |
- The _key value of the stored data.
- type: string
- _rev:
- description: |
- The _rev value of the stored data.
- type: string
- _from:
- description: |
- The _from value of the stored data.
- type: string
- _to:
- description: |
- The _to value of the stored data.
- type: string
- new:
- description: |
- The complete newly written edge document.
- Includes all written attributes in the request body
- and all internal attributes generated by ArangoDB.
- Only present if `returnNew` is `true`.
- type: object
- required:
- - _id
- - _key
- - _rev
- - _from
- - _to
- properties:
- _id:
- description: |
- The _id value of the stored data.
- type: string
- _key:
- description: |
- The _key value of the stored data.
- type: string
- _rev:
- description: |
- The _rev value of the stored data.
- type: string
- _from:
- description: |
- The _from value of the stored data.
- type: string
- _to:
- description: |
- The _to value of the stored data.
- type: string
- old:
- description: |
- The complete overwritten edge document.
- Includes all attributes stored before this operation.
- Only present if `returnOld` is `true`.
- type: object
- required:
- - _id
- - _key
- - _rev
- - _from
- - _to
- properties:
- _id:
- description: |
- The _id value of the stored data.
- type: string
- _key:
- description: |
- The _key value of the stored data.
- type: string
- _rev:
- description: |
- The _rev value of the stored data.
- type: string
- _from:
- description: |
- The _from value of the stored data.
- type: string
- _to:
- description: |
- The _to value of the stored data.
- type: string
- '403':
- description: |
- Returned if your user has insufficient rights.
- In order to replace edges in the graph, you need to have at least the following privileges:
- - `Read Only` access on the database.
- - `Write` access on the given collection.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - errorNum
- - errorMessage
- properties:
- error:
- description: |
- A flag indicating that an error occurred.
- type: boolean
- example: true
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 403
- errorNum:
- description: |
- ArangoDB error number for the error that occurred.
- type: integer
- errorMessage:
- description: |
- A descriptive error message.
- type: string
- '404':
- description: |
- Returned in the following cases:
- - No graph with this name can be found.
- - This collection is not part of the graph.
- - The edge to replace does not exist.
- - Either `_from` or `_to` vertex does not exist.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - errorNum
- - errorMessage
- properties:
- error:
- description: |
- A flag indicating that an error occurred.
- type: boolean
- example: true
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 404
- errorNum:
- description: |
- ArangoDB error number for the error that occurred.
- type: integer
- errorMessage:
- description: |
- A descriptive error message.
- type: string
- '412':
- description: |
- Returned if the `if-match` header is given, but the stored document's revision is different.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - errorNum
- - errorMessage
- properties:
- error:
- description: |
- A flag indicating that an error occurred.
- type: boolean
- example: true
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 412
- errorNum:
- description: |
- ArangoDB error number for the error that occurred.
- type: integer
- errorMessage:
- description: |
- A descriptive error message.
- type: string
- tags:
- - Graphs
-```
-
-**Examples**
-
-```curl
----
-description: ''
-name: HttpGharialPutEdge
----
-var examples = require("@arangodb/graph-examples/example-graph.js");
-examples.dropGraph("social");
-examples.loadGraph("social");
-var any = require("@arangodb").db.relation.any();
-var url = "/_api/gharial/social/edge/relation/" + any._key;
-var body = {
- type: "divorced",
- _from: "female/alice",
- _to: "male/bob"
-}
-var response = logCurlRequest('PUT', url, body);
-
-assert(response.code === 202);
-
-logJsonResponse(response);
-examples.dropGraph("social");
-```
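-
-The `returnOld` and `returnNew` query parameters described above can be
-combined to get both document versions back in a single call. A hedged,
-illustrative sketch (not a generated example):
-
-```js
-var examples = require("@arangodb/graph-examples/example-graph.js");
-examples.dropGraph("social");
-examples.loadGraph("social");
-var any = require("@arangodb").db.relation.any();
-// Request both the overwritten ("old") and the new edge document.
-var url = "/_api/gharial/social/edge/relation/" + any._key +
-  "?returnOld=true&returnNew=true";
-var body = {
-  type: "married",
-  _from: "female/alice",
-  _to: "male/bob"
-};
-var response = logCurlRequest('PUT', url, body);
-
-assert(response.code === 202);
-
-logJsonResponse(response);
-examples.dropGraph("social");
-```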
-
-### Remove an edge
-
-```openapi
-paths:
- /_api/gharial/{graph}/edge/{collection}/{edge}:
- delete:
- operationId: deleteEdge
- description: |
- Removes an edge from the collection.
- parameters:
- - name: graph
- in: path
- required: true
- description: |
- The name of the graph.
- schema:
- type: string
- - name: collection
- in: path
- required: true
- description: |
- The name of the edge collection the edge belongs to.
- schema:
- type: string
- - name: edge
- in: path
- required: true
- description: |
- The `_key` attribute of the edge.
- schema:
- type: string
- - name: waitForSync
- in: query
- required: false
- description: |
- Define if the request should wait until synced to disk.
- schema:
- type: boolean
- - name: returnOld
- in: query
- required: false
- description: |
- Define if a representation of the deleted document should
- be returned within the response object.
- schema:
- type: boolean
- - name: if-match
- in: header
- required: false
- description: |
- If the "If-Match" header is given, then it must contain exactly one ETag. The document is updated,
- if it has the same revision as the given ETag. Otherwise a HTTP 412 is returned. As an alternative
- you can supply the ETag in an attribute rev in the URL.
- schema:
- type: string
- - name: x-arango-trx-id
- in: header
- required: false
- description: |
- To make this operation a part of a Stream Transaction, set this header to the
- transaction ID returned by the `POST /_api/transaction/begin` call.
- schema:
- type: string
- responses:
- '200':
- description: |
- Returned if the edge can be removed.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - removed
- properties:
- error:
- description: |
- A flag indicating that no error occurred.
- type: boolean
- example: false
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 200
- removed:
- description: |
- Is set to true if the remove was successful.
- type: boolean
- old:
- description: |
- The complete deleted edge document.
- Includes all attributes stored before this operation.
- Only present if `returnOld` is `true`.
- type: object
- required:
- - _id
- - _key
- - _rev
- - _from
- - _to
- properties:
- _id:
- description: |
- The _id value of the stored data.
- type: string
- _key:
- description: |
- The _key value of the stored data.
- type: string
- _rev:
- description: |
- The _rev value of the stored data.
- type: string
- _from:
- description: |
- The _from value of the stored data.
- type: string
- _to:
- description: |
- The _to value of the stored data.
- type: string
- '202':
- description: |
- Returned if the request was successful but `waitForSync` is `false`.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - removed
- properties:
- error:
- description: |
- A flag indicating that no error occurred.
- type: boolean
- example: false
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 202
- removed:
- description: |
- Is set to true if the remove was successful.
- type: boolean
- old:
- description: |
- The complete deleted edge document.
- Includes all attributes stored before this operation.
- Only present if `returnOld` is `true`.
- type: object
- required:
- - _id
- - _key
- - _rev
- - _from
- - _to
- properties:
- _id:
- description: |
- The _id value of the stored data.
- type: string
- _key:
- description: |
- The _key value of the stored data.
- type: string
- _rev:
- description: |
- The _rev value of the stored data.
- type: string
- _from:
- description: |
- The _from value of the stored data.
- type: string
- _to:
- description: |
- The _to value of the stored data.
- type: string
- '403':
- description: |
- Returned if your user has insufficient rights.
- In order to delete edges in the graph, you need to have at least the following privileges:
- - `Read Only` access on the database.
- - `Write` access on the given collection.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - errorNum
- - errorMessage
- properties:
- error:
- description: |
- A flag indicating that an error occurred.
- type: boolean
- example: true
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 403
- errorNum:
- description: |
- ArangoDB error number for the error that occurred.
- type: integer
- errorMessage:
- description: |
- A descriptive error message.
- type: string
- '404':
- description: |
- Returned in the following cases:
- - No graph with this name can be found.
- - This collection is not part of the graph.
- - The edge to remove does not exist.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - errorNum
- - errorMessage
- properties:
- error:
- description: |
- A flag indicating that an error occurred.
- type: boolean
- example: true
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 404
- errorNum:
- description: |
- ArangoDB error number for the error that occurred.
- type: integer
- errorMessage:
- description: |
- A descriptive error message.
- type: string
- '412':
- description: |
- Returned if the `if-match` header is given, but the stored document's revision is different.
- content:
- application/json:
- schema:
- type: object
- required:
- - error
- - code
- - errorNum
- - errorMessage
- properties:
- error:
- description: |
- A flag indicating that an error occurred.
- type: boolean
- example: true
- code:
- description: |
- The HTTP response status code.
- type: integer
- example: 412
- errorNum:
- description: |
- ArangoDB error number for the error that occurred.
- type: integer
- errorMessage:
- description: |
- A descriptive error message.
- type: string
- tags:
- - Graphs
-```
-
-**Examples**
-
-```curl
----
-description: ''
-name: HttpGharialDeleteEdge
----
-var examples = require("@arangodb/graph-examples/example-graph.js");
-examples.dropGraph("social");
-examples.loadGraph("social");
-var any = require("@arangodb").db.relation.any();
-var url = "/_api/gharial/social/edge/relation/" + any._key;
-var response = logCurlRequest('DELETE', url);
-
-assert(response.code === 202);
-
-logJsonResponse(response);
-examples.dropGraph("social");
-```
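-
-As with the update and replace operations, setting `returnOld=true` makes the
-removal return the deleted edge document under the `old` attribute. A hedged
-sketch along the lines of the generated example above:
-
-```js
-var examples = require("@arangodb/graph-examples/example-graph.js");
-examples.dropGraph("social");
-examples.loadGraph("social");
-var any = require("@arangodb").db.relation.any();
-// The response then contains the removed edge under the "old" attribute.
-var url = "/_api/gharial/social/edge/relation/" + any._key + "?returnOld=true";
-var response = logCurlRequest('DELETE', url);
-
-assert(response.code === 202);
-
-logJsonResponse(response);
-examples.dropGraph("social");
-```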
diff --git a/site/content/3.10/develop/integrations/arangodb-datasource-for-apache-spark.md b/site/content/3.10/develop/integrations/arangodb-datasource-for-apache-spark.md
deleted file mode 100644
index 6cb6217cb0..0000000000
--- a/site/content/3.10/develop/integrations/arangodb-datasource-for-apache-spark.md
+++ /dev/null
@@ -1,372 +0,0 @@
----
-title: ArangoDB Datasource for Apache Spark
-menuTitle: Datasource for Apache Spark
-weight: 10
-description: >-
- ArangoDB Datasource for Apache Spark allows batch reading and writing Spark DataFrame data
-aliases:
-- arangodb-spark-connector
-- arangodb-spark-connector/getting-started
-- arangodb-spark-connector/reference
-- arangodb-spark-connector/reference/java
-- arangodb-spark-connector/reference/scala
----
-ArangoDB Datasource for Apache Spark allows batch reading and writing Spark DataFrame data from and to ArangoDB, by implementing the Spark Data Source V2 API.
-
-Reading tasks are parallelized according to the number of shards of the related ArangoDB collection, while writing tasks are parallelized according to the source DataFrame partitions. The network traffic is load balanced across the available Coordinators.
-
-Filter predicates and column selections are pushed down to the database by dynamically generating AQL queries, which fetch only the strictly required data, thus saving network and computational resources on both the Spark and the database side.
-
-The connector is usable from all Spark-supported client languages: Scala, Python, Java, and R.
-
-This library works with all the non-EOLed [ArangoDB versions](https://www.arangodb.com/subscriptions/end-of-life-notice/).
-
-## Supported versions
-
-There are several variants of this library, each one compatible with different Spark and Scala versions:
-
-- `com.arangodb:arangodb-spark-datasource-3.3_2.12` (Spark 3.3, Scala 2.12)
-- `com.arangodb:arangodb-spark-datasource-3.3_2.13` (Spark 3.3, Scala 2.13)
-- `com.arangodb:arangodb-spark-datasource-3.4_2.12` (Spark 3.4, Scala 2.12) (compatible with Spark `3.4.2+`)
-- `com.arangodb:arangodb-spark-datasource-3.4_2.13` (Spark 3.4, Scala 2.13) (compatible with Spark `3.4.2+`)
-- `com.arangodb:arangodb-spark-datasource-3.5_2.12` (Spark 3.5, Scala 2.12)
-- `com.arangodb:arangodb-spark-datasource-3.5_2.13` (Spark 3.5, Scala 2.13)
-
-The following variants are no longer supported:
-
-- `com.arangodb:arangodb-spark-datasource-2.4_2.11` (Spark 2.4, Scala 2.11)
-- `com.arangodb:arangodb-spark-datasource-2.4_2.12` (Spark 2.4, Scala 2.12)
-- `com.arangodb:arangodb-spark-datasource-3.1_2.12` (Spark 3.1, Scala 2.12)
-- `com.arangodb:arangodb-spark-datasource-3.2_2.12` (Spark 3.2, Scala 2.12)
-- `com.arangodb:arangodb-spark-datasource-3.2_2.13` (Spark 3.2, Scala 2.13)
-
-Since version `1.7.0`, due to [breaking changes](https://github.com/apache/spark/commit/ad29290a02fb94a958fd21e301100338c9f5b82a#diff-b25c8acff88c1b4850c6642e80845aac4fb882c664795c3b0aa058e37ed732a0L42-R52)
-in Spark `3.4.2`, `arangodb-spark-datasource-3.4` is not compatible anymore with Spark versions `3.4.0` and `3.4.1`.
-
-In the following sections the `${sparkVersion}` and `${scalaVersion}` placeholders refer to the Spark and Scala versions.
-
-## Setup
-
-To import ArangoDB Datasource for Apache Spark in a Maven project:
-
-```xml
-<dependencies>
-  <dependency>
-    <groupId>com.arangodb</groupId>
-    <artifactId>arangodb-spark-datasource-${sparkVersion}_${scalaVersion}</artifactId>
-    <version>x.y.z</version>
-  </dependency>
-</dependencies>
-```
-
-To use in an external Spark cluster, submit your application with the following parameter:
-
-```sh
---packages="com.arangodb:arangodb-spark-datasource-${sparkVersion}_${scalaVersion}:x.y.z"
-```
-
-## General Configuration
-
-- `user`: db user, `root` by default
-- `password`: db password
-- `endpoints`: list of Coordinators, e.g. `c1:8529,c2:8529` (required)
-- `acquireHostList`: acquire the list of all known hosts in the cluster (`true` or `false`), `false` by default
-- `protocol`: communication protocol (`vst`, `http`, or `http2`), `http2` by default
-- `contentType`: content type for driver communication (`json` or `vpack`), `json` by default
-- `timeout`: driver connect and request timeout in ms, `300000` by default
-- `ssl.enabled`: ssl secured driver connection (`true` or `false`), `false` by default
-- `ssl.verifyHost`: whether TLS hostname verification is enabled, `true` by default
-- `ssl.cert.value`: Base64 encoded certificate
-- `ssl.cert.type`: certificate type, `X.509` by default
-- `ssl.cert.alias`: certificate alias name, `arangodb` by default
-- `ssl.algorithm`: trust manager algorithm, `SunX509` by default
-- `ssl.keystore.type`: keystore type, `jks` by default
-- `ssl.protocol`: SSLContext protocol, `TLS` by default
-
-### SSL
-
-To use TLS secured connections to ArangoDB, set `ssl.enabled` to `true` and either:
- provide a Base64 encoded certificate as the `ssl.cert.value` configuration entry and optionally set other `ssl.*` configuration entries, or
-- start the Spark driver and workers with a properly configured [JVM default TrustStore](https://spark.apache.org/docs/latest/security.html#ssl-configuration)
-
-### Supported deployment topologies
-
-The connector works with single server, cluster, and Active Failover deployments of ArangoDB.
-
-## Batch Read
-
-The connector implements support for batch reading from an ArangoDB collection.
-
-```scala
-val df: DataFrame = spark.read
- .format("com.arangodb.spark")
- .options(options) // Map[String, String]
- .schema(schema) // StructType
- .load()
-```
-
-The connector can read data from:
-- a collection
-- an AQL cursor (query specified by the user)
-
-When reading data from a **collection**, the reading job is split into many Spark tasks, one for each shard in the ArangoDB source collection. The resulting Spark DataFrame has the same number of partitions as the number of shards in the ArangoDB collection, each one containing data from the respective collection shard. The reading tasks consist of AQL queries that are load balanced across all the available ArangoDB Coordinators. Each query is related to only one shard, therefore it will be executed locally in the DB-Server holding the related shard.
-
-When reading data from an **AQL cursor**, the reading job cannot be partitioned or parallelized, so it is less scalable. This mode can be used for reading data coming from different collections, e.g. the result of an AQL traversal query.
-
-**Example**
-
-```scala
-val spark: SparkSession = SparkSession.builder()
- .appName("ArangoDBSparkDemo")
- .master("local[*]")
- .config("spark.driver.host", "127.0.0.1")
- .getOrCreate()
-
-val df: DataFrame = spark.read
- .format("com.arangodb.spark")
- .options(Map(
- "password" -> "test",
- "endpoints" -> "c1:8529,c2:8529,c3:8529",
- "table" -> "users"
- ))
- .schema(new StructType(
- Array(
- StructField("likes", ArrayType(StringType, containsNull = false)),
- StructField("birthday", DateType, nullable = true),
- StructField("gender", StringType, nullable = false),
- StructField("name", StructType(
- Array(
- StructField("first", StringType, nullable = true),
- StructField("last", StringType, nullable = false)
- )
- ), nullable = true)
- )
- ))
- .load()
-
-df.filter(col("birthday") === "1982-12-15").show()
-```
-
-### Read Configuration
-
-- `database`: database name, `_system` by default
-- `table`: datasource ArangoDB collection name, ignored if `query` is specified. Either `table` or `query` is required.
-- `query`: custom AQL read query. If set, `table` will be ignored. Either `table` or `query` is required.
-- `batchSize`: reading batch size, `10000` by default
-- `sampleSize`: sample size prefetched for schema inference, only used if read schema is not provided, `1000` by default
-- `fillBlockCache`: specifies whether the query should store the data it reads in the RocksDB block cache (`true` or `false`), `false` by default
-- `stream`: specifies whether the query should be executed lazily, `true` by default
-- `ttl`: cursor time to live in seconds, `30` by default
-- `mode`: allows setting a mode for dealing with corrupt records during parsing:
- `PERMISSIVE`: in case of a corrupted record, the malformed string is put into a field configured by
- `columnNameOfCorruptRecord` and the malformed fields are set to null. To keep corrupt records, a user can set a string
- type field named `columnNameOfCorruptRecord` in a user-defined schema. If a schema does not have the field, corrupt
- records are dropped during parsing. When inferring a schema, the `columnNameOfCorruptRecord` field is implicitly
- added to the output schema
- - `DROPMALFORMED`: ignores the whole corrupted records
- - `FAILFAST`: throws an exception in case of corrupted records
-- `columnNameOfCorruptRecord`: allows renaming the new field having malformed string created by the `PERMISSIVE` mode
-
-### Predicate and Projection Pushdown
-
-The connector can convert some Spark SQL filter predicates into AQL predicates and push their execution down to the data source. In this way, ArangoDB can apply the filters and return only the matching documents.
-
-The following filter predicates (implementations of `org.apache.spark.sql.sources.Filter`) are pushed down:
-- `And`
-- `Or`
-- `Not`
-- `EqualTo`
-- `EqualNullSafe`
-- `IsNull`
-- `IsNotNull`
-- `GreaterThan`
- `GreaterThanOrEqual`
- `LessThan`
- `LessThanOrEqual`
- `StringStartsWith`
- `StringEndsWith`
- `StringContains`
- `In`
-
-Furthermore, the connector pushes down the subset of columns required by the Spark query, so that only the relevant document fields are returned.
-
-Predicate and projection pushdowns are only performed while reading an ArangoDB collection (set by the `table` configuration parameter). In case of a batch read from a custom query (set by the `query` configuration parameter), no pushdown optimizations are performed.
-
-### Read Resiliency
-
-The data of each partition is read using an AQL cursor. If any error occurs, the read task of the related partition will fail. Depending on the Spark configuration, the task could be retried.
-
-## Batch Write
-
-The connector implements support for batch writing to an ArangoDB collection.
-
-```scala
-import org.apache.spark.sql.DataFrame
-
-val df: DataFrame = //...
-df.write
- .format("com.arangodb.spark")
- .mode(SaveMode.Append)
- .options(Map(
- "password" -> "test",
- "endpoints" -> "c1:8529,c2:8529,c3:8529",
- "table" -> "users"
- ))
- .save()
-```
-
-Write tasks are load balanced across the available ArangoDB Coordinators. The data saved into ArangoDB is sharded according to the target collection definition, which is independent of the Spark DataFrame partitioning.
-
-### SaveMode
-
-On writing, `org.apache.spark.sql.SaveMode` is used to specify the expected behavior in case the target collection already exists.
-
-The following save modes are supported:
-- `Append`: the target collection is created, if it does not exist.
-- `Overwrite`: the target collection is created, if it does not exist, otherwise it is truncated. Use it in combination with the
- `confirmTruncate` write configuration parameter.
-
-Save modes `ErrorIfExists` and `Ignore` behave the same as `Append`.
-
-Use the `overwriteMode` write configuration parameter to specify the document overwrite behavior (if a document with the same `_key` already exists).
-
-### Write Configuration
-
-- `database`: database name, `_system` by default
-- `table`: target ArangoDB collection name (required)
-- `batchSize`: writing batch size, `10000` by default
-- `byteBatchSize`: byte batch size threshold, only considered for `contentType=json`, `8388608` by default (8 MB)
-- `table.shards`: number of shards of the created collection (in case of the `Append` or `Overwrite` SaveMode)
-- `table.type`: type (`document` or `edge`) of the created collection (in case of the `Append` or `Overwrite` SaveMode), `document` by default
-- `waitForSync`: specifies whether to wait until the documents have been synced to disk (`true` or `false`), `false` by default
-- `confirmTruncate`: confirms to truncate table when using the `Overwrite` SaveMode, `false` by default
-- `overwriteMode`: configures the behavior in case a document with the specified `_key` value already exists. It is only considered for `Append` SaveMode.
- - `ignore` (default for SaveMode other than `Append`): it will not be written
- - `replace`: it will be overwritten with the specified document value
- `update`: it will be patched (partially updated) with the specified document value. The update behavior can be
- further controlled via the `keepNull` and `mergeObjects` parameters. `keepNull` is also automatically set to
- `true`, so that null values are kept in the saved documents and not used to remove existing document fields (as in
- the default ArangoDB upsert behavior).
- - `conflict` (default for the `Append` SaveMode): return a unique constraint violation error so that the insert operation fails
-- `mergeObjects`: in case `overwriteMode` is set to `update`, controls whether objects (not arrays) will be merged.
- - `true` (default): objects will be merged
- - `false`: existing document fields will be overwritten
-- `keepNull`: in case `overwriteMode` is set to `update`
- - `true` (default): `null` values are saved within the document (by default)
- - `false`: `null` values are used to delete the corresponding existing attributes
-- `retry.maxAttempts`: max attempts for retrying write requests in case they are idempotent, `10` by default
-- `retry.minDelay`: min delay in ms between write requests retries, `0` by default
-- `retry.maxDelay`: max delay in ms between write requests retries, `0` by default
-
-### Write Resiliency
-
-The data of each partition is saved in batches using the ArangoDB API for
-[inserting multiple documents](../http-api/documents.md#multiple-document-operations).
-This operation is not atomic, therefore some documents could be successfully written to the database, while others could fail. To make the job more resilient to temporary errors (e.g. connectivity problems), the request is retried (with another Coordinator) in case of failure, if the provided configuration allows idempotent requests, namely:
-- the schema of the dataframe has a **not nullable** `_key` field and
-- `overwriteMode` is set to one of the following values:
- - `replace`
- - `ignore`
- `update` with `keepNull=true`
-
-A failing batch-saving request is retried once for every Coordinator. After that, if still failing, the write task for the related partition is aborted. According to the Spark configuration, the task can be retried and rescheduled on a different executor, if the provided write configuration allows idempotent requests (as described above).
-
-If a task ultimately fails and is aborted, the entire write job will be aborted as well. Depending on the `SaveMode` configuration, the following cleanup operations will be performed:
-- `Append`: no cleanup is performed and the underlying data source may require manual cleanup.
- `DataWriteAbortException` is thrown.
-- `Overwrite`: the target collection will be truncated.
-- `ErrorIfExists`: the target collection will be dropped.
-- `Ignore`: if the collection did not exist before, it will be dropped; otherwise, nothing will be done.
-
-### Write requirements
-
-When writing to an edge collection (`table.type=edge`), the schema of the Dataframe being written must have:
- a non-nullable string field named `_from`, and
- a non-nullable string field named `_to`
-
-### Write Limitations
-
- Batch writes are not performed atomically, so sometimes (e.g. in case of `overwriteMode: conflict`) several documents in the batch may be written while others fail with an exception (e.g. due to a conflicting key).
-- Writing records with the `_key` attribute is only allowed on collections sharded by `_key`.
-- In case of the `Append` save mode, failed jobs cannot be rolled back and the underlying data source may require manual cleanup.
-- Speculative execution of tasks only works for idempotent write configurations. See [Write Resiliency](#write-resiliency) for more details.
- Speculative execution of tasks can cause concurrent writes to the same documents, resulting in write-write conflicts or lock timeouts.
-
-## Mapping Configuration
-
-Serialization and deserialization of Spark DataFrame rows to and from JSON (or VelocyPack) can be customized using the following options:
-- `ignoreNullFields`: whether to ignore null fields during serialization, `false` by default (only supported in Spark 3.x)
-
-## Supported Spark data types
-
-The following Spark SQL data types (subtypes of `org.apache.spark.sql.types.DataType`) are supported for reading, writing, and filter pushdown.
-
-- Numeric types:
- - `ByteType`
- - `ShortType`
- - `IntegerType`
- - `LongType`
- - `FloatType`
- - `DoubleType`
-
-- String types:
- - `StringType`
-
-- Boolean types:
- - `BooleanType`
-
-- Datetime types:
- - `TimestampType`
- - `DateType`
-
-- Complex types:
- - `ArrayType`
- - `MapType` (only with key type `StringType`)
- - `StructType`
-
-## Connect to the ArangoGraph Insights Platform
-
-To connect to SSL secured deployments using X.509 Base64 encoded CA certificate (ArangoGraph):
-
-```scala
-val options = Map(
-  "database" -> "<database>",
-  "user" -> "<user>",
-  "password" -> "<password>",
-  "endpoints" -> "<endpoint>:<port>",
-  "ssl.cert.value" -> "<base64 encoded CA certificate>",
-  "ssl.enabled" -> "true",
-  "table" -> "<table>"
-)
-
-// read
-val myDF = spark.read
- .format("com.arangodb.spark")
- .options(options)
- .load()
-
-// write
-import org.apache.spark.sql.DataFrame
-val df: DataFrame = //...
-df.write
- .format("com.arangodb.spark")
- .options(options)
- .save()
-```
-
-## Current limitations
-
-- For `contentType=vpack`, implicit deserialization casts don't work well, e.g.
- reading a document that has a numeric value in a field for which the related
- read schema requires a string value.
-- Dates and timestamps fields are interpreted to be in a UTC time zone.
-- In read jobs using `stream=true` (default), possible AQL warnings are only
- logged at the end of each read task (BTS-671).
-- Spark SQL `DecimalType` fields are not supported in write jobs when using `contentType=json`.
-- Spark SQL `DecimalType` values are written to the database as strings.
-- `byteBatchSize` is only considered for `contentType=json` (DE-226)
-
-## Demo
-
-Check out our [demo](https://github.com/arangodb/arangodb-spark-datasource/tree/main/demo)
-to learn more about ArangoDB Datasource for Apache Spark.
diff --git a/site/content/3.10/develop/satellitecollections.md b/site/content/3.10/develop/satellitecollections.md
deleted file mode 100644
index d470d6bb2b..0000000000
--- a/site/content/3.10/develop/satellitecollections.md
+++ /dev/null
@@ -1,139 +0,0 @@
----
-title: SatelliteCollections
-menuTitle: SatelliteCollections
-weight: 250
-description: >-
- Collections synchronously replicated to all servers, available in the Enterprise Edition
----
-{{< tag "ArangoDB Enterprise Edition" "ArangoGraph" >}}
-
-When doing joins in an ArangoDB cluster, data has to be exchanged between different servers.
-
-Joins are executed on a Coordinator. It prepares an execution plan
-and executes it. When executing, the Coordinator contacts all shards of the
-starting point of the join and asks for their data. The DB-Servers carrying
-out this operation load their local data and then ask the cluster for
-the other part of the join. Again, this is distributed to all involved shards
-of this join part.
-
-In sum, this results in a lot of network traffic and slow results, depending on the
-amount of data that has to be sent throughout the cluster.
-
-SatelliteCollections are collections that are intended to address this issue.
-They use synchronous replication to replicate all their data
-to all DB-Servers that are part of the cluster.
-
-This enables the DB-Servers to execute that part of any join locally.
-
-This greatly improves performance for such joins at the cost of increased
-storage requirements and poorer write performance on this data.
-
-To create a SatelliteCollection, set the `replicationFactor` of the collection
-to `"satellite"`.
-
-Using arangosh:
-
-```js
-db._create("satellite", {"replicationFactor": "satellite"});
-```
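-
-To verify the result, you can inspect the collection properties (a minimal
-sketch, assuming the collection created above):
-
-```js
-// A SatelliteCollection reports "satellite" as its replication factor.
-db.satellite.properties().replicationFactor; // "satellite"
-```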
-
-## A full example
-
-```js
-arangosh> var explain = require("@arangodb/aql/explainer").explain
-arangosh> db._create("satellite", {"replicationFactor": "satellite"})
-arangosh> db._create("nonsatellite", {numberOfShards: 8})
-arangosh> db._create("nonsatellite2", {numberOfShards: 8})
-```
-
-Let's analyze a normal join not involving SatelliteCollections:
-
-```js
-arangosh> explain("FOR doc in nonsatellite FOR doc2 in nonsatellite2 RETURN 1")
-
-Query string:
- FOR doc in nonsatellite FOR doc2 in nonsatellite2 RETURN 1
-
-Execution plan:
- Id NodeType Site Est. Comment
- 1 SingletonNode DBS 1 * ROOT
- 4 CalculationNode DBS 1 - LET #2 = 1 /* json expression */ /* const assignment */
- 2 EnumerateCollectionNode DBS 0 - FOR doc IN nonsatellite /* full collection scan */
- 12 RemoteNode COOR 0 - REMOTE
- 13 GatherNode COOR 0 - GATHER
- 6 ScatterNode COOR 0 - SCATTER
- 7 RemoteNode DBS 0 - REMOTE
- 3 EnumerateCollectionNode DBS 0 - FOR doc2 IN nonsatellite2 /* full collection scan */
- 8 RemoteNode COOR 0 - REMOTE
- 9 GatherNode COOR 0 - GATHER
- 5 ReturnNode COOR 0 - RETURN #2
-
-Indexes used:
- none
-
-Optimization rules applied:
- Id RuleName
- 1 move-calculations-up
- 2 scatter-in-cluster
- 3 remove-unnecessary-remote-scatter
-```
-
-The query fans out via the Coordinator to all 8 shards of the `nonsatellite`
-collection. These 8 shards open 8 connections
-to the Coordinator, asking for the results of the `nonsatellite2` join. The Coordinator
-fans out to the 8 shards of `nonsatellite2`. So there is quite some
-network traffic.
-
-Let's now have a look at the same using SatelliteCollections:
-
-```js
-arangosh> db._query("FOR doc in nonsatellite FOR doc2 in satellite RETURN 1")
-
-Query string:
- FOR doc in nonsatellite FOR doc2 in satellite RETURN 1
-
-Execution plan:
- Id NodeType Site Est. Comment
- 1 SingletonNode DBS 1 * ROOT
- 4 CalculationNode DBS 1 - LET #2 = 1 /* json expression */ /* const assignment */
- 2 EnumerateCollectionNode DBS 0 - FOR doc IN nonsatellite /* full collection scan */
- 3 EnumerateCollectionNode DBS 0 - FOR doc2 IN satellite /* full collection scan, satellite */
- 8 RemoteNode COOR 0 - REMOTE
- 9 GatherNode COOR 0 - GATHER
- 5 ReturnNode COOR 0 - RETURN #2
-
-Indexes used:
- none
-
-Optimization rules applied:
- Id RuleName
- 1 move-calculations-up
- 2 scatter-in-cluster
- 3 remove-unnecessary-remote-scatter
- 4 remove-satellite-joins
-```
-
-In this scenario, all shards of `nonsatellite` are contacted. However,
-as the join is a satellite join, all shards can do the join locally,
-because the data is replicated to all servers, reducing the network overhead
-dramatically.
-
-## Caveats
-
-The cluster automatically keeps all SatelliteCollections on all servers in sync
-using synchronous replication. This means that writes are executed
-on the leader only, and this server coordinates replication to the followers.
-If a follower doesn't answer in time (due to network problems, temporary shutdown, etc.),
-it may be removed as a follower. This is reported to the Agency.
-
-The follower (once back in business) then periodically checks the Agency, learns
-that it is out of sync, and automatically catches up. This may take a while,
-depending on how much data has to be synced. When doing a join involving the SatelliteCollection,
-you can specify how long the DB-Server is allowed to wait for it to get in sync before the query
-is aborted. See [Cursors](http-api/queries/aql-queries.md#create-a-cursor) for details.
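-
-For example, the wait limit can be passed as a query option (a hedged sketch;
-`satelliteSyncWait` is given in seconds):
-
-```js
-// Abort the query if the SatelliteCollection replica on a DB-Server
-// does not get in sync within 15 seconds.
-db._query(
-  "FOR doc IN nonsatellite FOR doc2 IN satellite RETURN 1",
-  {},
-  { satelliteSyncWait: 15 }
-);
-```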
-
-During a network failure, there is also a minimal chance that a query is properly
-distributed to the DB-Servers but that a previous satellite write could not be
-replicated to a follower and the leader dropped the follower. The follower, however,
-only checks every few seconds whether it is really in sync, so it might indeed deliver
-stale results.
diff --git a/site/content/3.10/develop/smartjoins.md b/site/content/3.10/develop/smartjoins.md
deleted file mode 100644
index fd44d18d56..0000000000
--- a/site/content/3.10/develop/smartjoins.md
+++ /dev/null
@@ -1,308 +0,0 @@
----
-title: SmartJoins
-menuTitle: SmartJoins
-weight: 255
-description: >-
- SmartJoins allow you to execute co-located join operations among identically
- sharded collections
----
-{{< tag "ArangoDB Enterprise Edition" "ArangoGraph" >}}
-
-## Cluster joins without being smart
-
-When doing joins in an ArangoDB cluster, data has to be exchanged between different servers.
-Joins between different collections in a cluster normally require roundtrips between the
-shards of these collections for fetching the data. Requests are routed through an extra
-Coordinator hop.
-
-For example, with two collections `c1` and `c2` with 4 shards each, the Coordinator
-initially contacts the 4 shards of `c1`. In order to perform the join, the DB-Server nodes
-which manage the actual data of `c1` need to pull the data from the other collection, `c2`.
-This causes extra roundtrips via the Coordinator, which then pulls the data for `c2`
-from the responsible shards:
-
-```js
-arangosh> db._explain("FOR doc1 IN c1 FOR doc2 IN c2 FILTER doc1._key == doc2._key RETURN doc1");
-
-Query String:
- FOR doc1 IN c1 FOR doc2 IN c2 FILTER doc1._key == doc2._key RETURN doc1
-
-Execution plan:
- Id NodeType Site Est. Comment
- 1 SingletonNode DBS 1 * ROOT
- 3 EnumerateCollectionNode DBS 0 - FOR doc2 IN c2 /* full collection scan, 4 shard(s) */
- 14 RemoteNode COOR 0 - REMOTE
- 15 GatherNode COOR 0 - GATHER
- 8 ScatterNode COOR 0 - SCATTER
- 9 RemoteNode DBS 0 - REMOTE
- 7 IndexNode DBS 0 - FOR doc1 IN c1 /* primary index scan, 4 shard(s) */
- 10 RemoteNode COOR 0 - REMOTE
- 11 GatherNode COOR 0 - GATHER
- 6 ReturnNode COOR 0 - RETURN doc1
-```
-
-This is the general query execution, and it makes sense if there is no further
-information available about how the data is actually distributed to the individual
-shards. It works in case `c1` and `c2` have a different amount of shards, or use
-different shard keys or strategies. However, it comes with the additional cost of
-having to do 4 x 4 requests to perform the join.
-
-## Sharding two collections identically using distributeShardsLike
-
-In the specific case that the two collections have the same number of shards, the
-data of the two collections can be co-located on the same server for the same shard
-key values. In this case, the extra hop via the Coordinator is not necessary.
-
-The query optimizer removes the extra hop for the join in case it can prove
-that data for the two collections is co-located.
-
-The first step is thus to make the two collections shard their data alike. This can
-be achieved by making the `distributeShardsLike` attribute of one of the collections
-refer to the other collection.
-
-Here is an example setup for this, using arangosh:
-
-```js
-arangosh> db._create("c1", {numberOfShards: 4, shardKeys: ["_key"]});
-arangosh> db._create("c2", {shardKeys: ["_key"], distributeShardsLike: "c1"});
-```
-
-Now the collections `c1` and `c2` not only have the same shard keys, but they
-also locate their data for the same shard keys values on the same server.
-
-Let's check how the data actually gets distributed now. We first confirm that the
-two collections have 4 shards each, which in this example are evenly distributed
-across two servers:
-
-```js
-arangosh> db.c1.shards(true)
-{
- "s2011661" : [
- "PRMR-64d19f43-3aa0-4abb-81f6-4b9966d32175"
- ],
- "s2011662" : [
- "PRMR-5f30caa0-4c93-4fdd-98f3-a2130c1447df"
- ],
- "s2011663" : [
- "PRMR-64d19f43-3aa0-4abb-81f6-4b9966d32175"
- ],
- "s2011664" : [
- "PRMR-5f30caa0-4c93-4fdd-98f3-a2130c1447df"
- ]
-}
-
-arangosh> db.c2.shards(true)
-{
- "s2011666" : [
- "PRMR-64d19f43-3aa0-4abb-81f6-4b9966d32175"
- ],
- "s2011667" : [
- "PRMR-5f30caa0-4c93-4fdd-98f3-a2130c1447df"
- ],
- "s2011668" : [
- "PRMR-64d19f43-3aa0-4abb-81f6-4b9966d32175"
- ],
- "s2011669" : [
- "PRMR-5f30caa0-4c93-4fdd-98f3-a2130c1447df"
- ]
-}
-```
-
-Because we have told both collections to distribute their data alike, their
-shards are now also populated alike:
-
-```js
-arangosh> for (i = 0; i < 100; ++i) {
- db.c1.insert({ _key: "test" + i });
- db.c2.insert({ _key: "test" + i });
-}
-
-arangosh> db.c1.count(true);
-{
- "s2011664" : 22,
- "s2011661" : 21,
- "s2011663" : 27,
- "s2011662" : 30
-}
-
-arangosh> db.c2.count(true);
-{
- "s2011669" : 22,
- "s2011666" : 21,
- "s2011668" : 27,
- "s2011667" : 30
-}
-```
-
-We can see that shard 1 of `c1` ("s2011664") has the same number of documents as
-shard 1 of `c2` ("s2011669"), that shard 2 of `c1` ("s2011661") has the same
-number of documents as shard 2 of `c2` ("s2011666"), etc.
-Additionally, we can see from the shard-to-server distribution above that the
-corresponding shards from `c1` and `c2` always reside on the same node.
-This is a precondition for running joins locally, and thanks to the effects of
-`distributeShardsLike` it is now satisfied!
-
-## SmartJoins using distributeShardsLike
-
-With the two collections in place like this, an AQL query that compares the
-shard key of one collection with the shard key of the other collection
-for equality in a FILTER condition is eligible for the query
-optimizer's "smart-joins" optimization:
-
-```js
-arangosh> db._explain("FOR doc1 IN c1 FOR doc2 IN c2 FILTER doc1._key == doc2._key RETURN doc1");
-
-Query String:
- FOR doc1 IN c1 FOR doc2 IN c2 FILTER doc1._key == doc2._key RETURN doc1
-
-Execution plan:
- Id NodeType Site Est. Comment
- 1 SingletonNode DBS 1 * ROOT
- 3 EnumerateCollectionNode DBS 0 - FOR doc2 IN c2 /* full collection scan, 4 shard(s) */
- 7 IndexNode DBS 0 - FOR doc1 IN c1 /* primary index scan, 4 shard(s) */
- 10 RemoteNode COOR 0 - REMOTE
- 11 GatherNode COOR 0 - GATHER
- 6 ReturnNode COOR 0 - RETURN doc1
-```
-
-As can be seen above, the extra hop via the Coordinator is gone here, which means
-less cluster-internal traffic and a faster response time.
-
-SmartJoins also work if the shard key of the second collection is not `_key`,
-and even for non-unique shard key values, e.g.:
-
-```js
-arangosh> db._create("c1", {numberOfShards: 4, shardKeys: ["_key"]});
-arangosh> db._create("c2", {shardKeys: ["parent"], distributeShardsLike: "c1"});
-arangosh> db.c2.ensureIndex({ type: "hash", fields: ["parent"] });
-arangosh> for (i = 0; i < 100; ++i) {
- db.c1.insert({ _key: "test" + i });
- for (j = 0; j < 10; ++j) {
- db.c2.insert({ parent: "test" + i });
- }
-}
-
-arangosh> db._explain("FOR doc1 IN c1 FOR doc2 IN c2 FILTER doc1._key == doc2.parent RETURN doc1");
-
-Query String:
- FOR doc1 IN c1 FOR doc2 IN c2 FILTER doc1._key == doc2.parent RETURN doc1
-
-Execution plan:
- Id NodeType Site Est. Comment
- 1 SingletonNode DBS 1 * ROOT
- 3 EnumerateCollectionNode DBS 2000 - FOR doc2 IN c2 /* full collection scan, 4 shard(s) */
- 7 IndexNode DBS 2000 - FOR doc1 IN c1 /* primary index scan, 4 shard(s) */
- 10 RemoteNode COOR 2000 - REMOTE
- 11 GatherNode COOR 2000 - GATHER
- 6 ReturnNode COOR 2000 - RETURN doc1
-```
-
-{{< tip >}}
-All of the above examples used only two collections. SmartJoins also work when
-joining more than two collections or Views, provided that they have the same data
-distribution enforced via their `distributeShardsLike` attribute and that the
-shard keys are used as the join criteria as shown above.
-{{< /tip >}}
-
-## SmartJoins using smartJoinAttribute
-
-In case the join on the second collection must be performed on a non-shard key
-attribute, there is the option to specify a `smartJoinAttribute` for the collection.
-Note that setting `distributeShardsLike` is still required in this case, and that
-only a single `shardKeys` attribute can be used. The single attribute name
-specified in the `shardKeys` attribute for the collection must then end with a
-colon character.
-
-This `smartJoinAttribute` must be populated for all documents in the collection,
-and must always contain a string value. The value of the `_key` attribute for each
-document must consist of the value of the `smartJoinAttribute`, a colon character
-and then some other user-defined key component.
-
-The setup thus becomes:
-
-```js
-arangosh> db._create("c1", {numberOfShards: 4, shardKeys: ["_key"]});
-arangosh> db._create("c2", {shardKeys: ["_key:"], smartJoinAttribute: "parent", distributeShardsLike: "c1"});
-arangosh> db.c2.ensureIndex({ type: "hash", fields: ["parent"] });
-arangosh> for (i = 0; i < 100; ++i) {
- db.c1.insert({ _key: "test" + i });
- db.c2.insert({ _key: "test" + i + ":" + "ownKey" + i, parent: "test" + i });
-}
-```
-
-Documents that do not set the `smartJoinAttribute` at all, or set it to a
-non-string value, are rejected on insert, update, or replace. Similarly, failure
-to prefix a document's `_key` attribute value with the value of the
-`smartJoinAttribute` also leads to the document being rejected:
-
-```js
-arangosh> db.c2.insert({ parent: 123 });
-JavaScript exception in file './js/client/modules/@arangodb/arangosh.js' at 99,7: ArangoError 4008: SmartJoin attribute not given or invalid
-
-arangosh> db.c2.insert({ _key: "123:test1", parent: "124" });
-JavaScript exception in file './js/client/modules/@arangodb/arangosh.js' at 99,7: ArangoError 4007: shard key value must be prefixed with the value of the SmartJoin attribute
-```
-
-The join can now be performed via the collection's `smartJoinAttribute`:
-
-```js
-arangosh> db._explain("FOR doc1 IN c1 FOR doc2 IN c2 FILTER doc1._key == doc2.parent RETURN doc1")
-
-Query String:
- FOR doc1 IN c1 FOR doc2 IN c2 FILTER doc1._key == doc2.parent RETURN doc1
-
-Execution plan:
- Id NodeType Site Est. Comment
- 1 SingletonNode DBS 1 * ROOT
- 3 EnumerateCollectionNode DBS 101 - FOR doc2 IN c2 /* full collection scan, 4 shard(s) */
- 7 IndexNode DBS 101 - FOR doc1 IN c1 /* primary index scan, 4 shard(s) */
- 10 RemoteNode COOR 101 - REMOTE
- 11 GatherNode COOR 101 - GATHER
- 6 ReturnNode COOR 101 - RETURN doc1
-```
-
-## Restricting SmartJoins to a single shard
-
-If a FILTER condition is used on one of the shard keys, the optimizer also tries
-to restrict the queries to just the required shards:
-
-```js
-arangosh> db._explain("FOR doc1 IN c1 FOR doc2 IN c2 FILTER doc1._key == 'test' && doc1._key == doc2.value RETURN doc1");
-
-Query String:
-FOR doc1 IN c1 FOR doc2 IN c2 FILTER doc1._key == 'test' && doc1._key == doc2.value
-RETURN doc1
-
-Execution plan:
-Id NodeType Site Est. Comment
- 1 SingletonNode DBS 1 * ROOT
- 8 IndexNode DBS 1 - FOR doc1 IN c1 /* primary index scan, shard: s2010246 */
- 7 IndexNode DBS 1 - FOR doc2 IN c2 /* primary index scan, scan only, shard: s2010253 */
-12 RemoteNode COOR 1 - REMOTE
-13 GatherNode COOR 1 - GATHER
- 6 ReturnNode COOR 1 - RETURN doc1
-```
-
-## Limitations
-
-The SmartJoins optimization is currently triggered only for data selection queries,
-but not for data-manipulation operations such as INSERT, UPDATE, REPLACE, REMOVE,
-or UPSERT, nor for traversals or subqueries.
-
-It is only applied when joining collections with an identical sharding setup.
-This requires all but one of the involved collections to be created with their
-`distributeShardsLike` attribute pointing to the remaining collection. All
-collections forming a View must be sharded in the same way,
-otherwise the View is not eligible.
-
-It is restricted to simple shard key attributes (such as `_key` or `productId`)
-and does not work with nested attributes (e.g. `name.first`). There should be
-exactly one shard key attribute defined for each collection.
-
-Finally, the SmartJoins optimization requires that the involved collections are
-joined on their shard key attributes (or `smartJoinAttribute`) using an equality
-comparison.
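-
-To illustrate the last point, the following sketch contrasts an eligible and a
-non-eligible join, reusing the `c1` and `c2` collections from the
-`distributeShardsLike` example above:
-
-```js
-// Eligible: equality comparison between the shard key attributes
-db._explain("FOR doc1 IN c1 FOR doc2 IN c2 FILTER doc1._key == doc2._key RETURN doc1");
-
-// Not eligible: a non-equality comparison rules out the SmartJoins optimization
-db._explain("FOR doc1 IN c1 FOR doc2 IN c2 FILTER doc1._key < doc2._key RETURN doc1");
-```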
-
-## SmartJoins with Views
-
-Views of the `arangosearch` type are eligible for SmartJoins, provided that
-their underlying collections are eligible too.
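-
-As a minimal sketch, assuming the `c1` and `c2` setup from above and an
-illustrative View name `v1`, a join through a View could be set up like this:
-
-```js
-arangosh> db._createView("v1", "arangosearch", { links: { c1: { includeAllFields: true } } });
-arangosh> db._explain("FOR doc1 IN v1 FOR doc2 IN c2 FILTER doc1._key == doc2._key RETURN doc1");
-```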
diff --git a/site/content/3.10/get-started/on-premises-installation.md b/site/content/3.10/get-started/on-premises-installation.md
deleted file mode 100644
index 5dda1d48f9..0000000000
--- a/site/content/3.10/get-started/on-premises-installation.md
+++ /dev/null
@@ -1,99 +0,0 @@
----
-title: Install ArangoDB on-premises # TODO: distinguish between local and on-premises server deployments?
-menuTitle: On-premises installation
-weight: 40
-description: >-
- How to download and install ArangoDB for using it locally or self-hosting it
- on your own hardware
----
-## Installation
-
-Head to [arangodb.com/download](https://www.arangodb.com/download/),
-select your operating system, and download ArangoDB. You may also follow
-the instructions on how to install with a package manager, if available.
-
-If you installed a binary package under Linux, the server is
-automatically started.
-
-If you installed ArangoDB under Windows as a service, the server is
-automatically started. Otherwise, run the `arangod.exe` located in the
-installation folder's `bin` directory. You may have to run it as administrator
-to grant it write permissions to `C:\Program Files`.
-
-For more in-depth information on how to install ArangoDB, as well as available
-startup parameters, installation in a cluster and so on, see
-[Installation](../operations/installation/_index.md) and
-[Deploy](../deploy/_index.md).
-
-
-
-## Securing the Installation
-
-The default installation contains one database `_system` and a user
-named `root`.
-
-Debian-based packages and the Windows installer ask for a
-password during the installation process. Red-Hat based packages
-set a random password. For all other installation packages, you need to
-execute the following:
-
-```
-shell> arango-secure-installation
-```
-
-This command asks for a root password and sets it.
-
-{{< warning >}}
-The password that is set for the root user during the installation of the ArangoDB
-package has **no effect** in case of deployments done with the _ArangoDB Starter_.
-See [Securing Starter Deployments](../operations/security/securing-starter-deployments.md) instead.
-{{< /warning >}}
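-
-To verify the password, you can connect to the server with _arangosh_. This is
-a minimal sketch that assumes the default endpoint:
-
-```
-shell> arangosh --server.endpoint tcp://127.0.0.1:8529 --server.username root
-```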
-
-
diff --git a/site/content/3.10/get-started/set-up-a-cloud-instance.md b/site/content/3.10/get-started/set-up-a-cloud-instance.md
deleted file mode 100644
index 1973721015..0000000000
--- a/site/content/3.10/get-started/set-up-a-cloud-instance.md
+++ /dev/null
@@ -1,165 +0,0 @@
----
-title: Use ArangoDB in the Cloud
-menuTitle: Set up a cloud instance
-weight: 35
-description: >-
- This quick start guide covers the basics from creating an ArangoGraph account to
- setting up and accessing your first ArangoGraph deployment
----
-For general information about the ArangoGraph Insights Platform, see
-[dashboard.arangodb.cloud](https://dashboard.arangodb.cloud/home?utm_source=docs&utm_medium=cluster_pages&utm_campaign=docs_traffic).
-
-For guides and reference documentation, see the [ArangoGraph](../arangograph/_index.md) documentation.
-
-## Prerequisites
-
-Please have the following information at hand for registration:
-
-- An **email address**, required for email verification.
-
-If you use a public email service provider (e.g. Hotmail), make sure to have
-the following information at hand as well:
-
-- A **mobile phone number**, required for SMS verification
-
-{{< info >}}
-One mobile phone number is associated with one account and cannot be
-used for multiple accounts.
-{{< /info >}}
-
-## How to Create a New Account
-
-1. Go to [dashboard.arangodb.cloud](https://dashboard.arangodb.cloud/home?utm_source=docs&utm_medium=cluster_pages&utm_campaign=docs_traffic).
-2. Click the __Start Free__ button or click the __Sign Up__ link in the top
- right corner.
-
- 
-
-3. Review the terms & conditions and privacy policy and click __I accept__.
-4. Select the type of sign up you would like to use (GitHub, Google, Microsoft,
-   or email address).
-   - For GitHub, Google, and Microsoft, please follow the on-screen instructions.
- - For the email address option, type your desired email address in the
- email field and type a strong password in the password field.
-
- {{< image src="../../images/arangograph-create-account.png" alt="ArangoGraph Sign up" style="max-height: 50vh">}}
-
- Click the __Sign up__ button. You will receive a verification email. In that
- mail, click the __Verify my email address__ link or button.
- It opens a page in the ArangoGraph Insights Platform that says __Welcome back!__
-5. Click the __Log in__ button to continue and login.
-6. If you signed up with an email address of a public email service provider (e.g. Hotmail),
- a form appears asking for your mobile phone number. Enter the country code
- and the number of the mobile phone you want to use for this account.
- For company email addresses, this step is skipped.
-7. If you had to enter your phone number in the previous step, a verification
- code is sent via SMS to the mobile number you entered. Enter the
- verification code.
-8. Fill out a form with your first and last name as well as the company
-   name, and then press the __Save__ button.
-9. An organization with a default project is now prepared for you.
- Once that is completed, you are redirected to the
- [ArangoGraph dashboard](https://dashboard.arangodb.cloud).
-
-## Get a Deployment up and Running
-
-1. The first card in the ArangoGraph Dashboard has a dropdown menu to select a cloud
- provider and region. Pick one and click __Create deployment__. You can also select
- your intended use-case.
-
- 
-
- You can also [create a deployment](../arangograph/deployments/_index.md#how-to-create-a-new-deployment)
- manually, if you want fine-grained configuration options.
-2. The new deployment is displayed in the list of deployments for the
- respective project (here: _Avocado_).
-
- 
-
- It takes a couple of minutes before the deployment can be used. The status
-   eventually changes from __Bootstrapping__ to __OK__, and you also
- receive an email when it is ready.
-
- {{< image src="../../images/arangograph-deployment-ready-email.png" alt="ArangoGraph Deployment Ready Email" style="max-height: 50vh">}}
-
-3. Click the name of the deployment (or the __Open deployment details__ link in
- the email) to view the deployment details.
-
- 
-
-4. Click the __Open database UI__ button to open the ArangoDB web interface.
-
-5. You can install example datasets and follow the accompanying guides to get
- started with ArangoDB and its query language. In the ArangoGraph dashboard, click
- the __Examples__ tab of the deployment. Click __Install__ for one of the
- examples to let ArangoGraph create a separate database and import the dataset.
- Click __Guide__ for instructions on how to access and run queries against
- this data.
-
- 
-
- 
-
-## General Hierarchy
-
-The ArangoGraph Insights Platform supports multi-tenant setups via organizations.
-You can create your own organization(s) and invite collaborators or join
-existing ones via invites. Your organization contains projects.
-Your projects hold your deployments.
-
-- [**Organizations**](../arangograph/organizations/_index.md)
- represent (commercial) entities such as companies.
- You can be part of multiple organizations with a single user account.
- - [**Projects**](../arangograph/projects.md)
- represent organizational units such as teams or applications.
- - [**Deployments**](../arangograph/deployments/_index.md)
- are the actual instances of ArangoDB clusters.
-
-When you sign up for ArangoGraph, an organization and a default project are
-automatically created for you. What is still missing is a deployment.
-
-## Take the Tour
-
-In the top right corner you find the __User toolbar__. Click the icon with the
-question mark to bring up the help menu and choose __Start tour__. This guided
-tour walks you through the creation of a deployment and shows you how to load
-example datasets and manage projects and deployments.
-
-
-
-Alternatively, follow the steps of the linked guides:
-- [Create a new project](../arangograph/projects.md#how-to-create-a-new-project) (optional)
-- [Create a new deployment](../arangograph/deployments/_index.md#how-to-create-a-new-deployment)
-- [Install a new certificate](../arangograph/security-and-access-control/x-509-certificates.md) (optional)
-- [Access your deployment](../arangograph/deployments/_index.md#how-to-access-your-deployment)
-- [Delete your deployment](../arangograph/deployments/_index.md#how-to-delete-a-deployment)
-
-## Free-to-Try vs. Paid
-
-The ArangoGraph Insights Platform comes with a free-to-try tier that lets you test
-the ArangoDB Cloud for free for 14 days. It includes one project and one small
-deployment of 4GB, local backups, and one notebook for learning and data science.
-After the trial period, your deployment is automatically deleted.
-
-You can unlock all features in ArangoGraph at any time by adding
-your billing details and at least one payment method. See:
-- [ArangoGraph Packages](../arangograph/organizations/_index.md#arangograph-packages)
-- [How to add billing details to organizations](../arangograph/organizations/billing.md#how-to-add-billing-details)
-- [How to add a payment method to an organization](../arangograph/organizations/billing.md#how-to-add-a-payment-method)
-
-## Managed Cloud Service vs. On-premises Comparison: Key Differences
-
-The ArangoGraph Insights Platform aims to make all features of the ArangoDB
-[Enterprise Edition](../about-arangodb/features/enterprise-edition.md) available to you, but
-there are a few key differences:
-
-- Encryption (both at rest & network traffic) is always on and cannot be
- disabled for security reasons.
-- Foxx services are not allowed to call out to the internet by default for
- security reasons, but can be enabled on request.
- Incoming calls to Foxx services are fully supported.
-- LDAP authentication is not supported.
-- Datacenter-to-Datacenter Replication (DC2DC) is not available in a
- managed form.
-
-For more information, see the [comparison between on-premises editions and the managed cloud service](https://www.arangodb.com/subscriptions/).
diff --git a/site/content/3.10/graphs/_index.md b/site/content/3.10/graphs/_index.md
deleted file mode 100644
index b22e55f098..0000000000
--- a/site/content/3.10/graphs/_index.md
+++ /dev/null
@@ -1,431 +0,0 @@
----
-title: Graphs
-menuTitle: Graphs
-weight: 75
-description: >-
- Graphs let you represent things and the relationships between them using
- vertices and edges, to naturally model knowledge, social networks, cash flows,
- supply chains, and other information webs, and to extract valuable insights by
- analyzing this connected data
-aliases:
- - graphs/first-steps
----
-Graphs are information networks composed of **nodes** and **relations**. Nodes
-can represent objects, entities, abstract concepts, or ideas. Relations between
-nodes can represent physical and social connections, temporal and causal
-relationships, flows of information, energy, and material, interactions and
-transactions, dependency and hierarchy, as well as similarity and relatedness of
-any kind.
-
-
-
-For example, you can represent people by nodes and their friendships by
-relations. This lets you form a graph that is a social network in this case.
-
-
-
-The specific terms to refer to nodes and relations in a graph vary depending
-on the field or context, but they are conceptually the same. In computer science
-and mathematics, the terms **vertices** (singular: vertex) and **edges** are
-commonly used to refer to nodes and relations, respectively. In information
-science and data analysis, they are referred to as _entities_ and _connections_.
-In social sciences, they are often called _actors_ and _ties_ or _links_.
-They may also be called _points_ and _arcs_.
-
-Using graphs with vertices to represent things and edges to define how they
-relate to one another is a very expressive data model. It lets you represent
-a wide variety of information in a compact and intuitive way. It lets you model
-complex relationships and interactions of basically everything.
-
-
-
-Graphs are commonly directed (_digraphs_), which means that each edge goes from
-one vertex to another vertex in a specific direction. This lets you model
-directional relationships, such as cause and effect or the flow of material,
-energy, or information. In undirected graphs, edges don't have a direction and
-the relationship between two vertices is considered to be the same in both
-directions. For example, a friendship is a symmetrical relationship. If _Mary_
-is a friend of _John_, then _John_ is equally a friend of _Mary_. On the other
-hand, _Mary_ may subscribe to what _John_ posts online, but this does not
-automatically make _John_ a subscriber of _Mary_'s posts. It is an asymmetrical
-relationship in graph terms. These two types of graphs have different properties
-and different algorithms exist to analyze the data.
-
-{{< info >}}
-New to graphs? [Take our free graph course for freshers](https://www.arangodb.com/arangodb-graph-course/)
-and get from zero knowledge to advanced query techniques.
-{{< /info >}}
-
-## Graph model
-
-Graph database systems like ArangoDB can store graphs and provide means to query
-the connected data.
-
-ArangoDB's graph model is that of a **property graph**. Every record, whether
-vertex or edge, can have an arbitrary number of properties. Each document is a
-fully-fledged JSON object and has a unique identifier.
-This is different to the RDF graph model, where information is broken down into
-triples of a subject, a predicate, and an object and where each triple is stored
-separately, without an identifier for each statement.
-
-Furthermore, ArangoDB's graph model can be classified as a **labeled** property
-graph because you can group related edges using edge collections, with the
-collection name being the label, but you can also use a property to assign one
-or more types to an edge. You can also organize vertices in different
-collections based on the types of entities.
-
-Edges can only be stored in **edge collections**. Vertices are stored in
-**document collections** which are also referred to as **vertex collections**
-in the context of graphs. You can technically also use edges as vertices but
-the usefulness is limited.
-
-Edges in ArangoDB are always directed. Every edge document has special `_from`
-and `_to` attributes to reference one other document in each of the two
-attributes.
-
-Vertices are referenced by their document identifiers. For example,
-a friendship edge that connects _Mary_ with _John_ could look like
-`{"_from": "Person/Mary", "_to": "Person/John", "_id": "isFriendOf/1234"}`.
-Using this directed graph model means that relations you create with edges are
-not reciprocal but you may create edges for the reverse direction (another edge
-from _John_ to _Mary_), or you can utilize ArangoDB's ability to follow edges
-in the opposite direction (**inbound** instead of **outbound**) or ignore the
-direction and follow them in both directions (**any**) as if it were an
-undirected graph.
-
-You can query graphs with ArangoDB's query language, see
-[Graphs in AQL](../aql/graphs/_index.md).
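-
-As a small sketch based on the `Person` and `isFriendOf` example above, a
-traversal query could look like this:
-
-```js
-// Follow up to two outbound friendship edges, starting at Mary
-db._query(`
-  FOR v IN 1..2 OUTBOUND 'Person/Mary' isFriendOf
-    RETURN v
-`);
-```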
-
-## Comparison to relational database systems
-
-In relational database management systems (RDBMS), you have the construct of
-a relation table to store *m:n* relations between two data tables.
-An edge collection is somewhat similar to these relation tables.
-Vertex collections resemble the data tables with the objects to connect.
-
-While simple graph queries with a fixed number of hops via the relation table
-may be doable in RDBMSes with SQL using several nested joins, graph databases
-can handle an arbitrary and variable number of these hops over edge collections
-which is called **traversal**.
-
-Moreover, edges in one edge collection may point to vertices in different
-vertex collections. It is common to have attributes attached to edges, like a
-*label* naming the type of connection.
-
-Edges have a direction, with their relations stored in the special `_from` and
-`_to` attributes pointing *from* one document *to* another document.
-In queries, you can define in which directions the edge relations may be followed
-(`OUTBOUND`: `_from` → `_to`, `INBOUND`: `_from` ← `_to`, `ANY`: `_from` ↔ `_to`).
-
-## Supported graph algorithms
-
-- [Traversal](../aql/graphs/traversals.md)
- - following edges in outbound, inbound, or any direction
- - variable traversal depth between a defined minimum and maximum
- - breadth-first, depth-first, and weighted traversals
- - optionally with prune conditions
-- [Shortest Path](../aql/graphs/shortest-path.md)
-- [All Shortest Paths](../aql/graphs/all-shortest-paths.md)
-- [k Shortest Paths](../aql/graphs/k-shortest-paths.md)
-- [k Paths](../aql/graphs/k-paths.md)
-- [Distributed Iterative Graph Processing (Pregel)](../data-science/pregel/_index.md)
- - Page Rank
- - Seeded Page Rank
- - Single-Source Shortest Path (SSSP)
- - Connected Components
- - Weakly Connected Components (WCC)
- - Strongly Connected Components (SCC)
- - Hyperlink-Induced Topic Search (HITS)
- - Effective Closeness Vertex Centrality
- - LineRank Vertex Centrality
- - Label Propagation Community Detection
- - Speaker-Listener Label Propagation (SLPA) Community Detection
-
-## Managed and unmanaged graphs
-
-You can use vertex and edge collections directly, using them as an unmanaged
-**anonymous graph**. In queries, you need to specify the involved collections
-for graph operations like traversals.
-
-You can also create a managed **named graph** to define a set of vertex and
-edge collections along with the allowed relations. In queries, you only need to
-specify the graph instead of the individual vertex and edge collections. There
-are additional integrity checks when using the named graph interfaces.
-
-Named graphs ensure graph integrity, both when inserting or removing edges or
-vertices. You won't encounter dangling edges, even if you use the same vertex
-collection in several named graphs. This involves more operations inside the
-database system, which come at a cost. Therefore, anonymous graphs may be faster
-in many operations. You can choose between no integrity guarantees, additional
-effort to implement consistency checks in your application code, and server-side
-integrity checks at a performance cost.
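-
-As a hedged sketch, using the `social` example graph (see
-[Example graphs](example-graphs.md)) with its `relation` edge collection and an
-assumed start vertex `female/alice`, the two styles compare like this:
-
-```js
-// Named graph: reference the graph by name, not the collections
-db._query("FOR v IN 1..2 OUTBOUND 'female/alice' GRAPH 'social' RETURN v");
-
-// Anonymous graph: list the edge collection(s) explicitly
-db._query("FOR v IN 1..2 OUTBOUND 'female/alice' relation RETURN v");
-```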
-
-### Named Graphs
-
-Named graphs are completely managed by ArangoDB, ensuring data consistency,
-provided that the named graph interfaces are used instead of the raw document
-manipulation interfaces.
-
-The following types of named graphs exist:
-- [General Graphs](general-graphs/_index.md)
-- [SmartGraphs](smartgraphs/_index.md)
-- [EnterpriseGraphs](enterprisegraphs/_index.md)
-- [SatelliteGraphs](satellitegraphs/_index.md)
-
-Selecting the optimal type of named graph in ArangoDB can help you achieve
-the best performance and scalability for your data-intensive applications.
-
-Which collections are used within a named graph is defined via
-**edge definitions**. They describe which edge collections connect which
-vertex collections. This is defined separately for the *from* and the *to*
-per edge collection. A named graph can have one or more edge definitions.
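-
-As a minimal sketch in _arangosh_, assuming a `knows` edge collection that
-connects vertices within a `persons` collection, an edge definition and a named
-graph could be created like this:
-
-```js
-var graph_module = require("@arangodb/general-graph");
-// Edges in "knows" may go from "persons" to "persons"
-var rel = graph_module._relation("knows", ["persons"], ["persons"]);
-var graph = graph_module._create("social_network", [rel]);
-```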
-
-The underlying collections of named graphs are still accessible using the
-standard collection and document APIs. However, the graph modules add an
-additional layer on top of these collections to provide integrity guarantees by
-doing the following:
-
-- Execute all modifications transactionally
-- Check that vertices referenced by edges in the `_from` and `_to` attributes
-  actually exist
-- Only allow edges to reference vertices in the collections specified by the
-  graph definition
-- Delete edges when a connected vertex is deleted to avoid dangling edges
-- Prohibit using an edge collection in an edge definition with a different
-  set of *from* and *to* vertex collections than an existing edge definition
-  of any graph
-- Depending on the named graph type, there can be additional restrictions to
- ensure a well-formed graph
-
-Your edge collections will only contain valid edges and you will never have
-loose ends. These guarantees are lost if you access the collections in any other
-way than the graph modules. For example, if you delete documents from your
-vertex collections directly, the edges pointing to them remain in place.
-Note that existing inconsistencies in your data are not corrected when you create
-a named graph. Therefore, make sure you start with sound data as otherwise there
-could be dangling edges after all. The graph modules only guarantee not to
-introduce new inconsistencies.
-
-You can create and manage named graphs in the following ways:
-- With the [web interface](../components/web-interface/graphs.md)
- in the **GRAPHS** section
-- In _arangosh_ using the respective graph-related modules of the
- JavaScript API (see the above links of the named graph types)
-- Using the [Gharial HTTP API](../develop/http-api/graphs/named-graphs.md)
-
-#### When to use General Graphs
-
-The General Graph is the basic graph type in ArangoDB, suitable for small-scale
-graph use cases. Data in this type is randomly distributed across all configured
-machines, making it easy to set up. However, this approach may result in
-suboptimal query performance due to random data distribution.
-
-{{< tip >}}
-General graphs are the easiest way to get started, no special configuration required.
-{{< /tip >}}
-
-
-
-#### When to use SmartGraphs
-
-SmartGraphs further optimize data distribution by allowing you to define a
-property called `smartGraphAttribute`. This property leverages your application's
-knowledge about the graph's interconnected communities to improve data
-organization and query performance.
-
-{{< tip >}}
-For the best query performance, especially in highly interconnected graph
-scenarios, use SmartGraphs. Organize your data efficiently using the
-`smartGraphAttribute`.
-{{< /tip >}}
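-
-A minimal creation sketch in _arangosh_, where the graph and collection names
-and the `region` attribute are purely illustrative:
-
-```js
-var smart_graph_module = require("@arangodb/smart-graph");
-var rel = smart_graph_module._relation("follows", ["accounts"], ["accounts"]);
-// Pick a smartGraphAttribute that groups well-connected vertices together
-var graph = smart_graph_module._create("socialSmart", [rel], [],
-  { smartGraphAttribute: "region", numberOfShards: 9 });
-```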
-
-
-
-#### When to use EnterpriseGraphs
-
-EnterpriseGraphs are designed for large-scale graph use cases in enterprise
-environments. While data is also randomly sharded, this graph type ensures that
-all edges adjacent to a vertex are co-located on the same server. This
-optimization significantly improves query performance by reducing network hops.
-
-{{< tip >}}
-If you need improved query execution without manual data distribution, consider
-using EnterpriseGraphs.
-{{< /tip >}}
-
-
-
-#### When to use SatelliteGraphs
-
-SatelliteGraphs replicate one or more graphs to all machines within a cluster
-so queries can be executed locally. All vertices and edges are available on
-every node for maximum data locality, therefore no network hops are required
-to traverse the graph.
-
-{{< tip >}}
-When using SatelliteGraphs, expect slower write performance because the data is
-replicated across DB-Servers. For a more efficient option that doesn't replicate
-all graph data to every server in your cluster, consider SmartGraphs.
-{{< /tip >}}
-
-### Anonymous graphs
-
-An anonymous graph is the graph that your data implicitly defines by edges that
-reference vertices and that you directly use by defining the vertex and edge
-collections for graph operations such as traversals and path finding algorithms
-in queries. You can also work with [edges](working-with-edges.md) directly.
-
-Anonymous graphs don't have edge definitions describing which vertex collection
-is connected by which edge collection. The graph model has to be maintained by
-the client-side code. This gives you more freedom than the strict named graphs,
-such as the ability to let an edge reference documents from any collection in
-the current database.
-
-{{% comment %}}
-## Graph use cases
-
-many problem domains and solve them with semantic queries and graph analytics.
-use cases with rough data model
-
-information extraction (high-level)
-{{% /comment %}}
-
-## Model data with graphs
-
-Graphs can have different structures, called **topologies**. The topology
-describes how the vertices and edges are arranged by classifying the pattern of
-connections. Some relevant classes are:
-
-- Cyclic: a graph that contains at least one path that starts and ends at the
- same node. An edge can also originate from and point to the same vertex.
-- Acyclic: a graph that contains no cycles
-- Tree: a directed acyclic graph (DAG) with exactly one path
-  between any two vertices in the graph
-- Dense: a graph with edges between most pairs of vertices
-- Sparse: a graph where only few pairs of vertices are connected by edges
-
-The topology for your graphs will vary depending on your data and requirements
-but you always have a degree of freedom when modeling the data.
-
-### What information should be stored in edges and what in vertices
-
-The main objects in your data model, such as users, groups, or articles, are
-usually considered to be vertices. For each type of object, a document collection
-should store the individual entities. Entities can be connected by edges to
-express and classify relations between vertices. It often makes sense to have
-an edge collection per relation type.
-
-ArangoDB does not require you to store your data in graph structures with edges
-and vertices. You can also decide to embed attributes such as which groups a
-user is part of or store `_id`s of documents in another document instead of
-connecting the documents with edges. It can be a meaningful performance
-optimization for *1:n* relationships if your data is not focused on relations
-and you don't need graph traversal with varying depth. It usually means
-to introduce some redundancy and possibly inconsistencies if you embed data, but
-it can be an acceptable tradeoff.
-
-**Vertices**:
-Assume you have two vertex collections, `Users` and `Groups`. Documents in the
-`Groups` collection contain the attributes of the group, i.e. when it was founded,
-its subject, and so on. Documents in the `Users` collection contain the data
-specific to a user, like name, birthday, hobbies, et cetera.
-
-**Edges**:
-You can use an edge collection to store relations between users and groups.
-Since multiple users may be in an arbitrary number of groups, this is an **m:n**
-relation. The edge collection can be called `UsersInGroups` to store edges,
-with `_from` pointing to `Users/John` and `_to` pointing to
-`Groups/BowlingGroupHappyPin`. This makes the user **John** a member of the group
-**Bowling Group Happy Pin**. You can store additional properties in document
-attributes to qualify the relation further, like the permissions of **John** in
-this group, the date when John joined the group, and so on.
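-
-A hedged sketch of this setup in _arangosh_; the property values are
-illustrative:
-
-```js
-db._create("Users");
-db._create("Groups");
-db._createEdgeCollection("UsersInGroups");
-
-db.Users.insert({ _key: "John", name: "John" });
-db.Groups.insert({ _key: "BowlingGroupHappyPin", founded: 2012 });
-
-// The edge carries extra attributes that qualify the relation
-db.UsersInGroups.insert({
-  _from: "Users/John",
-  _to: "Groups/BowlingGroupHappyPin",
-  joined: "2023-01-01",
-  permissions: "read"
-});
-```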
-
-
-
-As a rule of thumb, if you use documents and their attributes in a sentence,
-nouns would typically be vertices, and the verbs the edges.
-You can see this in the [Knows Graph](example-graphs.md#knows-graph):
-
-    Alice knows Bob, who in turn knows Charlie.
-
-The advantage of using graphs is that you are not limited to a fixed number of
-**m:n** relations for a document, but you can have an arbitrary number of
-relations. Edges can be traversed in both directions, so it is easy to determine
-all groups a user is in, but also to find out which members a certain group has.
-You can also interconnect users to create a social network.
-
-Using the graph data model, dealing with data that has lots of relations stays
-manageable and can be queried in very flexible ways, whereas it would be hard
-to handle in a relational database system.
-
-### Multiple edge collections vs. `FILTER`s on edge document attributes
-
-If you want to only traverse edges of a specific type, there are two ways to
-achieve this.
-
-The first is to use an attribute in the edge document, e.g. `type`, where you
-specify a differentiator for the edge, like `"friends"`, `"family"`, `"married"`,
-or `"workmates"`, so that you can later use `FILTER e.type = "friends"` in
-queries if you only want to follow the friend edges.
-
-Another way, which may be more efficient in some cases, is to use different
-edge collections for different types of edges. You could have `friend_edges`,
-`family_edges`, `married_edges`, and `workmate_edges` as edge collections.
-You can then limit the query to a subset of the edge and vertex collections.
-To only follow friend edges, you would specify `friend_edges` as the sole edge collection.
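-
-Sketched side by side, with an illustrative `relations` edge collection for the
-first variant and the start vertex `Users/John` from the earlier example:
-
-```js
-// Variant 1: one edge collection, filtering on a type attribute at query time
-db._query(`
-  FOR v, e IN 1..2 OUTBOUND 'Users/John' relations
-    FILTER e.type == "friends"
-    RETURN v
-`);
-
-// Variant 2: a dedicated edge collection restricts the traversal up front
-db._query(`
-  FOR v IN 1..2 OUTBOUND 'Users/John' friend_edges
-    RETURN v
-`);
-```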
-
-Both approaches have advantages and disadvantages. `FILTER` operations on edge
-attributes are more costly at query time because a condition needs to be checked
-for every traversed edge, which may become a bottleneck. If the set of edges is
-restricted by only using a certain edge collection, then other types of edges
-are not traversed in the first place and there is no need to check for a `type`
-attribute with `FILTER`. On the other hand, using a `type` attribute allows you
-to update edges more easily and you can even assign multiple types to a single
-edge.
-
-The multiple edge collections approach is limited by the number of collections
-that can be used in one query, see [Known limitations for AQL queries](../aql/fundamentals/limitations.md).
-Every collection used in a query requires some resources inside of ArangoDB and
-the number is therefore limited to cap the resource requirements. You may also
-have constraints on other edge attributes, such as a persistent index with a
-unique constraint, which requires the documents to be in a single collection for
-the uniqueness guarantee, and it may thus not be possible to store the different
-types of edges in multiple edge collections.
-
-In conclusion, if your edges have about a dozen different types, you can choose
-the approach with multiple edge collections. Otherwise, the `FILTER` approach is
-preferred. You can still use `FILTER` operations on edges as needed if you choose
-the former approach. It merely removes the need of a `FILTER` on the `type`,
-everything else can stay the same.
-
-{{% comment %}}
-embedded vs. joins vs. graphs
-
-acl/rbac
-dependencies
-product hierarchies
-...
-{{% /comment %}}
-
-### Example graphs
-
-For example data that you can use for learning graphs, see
-[Example graphs](example-graphs.md).
-
-{{% comment %}}
-## Query graphs
-
-Explain traversal, pattern matching, shortest paths, pregel
-direction, depth, order, conditions, weights?
-combine with geo, search, ...
-{{% /comment %}}
-
-## Back up and restore graph
-
-For backups of your graph data, you can use [_arangodump_](../components/tools/arangodump/_index.md)
-to create the backup, and [_arangorestore_](../components/tools/arangorestore/_index.md) to
-restore a backup. However, note the following:
-
-- You need to include the `_graphs` system collection if you want to back up
- named graphs as the graph definitions are stored in this collection.
-- You need to back up all vertex and edge collections your graph consists of.
-  A partial dump/restore may not work; see the command sketch after this list.
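-
-A minimal command sketch, assuming the `knows` example graph with its `persons`
-and `knows` collections:
-
-```sh
-# Dump the graph's collections plus the _graphs system collection
-arangodump --output-directory dump --include-system-collections true \
-  --collection _graphs --collection persons --collection knows
-
-# Restore the dump into the target deployment
-arangorestore --input-directory dump --include-system-collections true
-```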
diff --git a/site/content/3.10/graphs/enterprisegraphs/_index.md b/site/content/3.10/graphs/enterprisegraphs/_index.md
deleted file mode 100644
index 27db0dfcee..0000000000
--- a/site/content/3.10/graphs/enterprisegraphs/_index.md
+++ /dev/null
@@ -1,56 +0,0 @@
----
-title: EnterpriseGraphs
-menuTitle: EnterpriseGraphs
-weight: 100
-description: >-
- EnterpriseGraphs enable you to manage graphs at scale with automated sharding
- key selection
----
-{{< tag "ArangoDB Enterprise Edition" "ArangoGraph" >}}
-
-This chapter describes the `enterprise-graph` module, a specialized version of
-[SmartGraphs](../smartgraphs/_index.md).
-It gives a significant performance benefit for graphs sharded
-in an ArangoDB cluster, substantially reducing network hops.
-
-In terms of querying there is no difference between SmartGraphs and EnterpriseGraphs.
-For graph querying, refer to the [AQL Graph Operations](../../aql/graphs/_index.md)
-and [General Graph Functions](../general-graphs/functions.md) sections.
-
-Creating and modifying the underlying collections of an EnterpriseGraph is
-also similar to SmartGraphs. For a detailed API reference, please refer
-to [Enterprise Graphs Management](management.md).
-
-Coming from the Community Edition?
-See [how to migrate](getting-started.md#migrating-from-community-edition)
-from a `general-graph` to an `enterprise-graph`.
-
-## How EnterpriseGraphs work
-
-The creation and usage of EnterpriseGraphs are similar to [SmartGraphs](../smartgraphs/getting-started.md).
-However, the latter requires the selection of an appropriate sharding key.
-This is known as the `smartGraphAttribute`, a value that is stored in every vertex,
-which ensures data co-location of all vertices sharing this attribute and their
-immediate edges.
-
-EnterpriseGraphs come with a concept of "random sharding", meaning that the
-sharding key is randomly selected while ensuring that all vertices with the
-same sharding key and their adjacent edges are co-located on the same servers,
-whenever possible. This approach provides significant advantages as it
-minimizes the impact of having suboptimal sharding keys defined when creating
-the graph.
-
-This means that, when using EnterpriseGraphs, the `smartGraphAttribute` is
-**not** required. As a consequence, you **cannot** define `_key` values on
-edges.
-
-## EnterpriseGraphs using SatelliteCollections
-
-EnterpriseGraphs are capable of using [SatelliteCollections](../../develop/satellitecollections.md)
-within their graph definition. Therefore, edge definitions between
-EnterpriseCollections and SatelliteCollections can be created. As SatelliteCollections
-(and the edge collections between EnterpriseGraph collections and SatelliteCollections)
-are globally replicated to each participating DB-Server, (weighted) graph
-traversals and (k-)shortest path(s) queries can partially be executed locally on
-each DB-Server. This means a larger part of the query can be executed fully
-locally whenever data from the SatelliteCollections is required.
diff --git a/site/content/3.10/graphs/enterprisegraphs/getting-started.md b/site/content/3.10/graphs/enterprisegraphs/getting-started.md
deleted file mode 100644
index 1997e74ea5..0000000000
--- a/site/content/3.10/graphs/enterprisegraphs/getting-started.md
+++ /dev/null
@@ -1,314 +0,0 @@
----
-title: Getting Started with EnterpriseGraphs
-menuTitle: Getting Started
-weight: 5
-description: >-
- This chapter walks you through the first steps you need to follow to create an EnterpriseGraph
----
-EnterpriseGraphs **cannot use existing collections**. When switching to
-EnterpriseGraph from an existing dataset, you have to import the data into a
-fresh EnterpriseGraph.
-
-When creating an EnterpriseGraph, you cannot have a different number of shards per
-collection. To preserve the sharding pattern, the `_from` and `_to` attributes
-of the edges cannot be modified.
-You can define any `_key` value on vertices, including existing ones.
-
-## Migrating from Community Edition
-
-When migrating from the Community Edition to the Enterprise Edition, you can
-bring data from existing collections using the command-line tools `arangoexport`
-and `arangoimport`.
-
-`arangoexport` allows you to export collections to formats like `JSON`, `JSONL`, or `CSV`.
-For this particular case, it is recommended to export data to `JSONL` format.
-Once the data is exported, you need to exclude
-the `_key` values from edges. The `enterprise-graph` module does not allow
-custom `_key` values on edges. This is necessary for the initial data replication
-when using `arangoimport` because these values are immutable.
-
-### Migration by Example
-
-Let us assume you have a `general-graph` in ArangoDB
-that you want to migrate over to be an `enterprise-graph` to benefit from
-the sharding strategy. In this example, the graph has only two collections:
-`old_vertices`, which is a document collection, and `old_edges`, which is the
-corresponding edge collection.
-
-**Export `general-graph` data**
-
-The first step is to export the raw data of those
-collections using `arangoexport`:
-
-```sh
-arangoexport --type jsonl --collection old_vertices --output-directory docOutput --overwrite true
-arangoexport --type jsonl --collection old_edges --output-directory docOutput --overwrite true
-```
-
-Note that the `JSONL` format type is used in the migration process
-as it is more flexible and suitable for larger datasets.
-The `JSON` type is limited in the number of documents, as it cannot be parsed line
-by line. The `CSV` and `TSV` formats are also fine,
-but require you to define the list of attributes to export. `JSONL` exports data
-as is, and due to its line-based layout, can be processed line by line and
-therefore has no artificial restrictions on the data.
-
-After this step, two files are generated in the `docOutput` folder, which
-should look like this:
-
-1. `docOutput/old_vertices.jsonl`:
- ```
- {"_key":"Alice","_id":"old_vertices/Alice","_rev":"_edwXFGm---","attribute1":"value1"}
- {"_key":"Bob","_id":"old_vertices/Bob","_rev":"_edwXFGm--_","attribute1":"value2"}
- {"_key":"Charly","_id":"old_vertices/Charly","_rev":"_edwXFGm--B","attribute1":"value3"}
- ```
-
-2. `docOutput/old_edges.jsonl`:
- ```
- {"_key":"121","_id":"old_edges/121","_from":"old_vertices/Bob","_to":"old_vertices/Charly","_rev":"_edwW20----","attribute2":"value2"}
- {"_key":"122","_id":"old_edges/122","_from":"old_vertices/Charly","_to":"old_vertices/Alice","_rev":"_edwW20G---","attribute2":"value3"}
- {"_key":"120","_id":"old_edges/120","_from":"old_vertices/Alice","_to":"old_vertices/Bob","_rev":"_edwW20C---","attribute2":"value1"}
- ```
-
-**Create new Graph**
-
-The next step is to set up an empty EnterpriseGraph and configure it
-according to your preferences.
-
-{{< info >}}
-You are free to change `numberOfShards`, `replicationFactor`, or even collection names
-at this point.
-{{< /info >}}
-
-Please follow the instructions on how to create an EnterpriseGraph
-[using the Web Interface](#create-an-enterprisegraph-using-the-web-interface)
-or [using _arangosh_](#create-an-enterprisegraph-using-arangosh).
-
-**Import data while keeping collection names**
-
-This example describes a 1:1 migration while keeping the original graph intact
-and just changing the sharding strategy.
-
-The empty collections that are now in the target ArangoDB cluster
-have to be filled with data.
-All vertices can be imported without any change:
-
-```sh
-arangoimport --collection old_vertices --file docOutput/old_vertices.jsonl
-```
-
-On the edges, EnterpriseGraphs disallow storing the `_key` value, so this attribute
-needs to be removed on import:
-
-```sh
-arangoimport --collection old_edges --file docOutput/old_edges.jsonl --remove-attribute "_key"
-```
-
-After this step, the graph has been migrated.
-
-**Import data while changing collection names**
-
-This example describes a scenario in which the collection names have changed,
-assuming that you have renamed `old_vertices` to `vertices`.
-
-For the vertex data this change is not relevant, the `_id` values are adjusted
-automatically, so you can import the data again, and just target the new
-collection name:
-
-```sh
-arangoimport --collection vertices --file docOutput/old_vertices.jsonl
-```
-
-For the edges you need to apply more changes, as they need to be rewired.
-To change the vertex collection names, you need to set
-`--overwrite-collection-prefix` to `true`.
-
-To migrate the graph and also change to new collection names, run the following
-command:
-
-```sh
-arangoimport --collection edges --file docOutput/old_edges.jsonl --remove-attribute "_key" --from-collection-prefix "vertices" --to-collection-prefix "vertices" --overwrite-collection-prefix true
-```
-
-Note that:
-- You have to remove the `_key` value as it is disallowed for EnterpriseGraphs.
-- Because you have changed the name of the `_from` collection, you need
- to provide a `--from-collection-prefix`. The same is true for the `_to` collection,
- so you also need to provide a `--to-collection-prefix`.
-- To make the actual name change to the vertex collection, you need to
- allow `--overwrite-collection-prefix`. If this option is not enabled, only values
- without a collection name prefix are changed. This is helpful if your data is not
- exported by ArangoDB in the first place.
-
-This mechanism does not provide the option to selectively replace
-collection names. It only allows replacing all collection names in `_from`
-and `_to`, respectively.
-This means that, even if you use different collections in `_from` and `_to`,
-their names are modified based on the prefix that is specified.
-
-Consider the following example where `_to` points to a vertex in a different collection,
-`users_vertices/Bob`. When using `--to-collection-prefix "vertices"` to rename
-the collections, all collection names on the `_to` side are renamed to
-`vertices` as this transformation solely allows for the replacement of all
-collection names within the edge attribute.
-
-```json
-{"_key":"121", "_from":"old_vertices/Bob", "_to":"old_vertices/Charly", ... }
-{"_key":"122", "_from":"old_vertices/Charly", "_to":"old_vertices/Alice", ... }
-{"_key":"120", "_from":"old_vertices/Alice", "_to":"users_vertices/Bob", ... }
-```
-
-## Collections in EnterpriseGraphs
-
-In contrast to General Graphs, you **cannot use existing collections**.
-When switching from an existing dataset, you have to import the data into a
-fresh EnterpriseGraph.
-
-The creation of an EnterpriseGraph graph requires the name of the graph and a
-definition of its edges. All collections used within the creation process are
-automatically created by the `enterprise-graph` module. Make sure to only use
-non-existent collection names for both vertices and edges.
-
-## Create an EnterpriseGraph using the web interface
-
-The web interface (also called Web UI) allows you to easily create and manage
-EnterpriseGraphs. To get started, follow the steps outlined below.
-
-1. In the web interface, navigate to the **GRAPHS** section.
-2. To add a new graph, click **Add Graph**.
-3. In the **Create Graph** dialog that appears, select the
- **EnterpriseGraph** tab.
-4. Fill in all required fields:
- - For **Name**, enter a name for the EnterpriseGraph.
- - For **Shards**, enter the number of parts to split the graph into.
- - Optional: For **Replication factor**, enter the total number of
- desired copies of the data in the cluster.
- - Optional: For **Write concern**, enter the total number of copies
- of the data in the cluster required for each write operation.
- - Optional: For **SatelliteCollections**, insert vertex collections
- that are being used in your edge definitions. These collections are
- then created as satellites, and thus replicated to all DB-Servers.
- - For **Edge definition**, insert a single non-existent name to define
- the relation of the graph. This automatically creates a new edge
- collection, which is displayed in the **COLLECTIONS** section of the
- left sidebar menu.
- {{< tip >}}
- To define multiple relations, press the **Add relation** button.
- To remove a relation, press the **Remove relation** button.
- {{< /tip >}}
- - For **fromCollections**, insert a list of vertex collections
- that contain the start vertices of the relation.
- - For **toCollections**, insert a list of vertex collections that
- contain the end vertices of the relation.
- {{< tip >}}
- Insert only non-existent collection names. Collections are automatically
- created during the graph setup and are displayed in the
- **Collections** tab of the left sidebar menu.
- {{< /tip >}}
- - For **Orphan collections**, insert a list of vertex collections
- that are part of the graph but not used in any edge definition.
-5. Click **Create**.
-6. Click the card of the newly created graph and use the functions of the Graph
-   Viewer to visually interact with the graph and manage the graph data.
-
-
-
-## Create an EnterpriseGraph using *arangosh*
-
-As with SmartGraphs, the option `isSmart: true` is required, but unlike them,
-the `smartGraphAttribute` is forbidden.
-
-```js
----
-name: enterpriseGraphCreateGraphHowTo1
-description: ''
-type: cluster
----
-var graph_module = require("@arangodb/enterprise-graph");
-var graph = graph_module._create("myGraph", [], [], {isSmart: true, numberOfShards: 9});
-graph;
-~graph_module._drop("myGraph");
-```
-
-### Add vertex collections
-
-The **collections must not exist** when creating the EnterpriseGraph. The EnterpriseGraph
-module creates them for you automatically to set up the sharding for all
-these collections correctly. If you create collections via the EnterpriseGraph
-module and remove them from the graph definition, then you may re-add them
-later without trouble, as they already have the correct sharding.
-
-```js
----
-name: enterpriseGraphCreateGraphHowTo2
-description: ''
-type: cluster
----
-~var graph_module = require("@arangodb/enterprise-graph");
-~var graph = graph_module._create("myGraph", [], [], {isSmart: true, numberOfShards: 9});
-graph._addVertexCollection("shop");
-graph._addVertexCollection("customer");
-graph._addVertexCollection("pet");
-graph = graph_module._graph("myGraph");
-~graph_module._drop("myGraph", true);
-```
-
-### Define relations on the Graph
-
-Adding edge collections works the same as with General Graphs, but again, the
-collections are created by the EnterpriseGraph module to set up sharding correctly
-so they must not exist when creating the EnterpriseGraph (unless they have the
-correct sharding already).
-
-```js
----
-name: enterpriseGraphCreateGraphHowTo3
-description: ''
-type: cluster
----
-~var graph_module = require("@arangodb/enterprise-graph");
-~var graph = graph_module._create("myGraph", [], [], {isSmart: true, numberOfShards: 9});
-~graph._addVertexCollection("shop");
-~graph._addVertexCollection("customer");
-~graph._addVertexCollection("pet");
-var rel = graph_module._relation("isCustomer", ["shop"], ["customer"]);
-graph._extendEdgeDefinitions(rel);
-graph = graph_module._graph("myGraph");
-~graph_module._drop("myGraph", true);
-```
-
-### Create an EnterpriseGraph using SatelliteCollections
-
-When creating a collection, you can decide whether it's a SatelliteCollection
-or not. For example, a vertex collection can be satellite as well.
-SatelliteCollections don't require sharding as the data is distributed
-globally on all DB-Servers. The `smartGraphAttribute` is also not required.
-
-In addition to the attributes you would set to create an EnterpriseGraph, there is an
-additional attribute `satellites` you can optionally set. It needs to be an array of
-one or more collection names. These names can be used in edge definitions
-(relations) and these collections are created as SatelliteCollections.
-However, all vertex collections on one side of the relation have to be of
-the same type - either all satellite or all smart. This is because `_from`
-and `_to` can have different types based on the sharding pattern.
-
-In this example, both vertex collections are created as SatelliteCollections.
-
-{{< info >}}
-A satellite collection that is not used in any relation is not created.
-If you only create the collection in a subsequent request, the option
-takes effect at that point.
-{{< /info >}}
-
-```js
----
-name: enterpriseGraphCreateGraphHowTo4
-description: ''
-type: cluster
----
-var graph_module = require("@arangodb/enterprise-graph");
-var rel = graph_module._relation("isCustomer", "shop", "customer")
-var graph = graph_module._create("myGraph", [rel], [], {satellites: ["shop", "customer"], isSmart: true, numberOfShards: 9});
-graph;
-~graph_module._drop("myGraph", true);
-```
diff --git a/site/content/3.10/graphs/example-graphs.md b/site/content/3.10/graphs/example-graphs.md
deleted file mode 100644
index 300154d268..0000000000
--- a/site/content/3.10/graphs/example-graphs.md
+++ /dev/null
@@ -1,263 +0,0 @@
----
-title: Example graphs
-menuTitle: Example graphs
-weight: 110
-description: >-
- How to use the example graphs built into ArangoDB
----
-ArangoDB comes with a set of easy-to-understand graphs for demonstration
-purposes.
-
-- In the web interface, navigate to the **GRAPHS** section, click the
- **Add Graph** card, go to the **Examples** tab, and click the **Create** button of one of the listed graphs.
-
-- In _arangosh_, run `require("@arangodb/graph-examples/example-graph").loadGraph("<name>");`
-  with `<name>` substituted by the name of an example graph listed below.
-
-You can visually explore the created graphs in the
-[Graph viewer](../components/web-interface/graphs.md#graph-viewer)
-of the web interface.
-
-You can take a look at the script that creates the example graphs on
-[GitHub](https://github.com/arangodb/arangodb/blob/devel/js/common/modules/%40arangodb/graph-examples/example-graph.js)
-for reference about how to manage graphs programmatically.
-
-## Knows Graph
-
-The `knows` graph is a set of persons knowing each other:
-
-
-
-The graph consists of a `persons` vertex collection connected via a `knows`
-edge collection.
-There are five persons, *Alice*, *Bob*, *Charlie*, *Dave*, and *Eve*.
-They have the following directed relations:
-
-- *Alice* knows *Bob*
-- *Bob* knows *Charlie*
-- *Bob* knows *Dave*
-- *Eve* knows *Alice*
-- *Eve* knows *Bob*
-
-Example of how to create the graph, inspect its vertices and edges, and delete
-it again:
-
-```js
----
-name: graph_create_knows_sample
-description: ''
----
-var examples = require("@arangodb/graph-examples/example-graph");
-var g = examples.loadGraph("knows_graph");
-db.persons.toArray()
-db.knows.toArray();
-examples.dropGraph("knows_graph");
-```
-
-**Note:** With the graph viewer's default traversal depth of 2, you may not
-see all edges of this graph.
-
-## Traversal Graph
-
-The `traversalGraph` has been designed to demonstrate filters in traversals.
-It has some labels to filter on. The graph's vertices are in a collection
-called `circles`, and it has an edge collection `edges` to connect them.
-
-
-
-Circles have unique numeric labels. Edges have two boolean attributes
-(`theFalse` always being `false`, `theTruth` always being `true`) and a label
-sorting *B* - *D* to the left side, *G* - *K* to the right side.
-The left and right sides split into paths at *B* and *G*, which are each direct
-neighbors of the root node *A*. Starting from *A*, the graph has a depth of 3 on
-all its paths.
-
-```js
----
-name: graph_create_traversal_sample
-description: ''
----
-var examples = require("@arangodb/graph-examples/example-graph");
-var g = examples.loadGraph("traversalGraph");
-db.circles.toArray();
-db.edges.toArray();
-examples.dropGraph("traversalGraph");
-```
-
-**Note:** With the default traversal depth of 2 of the graph viewer, you may
-not see all edges of this graph.
-
-## k Shortest Paths Graph
-
-The vertices in the `kShortestPathsGraph` graph are train stations of cities in
-Europe and North America. The edges represent train connections between them,
-with the travel time for both directions as edge weight.
-
-
-
-See the [k Shortest Paths page](../aql/graphs/k-shortest-paths.md) for query examples.
-
-```js
----
-name: graph_create_kshortestpaths_sample
-description: ''
----
-var examples = require("@arangodb/graph-examples/example-graph");
-var g = examples.loadGraph("kShortestPathsGraph");
-db.places.toArray();
-db.connections.toArray();
-examples.dropGraph("kShortestPathsGraph");
-```
-
-## Mps Graph
-
-The `mps_graph` has been created to demonstrate shortest path algorithms and
-the abbreviation stands for **m**ultiple **p**ath **s**earch.
-
-The example graph consists of vertices in the `mps_verts` collection and edges
-in the `mps_edges` collection. It is a simple traversal graph with start node
-*A* and end node *C*.
-
-
-
-With the [Shortest Path](../aql/graphs/shortest-path.md) algorithm, you either
-get the shortest path *A* - *B* - *C* or *A* - *D* - *C*. With the
-[All Shortest Paths](../aql/graphs/all-shortest-paths.md) algorithm, both
-shortest paths are returned.
-
-Example of how to create the graph, inspect its vertices and edges, and delete
-it again:
-
-```js
----
-name: graph_create_mps_sample
-description: ''
----
-var examples = require("@arangodb/graph-examples/example-graph");
-var g = examples.loadGraph("mps_graph");
-db.mps_verts.toArray();
-db.mps_edges.toArray();
-examples.dropGraph("mps_graph");
-```
-
-## World Graph
-
-The `worldCountry` graph has the following node structure:
-
-world → continent → country → capital
-
-
-
-Not all edge directions point forward. Therefore, the graph may appear
-disjointed in the graph viewer.
-
-You can create the graph as a named graph using the name `worldCountry`, or as
-an anonymous graph (vertex and edge collections only) using the name
-`worldCountryUnManaged`.
-
-```js
----
-name: graph_create_world_sample
-description: ''
----
-var examples = require("@arangodb/graph-examples/example-graph");
-var g = examples.loadGraph("worldCountry");
-db.worldVertices.toArray();
-db.worldEdges.toArray();
-examples.dropGraph("worldCountry");
-var g = examples.loadGraph("worldCountryUnManaged");
-examples.dropGraph("worldCountryUnManaged");
-```
-
-## Social Graph
-
-The `social` graph is a set of persons and their relations. The graph has
-`female` and `male` persons as vertices in two vertex collections.
-The edges are their connections and stored in the `relation` edge collection.
-
-
-
-Example of how to create the graph, inspect its vertices and edges, and delete
-it again:
-
-```js
----
-name: graph_create_social_sample
-description: ''
----
-var examples = require("@arangodb/graph-examples/example-graph");
-var graph = examples.loadGraph("social");
-db.female.toArray()
-db.male.toArray()
-db.relation.toArray()
-examples.dropGraph("social");
-```
-
-## City Graph
-
-The `routeplanner` graph is a set of European cities and their fictional
-traveling distances as connections. The graph has the cities as vertices in
-multiple vertex collections (`germanCity` and `frenchCity`). The edges are their
-interconnections in several edge collections (`frenchHighway`, `germanHighway`,
-`internationalHighway`).
-
-
-
-Example of how to create the graph, inspect its edges and vertices, and delete
-it again:
-
-```js
----
-name: graph_create_cities_sample
-description: ''
----
-var examples = require("@arangodb/graph-examples/example-graph");
-var g = examples.loadGraph("routeplanner");
-db.frenchCity.toArray();
-db.germanCity.toArray();
-db.germanHighway.toArray();
-db.frenchHighway.toArray();
-db.internationalHighway.toArray();
-examples.dropGraph("routeplanner");
-```
-
-## Connected Components Graph
-
-A small example graph composed of `components` (vertices) and `connections`
-(edges). Good for trying out Pregel algorithms such as Weakly Connected
-Components (WCC).
-
-Also see:
-- [Distributed Iterative Graph Processing (Pregel)](../data-science/pregel/_index.md)
-- [Pregel HTTP API](../develop/http-api/pregel.md)
-
-
-
-```js
----
-name: graph_create_connectedcomponentsgraph_sample
-description: ''
----
-var examples = require("@arangodb/graph-examples/example-graph");
-var g = examples.loadGraph("connectedComponentsGraph");
-db.components.toArray();
-db.connections.toArray();
-examples.dropGraph("connectedComponentsGraph");
-```
-
-## Higher volume graph examples
-
-All of the above examples are rather small to make them easy to comprehend and
-demonstrate how graphs work in ArangoDB. However, there are several freely
-available datasets on the web that are a lot bigger.
-
-You can find a collection of datasets with import scripts on
-[GitHub](https://github.com/arangodb/example-datasets).
-
-Another huge graph is the [Pokec social network](https://snap.stanford.edu/data/soc-pokec.html)
-from Slovakia. See this [blog post](https://www.arangodb.com/2015/06/multi-model-benchmark/)
-for details and an import script.
-
-## More examples
-
- - [AQL Example Queries on an Actors and Movies Database](../aql/examples-and-query-patterns/actors-and-movies-dataset-queries.md)
diff --git a/site/content/3.10/graphs/general-graphs/_index.md b/site/content/3.10/graphs/general-graphs/_index.md
deleted file mode 100644
index d7c072c47e..0000000000
--- a/site/content/3.10/graphs/general-graphs/_index.md
+++ /dev/null
@@ -1,107 +0,0 @@
----
-title: General Graphs
-menuTitle: General Graphs
-weight: 85
-description: >-
- The basic type of a named graph is called a General Graph and there is a
- JavaScript module for working with these graphs
----
-This chapter describes the [general-graph](../_index.md) module.
-It allows you to define a graph that is spread across several edge and document
-collections.
-This lets you structure your models in line with your domain and group
-them logically in collections, giving you the power to query them in the same
-graph queries.
-There is no need to include the referenced collections within the query;
-this module handles it for you.
-
-New to ArangoDB? Take the free
-[ArangoDB Graph Course](https://www.arangodb.com/arangodb-graph-course)
-to get started.
-
-## Getting started
-
-### Create a General Graph using the web interface
-
-The web interface (also called Web UI) allows you to easily create and manage
-General Graphs. To get started, follow the steps outlined below.
-
-1. In the web interface, navigate to the **GRAPHS** section.
-2. To add a new graph, click **Add Graph**.
-3. In the **Create Graph** dialog that appears, select the
- **GeneralGraph** tab.
-4. Fill in the following fields:
- - For **Name**, enter a name for the General Graph.
- - For **Shards**, enter the number of parts to split the graph into.
- - For **Replication factor**, enter the total number of
- desired copies of the data in the cluster.
- - For **Write concern**, enter the total number of copies
- of the data in the cluster required for each write operation.
-5. Define the relation(s) on the General Graph:
- - For **Edge definition**, insert a single non-existent name to define
- the relation of the graph. This automatically creates a new edge
- collection, which is displayed in the **COLLECTIONS** section of the
- left sidebar menu.
- {{< tip >}}
- To define multiple relations, press the **Add relation** button.
- To remove a relation, press the **Remove relation** button.
- {{< /tip >}}
- - For **fromCollections**, insert a list of vertex collections
- that contain the start vertices of the relation.
- - For **toCollections**, insert a list of vertex collections that
- contain the end vertices of the relation.
-6. If you want to use vertex collections that are part of the graph
- but not used in any edge definition, you can insert them via
- **Orphan collections**.
-7. Click **Create**.
-8. Click the card of the newly created graph and use the functions of the Graph
- Viewer to visually interact with the graph and manage the graph data.
-
-
-
-### Create a General Graph using *arangosh*
-
-**Create a graph**
-
-```js
----
-name: generalGraphCreateGraphHowTo1
-description: ''
----
-var graph_module = require("@arangodb/general-graph");
-var graph = graph_module._create("myGraph");
-graph;
-~graph_module._drop("myGraph", true);
-```
-
-**Add some vertex collections**
-
-```js
----
-name: generalGraphCreateGraphHowTo2
-description: ''
----
-~var graph_module = require("@arangodb/general-graph");
-~var graph = graph_module._create("myGraph");
-graph._addVertexCollection("shop");
-graph._addVertexCollection("customer");
-graph._addVertexCollection("pet");
-graph = graph_module._graph("myGraph");
-~graph_module._drop("myGraph", true);
-```
-
-**Define relations on the Graph**
-
-```js
----
-name: generalGraphCreateGraphHowTo3
-description: ''
----
-~var graph_module = require("@arangodb/general-graph");
-~var graph = graph_module._create("myGraph");
-~graph._addVertexCollection("pet");
-var rel = graph_module._relation("isCustomer", ["shop"], ["customer"]);
-graph._extendEdgeDefinitions(rel);
-graph = graph_module._graph("myGraph");
-~graph_module._drop("myGraph", true);
-```
diff --git a/site/content/3.10/graphs/general-graphs/functions.md b/site/content/3.10/graphs/general-graphs/functions.md
deleted file mode 100644
index 87fb731922..0000000000
--- a/site/content/3.10/graphs/general-graphs/functions.md
+++ /dev/null
@@ -1,938 +0,0 @@
----
-title: Graph Functions
-menuTitle: Functions
-weight: 10
-description: >-
- Utility functions available for named graphs
----
-This chapter describes [various functions on a graph](../_index.md).
-A lot of these accept a vertex (or edge) example as parameter as defined in the next section.
-
-Examples explain the API using the [City Graph](../example-graphs.md#city-graph):
-
-
-
-## Definition of examples
-
-For many of the following functions *examples* can be passed in as a parameter.
-*Examples* are used to filter the result set for objects that match the conditions.
-These *examples* can have the following values:
-
-- `null`: no matching is executed; all found results are valid.
-- A *string*: only results whose `_id` equals the value of the string are returned.
-- An example *object*, defining a set of attributes.
- Only results having these attributes are matched.
-- A *list* containing example *objects* and/or *strings*.
- All results matching at least one of the elements in the list are returned.
-
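-For instance, the following sketch passes each of the four formats to the
-[_neighbors](#_neighbors) function described below, using the
-[City Graph](../example-graphs.md#city-graph) (the filter values are
-illustrative):
-
-```js
-var examples = require("@arangodb/graph-examples/example-graph");
-var graph = examples.loadGraph("routeplanner");
-
-// null: no matching is executed, every vertex is a valid result
-graph._neighbors(null);
-
-// A string: only the vertex with this exact _id matches
-graph._neighbors("germanCity/Hamburg");
-
-// An object: only vertices having these attribute values match
-graph._neighbors({ isCapital: true });
-
-// A list: vertices matching at least one of the elements match
-graph._neighbors(["germanCity/Hamburg", { isCapital: true }]);
-
-examples.dropGraph("routeplanner");
-```
-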
-## Get vertices from edges
-
-### Get the source vertex of an edge
-
-`graph._fromVertex(edgeId)`
-
-Returns the vertex defined with the attribute `_from` of the edge with `edgeId` as its `_id`.
-
-- `edgeId` (required): `_id` attribute of the edge
-
-**Examples**
-
-```js
----
-name: generalGraphGetFromVertex
-description: ''
----
-var examples = require("@arangodb/graph-examples/example-graph");
-var graph = examples.loadGraph("social");
-var any = require("@arangodb").db.relation.any();
-graph._fromVertex("relation/" + any._key);
-~examples.dropGraph("social");
-```
-
-### Get the target vertex of an edge
-
-`graph._toVertex(edgeId)`
-
-Returns the vertex defined with the attribute `_to` of the edge with `edgeId` as its `_id`.
-
-- `edgeId` (required): `_id` attribute of the edge
-
-**Examples**
-
-```js
----
-name: generalGraphGetToVertex
-description: ''
----
-var examples = require("@arangodb/graph-examples/example-graph");
-var graph = examples.loadGraph("social");
-var any = require("@arangodb").db.relation.any();
-graph._toVertex("relation/" + any._key);
-~examples.dropGraph("social");
-```
-
-## _neighbors
-
-Get all neighbors of the vertices defined by the example.
-
-`graph._neighbors(vertexExample, options)`
-
-The function accepts an id, an example, a list of examples or even an empty
-example as parameter for `vertexExample`.
-The complexity of this method is `O(n * m^x)` with `n` being the vertices defined by the
-`vertexExample` parameter, the average amount of neighbors `m`, and the maximal depth `x`.
-Hence, the default call has a complexity of `O(n * m)`.
-
-- `vertexExample` (optional): See [Definition of examples](#definition-of-examples)
-- `options` (optional): An object defining further options. Can have the following values:
- - `direction`: The direction of the edges. Possible values are `"outbound"`, `"inbound"`, and `"any"` (default).
- - `edgeExamples`: Filter the edges, see [Definition of examples](#definition-of-examples)
- - `neighborExamples`: Filter the neighbor vertices, see [Definition of examples](#definition-of-examples)
- - `edgeCollectionRestriction`: One or a list of edge-collection names that should be
- considered to be on the path.
- - `vertexCollectionRestriction`: One or a list of vertex-collection names that should be
- considered on the intermediate vertex steps.
- - `minDepth`: Defines the minimal number of intermediate steps to neighbors (default is 1).
- - `maxDepth`: Defines the maximal number of intermediate steps to neighbors (default is 1).
-
-**Examples**
-
-A route planner example, all neighbors of capitals.
-
-```js
----
-name: generalGraphModuleNeighbors1
-description: ''
----
-var examples = require("@arangodb/graph-examples/example-graph");
-var graph = examples.loadGraph("routeplanner");
-graph._neighbors({isCapital : true});
-~examples.dropGraph("routeplanner");
-```
-
-A route planner example, all outbound neighbors of Hamburg.
-
-```js
----
-name: generalGraphModuleNeighbors2
-description: ''
----
-var examples = require("@arangodb/graph-examples/example-graph");
-var graph = examples.loadGraph("routeplanner");
-graph._neighbors('germanCity/Hamburg', {direction : 'outbound', maxDepth : 2});
-~examples.dropGraph("routeplanner");
-```
-
-## _commonNeighbors
-
-Get all common neighbors of the vertices defined by the examples.
-
-`graph._commonNeighbors(vertex1Example, vertex2Examples, optionsVertex1, optionsVertex2)`
-
-This function returns the intersection of `graph._neighbors(vertex1Example, optionsVertex1)`
-and `graph._neighbors(vertex2Example, optionsVertex2)`.
-For parameter documentation see [_neighbors](#_neighbors).
-
-The complexity of this method is **O(n\*m^x)** with *n* being the maximal amount of vertices
-defined by the parameters vertexExamples, *m* the average amount of neighbors, and *x* the
-maximal depth.
-Hence, the default call has a complexity of **O(n\*m)**.
-
-**Examples**
-
-A route planner example, all common neighbors of capitals.
-
-```js
----
-name: generalGraphModuleCommonNeighbors1
-description: ''
----
-var examples = require("@arangodb/graph-examples/example-graph");
-var graph = examples.loadGraph("routeplanner");
-graph._commonNeighbors({isCapital : true}, {isCapital : true});
-~examples.dropGraph("routeplanner");
-```
-
-A route planner example, all common outbound neighbors of Hamburg with any
-other location, with a maximal depth of 2:
-
-```js
----
-name: generalGraphModuleCommonNeighbors2
-description: ''
----
-var examples = require("@arangodb/graph-examples/example-graph");
-var graph = examples.loadGraph("routeplanner");
-graph._commonNeighbors(
- 'germanCity/Hamburg',
- {},
- {direction : 'outbound', maxDepth : 2},
- {direction : 'outbound', maxDepth : 2});
-~examples.dropGraph("routeplanner");
-```
-
-## _countCommonNeighbors
-
-Get the amount of common neighbors of the vertices defined by the examples.
-
-`graph._countCommonNeighbors(vertex1Example, vertex2Examples, optionsVertex1, optionsVertex2)`
-
-Similar to [_commonNeighbors](#_commonneighbors) but returns count instead of the elements.
-
-**Examples**
-
-A route planner example, all common neighbors of capitals.
-
-```js
----
-name: generalGraphModuleCommonNeighborsAmount1
-description: ''
----
-var examples = require("@arangodb/graph-examples/example-graph");
-var graph = examples.loadGraph("routeplanner");
-var example = { isCapital: true };
-var options = { includeData: true };
-graph._countCommonNeighbors(example, example, options, options);
-~examples.dropGraph("routeplanner");
-```
-
-A route planner example, all common outbound neighbors of Hamburg with any
-other location, with a maximal depth of 2:
-
-```js
----
-name: generalGraphModuleCommonNeighborsAmount2
-description: ''
----
-var examples = require("@arangodb/graph-examples/example-graph");
-var graph = examples.loadGraph("routeplanner");
-var options = { direction: 'outbound', maxDepth: 2, includeData: true };
-graph._countCommonNeighbors('germanCity/Hamburg', {}, options, options);
-~examples.dropGraph("routeplanner");
-```
-
-## _commonProperties
-
-Get the vertices of the graph that share common properties.
-
-`graph._commonProperties(vertex1Example, vertex2Examples, options)`
-
-The function accepts an id, an example, a list of examples or even an empty
-example as parameter for `vertex1Example` and `vertex2Example`.
-
-The complexity of this method is **O(n)** with *n* being the maximal amount of vertices
-defined by the parameters vertexExamples.
-
-- `vertex1Examples` (optional): Filter the set of source vertices, see [Definition of examples](#definition-of-examples)
-
-- `vertex2Examples` (optional): Filter the set of vertices compared to, see [Definition of examples](#definition-of-examples)
-- `options` (optional): An object defining further options. Can have the following values:
-  - `vertex1CollectionRestriction`: One or a list of vertex-collection names that should be
-    searched for source vertices.
-  - `vertex2CollectionRestriction`: One or a list of vertex-collection names that should be
-    searched for compare vertices.
-  - `ignoreProperties`: One or a list of attribute names of a document that should be ignored.
-
-**Examples**
-
-A route planner example, all locations with the same properties:
-
-```js
----
-name: generalGraphModuleProperties1
-description: ''
----
-var examples = require("@arangodb/graph-examples/example-graph");
-var graph = examples.loadGraph("routeplanner");
-graph._commonProperties({}, {});
-~examples.dropGraph("routeplanner");
-```
-
-A route planner example, all cities that share the same properties except for population.
-
-```js
----
-name: generalGraphModuleProperties2
-description: ''
----
-var examples = require("@arangodb/graph-examples/example-graph");
-var graph = examples.loadGraph("routeplanner");
-graph._commonProperties({}, {}, { ignoreProperties: 'population' });
-~examples.dropGraph("routeplanner");
-```
-
-## _countCommonProperties
-
-Get the amount of vertices of the graph that share common properties.
-
-`graph._countCommonProperties(vertex1Example, vertex2Examples, options)`
-
-Similar to [_commonProperties](#_commonproperties) but returns count instead of
-the objects.
-
-**Examples**
-
-A route planner example, all locations with the same properties:
-
-```js
----
-name: generalGraphModuleAmountProperties1
-description: ''
----
-var examples = require("@arangodb/graph-examples/example-graph");
-var graph = examples.loadGraph("routeplanner");
-graph._countCommonProperties({}, {});
-~examples.dropGraph("routeplanner");
-```
-
-A route planner example, all German cities that share the same properties except for population.
-
-```js
----
-name: generalGraphModuleAmountProperties2
-description: ''
----
-var examples = require("@arangodb/graph-examples/example-graph");
-var graph = examples.loadGraph("routeplanner");
-graph._countCommonProperties({}, {}, {
- vertex1CollectionRestriction: 'germanCity',
- vertex2CollectionRestriction: 'germanCity',
- ignoreProperties: 'population'
-});
-~examples.dropGraph("routeplanner");
-```
-
-## _paths
-
-The `_paths` function returns all paths of a graph.
-
-`graph._paths(options)`
-
-This function determines all available paths in a graph.
-
-The complexity of this method is **O(n\*n\*m)** with *n* being the amount of vertices in
-the graph and *m* the average amount of connected edges.
-
-- `options` (optional): An object containing options, see below:
- - `direction`: The direction of the edges. Possible values are `"any"`,
- `"inbound"`, and `"outbound"` (default).
-  - `followCycles` (optional): If set to `true`, the query follows cycles in the graph.
-    The default is `false`.
- - `minLength` (optional): Defines the minimal length a path must
- have to be returned (default is 0).
- - `maxLength` (optional): Defines the maximal length a path must
- have to be returned (default is 10).
-
-**Examples**
-
-Return all paths of the graph "social":
-
-```js
----
-name: generalGraphModulePaths1
-description: ''
----
-var examples = require("@arangodb/graph-examples/example-graph");
-var g = examples.loadGraph("social");
-g._paths();
-~examples.dropGraph("social");
-```
-
-Return all inbound paths of the graph "social" with a minimal
-length of 1 and a maximal length of 2:
-
-```js
----
-name: generalGraphModulePaths2
-description: ''
----
-var examples = require("@arangodb/graph-examples/example-graph");
-var g = examples.loadGraph("social");
-g._paths({ direction: 'inbound', minLength: 1, maxLength: 2 });
-~examples.dropGraph("social");
-```
-
-## _shortestPath
-
-The `_shortestPath` function returns all shortest paths of a graph.
-
-`graph._shortestPath(startVertexExample, endVertexExample, options)`
-
-This function determines all shortest paths in a graph.
-The function accepts an id, an example, a list of examples,
-or even an empty example as parameter for the
-start and end vertex.
-The length of a path is by default the amount of edges from one start vertex to
-an end vertex. The option `weight` allows the user to define an edge attribute
-representing the length.
-
-- `startVertexExample` (optional): An example for the desired start Vertices (see [Definition of examples](#definition-of-examples)).
-- `endVertexExample` (optional): An example for the desired end Vertices (see [Definition of examples](#definition-of-examples)).
-- `options` (optional): An object containing options, see below:
- - `direction`: The direction of the edges as a string.
- Possible values are `"outbound"`, `"inbound"`, and `"any"` (default).
- - `edgeCollectionRestriction`: One or multiple edge
- collection names. Only edges from these collections will be considered for the path.
- - `startVertexCollectionRestriction`: One or multiple vertex
- collection names. Only vertices from these collections will be considered as
- start vertex of a path.
- - `endVertexCollectionRestriction`: One or multiple vertex
- collection names. Only vertices from these collections will be considered as
- end vertex of a path.
- - `weight`: The name of the attribute of
- the edges containing the length as a string.
- - `defaultWeight`: Only used with the option `weight`.
-    If an edge does not have the attribute named as defined in option `weight`, this default
-    is used as length.
-    If no default is supplied, the default is positive infinity, so the path
-    cannot be calculated.
-
-**Examples**
-
-A route planner example, shortest path from all German to all French cities:
-
-```js
----
-name: generalGraphModuleShortestPaths1
-description: ''
----
-var examples = require("@arangodb/graph-examples/example-graph");
-var g = examples.loadGraph("routeplanner");
-g._shortestPath({}, {}, {
- weight: 'distance',
- endVertexCollectionRestriction: 'frenchCity',
- startVertexCollectionRestriction: 'germanCity'
-});
-~examples.dropGraph("routeplanner");
-```
-
-A route planner example, shortest path from Cologne and Munich to Lyon:
-
-```js
----
-name: generalGraphModuleShortestPaths2
-description: ''
----
-var examples = require("@arangodb/graph-examples/example-graph");
-var g = examples.loadGraph("routeplanner");
-g._shortestPath([
- {_id: 'germanCity/Cologne'},
- {_id: 'germanCity/Munich'}
-], 'frenchCity/Lyon', { weight: 'distance' });
-~examples.dropGraph("routeplanner");
-```
-
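-The `defaultWeight` option described above can serve as a fallback. A sketch,
-assuming some edges lack the `distance` attribute (the fallback value of 1 is
-illustrative):
-
-```js
-var examples = require("@arangodb/graph-examples/example-graph");
-var g = examples.loadGraph("routeplanner");
-// Edges without a distance attribute count as length 1 instead of
-// positive infinity, so paths across them can still be calculated
-g._shortestPath({}, {}, { weight: 'distance', defaultWeight: 1 });
-examples.dropGraph("routeplanner");
-```
-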
-## _distanceTo
-
-The `_distanceTo` function returns all paths and their distances within a graph.
-
-`graph._distanceTo(startVertexExample, endVertexExample, options)`
-
-This function is a wrapper of [graph._shortestPath](#_shortestpath).
-It does not return the actual path but only the distance between two vertices.
-
-**Examples**
-
-A route planner example, shortest distance from all German to all French cities:
-
-```js
----
-name: generalGraphModuleDistanceTo1
-description: ''
----
-var examples = require("@arangodb/graph-examples/example-graph");
-var g = examples.loadGraph("routeplanner");
-g._distanceTo({}, {}, {
- weight: 'distance',
- endVertexCollectionRestriction: 'frenchCity',
- startVertexCollectionRestriction: 'germanCity'
-});
-~examples.dropGraph("routeplanner");
-```
-
-A route planner example, shortest distance from Cologne and Munich to Lyon:
-
-```js
----
-name: generalGraphModuleDistanceTo2
-description: ''
----
-var examples = require("@arangodb/graph-examples/example-graph");
-var g = examples.loadGraph("routeplanner");
-g._distanceTo([
- {_id: 'germanCity/Cologne'},
- {_id: 'germanCity/Munich'}
-], 'frenchCity/Lyon', { weight: 'distance' });
-~examples.dropGraph("routeplanner");
-```
-
-## _absoluteEccentricity
-
-Get the
-[eccentricity](http://en.wikipedia.org/wiki/Distance_%28graph_theory%29)
-of the vertices defined by the examples.
-
-`graph._absoluteEccentricity(vertexExample, options)`
-
-The function accepts an id, an example, a list of examples or even an empty
-example as parameter for `vertexExample`.
-
-- `vertexExample` (optional): Filter the vertices, see [Definition of examples](#definition-of-examples)
-- `options` (optional): An object defining further options. Can have the following values:
- - `direction`: The direction of the edges. Possible values are `"outbound"`, `"inbound"`, and `"any"` (default).
-  - `edgeCollectionRestriction`: One or a list of edge-collection names that should be
-    considered to be on the path.
-  - `startVertexCollectionRestriction`: One or a list of vertex-collection names that should be
-    considered for source vertices.
-  - `endVertexCollectionRestriction`: One or a list of vertex-collection names that should be
-    considered for target vertices.
- - `weight`: The name of the attribute of the edges containing the weight.
- - `defaultWeight`: Only used with the option `weight`.
-    If an edge does not have the attribute named as defined in option `weight`, this default
-    is used as weight.
-    If no default is supplied, the default is positive infinity, so the path and
-    hence the eccentricity cannot be calculated.
-
-**Examples**
-
-A route planner example, the absolute eccentricity of all locations.
-
-```js
----
-name: generalGraphModuleAbsEccentricity1
-description: ''
----
-var examples = require("@arangodb/graph-examples/example-graph");
-var graph = examples.loadGraph("routeplanner");
-graph._absoluteEccentricity({});
-~examples.dropGraph("routeplanner");
-```
-
-A route planner example, the absolute eccentricity of all locations.
-This considers the actual distances.
-
-```js
----
-name: generalGraphModuleAbsEccentricity2
-description: ''
----
-var examples = require("@arangodb/graph-examples/example-graph");
-var graph = examples.loadGraph("routeplanner");
-graph._absoluteEccentricity({}, { weight: 'distance' });
-~examples.dropGraph("routeplanner");
-```
-
-A route planner example, the absolute eccentricity of all German cities regarding only
-outbound paths.
-
-```js
----
-name: generalGraphModuleAbsEccentricity3
-description: ''
----
-var examples = require("@arangodb/graph-examples/example-graph");
-var graph = examples.loadGraph("routeplanner");
-graph._absoluteEccentricity({}, {
- startVertexCollectionRestriction: 'germanCity',
- direction: 'outbound',
- weight: 'distance'
-});
-~examples.dropGraph("routeplanner");
-```
-
-## _eccentricity
-
-Get the normalized
-[eccentricity](http://en.wikipedia.org/wiki/Distance_%28graph_theory%29)
-of the vertices defined by the examples.
-
-`graph._eccentricity(vertexExample, options)`
-
-Similar to [_absoluteEccentricity](#_absoluteeccentricity) but returns a normalized result.
-
-**Examples**
-
-A route planner example, the eccentricity of all locations.
-
-```js
----
-name: generalGraphModuleEccentricity2
-description: ''
----
-var examples = require("@arangodb/graph-examples/example-graph");
-var graph = examples.loadGraph("routeplanner");
-graph._eccentricity();
-~examples.dropGraph("routeplanner");
-```
-
-A route planner example, the weighted eccentricity.
-
-```js
----
-name: generalGraphModuleEccentricity3
-description: ''
----
-var examples = require("@arangodb/graph-examples/example-graph");
-var graph = examples.loadGraph("routeplanner");
-graph._eccentricity({ weight: 'distance' });
-~examples.dropGraph("routeplanner");
-```
-
-## _absoluteCloseness
-
-Get the
-[closeness](http://en.wikipedia.org/wiki/Centrality#Closeness_centrality)
-of the vertices defined by the examples.
-
-`graph._absoluteCloseness(vertexExample, options)`
-
-The function accepts an id, an example, a list of examples or even an empty
-example as parameter for `vertexExample`.
-
-- `vertexExample` (optional): Filter the vertices, see [Definition of examples](#definition-of-examples)
-- `options` (optional): An object defining further options. Can have the following values:
-  - `direction`: The direction of the edges. Possible values are `"outbound"`, `"inbound"`, and `"any"` (default).
-  - `edgeCollectionRestriction`: One or a list of edge-collection names that should be
-    considered to be on the path.
-  - `startVertexCollectionRestriction`: One or a list of vertex-collection names that should be
-    considered for source vertices.
-  - `endVertexCollectionRestriction`: One or a list of vertex-collection names that should be
-    considered for target vertices.
- - `weight`: The name of the attribute of the edges containing the weight.
- - `defaultWeight`: Only used with the option `weight`.
-    If an edge does not have the attribute named as defined in option `weight`, this default
-    is used as weight.
-    If no default is supplied, the default is positive infinity, so the path and
-    hence the closeness cannot be calculated.
-
-**Examples**
-
-A route planner example, the absolute closeness of all locations.
-
-```js
----
-name: generalGraphModuleAbsCloseness1
-description: ''
----
-var examples = require("@arangodb/graph-examples/example-graph");
-var graph = examples.loadGraph("routeplanner");
-graph._absoluteCloseness({});
-~examples.dropGraph("routeplanner");
-```
-
-A route planner example, the absolute closeness of all locations.
-This considers the actual distances.
-
-```js
----
-name: generalGraphModuleAbsCloseness2
-description: ''
----
-var examples = require("@arangodb/graph-examples/example-graph");
-var graph = examples.loadGraph("routeplanner");
-graph._absoluteCloseness({}, { weight: 'distance' });
-~examples.dropGraph("routeplanner");
-```
-
-A route planner example, the absolute closeness of all German cities regarding only
-outbound paths.
-
-```js
----
-name: generalGraphModuleAbsCloseness3
-description: ''
----
-var examples = require("@arangodb/graph-examples/example-graph");
-var graph = examples.loadGraph("routeplanner");
-graph._absoluteCloseness({}, {
- startVertexCollectionRestriction: 'germanCity',
- direction: 'outbound',
- weight: 'distance'
-});
-~examples.dropGraph("routeplanner");
-```
-
-## _closeness
-
-Get the normalized
-[closeness](http://en.wikipedia.org/wiki/Centrality#Closeness_centrality)
-of the graph's vertices.
-
-`graph._closeness(options)`
-
-Similar to [_absoluteCloseness](#_absolutecloseness) but returns a normalized value.
-
-**Examples**
-
-A route planner example, the normalized closeness of all locations.
-
-```js
----
-name: generalGraphModuleCloseness1
-description: ''
----
-var examples = require("@arangodb/graph-examples/example-graph");
-var graph = examples.loadGraph("routeplanner");
-graph._closeness();
-~examples.dropGraph("routeplanner");
-```
-
-A route planner example, the closeness of all locations.
-This considers the actual distances.
-
-```js
----
-name: generalGraphModuleCloseness2
-description: ''
----
-var examples = require("@arangodb/graph-examples/example-graph");
-var graph = examples.loadGraph("routeplanner");
-graph._closeness({ weight: 'distance' });
-~examples.dropGraph("routeplanner");
-```
-
-A route planner example, the closeness of all cities regarding only
-outbound paths.
-
-```js
----
-name: generalGraphModuleCloseness3
-description: ''
----
-var examples = require("@arangodb/graph-examples/example-graph");
-var graph = examples.loadGraph("routeplanner");
-graph._closeness({ direction: 'outbound', weight: 'distance' });
-~examples.dropGraph("routeplanner");
-```
-
-## _absoluteBetweenness
-
-Get the
-[betweenness](http://en.wikipedia.org/wiki/Betweenness_centrality)
-of all vertices in the graph.
-
-`graph._absoluteBetweenness(vertexExample, options)`
-
-- `vertexExample` (optional): Filter the vertices, see [Definition of examples](#definition-of-examples)
-- `options` (optional): An object defining further options. Can have the following values:
- - `direction`: The direction of the edges. Possible values are `"outbound"`, `"inbound"`, and `"any"` (default).
- - `weight`: The name of the attribute of the edges containing the weight.
- - `defaultWeight`: Only used with the option `weight`.
-    If an edge does not have the attribute named as defined in option `weight`, this default
-    is used as weight.
-    If no default is supplied, the default is positive infinity, so the path and
-    hence the betweenness cannot be calculated.
-
-**Examples**
-
-A route planner example, the absolute betweenness of all locations.
-
-```js
----
-name: generalGraphModuleAbsBetweenness1
-description: ''
----
-var examples = require("@arangodb/graph-examples/example-graph");
-var graph = examples.loadGraph("routeplanner");
-graph._absoluteBetweenness({});
-~examples.dropGraph("routeplanner");
-```
-
-A route planner example, the absolute betweenness of all locations.
-This considers the actual distances.
-
-```js
----
-name: generalGraphModuleAbsBetweenness2
-description: ''
----
-var examples = require("@arangodb/graph-examples/example-graph");
-var graph = examples.loadGraph("routeplanner");
-graph._absoluteBetweenness({ weight: 'distance' });
-~examples.dropGraph("routeplanner");
-```
-
-A route planner example, the absolute betweenness of all cities regarding only
-outbound paths.
-
-```js
----
-name: generalGraphModuleAbsBetweenness3
-description: ''
----
-var examples = require("@arangodb/graph-examples/example-graph");
-var graph = examples.loadGraph("routeplanner");
-graph._absoluteBetweenness({ direction: 'outbound', weight: 'distance' });
-~examples.dropGraph("routeplanner");
-```
-
-## _betweenness
-
-Get the normalized
-[betweenness](http://en.wikipedia.org/wiki/Betweenness_centrality)
-of the graph's vertices.
-
-`graph._betweenness(options)`
-
-Similar to [_absoluteBetweenness](#_absolutebetweenness) but returns normalized values.
-
-**Examples**
-
-A route planner example, the betweenness of all locations.
-
-```js
----
-name: generalGraphModuleBetweenness1
-description: ''
----
-var examples = require("@arangodb/graph-examples/example-graph");
-var graph = examples.loadGraph("routeplanner");
-graph._betweenness();
-~examples.dropGraph("routeplanner");
-```
-
-A route planner example, the betweenness of all locations.
-This considers the actual distances.
-
-```js
----
-name: generalGraphModuleBetweenness2
-description: ''
----
-var examples = require("@arangodb/graph-examples/example-graph");
-var graph = examples.loadGraph("routeplanner");
-graph._betweenness({ weight: 'distance' });
-~examples.dropGraph("routeplanner");
-```
-
-A route planner example, the betweenness of all cities regarding only
-outbound paths.
-
-```js
----
-name: generalGraphModuleBetweenness3
-description: ''
----
-var examples = require("@arangodb/graph-examples/example-graph");
-var graph = examples.loadGraph("routeplanner");
-graph._betweenness({ direction: 'outbound', weight: 'distance' });
-~examples.dropGraph("routeplanner");
-```
-
-## _radius
-
-Get the
-[radius](http://en.wikipedia.org/wiki/Eccentricity_%28graph_theory%29)
-of a graph.
-
-`graph._radius(options)`
-
-- `options` (optional): An object defining further options. Can have the following values:
- - `direction`: The direction of the edges. Possible values are `"outbound"`, `"inbound"`, and `"any"` (default).
- - `weight`: The name of the attribute of the edges containing the weight.
- - `defaultWeight`: Only used with the option `weight`.
-    If an edge does not have the attribute named as defined in option `weight`, this default
-    is used as weight.
-    If no default is supplied, the default is positive infinity, so the path and
-    hence the radius cannot be calculated.
-
-**Examples**
-
-A route planner example, the radius of the graph.
-
-```js
----
-name: generalGraphModuleRadius1
-description: ''
----
-var examples = require("@arangodb/graph-examples/example-graph");
-var graph = examples.loadGraph("routeplanner");
-graph._radius();
-~examples.dropGraph("routeplanner");
-```
-
-A route planner example, the radius of the graph.
-This considers the actual distances.
-
-```js
----
-name: generalGraphModuleRadius2
-description: ''
----
-var examples = require("@arangodb/graph-examples/example-graph");
-var graph = examples.loadGraph("routeplanner");
-graph._radius({ weight: 'distance' });
-~examples.dropGraph("routeplanner");
-```
-
-A route planner example, the radius of the graph regarding only
-outbound paths.
-
-```js
----
-name: generalGraphModuleRadius3
-description: ''
----
-var examples = require("@arangodb/graph-examples/example-graph");
-var graph = examples.loadGraph("routeplanner");
-graph._radius({ direction: 'outbound', weight: 'distance' });
-~examples.dropGraph("routeplanner");
-```
-
-## _diameter
-
-Get the
-[diameter](http://en.wikipedia.org/wiki/Eccentricity_%28graph_theory%29)
-of a graph.
-
-`graph._diameter(options)`
-
-- `options` (optional): An object defining further options. Can have the following values:
- - `direction`: The direction of the edges. Possible values are `"outbound"`, `"inbound"`, and `"any"` (default).
- - `weight`: The name of the attribute of the edges containing the weight.
- - `defaultWeight`: Only used with the option `weight`.
-    If an edge does not have the attribute named as defined in option `weight`, this default
-    is used as weight.
-    If no default is supplied, the default is positive infinity, so the path and
-    hence the diameter cannot be calculated.
-
-**Examples**
-
-A route planner example, the diameter of the graph.
-
-```js
----
-name: generalGraphModuleDiameter1
-description: ''
----
-var examples = require("@arangodb/graph-examples/example-graph");
-var graph = examples.loadGraph("routeplanner");
-graph._diameter();
-~examples.dropGraph("routeplanner");
-```
-
-A route planner example, the diameter of the graph.
-This considers the actual distances.
-
-```js
----
-name: generalGraphModuleDiameter2
-description: ''
----
-var examples = require("@arangodb/graph-examples/example-graph");
-var graph = examples.loadGraph("routeplanner");
-graph._diameter({ weight: 'distance' });
-~examples.dropGraph("routeplanner");
-```
-
-A route planner example, the diameter of the graph regarding only
-outbound paths.
-
-```js
----
-name: generalGraphModuleDiameter3
-description: ''
----
-var examples = require("@arangodb/graph-examples/example-graph");
-var graph = examples.loadGraph("routeplanner");
-graph._diameter({ direction: 'outbound', weight: 'distance' });
-~examples.dropGraph("routeplanner");
-```
diff --git a/site/content/3.10/graphs/satellitegraphs/_index.md b/site/content/3.10/graphs/satellitegraphs/_index.md
deleted file mode 100644
index 3d2c9b9a7b..0000000000
--- a/site/content/3.10/graphs/satellitegraphs/_index.md
+++ /dev/null
@@ -1,85 +0,0 @@
----
-title: SatelliteGraphs
-menuTitle: SatelliteGraphs
-weight: 90
-description: >-
- Graphs synchronously replicated to all servers, available in the Enterprise Edition
----
-Introduced in: v3.7.0
-
-{{< tag "ArangoDB Enterprise Edition" "ArangoGraph" >}}
-
-## What is a SatelliteGraph?
-
-_SatelliteGraphs_ are a specialized _named graph_ type available for cluster
-deployments. Their underlying collections are synchronously replicated to all
-DB-Servers that are part of the cluster, which enables DB-Servers to execute
-graph traversals locally. This includes shortest path and k-shortest paths
-computations, and possibly even joins with traversals. They greatly improve
-the performance of such queries.
-
-They are the natural extension of the [SatelliteCollections](../../develop/satellitecollections.md)
-concept to graphs. The same benefits and caveats apply.
-
-
-
-## Why use a SatelliteGraph?
-
-When doing queries in an ArangoDB cluster, data has to be exchanged between
-different cluster nodes if the data is sharded and therefore residing
-on multiple nodes. In particular graph traversals are usually executed on a
-Coordinator, because they need global information. This results in a lot of
-network traffic and potentially slow query execution.
-
-Take a permission management use case for example, where you have a permissions
-graph as well as a large, sharded collection of documents. You probably want to
-determine quickly if a user, group or device has permission to access certain
-information from that large collection. You would do this by traversing the
-graph to figure out the permissions and then join it with the large collection.
-With SatelliteGraphs, the entire permissions graph is available on all
-DB-Servers. Thus, traversals can be executed locally. A traversal can even be
-executed on multiple DB-Servers independently, so that the traversal results
-are then available locally on every node, which means that the subsequent join
-operation can also be executed without talking to other DB-Servers.
-
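-A minimal sketch of creating such a permissions graph as a SatelliteGraph with
-the `satellite-graph` module (the graph and collection names are illustrative):
-
-```js
-var graph_module = require("@arangodb/satellite-graph");
-// The "hasPermission" edge collection connects "users" to "documents";
-// all three collections are replicated to every DB-Server
-var rel = graph_module._relation("hasPermission", ["users"], ["documents"]);
-var graph = graph_module._create("permissionsGraph", [rel], []);
-graph;
-graph_module._drop("permissionsGraph", true);
-```
-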
-## When to use SatelliteGraphs?
-
-While General Graphs are available in all Editions, the Enterprise Edition
-offers two more _named graph_ types to achieve single-server-like query
-execution times for graph queries in cluster deployments.
-
-- **General Graphs**:
- The underlying collections of managed graphs can be sharded to distribute the
- data across multiple DB-Servers. However, General Graphs do not enforce or
- maintain special sharding properties of the collections. The document
- distribution is arbitrary and data locality tends to be low. On the positive
- side, it is possible to combine arbitrary sets of existing collections.
- If the graph data is on a single shard, then graph queries can be executed
- locally, but the results still need to be communicated to other nodes.
-
-- **SmartGraphs**:
- Shard the data based on an attribute value, so that documents with the same
- value are stored on the same DB-Server. This can improve data locality and
- reduce the number of network hops between cluster nodes depending on the
- graph layout and traversal query. It is suitable for large scale graphs,
- because the graph data gets sharded to distribute it across multiple
- DB-Servers. Use SmartGraphs instead of General Graphs whenever possible for
- a performance boost.
-
-- **SatelliteGraphs**:
-  Make the entire graph available on all DB-Servers using synchronous
-  replication. All vertices and edges are available on every node for
-  maximum data locality. No network hops are required to traverse the graph.
-  The graph data must fit on each node, so this is typically a small
-  to medium sized graph. The performance is highest if you do not
-  permanently update the graph's structure and content, because every change
-  needs to be replicated to all other DB-Servers.
-
-With SatelliteGraphs, writes into the affected graph collections become slower
-because the graph data is replicated to the participating DB-Servers. Also, as
-writes are performed on every DB-Server, more storage is allocated across the
-whole cluster environment.
-
-If you want to distribute a very large graph and you don't want to replicate
-all graph data to all DB-Servers that are part of your cluster, then you should
-consider [SmartGraphs](../smartgraphs/_index.md) instead.
diff --git a/site/content/3.10/graphs/smartgraphs/_index.md b/site/content/3.10/graphs/smartgraphs/_index.md
deleted file mode 100644
index 3d15be6c58..0000000000
--- a/site/content/3.10/graphs/smartgraphs/_index.md
+++ /dev/null
@@ -1,125 +0,0 @@
----
-title: SmartGraphs
-menuTitle: SmartGraphs
-weight: 95
-description: >-
- SmartGraphs enable you to manage graphs at scale using value-based sharding
----
-{{< tag "ArangoDB Enterprise Edition" "ArangoGraph" >}}
-
-SmartGraphs are specifically targeted at graphs that need scalability and
-high performance. The way SmartGraphs use the ArangoDB cluster sharding makes
-them extremely useful for distributing data across multiple servers with
-minimal network latency.
-
-Most graphs have one feature - a value that is stored in every vertex - that
-divides the entire graph into several smaller subgraphs. These subgraphs have a
-large amount of edges that only connect vertices in the same subgraph and only
-have few edges connecting vertices from other subgraphs. If this feature is
-known, SmartGraphs can make use of it.
-
-Examples for such graphs are:
-
-- **Social Networks**\
- Typically the feature here is the region/country users live in. Every user has
- more contacts in the same region/country than in other regions/countries.
-
-- **Transport Systems**\
- For transport systems, the common feature is the region/country. There are
- many local connections, but only a few go across countries.
-
-- **E-Commerce**\
- In this case, the category of products is a good feature. Products of the same
- category are often bought together.
-
-In terms of querying, there is no difference between SmartGraphs and General Graphs.
-For graph querying, please refer to the [AQL Graph Operations](../../aql/graphs/_index.md)
-and [General Graph Functions](../general-graphs/functions.md) sections.
-The optimizer is clever enough to identify
-whether it is a SmartGraph or not.
-
-Do the hands-on
-[ArangoDB SmartGraphs Tutorial](https://www.arangodb.com/using-smartgraphs-arangodb/)
-to learn more.
-
-## How SmartGraphs work
-
-Typically, when you shard your data with ArangoDB the goal is to have an even
-distribution of data across multiple servers. This approach allows you to scale
-out your data at a rather high speed in most cases. However, since one of the
-best features of ArangoDB is fast graph traversals, this sort of distribution
-can start causing problems if your data grows exponentially.
-
-Instead of traveling across every server before returning data, SmartGraphs use
-a clever and optimized way of moving data through the cluster so that you retain
-the scalability as well as the performance of graph traversals in ArangoDB.
-
-The examples below illustrate the difference in how data is sharded in the
-cluster for both scenarios. Let's take a closer look at it.
-
-### Random data distribution
-
-The natural distribution of data for graphs that handle large datasets involves
-a series of highly interconnected nodes with many edges running between them.
-
-
-
-_The orange line indicates an example graph traversal. Notice how it touches nodes on every server._
-
-Once you start connecting the nodes to each other, it becomes clear that the
-graph traversal might need to travel across every server before returning
-results. This sort of distribution results in many network hops between
-DB-Servers and Coordinators.
-
-### Smart data distribution
-
-By optimizing the distribution of data, SmartGraphs reduce the number of network
-hops traversals require.
-
-SmartGraphs come with a concept of a `smartGraphAttribute` that is used to
-inform the database how exactly to shard data. When defining this attribute,
-think of it as a value that is stored in every vertex. For instance, in
-social network datasets, this attribute can be the ID or the region/country of
-the users.
-
-The graph is then automatically sharded in such a way that all vertices
-with the same value are stored on the same physical machine, and all edges
-connecting vertices with identical `smartGraphAttribute` values are stored on
-this machine as well. Sharding with this attribute means that the relevant data
-is co-located on the same servers whenever possible.
-
-
-
-_The outcome of moving the data like this is that you retain the scalability as well as the performance of graph traversals in ArangoDB._
-
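-A brief sketch of what this sharding looks like in practice, assuming a
-SmartGraph with `smartGraphAttribute: "region"` and a smart vertex collection
-`users` (names and values are illustrative):
-
-```js
-// The attribute value becomes the prefix of the generated vertex key.
-// All vertices sharing the prefix are stored on the same DB-Server.
-db.users.save({ region: "europe", name: "Alice" });
-// => _key has the form "europe:<key>", _id "users/europe:<key>"
-```
-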
-## SmartGraphs using SatelliteCollections
-
-These SmartGraphs are capable of using [SatelliteCollections](../../develop/satellitecollections.md)
-within their graph definition. Therefore, edge definitions between
-SmartCollections and SatelliteCollections can be created. As SatelliteCollections
-(and the edge collections between SmartGraph collections and SatelliteCollections)
-are globally replicated to each participating DB-Server, (weighted) graph traversals
-and (k-)shortest path(s) queries can partially be executed locally on each DB-Server.
-This means a larger part of the query can be executed fully locally
-whenever data from the SatelliteCollections is required.
-
-
-
-## Disjoint SmartGraphs
-
-Disjoint SmartGraphs are useful for use cases that have to deal with a
-large forest of graphs, where you have clearly separated subgraphs in your
-graph dataset. Disjoint SmartGraphs enable the automatic sharding of these
-subgraphs and prohibit edges connecting them.
-
-
-
-_This ensures that graph traversals, shortest path, and k-shortest-paths queries
-can be executed locally on a DB-Server, achieving improved performance for
-these types of queries._
-
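-To illustrate the restriction, a sketch assuming a Disjoint SmartGraph with
-`smartGraphAttribute: "region"` and an `isCustomer` edge collection (the keys
-are illustrative):
-
-```js
-// Allowed: both vertices belong to the same subgraph ("europe")
-db.isCustomer.save({ _from: "shop/europe:1", _to: "customer/europe:2" });
-
-// Rejected with an error: the vertices belong to different subgraphs
-db.isCustomer.save({ _from: "shop/europe:1", _to: "customer/asia:3" });
-```
-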
-## Disjoint SmartGraphs using SatelliteCollections
-
-Disjoint SmartGraphs using SatelliteCollections prohibit
-edges between vertices with different `smartGraphAttribute` values.
-All SmartVertices can be connected to SatelliteVertices.
diff --git a/site/content/3.10/graphs/smartgraphs/getting-started.md b/site/content/3.10/graphs/smartgraphs/getting-started.md
deleted file mode 100644
index 7a41c973bf..0000000000
--- a/site/content/3.10/graphs/smartgraphs/getting-started.md
+++ /dev/null
@@ -1,207 +0,0 @@
----
-title: Getting started with SmartGraphs
-menuTitle: Getting Started
-weight: 5
-description: >-
- How to create and use SmartGraphs
----
-SmartGraphs **cannot use existing collections**. When switching to SmartGraphs
-from an existing dataset, you have to import the data into a fresh SmartGraph.
-
-All collections that are being used in SmartGraphs need to be part of the same
-`distributeShardsLike` group. The `smartGraphAttribute` and the number of
-shards are immutable.
-The `smartGraphAttribute` attribute is used to inform the database how to shard
-data and, as a consequence, all vertices must have this attribute. The `_from`
-and `_to` attributes that point _from_ one document _to_ another document
-stored in vertex collections are set by default, following the same smart
-sharding pattern.
-
-## Create a SmartGraph using the web interface
-
-The web interface (also called Web UI) allows you to easily create and manage
-SmartGraphs. To get started, follow the steps outlined below.
-
-1. In the web interface, navigate to the **GRAPHS** section.
-2. To add a new graph, click **Add Graph**.
-3. In the **Create Graph** dialog that appears, select the
- **SmartGraph** tab.
-4. Fill in all the following fields:
- - For **Name**, enter a name for the SmartGraph.
- - For **Shards**, enter the number of parts to split the graph into.
- - Optional: For **Replication factor**, enter the total number of
- desired copies of the data in the cluster.
- - Optional: For **Write concern**, enter the total number of copies
- of the data in the cluster required for each write operation.
- - For **SmartGraph Attribute**, insert the attribute that is used to
- smartly shard the vertices of the graph. Every vertex in your graph
- needs to have this attribute. Note that it cannot be modified later.
- - Optional: For **SatelliteCollections**, insert vertex collections
- that are used in your edge definitions. These collections are
- then created as satellites, and thus replicated to all DB-Servers.
-5. Define the relations on the graph:
- - For **Edge definition**, insert a single non-existent name to define
- the relation of the graph. This automatically creates a new edge
- collection, which is displayed in the **COLLECTIONS** section of the
- left sidebar menu.
- {{< tip >}}
- To define multiple relations, press the **Add relation** button.
- To remove a relation, press the **Remove relation** button.
- {{< /tip >}}
- - For **fromCollections**, insert a list of vertex collections
- that contain the start vertices of the relation.
- - For **toCollections**, insert a list of vertex collections that
- contain the end vertices of the relation.
- {{< tip >}}
- Insert only non-existent collection names. Collections are automatically
- created during the graph setup and are displayed in the
- **Collections** tab of the left sidebar menu.
- {{< /tip >}}
- - For **Orphan collections**, insert a list of vertex collections
- that are part of the graph but not used in any edge definition.
-6. Click **Create**.
-7. Click the card of the newly created graph and use the functions of the Graph
- Viewer to visually interact with the graph and manage the graph data.
-
-
-
-## Create a SmartGraph using *arangosh*
-
-In contrast to General Graphs, we have to add more options when creating the
-SmartGraph. The two options `smartGraphAttribute` and `numberOfShards` are
-required and cannot be modified later.
-
-```js
----
-name: smartGraphCreateGraphHowTo1
-description: ''
-type: cluster
----
-var graph_module = require("@arangodb/smart-graph");
-var graph = graph_module._create("myGraph", [], [], {smartGraphAttribute: "region", numberOfShards: 9});
-graph;
-~graph_module._drop("myGraph");
-```
-
-## Create a Disjoint SmartGraph using *arangosh*
-
-In contrast to regular SmartGraphs, we have to add one option when creating the
-graph. The boolean option `isDisjoint` is required, needs to be set to `true`,
-and cannot be modified later.
-
-```js
----
-name: smartGraphCreateGraphHowToDisjoint1
-description: ''
-type: cluster
----
-var graph_module = require("@arangodb/smart-graph");
-var graph = graph_module._create("myGraph", [], [], {smartGraphAttribute: "region", numberOfShards: 9, isDisjoint: true});
-graph;
-~graph_module._drop("myGraph");
-```
-
-## Add vertex collections
-
-This is analogous to General Graphs. Unlike with General Graphs, the
-**collections must not exist** when creating the SmartGraph. The SmartGraph
-module creates them for you automatically to set up the sharding for all
-these collections correctly. If you create collections via the SmartGraph
-module and remove them from the graph definition, then you may re-add them
-later without trouble, as they already have the correct sharding.
-
-```js
----
-name: smartGraphCreateGraphHowTo2
-description: ''
-type: cluster
----
-~var graph_module = require("@arangodb/smart-graph");
-~var graph = graph_module._create("myGraph", [], [], {smartGraphAttribute: "region", numberOfShards: 9});
-graph._addVertexCollection("shop");
-graph._addVertexCollection("customer");
-graph._addVertexCollection("pet");
-graph = graph_module._graph("myGraph");
-~graph_module._drop("myGraph", true);
-```
-
-## Define relations on the Graph
-
-Adding edge collections works the same as with General Graphs, but again, the
-collections are created by the SmartGraph module to set up sharding correctly
-so they must not exist when creating the SmartGraph (unless they have the
-correct sharding already).
-
-```js
----
-name: smartGraphCreateGraphHowTo3
-description: ''
-type: cluster
----
-~var graph_module = require("@arangodb/smart-graph");
-~var graph = graph_module._create("myGraph", [], [], {smartGraphAttribute: "region", numberOfShards: 9});
-~graph._addVertexCollection("shop");
-~graph._addVertexCollection("customer");
-~graph._addVertexCollection("pet");
-var rel = graph_module._relation("isCustomer", ["shop"], ["customer"]);
-graph._extendEdgeDefinitions(rel);
-graph = graph_module._graph("myGraph");
-~graph_module._drop("myGraph", true);
-```
-
-## Using SatelliteCollections in SmartGraphs
-
-When creating a collection, you can decide whether it's a SatelliteCollection
-or not. For example, a vertex collection can be satellite as well.
-SatelliteCollections don't require sharding as the data will be distributed
-globally on all DB-Servers. The `smartGraphAttribute` is also not required.
-
-### Create a SmartGraph using SatelliteCollections
-
-In addition to the attributes you would set to create a SmartGraph, there is an
-additional attribute `satellites` you can optionally set. It needs to be an array of
-one or more collection names. These names can be used in edge definitions
-(relations) and these collections will be created as SatelliteCollections.
-However, all vertex collections on one side of the relation have to be of
-the same type - either all satellite or all smart. This is because `_from`
-and `_to` can have different types based on the sharding pattern.
-
-In this example, both vertex collections are created as SatelliteCollections.
-
-{{< info >}}
-If you provide a satellite collection that is not used in any relation, it is
-not created. The option only takes effect if you create the collection in a
-subsequent request.
-{{< /info >}}
-
-```js
----
-name: hybridSmartGraphCreateGraphHowTo1
-description: ''
-type: cluster
----
-var graph_module = require("@arangodb/smart-graph");
-var rel = graph_module._relation("isCustomer", "shop", "customer")
-var graph = graph_module._create("myGraph", [rel], [], {satellites: ["shop", "customer"], smartGraphAttribute: "region", numberOfShards: 9});
-graph;
-~graph_module._drop("myGraph", true);
-```
-
-### Create a Disjoint SmartGraph using SatelliteCollections
-
-The option `isDisjoint` needs to be set to `true` in addition to the other
-options for a SmartGraph using SatelliteCollections. Only the `shop` vertex collection is created
-as a SatelliteCollection in this example:
-
-```js
----
-name: hybridSmartGraphCreateGraphHowTo2
-description: ''
-type: cluster
----
-var graph_module = require("@arangodb/smart-graph");
-var rel = graph_module._relation("isCustomer", "shop", "customer")
-var graph = graph_module._create("myGraph", [rel], [], {satellites: ["shop"], smartGraphAttribute: "region", isDisjoint: true, numberOfShards: 9});
-graph;
-~graph_module._drop("myGraph", true);
-```
diff --git a/site/content/3.10/graphs/smartgraphs/testing-graphs-on-single-server.md b/site/content/3.10/graphs/smartgraphs/testing-graphs-on-single-server.md
deleted file mode 100644
index 5a952b26a6..0000000000
--- a/site/content/3.10/graphs/smartgraphs/testing-graphs-on-single-server.md
+++ /dev/null
@@ -1,44 +0,0 @@
----
-title: SmartGraphs and SatelliteGraphs on a Single Server
-menuTitle: Testing Graphs on Single Server
-weight: 15
-description: >-
- Simulate SmartGraphs and SatelliteGraphs on a single server to make it easier
- to port them to an ArangoDB cluster later
----
-{{< tag "ArangoDB Enterprise Edition" "ArangoGraph" >}}
-
-## General idea
-
-You can create SmartGraphs and SatelliteGraphs in a single server instance and
-test them there. Internally, the graphs are General Graphs, supplemented by
-formal properties such as `isSmart` that, however, play no role in the behavior
-of the graphs. The same is true for the vertex and edge collections: they have
-the corresponding properties, but these are non-functional.
-
-After a test phase, you can dump such graphs and then restore them in a cluster
-instance. The graphs themselves and the vertex and edge collections obtain true
-SmartGraph or SatelliteGraph sharding properties as if they were created in the
-cluster.
-
-## The Procedure
-
-On a single server, create [SmartGraphs](management.md) or
-[SatelliteGraphs](../satellitegraphs/management.md) by using
-`arangosh` as usual. Then you can set all the cluster-relevant properties of
-graphs and collections (see the sketch after this list):
-
-- `numberOfShards`
-- `isSmart`
-- `isSatellite`
-- `replicationFactor`
-- `smartGraphAttribute`
-- `satellites`
-- `shardingStrategy`
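-
-A minimal sketch of this step, with hypothetical graph and collection names:
-
-```js
-var graph_module = require("@arangodb/smart-graph");
-var rel = graph_module._relation("isFriend", ["people"], ["people"]);
-// On a single server, this creates a General Graph that merely carries
-// the cluster-relevant properties for a later restore into a cluster
-graph_module._create("testGraph", [rel], [], {
-  smartGraphAttribute: "region",
-  numberOfShards: 9,
-  replicationFactor: 2
-});
-```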
-
-After that, you can [dump](../../components/tools/arangodump/examples.md) the graphs with
-`arangodump` as usual.
-
-[Restore](../../components/tools/arangorestore/examples.md) the dumped data into a running
-ArangoDB cluster. As a result, all cluster-relevant properties are restored
-correctly and affect sharding and performance as expected.
diff --git a/site/content/3.10/index-and-search/analyzers.md b/site/content/3.10/index-and-search/analyzers.md
deleted file mode 100644
index 8d2e041062..0000000000
--- a/site/content/3.10/index-and-search/analyzers.md
+++ /dev/null
@@ -1,1670 +0,0 @@
----
-title: Transforming data with Analyzers
-menuTitle: Analyzers
-weight: 160
-description: >-
- Analyzers allow you to transform data, for sophisticated text processing and
- searching, either standalone or in combination with Views and inverted indexes
----
-While AQL string functions allow for basic text manipulation, true text
-processing, including tokenization, language-specific word stemming, case
-conversion, and removal of diacritical marks (accents) from characters, only
-becomes possible with Analyzers.
-
-Analyzers parse input values and transform them into sets of sub-values,
-for example by breaking up text into words. If they are used in Views then
-the documents' attribute values of the linked collections are used as input
-and additional metadata is produced internally. The data can then be used for
-searching and sorting to provide the most appropriate match for the specified
-conditions, similar to queries to web search engines.
-
-Analyzers can be used on their own to tokenize and normalize strings in AQL
-queries with the [`TOKENS()` function](../aql/functions/string.md#tokens).
-The following example shows the creation of a custom Analyzer and how it
-transforms an example input:
-
-```js
----
-name: analyzerCustomTokens
-description: ''
----
-var analyzers = require("@arangodb/analyzers")
-var a = analyzers.save("custom", "text", {
- locale: "en",
- stopwords: ["a", "example"]
-}, []);
-db._query(`RETURN TOKENS("UPPER & lower, a Stemming Example.", "custom")`).toArray();
-~analyzers.remove(a.name);
-```
-
-How Analyzers process values depends on their type and configuration.
-The configuration comprises type-specific properties and a list of features.
-The features control what additional metadata is generated to augment View
-indexes, for example, to be able to rank results.
-
-Analyzers can be managed via an [HTTP API](../develop/http-api/analyzers.md) and through
-a [JavaScript module](../develop/javascript-api/analyzers.md).
-
-{{< youtube id="tbOTYL26reg" >}}
-
-## Value Handling
-
-While most of the Analyzer functionality is geared towards text processing,
-there is no restriction to strings as input data type when using them through
-Views or inverted indexes – your documents may have attributes of any data type
-after all.
-
-Strings are processed according to the Analyzer, whereas other primitive data
-types (`null`, `true`, `false`, numbers) are generally left unchanged.
-Exceptions are Analyzers that specifically work with other data types, like
-geo-spatial or query-based Analyzers.
-
-The elements of arrays are processed individually, regardless of the level of
-nesting, if you use Analyzers stand-alone. That is, strings are processed by the
-configured Analyzer(s) and other primitive values are returned as-is.
-This also applies if you use Analyzers in `arangosearch` Views, or in
-`search-alias` Views with inverted indexes that have the `searchField` option
-enabled. The array elements are unpacked, processed, and indexed individually.
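-
-For example, tokenizing an array stand-alone with the built-in `text_en`
-Analyzer processes each element individually:
-
-```js
-// Each string is tokenized on its own; the number 42 passes through as-is
-db._query(`RETURN TOKENS(["a Quick", "brown FOX", 42], "text_en")`).toArray();
-```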
-
-If you use inverted indexes with the `searchField` option disabled, optionally
-through `search-alias` Views, array elements are not unpacked by default. Most
-Analyzers do not accept arrays as input in this context. You can unpack one
-array level and let the configured Analyzer process the individual elements by
-using `[*]` as a suffix for a field in the index definition. Primitive values
-other than strings are indexed as-is.
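-
-A sketch of such an index definition, assuming a collection `articles` with a
-`tags` array attribute:
-
-```js
-// Unpack one array level so each element of "tags" is processed
-// by the built-in text_en Analyzer (searchField stays disabled)
-db.articles.ensureIndex({
-  type: "inverted",
-  name: "inv-tags",
-  fields: [ { name: "tags[*]", analyzer: "text_en" } ]
-});
-```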
-
-Analyzers cannot process objects as a whole. However, you can work with
-individual object attributes. You can use inverted indexes and Views to index
-specific object attributes or sub-attributes, or index all sub-attributes with
-the `includeAllFields` option enabled. Each non-object value is handled as
-described above. Sub-objects in arrays can be indexed, too (with limitations).
-However, only primitive values are added to the index. Arrays and objects
-cannot be searched for as a whole.
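-
-A sketch of indexing all sub-attributes via a View link, assuming a collection
-named `coll`:
-
-```js
-// Every (nested) attribute of documents in "coll" gets indexed;
-// non-object values are handled as described above
-var view = db._createView("objView", "arangosearch", {
-  links: { coll: { includeAllFields: true, analyzers: ["identity"] } }
-});
-```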
-
-Also see:
-- [`SEARCH` operation](../aql/high-level-operations/search.md) on how to query indexed
- values such as numbers and nested values
-- [`arangosearch` Views](arangosearch/arangosearch-views-reference.md) and
- [Inverted indexes](indexing/working-with-indexes/inverted-indexes.md) for details about how
- compound data types (arrays, objects) get indexed
-
-## Analyzer Names
-
-Each Analyzer has a name for identification with the following
-naming conventions:
-
-- The name must only consist of the letters `a` to `z` (both in lower and
- upper case), the numbers `0` to `9`, underscore (`_`) and dash (`-`) symbols.
-  This also means that non-ASCII names are not allowed.
-- It must always start with a letter.
-- The maximum allowed length of a name is 254 bytes.
-- Analyzer names are case-sensitive.
-
-Custom Analyzers are stored per database, in a system collection `_analyzers`.
-The names get prefixed with the database name and two colons, e.g.
-`myDB::customAnalyzer`. This does not apply to the globally available
-[built-in Analyzers](#built-in-analyzers), which are not stored in an
-`_analyzers` collection.
-
-Custom Analyzers stored in the `_system` database can be referenced in queries
-against other databases by specifying the prefixed name, e.g.
-`_system::customGlobalAnalyzer`. Analyzers stored in databases other than
-`_system` cannot be accessed from within another database, however.
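-
-For illustration, a sketch assuming the current database is called `myDB`:
-
-```js
-var analyzers = require("@arangodb/analyzers");
-var a = analyzers.save("customAnalyzer", "identity", {}, []);
-a.name; // "myDB::customAnalyzer"
-analyzers.remove(a.name); // clean up
-```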
-
-## Analyzer Types
-
-The following Analyzer types are available:
-
-- [`identity`](#identity): treats value as atom (no transformation)
-- [`delimiter`](#delimiter): splits into tokens at user-defined character
-- [`stem`](#stem): applies stemming to the value as a whole
-- [`norm`](#norm): applies normalization to the value as a whole
-- [`ngram`](#ngram): creates _n_-grams from value with user-defined lengths
-- [`text`](#text): tokenizes text strings into words, optionally with stemming,
- normalization, stop-word filtering and edge _n_-gram generation
-- [`segmentation`](#segmentation): tokenizes text in a language-agnostic manner,
- optionally with normalization
-- [`aql`](#aql): runs an AQL query to prepare tokens for index
-- [`pipeline`](#pipeline): chains multiple Analyzers
-- [`stopwords`](#stopwords): removes the specified tokens from the input
-- [`collation`](#collation): respects the alphabetic order of a language in range queries
-- [`minhash`](#minhash): applies another Analyzer and then a locality-sensitive
- hash function, to find candidates for set comparisons based on the
- Jaccard index (Enterprise Edition only)
-- [`classification`](#classification): classifies the input text using a
- word embedding model (Enterprise Edition only)
-- [`nearest_neighbors`](#nearest_neighbors): finds the nearest neighbors of the
- input text using a word embedding model (Enterprise Edition only)
-- [`geojson`](#geojson): breaks up a GeoJSON object into a set of indexable tokens
-- [`geo_s2`](#geo_s2): like `geojson` but offers more efficient formats for
- indexing geo-spatial data (Enterprise Edition only)
-- [`geopoint`](#geopoint): breaks up JSON data describing a coordinate pair into
- a set of indexable tokens
-
-The following table compares the Analyzers for **text processing**:
-
-Analyzer / Capability | Tokenization | Stemming | Normalization | _N_-grams
-:----------------------------------------:|:------------:|:--------:|:-------------:|:--------:
-[`stem`](#stem) | No | Yes | No | No
-[`norm`](#norm) | No | No | Yes | No
-[`ngram`](#ngram) | No | No | No | Yes
-[`text`](#text) | Yes | Yes | Yes | (Yes)
-[`segmentation`](#segmentation) | Yes | No | Yes | No
-
-The available normalizations are case conversion and accents/diacritics removal.
-The `segmentation` Analyzer only supports case conversion.
-
-The `text` Analyzer supports edge _n_-grams but not full _n_-grams.
-
-## Tokenization
-
-The `text` and `segmentation` Analyzer types tokenize text into words (or a
-comparable concept of a word). See
-[Word Boundaries](https://www.unicode.org/reports/tr29/#Word_Boundaries)
-in the Unicode Standard Annex #29 about Unicode text segmentation for details.
-
-These tokenizing Analyzers extract tokens, which removes characters like
-punctuation and whitespace. An exception is the [`segmentation` Analyzer](#segmentation)
-if you select `"graphic"` or `"all"` for the `break` option. They preserve `@`
-and `.` characters of email addresses, for instance. There are also exceptions
-with both Analyzer types for sequences like numbers, for which decimal and
-thousands separators (`.` and `,`) are preserved.
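-
-A sketch that contrasts the two behaviors, using the built-in `text_en`
-Analyzer and a made-up `segmentation` Analyzer:
-
-```js
-var analyzers = require("@arangodb/analyzers");
-var a = analyzers.save("segment_graphic_demo", "segmentation", { break: "graphic" }, []);
-// text_en splits the email address, segment_graphic_demo keeps
-// its @ and . characters intact
-db._query(`RETURN {
-  text: TOKENS("Write to mail@example.com", "text_en"),
-  graphic: TOKENS("Write to mail@example.com", "segment_graphic_demo")
-}`).toArray();
-analyzers.remove(a.name); // clean up
-```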
-
-## Normalization
-
-The `norm`, `text`, and `segmentation` Analyzer types allow you to convert the
-input text to all lowercase or all uppercase for normalization purposes, namely
-case insensitive search. Case folding is not supported. Also see
-[Case Mapping](https://unicode-org.github.io/icu/userguide/transforms/casemappings.html)
-in the ICU documentation.
-
-The `norm` and `text` Analyzer types also allow you to convert characters with
-diacritical marks to the base characters. This normalization enables
-accent-insensitive search.
-
-## Analyzer Features
-
-The *features* of an Analyzer determine what searching capabilities are
-available and are only applicable in the context of Views and inverted indexes.
-
-The valid values for the features are dependent on both the capabilities of
-the underlying Analyzer *type* and the query filtering and sorting functions that the
-result can be used with. For example, the `text` type produces
-`frequency` + `norm` + `position`, and the `PHRASE()` AQL function requires
-`frequency` + `position` to be available.
-
-{{< tip >}}
-You should only enable the features you require, as there is a cost associated
-with them. The metadata they produce needs to be computed and stored, requiring
-time and disk space.
-
-The examples in the documentation only set the required features for the shown
-examples, which is often none (empty array `[]` in the call of `analyzers.save()`).
-{{< /tip >}}
-
-The following *features* are supported (a brief example of enabling them
-follows the list):
-
-- **frequency**: track how often a term occurs.
- Required for [`PHRASE()`](../aql/functions/arangosearch.md#phrase),
- [`NGRAM_MATCH()`](../aql/functions/arangosearch.md#ngram_match),
- [`BM25()`](../aql/functions/arangosearch.md#bm25),
- [`TFIDF()`](../aql/functions/arangosearch.md#tfidf), and
- [`OFFSET_INFO()`](../aql/functions/arangosearch.md#offset_info).
-- **norm**: calculate and store the field normalization factor that is used to
- score fairer if the same term is repeated, reducing its importance.
- Required for [`BM25()`](../aql/functions/arangosearch.md#bm25)
- (except BM15) and [`TFIDF()`](../aql/functions/arangosearch.md#tfidf)
- (if called with normalization enabled). It is recommended to enable this
- feature for custom Analyzers.
-- **position**: enumerate the tokens for position-dependent queries. Required
- for [`PHRASE()`](../aql/functions/arangosearch.md#phrase),
- [`NGRAM_MATCH()`](../aql/functions/arangosearch.md#ngram_match), and
- [`OFFSET_INFO()`](../aql/functions/arangosearch.md#offset_info).
- If present, then the `frequency` feature is also required.
-- **offset**: enable search highlighting capabilities (Enterprise Edition only).
- Required for [`OFFSET_INFO()`](../aql/functions/arangosearch.md#offset_info).
- If present, then the `position` and `frequency` features are also required.
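-
-A brief sketch of enabling features (the Analyzer name is made up): only
-`frequency` and `position` are required for `PHRASE()`, and `norm` is added
-for fairer scoring.
-
-```js
-var analyzers = require("@arangodb/analyzers");
-// frequency + position enable PHRASE(), norm improves BM25 scoring
-var a = analyzers.save("text_phrase_en", "text", {
-  locale: "en"
-}, ["frequency", "norm", "position"]);
-analyzers.remove(a.name); // clean up
-```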
-
-## Analyzer Properties
-
-The valid attributes/values for the *properties* are dependent on what *type*
-is used. For example, the `delimiter` type needs to know the desired delimiting
-character(s), whereas the `text` type takes a locale, stop-words and more.
-
-### `identity`
-
-An Analyzer applying the `identity` transformation, i.e. returning the input
-unmodified.
-
-It does not support any *properties* and will ignore them.
-
-**Examples**
-
-Applying the identity Analyzer does not perform any transformation, hence
-the input is returned unaltered:
-
-```js
----
-name: analyzerIdentity
-description: ''
----
-db._query(`RETURN TOKENS("UPPER lower dïäcríticš", "identity")`).toArray();
-```
-
-### `delimiter`
-
-An Analyzer capable of breaking up delimited text into tokens as per
-[RFC 4180](https://tools.ietf.org/html/rfc4180)
-(without starting new records on newlines).
-
-The *properties* allowed for this Analyzer are an object with the following
-attributes:
-
-- `delimiter` (string): the delimiting character(s). The whole string is
- considered as one delimiter.
-
-You can wrap tokens in the input string in double quote marks to quote the
-delimiter. For example, a `delimiter` Analyzer that uses `,` as delimiter and an
-input string of `foo,"bar,baz"` results in the tokens `foo` and `bar,baz`
-instead of `foo`, `bar`, and `baz`.
-
-You can chain multiple `delimiter` Analyzers with a [`pipeline` Analyzer](#pipeline)
-to split by different delimiters.
-
-**Examples**
-
-Split input strings into tokens at hyphen-minus characters:
-
-```js
----
-name: analyzerDelimiter
-description: ''
----
-var analyzers = require("@arangodb/analyzers");
-var a = analyzers.save("delimiter_hyphen", "delimiter", {
- delimiter: "-"
-}, []);
-db._query(`RETURN TOKENS("some-delimited-words", "delimiter_hyphen")`).toArray();
-~analyzers.remove(a.name);
-```
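-
-The quoting behavior, sketched with an assumed comma delimiter:
-
-```js
-var analyzers = require("@arangodb/analyzers");
-var a = analyzers.save("delimiter_comma", "delimiter", { delimiter: "," }, []);
-// The quoted comma in "bar,baz" does not split the token
-db._query(`RETURN TOKENS('foo,"bar,baz"', "delimiter_comma")`).toArray();
-analyzers.remove(a.name); // clean up
-```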
-
-### `stem`
-
-An Analyzer capable of stemming the text, treated as a single token,
-for supported languages.
-
-The *properties* allowed for this Analyzer are an object with the following
-attributes:
-
-- `locale` (string): a locale in the format `language`, e.g. `"de"` or `"en"`.
- The locale is forwarded to the Snowball stemmer without checks.
- An invalid locale does not prevent the creation of the Analyzer.
- Also see [Supported Languages](#supported-languages).
-
-**Examples**
-
-Apply stemming to the input string as a whole:
-
-```js
----
-name: analyzerStem
-description: ''
----
-var analyzers = require("@arangodb/analyzers");
-var a = analyzers.save("stem_en", "stem", {
- locale: "en"
-}, []);
-db._query(`RETURN TOKENS("databases", "stem_en")`).toArray();
-~analyzers.remove(a.name);
-```
-
-### `norm`
-
-An Analyzer capable of normalizing the text, treated as a single
-token, i.e. case conversion and accent removal.
-
-The *properties* allowed for this Analyzer are an object with the following
-attributes:
-
-- `locale` (string): a locale in the format `language[_COUNTRY][_VARIANT]`
- (square brackets denote optional parts), e.g. `"de"`, `"en_US"`, or `"es__TRADITIONAL"`.
- See the [ICU Documentation](https://unicode-org.github.io/icu/userguide/locale/)
- for details. The locale is forwarded to ICU without checks.
- An invalid locale does not prevent the creation of the Analyzer.
- Also see [Supported Languages](#supported-languages).
-- `accent` (boolean, _optional_):
- - `true` to preserve accented characters (default)
- - `false` to convert accented characters to their base characters
-- `case` (string, _optional_):
- - `"lower"` to convert to all lower-case characters
- - `"upper"` to convert to all upper-case characters
- - `"none"` to not change character case (default)
-
-**Examples**
-
-Convert input string to all upper-case characters:
-
-```js
----
-name: analyzerNorm1
-description: ''
----
-var analyzers = require("@arangodb/analyzers");
-var a = analyzers.save("norm_upper", "norm", {
- locale: "en",
- case: "upper"
-}, []);
-db._query(`RETURN TOKENS("UPPER lower dïäcríticš", "norm_upper")`).toArray();
-~analyzers.remove(a.name);
-```
-
-Convert accented characters to their base characters:
-
-```js
----
-name: analyzerNorm2
-description: ''
----
-var analyzers = require("@arangodb/analyzers");
-var a = analyzers.save("norm_accent", "norm", {
- locale: "en",
- accent: false
-}, []);
-db._query(`RETURN TOKENS("UPPER lower dïäcríticš", "norm_accent")`).toArray();
-~analyzers.remove(a.name);
-```
-
-Convert input string to all lower-case characters and remove diacritics:
-
-```js
----
-name: analyzerNorm3
-description: ''
----
-var analyzers = require("@arangodb/analyzers");
-var a = analyzers.save("norm_accent_lower", "norm", {
- locale: "en",
- accent: false,
- case: "lower"
-}, []);
-db._query(`RETURN TOKENS("UPPER lower dïäcríticš", "norm_accent_lower")`).toArray();
-~analyzers.remove(a.name);
-```
-
-### `ngram`
-
-An Analyzer capable of producing _n_-grams from a specified input in a range of
-min..max (inclusive). Can optionally preserve the original input.
-
-This Analyzer type can be used to implement substring matching.
-Note that it slices the input based on bytes and not characters by default
-(*streamType*). The *"binary"* mode supports single-byte characters only;
-multi-byte UTF-8 characters raise an *Invalid UTF-8 sequence* query error.
-
-The *properties* allowed for this Analyzer are an object with the following
-attributes:
-
-- `min` (number): unsigned integer for the minimum _n_-gram length
-- `max` (number): unsigned integer for the maximum _n_-gram length
-- `preserveOriginal` (boolean):
- - `true` to include the original value as well
- - `false` to produce the _n_-grams based on *min* and *max* only
-- `startMarker` (string, _optional_): this value will be prepended to _n_-grams
- which include the beginning of the input. Can be used for matching prefixes.
- Choose a character or sequence as marker which does not occur in the input.
-- `endMarker` (string, _optional_): this value will be appended to _n_-grams
- which include the end of the input. Can be used for matching suffixes.
- Choose a character or sequence as marker which does not occur in the input.
-- `streamType` (string, _optional_): type of the input stream
- - `"binary"`: one byte is considered as one character (default)
- - `"utf8"`: one Unicode codepoint is treated as one character
-
-**Examples**
-
-With *min* = `4` and *max* = `5`, the Analyzer will produce the following
-_n_-grams for the input string `"foobar"`:
-- `"foob"`
-- `"fooba"`
-- `"foobar"` (if *preserveOriginal* is enabled)
-- `"ooba"`
-- `"oobar"`
-- `"obar"`
-
-An input string `"foo"` will not produce any _n_-gram unless *preserveOriginal*
-is enabled, because it is shorter than the *min* length of 4.
-
-The above example with *startMarker* = `"^"` and *endMarker* = `"$"` would
-produce the following:
-- `"^foob"`
-- `"^fooba"`
-- `"^foobar"` (if *preserveOriginal* is enabled)
-- `"foobar$"` (if *preserveOriginal* is enabled)
-- `"ooba"`
-- `"oobar$"`
-- `"obar$"`
-
-Create and use a trigram Analyzer with `preserveOriginal` disabled:
-
-```js
----
-name: analyzerNgram1
-description: ''
----
-var analyzers = require("@arangodb/analyzers");
-var a = analyzers.save("trigram", "ngram", {
- min: 3,
- max: 3,
- preserveOriginal: false,
- streamType: "utf8"
-}, []);
-db._query(`RETURN TOKENS("foobar", "trigram")`).toArray();
-~analyzers.remove(a.name);
-```
-
-Create and use a bigram Analyzer with `preserveOriginal` enabled and with start
-and stop markers:
-
-```js
----
-name: analyzerNgram2
-description: ''
----
-var analyzers = require("@arangodb/analyzers");
-var a = analyzers.save("bigram_markers", "ngram", {
- min: 2,
- max: 2,
- preserveOriginal: true,
- startMarker: "^",
- endMarker: "$",
- streamType: "utf8"
-}, []);
-db._query(`RETURN TOKENS("foobar", "bigram_markers")`).toArray();
-~analyzers.remove(a.name);
-```
-
-### `text`
-
-An Analyzer capable of breaking up strings into individual words while also
-optionally filtering out stop-words, extracting word stems, applying
-case conversion and accent removal.
-
-The *properties* allowed for this Analyzer are an object with the following
-attributes:
-
-- `locale` (string): a locale in the format `language[_COUNTRY][_VARIANT]`
- (square brackets denote optional parts), e.g. `"de"`, `"en_US"`, or `"es__TRADITIONAL"`.
- See the [ICU Documentation](https://unicode-org.github.io/icu/userguide/locale/)
- for details. The locale is forwarded to ICU without checks.
- An invalid locale does not prevent the creation of the Analyzer.
- Also see [Supported Languages](#supported-languages).
-- `accent` (boolean, _optional_):
- - `true` to preserve accented characters
- - `false` to convert accented characters to their base characters (default)
-- `case` (string, _optional_):
- - `"lower"` to convert to all lower-case characters (default)
- - `"upper"` to convert to all upper-case characters
- - `"none"` to not change character case
-- `stemming` (boolean, _optional_):
- - `true` to apply stemming on returned words (default)
- - `false` to leave the tokenized words as-is
-- `edgeNgram` (object, _optional_): if present, then edge _n_-grams are generated
- for each token (word). That is, the start of the _n_-gram is anchored to the
- beginning of the token, whereas the `ngram` Analyzer would produce all
- possible substrings from a single input token (within the defined length
- restrictions). Edge _n_-grams can be used to cover word-based auto-completion
- queries with an index, for which you should set the following other options:
- `accent: false`, `case: "lower"` and most importantly `stemming: false`.
- - `min` (number, _optional_): minimal _n_-gram length
- - `max` (number, _optional_): maximal _n_-gram length
- - `preserveOriginal` (boolean, _optional_): whether to include the original
- token even if its length is less than *min* or greater than *max*
-- `stopwords` (array, _optional_): an array of strings with words to omit
- from result. Default: load words from `stopwordsPath`. To disable stop-word
- filtering provide an empty array `[]`. If both `stopwords` and
- `stopwordsPath` are provided then both word sources are combined.
-- `stopwordsPath` (string, _optional_): path with a *language* sub-directory
- (e.g. `en` for a locale `en_US`) containing files with words to omit.
- Each word has to be on a separate line. Everything after the first whitespace
- character on a line will be ignored and can be used for comments. The files
- can be named arbitrarily and have any file extension (or none).
-
- Default: if no path is provided then the value of the environment variable
- `IRESEARCH_TEXT_STOPWORD_PATH` is used to determine the path, or if it is
- undefined then the current working directory is assumed. If the `stopwords`
- attribute is provided then no stop-words are loaded from files, unless an
-  explicit `stopwordsPath` is also provided.
-
- Note that if the `stopwordsPath` cannot be accessed, is missing language
- sub-directories or has no files for a language required by an Analyzer,
- then the creation of a new Analyzer is refused. If such an issue is
- discovered for an existing Analyzer during startup then the server will
- abort with a fatal error.
-
-The Analyzer uses a fixed order of operations:
-
-1. Tokenization
-2. Accent removal (if `accent` is set to `false`)
-3. Case conversion (unless `case` is set to `none`)
-4. Stop word removal (if any are defined)
-5. Word stemming (if `stemming` is set to `true`)
-
-If you require a different order, consider using a [`pipeline` Analyzer](#pipeline).
-
-Stop words are removed after case/accent operations but before stemming.
-The reason is that stemming could map multiple words to the same one, and you
-would not be able to filter out specific words only.
-
-The case/accent operations are not applied to the stop words for performance
-reasons. You need to pre-process them accordingly, for example, using the
-[`TOKENS()` function](../aql/functions/string.md#tokens) with a
-[`text` Analyzer](#text) that has the same `locale`, `case`, and `accent`
-settings as the planned `text` Analyzer, but with `stemming` set to `false` and
-`stopwords` set to `[]`.
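-
-A sketch of this pre-processing, with made-up Analyzer names:
-
-```js
-var analyzers = require("@arangodb/analyzers");
-// Same locale/case/accent settings as the planned Analyzer,
-// but without stemming and stop words
-var pre = analyzers.save("text_en_prenorm", "text", {
-  locale: "en", case: "lower", accent: false, stemming: false, stopwords: []
-}, []);
-var stopwords = db._query(
-  `RETURN FLATTEN(TOKENS(["The", "AND"], "text_en_prenorm"))`
-).toArray()[0]; // [ "the", "and" ]
-var a = analyzers.save("text_en_custom", "text", {
-  locale: "en", case: "lower", accent: false, stopwords: stopwords
-}, []);
-analyzers.remove(a.name); // clean up
-analyzers.remove(pre.name);
-```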
-
-**Examples**
-
-The built-in `text_en` Analyzer has stemming enabled (note the word endings):
-
-```js
----
-name: analyzerTextStem
-description: ''
----
-db._query(`RETURN TOKENS("Crazy fast NoSQL-database!", "text_en")`).toArray();
-```
-
-You may create a custom Analyzer with the same configuration but with stemming
-disabled like this:
-
-```js
----
-name: analyzerTextNoStem
-description: ''
----
-var analyzers = require("@arangodb/analyzers")
-var a = analyzers.save("text_en_nostem", "text", {
- locale: "en",
- case: "lower",
- accent: false,
- stemming: false,
- stopwords: []
-}, [])
-db._query(`RETURN TOKENS("Crazy fast NoSQL-database!", "text_en_nostem")`).toArray();
-~analyzers.remove(a.name);
-```
-
-Custom text Analyzer with the edge _n_-grams capability and normalization enabled,
-stemming disabled and `"the"` defined as stop-word to exclude it:
-
-```js
----
-name: analyzerTextEdgeNgram
-description: ''
----
-~var analyzers = require("@arangodb/analyzers")
-var a = analyzers.save("text_edge_ngrams", "text", {
- edgeNgram: { min: 3, max: 8, preserveOriginal: true },
- locale: "en",
- case: "lower",
- accent: false,
- stemming: false,
- stopwords: [ "the" ]
-}, []);
-db._query(`RETURN TOKENS(
- "The quick brown fox jumps over the dogWithAVeryLongName",
- "text_edge_ngrams"
-)`).toArray();
-~analyzers.remove(a.name);
-```
-
-### `collation`
-
-Introduced in: v3.9.0
-
-An Analyzer capable of converting the input into a set of language-specific
-tokens. This makes comparisons follow the rules of the respective language,
-most notably in range queries against Views.
-
-The *properties* allowed for this Analyzer are an object with the following
-attributes:
-
-- `locale` (string): a locale in the format
- `language[_COUNTRY][_VARIANT][@keywords]` (square brackets denote optional parts),
- e.g. `"de"`, `"en_US"`, `"es__TRADITIONAL"`, or `fr@collation=phonebook`. See the
- [ICU Documentation](https://unicode-org.github.io/icu/userguide/locale/)
- for details. The locale is forwarded to ICU without checks.
- An invalid locale does not prevent the creation of the Analyzer.
- Also see [Supported Languages](#supported-languages).
-
-**Examples**
-
-In Swedish, the letter `å` (note the small circle above the `a`) comes after
-`z`. Other languages treat it like a regular `a`, putting it before `b`.
-The example below creates two `collation` Analyzers, one with an English locale
-(`en`) and one with a Swedish locale (`sv`). It then demonstrates the
-difference in alphabetical order using a simple range query that returns
-letters before `c`:
-
-```js
----
-name: analyzerCollation
-description: ''
----
-var analyzers = require("@arangodb/analyzers");
-var en = analyzers.save("collation_en", "collation", { locale: "en" }, []);
-var sv = analyzers.save("collation_sv", "collation", { locale: "sv" }, []);
-var test = db._create("test");
-var docs = db.test.save([
- { text: "a" },
- { text: "å" },
- { text: "b" },
- { text: "z" },
-]);
-var view = db._createView("view", "arangosearch",
- { links: { test: { analyzers: [ "collation_en", "collation_sv" ], includeAllFields: true }}});
-~assert(db._query(`FOR d IN view COLLECT WITH COUNT INTO c RETURN c`).toArray()[0] === 4);
-db._query("FOR doc IN view SEARCH ANALYZER(doc.text < TOKENS('c', 'collation_en')[0], 'collation_en') RETURN doc.text").toArray();
-db._query("FOR doc IN view SEARCH ANALYZER(doc.text < TOKENS('c', 'collation_sv')[0], 'collation_sv') RETURN doc.text").toArray();
-~db._dropView(view.name());
-~db._drop(test.name());
-~analyzers.remove(en.name);
-~analyzers.remove(sv.name);
-```
-
-### `aql`
-
-Introduced in: v3.8.0
-
-An Analyzer capable of running a restricted AQL query to perform
-data manipulation / filtering.
-
-The query must not access the storage engine. This means no `FOR` loops over
-collections or Views, no use of the `DOCUMENT()` function, no graph traversals.
-AQL functions are allowed as long as they do not involve Analyzers (`TOKENS()`,
-`PHRASE()`, `NGRAM_MATCH()`, `ANALYZER()` etc.) or data access, and if they can
-be run on DB-Servers in case of a cluster deployment. User-defined functions
-are not permitted.
-
-The input data is provided to the query via a bind parameter `@param`.
-It is always a string. The AQL query is invoked for each token in case of
-multiple input tokens, such as an array of strings.
-
-The output can be one or multiple tokens (top-level result elements). They get
-converted to the configured `returnType`, either booleans, numbers or strings
-(default).
-
-{{< tip >}}
-If `returnType` is `"number"` or `"bool"` then it is unnecessary to set this
-AQL Analyzer as context Analyzer with `ANALYZER()` in View queries. You can
-compare indexed fields to numeric values, `true`, or `false` directly, because
-such values bypass Analyzer processing.
-{{< /tip >}}
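-
-For instance, a hypothetical Analyzer (all names made up) that emits the
-length of the input string as a number:
-
-```js
-var analyzers = require("@arangodb/analyzers");
-var a = analyzers.save("str_length", "aql", {
-  queryString: "RETURN CHAR_LENGTH(@param)",
-  returnType: "number"
-}, []);
-// Fields indexed with it could be compared to numbers directly in SEARCH
-db._query(`RETURN TOKENS("foobar", "str_length")`).toArray(); // [ [ 6 ] ]
-analyzers.remove(a.name); // clean up
-```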
-
-The *properties* allowed for this Analyzer are an object with the following
-attributes:
-
-- `queryString` (string): AQL query to be executed
-- `collapsePositions` (boolean):
- - `true`: set the position to 0 for all members of the query result array
- - `false` (default): set the position corresponding to the index of the
- result array member
-- `keepNull` (boolean):
- - `true` (default): treat `null` like an empty string
- - `false`: discard `null`s from View index. Can be used for index filtering
- (i.e. make your query return null for unwanted data). Note that empty
- results are always discarded.
-- `batchSize` (integer): number between 1 and 1000 (default = 1) that
- determines the batch size for reading data from the query. In general, a
- single token is expected to be returned. However, if the query is expected
- to return many results, then increasing `batchSize` trades memory for
- performance.
-- `memoryLimit` (integer): memory limit for query execution in bytes
-  (default: 1048576 = 1 MB; maximum: 33554432 = 32 MB)
-- `returnType` (string): data type of the returned tokens. If the indicated
- type does not match the actual type then an implicit type conversion is
- applied (see [TO_STRING()](../aql/functions/type-check-and-cast.md#to_string),
- [TO_NUMBER()](../aql/functions/type-check-and-cast.md#to_number),
- [TO_BOOL()](../aql/functions/type-check-and-cast.md#to_bool))
- - `"string"` (default): convert emitted tokens to strings
- - `"number"`: convert emitted tokens to numbers
- - `"bool"`: convert emitted tokens to booleans
-
-**Examples**
-
-Soundex Analyzer for a phonetically similar term search:
-
-```js
----
-name: analyzerAqlSoundex
-description: ''
----
-var analyzers = require("@arangodb/analyzers");
-var a = analyzers.save("soundex", "aql", { queryString: "RETURN SOUNDEX(@param)" }, []);
-db._query("RETURN TOKENS('ArangoDB', 'soundex')").toArray();
-~analyzers.remove(a.name);
-```
-
-Concatenating Analyzer for conditionally adding a custom prefix or suffix:
-
-```js
----
-name: analyzerAqlConcat
-description: ''
----
-var analyzers = require("@arangodb/analyzers");
-var a = analyzers.save("concat", "aql", { queryString:
- "RETURN LOWER(LEFT(@param, 5)) == 'inter' ? CONCAT(@param, 'ism') : CONCAT('inter', @param)"
-}, []);
-db._query("RETURN TOKENS('state', 'concat')").toArray();
-db._query("RETURN TOKENS('international', 'concat')").toArray();
-~analyzers.remove(a.name);
-```
-
-Filtering Analyzer that ignores unwanted data based on the prefix `"ir"`,
-with `keepNull: false` and explicitly returning `null`:
-
-```js
----
-name: analyzerAqlFilterNull
-description: ''
----
-var analyzers = require("@arangodb/analyzers");
-var a = analyzers.save("filter", "aql", { keepNull: false, queryString:
- "RETURN LOWER(LEFT(@param, 2)) == 'ir' ? null : @param"
-}, []);
-db._query("RETURN TOKENS('regular', 'filter')").toArray();
-db._query("RETURN TOKENS('irregular', 'filter')").toArray();
-~analyzers.remove(a.name);
-```
-
-Filtering Analyzer that discards unwanted data based on the prefix `"ir"`,
-using a filter for an empty result, which is discarded from the View index even
-without `keepNull: false`:
-
-```js
----
-name: analyzerAqlFilter
-description: ''
----
-var analyzers = require("@arangodb/analyzers");
-var a = analyzers.save("filter", "aql", { queryString:
- "FILTER LOWER(LEFT(@param, 2)) != 'ir' RETURN @param"
-}, []);
-var coll = db._create("coll");
-var doc1 = db.coll.save({ value: "regular" });
-var doc2 = db.coll.save({ value: "irregular" });
-var view = db._createView("view", "arangosearch",
- { links: { coll: { fields: { value: { analyzers: ["filter"] }}}}})
-~assert(db._query(`FOR d IN view COLLECT WITH COUNT INTO c RETURN c`).toArray()[0] > 0);
-db._query("FOR doc IN view SEARCH ANALYZER(doc.value IN ['regular', 'irregular'], 'filter') RETURN doc").toArray();
-~db._dropView(view.name())
-~analyzers.remove(a.name);
-~db._drop(coll.name());
-```
-
-Custom tokenization with `collapsePositions` on and off:
-The input string `"A-B-C-D"` is split into an array of strings
-`["A", "B", "C", "D"]`. The position metadata (as used by the `PHRASE()`
-function) is set to 0 for all four strings if `collapsePositions` is enabled.
-Otherwise the position is set to the respective array index, 0 for `"A"`,
-1 for `"B"` and so on.
-
-| `collapsePositions` | A | B | C | D |
-|--------------------:|:-:|:-:|:-:|:-:|
-| `true` | 0 | 0 | 0 | 0 |
-| `false` | 0 | 1 | 2 | 3 |
-
-```js
----
-name: analyzerAqlCollapse
-description: ''
----
-var analyzers = require("@arangodb/analyzers");
-var a1 = analyzers.save("collapsed", "aql", { collapsePositions: true, queryString:
- "FOR d IN SPLIT(@param, '-') RETURN d"
-}, ["frequency", "position"]);
-var a2 = analyzers.save("uncollapsed", "aql", { collapsePositions: false, queryString:
- "FOR d IN SPLIT(@param, '-') RETURN d"
-}, ["frequency", "position"]);
-var coll = db._create("coll");
-var doc = db.coll.save({ text: "A-B-C-D" });
-var view = db._createView("view", "arangosearch",
- { links: { coll: { analyzers: [ "collapsed", "uncollapsed" ], includeAllFields: true }}});
-~assert(db._query(`FOR d IN view COLLECT WITH COUNT INTO c RETURN c`).toArray()[0] === 1);
-db._query("FOR d IN view SEARCH PHRASE(d.text, {TERM: 'B'}, 1, {TERM: 'D'}, 'uncollapsed') RETURN d");
-db._query("FOR d IN view SEARCH PHRASE(d.text, {TERM: 'B'}, -1, {TERM: 'D'}, 'uncollapsed') RETURN d");
-db._query("FOR d IN view SEARCH PHRASE(d.text, {TERM: 'B'}, 1, {TERM: 'D'}, 'collapsed') RETURN d");
-db._query("FOR d IN view SEARCH PHRASE(d.text, {TERM: 'B'}, -1, {TERM: 'D'}, 'collapsed') RETURN d");
-~db._dropView(view.name());
-~analyzers.remove(a1.name);
-~analyzers.remove(a2.name);
-~db._drop(coll.name());
-```
-
-The position data is not directly exposed, but you can see its effects through
-the `PHRASE()` function. With uncollapsed positions, there is one token between
-`"B"` and `"D"` to skip. With collapsed positions, both tokens share the same
-position, so a skip offset of negative one is needed to match them.
-
-### `pipeline`
-
-Introduced in: v3.8.0
-
-An Analyzer capable of chaining effects of multiple Analyzers into one.
-The pipeline is a list of Analyzers, where the output of an Analyzer is passed
-to the next one for further processing. The final token value is determined by
-the last Analyzer in the pipeline.
-
-The Analyzer is designed for cases like the following:
-- Normalize text for a case insensitive search and apply _n_-gram tokenization
-- Split input with `delimiter` Analyzer, followed by stemming with the `stem`
- Analyzer
-
-The *properties* allowed for this Analyzer are an object with the following
-attributes:
-
-- `pipeline` (array): an array of Analyzer definition-like objects with
- `type` and `properties` attributes
-
-{{< info >}}
-- You cannot use Analyzers of the types `geopoint`, `geojson`, and `geo_s2` in pipelines.
- These Analyzers require additional postprocessing and can only be applied to
- document fields directly.
-- The output data type of an Analyzer needs to be compatible with the input
- data type of the next Analyzer in the chain. The `aql` Analyzer, in particular,
- has a `returnType` property, and if you set it to `number` or `bool`, the
- subsequent Analyzer in the pipeline needs to support this data type as input.
- Most Analyzers expect string inputs and are thus incompatible with such a setup.
-{{< /info >}}
-
-**Examples**
-
-Normalize to all uppercase and compute bigrams:
-
-```js
----
-name: analyzerPipelineUpperNgram
-description: ''
----
-var analyzers = require("@arangodb/analyzers");
-var a = analyzers.save("ngram_upper", "pipeline", { pipeline: [
- { type: "norm", properties: { locale: "en", case: "upper" } },
- { type: "ngram", properties: { min: 2, max: 2, preserveOriginal: false, streamType: "utf8" } }
-] }, []);
-db._query(`RETURN TOKENS("Quick brown foX", "ngram_upper")`).toArray();
-~analyzers.remove(a.name);
-```
-
-Split at delimiting characters `,` and `;`, then stem the tokens:
-
-```js
----
-name: analyzerPipelineDelimiterStem
-description: ''
----
-var analyzers = require("@arangodb/analyzers");
-var a = analyzers.save("delimiter_stem", "pipeline", { pipeline: [
- { type: "delimiter", properties: { delimiter: "," } },
- { type: "delimiter", properties: { delimiter: ";" } },
- { type: "stem", properties: { locale: "en" } }
-] }, []);
-db._query(`RETURN TOKENS("delimited,stemmable;words", "delimiter_stem")`).toArray();
-~analyzers.remove(a.name);
-```
-
-### `stopwords`
-
-Introduced in: v3.8.1
-
-An Analyzer capable of removing specified tokens from the input.
-
-It uses binary comparison to determine if an input token should be discarded.
-It checks for exact matches only. If the input merely contains a substring
-that matches one of the defined stop words, it is not discarded. Likewise,
-longer inputs that start with a stop word are not discarded.
-
-The *properties* allowed for this Analyzer are an object with the following
-attributes:
-
-- `stopwords` (array): array of strings that describe the tokens to
- be discarded. The interpretation of each string depends on the value of
- the `hex` parameter.
-- `hex` (boolean): If false (default), then each string in `stopwords` is used
- verbatim. If true, then the strings need to be hex-encoded. This allows for
- removing tokens that contain non-printable characters. To encode UTF-8
- strings to hex strings you can use e.g.
- - AQL:
- ```aql
- FOR token IN ["and","the"] RETURN TO_HEX(token)
- ```
- - arangosh / Node.js:
- ```js
- ["and","the"].map(token => Buffer(token).toString("hex"))
- ```
- - Modern browser:
- ```js
- ["and","the"].map(token => Array.from(new TextEncoder().encode(token), byte => byte.toString(16).padStart(2, "0")).join(""))
- ```
-
-**Examples**
-
-Create and use a `stopwords` Analyzer that removes the tokens `and` and `the`.
-The stop word array with hex-encoded strings for this looks like
-`["616e64","746865"]` (`a` = 0x61, `n` = 0x6e, `d` = 0x64 and so on).
-Note that `a` and `theater` are not removed, because there is no exact match
-with either of the stop words `and` and `the`:
-
-```js
----
-name: analyzerStopwords
-description: ''
----
-var analyzers = require("@arangodb/analyzers");
-var a = analyzers.save("stop", "stopwords", {
- stopwords: ["616e64","746865"], hex: true
-}, []);
-db._query("RETURN FLATTEN(TOKENS(SPLIT('the fox and the dog and a theater', ' '), 'stop'))").toArray();
-~analyzers.remove(a.name);
-```
-
-Create and use an Analyzer pipeline that normalizes the input (convert to
-lower-case and base characters) and then discards the stopwords `and` and `the`:
-
-```js
----
-name: analyzerPipelineStopwords
-description: ''
----
-var analyzers = require("@arangodb/analyzers");
-var a = analyzers.save("norm_stop", "pipeline", { "pipeline": [
- { type: "norm", properties: { locale: "en", accent: false, case: "lower" } },
- { type: "stopwords", properties: { stopwords: ["and","the"], hex: false } },
-]}, []);
-db._query("RETURN FLATTEN(TOKENS(SPLIT('The fox AND the dog äñḏ a ţhéäter', ' '), 'norm_stop'))").toArray();
-~analyzers.remove(a.name);
-```
-
-### `segmentation`
-
-Introduced in: v3.9.0
-
-An Analyzer capable of breaking up the input text into tokens in a
-language-agnostic manner as per
-[Unicode Standard Annex #29](https://unicode.org/reports/tr29),
-making it suitable for mixed language strings. It can optionally preserve all
-non-whitespace or all characters instead of keeping alphanumeric characters only,
-as well as apply case conversion.
-
-The *properties* allowed for this Analyzer are an object with the following
-attributes:
-
-- `break` (string, _optional_):
- - `"all"`: return all tokens
- - `"alpha"`: return tokens composed of alphanumeric characters only (default).
- Alphanumeric characters are Unicode codepoints from the Letter and Number
- categories, see
- [Unicode Technical Note #36](https://www.unicode.org/notes/tn36/).
- - `"graphic"`: return tokens composed of non-whitespace characters only.
- Note that the list of whitespace characters does not include line breaks:
- - `U+0009` Character Tabulation
- - `U+0020` Space
- - `U+0085` Next Line
- - `U+00A0` No-break Space
- - `U+1680` Ogham Space Mark
- - `U+2000` En Quad
- - `U+2028` Line Separator
- - `U+202F` Narrow No-break Space
- - `U+205F` Medium Mathematical Space
- - `U+3000` Ideographic Space
-- `case` (string, _optional_):
- - `"lower"` to convert to all lower-case characters (default)
- - `"upper"` to convert to all upper-case characters
- - `"none"` to not change character case
-
-**Examples**
-
-Create different `segmentation` Analyzers to show the behavior of the different
-`break` options:
-
-```js
----
-name: analyzerSegmentationBreak
-description: ''
----
-var analyzers = require("@arangodb/analyzers");
-var all = analyzers.save("segment_all", "segmentation", { break: "all" }, []);
-var alpha = analyzers.save("segment_alpha", "segmentation", { break: "alpha" }, []);
-var graphic = analyzers.save("segment_graphic", "segmentation", { break: "graphic" }, []);
-db._query(`LET str = 'Test\twith An_EMAIL-address+123@example.org\n蝴蝶。\u2028бутерброд'
- RETURN {
- "all": TOKENS(str, 'segment_all'),
- "alpha": TOKENS(str, 'segment_alpha'),
- "graphic": TOKENS(str, 'segment_graphic'),
- }
-`).toArray();
-~analyzers.remove(all.name);
-~analyzers.remove(alpha.name);
-~analyzers.remove(graphic.name);
-```
-
-### `minhash`
-
-{{< tag "ArangoDB Enterprise Edition" "ArangoGraph" >}}
-
-Introduced in: v3.10.0
-
-An Analyzer that computes so-called MinHash signatures using a
-locality-sensitive hash function. It applies an Analyzer of your choice before
-the hashing, for example, to break up text into words.
-
-The *properties* allowed for this Analyzer are an object with the following
-attributes:
-
-- `analyzer` (object, _required_): an Analyzer definition-like object with
- `type` and `properties` attributes
-- `numHashes` (number, _required_): the size of the MinHash signature. Must be
- greater or equal to `1`. The signature size defines the probabilistic error
- (`err = rsqrt(numHashes)`). For an error amount that does not exceed 5%
- (`0.05`), use a size of `1 / (0.05 * 0.05) = 400`.
-
-**Examples**
-
-Create a `minhash` Analyzer:
-
-```js
----
-name: analyzerMinHash
-description: ''
----
-var analyzers = require("@arangodb/analyzers");
-var analyzerMinHash = analyzers.save("minhash5", "minhash", { analyzer: { type: "segmentation", properties: { break: "alpha", case: "lower" } }, numHashes: 5 }, []);
-var analyzerSegment = analyzers.save("segment", "segmentation", { break: "alpha", case: "lower" }, []);
-db._query(`
- LET str1 = "The quick brown fox jumps over the lazy dog."
- LET str2 = "The fox jumps over the crazy dog."
- RETURN {
- approx: JACCARD(TOKENS(str1, "minhash5"), TOKENS(str2, "minhash5")),
- actual: JACCARD(TOKENS(str1, "segment"), TOKENS(str2, "segment"))
- }`).toArray();
-~analyzers.remove(analyzerMinHash.name);
-~analyzers.remove(analyzerSegment.name);
-```
-
-### `classification`
-
-{{< tag "ArangoDB Enterprise Edition" "ArangoGraph" >}}
-
-Introduced in: v3.10.0
-
-{{< warning >}}
-This feature is experimental and under active development.
-The naming and interfaces may change at any time.
-Execution times are not representative of the final product.
-{{< /warning >}}
-
-An Analyzer capable of classifying tokens in the input text.
-
-It applies a user-provided [supervised fastText](https://fasttext.cc/docs/en/supervised-tutorial.html)
-word embedding model to classify the input text. It is able to classify
-individual tokens as well as entire inputs.
-
-The *properties* allowed for this Analyzer are an object with the following attributes:
-
-- `model_location` (string): the on-disk path to the trained fastText supervised model.
- Note: if you are running this in an ArangoDB cluster, this model must exist on
- every machine in the cluster.
-- `top_k` (number, optional): the number of class labels that will be produced
- per input (default: 1).
-- `threshold` (number, optional): the probability threshold above which a label
-  is assigned to an input. A fastText model produces a probability per class
-  label, and labels below this threshold are filtered out (default: `0.99`).
-
-**Examples**
-
-Create and use a `classification` Analyzer with a stored "cooking" classifier
-to classify items.
-
-```js
-var analyzers = require("@arangodb/analyzers");
-var classifier_single = analyzers.save("classifier_single", "classification", { "model_location": "/path_to_local_fasttext_model_directory/model_cooking.bin" }, []);
-var classifier_top_two = analyzers.save("classifier_double", "classification", { "model_location": "/path_to_local_fasttext_model_directory/model_cooking.bin", "top_k": 2 }, []);
-db._query(`LET str = "Which baking dish is best to bake a banana bread ?"
- RETURN {
- "all": TOKENS(str, "classifier_single"),
- "double": TOKENS(str, "classifier_double")
- }
- `);
-```
-
-```json
-[
- {
- "all" : [
- "__label__baking"
- ],
- "double" : [
- "__label__baking",
- "__label__bananas"
- ]
- }
-]
-```
-
-### `nearest_neighbors`
-
-{{< tag "ArangoDB Enterprise Edition" "ArangoGraph" >}}
-
-Introduced in: v3.10.0
-
-{{< warning >}}
-This feature is experimental and under active development.
-The naming and interfaces may change at any time.
-Execution times are not representative of the final product.
-{{< /warning >}}
-
-An Analyzer capable of finding nearest neighbors of tokens in the input.
-
-It applies a user-provided [supervised fastText](https://fasttext.cc/docs/en/supervised-tutorial.html)
-word embedding model to retrieve nearest neighbor tokens in the text.
-It is able to find neighbors of individual tokens as well as entire input strings.
-For entire input strings, the Analyzer will return nearest neighbors for each
-token within the input string.
-
-The *properties* allowed for this Analyzer are an object with the following attributes:
-
-- `model_location` (string): the on-disk path to the trained fastText supervised model.
- Note: if you are running this in an ArangoDB cluster, this model must exist on
- every machine in the cluster.
-- `top_k` (number, optional): the number of class labels that will be produced
- per input (default: `1`).
-
-**Examples**
-
-Create and use a `nearest_neighbors` Analyzer with a stored "cooking" classifier
-to find similar terms.
-
-```js
-var analyzers = require("@arangodb/analyzers");
-var nn_single = analyzers.save("nn_single", "nearest_neighbors", { "model_location": "/path_to_local_fasttext_model_directory/model_cooking.bin" }, []);
-var nn_top_two = analyzers.save("nn_double", "nearest_neighbors", { "model_location": "/path_to_local_fasttext_model_directory/model_cooking.bin", "top_k": 2 }, []);
-db._query(`LET str = "salt, oil"
- RETURN {
- "all": TOKENS(str, "nn_single"),
- "double": TOKENS(str, "nn_double")
- }
- `);
-```
-
-```json
-[
- {
- "all" : [
- "pepper",
- "olive"
- ],
- "double" : [
- "pepper",
- "table",
- "olive",
- "avocado"
- ]
- }
-]
-```
-
-### `geojson`
-
-Introduced in: v3.8.0
-
-An Analyzer capable of breaking up a GeoJSON object or coordinate array in
-`[longitude, latitude]` order into a set of indexable tokens for further usage
-with [ArangoSearch Geo functions](../aql/functions/arangosearch.md#geo-functions).
-
-The Analyzer can be used for two different coordinate representations:
-
-- a GeoJSON feature like a Point or Polygon, using a JSON object like the following:
-
- ```js
- {
- "type": "Point",
- "coordinates": [ -73.97, 40.78 ] // [ longitude, latitude ]
- }
- ```
-
-- a coordinate array with two numbers as elements in the following format:
-
- ```js
- [ -73.97, 40.78 ] // [ longitude, latitude ]
- ```
-
-The *properties* allowed for this Analyzer are an object with the following
-attributes:
-
-- `type` (string, _optional_):
- - `"shape"` (default): index all GeoJSON geometry types (Point, Polygon etc.)
- - `"centroid"`: compute and only index the centroid of the input geometry
- - `"point"`: only index GeoJSON objects of type Point, ignore all other
- geometry types
-- `options` (object, _optional_): options for fine-tuning geo queries.
- These options should generally remain unchanged
- - `maxCells` (number, _optional_): maximum number of S2 cells (default: 20)
- - `minLevel` (number, _optional_): the least precise S2 level (default: 4)
- - `maxLevel` (number, _optional_): the most precise S2 level (default: 23)
-- `legacy` (boolean, _optional_):
- This option controls how GeoJSON Polygons are interpreted (introduced in v3.10.5).
- Also see [Legacy Polygons](indexing/working-with-indexes/geo-spatial-indexes.md#legacy-polygons) and
- [GeoJSON interpretation](../aql/functions/geo.md#geojson-interpretation).
-
- - If `legacy` is `true`, the smaller of the two regions defined by a
- linear ring is interpreted as the interior of the ring and a ring can at most
- enclose half the Earth's surface.
- - If `legacy` is `false`, the area to the left of the boundary ring's
- path is considered to be the interior and a ring can enclose the entire
- surface of the Earth.
-
- The default is `false`.
-
- {{< warning >}}
- If you use `geojson` Analyzers and upgrade from a version below 3.10 to a
- version of 3.10 or higher, the interpretation of GeoJSON Polygons changes.
-
- If you have polygons in your data that mean to refer to a relatively small
- region but have the boundary running clockwise around the intended interior,
- they are interpreted as intended prior to 3.10, but from 3.10 onward, they are
- interpreted as "the other side" of the boundary.
-
- Whether a clockwise boundary specifies the complement of the small region
- intentionally or not cannot be determined automatically. Please test the new
- behavior manually.
-
- If you require the old behavior, upgrade to at least 3.10.5, drop your
- `geojson` Analyzers, and create new ones with `legacy` set to `true`.
- If these Analyzers are used in `arangosearch` Views, then they need to be
- dropped as well before dropping the Analyzers, and recreated after creating
- the new Analyzers.
- {{< /warning >}}
-
-You should not set any of the [Analyzer features](#analyzer-features) as they
-cannot be utilized for Geo Analyzers.
-
-**Examples**
-
-Create a collection with GeoJSON Points stored in an attribute `location`, a
-`geojson` Analyzer with default properties, and a View using the Analyzer.
-Then query for locations that are within a 3 kilometer radius of a given point
-and return the matched documents, including the calculated distance in meters.
-The stored coordinate pairs and the `GEO_POINT()` arguments are expected in
-longitude, latitude order:
-
-```js
----
-name: analyzerGeoJSON
-description: ''
----
-var analyzers = require("@arangodb/analyzers");
-var a = analyzers.save("geo_json", "geojson", {}, []);
-var coll = db._create("geo");
-var docs = db.geo.save([
- { location: { type: "Point", coordinates: [6.937, 50.932] } },
- { location: { type: "Point", coordinates: [6.956, 50.941] } },
- { location: { type: "Point", coordinates: [6.962, 50.932] } },
-]);
-var view = db._createView("geo_view", "arangosearch", {
- links: {
- geo: {
- fields: {
- location: {
- analyzers: ["geo_json"]
- }
- }
- }
- }
-});
-~assert(db._query(`FOR d IN geo_view COLLECT WITH COUNT INTO c RETURN c`).toArray()[0] === 3);
-db._query(`LET point = GEO_POINT(6.93, 50.94)
- FOR doc IN geo_view
- SEARCH ANALYZER(GEO_DISTANCE(doc.location, point) < 2000, "geo_json")
- RETURN MERGE(doc, { distance: GEO_DISTANCE(doc.location, point) })`).toArray();
-~db._dropView("geo_view");
-~analyzers.remove("geo_json", true);
-~db._drop("geo");
-```
-
-### `geo_s2`
-
-{{< tag "ArangoDB Enterprise Edition" "ArangoGraph" >}}
-
-Introduced in: v3.10.5
-
-An Analyzer capable of breaking up a GeoJSON object or coordinate array in
-`[longitude, latitude]` order into a set of indexable tokens for further usage
-with [ArangoSearch Geo functions](../aql/functions/arangosearch.md#geo-functions).
-
-The Analyzer is similar to the `geojson` Analyzer, but it internally uses a
-format for storing the geo-spatial data that is more efficient. You can choose
-between different formats to make a tradeoff between the size on disk, the
-precision, and query performance.
-
-The Analyzer can be used for two different coordinate representations:
-
-- a GeoJSON feature like a Point or Polygon, using a JSON object like the following:
-
- ```js
- {
- "type": "Point",
- "coordinates": [ -73.97, 40.78 ] // [ longitude, latitude ]
- }
- ```
-
-- a coordinate array with two numbers as elements in the following format:
-
- ```js
- [ -73.97, 40.78 ] // [ longitude, latitude ]
- ```
-
-The *properties* allowed for this Analyzer are an object with the following
-attributes:
-
-- `format` (string, _optional_): the internal binary representation to use for
- storing the geo-spatial data in an index
- - `"latLngDouble"` (default): store each latitude and longitude value as an
- 8-byte floating-point value (16 bytes per coordinate pair). This format preserves
- numeric values exactly and is more compact than the VelocyPack format used
- by the `geojson` Analyzer.
- - `"latLngInt"`: store each latitude and longitude value as an 4-byte integer
- value (8 bytes per coordinate pair). This is the most compact format but the
- precision is limited to approximately 1 to 10 centimeters.
- - `"s2Point"`: store each longitude-latitude pair in the native format of
- Google S2 which is used for geo-spatial calculations (24 bytes per coordinate pair).
- This is not a particularly compact format, but it reduces the number of
- computations necessary when you execute geo-spatial queries.
- This format preserves numeric values exactly.
-- `type` (string, _optional_):
- - `"shape"` (default): index all GeoJSON geometry types (Point, Polygon etc.)
- - `"centroid"`: compute and only index the centroid of the input geometry
- - `"point"`: only index GeoJSON objects of type Point, ignore all other
- geometry types
-- `options` (object, _optional_): options for fine-tuning geo queries.
- These options should generally remain unchanged
- - `maxCells` (number, _optional_): maximum number of S2 cells (default: 20)
- - `minLevel` (number, _optional_): the least precise S2 level (default: 4)
- - `maxLevel` (number, _optional_): the most precise S2 level (default: 23)
-
-You should not set any of the [Analyzer features](#analyzer-features) as they
-cannot be utilized for Geo Analyzers.
-
-**Examples**
-
-Create a collection with GeoJSON Points stored in an attribute `location`, a
-`geo_s2` Analyzer with the `latLngInt` format, and a View using the Analyzer.
-Then query for locations that are within a 2 kilometer radius of a given point
-and return the matched documents, including the calculated distance in meters.
-The stored coordinate pairs and the `GEO_POINT()` arguments are expected in
-longitude, latitude order:
-
-```js
----
-name: analyzerGeoS2
-description: ''
----
-var analyzers = require("@arangodb/analyzers");
-var a = analyzers.save("geo_efficient", "geo_s2", { format: "latLngInt" }, []);
-var coll = db._create("geo");
-var docs = db.geo.save([
- { location: { type: "Point", coordinates: [6.937, 50.932] } },
- { location: { type: "Point", coordinates: [6.956, 50.941] } },
- { location: { type: "Point", coordinates: [6.962, 50.932] } },
-]);
-var view = db._createView("geo_view", "arangosearch", {
- links: {
- geo: {
- fields: {
- location: {
- analyzers: ["geo_efficient"]
- }
- }
- }
- }
-});
-~assert(db._query(`FOR d IN geo_view COLLECT WITH COUNT INTO c RETURN c`).toArray()[0] === 3);
-db._query(`LET point = GEO_POINT(6.93, 50.94)
- FOR doc IN geo_view
- SEARCH ANALYZER(GEO_DISTANCE(doc.location, point) < 2000, "geo_efficient")
- RETURN MERGE(doc, { distance: GEO_DISTANCE(doc.location, point) })`).toArray();
-~db._dropView("geo_view");
-~analyzers.remove("geo_efficient", true);
-~db._drop("geo");
-```
-
-The calculated distance between the reference point and the point stored in
-the second document is `1825.1307…`. If you change the search condition to
-`< 1825.1303`, then the document is still returned despite the distance being
-higher than this value. This is due to the precision limitations of the
-`latLngInt` format. The returned distance is unaffected because it is calculated
-independently of the Analyzer. If you use either of the other two formats, which
-preserve the exact coordinate values, then the document is filtered out as
-expected.
-
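-To observe this, you can lower the threshold to just below the calculated
-distance (a sketch, assuming the collection, Analyzer, and View from the
-`geo_s2` example above still exist):
-
-```aql
-LET point = GEO_POINT(6.93, 50.94)
-FOR doc IN geo_view
-  // The document at a distance of ~1825.13 m still matches due to latLngInt rounding
-  SEARCH ANALYZER(GEO_DISTANCE(doc.location, point) < 1825.1303, "geo_efficient")
-  RETURN doc.location
-```
-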
-### `geopoint`
-
-Introduced in: v3.8.0
-
-An Analyzer capable of breaking up a coordinate array in `[latitude, longitude]`
-order or a JSON object describing a coordinate pair using two separate attributes
-into a set of indexable tokens for further usage with
-[ArangoSearch Geo functions](../aql/functions/arangosearch.md#geo-functions).
-
-The Analyzer can be used for two different coordinate representations:
-
-- an array with two numbers as elements in the following format:
-
- ```js
- [ 40.78, -73.97 ] // [ latitude, longitude ]
- ```
-
-- two separate numeric attributes, one for latitude and one for longitude, as
- shown below:
-
- ```js
- { "location": { "lat": 40.78, "lon": -73.97 } }
- ```
-
- The attributes cannot be at the top level of the document, but must be nested
- like in the example, so that the Analyzer can be defined for the field
- `location` with the Analyzer properties
- `{ "latitude": ["lat"], "longitude": ["lon"] }`.
-
-The *properties* allowed for this Analyzer are an object with the following
-attributes:
-
-- `latitude` (array, _optional_): array of strings that describes the
- attribute path of the latitude value relative to the field for which the
- Analyzer is defined in the View
-- `longitude` (array, _optional_): array of strings that describes the
- attribute path of the longitude value relative to the field for which the
- Analyzer is defined in the View
-- `options` (object, _optional_): options for fine-tuning geo queries.
- These options should generally remain unchanged
- - `maxCells` (number, _optional_): maximum number of S2 cells (default: 20)
- - `minLevel` (number, _optional_): the least precise S2 level (default: 4)
- - `maxLevel` (number, _optional_): the most precise S2 level (default: 23)
-
-You should not set any of the [Analyzer features](#analyzer-features) as they
-cannot be utilized for Geo Analyzers.
-
-**Examples**
-
-Create a collection with coordinate pairs stored in an attribute `location`,
-a `geopoint` Analyzer with default properties, and a View using the Analyzer.
-Then query for locations that are within a 2 kilometer radius of a given point.
-The stored coordinate pairs are in latitude, longitude order, but `GEO_POINT()` and
-`GEO_DISTANCE()` expect longitude, latitude order:
-
-```js
----
-name: analyzerGeoPointPair
-description: ''
----
-var analyzers = require("@arangodb/analyzers");
-var a = analyzers.save("geo_pair", "geopoint", {}, []);
-var coll = db._create("geo");
-var docs = db.geo.save([
- { location: [50.932, 6.937] },
- { location: [50.941, 6.956] },
- { location: [50.932, 6.962] },
-]);
-var view = db._createView("geo_view", "arangosearch", {
- links: {
- geo: {
- fields: {
- location: {
- analyzers: ["geo_pair"]
- }
- }
- }
- }
-});
-~assert(db._query(`FOR d IN geo_view COLLECT WITH COUNT INTO c RETURN c`).toArray()[0] === 3);
-db._query(`LET point = GEO_POINT(6.93, 50.94)
- FOR doc IN geo_view
- SEARCH ANALYZER(GEO_DISTANCE(doc.location, point) < 2000, "geo_pair")
- RETURN MERGE(doc, { distance: GEO_DISTANCE([doc.location[1], doc.location[0]], point) })`).toArray();
-~db._dropView("geo_view");
-~analyzers.remove("geo_pair", true);
-~db._drop("geo");
-```
-
-Create a collection with coordinate pairs stored in an attribute `location` as
-separate nested attributes `lat` and `lng`, a `geopoint` Analyzer that
-specifies the attribute paths to the latitude and longitude attributes
-(relative to the `location` attribute), and a View using the Analyzer.
-Then query for locations that are within a 2 kilometer radius of a given point:
-
-```js
----
-name: analyzerGeoPointLatLng
-description: ''
----
-var analyzers = require("@arangodb/analyzers");
-var a = analyzers.save("geo_latlng", "geopoint", {
- latitude: ["lat"],
- longitude: ["lng"]
-}, []);
-var coll = db._create("geo");
-var docs = db.geo.save([
- { location: { lat: 50.932, lng: 6.937 } },
- { location: { lat: 50.941, lng: 6.956 } },
- { location: { lat: 50.932, lng: 6.962 } },
-]);
-var view = db._createView("geo_view", "arangosearch", {
- links: {
- geo: {
- fields: {
- location: {
- analyzers: ["geo_latlng"]
- }
- }
- }
- }
-});
-~assert(db._query(`FOR d IN geo_view COLLECT WITH COUNT INTO c RETURN c`).toArray()[0] === 3);
-db._query(`LET point = GEO_POINT(6.93, 50.94)
- FOR doc IN geo_view
- SEARCH ANALYZER(GEO_DISTANCE(doc.location, point) < 2000, "geo_latlng")
- RETURN MERGE(doc, { distance: GEO_DISTANCE([doc.location.lng, doc.location.lat], point) })`).toArray();
-~db._dropView("geo_view");
-~analyzers.remove("geo_latlng", true);
-~db._drop("geo");
-```
-
-## Built-in Analyzers
-
-There is a set of built-in Analyzers which are available by default for
-convenience and backward compatibility. They cannot be removed.
-
-The `identity` Analyzer has no properties and has the `frequency` and `norm`
-features enabled. The Analyzers of type `text` all tokenize strings with
-stemming enabled, no stop words configured, accent removal and case conversion
-to lowercase turned on, and the `frequency`, `norm`, and `position` features
-enabled:
-
-Name | Type | Locale (Language) | Case | Accent | Stemming | Stop words | Features |
------------|------------|-------------------|---------|---------|----------|------------|----------|
-`identity` | `identity` | | | | | | `["frequency", "norm"]`
-`text_de` | `text` | `de` (German) | `lower` | `false` | `true` | `[ ]` | `["frequency", "norm", "position"]`
-`text_en` | `text` | `en` (English) | `lower` | `false` | `true` | `[ ]` | `["frequency", "norm", "position"]`
-`text_es` | `text` | `es` (Spanish) | `lower` | `false` | `true` | `[ ]` | `["frequency", "norm", "position"]`
-`text_fi` | `text` | `fi` (Finnish) | `lower` | `false` | `true` | `[ ]` | `["frequency", "norm", "position"]`
-`text_fr` | `text` | `fr` (French) | `lower` | `false` | `true` | `[ ]` | `["frequency", "norm", "position"]`
-`text_it` | `text` | `it` (Italian) | `lower` | `false` | `true` | `[ ]` | `["frequency", "norm", "position"]`
-`text_nl` | `text` | `nl` (Dutch) | `lower` | `false` | `true` | `[ ]` | `["frequency", "norm", "position"]`
-`text_no` | `text` | `no` (Norwegian) | `lower` | `false` | `true` | `[ ]` | `["frequency", "norm", "position"]`
-`text_pt` | `text` | `pt` (Portuguese) | `lower` | `false` | `true` | `[ ]` | `["frequency", "norm", "position"]`
-`text_ru` | `text` | `ru` (Russian) | `lower` | `false` | `true` | `[ ]` | `["frequency", "norm", "position"]`
-`text_sv` | `text` | `sv` (Swedish) | `lower` | `false` | `true` | `[ ]` | `["frequency", "norm", "position"]`
-`text_zh` | `text` | `zh` (Chinese) | `lower` | `false` | `true` | `[ ]` | `["frequency", "norm", "position"]`
-
-Note that _locale_, _case_, _accent_, _stemming_ and _stopwords_ are Analyzer
-properties. `text_zh` does not have actual stemming support for Chinese despite
-what the property value suggests.
-
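-You can inspect the output of a built-in Analyzer with the `TOKENS()` function,
-for example in an AQL query (a sketch; the exact tokens depend on the Snowball
-stemmer):
-
-```aql
-RETURN TOKENS("Grüne Äpfel", "text_de")
-// e.g. lowercased, accent-free, stemmed tokens such as ["grun", "apfel"]
-```
-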
-## Supported Languages
-
-### Tokenization and Normalization
-
-Analyzers rely on [ICU](http://site.icu-project.org/) for
-tokenization and normalization, which is language-dependent.
-The ICU data file `icudtl.dat` that ArangoDB ships with contains information for
-a lot of languages, which are technically all supported.
-
-Setting an unsupported or invalid locale does not raise a warning or error.
-ICU falls back to a locale without the requested variant, country, or
-script, or uses its default locale if none of these are valid.
-
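-For example, a custom `text` Analyzer with an explicit locale can be created
-like this (a sketch; the name `text_fi_custom` is just an example):
-
-```js
-var analyzers = require("@arangodb/analyzers");
-analyzers.save("text_fi_custom", "text", {
-  locale: "fi",     // ICU locale used for tokenization and normalization
-  case: "lower",
-  accent: false,
-  stemming: true,
-  stopwords: []
-}, ["frequency", "norm", "position"]);
-```
-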
-{{< warning >}}
-The alphabetical order of characters is not taken into account by ArangoSearch,
-i.e. range queries in SEARCH operations against Views will not follow the
-language rules as per the defined Analyzer locale (except for the
-[`collation` Analyzer](#collation)) nor the server language
-(startup option `--default-language`)!
-Also see [Known Issues](../release-notes/version-3.10/known-issues-in-3-10.md#arangosearch).
-{{< /warning >}}
-
-### Stemming
-
-Stemming support is provided by [Snowball](https://snowballstem.org/),
-which supports the following languages:
-
-Language | Code
--------------|-----
-Arabic * | `ar`
-Armenian ** | `hy`
-Basque * | `eu`
-Catalan * | `ca`
-Danish * | `da`
-Dutch | `nl`
-English | `en`
-Finnish | `fi`
-French | `fr`
-German | `de`
-Greek * | `el`
-Hindi * | `hi`
-Hungarian * | `hu`
-Indonesian * | `id`
-Irish * | `ga`
-Italian | `it`
-Lithuanian * | `lt`
-Nepali * | `ne`
-Norwegian | `no`
-Portuguese | `pt`
-Romanian * | `ro`
-Russian | `ru`
-Serbian * | `sr`
-Spanish | `es`
-Swedish | `sv`
-Tamil * | `ta`
-Turkish * | `tr`
-Yiddish ** | `yi`
-
-\* Introduced in: v3.7.0
-
-\*\* Introduced in: v3.10.0
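-
-A language from the table can be used via the `locale` property of `text` and
-`stem` Analyzers, for instance (a sketch; the Analyzer name is an example):
-
-```js
-var analyzers = require("@arangodb/analyzers");
-// A stem Analyzer for Norwegian, using the Snowball language code "no"
-analyzers.save("stem_no", "stem", { locale: "no" }, ["frequency", "norm"]);
-```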
diff --git a/site/content/3.10/index-and-search/arangosearch/_index.md b/site/content/3.10/index-and-search/arangosearch/_index.md
deleted file mode 100644
index 795de06af3..0000000000
--- a/site/content/3.10/index-and-search/arangosearch/_index.md
+++ /dev/null
@@ -1,1004 +0,0 @@
----
-title: Information Retrieval with ArangoSearch
-menuTitle: ArangoSearch
-weight: 155
-description: >-
- ArangoSearch is ArangoDB's built-in search engine for full-text, complex data
- structures, and more
----
-ArangoSearch provides information retrieval features, natively integrated
-into ArangoDB's query language and with support for all data models. It is
-primarily a full-text search engine, a much more powerful alternative to the
-[full-text index](../indexing/working-with-indexes/fulltext-indexes.md) type. It can index nested fields
-from multiple collections, optionally with transformations such as text
-normalization and tokenization applied, rank query results by relevance and
-more.
-
-## Example Use Cases
-
-- Perform federated full-text searches over product descriptions for a
- web shop, with the product documents stored in various collections.
-- Find information in a research database using stemmed phrases, case and
- accent insensitive, with irrelevant terms removed from the search index
- (stop word filtering), ranked by relevance based on term frequency (TFIDF).
-- Query a movie dataset for titles with words in a particular order
- (optionally with wildcards), and sort the results by best matching (BM25)
- but favor movies with a longer duration.
-
-## Getting Started with ArangoSearch
-
-ArangoSearch introduces the concept of **Views**, which can be seen as
-virtual collections. There are two types of Views:
-
-- **`arangosearch` Views**:
- Each View of the `arangosearch` type represents an inverted index to provide fast
- full-text searching over one or multiple linked collections and holds the
- configuration for the search capabilities, such as the attributes to index.
- It can cover multiple or even all attributes of the documents in the linked
- collections.
-
- See [`arangosearch` Views Reference](arangosearch-views-reference.md) for details.
-
-- **`search-alias` Views**:
- Views of the `search-alias` type reference one or more
- [Inverted indexes](../indexing/working-with-indexes/inverted-indexes.md). Inverted indexes are defined on
- the collection level and can be used stand-alone for filtering, but adding
- them to a `search-alias` View enables you to search over multiple collections at
- once, called "federated search", and offers you the same capabilities for
- ranking search results by relevance and search highlighting as with
- `arangosearch` Views. Each inverted index can index multiple or even all
- attributes of the documents of the collection it is defined for.
-
- See [`search-alias` Views Reference](search-alias-views-reference.md) for details.
-
-{{< info >}}
-Views are not updated synchronously as the source collections
-change, in order to minimize the performance impact. They are
-**eventually consistent**, with a configurable consolidation policy.
-{{< /info >}}
-
-The input values can be processed by so-called [**Analyzers**](../analyzers.md),
-which can normalize strings, tokenize text into words, and more, enabling
-different possibilities to search for values later on.
-
-Search results can be sorted by their similarity ranking to return the best
-matches first using popular scoring algorithms (Okapi BM25, TF-IDF),
-user-defined relevance boosting and dynamic score calculation.
-
-
-
-Views can be managed in the web interface, via an [HTTP API](../../develop/http-api/views/_index.md) and
-through a [JavaScript API](../../develop/javascript-api/@arangodb/db-object.md#views).
-
-Views can be queried with AQL using the [`SEARCH` operation](../../aql/high-level-operations/search.md).
-It takes a search expression composed of the fields to search, the search terms,
-logical and comparison operators, as well as
-[ArangoSearch functions](../../aql/functions/arangosearch.md).
-
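-For example, a query against a hypothetical View `products_view`, whose `title`
-field is indexed with the `text_en` Analyzer, might look like this (a sketch):
-
-```aql
-FOR doc IN products_view
-  SEARCH ANALYZER(PHRASE(doc.title, "graph database"), "text_en")
-  SORT BM25(doc) DESC
-  RETURN doc
-```
-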
-### Create your first `arangosearch` View
-
-1. Create a test collection (e.g. `food`) and insert a few documents so
- that you have something to index and search for:
- - `{ "name": "avocado", "type": "fruit" }` (yes, it is a fruit)
- - `{ "name": "carrot", "type": "vegetable" }`
- - `{ "name": "chili pepper", "type": "vegetable" }`
- - `{ "name": "tomato", "type": ["fruit", "vegetable"] }`
-2. In the **VIEWS** section of the web interface, click the **Add View** card.
-3. Enter a name (e.g. `food_view`) for the View, click **Create** and
- click the card of the newly created View.
-4. You can toggle the mode of the View definition editor from **Tree** to **Code**
- to edit the JSON object as text.
-5. Replace `"links": {},` with the configuration below, then save the changes:
- ```js
- "links": {
- "food": {
- "includeAllFields": true
- }
- },
- ```
-6. After a few seconds of processing, the editor will show you the updated link
- definition with default settings added:
- ```js
- "links": {
- "food": {
- "analyzers": [
- "identity"
- ],
- "fields": {},
- "includeAllFields": true,
- "storeValues": "none",
- "trackListPositions": false
- }
- },
- ```
- The View indexes all attributes (fields) of the documents in the
- `food` collection from now on (with some delay). The attribute values
- get processed by the default `identity` Analyzer, which means that they
- get indexed unaltered. Note that arrays (`["fruit", "vegetable"]` in the example)
- are automatically expanded in `arangosearch` Views by default, indexing the
- individual elements of the array (`"fruit"` and `"vegetable"`).
-7. In the **QUERIES** section, try the following query:
- ```aql
- FOR doc IN food_view
- RETURN doc
- ```
- The View is used like a collection and simply iterated over to return all
- (indexed) documents. You should see the documents stored in `food` as result.
-8. Now add a search expression. Unlike with regular collections where you would
- use `FILTER`, a `SEARCH` operation is needed to utilize the View index:
- ```aql
- FOR doc IN food_view
- SEARCH doc.name == "avocado"
- RETURN doc
- ```
- In this basic example, the ArangoSearch expression looks identical to a
- `FILTER` expression, but this is not always the case. You can also combine
- both, with `FILTER`s after `SEARCH`, in which case the filter criteria are
- applied to the search results as a post-processing step.
-
-{{< info >}}
-Note that if you link a collection to a View and execute a query against this
-View while it is still being indexed, you may not get complete results.
-{{< /info >}}
-
-### Create your first `search-alias` View
-
-1. Create a test collection (e.g. `food`) and insert a few documents so
- that you have something to index and search for. You may use the web interface
- for this:
- - `{ "name": "avocado", "type": ["fruit"] }` (yes, it is a fruit)
- - `{ "name": "carrot", "type": ["vegetable"] }`
- - `{ "name": "chili pepper", "type": ["vegetable"] }`
- - `{ "name": "tomato", "type": ["fruit", "vegetable"] }`
-2. Use [arangosh](../../components/tools/arangodb-shell/_index.md) to connect to the server.
-3. Use the JavaScript API to create an inverted index:
- ```js
- db.food.ensureIndex({ type: "inverted", fields: ["name", "type[*]"], name: "inv-idx-name-type" });
- ```
- The `[*]` is needed to index the individual elements of the `type` array.
- Note that all `type` attributes of the example documents are arrays, even if
- they only contain a single element. If you use `[*]` for expanding arrays,
- only array elements are indexed, whereas primitive values like the string
- `"fruit"` would be ignored by the inverted index (but see the `searchField`
- option regarding exceptions). Giving the index a name manually makes it
- easier for you to identify the index.
-4. The inverted index indexes the specified attributes (fields) of the documents
- in the `food` collection from now on (with some delay). The attribute values
- get processed by the default `identity` Analyzer, which means that they
- get indexed unaltered.
-5. Create a `search-alias` View that uses the inverted index:
- ```js
- db._createView("food_view", "search-alias", { indexes: [
- { collection: "food", index: "inv-idx-name-type" }
- ]});
- ```
- The View uses the inverted index for searching and adds
- additional functionality like ranking results and searching across
- multiple collections at once.
-6. In the **QUERIES** section of the web interface, try the following query:
- ```aql
- FOR doc IN food_view
- RETURN doc
- ```
- The View is used like a collection and simply iterated over to return all
- (indexed) documents. You should see the documents stored in `food` as result.
-7. Now add a search expression. Unlike with regular collections where you would
- use `FILTER`, a `SEARCH` operation is needed to utilize the View index:
- ```aql
- FOR doc IN food_view
- SEARCH doc.name == "avocado"
- RETURN doc
- ```
- In this basic example, the ArangoSearch expression looks identical to a
- `FILTER` expression, but this is not always the case. You can also combine
- both, with `FILTER`s after `SEARCH`, in which case the filter criteria are
- applied to the search results as a post-processing step.
-8. You can also use the inverted index as a stand-alone index as demonstrated
- below, by iterating over the collection (not the View) with an index hint to
- utilize the inverted index together with the `FILTER` operation:
- ```aql
- FOR doc IN food OPTIONS { indexHint: "inv-idx-name-type", forceIndexHint: true }
- FILTER doc.name == "avocado"
- RETURN doc
- ```
- Note that you can't rank results and search across multiple collections
- using a stand-alone inverted index, but you can if you add inverted indexes
- to a `search-alias` View and search the View with the `SEARCH` operation.
-
-### Understanding the Analyzer context
-
-`arangosearch` Views allow you to index the same field with multiple Analyzers.
-This makes it necessary to select the right one in your query by setting the
-Analyzer context with the `ANALYZER()` function.
-
-{{< tip >}}
-If you use `search-alias` Views, the Analyzers are inferred from the definitions
-of the inverted indexes. This is possible because every field can only be
-indexed with a single Analyzer. Don't specify the Analyzer context with the
-`ANALYZER()` function in `search-alias` queries to avoid errors.
-{{< /tip >}}
-
-We did not specify an Analyzer explicitly in the above example, but it worked
-regardless. That is because the `identity` Analyzer is used by default in both
-View definitions and AQL queries. The Analyzer chosen in a query needs to match
-one of the Analyzers that a field was indexed with as per the `arangosearch`
-View definition, and this happened to be the case. We can rewrite the query to be
-more explicit about the Analyzer context:
-
-```aql
-FOR doc IN food_view
- SEARCH ANALYZER(doc.name == "avocado", "identity")
- RETURN doc
-```
-
-`ANALYZER(… , "identity")` matches the Analyzer defined in the View
-`"analyzers": [ "identity" ]`. The latter defines how fields are transformed at
-index time, whereas the former selects which index to use at query time.
-
-To use a different Analyzer, such as the built-in `text_en` Analyzer, you would
-change the View definition to `"analyzers": [ "text_en", "identity" ]` (or just
-`"analyzers": [ "text_en" ]` if you don't need the `identity` Analyzer at all)
-as well as adjust the query to use `ANALYZER(… , "text_en")`.
-
-If a field is not indexed with the Analyzer requested in the query, then you
-will get an **empty result** back. Make sure that the fields are indexed
-correctly and that you set the Analyzer context.
-
-You can test if a field is indexed with a particular Analyzer with one of the
-variants of the [`EXISTS()` function](../../aql/functions/arangosearch.md#exists),
-for example, as shown below:
-
-```aql
-RETURN LENGTH(
- FOR doc IN food_view
- SEARCH EXISTS(doc.name, "analyzer", "identity")
- LIMIT 1
- RETURN true) > 0
-```
-
-If you use an `arangosearch` View, you need to change the `"storeValues"`
-property in the View definition from `"none"` to `"id"` for the function to work.
-For `search-alias` Views, this feature is always enabled.
-
-### Basic search expressions
-
-ArangoSearch supports a variety of logical operators and comparison operators
-to filter Views. A basic one is the **equality** comparison operator:
-
-`doc.name == "avocado"`
-
-The inversion (inequality) is also allowed:
-
-`doc.name != "avocado"`
-
-You can also test against multiple values with the **IN** operator:
-
-`doc.name IN ["avocado", "carrot"]`
-
-The same can be expressed with a logical **OR** for multiple conditions:
-
-`doc.name == "avocado" OR doc.name == "carrot"`
-
-Similarly, **AND** can be used to require that multiple conditions must be true:
-
-`doc.name == "avocado" AND doc.type == "fruit"`
-
-An interesting case is the tomato document with its two array elements as type:
-`["fruit", "vegetable"]`. The View definition defaults to
-`"trackListPositions": false`, which means that the array elements get indexed
-individually as if the attribute had both string values at the same time
-(requiring array expansion using `type[*]` or `"searchField": true` in case of the
-inverted index for the `search-alias` View), matching the following conditions:
-
-`doc.type == "fruit" AND doc.type == "vegetable"`
-
-The same can be expressed with `ALL ==` and `ALL IN`. Note that the attribute
-reference and the search conditions are swapped for this:
-
-`["fruit", "vegetable"] ALL == doc.type`
-
-To find fruits which are not vegetables at the same time, the latter can be
-excluded with `NOT`:
-
-`doc.type == "fruit" AND NOT doc.type == "vegetable"`
-
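-Putting such conditions into a full query against the View from the examples
-above (the default `identity` Analyzer applies here):
-
-```aql
-FOR doc IN food_view
-  SEARCH doc.type == "fruit" AND NOT doc.type == "vegetable"
-  RETURN doc.name
-```
-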
-For a complete list of operators supported in ArangoSearch expressions see
-[AQL `SEARCH` operation](../../aql/high-level-operations/search.md).
-
-### Searching for tokens from full-text
-
-So far we searched for full matches of name and/or type. Strings could contain
-more than just a single term, however. It could be multiple words, sentences,
-or paragraphs. For such text, we need a way to search for individual tokens,
-usually the words that it consists of. This is where Text Analyzers come
-in. A Text Analyzer tokenizes an entire string into individual tokens that are
-then stored in an inverted index.
-
-There are a few pre-configured text Analyzers, but you can also add your own as
-needed. For now, let us use the built-in `text_en` Analyzer for tokenizing
-English text.
-
-_`arangosearch` View:_
-
-1. Replace `"fields": {},` in the `food_view` View definition with the code below:
- ```js
- "fields": {
- "name": {
- "analyzers": ["text_en", "identity"]
- }
- },
- ```
-2. Save the change. After a few seconds, the `name` attribute has been indexed
- with the `text_en` Analyzer in addition to the `identity` Analyzer.
-3. Run the query below, which sets `text_en` as the context Analyzer and searches for
- the word `pepper`:
- ```aql
- FOR doc IN food_view
- SEARCH ANALYZER(doc.name == "pepper", "text_en")
- RETURN doc.name
- ```
-4. It matches `chili pepper` because the Analyzer tokenized it into `chili` and
- `pepper` and the latter matches the search criterion. Compare that to the
- `identity` Analyzer:
- ```aql
- FOR doc IN food_view
- SEARCH ANALYZER(doc.name == "pepper", "identity")
- RETURN doc.name
- ```
- It does not match because `chili pepper` is indexed as a single token that
- does not match the search criterion.
-5. Switch back to the `text_en` Analyzer but with a different search term:
- ```aql
- FOR doc IN food_view
- SEARCH ANALYZER(doc.name == "PéPPêR", "text_en")
- RETURN doc.name
- ```
- This will not match anything, even though this particular Analyzer converts
- characters to lowercase and accented characters to their base characters.
- The problem is that this transformation is applied to the document attribute
- when it gets indexed, but we haven't applied it to the search term.
-6. If we apply the same transformation then we get a match:
- ```aql
- FOR doc IN food_view
- SEARCH ANALYZER(doc.name == TOKENS("PéPPêR", "text_en")[0], "text_en")
- RETURN doc.name
- ```
- Note that the [`TOKENS()` function](../../aql/functions/string.md#tokens)
- returns an array. We pick the first element with `[0]`, which is the
- normalized search term `"pepper"`.
-
-_`search-alias` View:_
-
-1. Collection indexes cannot be changed once created. Therefore, you need to
- create a new inverted index to index a field differently.
- In _arangosh_, create a new inverted index that indexes the `name` attribute
- with the `text_en` Analyzer, which splits strings into tokens so that you
- can search for individual words. Give the index a name to make it easier for
- you to identify the index:
- ```js
- db.food.ensureIndex({ type: "inverted", fields: [
- { name: "name", analyzer: "text_en" }
- ], name: "inv-idx-name-en" });
- ```
- After a few seconds, the `name` attribute has been indexed
- with the `text_en` Analyzer. Note that every field can only be indexed with a
- single Analyzer in inverted indexes and `search-alias` Views.
-2. Create a new `search-alias` View that uses the inverted index:
- ```js
- db._createView("food_view_fulltext", "search-alias", { indexes: [
- { collection: "food", index: "inv-idx-name-en" }
- ]});
- ```
-3. In the **QUERIES** section of the web interface, run the query below, which
- searches for the word `pepper`:
- ```aql
- FOR doc IN food_view_fulltext
- SEARCH doc.name == "pepper"
- RETURN doc.name
- ```
- It matches `chili pepper` because the Analyzer tokenized it into `chili` and
- `pepper` and the latter matches the search criterion.
-4. Try a different search term:
- ```aql
- FOR doc IN food_view_fulltext
- SEARCH doc.name == "PéPPêR"
- RETURN doc.name
- ```
- This does not match anything, even though the `text_en` Analyzer converts
- characters to lowercase and accented characters to their base characters.
- The problem is that this transformation is applied to the document attribute
- when it gets indexed, but we haven't applied it to the search term.
-5. If we apply the same transformation then we get a match:
- ```aql
- FOR doc IN food_view_fulltext
- SEARCH doc.name == TOKENS("PéPPêR", "text_en")[0]
- RETURN doc.name
- ```
- Note that the [`TOKENS()` function](../../aql/functions/string.md#tokens)
- returns an array. We pick the first element with `[0]`, which is the
- normalized search term `"pepper"`.
-
-### Search expressions with ArangoSearch functions
-
-Basic operators are not enough for complex query needs. Additional search
-functionality is provided via [ArangoSearch functions](../../aql/functions/arangosearch.md)
-that can be composed with basic operators and other functions to form search
-expressions.
-
-ArangoSearch AQL functions take either an expression or a reference (of an
-attribute path or the document emitted by a View) as the first argument.
-
-```aql
-BOOST(<expression>, …)
-STARTS_WITH(doc.attribute, …)
-TFIDF(doc, …)
-```
-
-If an attribute path expression is needed, then you have to reference a
-document object emitted by a View, e.g. the `doc` variable of
-`FOR doc IN viewName`, and then specify which attribute you want to test for,
-as an unquoted string literal. For example, `doc.attr` or
-`doc.deeply.nested.attr`, but not `"doc.attr"`. You can also use the
-bracket notation `doc["attr"]`.
-
-```aql
-FOR doc IN viewName
- SEARCH STARTS_WITH(doc.deeply.nested["attr"], "avoca")
- RETURN doc
-```
-
-If a reference to the document emitted by the View is required, like for
-scoring functions, then you need to pass the raw variable.
-
-```aql
-FOR doc IN viewName
- SEARCH ...
- SORT BM25(doc) DESC
- ...
-```
-
-If an expression is expected, it means that search conditions can be expressed in
-AQL syntax. They are typically function calls to ArangoSearch filter functions,
-possibly nested and/or using logical operators for multiple conditions.
-
-```aql
-BOOST(STARTS_WITH(doc.name, "chi"), 2.5) OR STARTS_WITH(doc.name, "tom")
-```
-
-You should make sure that search terms can match the indexed values by processing
-the search terms with the same Analyzers as the indexed document attributes.
-This is especially important for full-text search and any form of normalization,
-where there is little chance that an unprocessed search term happens to match
-the processed, indexed values.
-
-If you use `arangosearch` Views, the default Analyzer that is used for searching
-is `"identity"`. As a field can be indexed with multiple Analyzers, you need to
-set the Analyzer context in queries against `arangosearch` Views to select the
-Analyzer of the indexed data; otherwise, the `identity` Analyzer is used.
-
-If you use `search-alias` Views, the Analyzers are inferred from the definitions
-of the inverted indexes, and you don't need to and should not set the Analyzer
-context with the `ANALYZER()` function. You should still transform search terms
-using the same Analyzer as for the indexed values.
-
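-For example, with the `food_view_fulltext` View from above, you can normalize
-a search term with `TOKENS()` instead of setting an Analyzer context (a sketch):
-
-```aql
-FOR doc IN food_view_fulltext
-  SEARCH doc.name IN TOKENS("Chili PEPPER", "text_en")
-  RETURN doc.name
-```
-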
-While some ArangoSearch functions accept an Analyzer argument, it is sometimes
-necessary to wrap search (sub-)expressions with an `ANALYZER()` call to set the
-correct Analyzer in the query so that it matches one of the Analyzers with
-which the field has been indexed. This only applies to queries against
-`arangosearch` Views.
-
-It can be easier and cleaner to use `ANALYZER()` even if you exclusively
-use functions that take an Analyzer argument and leave that argument out:
-
-```aql
-// Analyzer specified in each function call
-PHRASE(doc.name, "chili pepper", "text_en") OR PHRASE(doc.name, "tomato", "text_en")
-
-// Analyzer specified using ANALYZER()
-ANALYZER(PHRASE(doc.name, "chili pepper") OR PHRASE(doc.name, "tomato"), "text_en")
-```
-
-{{< tip >}}
-The [`PHRASE()` function](../../aql/functions/arangosearch.md#phrase) applies the
-`text_en` Analyzer to the search terms in both cases. `chili pepper` gets
-tokenized into `chili` and `pepper` and these tokens are then searched in this
-order. Searching for `pepper chili` would not match.
-{{< /tip >}}
-
-Certain expressions do not require any ArangoSearch functions, such as basic
-comparisons. However, the Analyzer used for searching will be `"identity"`
-unless `ANALYZER()` is used to set a different one.
-
-```aql
-// The "identity" Analyzer will be used by default
-SEARCH doc.name == "avocado"
-
-// Same as before but being explicit
-SEARCH ANALYZER(doc.name == "avocado", "identity")
-
-// Use the "text_en" Analyzer for searching instead
-SEARCH ANALYZER(doc.name == "avocado", "text_en")
-```
-
-### Ranking results by relevance
-
-Finding matches is one thing, but especially if there are a lot of results then
-the most relevant documents should be listed first. ArangoSearch implements
-[scoring functions](../../aql/functions/arangosearch.md#scoring-functions) that
-can be used to rank documents by relevance. The popular ranking schemes
-[Okapi BM25](https://en.wikipedia.org/wiki/Okapi_BM25) and
-[TF-IDF](https://en.wikipedia.org/wiki/Tf%E2%80%93idf) are
-available.
-
-Here is an example that sorts results from high to low BM25 score and also
-returns the score:
-
-{{< tabs "view-definition">}}
-
-{{< tab "`search-alias` View" >}}
-```aql
-FOR doc IN food_view
- SEARCH doc.type == "vegetable"
- SORT BM25(doc) DESC
- RETURN { name: doc.name, type: doc.type, score: BM25(doc) }
-```
-{{< /tab >}}
-
-{{< tab "`arangosearch` View" >}}
-```aql
-FOR doc IN food_view
- SEARCH ANALYZER(doc.type == "vegetable", "identity")
- SORT BM25(doc) DESC
- RETURN { name: doc.name, type: doc.type, score: BM25(doc) }
-```
-{{< /tab >}}
-
-{{< /tabs >}}
-
-As you can see, the variable emitted by the View in the `FOR … IN` loop is
-passed to the [`BM25()` function](../../aql/functions/arangosearch.md#bm25).
-
-| name | type | score |
-|:-------------|:----------------------|:--------------------|
-| tomato | ["fruit","vegetable"] | 0.43373921513557434 |
-| carrot | vegetable | 0.38845786452293396 |
-| chili pepper | vegetable | 0.38845786452293396 |
-
-The [`TFIDF()` function](../../aql/functions/arangosearch.md#tfidf) works the same:
-
-{{< tabs "view-definition">}}
-
-{{< tab "`search-alias` View" >}}
-```aql
-FOR doc IN food_view
- SEARCH doc.type == "vegetable"
- SORT TFIDF(doc) DESC
- RETURN { name: doc.name, type: doc.type, score: TFIDF(doc) }
-```
-{{< /tab >}}
-
-{{< tab "`arangosearch` View" >}}
-```aql
-FOR doc IN food_view
- SEARCH ANALYZER(doc.type == "vegetable", "identity")
- SORT TFIDF(doc) DESC
- RETURN { name: doc.name, type: doc.type, score: TFIDF(doc) }
-```
-{{< /tab >}}
-
-{{< /tabs >}}
-
-It returns different scores:
-
-| name | type | score |
-|:-------------|:----------------------|:--------------------|
-| tomato | ["fruit","vegetable"] | 1.2231435775756836 |
-| carrot | vegetable | 1.2231435775756836 |
-| chili pepper | vegetable | 1.2231435775756836 |
-
-The scores will change whenever you insert, modify or remove documents, because
-the ranking takes factors like how often a term occurs overall and within a
-single document into account. For example, if you insert a hundred more fruit
-documents (`INSERT { type: "fruit" } INTO food`) then the TF-IDF score for
-vegetables will become 1.4054651260375977.
-
-You can adjust the ranking in two different ways:
-- Boost sub-expressions to favor a condition over another with the
- [`BOOST()` function](../../aql/functions/arangosearch.md#boost)
-- Calculate a custom score with an expression, optionally taking `BM25()` and
- `TFIDF()` into account
-
-Have a look at the [Ranking Examples](ranking.md) for that.
-
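-For example, a minimal boosting sketch based on the `food_view` example
-(for an `arangosearch` View; omit `ANALYZER()` with `search-alias` Views):
-
-```aql
-FOR doc IN food_view
-  SEARCH ANALYZER(BOOST(doc.type == "fruit", 2.0) OR doc.type == "vegetable", "identity")
-  SORT BM25(doc) DESC
-  RETURN { name: doc.name, score: BM25(doc) }
-```
-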
-## Indexing complex JSON documents
-
-### Working with sub-attributes
-
-As with regular indexes, there is no limitation to top-level attributes.
-Any document attribute at any depth can be indexed. However, with ArangoSearch
-it is possible to index all document attributes or particular attributes
-including their sub-attributes without having to modify the View definition
-as new sub-attributes are added. This is possible with `arangosearch` Views
-as well as with inverted indexes if you use them through `search-alias` Views.
-
-{{< tabs "view-definition">}}
-
-{{< tab "`search-alias` View" >}}
-You need to create an inverted index and enable the **Include All Fields**
-feature to index all document attributes, then add the index to a
-`search-alias` View. No matter what attributes you add to your documents,
-they will automatically get indexed.
-
-You can also add **Fields**, click their underlined names, and enable
-**Include All Fields** for specific attributes and their sub-attributes:
-
-```js
-...
- "fields": [
- {
- "name": "value",
- "includeAllFields": true
- }
- ],
-...
-```
-
-This will index the attribute `value` and its sub-attributes. Consider the
-following example document:
-
-```json
-{
- "value": {
- "nested": {
- "deep": "apple pie"
- }
- }
-}
-```
-
-The View will automatically index `apple pie`, and it can then be queried
-like this:
-
-```aql
-FOR doc IN food_view
- SEARCH doc.value.nested.deep == "apple pie"
- RETURN doc
-```
-{{< /tab >}}
-
-{{< tab "`arangosearch` View" >}}
-We already used the **Include All Fields** feature to index all document
-attributes above when we modified the View definition to this:
-
-```js
-{
- "links": {
- "food": {
- "includeAllFields": true
- }
- },
- ...
-}
-```
-
-No matter what attributes you add to your documents, they will automatically
-get indexed. To do this for certain attribute paths only, you can enable
-the **Include All Fields** options for specific attributes only, and include a
-list of Analyzers to process the values with:
-
-```js
-{
- "links": {
- "food": {
- "fields": {
- "value": {
- "includeAllFields": true,
- "analyzers": ["identity", "text_en"]
- }
- }
- }
- }
-}
-```
-
-This will index the attribute `value` and its sub-attributes. Consider the
-following example document:
-
-```json
-{
- "value": {
- "nested": {
- "deep": "apple pie"
- }
- }
-}
-```
-
-The View will automatically index `apple pie`, processed with the `identity` and
-`text_en` Analyzers, and it can then be queried like this:
-
-```aql
-FOR doc IN food_view
- SEARCH ANALYZER(doc.value.nested.deep == "apple pie", "identity")
- RETURN doc
-```
-
-```aql
-FOR doc IN food_view
- SEARCH ANALYZER(doc.value.nested.deep IN TOKENS("pie", "text_en"), "text_en")
- RETURN doc
-```
-
-{{< /tab >}}
-
-{{< /tabs >}}
-
-{{< warning >}}
-Using `includeAllFields` for a lot of attributes in combination with complex
-Analyzers may significantly slow down the indexing process.
-{{< /warning >}}
-
-### Indexing and querying arrays
-
-With `arangosearch` Views, the elements of arrays are indexed individually by
-default, as if the source attribute had each element as value at the same time
-(like a _disjunctive superposition_ of their values). This is controlled by the
-View setting [`trackListPositions`](arangosearch-views-reference.md#link-properties)
-that defaults to `false`.
-
-With `search-alias` Views, you can get the same behavior by enabling the
-`searchField` option globally or for specific fields in their inverted indexes,
-or you can explicitly expand certain array attributes by appending `[*]` to the
-field name.
-
-Consider the following document:
-
-```json
-{
- "value": {
- "nested": {
- "deep": [ 1, 2, 3 ]
- }
- }
-}
-```
-
-A View that is configured to index the field `value` including sub-fields
-will index the individual numbers under the path `value.nested.deep`, which
-you can query like this:
-
-```aql
-FOR doc IN viewName
- SEARCH doc.value.nested.deep == 2
- RETURN doc
-```
-
-This is different to `FILTER` operations, where you would use an
-[array comparison operator](../../aql/operators.md#array-comparison-operators)
-to find an element in the array:
-
-```aql
-FOR doc IN collection
- FILTER doc.value.nested.deep ANY == 2
- RETURN doc
-```
-
-You can set `trackListPositions` to `true` if you want to query for a value
-at a specific array index (requires `searchField` to be `true` for
-`search-alias` Views):
-
-```aql
-SEARCH doc.value.nested.deep[1] == 2
-```
-
-With `trackListPositions` enabled, there is **no match** for the document
-anymore if you leave out the array index in the expression:
-
-```aql
-SEARCH doc.value.nested.deep == 2
-```
-
-Conversely, there will be no match if an array index is specified but
-`trackListPositions` is disabled.
-
-String tokens are also indexed individually, but only some Analyzer types
-return multiple tokens.
-If the Analyzer does, then comparison tests are done per token/word.
-For example, given the field `text` is analyzed with `"text_en"` and contains
-the string `"a quick brown fox jumps over the lazy dog"`, the following
-expression will be true:
-
-{{< tabs "view-definition">}}
-
-{{< tab "`search-alias` View" >}}
-```aql
-doc.text == 'fox'
-```
-{{< /tab >}}
-
-{{< tab "`arangosearch` View" >}}
-```aql
-ANALYZER(doc.text == 'fox', "text_en")
-```
-{{< /tab >}}
-
-{{< /tabs >}}
-
-Note that the `"text_en"` Analyzer stems the words, so this is also true:
-
-{{< tabs "view-definition">}}
-
-{{< tab "`search-alias` View" >}}
-```aql
-doc.text == 'jump'
-```
-{{< /tab >}}
-
-{{< tab "`arangosearch` View" >}}
-```aql
-ANALYZER(doc.text == 'jump', "text_en")
-```
-{{< /tab >}}
-
-{{< /tabs >}}
-
-So a comparison will actually test if a word is contained in the text. With
-`trackListPositions: false`, this means for arrays that the comparison is true
-if the word is contained in any element of the array. For example, given:
-
-```json
-{"text": [ "a quick", "brown fox", "jumps over the", "lazy dog" ] }
-```
-
-… the following will be true:
-
-{{< tabs "view-definition">}}
-
-{{< tab "`search-alias` View" >}}
-```aql
-doc.text == 'jump'
-```
-{{< /tab >}}
-
-{{< tab "`arangosearch` View" >}}
-```aql
-ANALYZER(doc.text == 'jump', "text_en")
-```
-{{< /tab >}}
-
-{{< /tabs >}}
-
-With `trackListPositions: true` you would need to specify the index of the
-array element `"jumps over the"` for the expression to be true:
-
-{{< tabs "view-definition">}}
-
-{{< tab "`search-alias` View" >}}
-```aql
-doc.text[2] == 'jump'
-```
-{{< /tab >}}
-
-{{< tab "`arangosearch` View" >}}
-```aql
-ANALYZER(doc.text[2] == 'jump', "text_en")
-```
-{{< /tab >}}
-
-{{< /tabs >}}
-
-Arrays of strings are handled similarly. Each array element is treated like a
-token (or possibly multiple tokens if a tokenizing Analyzer is used and
-therefore applied to each element).
-
-## Dealing with eventual consistency
-
-Regular indexes are immediately consistent. If you have a collection with a
-`persistent` index on an attribute `text` and update the value of the attribute
-for instance, then this modification is reflected in the index immediately.
-View indexes (and inverted indexes), on the other hand, are eventually
-consistent. Document changes are not reflected instantly, but only in
-near real-time. This is mainly for performance reasons.
-
-If you run a search query shortly after a CRUD operation, then the results may
-be slightly stale, e.g. not include a newly inserted document:
-
-```js
-db._query(`INSERT { text: "cheese cake" } INTO collection`);
-db._query(`FOR doc IN viewName SEARCH doc.text == "cheese cake" RETURN doc`);
-// May not find the new document
-```
-
-Re-running the search query a bit later will include the new document, however.
-
-There is an internal option to wait for the View to update and thus include
-changes just made to documents:
-
-```js
-db._query(`INSERT { text: "pop tart" } INTO collection`);
-db._query(`FOR doc IN viewName SEARCH doc.text == "pop tart" OPTIONS { waitForSync: true } RETURN doc`);
-```
-
-This is not necessary if you use a single server deployment and populate a
-collection with documents before creating a View.
-
-{{< warning >}}
-`SEARCH … OPTIONS { waitForSync: true }` is intended to be used in unit tests
-to block search queries until the View has caught up with the underlying
-collections. It is designed to make this use case easier. It should not be used
-for other purposes and especially not in production, as it can stall queries.
-{{< /warning >}}
-
-{{< danger >}}
-Do not use `SEARCH … OPTIONS { waitForSync: true }` in transactions. View index
-changes cannot be rolled back if transactions get aborted. It will lead to
-permanent inconsistencies between the linked collections and the View.
-{{< /danger >}}
-
-## How to go from here
-
-To learn more, check out the different search examples:
-
-- [**Exact value matching**](exact-value-matching.md):
- Search for values as stored in documents (full strings, numbers, booleans).
-- [**Range queries**](range-queries.md):
- Match values that are above, below or between a minimum and a maximum value.
- This is primarily for numeric values.
-- [**Prefix matching**](prefix-matching.md):
- Search for strings that start with certain strings. A common use case for
- this is to implement auto-complete kind of functionality.
-- [**Case-sensitivity and diacritics**](case-sensitivity-and-diacritics.md):
- Strings can be normalized so that it does not matter whether characters are
- upper or lower case, and character accents can be ignored for a better search
- experience. This can be combined with other types of search.
-- [**Wildcard search**](wildcard-search.md):
- Search for partial matches in strings (ends with, contains and more).
-- [**Full-text token search**](full-text-token-search.md):
- Full-text can be tokenized into words that can then be searched individually,
- regardless of their original order, also in combination with prefix
- search. Array values are also indexed as separate tokens.
-- [**Phrase and proximity search**](phrase-and-proximity-search.md):
- Search tokenized full-text with the tokens in a certain order, such as
- partial or full sentences, optionally with wildcard tokens for a proximity
- search.
-- [**Faceted search**](faceted-search.md):
- Combine aggregation with search queries to retrieve how often values occur
- overall.
-- [**Fuzzy search**](fuzzy-search.md):
- Match strings even if they are not exactly the same as the search terms.
- By allowing some fuzziness you can compensate for typos and match similar
- tokens that could be relevant too.
-- [**Geospatial search**](geospatial-search.md):
- You can use ArangoSearch for geographic search queries to find nearby
- locations, places within a certain area and more. It can be combined with
- other types of search queries unlike with the regular geo index.
-- [**Search highlighting**](search-highlighting.md):
- Retrieve the positions of matches within strings, to highlight what was found
- in search results (Enterprise Edition only).
-- [**Nested search**](nested-search.md):
- Match arrays of objects with all the conditions met by a single sub-object,
- and define for how many of the elements this must be true (Enterprise Edition only).
-
-For relevance and performance tuning, as well as the reference documentation, see:
-
-- [**Ranking**](ranking.md):
- Sort search results by relevance, fine-tune the importance of certain search
- conditions, and calculate a custom relevance score.
-- [**Performance**](performance.md):
- Give the View index a primary sort order to benefit common search queries
- that you will run and store often used attributes directly in the View index
- for fast access.
-- **Views Reference**:
- You can find all View properties and options that are available for the
- respective type in the [`arangosearch` Views Reference](arangosearch-views-reference.md)
- and [`search-alias` Views Reference](search-alias-views-reference.md)
- documentation.
-
-If you are interested in more technical details, have a look at:
-
-- [**ArangoSearch Tutorial**](https://www.arangodb.com/learn/search/tutorial/#:~:text=Ranking%20in%20ArangoSearch):
- The tutorial includes sections about the View concept, Analysis, and the
- ranking model.
-- [**ArangoSearch architecture overview**](https://www.arangodb.com/2018/04/arangosearch-architecture-overview/):
- A description of ArangoSearch's design, its inverted index and some
- implementation details.
-- The [**IResearch library**](https://github.com/iresearch-toolkit/iresearch)
- that provides the searching and ranking capabilities.
diff --git a/site/content/3.10/index-and-search/arangosearch/arangosearch-views-reference.md b/site/content/3.10/index-and-search/arangosearch/arangosearch-views-reference.md
deleted file mode 100644
index 83b80d0f85..0000000000
--- a/site/content/3.10/index-and-search/arangosearch/arangosearch-views-reference.md
+++ /dev/null
@@ -1,466 +0,0 @@
----
-title: '`arangosearch` Views Reference'
-menuTitle: '`arangosearch` Views Reference'
-weight: 85
-description: ''
----
-`arangosearch` Views enable sophisticated information retrieval queries such as
-full-text search for unstructured or semi-structured data over documents from
-different collections, filtering on multiple document attributes and sorting
-the documents that satisfy the search criteria by relevance.
-
-Views guarantee the best execution plan (merge join) when querying multiple
-attributes, unlike collections with user-defined indexes.
-
-Views of type `arangosearch` can be managed as follows:
-- in the web interface, in the **VIEWS** section
-- via the [Views HTTP API](../../develop/http-api/views/_index.md)
-- through the [JavaScript API](../../develop/javascript-api/@arangodb/db-object.md#views)
-
-Once you set up a View, you can query it via AQL with the
-[`SEARCH` operation](../../aql/high-level-operations/search.md).
-
-See [Information Retrieval with ArangoSearch](_index.md) for an
-introduction to Views and how to search them.
-
-## Create `arangosearch` Views using the JavaScript API
-
-The following example shows how you can create an `arangosearch` View in _arangosh_:
-
-```js
----
-name: viewArangoSearchCreate
-description: ''
----
-var coll = db._create("books");
-db._createView("products", "arangosearch", { links: { books: { fields: { title: { analyzers: ["text_en"] } } } } });
-~db._dropView("products");
-~db._drop(coll.name());
-```
-
-## View Definition/Modification
-
-An `arangosearch` View is configured via an object containing a set of
-View-specific configuration directives and a map of link-specific configuration
-directives.
-
-During View creation the following directives apply:
-
-- **name** (string, _immutable_): the View name
-- **type** (string, _immutable_): the value `"arangosearch"`
-- any of the directives from the section [View Properties](#view-properties)
-
-You may want to create the View without links and add them later. The View
-creation with links is not an atomic operation. If errors related to the links
-occur, for example, because of incorrect collection or Analyzer names,
-inaccessible collections, or similar, then the View is still created without
-these links.
-
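-For example, you can create the View without links first and add them later
-with a partial properties update (a sketch, assuming a collection `books`
-exists):
-
-```js
-var view = db._createView("products", "arangosearch", {});
-view.properties({ links: { books: { includeAllFields: true } } });
-```
-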
-During View modification the following directives apply:
-
-- **links** (object, _optional_):
- a mapping of `collection-name` / `collection-identifier` to one of:
- - link creation - link definition as per the section [Link properties](#link-properties)
- - link removal - JSON keyword `null` (i.e. nullify a link if present)
-- any of the directives from the section [View Properties](#view-properties)
-
-### Link Properties
-
-- **analyzers** (_optional_; type: `array`; subtype: `string`; default: `[
- "identity" ]`)
-
- A list of Analyzers, by name as defined via the [Analyzers](../analyzers.md) feature,
- that should be applied to values of processed document attributes.
-
-- **fields** (_optional_; type: `object`; default: `{}`)
-
- An object `{ attribute-name: [Link properties], … }` of fields that should be
- processed at each level of the document. Each key specifies the document
- attribute to be processed. Note that the value of `includeAllFields` is also
- consulted when selecting fields to be processed.
-
- The `fields` property is a recursive data structure. This means that `fields`
- can be part of the Link properties again. This lets you index nested attributes.
- For example, you might have documents like the following in a collection named
- `coll`:
-
- ```json
- { "attr": { "nested": "foo" } }
- ```
-
- If you want to index the `nested` attribute with the `text_en` Analyzer without
- using `includeAllFields`, you can do so with the following View definition:
-
- ```json
- {
- "links": {
- "coll": {
- "fields": {
- "attr": {
- "fields": {
- "nested": {
- "analyzers": ["text_en"]
- }
- }
- }
- }
- }
- }
- }
- ```
-
- Each value specifies the [Link properties](#link-properties) directives to be
- used when processing the specified field. A Link properties value of `{}`
- denotes inheritance of all (except `fields`) directives from the current level.
-
-- **includeAllFields** (_optional_; type: `boolean`; default: `false`)
-
- If set to `true`, then process all document attributes. Otherwise, only
- consider attributes mentioned in `fields`. Attributes not explicitly
- specified in `fields` are processed with default link properties, i.e.
- `{}`.
-
- {{< warning >}}
- Using `includeAllFields` for a lot of attributes in combination with complex
- Analyzers may significantly slow down the indexing process.
- {{< /warning >}}
-
-- **nested** (_optional_; type: `object`; default: `{}`)
-
-  An object `{ attribute-name: [Link properties], … }` to index the specified
-  sub-objects that are stored in an array. Unlike with the `fields` property,
-  the values get indexed in a way that lets you query for co-occurring values.
-  For example, you can search the sub-objects, and all the conditions need to
-  be met by a single sub-object instead of across all of them.
-
- This property is available in the Enterprise Edition only.
-
- {{< info >}}
-  You cannot use the `nested` property at the top level of the link properties.
-  It needs to have a parent field, e.g.
-  `"fields": { "<fieldName>": { "nested": { ... } } }`.
-  However, you can nest `nested` properties to index objects in arrays within
-  objects in arrays, and so on.
- {{< /info >}}
-
- See [Nested search with ArangoSearch](nested-search.md)
- for details.
-
-- **trackListPositions** (_optional_; type: `boolean`; default: `false`)
-
-  If set to `true`, then the value positions in arrays are tracked. For
-  example, when querying for the input `{ attr: [ "valueX", "valueY", "valueZ" ] }`,
-  you need to specify `doc.attr[1] == "valueY"`. Otherwise, all values in an
-  array are treated as equal alternatives, and you specify
-  `doc.attr == "valueY"` instead.
-
-- **storeValues** (_optional_; type: `string`; default: `"none"`)
-
-  This property controls how the View keeps track of the attribute values.
- Valid values are:
-
-  - **none**: Do not store value metadata in the View.
- - **id**: Store information about value presence so that you can use the
- `EXISTS()` function.
-
- The `storeValues` option is not to be confused with the `storedValues` option,
- which stores attribute values in the View index.
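-
-  For example, with `storeValues` set to `"id"`, a query like the following
-  sketch can check for the presence of an attribute. The View name `viewName`
-  and the attribute `attr` are placeholders:
-
-  ```aql
-  FOR doc IN viewName
-    SEARCH EXISTS(doc.attr)
-    RETURN doc
-  ```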
-
-- **inBackground** (_optional_; type: `boolean`; default: `false`)
-
- If set to `true`, then no exclusive lock is used on the source collection
- during View index creation, so that it remains basically available.
- `inBackground` is an option that can be set when adding links. It does not get
- persisted as it is not a View property, but only a one-off option. Also see:
- [Creating Indexes in Background](../indexing/basics.md#creating-indexes-in-background)
-
-- **cache** (_optional_; type: `boolean`; default: `false`)
-
- {{< tag "ArangoDB Enterprise Edition" "ArangoGraph" >}}
-
- Introduced in: v3.9.5, v3.10.2
-
- If you enable this option, then field normalization values are always cached
- in memory. This can improve the performance of scoring and ranking queries.
- Otherwise, these values are memory-mapped and it is up to the operating system
- to load them from disk into memory and to evict them from memory.
-
-  Normalization values are computed for fields that are processed with
-  Analyzers that have the `"norm"` feature enabled. These values are used for
-  fairer scoring if the same tokens occur repeatedly, emphasizing such
-  documents less.
-
- You can also enable this option to always cache auxiliary data used for querying
- fields that are indexed with Geo Analyzers in memory.
- This can improve the performance of geo-spatial queries.
-
- See the [`--arangosearch.columns-cache-limit` startup option](../../components/arangodb-server/options.md#--arangosearchcolumns-cache-limit)
- to control the memory consumption of this cache. You can reduce the memory
- usage of the column cache in cluster deployments by only using the cache for
- leader shards, see the
- [`--arangosearch.columns-cache-only-leader` startup option](../../components/arangodb-server/options.md#--arangosearchcolumns-cache-only-leader)
- (introduced in v3.10.6).
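-
-  For example, a link definition that enables the cache for a single field
-  might look like the following sketch. The collection name `coll` and the
-  attribute `attr` are placeholders:
-
-  ```json
-  {
-    "links": {
-      "coll": {
-        "fields": {
-          "attr": {
-            "analyzers": ["text_en"],
-            "cache": true
-          }
-        }
-      }
-    }
-  }
-  ```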
-
-### View Properties
-
-{{< info >}}
-If you use ArangoSearch caching in supported 3.9 versions and upgrade an
-Active Failover deployment to 3.10, you may need to re-configure the
-cache-related options and thus recreate inverted indexes and Views. See
-[Known Issues in 3.10](../../release-notes/version-3.10/known-issues-in-3-10.md#arangosearch).
-{{< /info >}}
-
-- **primarySort** (_optional_; type: `array`; default: `[]`; _immutable_)
-
-  A primary sort order can be defined to enable an AQL optimization. If a
-  query iterates over all documents of a View, wants to sort them by attribute
-  values, and the (left-most) fields to sort by as well as their sorting
-  direction match the *primarySort* definition, then the `SORT` operation is
-  optimized away. Also see [Primary Sort Order](performance.md#primary-sort-order).
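-
-  For example, the following sketch defines a primary sort order on a
-  hypothetical `date` attribute in descending order:
-
-  ```json
-  {
-    "primarySort": [
-      { "field": "date", "direction": "desc" }
-    ]
-  }
-  ```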
-
-- **primarySortCompression** (_optional_; type: `string`; default: `lz4`; _immutable_)
-
- Introduced in: v3.7.1
-
- Defines how to compress the primary sort data (introduced in v3.7.0).
-
- - `"lz4"` (default): use LZ4 fast compression.
- - `"none"`: disable compression to trade space for speed.
-
-- **primarySortCache** (_optional_; type: `boolean`; default: `false`; _immutable_)
-
- {{< tag "ArangoDB Enterprise Edition" "ArangoGraph" >}}
-
- Introduced in: v3.9.6, v3.10.2
-
- If you enable this option, then the primary sort columns are always cached in
- memory. This can improve the performance of queries that utilize the
- [primary sort order](performance.md#primary-sort-order).
- Otherwise, these values are memory-mapped and it is up to the operating system
- to load them from disk into memory and to evict them from memory.
-
- See the [`--arangosearch.columns-cache-limit` startup option](../../components/arangodb-server/options.md#--arangosearchcolumns-cache-limit)
- to control the memory consumption of this cache. You can reduce the memory
- usage of the column cache in cluster deployments by only using the cache for
- leader shards, see the
- [`--arangosearch.columns-cache-only-leader` startup option](../../components/arangodb-server/options.md#--arangosearchcolumns-cache-only-leader)
- (introduced in v3.10.6).
-
-- **primaryKeyCache** (_optional_; type: `boolean`; default: `false`; _immutable_)
-
- {{< tag "ArangoDB Enterprise Edition" "ArangoGraph" >}}
-
- Introduced in: v3.9.6, v3.10.2
-
- If you enable this option, then the primary key columns are always cached in
- memory. This can improve the performance of queries that return many documents.
- Otherwise, these values are memory-mapped and it is up to the operating system
- to load them from disk into memory and to evict them from memory.
-
- See the [`--arangosearch.columns-cache-limit` startup option](../../components/arangodb-server/options.md#--arangosearchcolumns-cache-limit)
- to control the memory consumption of this cache. You can reduce the memory
- usage of the column cache in cluster deployments by only using the cache for
- leader shards, see the
- [`--arangosearch.columns-cache-only-leader` startup option](../../components/arangodb-server/options.md#--arangosearchcolumns-cache-only-leader)
- (introduced in v3.10.6).
-
-- **storedValues** (_optional_; type: `array`; default: `[]`; _immutable_)
-
- Introduced in: v3.7.1
-
- An array of objects to describe which document attributes to store in the
- View index. It can then cover search queries, which means the data can be
- taken from the index directly and accessing the storage engine can be
- avoided.
-
- Each object is expected in the following form:
-
- `{ "fields": [ "attr1", "attr2", ... "attrN" ], "compression": "none", "cache": false }`
-
- - The required `fields` attribute is an array of strings with one or more
- document attribute paths. The specified attributes are placed into a single
- column of the index. A column with all fields that are involved in common
- search queries is ideal for performance. The column should not include too
- many unneeded fields, however.
-
- - The optional `compression` attribute defines the compression type used for
- the internal column-store, which can be `"lz4"` (LZ4 fast compression, default)
- or `"none"` (no compression).
-
- - The optional `cache` attribute allows you to always cache stored values in
- memory (introduced in v3.9.5 and v3.10.2, Enterprise Edition only).
- This can improve the query performance if stored values are involved. See the
- [`--arangosearch.columns-cache-limit` startup option](../../components/arangodb-server/options.md#--arangosearchcolumns-cache-limit)
- to control the memory consumption of this cache. You can reduce the memory
- usage of the column cache in cluster deployments by only using the cache for
- leader shards, see the
- [`--arangosearch.columns-cache-only-leader` startup option](../../components/arangodb-server/options.md#--arangosearchcolumns-cache-only-leader)
- (introduced in v3.10.6).
-
- You may use the following shorthand notations on View creation instead of
- an array of objects as described above. The default compression and cache
- settings are used in this case:
-
- - An array of strings, like `["attr1", "attr2"]`, to place each attribute into
- a separate column of the index (introduced in v3.10.3).
-
- - An array of arrays of strings, like `[["attr1", "attr2"]]`, to place the
- attributes into a single column of the index, or `[["attr1"], ["attr2"]]`
- to place each attribute into a separate column.
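-
-  For example, both of the following sketches store the hypothetical
-  attributes `attr1` and `attr2`, the first one in two separate columns and
-  the second one in a single combined column:
-
-  ```json
-  { "storedValues": ["attr1", "attr2"] }
-  ```
-
-  ```json
-  { "storedValues": [["attr1", "attr2"]] }
-  ```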
-
-  The `storedValues` option is not to be confused with the `storeValues`
-  option, which lets you store metadata about attribute values in the View
-  index.
-
-An inverted index is the heart of `arangosearch` Views.
-The index consists of several independent segments, and each index **segment**
-can be treated as a standalone index. A **commit** is the procedure of
-accumulating processed data and creating new index segments. A
-**consolidation** is the procedure of joining multiple index segments into a
-bigger one and removing garbage documents (e.g. documents deleted from a
-collection). A **cleanup** is the procedure of removing unused segments after
-the release of internal resources.
-
-- **cleanupIntervalStep** (_optional_; type: `integer`; default: `2`; to
- disable use: `0`)
-
-  ArangoSearch waits at least this many commits between removing unused files
-  in its data directory. If the consolidation policies merge segments often
-  (i.e. there are a lot of commit+consolidate operations), a higher value
-  wastes a lot of disk space. If the consolidation policies rarely merge
-  segments (i.e. there are few inserts/deletes), a lower value impacts
-  performance without any added benefits.
-
-  > With every **commit** or **consolidate** operation, a new state of the
-  > View's internal data structures is created on disk. Old states/snapshots
-  > are released once there are no longer any users remaining. However, the
-  > files for the released states/snapshots are left on disk and are only
-  > removed by the "cleanup" operation.
-
-- **commitIntervalMsec** (_optional_; type: `integer`; default: `1000`;
- to disable use: `0`)
-
- Wait at least this many milliseconds between committing View data store
- changes and making documents visible to queries.
-
-  If there are a lot of inserts/updates, a higher value causes the index to
-  lag behind them, and memory usage keeps growing until the next commit. A
-  lower value impacts performance because of the synchronous locking, even if
-  there are no or only a few inserts/updates, and it wastes disk space with
-  each commit call.
-
- > For data retrieval `arangosearch` Views follow the concept of
- > "eventually-consistent", i.e. eventually all the data in ArangoDB is
- > matched by corresponding query expressions.
- > The concept of `arangosearch` View "commit" operation is introduced to
- > control the upper-bound on the time until document addition/removals are
- > actually reflected by corresponding query expressions.
- > Once a "commit" operation is complete, all documents added/removed prior to
- > the start of the "commit" operation are reflected by queries invoked in
- > subsequent ArangoDB transactions. In-progress ArangoDB transactions
- > still continue to return a repeatable-read state.
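-
-  For example, the following arangosh sketch changes the commit interval of a
-  hypothetical existing View to five seconds:
-
-  ```js
-  db._view("myView").properties({ commitIntervalMsec: 5000 });
-  ```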
-
-- **consolidationIntervalMsec** (_optional_; type: `integer`; default: `1000`;
- to disable use: `0`)
-
- Wait at least this many milliseconds between applying `consolidationPolicy` to
- consolidate View data store and possibly release space on the filesystem.
-
-  If there are a lot of data modification operations, a higher value could
-  potentially make the data store consume more space and file handles. If
-  there are only a few data modification operations, a lower value impacts
-  performance because no segment candidates are available for consolidation.
-
- > For data modification `arangosearch` Views follow the concept of a
- > "versioned data store". Thus old versions of data may be removed once there
- > are no longer any users of the old data. The frequency of the cleanup and
- > compaction operations are governed by `consolidationIntervalMsec` and the
- > candidates for compaction are selected via `consolidationPolicy`.
-
-ArangoSearch performs operations in its index based on numerous writer
-objects that are mapped to processed segments. You can control the memory
-used by these writers (the "writers pool") with the `writebuffer*` properties
-of a View.
-
-- **writebufferIdle** (_optional_; type: `integer`; default: `64`;
- to disable use: `0`; _immutable_)
-
- Maximum number of writers (segments) cached in the pool.
-
-- **writebufferActive** (_optional_; type: `integer`; default: `0`;
- to disable use: `0`; _immutable_)
-
-  Maximum number of concurrent active writers (segments) that perform a
-  transaction. Other writers (segments) wait until the current active writers
-  (segments) finish.
-
-- **writebufferSizeMax** (_optional_; type: `integer`; default: `33554432`;
- to disable use: `0`; _immutable_)
-
-  Maximum memory byte size per writer (segment) before a writer (segment)
-  flush is triggered. A value of `0` turns off this limit for any writer
-  (buffer), and data is flushed periodically. Use `0` carefully because of the
-  potentially high memory consumption.
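-
-  Because the `writebuffer*` properties are immutable, you can only set them
-  on View creation, as in the following sketch with an assumed limit of
-  16 MiB per writer:
-
-  ```js
-  db._createView("myView", "arangosearch", { writebufferSizeMax: 16777216 });
-  ```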
-
-- **consolidationPolicy** (_optional_; type: `object`; default: `{}`)
-
- The consolidation policy to apply for selecting data store segment merge
- candidates.
-
- > With each ArangoDB transaction that inserts documents, one or more
- > ArangoSearch internal segments gets created. Similarly, for removed
- > documents the segments containing such documents have these documents
- > marked as "deleted". Over time, this approach causes a lot of small and
- > sparse segments to be created. A **consolidation** operation selects one or
- > more segments and copies all of their valid documents into a single new
- > segment, thereby allowing the search algorithm to perform more optimally and
- > for extra file handles to be released once old segments are no longer used.
-
- - **type** (_optional_; type: `string`; default: `"tier"`)
-
-    The segment candidates for the "consolidation" operation are selected
-    based on several possible configurable formulas, as defined by their
-    types. The currently supported types are:
-
- - `"bytes_accum"`: Consolidation is performed based on current memory
- consumption of segments and `threshold` property value.
- - `"tier"`: Consolidate based on segment byte size and live document count
- as dictated by the customization attributes.
-
- {{< warning >}}
- The "bytes_accum" policy type is deprecated and remains in ArangoSearch for backwards
- compatibility with the older versions. Please make sure to always use the `tier` policy
- instead.
- {{< /warning >}}
-
- `consolidationPolicy` properties for `"bytes_accum"` type:
-
- - **threshold** (_optional_; type: `float`; default: `0.1`)
-
-    Defines a threshold value in the possible range of `[0.0, 1.0]`.
-    Consolidation is performed on segments whose accumulated size in bytes is
-    less than the byte size of all segments multiplied by the `threshold`,
-    i.e. the following formula is applied for each segment:
-    `{threshold} > (segment_bytes + sum_of_merge_candidate_segment_bytes) / all_segment_bytes`.
-
- `consolidationPolicy` properties for `"tier"` type:
-
- - **segmentsMin** (_optional_; type: `integer`; default: `1`)
-
- The minimum number of segments that are evaluated as candidates for consolidation.
-
- - **segmentsMax** (_optional_; type: `integer`; default: `10`)
-
- The maximum number of segments that are evaluated as candidates for consolidation.
-
- - **segmentsBytesMax** (_optional_; type: `integer`; default: `5368709120`)
-
- Maximum allowed size of all consolidated segments in bytes.
-
- - **segmentsBytesFloor** (_optional_; type: `integer`; default: `2097152`)
-
-    Defines the byte size below which all smaller segments are treated as
-    equal for consolidation selection.
-
- - **minScore** (_optional_; type: `integer`; default: `0`)
-
- Filter out consolidation candidates with a score less than this.
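-
-For example, the following arangosh sketch applies a `tier` consolidation
-policy with custom segment limits to a hypothetical existing View:
-
-```js
-db._view("myView").properties({
-  consolidationPolicy: {
-    type: "tier",
-    segmentsMin: 1,
-    segmentsMax: 12,
-    segmentsBytesMax: 5368709120,
-    segmentsBytesFloor: 2097152
-  }
-});
-```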
diff --git a/site/content/3.10/index-and-search/arangosearch/geospatial-search.md b/site/content/3.10/index-and-search/arangosearch/geospatial-search.md
deleted file mode 100644
index 996e1f9eb0..0000000000
--- a/site/content/3.10/index-and-search/arangosearch/geospatial-search.md
+++ /dev/null
@@ -1,632 +0,0 @@
----
-title: Geospatial Search with ArangoSearch
-menuTitle: Geospatial search
-weight: 55
-description: >-
- ArangoSearch supports geospatial queries like finding locations and GeoJSON shapes within a radius or area
----
-ArangoSearch can accelerate various types of geospatial queries for data that
-is indexed by a View. The regular [geospatial index](../indexing/working-with-indexes/geo-spatial-indexes.md) can do
-most of this too, but ArangoSearch allows you to combine geospatial requests
-with other kinds of searches, like full-text search.
-
-## Creating geospatial Analyzers
-
-Geospatial data that can be indexed:
-
-- GeoJSON features such as Points and Polygons
- (with coordinates in `[longitude, latitude]` order), for example:
-
- ```json
- {
- "location": {
- "type": "Point",
- "coordinates": [ -73.983, 40.764 ]
- }
- }
- ```
-
-- Coordinates using an array with two numbers in `[longitude, latitude]` order,
- for example:
-
- ```json
- {
- "location": [ -73.983, 40.764 ]
- }
- ```
-
-- Coordinates using an array with two numbers in `[latitude, longitude]` order,
- for example:
-
- ```json
- {
- "location": [ 40.764, -73.983 ]
- }
- ```
-
-- Coordinates using two separate numeric attributes, for example:
-
- ```json
- {
- "location": {
- "lat": 40.764,
- "lng": -73.983
- }
- }
- ```
-
-You need to create Geo Analyzers manually. There are no pre-configured
-(built-in) Geo Analyzers.
-
-- For GeoJSON features or coordinate arrays in `[longitude, latitude]` order,
-  the data needs to be pre-processed with a `geojson` or `geo_s2` Analyzer.
-
-- For coordinate arrays in `[latitude, longitude]` order or coordinate pairs
-  using separate attributes, you need to use a `geopoint` Analyzer.
-
-**Custom Analyzers:**
-
-Create a `geojson` Analyzer in arangosh to pre-process arbitrary
-GeoJSON features or `[longitude, latitude]` arrays.
-The default properties are usually what you want, therefore an empty object
-is passed. No [Analyzer features](../analyzers.md#analyzer-features) are set
-because they cannot be utilized for Geo Analyzers:
-
-```js
-//db._useDatabase("your_database"); // Analyzer will be created in current database
-var analyzers = require("@arangodb/analyzers");
-analyzers.save("geojson", "geojson", {}, []);
-```
-
-See [`geojson` Analyzer](../analyzers.md#geojson) for details.
-
-{{< tip >}}
-In the Enterprise Edition, you can use the `geo_s2` Analyzer instead of the
-`geojson` Analyzer to more efficiently index geo-spatial data. It is mostly a
-drop-in replacement, but you can choose between different binary formats. See
-[Analyzers](../analyzers.md#geo_s2) for details.
-{{< /tip >}}
-
-Create a `geopoint` Analyzer in arangosh using the default properties
-(empty object) to pre-process coordinate arrays in `[latitude, longitude]` order.
-No [Analyzer features](../analyzers.md#analyzer-features) are set as they cannot
-be utilized for Geo Analyzers:
-
-```js
-//db._useDatabase("your_database"); // Analyzer will be created in current database
-var analyzers = require("@arangodb/analyzers");
-analyzers.save("geo_pair", "geopoint", {}, []);
-```
-
-Create a `geopoint` Analyzer in arangosh to pre-process coordinates with
-latitude and longitude stored in two different attributes. These attributes
-cannot be at the top-level of the document, but must be nested in an object,
-e.g. `{ location: { lat: 40.78, lng: -73.97 } }`. The path relative to the
-parent attribute (here: `location`) needs to be described in the Analyzer
-properties for each of the coordinate attributes.
-No [Analyzer features](../analyzers.md#analyzer-features) are set as they cannot
-be utilized for Geo Analyzers:
-
-```js
-//db._useDatabase("your_database"); // Analyzer will be created in current database
-var analyzers = require("@arangodb/analyzers");
-analyzers.save("geo_latlng", "geopoint", { latitude: ["lat"], longitude: ["lng"] }, []);
-```
-
-## Using the example dataset
-
-Load the dataset into an ArangoDB instance and create a View named
-`restaurantsView` (`arangosearch`) or `restaurantsViewAlias` (`search-alias`)
-as described below:
-
-**Dataset:** [Demo Geo S2 dataset](example-datasets.md#demo-geo-s2-dataset)
-
-### View definition
-
-{{< tabs "view-definition">}}
-
-{{< tab "`search-alias` View" >}}
-```js
-db.restaurants.ensureIndex({ name: "inv-rest", type: "inverted", fields: [ { name: "location", analyzer: "geojson" } ] });
-db.neighborhoods.ensureIndex({ name: "inv-hood", type: "inverted", fields: [ "name", { name: "geometry", analyzer: "geojson" } ] });
-db._createView("restaurantsViewAlias", "search-alias", { indexes: [
- { collection: "restaurants", index: "inv-rest" },
- { collection: "neighborhoods", index: "inv-hood" }
-] });
-```
-{{< /tab >}}
-
-{{< tab "`arangosearch` View" >}}
-```json
-{
- "links": {
- "restaurants": {
- "fields": {
- "name": {
- "analyzers": [
- "identity"
- ]
- },
- "location": {
- "analyzers": [
- "geojson"
- ]
- }
- }
- },
- "neighborhoods": {
- "fields": {
- "name": {
- "analyzers": [
- "identity"
- ]
- },
- "geometry": {
- "analyzers": [
- "geojson"
- ]
- }
- }
- }
- }
-}
-```
-{{< /tab >}}
-
-{{< /tabs >}}
-
-## Search for points within a radius
-
-Using the Museum of Modern Arts as reference location, find restaurants within
-a 100 meter radius. Return the matches sorted by distance and include how far
-away they are from the reference point in the result.
-
-{{< tabs "view-definition">}}
-
-{{< tab "`search-alias` View" >}}
-```aql
-LET moma = GEO_POINT(-73.983, 40.764)
-FOR doc IN restaurantsViewAlias
- SEARCH GEO_DISTANCE(doc.location, moma) < 100
- LET distance = GEO_DISTANCE(doc.location, moma)
- SORT distance
- RETURN {
- geometry: doc.location,
- distance
- }
-```
-{{< /tab >}}
-
-{{< tab "`arangosearch` View" >}}
-```aql
-LET moma = GEO_POINT(-73.983, 40.764)
-FOR doc IN restaurantsView
- SEARCH ANALYZER(GEO_DISTANCE(doc.location, moma) < 100, "geojson")
- LET distance = GEO_DISTANCE(doc.location, moma)
- SORT distance
- RETURN {
- geometry: doc.location,
- distance
- }
-```
-{{< /tab >}}
-
-{{< /tabs >}}
-
-Search for restaurants with `Cafe` in their name within a radius of 1000 meters
-and return the ten closest matches:
-
-{{< tabs "view-definition">}}
-
-{{< tab "`search-alias` View" >}}
-```aql
-LET moma = GEO_POINT(-73.983, 40.764)
-FOR doc IN restaurantsViewAlias
- SEARCH LIKE(doc.name, "%Cafe%")
- AND GEO_DISTANCE(doc.location, moma) < 1000
- LET distance = GEO_DISTANCE(doc.location, moma)
- SORT distance
- LIMIT 10
- RETURN {
- geometry: doc.location,
- name: doc.name,
- distance
- }
-```
-{{< /tab >}}
-
-{{< tab "`arangosearch` View" >}}
-```aql
-LET moma = GEO_POINT(-73.983, 40.764)
-FOR doc IN restaurantsView
- SEARCH LIKE(doc.name, "%Cafe%")
- AND ANALYZER(GEO_DISTANCE(doc.location, moma) < 1000, "geojson")
- LET distance = GEO_DISTANCE(doc.location, moma)
- SORT distance
- LIMIT 10
- RETURN {
- geometry: doc.location,
- name: doc.name,
- distance
- }
-```
-{{< /tab >}}
-
-{{< /tabs >}}
-
-## Search for points within a polygon
-
-First off, search for the neighborhood `Upper West Side` in a subquery and
-return its GeoJSON Polygon. Then search for restaurants that are contained
-in this polygon and return them together with the polygon itself:
-
-{{< tabs "view-definition">}}
-
-{{< tab "`search-alias` View" >}}
-```aql
-LET upperWestSide = FIRST(
- FOR doc IN restaurantsViewAlias
- SEARCH doc.name == "Upper West Side"
- RETURN doc.geometry
-)
-FOR result IN PUSH(
- FOR doc IN restaurantsViewAlias
- SEARCH GEO_CONTAINS(upperWestSide, doc.location)
- RETURN doc.location,
- upperWestSide
-)
- RETURN result
-```
-{{< /tab >}}
-
-{{< tab "`arangosearch` View" >}}
-```aql
-LET upperWestSide = FIRST(
- FOR doc IN restaurantsView
- SEARCH doc.name == "Upper West Side"
- RETURN doc.geometry
-)
-FOR result IN PUSH(
- FOR doc IN restaurantsView
- SEARCH ANALYZER(GEO_CONTAINS(upperWestSide, doc.location), "geojson")
- RETURN doc.location,
- upperWestSide
-)
- RETURN result
-```
-{{< /tab >}}
-
-{{< /tabs >}}
-
-
-
-You do not have to look up the polygon; you can also provide one inline.
-It is also not necessary to return the polygon; you can return only the matches:
-
-{{< tabs "view-definition">}}
-
-{{< tab "`search-alias` View" >}}
-```aql
-LET upperWestSide = {
- "coordinates": [
- [
- [-73.9600301843709, 40.79803810789689], [-73.96147779901374, 40.79865415643638],
- [-73.96286980146162, 40.79923967661966], [-73.96571144280439, 40.80043806998765],
- [-73.96775900356977, 40.80130351598543], [-73.967873797219, 40.801351698184384],
- [-73.96798415954912, 40.80139826627661], [-73.9685836074654, 40.80163547026629],
- [-73.9700474216526, 40.8022650103398], [-73.97021594793262, 40.80233585148787],
- [-73.9702740046587, 40.80235903699402], [-73.97032589767365, 40.802384560870536],
- [-73.97150381003809, 40.80283773617443], [-73.97250022180596, 40.80321661262814],
- [-73.97257779806121, 40.803247183771205], [-73.9726574474597, 40.803276514162725],
- [-73.9727990775659, 40.803329159656634], [-73.97287179081519, 40.80335618764528],
- [-73.97286807262155, 40.80332073417601], [-73.97292317001968, 40.80324982284384],
- [-73.97303522902072, 40.803073973741895], [-73.97307070537943, 40.8030555484474],
- [-73.97312859120993, 40.80297471550862], [-73.97320518045738, 40.802830058276285],
- [-73.97336671067801, 40.80263011334645], [-73.9735422772157, 40.802356411250294],
- [-73.97361322463904, 40.80229685988716], [-73.97372060455763, 40.80216781528022],
- [-73.97377477246009, 40.802045845948555], [-73.97389989052462, 40.80188986353119],
- [-73.97394098366709, 40.801809025361415], [-73.97414361823468, 40.80151689534114],
- [-73.97448722520987, 40.801057428896804], [-73.97466556722725, 40.80081351473415],
- [-73.97483924436003, 40.800558243262664], [-73.97496436808184, 40.80036963419388],
- [-73.97508668067891, 40.800189533632995], [-73.97526783234453, 40.79993284953172],
- [-73.97554385822018, 40.79952825063732], [-73.97576219486481, 40.79926017354928],
- [-73.97579140130195, 40.799209627886206], [-73.97578169321191, 40.799156256780286],
- [-73.97583055785255, 40.799092280410655], [-73.97628527884252, 40.798435235410054],
- [-73.97639951930574, 40.79827321018994], [-73.97685207845194, 40.79763134839318],
- [-73.97779475503205, 40.79628462977189], [-73.97828949152755, 40.79558676046088],
- [-73.9792882208298, 40.79417763216331], [-73.98004684398002, 40.79311352516391],
- [-73.98013222578662, 40.79299376538315], [-73.98043434256003, 40.79259428309673],
- [-73.980537679464, 40.79245681262498], [-73.9809470835408, 40.79186789327993],
- [-73.9812893737095, 40.79137550943302], [-73.98139174468567, 40.79122824550256],
- [-73.98188645946546, 40.79051658043251], [-73.98234664147564, 40.789854780772636],
- [-73.98369521623664, 40.7879152813127], [-73.98461942858408, 40.78658601634982],
- [-73.98513520970327, 40.7865876183929], [-73.98581237352651, 40.78661686535709],
- [-73.98594956155667, 40.786487113463934], [-73.98594285499497, 40.78645284788032],
- [-73.98589227228327, 40.78642652901958], [-73.98574378790113, 40.78657008235215],
- [-73.98465507883022, 40.78653474180794], [-73.98493597820354, 40.786130720650974],
- [-73.98501696406497, 40.78601423719563], [-73.98513163631205, 40.786060297019965],
- [-73.98516263994709, 40.78602099926717], [-73.98521103560748, 40.78603955488367],
- [-73.98520511880037, 40.78604766921276], [-73.98561523764435, 40.78621964571528],
- [-73.98567072256468, 40.786242911993796], [-73.98563627093053, 40.786290150146684],
- [-73.9856828719225, 40.78630978621313], [-73.98576566029752, 40.786196274858625],
- [-73.98571752900456, 40.78617599466878], [-73.98568388524215, 40.78622212391968],
- [-73.9852400328845, 40.786035858136785], [-73.98528177918672, 40.785978620950054],
- [-73.98524962933016, 40.78596313985583], [-73.98524273937655, 40.78597257215073],
- [-73.98525509797992, 40.78597620551157], [-73.98521621867376, 40.786030501313824],
- [-73.98517050238989, 40.786013334158156], [-73.98519883325449, 40.785966552197756],
- [-73.98508113773205, 40.785921935110444], [-73.98542932126942, 40.78541394218462],
- [-73.98609763784941, 40.786058225697936], [-73.98602727478911, 40.78622896423671],
- [-73.98607113521973, 40.78624070602659], [-73.98612842248588, 40.78623900133112],
- [-73.98616462880626, 40.786121882448306], [-73.98649778102359, 40.78595120288725],
- [-73.98711901394266, 40.78521031850151], [-73.98707234561569, 40.78518963831753],
- [-73.98645586240198, 40.7859192190814], [-73.98617270496544, 40.786068452258675],
- [-73.98546581197026, 40.78536070057543], [-73.98561580396368, 40.78514186259503],
- [-73.98611372725294, 40.78443891187735], [-73.98625187003543, 40.78423876424543],
- [-73.98647480368592, 40.783916573718706], [-73.98787728725019, 40.78189205083216],
- [-73.98791968459247, 40.78183347771321], [-73.98796079284213, 40.781770987031514],
- [-73.98801805997222, 40.78163418881042], [-73.98807644914505, 40.78165093500162],
- [-73.9881002938246, 40.78160287830527], [-73.98804128806725, 40.78158596085119],
- [-73.98812746102577, 40.78140179644223], [-73.98799363156404, 40.78134281734761],
- [-73.98746219857024, 40.7811086095956], [-73.98741432690473, 40.7810875110951],
- [-73.98736363983177, 40.78106280511045], [-73.98730772854313, 40.781041303287786],
- [-73.98707137465644, 40.78090638159226], [-73.98654378951805, 40.780657980791055],
- [-73.98567936117642, 40.78031263333493], [-73.98536952677372, 40.781078372362586],
- [-73.98507184345014, 40.781779680969194], [-73.9835260146705, 40.781130011022704],
- [-73.98232616371553, 40.78062377270337], [-73.9816278736105, 40.780328934969766],
- [-73.98151911347311, 40.78028175751621], [-73.98140948736065, 40.780235418619405],
- [-73.98067365344895, 40.7799251824873], [-73.97783054404911, 40.77872973181589],
- [-73.97499744020544, 40.777532546222], [-73.97453231422314, 40.77816778452296],
- [-73.97406668257638, 40.77880541672153], [-73.97357117423289, 40.7794778616211],
- [-73.9717230555586, 40.78202147595964], [-73.97122292220932, 40.782706256089995],
- [-73.97076013116715, 40.783340137553594], [-73.97030068162124, 40.78397541394847],
- [-73.96983225556819, 40.7846109105862], [-73.96933573318945, 40.78529327955705],
- [-73.96884378957469, 40.78596738856434], [-73.96838479313664, 40.78659569652393],
- [-73.96792696354466, 40.78722157112602], [-73.96744908373155, 40.78786072059045],
- [-73.96700977073398, 40.78847679074218], [-73.96655226678917, 40.78910715282553],
- [-73.96609500572444, 40.78973438976665], [-73.96562799538655, 40.790366117129004],
- [-73.96517705499011, 40.79099034109932], [-73.96468540739478, 40.79166402679883],
- [-73.9641759852132, 40.79236204502772], [-73.96371096541819, 40.79301293488322],
- [-73.96280590635729, 40.79423581323211], [-73.96235980150668, 40.79485206056065],
- [-73.96189985460951, 40.79547927006112], [-73.96144060655736, 40.79611082718394],
- [-73.96097971807933, 40.79673864404529], [-73.96052271669541, 40.797368469462334],
- [-73.9600301843709, 40.79803810789689]
- ]
- ],
- "type": "Polygon"
-}
-FOR doc IN restaurantsViewAlias
- SEARCH GEO_CONTAINS(upperWestSide, doc.location)
- RETURN doc.location
-```
-{{< /tab >}}
-
-{{< tab "`arangosearch` View" >}}
-```aql
-LET upperWestSide = {
- "coordinates": [
- [
- [-73.9600301843709, 40.79803810789689], [-73.96147779901374, 40.79865415643638],
- [-73.96286980146162, 40.79923967661966], [-73.96571144280439, 40.80043806998765],
- [-73.96775900356977, 40.80130351598543], [-73.967873797219, 40.801351698184384],
- [-73.96798415954912, 40.80139826627661], [-73.9685836074654, 40.80163547026629],
- [-73.9700474216526, 40.8022650103398], [-73.97021594793262, 40.80233585148787],
- [-73.9702740046587, 40.80235903699402], [-73.97032589767365, 40.802384560870536],
- [-73.97150381003809, 40.80283773617443], [-73.97250022180596, 40.80321661262814],
- [-73.97257779806121, 40.803247183771205], [-73.9726574474597, 40.803276514162725],
- [-73.9727990775659, 40.803329159656634], [-73.97287179081519, 40.80335618764528],
- [-73.97286807262155, 40.80332073417601], [-73.97292317001968, 40.80324982284384],
- [-73.97303522902072, 40.803073973741895], [-73.97307070537943, 40.8030555484474],
- [-73.97312859120993, 40.80297471550862], [-73.97320518045738, 40.802830058276285],
- [-73.97336671067801, 40.80263011334645], [-73.9735422772157, 40.802356411250294],
- [-73.97361322463904, 40.80229685988716], [-73.97372060455763, 40.80216781528022],
- [-73.97377477246009, 40.802045845948555], [-73.97389989052462, 40.80188986353119],
- [-73.97394098366709, 40.801809025361415], [-73.97414361823468, 40.80151689534114],
- [-73.97448722520987, 40.801057428896804], [-73.97466556722725, 40.80081351473415],
- [-73.97483924436003, 40.800558243262664], [-73.97496436808184, 40.80036963419388],
- [-73.97508668067891, 40.800189533632995], [-73.97526783234453, 40.79993284953172],
- [-73.97554385822018, 40.79952825063732], [-73.97576219486481, 40.79926017354928],
- [-73.97579140130195, 40.799209627886206], [-73.97578169321191, 40.799156256780286],
- [-73.97583055785255, 40.799092280410655], [-73.97628527884252, 40.798435235410054],
- [-73.97639951930574, 40.79827321018994], [-73.97685207845194, 40.79763134839318],
- [-73.97779475503205, 40.79628462977189], [-73.97828949152755, 40.79558676046088],
- [-73.9792882208298, 40.79417763216331], [-73.98004684398002, 40.79311352516391],
- [-73.98013222578662, 40.79299376538315], [-73.98043434256003, 40.79259428309673],
- [-73.980537679464, 40.79245681262498], [-73.9809470835408, 40.79186789327993],
- [-73.9812893737095, 40.79137550943302], [-73.98139174468567, 40.79122824550256],
- [-73.98188645946546, 40.79051658043251], [-73.98234664147564, 40.789854780772636],
- [-73.98369521623664, 40.7879152813127], [-73.98461942858408, 40.78658601634982],
- [-73.98513520970327, 40.7865876183929], [-73.98581237352651, 40.78661686535709],
- [-73.98594956155667, 40.786487113463934], [-73.98594285499497, 40.78645284788032],
- [-73.98589227228327, 40.78642652901958], [-73.98574378790113, 40.78657008235215],
- [-73.98465507883022, 40.78653474180794], [-73.98493597820354, 40.786130720650974],
- [-73.98501696406497, 40.78601423719563], [-73.98513163631205, 40.786060297019965],
- [-73.98516263994709, 40.78602099926717], [-73.98521103560748, 40.78603955488367],
- [-73.98520511880037, 40.78604766921276], [-73.98561523764435, 40.78621964571528],
- [-73.98567072256468, 40.786242911993796], [-73.98563627093053, 40.786290150146684],
- [-73.9856828719225, 40.78630978621313], [-73.98576566029752, 40.786196274858625],
- [-73.98571752900456, 40.78617599466878], [-73.98568388524215, 40.78622212391968],
- [-73.9852400328845, 40.786035858136785], [-73.98528177918672, 40.785978620950054],
- [-73.98524962933016, 40.78596313985583], [-73.98524273937655, 40.78597257215073],
- [-73.98525509797992, 40.78597620551157], [-73.98521621867376, 40.786030501313824],
- [-73.98517050238989, 40.786013334158156], [-73.98519883325449, 40.785966552197756],
- [-73.98508113773205, 40.785921935110444], [-73.98542932126942, 40.78541394218462],
- [-73.98609763784941, 40.786058225697936], [-73.98602727478911, 40.78622896423671],
- [-73.98607113521973, 40.78624070602659], [-73.98612842248588, 40.78623900133112],
- [-73.98616462880626, 40.786121882448306], [-73.98649778102359, 40.78595120288725],
- [-73.98711901394266, 40.78521031850151], [-73.98707234561569, 40.78518963831753],
- [-73.98645586240198, 40.7859192190814], [-73.98617270496544, 40.786068452258675],
- [-73.98546581197026, 40.78536070057543], [-73.98561580396368, 40.78514186259503],
- [-73.98611372725294, 40.78443891187735], [-73.98625187003543, 40.78423876424543],
- [-73.98647480368592, 40.783916573718706], [-73.98787728725019, 40.78189205083216],
- [-73.98791968459247, 40.78183347771321], [-73.98796079284213, 40.781770987031514],
- [-73.98801805997222, 40.78163418881042], [-73.98807644914505, 40.78165093500162],
- [-73.9881002938246, 40.78160287830527], [-73.98804128806725, 40.78158596085119],
- [-73.98812746102577, 40.78140179644223], [-73.98799363156404, 40.78134281734761],
- [-73.98746219857024, 40.7811086095956], [-73.98741432690473, 40.7810875110951],
- [-73.98736363983177, 40.78106280511045], [-73.98730772854313, 40.781041303287786],
- [-73.98707137465644, 40.78090638159226], [-73.98654378951805, 40.780657980791055],
- [-73.98567936117642, 40.78031263333493], [-73.98536952677372, 40.781078372362586],
- [-73.98507184345014, 40.781779680969194], [-73.9835260146705, 40.781130011022704],
- [-73.98232616371553, 40.78062377270337], [-73.9816278736105, 40.780328934969766],
- [-73.98151911347311, 40.78028175751621], [-73.98140948736065, 40.780235418619405],
- [-73.98067365344895, 40.7799251824873], [-73.97783054404911, 40.77872973181589],
- [-73.97499744020544, 40.777532546222], [-73.97453231422314, 40.77816778452296],
- [-73.97406668257638, 40.77880541672153], [-73.97357117423289, 40.7794778616211],
- [-73.9717230555586, 40.78202147595964], [-73.97122292220932, 40.782706256089995],
- [-73.97076013116715, 40.783340137553594], [-73.97030068162124, 40.78397541394847],
- [-73.96983225556819, 40.7846109105862], [-73.96933573318945, 40.78529327955705],
- [-73.96884378957469, 40.78596738856434], [-73.96838479313664, 40.78659569652393],
- [-73.96792696354466, 40.78722157112602], [-73.96744908373155, 40.78786072059045],
- [-73.96700977073398, 40.78847679074218], [-73.96655226678917, 40.78910715282553],
- [-73.96609500572444, 40.78973438976665], [-73.96562799538655, 40.790366117129004],
- [-73.96517705499011, 40.79099034109932], [-73.96468540739478, 40.79166402679883],
- [-73.9641759852132, 40.79236204502772], [-73.96371096541819, 40.79301293488322],
- [-73.96280590635729, 40.79423581323211], [-73.96235980150668, 40.79485206056065],
- [-73.96189985460951, 40.79547927006112], [-73.96144060655736, 40.79611082718394],
- [-73.96097971807933, 40.79673864404529], [-73.96052271669541, 40.797368469462334],
- [-73.9600301843709, 40.79803810789689]
- ]
- ],
- "type": "Polygon"
-}
-FOR doc IN restaurantsView
- SEARCH ANALYZER(GEO_CONTAINS(upperWestSide, doc.location), "geojson")
- RETURN doc.location
-```
-{{< /tab >}}
-
-{{< /tabs >}}
-
-## Search for polygons within polygons
-
-Define a GeoJSON polygon that is a rectangle, then search for neighborhoods
-that are fully contained in this area:
-
-{{< tabs "view-definition">}}
-
-{{< tab "`search-alias` View" >}}
-```aql
-LET sides = {
- left: -74,
- top: 40.8,
- right: -73.93,
- bottom: 40.76
-}
-
-LET rect = GEO_POLYGON([
- [sides.left, sides.bottom],
- [sides.right, sides.bottom],
- [sides.right, sides.top],
- [sides.left, sides.top],
- [sides.left, sides.bottom]
-])
-
-FOR result IN PUSH(
- FOR doc IN restaurantsViewAlias
- SEARCH GEO_CONTAINS(rect, doc.geometry)
- RETURN doc.geometry,
- rect
-)
- RETURN result
-```
-{{< /tab >}}
-
-{{< tab "`arangosearch` View" >}}
-```aql
-LET sides = {
- left: -74,
- top: 40.8,
- right: -73.93,
- bottom: 40.76
-}
-
-LET rect = GEO_POLYGON([
- [sides.left, sides.bottom],
- [sides.right, sides.bottom],
- [sides.right, sides.top],
- [sides.left, sides.top],
- [sides.left, sides.bottom]
-])
-
-FOR result IN PUSH(
- FOR doc IN restaurantsView
- SEARCH ANALYZER(GEO_CONTAINS(rect, doc.geometry), "geojson")
- RETURN doc.geometry,
- rect
-)
- RETURN result
-```
-{{< /tab >}}
-
-{{< /tabs >}}
-
-
-
-Searching for geo features in a rectangle is something you can use together
-with an interactive map on which the user can select the area of interest.
-Take a look at the lunch break video about the
-[ArangoBnB demo project](https://www.youtube.com/watch?v=ec-X9PA3DJc) to learn more.
-
-## Search for polygons intersecting polygons
-
-Define a GeoJSON polygon that is a rectangle, then search for neighborhoods
-that intersect with this area:
-
-{{< tabs "view-definition">}}
-
-{{< tab "`search-alias` View" >}}
-```aql
-LET sides = {
- left: -74,
- top: 40.8,
- right: -73.93,
- bottom: 40.76
-}
-
-LET rect = GEO_POLYGON([
- [sides.left, sides.bottom],
- [sides.right, sides.bottom],
- [sides.right, sides.top],
- [sides.left, sides.top],
- [sides.left, sides.bottom]
-])
-
-FOR result IN PUSH(
- FOR doc IN restaurantsViewAlias
- SEARCH GEO_INTERSECTS(rect, doc.geometry)
- RETURN doc.geometry,
- rect
-)
- RETURN result
-```
-{{< /tab >}}
-
-{{< tab "`arangosearch` View" >}}
-```aql
-LET sides = {
- left: -74,
- top: 40.8,
- right: -73.93,
- bottom: 40.76
-}
-
-LET rect = GEO_POLYGON([
- [sides.left, sides.bottom],
- [sides.right, sides.bottom],
- [sides.right, sides.top],
- [sides.left, sides.top],
- [sides.left, sides.bottom]
-])
-
-FOR result IN PUSH(
- FOR doc IN restaurantsView
- SEARCH ANALYZER(GEO_INTERSECTS(rect, doc.geometry), "geojson")
- RETURN doc.geometry,
- rect
-)
- RETURN result
-```
-{{< /tab >}}
-
-{{< /tabs >}}
-
-
diff --git a/site/content/3.10/index-and-search/arangosearch/nested-search.md b/site/content/3.10/index-and-search/arangosearch/nested-search.md
deleted file mode 100644
index f126935cf6..0000000000
--- a/site/content/3.10/index-and-search/arangosearch/nested-search.md
+++ /dev/null
@@ -1,321 +0,0 @@
----
-title: Nested search with ArangoSearch
-menuTitle: Nested search
-weight: 65
-description: >-
- You can search for nested objects in arrays that satisfy multiple conditions
- each, and define how often these conditions should be fulfilled for the entire
- array
----
-{{< tag "ArangoDB Enterprise Edition" "ArangoGraph" >}}
-
-By default, `arangosearch` Views index arrays as if the parent attribute had
-multiple values at once. This is also supported for `search-alias` Views by enabling
-the `searchField` option. With `trackListPositions` set to `true`, every array
-element is indexed individually and can be queried separately using the
-respective array index. With the nested search feature, you get another
-option for indexing arrays, in particular nested objects in arrays.
-
-You can let the View index the sub-objects in a way that lets you query for
-co-occurring values. For example, you can search the sub-objects and all the
-conditions need to be met by a single sub-object instead of across all of them.
-
-## Using nested search
-
-Consider the following document:
-
-```json
-{
- "dimensions": [
- { "type": "height", "value": 35 },
- { "type": "width", "value": 60 }
- ]
-}
-```
-
-{{< tabs "view-definition">}}
-
-{{< tab "`search-alias` View" >}}
-You would normally index the `dimensions.type` and `dimensions.value` fields
-with an inverted index and then use it via a `search-alias` View, in arangosh:
-
-```js
-db.<collection>.ensureIndex({
- name: "inv-idx",
- type: "inverted",
- searchField: true,
- fields: [
- "dimensions.type",
- "dimensions.value"
- ]
-});
-
-db._createView("viewName", "search-alias", { indexes: [
- { collection: "", index: "inv-idx" }
-]});
-```
-{{< /tab >}}
-
-{{< tab "`arangosearch` View" >}}
-You would normally index the `dimensions` field and its sub-fields with an
-`arangosearch` View definition like the following:
-
-```json
-{
- "links": {
- "": {
- "fields": {
- "dimensions": {
- "fields": {
- "type": {},
- "value": {}
- }
- }
- }
- }
- },
- ...
-}
-```
-{{< /tab >}}
-
-{{< /tabs >}}
-
-You might then write a query like the following to find documents where the
-height is greater than 40:
-
-```aql
-FOR doc IN viewName
- SEARCH doc.dimensions.type == "height" AND doc.dimensions.value > 40
- RETURN doc
-```
-
-This query matches the above document despite the height only being 35. The reason is
-that each condition is true for at least one of the nested objects. There is no
-check whether both conditions are true for the same object, however. You could
-add a `FILTER` statement to remove false-positive matches from the search
-results, but it is cumbersome to check the conditions again, for every sub-object:
-
-```aql
-FOR doc IN viewName
- SEARCH doc.dimensions.type == "height" AND doc.dimensions.value > 40
- FILTER LENGTH(doc.dimensions[* FILTER CURRENT.type == "height" AND CURRENT.value > 40]) > 0
- RETURN doc
-```
-
-The nested search feature allows you to condense the query while utilizing the
-View index:
-
-```aql
-FOR doc IN viewName
- SEARCH doc.dimensions[? FILTER CURRENT.type == "height" AND CURRENT.value > 40]
- RETURN doc
-```
-
-{{< tabs "view-definition">}}
-
-{{< tab "`search-alias` View" >}}
-The required inverted index definition for using a `search-alias` View
-to perform nested searches needs to index the parent `dimensions` field, as well
-as the nested attributes using the `nested` property under the `fields` property:
-
-```js
-db.<collection>.ensureIndex({
- name: "inv-nest",
- type: "inverted",
- fields: [
- {
- name: "dimensions",
- nested: [
- { name: "type" },
- { name: "value" }
- ]
- }
- ]
-});
-
-db._createView("viewName", "search-alias", { indexes: [
- { collection: "", index: "inv-nest" }
-]});
-```
-{{< /tab >}}
-
-{{< tab "`arangosearch` View" >}}
-The required `arangosearch` View definition for this to work is as follows:
-
-```json
-{
- "links": {
- "": {
- "fields": {
- "dimensions": {
- "nested": {
- "type": {},
- "value": {}
- }
- }
- }
- }
- }
-}
-```
-
-Note the usage of a `nested` property instead of a `fields` property.
-{{< /tab >}}
-
-{{< /tabs >}}
-
-This configures the View to index the objects in the `dimensions` array so that
-you can use the [Question mark operator](../../aql/operators.md#question-mark-operator)
-to query the nested objects. The default `identity` Analyzer is used for the
-fields because none is specified explicitly.
-
-## Defining how often the conditions need to be true
-
-You can optionally specify a quantifier to define how often the conditions need
-to be true for the entire array. The following query matches documents that have
-one or two nested objects with a height greater than 40:
-
-```aql
-FOR doc IN viewName
- SEARCH doc.dimensions[? 1..2 FILTER CURRENT.type == "height" AND CURRENT.value > 40]
- RETURN doc
-```
-
-If you leave out the quantifier, it defaults to `ANY`. The conditions then
-need to be fulfilled by at least one sub-object, but more than one sub-object
-may meet them. With a quantity of `1`, there needs to be exactly one match.
-Similarly, ranges require the number of matching sub-objects to lie between
-the specified minimum and maximum, including the boundaries. To require two
-or more sub-objects to fulfill the conditions, you can use `AT LEAST (2)`,
-and so on.
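-
-For example, the following sketch matches documents where at least two of the
-nested objects have a `value` greater than `30`. The View and attribute names
-are placeholders:
-
-```aql
-FOR doc IN viewName
-  SEARCH doc.dimensions[? AT LEAST (2) FILTER CURRENT.value > 30]
-  RETURN doc
-```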
-
-{{< info >}}
-- To use the question mark operator with the `ALL` quantifier in `SEARCH`
- queries against `arangosearch` Views, you need at least ArangoDB v3.10.1 and
- set the `storeValues` property of the View to `"id"`.
-- The expression of the `AT LEAST` quantifier needs to evaluate to a number
- before the search is performed. It can therefore not reference the document
- emitted by `FOR doc IN viewName`, nor the `CURRENT` pseudo-variable.
-- Using the question mark operator without quantifier and filter conditions
- (`[?]`) is possible but cannot utilize indexes.
-{{< /info >}}
-
-## Searching deeply nested data
-
-You can index and search for multiple levels of objects in arrays.
-Consider the following document:
-
-```json
-{
- "dimensions": [
- {
- "part": "frame",
- "measurements": [
- { "type": "height", "value": 47 },
- { "type": "width", "value": 72 }
- ],
- "comments": "Slightly damaged at the bottom right corner."
- },
- {
- "part": "canvas",
- "measurements": [
- { "type": "height", "value": 35 },
- { "type": "width", "value": 60 }
- ]
- }
- ]
-}
-```
-
-{{< tabs "view-definition">}}
-
-{{< tab "`search-alias` View" >}}
-To index the array of dimension objects and the nested array of measurement
-objects, you can use an inverted index and a `search-alias` View definition
-like the following, using arangosh:
-
-```js
-db.<collection>.ensureIndex({
- name: "inv-nest-deep",
- type: "inverted",
- fields: [
- {
- name: "dimensions",
- nested: [
- {
- name: "measurements",
- nested: [
- { name: "type" },
- { name: "value" }
- ]
- },
- "part",
- {
- name: "comments",
- analyzer: "text_en"
- }
- ]
- }
- ]
-});
-
-db._createView("viewName", "search-alias", { indexes: [
- { collection: "", index: "inv-nest-deep" }
-]});
-```
-{{< /tab >}}
-
-{{< tab "`arangosearch` View" >}}
-To index the array of dimension objects and the nested array of measurement
-objects, you can use an `arangosearch` View definition like the following:
-
-```json
-{
- "links": {
- "": {
- "fields": {
- "dimensions": {
- "nested": {
- "measurements": {
- "nested": {
- "type": {},
- "value": {}
- }
- },
- "part": {},
- "comments": {
- "analyzers": [
- "text_en"
- ]
- }
- }
- }
- }
- }
- }
-}
-```
-{{< /tab >}}
-
-{{< /tabs >}}
-
-The default `identity` Analyzer is used for the `type`, `value`, and `part`
-attributes, and the built-in `text_en` Analyzer is used for `comments`.
-
-A possible query is to search for frames with damaged corners that are not wider
-than 80, using a question mark operator to check the `part` and `comments`, and
-a nested question mark operator to check the `type` and `value`:
-
-```aql
-FOR doc IN viewName
- SEARCH doc.dimensions[? FILTER CURRENT.part == "frame" AND
- ANALYZER(TOKENS("corner damage", "text_en") ALL == CURRENT.comments, "text_en") AND
- CURRENT.measurements[? FILTER CURRENT.type == "width" AND CURRENT.value <= 80]]
- RETURN doc
-```
-
-The conditions of the inner question mark operator need to be satisfied by a
-single measurement object. The conditions of the outer question mark operator
-need to be satisfied by a single dimension object, including the measurement
-conditions of the inner operator. The example document does match these
-conditions.
diff --git a/site/content/3.10/index-and-search/arangosearch/performance.md b/site/content/3.10/index-and-search/arangosearch/performance.md
deleted file mode 100644
index b7296c94ba..0000000000
--- a/site/content/3.10/index-and-search/arangosearch/performance.md
+++ /dev/null
@@ -1,671 +0,0 @@
----
-title: Optimizing View and inverted index query performance
-menuTitle: Performance
-weight: 75
-description: >-
- You can improve the performance of View and inverted index queries with a
- primary sort order, stored values and other optimizations
----
-## Primary Sort Order
-
-Inverted indexes and `arangosearch` Views can have a primary sort order.
-A direction (ascending or descending) can be specified upon their creation
-for each uniquely named attribute, to enable an optimization for AQL queries
-that iterate over a collection or View and sort by one or multiple of the
-indexed attributes. If the field(s) and the sorting direction(s) match the
-*primarySort* definition, then the data can be read directly from the index
-without an actual sort operation.
-
-{{< tabs "view-definition">}}
-
-{{< tab "`search-alias` View" >}}
-You can define a primary sort order when creating inverted indexes, and you
-can utilize it with standalone inverted indexes or via `search-alias` Views.
-
-Definition of an inverted index with a `primarySort` property:
-
-```js
-db.coll.ensureIndex({
- name: "inv-idx",
- type: "inverted",
- fields: ["text", "date"],
- primarySort: {
- fields: [
- { field: "date", direction: "desc" }
- ]
- }
-});
-```
-
-AQL query example:
-
-```aql
-FOR doc IN coll OPTIONS { indexHint: "inv-idx", forceIndexHint: true }
- SORT doc.date DESC
- RETURN doc
-```
-
-Execution plan **without** a sorted index being used:
-
-```aql
-Execution plan:
- Id NodeType Est. Comment
- 1 SingletonNode 1 * ROOT
- 2 EnumerateCollectionNode 0 - FOR doc IN coll /* full collection scan */
- 3 CalculationNode 0 - LET #1 = doc.`date` /* attribute expression */ /* collections used: doc : coll */
- 4 SortNode 0 - SORT #1 DESC /* sorting strategy: standard */
- 5 ReturnNode 0 - RETURN doc
-```
-
-Execution plan with the primary sort order of the index being utilized:
-
-```aql
-Execution plan:
- Id NodeType Est. Comment
- 1 SingletonNode 1 * ROOT
- 6 IndexNode 0 - FOR doc IN coll /* reverse inverted index scan, index scan + document lookup */
- 5 ReturnNode 0 - RETURN doc
-```
-
-You can add the inverted index to a `search-alias` View. Queries against the
-View can benefit from the primary sort order, too:
-
-```js
-db._createView("viewName", "search-alias", { indexes: [
- { collection: "coll", index: "inv-idx" }
-] });
-
-db._query(`FOR doc IN viewName
- SORT doc.date DESC
- RETURN doc`);
-```
-
-To define more than one attribute to sort by, use multiple sub-objects in the
-`primarySort.fields` array:
-
-```js
-db.coll.ensureIndex({
- name: "inv-idx",
- type: "inverted",
- fields: ["text", "date"],
- primarySort: {
- fields: [
- { field: "date", direction: "desc" },
- { field: "text", direction: "asc" }
- ]
- }
-});
-```
-
-{{< info >}}
-If you use the inverted index standalone and mix directions in the primary
-sort order, the index cannot be utilized to fully optimize out a matching
-`SORT` operation.
-{{< /info >}}
-
-{{< /tab >}}
-
-{{< tab "`arangosearch` View" >}}
-
-{{< youtube id="bKeKzexInm0" >}}
-
-```json
-{
- "links": {
- "coll1": {
- "fields": {
- "text": {}
- }
- },
- "coll2": {
- "fields": {
- "text": {}
- }
-    }
-  },
-  "primarySort": [
-    {
-      "field": "date",
-      "direction": "desc"
-    }
-  ]
-}
-```
-
-You can only set the `primarySort` option and the related
-`primarySortCompression` and `primarySortCache` options on View creation.
-
-AQL query example:
-
-```aql
-FOR doc IN viewName
- SORT doc.date DESC
- RETURN doc
-```
-
-Execution plan **without** a sorted index being used:
-
-```aql
-Execution plan:
- Id NodeType Est. Comment
- 1 SingletonNode 1 * ROOT
- 2 EnumerateViewNode 1 - FOR doc IN viewName /* view query */
- 3 CalculationNode 1 - LET #1 = doc.`date` /* attribute expression */
- 4 SortNode 1 - SORT #1 DESC /* sorting strategy: standard */
- 5 ReturnNode 1 - RETURN doc
-```
-
-Execution plan with the primary sort order of the index being utilized:
-
-```aql
-Execution plan:
- Id NodeType Est. Comment
- 1 SingletonNode 1 * ROOT
- 2 EnumerateViewNode 1 - FOR doc IN viewName SORT doc.`date` DESC /* view query */
- 5 ReturnNode 1 - RETURN doc
-```
-
-To define more than one attribute to sort by, use multiple sub-objects in the
-`primarySort` array:
-
-```json
-{
- "links": {
- "coll1": {
- "fields": {
- "text": {}
- }
- },
- "coll2": {
- "fields": {
- "text": {}
- }
-    }
-  },
-  "primarySort": [
-    {
-      "field": "date",
-      "direction": "desc"
-    },
-    {
-      "field": "text",
-      "direction": "asc"
-    }
-  ]
-}
-```
-{{< /tab >}}
-
-{{< /tabs >}}
-
-The optimization can be applied to queries which sort by both fields as
-defined (`SORT doc.date DESC, doc.text`), but also if they sort in descending
-order by the `date` attribute only (`SORT doc.date DESC`). Queries which sort
-by `text` alone (`SORT doc.text`) are not eligible, because the index is sorted
-by `date` first. This is similar to persistent indexes, but inverted sorting
-directions are not covered by the View index
-(e.g. `SORT doc.date, doc.text DESC`).
-
-You can disable the **primary sort compression** on View or index creation to
-trade space for speed. The primary sort data is LZ4-compressed by default (`"lz4"`).
-
-- `arangosearch` Views: `primarySortCompression: "none"`
-- Inverted indexes: `primarySort: { compression: "none" }`
-
-You can additionally enable the **primary sort cache** to always cache the primary
-sort columns in memory, which can improve the query performance.
-
-{{< tabs "view-definition">}}
-
-{{< tab "`search-alias` View" >}}
-For inverted indexes, set the `cache` option of the
-[`primarySort` property](../../develop/http-api/indexes/inverted.md) to `true`.
-
-```js
-db.coll.ensureIndex({
- name: "inv-idx",
- type: "inverted",
- fields: ["text", "date"],
- primarySort: {
- fields: [
- { field: "date", direction: "desc" },
- { field: "text", direction: "asc" }
- ],
- cache: true
- }
-});
-
-db._createView("myView", "search-alias", { indexes: [
- { collection: "coll", index: "inv-idx" }
-] });
-```
-{{< /tab >}}
-
-{{< tab "`arangosearch` View" >}}
-Set the [`primarySortCache` View property](arangosearch-views-reference.md#view-properties)
-to `true`.
-
-```json
-{
- "links": {
- "coll1": {
- "fields": {
- "text": {},
- "date": {}
- }
- },
- "coll2": {
- "fields": {
- "text": {}
- }
- },
- "primarySort": [
- {
- "field": "date",
- "direction": "desc"
- },
- {
- "field": "text",
- "direction": "asc"
- }
- ],
- "primarySortCache": true
- }
-}
-```
-{{< /tab >}}
-
-{{< /tabs >}}
-
-## Stored Values
-
-It is possible to directly store the values of document attributes in
-`arangosearch` View indexes and inverted indexes with the `storedValues`
-property (not to be confused with `storeValues`). You can only set this
-option on View and index creation.
-
-View indexes and inverted indexes may fully cover search queries by using
-stored values, improving the query performance.
-While late document materialization reduces the number of fetched documents,
-this optimization can avoid accessing the storage engine entirely.
-
-{{< tabs "view-definition">}}
-
-{{< tab "`search-alias` View" >}}
-```js
-db.articles.ensureIndex({
- name: "inv-idx",
- type: "inverted",
- fields: ["categories[*]"],
- primarySort: {
- fields: [
- { field: "publishedAt", direction: "desc" }
- ]
- },
- storedValues: [
- {
- fields: [ "title", "categories" ]
- }
- ]
-});
-
-db._createView("articlesView", "search-alias", { indexes: [
- { collection: "articles", index: "inv-idx" }
-] });
-```
-{{< /tab >}}
-
-{{< tab "`arangosearch` View" >}}
-```json
-{
- "links": {
- "articles": {
- "fields": {
- "categories": {}
- }
- }
- },
- "primarySort": [
- { "field": "publishedAt", "direction": "desc" }
- ],
- "storedValues": [
- { "fields": [ "title", "categories" ] }
- ]
-}
-```
-{{< /tab >}}
-
-{{< /tabs >}}
-
-In the above View definitions, the document attribute `categories` is indexed
-for searching, `publishedAt` is used as the primary sort order, and `title` as
-well as `categories` are stored in the index using the `storedValues` property.
-
-```aql
-FOR doc IN articlesView
- SEARCH doc.categories == "recipes"
- SORT doc.publishedAt DESC
- RETURN {
- title: doc.title,
- date: doc.publishedAt,
- tags: doc.categories
- }
-```
-
-The query searches for articles that contain a certain tag in the `categories`
-array and returns the title, date, and tags. All three values are stored in the
-View (`publishedAt` via `primarySort` and the other two via `storedValues`), so
-no documents need to be fetched from the storage engine to answer the query.
-This is shown in the execution plan as a comment to the `EnumerateViewNode`:
-`/* view query without materialization */`
-
-```aql
-Execution plan:
- Id NodeType Est. Comment
- 1 SingletonNode 1 * ROOT
- 2 EnumerateViewNode 1 - FOR doc IN articlesView SEARCH (doc.`categories` == "recipes") SORT doc.`publishedAt` DESC LET #1 = doc.`publishedAt` LET #7 = doc.`categories` LET #5 = doc.`title` /* view query without materialization */
- 5 CalculationNode 1 - LET #3 = { "title" : #5, "date" : #1, "tags" : #7 } /* simple expression */
- 6 ReturnNode 1 - RETURN #3
-
-Indexes used:
- none
-
-Optimization rules applied:
- Id RuleName
- 1 move-calculations-up
- 2 move-calculations-up-2
- 3 handle-arangosearch-views
-```
-
-The stored values data is LZ4-compressed by default (`"lz4"`).
-Set the compression to `"none"` on View or index creation to trade space for speed.
-
-{{< tabs "view-definition">}}
-
-{{< tab "`search-alias` View" >}}
-```js
-db.articles.ensureIndex({
- name: "inv-idx",
- type: "inverted",
- fields: ["categories[*]"],
- primarySort: {
- fields: [
- { field: "publishedAt", direction: "desc" }
- ]
- },
- storedValues: [
- {
- fields: [ "title", "categories"],
- compression: "none"
- }
- ]
-});
-
-db._createView("articlesView", "search-alias", { indexes: [
- { collection: "articles", index: "inv-idx" }
-] });
-```
-{{< /tab >}}
-
-{{< tab "`arangosearch` View" >}}
-```json
-{
- "links": {
- "articles": {
- "fields": {
- "categories": {}
- }
- }
- },
- "primarySort": [
- { "field": "publishedAt", "direction": "desc" }
- ],
- "storedValues": [
- { "fields": [ "title", "categories" ], "compression": "none" }
- ]
-}
-```
-{{< /tab >}}
-
-{{< /tabs >}}
-
-You can additionally enable the ArangoSearch column cache for stored values by
-setting the `cache` option in the `storedValues` definition of
-`arangosearch` Views or inverted indexes to `true`. This always caches
-stored values in memory, which can improve the query performance.
-
-{{< tabs "view-definition">}}
-
-{{< tab "`search-alias` View" >}}
-```js
-db.articles.ensureIndex({
- name: "inv-idx",
- type: "inverted",
- fields: ["categories[*]"],
- primarySort: {
- fields: [
- { field: "publishedAt", direction: "desc" }
- ]
- },
- storedValues: [
- {
- fields: [ "title", "categories"],
- cache: true
- }
- ]
-});
-
-db._createView("articlesView", "search-alias", { indexes: [
- { collection: "articles", index: "inv-idx" }
-] });
-```
-
-See the [inverted index `storedValues` property](../../develop/http-api/indexes/inverted.md)
-for details.
-{{< /tab >}}
-
-{{< tab "`arangosearch` View" >}}
-```json
-{
- "links": {
- "articles": {
- "fields": {
- "categories": {}
- }
- }
- },
- "primarySort": [
- { "field": "publishedAt", "direction": "desc" }
- ],
- "storedValues": [
- { "fields": [ "title", "categories" ], "cache": true }
- ]
-}
-```
-
-See the [`storedValues` View property](arangosearch-views-reference.md#view-properties)
-for details.
-{{< /tab >}}
-
-{{< /tabs >}}
-
-## Condition Optimization Options
-
-The `SEARCH` operation in AQL accepts an option `conditionOptimization` to
-give you control over the search criteria optimization:
-
-```aql
-FOR doc IN myView
- SEARCH doc.val > 10 AND doc.val > 5 /* more conditions */
- OPTIONS { conditionOptimization: "none" }
- RETURN doc
-```
-
-By default, all conditions get converted into disjunctive normal form (DNF).
-Numerous optimizations can be applied, like removing redundant or overlapping
-conditions (such as `doc.val > 10` which is included by `doc.val > 5`).
-However, converting to DNF and optimizing the conditions can take quite some
-time even for a low number of nested conditions which produce dozens of
-conjunctions / disjunctions. It can be faster to just search the index without
-optimizations.
-
-Also see the [`SEARCH` operation](../../aql/high-level-operations/search.md#search-options).
-
-## Count Approximation
-
-The `SEARCH` operation in AQL accepts an option `countApproximate` to control
-how the total count of rows is calculated if the `fullCount` option is enabled
-for a query or when a `COLLECT WITH COUNT` clause is executed.
-
-By default, rows are actually enumerated for a precise count. In some cases, an
-estimate might be good enough, however. You can set `countApproximate` to
-`"cost"` for a cost-based approximation. It does not enumerate rows and returns
-an approximate result with O(1) complexity. It gives a precise result if the
-`SEARCH` condition is empty or if it contains a single term query only
-(e.g. `SEARCH doc.field == "value"`), the usual eventual consistency
-of Views aside.
-
-```aql
-FOR doc IN viewName
- SEARCH doc.name == "Carol"
- OPTIONS { countApproximate: "cost" }
- COLLECT WITH COUNT INTO count
- RETURN count
-```
-
-Also see [Faceted Search with ArangoSearch](faceted-search.md).
-
-## Field normalization value caching and caching of Geo Analyzer auxiliary data
-
-{{< tag "ArangoDB Enterprise Edition" "ArangoGraph" >}}
-
-Introduced in: v3.9.5, v3.10.2
-
-Normalization values are computed for fields which are processed with Analyzers
-that have the [`"norm"` feature](../analyzers.md#analyzer-features) enabled.
-These values are used for fairer scoring if the same tokens occur repeatedly,
-by emphasizing these documents less.
-
-You can set the `cache` option to `true` for individual View links or fields of
-`arangosearch` Views, as well as for inverted indexes as the default or for
-specific fields, to always cache the field normalization values in memory.
-This can improve the performance of scoring and ranking queries.
-
-You can also enable this option to always cache auxiliary data used for querying
-fields that are indexed with Geo Analyzers in memory, as the default or for
-specific fields. This can improve the performance of geo-spatial queries.
-
-{{< tabs "view-definition">}}
-
-{{< tab "`search-alias` View" >}}
-```js
-db.coll1.ensureIndex({
- name: "inv-idx",
- type: "inverted",
- fields: [
- {
- name: "attr",
- analyzer: "text_en",
- cache: true
- }
- ]
-});
-
-db.coll2.ensureIndex({
- name: "inv-idx",
- type: "inverted",
- analyzer: "text_en",
- fields: ["attr1", "attr2"],
- cache: true
-});
-
-db._createView("myView", "search-alias", { indexes: [
- { collection: "coll1", index: "inv-idx" },
- { collection: "coll2", index: "inv-idx" }
-] });
-```
-
-See the [inverted index `cache` property](../../develop/http-api/indexes/inverted.md) for details.
-{{< /tab >}}
-
-{{< tab "`arangosearch` View" >}}
-```json
-{
- "links": {
- "coll1": {
- "fields": {
- "attr": {
- "analyzers": ["text_en"],
- "cache": true
- }
- }
- },
- "coll2": {
- "includeAllFields": true,
- "analyzers": ["text_en"],
- "cache": true
- }
- }
-}
-```
-
-See the [`cache` Link property](arangosearch-views-reference.md#link-properties)
-for details.
-{{< /tab >}}
-
-{{< /tabs >}}
-
-The `"norm"` Analyzer feature has performance implications even if the cache is
-used. You can create custom Analyzers without this feature to disable the
-normalization and improve the performance. Make sure that the result ranking
-still matches your expectations without normalization. It is recommended to
-use normalization for a good scoring behavior.
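-
-For example, a minimal arangosh sketch of a custom `text` Analyzer without the
-`"norm"` feature (the Analyzer name is just an example):
-
-```js
-var analyzers = require("@arangodb/analyzers");
-// Like the built-in text_en Analyzer, but without the "norm" feature,
-// so no normalization values are computed and stored
-analyzers.save("text_en_nonorm", "text",
-  { locale: "en", stopwords: [] },
-  ["frequency", "position"]);
-```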
-
-## Primary key caching
-
-Introduced in: v3.9.6, v3.10.2
-
-You can always cache the primary key columns in memory. This can improve the
-performance of queries that return many documents, making it faster to map
-document IDs in the index to actual documents.
-
-{{< tabs "view-definition">}}
-
-{{< tab "`search-alias` View" >}}
-To enable this feature for inverted indexes and by extension `search-alias` Views,
-set the [`primaryKeyCache` property](../../develop/http-api/indexes/inverted.md)
-to `true` when creating inverted indexes.
-
-```js
-db.articles.ensureIndex({
- name: "inv-idx",
- type: "inverted",
- fields: ["categories[*]"],
- primaryKeyCache: true
-});
-
-db._createView("articlesView", "search-alias", { indexes: [
- { collection: "articles", index: "inv-idx" }
-] });
-```
-{{< /tab >}}
-
-{{< tab "`arangosearch` View" >}}
-To enable this feature for `arangosearch` Views, set the
-[`primaryKeyCache` View property](arangosearch-views-reference.md#view-properties)
-to `true` on View creation.
-
-```json
-{
- "links": {
- "articles": {
- "fields": {
- "categories": {}
- }
- }
- },
- "primaryKeyCache": true
-}
-```
-{{< /tab >}}
-
-{{< /tabs >}}
diff --git a/site/content/3.10/index-and-search/arangosearch/search-highlighting.md b/site/content/3.10/index-and-search/arangosearch/search-highlighting.md
deleted file mode 100644
index 63e8ea7176..0000000000
--- a/site/content/3.10/index-and-search/arangosearch/search-highlighting.md
+++ /dev/null
@@ -1,215 +0,0 @@
----
-title: Search highlighting with ArangoSearch
-menuTitle: Search highlighting
-weight: 60
-description: >-
- You can retrieve the positions of matches within strings when querying
- Views with ArangoSearch, to highlight what was found in search results
----
-{{< tag "ArangoDB Enterprise Edition" "ArangoGraph" >}}
-
-ArangoSearch lets you search for terms and phrases in full-text, and more.
-It only returns matching documents, however. With search highlighting, you can
-get the exact locations of the matches.
-
-A common use case is to emphasize the matching parts in client applications,
-for example, with a background color or an underline, so that users can easily
-see and understand the matches.
-
-## How to use search highlighting
-
-To use search highlighting, you need to index the respective attributes with
-Analyzers that have the `offset` feature enabled. The built-in `text` Analyzers
-don't have this feature enabled, so you need to create custom Analyzers.
-
-You can get the substring offsets of matches by calling the
-[`OFFSET_INFO()` function](../../aql/functions/arangosearch.md#offset_info) in
-search queries. It takes the document emitted by the View (`FOR doc IN viewName`)
-and a list of paths like `"field.nested"` or `"array[0].field"`, defining the
-attributes or array elements for which you want to retrieve the offsets. For
-every path, it returns a list comprised of a `name` and `offsets`.
-
-The `name` is the path of the value, but in a different form than you passed to
-the function, like `["field", "nested"]` or `["array", 0, "field"]`. You can
-look up the value with the [`VALUE()` function](../../aql/functions/document-object.md#value)
-using this path description.
-
-The `offsets` are a list of offset pairs, one for every match. Each pair is an
-array with two numbers: the start offset and the length of the match. There can
-be multiple matches per path. You can optionally cap how many matches are
-collected per path by setting limits when calling the `OFFSET_INFO()` function.
-
-{{< warning >}}
-The start offsets and lengths describe the positions in bytes, not characters.
-You may need to account for characters encoded using multiple bytes.
-{{< /warning >}}
-
-### Term and phrase search with highlighting
-
-#### Dataset
-
-A collection called `food` with the following documents:
-
-```json
-{ "name": "avocado", "description": { "en": "The avocado is a medium-sized, evergreen tree, native to the Americas." } }
-{ "name": "carrot", "description": { "en": "The carrot is a root vegetable, typically orange in color, native to Europe and Southwestern Asia." } }
-{ "name": "chili pepper", "description": { "en": "Chili peppers are varieties of the berry-fruit of plants from the genus Capsicum, cultivated for their pungency." } }
-{ "name": "tomato", "description": { "en": "The tomato is the edible berry of the tomato plant." } }
-```
-
-#### Custom Analyzer
-
-If you want to use an `arangosearch` View,
-create a `text` Analyzer in arangosh to tokenize text, like the built-in
-`text_en` Analyzer, but additionally set the `offset` feature, enabling
-search highlighting:
-
-```js
----
-name: analyzerTextOffset
-description: ''
----
-var analyzers = require("@arangodb/analyzers");
-analyzers.save("text_en_offset", "text", { locale: "en", stopwords: [] }, ["frequency", "position", "offset"]);
-~analyzers.remove("text_en_offset");
-```
-
-The `frequency`, `position`, and `offset` [Analyzer features](../analyzers.md#analyzer-features)
-are set because the examples on this page require them for the `OFFSET_INFO()`
-search highlighting function to work, and the `PHRASE()` filter function also
-requires the former two.
-
-You can skip this step if you want to use a `search-alias` View, because the
-Analyzer features can be overwritten in the inverted index definition.
-
-#### View definition
-
-{{< tabs "view-definition">}}
-
-{{< tab "`search-alias` View" >}}
-```js
-db.food.ensureIndex({
- name: "inv-text-offset",
- type: "inverted",
- fields: [
- { name: "description.en", analyzer: "text_en", features: ["frequency", "position", "offset"] }
- ]
-});
-
-db._createView("food_view", "search-alias", { indexes: [ { collection: "food", index: "inv-text-offset" } ] });
-```
-{{< /tab >}}
-
-{{< tab "`arangosearch` View" >}}
-```json
-{
- "links": {
- "food": {
- "fields": {
- "description": {
- "fields": {
- "en": {
- "analyzers": [
- "text_en_offset"
- ]
- }
- }
- }
- }
- }
- }
-}
-```
-{{< /tab >}}
-
-{{< /tabs >}}
-
-#### AQL queries
-
-Search the View for descriptions that contain the tokens `avocado` or `tomato`,
-the phrase `cultivated ... pungency` with two arbitrary tokens between the two
-words, and for words that start with `cap`. Get the matching positions, and use
-this information to extract the substrings with the
-[`SUBSTRING_BYTES()` function](../../aql/functions/string.md#substring_bytes).
-
-The [`OFFSET_INFO()` function](../../aql/functions/arangosearch.md#offset_info)
-returns a `name` that describes the path of the attribute or array element with
-the match. You can use the [`VALUE()` function](../../aql/functions/document-object.md#value)
-to dynamically get the respective value.
-
-{{< tabs "view-definition">}}
-
-{{< tab "`search-alias` View" >}}
-```js
----
-name: searchHighlighting_2
-description: ''
----
-var coll = db._create("food");
-var docs = db.food.save([
- { name: "avocado", description: { en: "The avocado is a medium-sized, evergreen tree, native to the Americas." } },
- { name: "carrot", description: { en: "The carrot is a root vegetable, typically orange in color, native to Europe and Southwestern Asia." } },
- { name: "chili pepper", description: { en: "Chili peppers are varieties of the berry-fruit of plants from the genus Capsicum, cultivated for their pungency." } },
- { name: "tomato", description: { en: "The tomato is the edible berry of the tomato plant." } }
-]);
-var idx = db.food.ensureIndex({ name: "inv-text-offset", type: "inverted", fields: [ { name: "description.en", analyzer: "text_en", features: ["frequency", "position", "offset"] } ] });
-var view = db._createView("food_view", "search-alias", { indexes: [ { collection: "food", index: "inv-text-offset" } ] });
-~assert(db._query(`FOR d IN food_view COLLECT WITH COUNT INTO c RETURN c`).toArray()[0] === 4);
-db._query(`FOR doc IN food_view
- SEARCH
- TOKENS("avocado tomato", "text_en") ANY == doc.description.en OR
- PHRASE(doc.description.en, "cultivated", 2, "pungency") OR
- STARTS_WITH(doc.description.en, "cap")
- FOR offsetInfo IN OFFSET_INFO(doc, ["description.en"])
- RETURN {
- description: doc.description,
- name: offsetInfo.name,
- matches: offsetInfo.offsets[* RETURN {
- offset: CURRENT,
- match: SUBSTRING_BYTES(VALUE(doc, offsetInfo.name), CURRENT[0], CURRENT[1])
- }]
- }`).toArray();
-~db._dropView("food_view");
-~db._drop("food");
-```
-{{< /tab >}}
-
-{{< tab "`arangosearch` View" >}}
-```js
----
-name: searchHighlighting_1
-description: ''
----
-var coll = db._create("food");
-var docs = db.food.save([
- { name: "avocado", description: { en: "The avocado is a medium-sized, evergreen tree, native to the Americas." } },
- { name: "carrot", description: { en: "The carrot is a root vegetable, typically orange in color, native to Europe and Southwestern Asia." } },
- { name: "chili pepper", description: { en: "Chili peppers are varieties of the berry-fruit of plants from the genus Capsicum, cultivated for their pungency." } },
- { name: "tomato", description: { en: "The tomato is the edible berry of the tomato plant." } }
-]);
-var analyzers = require("@arangodb/analyzers");
-var analyzer = analyzers.save("text_en_offset", "text", { locale: "en", stopwords: [] }, ["frequency", "position", "offset"]);
-var view = db._createView("food_view", "arangosearch", { links: { food: { fields: { description: { fields: { en: { analyzers: ["text_en_offset"] } } } } } } });
-~assert(db._query(`FOR d IN food_view COLLECT WITH COUNT INTO c RETURN c`).toArray()[0] === 4);
-db._query(`FOR doc IN food_view
- SEARCH ANALYZER(
- TOKENS("avocado tomato", "text_en_offset") ANY == doc.description.en OR
- PHRASE(doc.description.en, "cultivated", 2, "pungency") OR
- STARTS_WITH(doc.description.en, "cap")
- , "text_en_offset")
- FOR offsetInfo IN OFFSET_INFO(doc, ["description.en"])
- RETURN {
- description: doc.description,
- name: offsetInfo.name,
- matches: offsetInfo.offsets[* RETURN {
- offset: CURRENT,
- match: SUBSTRING_BYTES(VALUE(doc, offsetInfo.name), CURRENT[0], CURRENT[1])
- }]
- }`).toArray();
-~db._dropView("food_view");
-~db._drop("food");
-~analyzers.remove(analyzer.name);
-```
-{{< /tab >}}
-
-{{< /tabs >}}
diff --git a/site/content/3.10/operations/administration/reduce-memory-footprint.md b/site/content/3.10/operations/administration/reduce-memory-footprint.md
deleted file mode 100644
index d38c42a797..0000000000
--- a/site/content/3.10/operations/administration/reduce-memory-footprint.md
+++ /dev/null
@@ -1,698 +0,0 @@
----
-title: Reducing the Memory Footprint of ArangoDB servers
-menuTitle: Reduce Memory Footprint
-weight: 35
-description: ''
----
-{{< warning >}}
-The changes suggested here can be useful to reduce the memory usage of
-ArangoDB servers, but they can cause side-effects on performance and other
-aspects.
-Do not apply any of the changes suggested here before you have tested them
-in a development or staging environment.
-{{< /warning >}}
-
-Usually, a database server tries to use all the memory it can get to
-improve performance by caching or buffering. Therefore, it is important
-to tell an ArangoDB process how much memory it is allowed to use.
-ArangoDB detects the available RAM on the server and divides this up
-amongst its subsystems in some default way, which is suitable for a wide
-range of applications. This detected RAM size can be overridden with
-the environment variable
-[`ARANGODB_OVERRIDE_DETECTED_TOTAL_MEMORY`](../../components/arangodb-server/environment-variables.md).
-
-However, there may be situations in which there
-are good reasons why these defaults are not suitable and the
-administrator wants to fine tune memory usage. The two
-major reasons for this are:
-
- - something else (potentially another `arangod` server) is running on
- the same machine so that your `arangod` is supposed to use less than
- the available RAM, and/or
- - the actual usage scenario makes it necessary to increase the memory
- allotted to some subsystem at the cost of another to achieve higher
- performance for specific tasks.
-
-To a lesser extent, the same holds true for CPU usage, but operating
-systems are generally better in automatically distributing available CPU
-capacity amongst different processes and different subsystems.
-
-There are settings to make ArangoDB run on systems with very
-limited resources, but they may also be interesting for your
-development machine if you want to make it less taxing for
-the hardware and do not work with much data. For production
-environments, it is recommended to use less restrictive settings, to
-[benchmark](https://www.arangodb.com/performance/) your setup and
-fine-tune the settings for maximal performance.
-
-## Limiting the overall RAM usage
-
-A first simple approach could be to tell the `arangod` process
-as a whole to use only a certain amount of memory. This is done by
-overriding the detected memory size using the
-[`ARANGODB_OVERRIDE_DETECTED_TOTAL_MEMORY`](../../components/arangodb-server/environment-variables.md)
-environment variable.
-
-This essentially scales down `arangod`'s memory usage to the
-given value. This is, for example, a first measure if more than
-one `arangod` server is running on the same machine.
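-
-For example, to make `arangod` assume a total RAM size of 20 GiB, set the
-environment variable before starting the process (the value here is given
-in bytes):
-
-```
-ARANGODB_OVERRIDE_DETECTED_TOTAL_MEMORY=21474836480
-```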
-
-Note, however, that certain subsystems are then still able to use
-an arbitrary amount of RAM, depending on the load from the user. If you
-want to protect your server against such misuse and for more detailed
-tuning of the various subsystems, consult the sections below.
-
-Before getting into the nitty-gritty details, check the following overview of
-the different subsystems of ArangoDB that use significant amounts of RAM.
-
-## Overview of RAM usage in ArangoDB
-
-This section explains the various areas which use RAM and what they are
-for.
-
-Broadly, ArangoDB uses significant amounts of RAM for the following subsystems:
-
-- Storage engine including the block cache
-- HTTP server and queues
-- Edge caches and other caches
-- AQL queries
-- V8 (JavaScript features)
-- ArangoSearch
-- AgencyCache/ClusterInfo (cluster meta data)
-- Cluster-internal replication
-
-Of these, the storage engine itself has some subsystems which contribute
-towards its memory usage:
-
-- RocksDB block cache
-- Data structures to read data (Bloom filters, index blocks, table readers)
-- Data structures to write data (write buffers, table builders, transaction data)
-- RocksDB background compaction
-
-It is important to understand that all of these have a high impact on
-performance. For example, the block cache exists so that already used
-blocks can be retained in RAM for later access. The larger the cache,
-the more blocks can be retained, and the higher the probability that a
-subsequent access does not need to reach out to disk and thus incurs no
-additional cost on performance and I/O capacity.
-
-RocksDB can cache Bloom filters and index blocks of its SST files in RAM.
-It usually needs approximately 1% of the total size of all SST files in RAM
-just for caching these. As a result, most random accesses need only a
-single actual disk read to find the data stored under a single RocksDB
-key. If not all Bloom filters and block indexes can be held in RAM,
-random accesses can easily slow down by a factor of 10 to 100,
-since suddenly many disk reads are needed to find a single
-RocksDB key.
-
-Other data structures for reading (table readers) and writing (write
-batches and write buffers) are also needed for smooth operation. If one
-cuts down on these too harshly, reads can be slowed down considerably
-and writes can be delayed. The latter can lead to overall slowdowns and
-could even result in total write stalls and stops until enough write
-buffers have been written out to disk.
-
-Since RocksDB is a log-structured merge-tree (LSM-tree), there must be
-background threads to perform compactions. They merge different SST
-files and throw out old, no longer used data. These compaction
-operations also need considerable amounts of RAM. If this is limited too
-much (by reducing the number of concurrent compaction operations), then
-a compaction debt can build up, which also results in write stalls or
-stops.
-
-The other subsystems outside the storage engine also need RAM. If an
-ArangoDB server queues up too many requests (for example, if more
-requests arrive per unit of time than can be executed), then the data of these
-requests (headers, bodies, etc.) are stored in RAM until they can be
-processed.
-
-Furthermore, there are multiple different caches which are sitting in
-front of the storage engine, the most prominent one being the edge cache
-which helps to speed up graph traversals. The larger these caches,
-the more data can be cached and the higher the likelihood that data
-which is needed repeatedly is found to be available in cache.
-
-Essentially, all AQL queries are executed in RAM. That means that every
-single AQL query needs some RAM - both on Coordinators and on DB-Servers.
-It is possible to limit the memory usage of a single AQL query as well
-as the global usage for all AQL queries running concurrently. Obviously,
-if either of these limits is reached, an AQL query can fail due to a lack
-of RAM, which is then reported back to the user.
-
-Everything that executes JavaScript (only on Coordinators: user-defined
-AQL functions and Foxx services) needs RAM, too. If JavaScript is not
-to be used, memory can be saved by reducing the number of V8 contexts.
-
-ArangoSearch uses memory in different ways:
-
-- Writes which are committed in RocksDB but have not yet been **committed** to
- the search index are buffered in RAM,
-- The search indexes use memory for **consolidation** of multiple smaller
- search index segments into fewer larger ones,
-- The actual indexed search data resides in memory mapped files, which
- also need RAM to be cached.
-
-Finally, the cluster internal management uses RAM in each `arangod`
-instance. The whole meta data of the cluster is kept in the AgencyCache
-and in the ClusterInfo cache. There is very little one can change about
-this memory usage.
-
-Furthermore, the cluster-internal replication needs memory to perform
-its synchronization and replication work. Again, there is not a lot one
-can do about that.
-
-The following sections show the various subsystems which the
-administrator can influence and explain how this can be done.
-
-## Write ahead log (WAL)
-
-RocksDB writes all changes first to the write-ahead log (WAL). This is
-for crash recovery; the data is later written to disk in an orderly
-fashion. The WAL itself does not need a lot of RAM, but limiting
-its total size can cause write buffers to be flushed earlier to make
-older WAL files obsolete. Therefore, adjusting the option
-
-```
---rocksdb.max-total-wal-size
-```
-
-to some value smaller than its default of 80MB can potentially help
-to reduce RAM usage. However, the effect is rather indirect.
-
-## Write Buffers
-
-RocksDB writes into
-[memory buffers mapped to on-disk blocks](https://github.com/facebook/rocksdb/wiki/Write-Buffer-Manager)
-first. At some point, the memory buffers will be full and have to be
-flushed to disk. In order to support high write loads, RocksDB might
-open a lot of these memory buffers.
-
-By default, the system may use up to 10 write buffers per column family
-and the default write buffer size is 64MB. Since normal write loads will
-only hit the documents, edge, primary index and vpack index column
-families, this effectively limits the RAM usage to something like 2.5GB.
-However, there is a global limit which works across column families,
-which can be used to limit the total amount of memory for write buffers
-(set to unlimited by default!):
-
-```
---rocksdb.total-write-buffer-size
-```
-
-Note that it does not make sense to set this limit smaller than
-10 times the size of a write buffer, since there are currently 10 column
-families and each will need at least one write buffer.
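-
-For example, to cap the total memory for write buffers at 1 GiB while keeping
-the default write buffer size of 64 MB (illustrative values, in bytes):
-
-```
---rocksdb.total-write-buffer-size 1073741824
---rocksdb.write-buffer-size 67108864
-```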
-
-Additionally, RocksDB might keep some write buffers in RAM that have
-already been flushed to disk. This is to speed up transaction conflict
-detection. This only happens if the option
-
-```
---rocksdb.max-write-buffer-size-to-maintain
-```
-
-is set to a non-zero value. By default, this is set to 0.
-
-The other options to configure write buffer usage are:
-
-```
---rocksdb.write-buffer-size
---rocksdb.max-write-buffer-number
---rocksdb.max-write-buffer-number-definitions
---rocksdb.max-write-buffer-number-documents
---rocksdb.max-write-buffer-number-edge
---rocksdb.max-write-buffer-number-fulltext
---rocksdb.max-write-buffer-number-geo
---rocksdb.max-write-buffer-number-primary
---rocksdb.max-write-buffer-number-replicated-logs
---rocksdb.max-write-buffer-number-vpack
-```
-
-However, adjusting these usually does not help with RAM usage.
-
-## RocksDB Block Cache
-
-The RocksDB block cache has a number of options for configuration. First
-of all, its maximal size can be adjusted with the following option:
-
-```
---rocksdb.block-cache-size
-```
-
-The default is 30% of (R - 2GB) where R is the total detected RAM. This
-is a sensible default for many cases, but in particular if other
-services are running on the same system, this can be too large. On the
-other hand, if lots of other things are accounted for in the block cache
-(see options below), the value can also be too small.
-
-Sometimes, the system can temporarily use a bit more than the configured
-upper limit. If you want to strictly enforce the block cache size limit,
-you can set the option
-
-```
---rocksdb.enforce-block-cache-size-limit
-```
-
-to `true`, but it is not recommended, since it might lead to failed
-operations if the cache is full, and we have observed that RocksDB
-instances can get stuck in this case. You have been warned.
-
-There are a number of things for which RocksDB needs RAM. It is possible
-to make all of these RAM usages count towards the block cache usage.
-This is usually sensible, since it effectively allows keeping the
-RocksDB RAM usage under a certain configured limit (namely the block
-cache size limit). If these things are not accounted for in the block
-cache usage, they are allocated anyway and this can lead to too much
-memory usage. On the other hand, if they are accounted for in the block
-cache usage, then the block cache has less capacity for its core
-operation, the caching of data blocks.
-
-The following options control accounting for RocksDB RAM usage:
-
-```
---rocksdb.cache-index-and-filter-blocks
---rocksdb.cache-index-and-filter-blocks-with-high-priority
---rocksdb.reserve-table-builder-memory
---rocksdb.reserve-table-reader-memory
-```
-
-They are for Bloom filters and block indexes, table
-building (RAM usage for building SST files, which happens when flushing
-memtables to level 0 SST files, during compaction, and on recovery),
-and table reading (RAM usage for read operations), respectively.
-
-There are additional options you can enable to avoid that the index and filter
-blocks get evicted from cache:
-
-```
---rocksdb.pin-l0-filter-and-index-blocks-in-cache
---rocksdb.pin-top-level-index-and-filter
-```
-
-The first pins the Bloom filters and index blocks of level 0 SST files in
-the cache; its default is `false`. The second pins the top level of
-partitioned index blocks and Bloom filters in the cache; its default is
-`true`.
-
-The block cache basically trades increased RAM usage for less disk I/O, so its
-size does not only affect memory usage, but can also affect read performance.
-
-See also:
-- [RocksDB Server Options](../../components/arangodb-server/options.md#rocksdb)
-- [Write Buffer Manager](https://github.com/facebook/rocksdb/wiki/Write-Buffer-Manager)
-
-## Transactions
-
-Before commit, RocksDB builds up transaction data in RAM. This happens
-in so-called "memtables" and "write batches". Note that from
-v3.12 onwards, this memory usage is attributed to the writing AQL queries and
-other write operations, and can thus be limited there. When there are
-many or large open transactions, this can sum up to a large amount of
-RAM usage.
-
-A further limit on RAM usage can be imposed by setting the option
-
-```
---rocksdb.max-transaction-size
-```
-
-which is by default unlimited. This setting limits the total size of
-a single RocksDB transaction. If the memtables exceed this size,
-the transaction is automatically aborted. Note that this cannot guard
-against **many** simultaneously uncommitted transactions.
-
-Another way to limit the actual transaction size is "intermediate commits".
-With this setting, ArangoDB automatically commits large write operations
-while they are executed. This of course goes against the whole concept of
-a transaction, since parts of a transaction that have already been
-committed cannot be rolled back anymore. Therefore, this is a rather
-desperate measure to prevent RAM overuse. You can control this with the
-following options:
-
-```
---rocksdb.intermediate-commit-count
---rocksdb.intermediate-commit-size
-```
-
-The first option configures automatic intermediate commits based on the number
-of documents touched in the transaction (default is `1000000`). The second
-configures intermediate commits based on the total size of the documents
-touched in the transaction (default is `512MB`).
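-
-For example, to commit automatically after every 100,000 documents or 128 MB
-written in a transaction (illustrative values, size in bytes):
-
-```
---rocksdb.intermediate-commit-count 100000
---rocksdb.intermediate-commit-size 134217728
-```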
-
-## RocksDB compactions
-
-RocksDB compactions are necessary, but use RAM. You can control how many
-concurrent compactions can happen by configuring the number of
-background threads RocksDB can use. This can either be done with the
-
-```
---rocksdb.max-background-jobs
-```
-
-option whose default is the number of detected cores. Half
-of that number will be the default value for the option
-
-```
---rocksdb.num-threads-priority-low
-```
-
-and that many compactions can happen concurrently. You can also leave
-the total number of background jobs unchanged and just adjust the latter
-option. Fewer concurrent compaction jobs use less RAM, but also lead
-to slower compaction overall, which can cause write stalls and even
-stops if a compaction debt builds up under a high write load.
-
-## Scheduler queues
-
-If too much memory is used to queue requests in the scheduler queues,
-one can simply limit the queue length with this option:
-
-```
---server.scheduler-queue-size
-```
-
-The default is `4096`, which is quite a lot. For small requests, the
-memory usage for a full queue is not significant, but since
-individual requests can be large, it may sometimes be necessary to
-limit the queue size a lot more to avoid RAM overuse by the queue.
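-
-For example, to cap the queue at 256 requests (an illustrative value):
-
-```
---server.scheduler-queue-size 256
-```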
-
-## Index Caches
-
-There are in-RAM caches for edge indexes and other indexes. The total
-RAM limit for them can be configured with this option:
-
-```
---cache.size
-```
-
-By default, this is set to 25% of (R - 2GB) where R is the total
-detected available RAM (or 256MB if that total is at most 4GB).
-You can disable the in-memory index
-[caches](../../components/arangodb-server/options.md#cache) by setting the limit to 0.
-
-If you do not have a graph use case and use neither edge collections
-nor the optional hash cache for persistent indexes, it is possible to
-use no cache (or a minimal cache size) without a performance impact. In
-general, the cache size should correspond to the size of the hot-set of
-edges and cached lookups from persistent indexes.
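-
-For example, to turn the in-memory index caches off entirely:
-
-```
---cache.size 0
-```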
-
-There are a number of options to pre-fill the caches under certain
-circumstances:
-
-```
---rocksdb.auto-fill-index-caches-on-startup
---rocksdb.auto-refill-index-caches-on-modify
---rocksdb.auto-refill-index-caches-on-followers
---rocksdb.auto-refill-index-caches-queue-capacity
-```
-
-The first leads to the caches being automatically filled on startup, the
-second on any modifications. Both are set to `false` by default, so
-setting these to `true` generally uses more RAM rather than less.
-However, it does not lead to usage of more RAM than the configured
-limits for all caches.
-
-The third option above is by default `true`, so that caches on
-followers will automatically be refilled if any of the first two options
-is set to `true`. Setting this to `false` can save RAM usage on
-followers. Of course, this means that in case of a failover the caches
-of the new leader will be cold!
-
-Finally, the number of write operations queued for index refill
-can be limited with `--rocksdb.auto-refill-index-caches-queue-capacity`
-to avoid over-allocation if the indexing cannot keep up with the writes.
-The default for this value is 131072.
-
-## AQL Query Memory Usage
-
-In addition to all the buffers and caches above, AQL queries will use additional
-memory during their execution to process your data and build up result sets.
-This memory is used during the query execution only and will be released afterwards,
-in contrast to the held memory for buffers and caches.
-
-By default, queries will build up their full results in memory. While you can
-fetch the results batch by batch by using a cursor, every query needs to compute
-the entire result first before you can retrieve the first batch. The server
-also needs to hold the results in memory until the corresponding cursor is fully
-consumed or times out. Building up the full results reduces the time the server
-has to work with collections at the cost of main memory.
-
-In ArangoDB version 3.4 we introduced
-[streaming cursors](../../release-notes/version-3.4/whats-new-in-3-4.md#streaming-aql-cursors) with
-somewhat inverted properties: lower peak memory usage, but longer access to the
-collections. Streaming is possible at the document level, which means that it cannot
-be applied to all query parts. For example, a *MERGE()* of all results of a
-subquery cannot be streamed (the result of the operation has to be built up fully).
-Nonetheless, the surrounding query may be eligible for streaming.
-
-Aside from streaming cursors, ArangoDB offers the possibility to specify a
-memory limit which a query should not exceed. If it does, the query will be
-aborted. Memory statistics are checked between execution blocks, which
-correspond to lines in the *explain* output. That means queries that call
-functions may use more memory for intermediate processing, but this does not
-abort the query, because the memory usage is only checked between blocks.
-The startup option to restrict the peak memory usage for each AQL query is
-`--query.memory-limit`. This is a per-query limit, i.e. at maximum each AQL query is allowed
-to use the configured amount of memory. To set a global memory limit for
-all queries together, use the `--query.global-memory-limit` setting.
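-
-For example, to allow each query up to 512 MB and all queries combined up to
-2 GB of memory (illustrative values, in bytes):
-
-```
---query.memory-limit 536870912
---query.global-memory-limit 2147483648
-```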
-
-You can also use *LIMIT* operations in AQL queries to reduce the number of documents
-that need to be inspected and processed. This is not always what happens under
-the hood, as some operations may lead to an intermediate result being computed before
-any limit is applied.
-
-## Statistics
-
-The server collects
-[statistics](../../components/arangodb-server/options.md#--serverstatistics) regularly,
-which are displayed in the web interface. This causes a light query load every
-few seconds, even if your application is idle. If required, you can
-turn the statistics off via:
-
-```
---server.statistics false
-```
-
-This setting will disable both the background statistics gathering and the
-statistics APIs. To only turn off the statistics gathering, you can use:
-
-```
---server.statistics-history false
-```
-
-That leaves all statistics APIs enabled but still disables all background work
-done by the statistics gathering.
-
-## JavaScript & Foxx
-
-[JavaScript](../../components/arangodb-server/options.md#javascript) is executed in the ArangoDB
-process using the embedded V8 engine:
-
-- Backend parts of the web interface
-- Foxx Apps
-- Foxx Queues
-- GraphQL
-- JavaScript-based transactions
-- User-defined AQL functions
-
-There are several *V8 contexts* for parallel execution. You can think of them as
-a thread pool. They are also called *isolates*. Each isolate has a heap of a few
-gigabytes by default. You can restrict V8 if you use no or very little
-JavaScript:
-
-```
---javascript.v8-contexts 2
---javascript.v8-max-heap 512
-```
-
-This will limit the number of V8 isolates to two. All JavaScript related
-requests will be queued up until one of the isolates becomes available for the
-new task. It also restricts the heap size to 512 MByte, so that both V8 contexts
-combined cannot use more than 1 GByte of memory in the worst case.
-
-### V8 for the Desperate
-
-You should not use the following settings unless there are very good reasons,
-like a local development system on which performance is not critical or an
-embedded system with very limited hardware resources!
-
-```
---javascript.v8-contexts 1
---javascript.v8-max-heap 256
-```
-
-Using the settings above, you can reduce the memory usage of V8 to 256 MB and just
-one thread. There is a chance that some operations will be aborted because they run
-out of memory in the web interface for instance. Also, JavaScript requests will be
-executed one by one.
-
-If you are very tight on memory, and you are sure that you do not need V8, you
-can disable it completely:
-
-```
---javascript.enabled false
---foxx.queues false
-```
-
-In consequence, the following features will not be available:
-
-- Backend parts of the web interface
-- Foxx Apps
-- Foxx Queues
-- GraphQL
-- JavaScript-based transactions
-- User-defined AQL functions
-
-Note that JavaScript / V8 is automatically disabled for DB-Server and Agency
-nodes in a cluster without these limitations. They apply only to single server
-instances and Coordinator nodes. You should not disable V8 on Coordinators
-because certain cluster operations depend on it.
-
-## Concurrent operations
-
-Starting with ArangoDB 3.8 one can limit the number of concurrent
-operations being executed on each Coordinator. Reducing the amount of
-concurrent operations can lower the RAM usage on Coordinators. The
-startup option for this is:
-
-```
---server.ongoing-low-priority-multiplier
-```
-
-The default for this option is 4, which means that a Coordinator with `t`
-scheduler threads can execute up to `4 * t` requests concurrently. The
-minimal value for this option is 1.
-
-Also see the [_arangod_ startup options](../../components/arangodb-server/options.md#--serverongoing-low-priority-multiplier).
-
-## CPU usage
-
-You cannot really reduce the CPU usage directly, but you can reduce the number
-of threads running in parallel. Again, you should not do this unless there are
-very good reasons, like an embedded system. Note that this limits the
-performance for concurrent requests, which may be okay for a local development
-system with you as the only user.
-
-The number of background threads can be limited in the following way:
-
-```
---arangosearch.threads-limit 1
---rocksdb.max-background-jobs 4
---server.maintenance-threads 3
---server.maximal-threads 5
---server.minimal-threads 1
-```
-
-This will usually not be good for performance, though.
-
-In general, the number of threads is determined automatically to match the
-capabilities of the target machine. However, each thread requires at most 8 MB
-of stack memory when running ArangoDB on Linux (most of the time a lot less),
-so having a lot of concurrent threads around needs a lot of memory, too.
-Reducing the number of server threads as in the example above can help reduce
-the memory usage caused by threads, but sacrifices throughput.
-
-In addition, the following option will make logging synchronous, saving one
-dedicated background thread for the logging:
-
-```
---log.force-direct true
-```
-
-This is not recommended unless you only log errors and warnings.
-
-## Examples
-
-If you don't want to go with the default settings, you should first adjust the
-size of the block cache and the edge cache. If you have a graph use case, you
-should go for a larger edge cache. For example, split the memory 50:50 between
-the block cache and the edge cache. If you have no edges, then go for a minimal
-edge cache and use most of the memory for the block cache.
-
-For example, if you have a machine with 40 GByte of memory and you want to
-restrict ArangoDB to 20 GB of that, use 10 GB for the edge cache and 10 GB for
-the block cache if you use graph features.
-
-Please keep in mind that during query execution additional memory will be used
-for query results temporarily. If you are tight on memory, you may want to go
-for 7 GB each instead.
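-
-A sketch of the 20 GB scenario above with the 50:50 split between block cache
-and edge cache (hypothetical values, in bytes):
-
-```
-ARANGODB_OVERRIDE_DETECTED_TOTAL_MEMORY=21474836480
---rocksdb.block-cache-size 10737418240
---cache.size 10737418240
-```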
-
-If you have an embedded system or your development laptop, you can use all of
-the above settings to lower the memory footprint further. For normal operation,
-especially production, these settings are not recommended.
-
-## Linux System Configuration
-
-The main deployment target for ArangoDB is Linux. As you have learned above,
-ArangoDB and its innards work a lot with memory. Thus, it is vital to know how
-ArangoDB and the Linux kernel interact on that matter. The Linux kernel offers
-several modes of managing memory. You can influence this via the proc
-filesystem, the file `/etc/sysctl.conf`, or a file in `/etc/sysctl.d/` which
-your system applies to the kernel settings at boot time. The settings as
-named below are intended for the sysctl infrastructure, meaning that they map
-to the `proc` filesystem as `/proc/sys/vm/overcommit_memory`.
-
-A `vm.overcommit_memory` setting of **2** can cause issues in some environments
-in combination with the bundled memory allocator ArangoDB ships with (jemalloc).
-
-The allocator demands consecutive blocks of memory from the kernel, which are
-also mapped to on-disk blocks. This is done on behalf of the server process
-(*arangod*). The process may use some chunks of a block for a long time span, but
-others only for a short while and therefore release the memory. It is then up to
-the allocator to return the freed parts back to the kernel. Because it can only
-give back consecutive blocks of memory, it has to split the large block into
-multiple small blocks and can then return the unused ones.
-
-With a `vm.overcommit_memory` kernel setting of **2**, the allocator may
-have trouble with splitting existing memory mappings, which makes the *number*
-of memory mappings of an arangod server process grow over time. This can lead to
-the kernel refusing to hand out more memory to the arangod process, even if more
-physical memory is available. The kernel will only grant up to `vm.max_map_count`
-memory mappings to each process, which defaults to 65530 on many Linux
-environments.
-
-Another issue when running jemalloc with `vm.overcommit_memory` set to **2** is
-that for some workloads the amount of memory that the Linux kernel tracks as
-*committed memory* also grows over time and never decreases. Eventually,
-*arangod* may not get any more memory simply because it reaches the configured
-overcommit limit (physical RAM * `overcommit_ratio` + swap space).
-
-The solution is to
-[modify the value of `vm.overcommit_memory`](../installation/linux/operating-system-configuration.md#overcommit-memory)
-from **2** to either **0** or **1**. This will fix both of these problems.
-We still observe ever-increasing *virtual memory* consumption when using
-jemalloc regardless of the overcommit setting, but in practice this should not
-cause any issues. **0** is the Linux kernel default and also the setting we recommend.
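-
-For example, to apply the recommended setting immediately and persist it across
-reboots (the file name is just an example):
-
-```
-sysctl -w vm.overcommit_memory=0
-echo "vm.overcommit_memory = 0" > /etc/sysctl.d/99-arangodb.conf
-```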
-
-For the sake of completeness, let us also mention another way to address the
-problem: use a different memory allocator. This requires compiling ArangoDB
-from the source code without jemalloc (`-DUSE_JEMALLOC=Off` in the call to cmake).
-With the system's libc allocator, you should see quite stable memory usage. We
-also tried another allocator, namely the one from `libmusl`, and it also
-shows quite stable memory usage over time. What holds us back from changing the
-bundled allocator is that it is a non-trivial change and that jemalloc otherwise
-has very nice performance characteristics for massively multi-threaded
-processes.
-
-## Testing the Effects of Reduced I/O Buffers
-
-- 15:50 – Start bigger import
-- 16:00 – Start writing documents of ~60 KB size one at a time
-- 16:45 – Add similar second writer
-- 16:55 – Restart ArangoDB with the RocksDB write buffer configuration suggested above
-- 17:20 – Buffers are full, write performance drops
-- 17:38 – WAL rotation
-
-What you see in the above performance graph are the consequences of restricting
-the write buffers. Until a 90% fill rate of the write buffers is reached, the
-server can almost follow the load pattern, at the cost of constantly
-growing buffers. Once RocksDB reaches a 90% buffer fill rate, it
-significantly throttles the load to ~50%. This is expected according to the
-[upstream documentation](https://github.com/facebook/rocksdb/wiki/Write-Buffer-Manager):
-
-> […] a flush will be triggered […] if total mutable memtable size exceeds 90%
-> of the limit. If the actual memory is over the limit, more aggressive flush
-> may also be triggered even if total mutable memtable size is below 90%.
-
-Since only the disk I/O bytes were measured, the graph does not show that the
-document save operations also doubled in request time.
diff --git a/site/content/3.10/operations/administration/user-management/_index.md b/site/content/3.10/operations/administration/user-management/_index.md
deleted file mode 100644
index 481b95ec5a..0000000000
--- a/site/content/3.10/operations/administration/user-management/_index.md
+++ /dev/null
@@ -1,433 +0,0 @@
----
-title: Managing Users
-menuTitle: User Management
-weight: 25
-description: >-
- User management is possible in the web interface and in _arangosh_ in the
- context of the `_system` database
----
-Authentication needs to be enabled on the server in order to employ user
-permissions. Authentication is turned on by default in ArangoDB. You should
-make sure that it was not turned off manually, however. Check the configuration
-file (normally named `/etc/arangodb.conf`) and make sure it contains the
-following line in the `[server]` section:
-
-```
-authentication = true
-```
-
-This will make ArangoDB require authentication for every request (including
-requests to Foxx apps depending on the option below). If you want to run Foxx
-apps without HTTP authentication, but activate HTTP authentication for the built-in
-server APIs, you can add the following line in the `[server]` section of the
-configuration:
-
-```
-authentication-system-only = true
-```
-
-The above will bypass authentication for requests to Foxx apps.
-
-When finished making changes, you need to restart ArangoDB, e.g.:
-
-```
-service arangodb restart
-```
-
-User management is possible in the [web interface](../../../components/web-interface/users.md)
-while logged on to the `_system` database and in
-[arangosh](in-arangosh.md), as well as via the
-[HTTP API](../../../develop/http-api/users.md).
-
-There is a built-in user account `root` which cannot be removed. Note that it
-has an empty password by default, so make sure to set a strong password
-immediately. Additional users can be created and granted different actions and
-access levels. ArangoDB user accounts are valid throughout a server instance
-(across databases).
-
-## Actions and Access Levels
-
-An ArangoDB server contains a list of users. It also defines various
-access levels that can be assigned to a user (for details, see below)
-and that are needed to perform certain actions. These actions can be grouped
-into three categories:
-
-- server actions
-- database actions
-- collection actions
-
-The **server actions** are:
-
-- **create user**: allows creating a new user.
-
-- **update user**: allows changing the access levels and details of an existing
-user.
-
-- **drop user**: allows deleting an existing user.
-
-- **create database**: allows creating a new database.
-
-- **drop database**: allows deleting an existing database.
-
-- **shutdown server**: allows removing the server from the cluster and
-shutting it down.
-
-The **database actions** are tied to a given database, and access
-levels must be set for each database individually. For a given database,
-the actions are:
-
-- **create collection**: allows creating a new collection in the given database.
-
-- **update collection**: allows updating the properties of an existing collection.
-
-- **drop collection**: allows deleting an existing collection.
-
-- **create index**: allows creating an index for an existing collection in the
-given database.
-
-- **drop index**: allows deleting an index of an existing collection in the given
-database.
-
-The **collection actions** are tied to a given collection of a given
-database, and access levels must be set for each collection individually.
-For a given collection, the actions are:
-
-- **read document**: reads a document of the given collection.
-
-- **create document**: creates a new document in the given collection.
-
-- **modify document**: modifies an existing document of the given collection;
-this can be an update or replace operation.
-
-- **drop document**: deletes an existing document of the given collection.
-
-- **truncate collection**: deletes all documents of a given collection.
-
-To perform actions on the server level, the user needs at least the following
-access levels. The access levels are *Administrate* and
-*No access*:
-
-| server action | server level |
-|---------------------------|--------------|
-| create a database | Administrate |
-| drop a database | Administrate |
-| create a user | Administrate |
-| update a user | Administrate |
-| update user access level | Administrate |
-| drop a user | Administrate |
-| shutdown server | Administrate |
-
-To perform actions in a specific database (like creating or dropping collections),
-a user needs at least the following access level.
-The possible access levels for databases are *Administrate*, *Access* and *No access*.
-The access levels for collections are *Read/Write*, *Read Only* and *No Access*.
-
-| database action | database level | collection level |
-|------------------------------|----------------|------------------|
-| create collection | Administrate | Read/Write |
-| list collections | Access | Read Only |
-| rename collection | Administrate | Read/Write |
-| modify collection properties | Administrate | Read/Write |
-| read properties | Access | Read Only |
-| drop collection | Administrate | Read/Write |
-| create an index | Administrate | Read/Write |
-| drop an index | Administrate | Read/Write |
-| see index definition | Access | Read Only |
-
-Note that the access level *Access* for a database is always required to perform
-any action on a collection in that database.
-
-For collections, a user needs the following access
-levels to the given database and the given collection. The access levels for
-the database are *Administrate*, *Access* and *No access*. The access levels
-for the collection are *Read/Write*, *Read Only* and *No Access*.
-
-| action | collection level | database level |
-|-----------------------|-------------------------|------------------------|
-| read a document | Read/Write or Read Only | Administrate or Access |
-| create a document | Read/Write | Administrate or Access |
-| modify a document | Read/Write | Administrate or Access |
-| drop a document | Read/Write | Administrate or Access |
-| truncate a collection | Read/Write | Administrate or Access |
-
-**Example**
-
-For example, given
-
-- a database *example*
-- a collection *data* in the database *example*
-- a user *JohnSmith*
-
-If the user *JohnSmith* is assigned the access level *Access* for the database
-*example* and the level *Read/Write* for the collection *data*, then the user
-is allowed to read, create, modify or delete documents in the collection
-*data*. However, the user is not allowed to, for example, create indexes for the
-collection *data* or create new collections in the database *example*.
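-
-For instance, these access levels can be granted in arangosh through the
-`@arangodb/users` module (a sketch, assuming the database, collection, and user
-already exist; at the database level, `"ro"` corresponds to *Access*, while at
-the collection level `"rw"` is *Read/Write*):
-
-```js
-var users = require("@arangodb/users");
-users.grantDatabase("JohnSmith", "example", "ro");           // database level: Access
-users.grantCollection("JohnSmith", "example", "data", "rw"); // collection level: Read/Write
-```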
-
-## Granting Access Levels
-
-Access levels can be managed via the [web interface](../../../components/web-interface/users.md)
-or in [arangosh](in-arangosh.md).
-
-In order to grant an access level to a user, you can assign one of
-three access levels for each database and one of three levels for each
-collection in a database. The server access level for the user follows
-from the database access level in the `_system` database: it is
-*Administrate* if and only if the database access level is
-*Administrate*. Note that this means that database access level
-*Access* does not grant a user server access level *Administrate*.
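-
-For example, only an `"rw"` (*Administrate*) grant on the `_system` database
-yields the server access level *Administrate* (a sketch; the username is made up):
-
-```js
-var users = require("@arangodb/users");
-users.grantDatabase("anna", "_system", "rw"); // server access level: Administrate
-// whereas "ro" (Access) on _system would not grant server-level Administrate
-```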
-
-### Initial Access Levels
-
-When a user creates a database, the access level of the user for that database
-is set to *Administrate*. The same is true for creating a collection, in this
-case the user gets *Read/Write* access to the collection.
-
-### Wildcard Database Access Level
-
-With the above definition, one must define the database access level for
-all database/user pairs in the server, which would be very tedious. In
-order to simplify this process, it is possible to define a wildcard
-database access level for a user. This wildcard is used if the database
-access level is *not* explicitly defined for a certain database. Each newly
-created user has an initial database wildcard of *No Access*.
-
-Changing the wildcard database access level for a user will change the
-access level for all databases that have no explicitly defined
-access level. Note that this includes databases which will be created
-in the future and for which no explicit access levels are set for that
-user!
-
-If you delete the wildcard, the default access level is defined as *No Access*.
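-
-A wildcard grant can be set and removed again in arangosh like this (a sketch;
-the username is made up):
-
-```js
-var users = require("@arangodb/users");
-users.grantDatabase("JohnSmith", "*", "ro"); // wildcard: Access for all databases
-                                             // without an explicit grant
-users.revokeDatabase("JohnSmith", "*");      // delete the wildcard again,
-                                             // falling back to No Access
-```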
-
-The `root` user has an initial database wildcard of *Administrate*.
-
-If a user has the access level *Access* or *Administrate* for the `_system`
-database but a lower wildcard database access level, then the `_system` database
-access level is granted for all databases that do not have an explicit
-access level defined.
-
-See [Permission Resolution](#permission-resolution) for details.
-
-**Example**
-
-Assume user *JohnSmith* has the following database access levels:
-
-| | Access level |
-|--------------------|--------------|
-| database `_system` | No Access |
-| database `shop1` | Administrate |
-| database `shop2` | No Access |
-| database `*` | Access |
-
-This gives the user *JohnSmith* the following database level access:
-
-- database `_system`: *No Access*
-- database `shop1`: *Administrate*
-- database `shop2`: *No Access*
-- database `something`: *Access*
-
-If the wildcard `*` is changed from *Access* to *No Access*, then the
-permissions change as follows:
-
-- database `_system`: *No Access*
-- database `shop1`: *Administrate*
-- database `shop2`: *No Access*
-- database `something`: *No Access*
-
-If the `_system` database access level is changed from *No Access* to
-*Administrate*, then the permissions change again for databases with no
-explicitly defined access level:
-
-- database `_system`: *Administrate*
-- database `shop1`: *Administrate*
-- database `shop2`: *No Access*
-- database `something`: *Administrate*
-
-### Wildcard Collection Access Level
-
-For each user and database, there is a wildcard collection access level.
-This level is used for all collections of a database without an explicitly
-defined collection access level. Note that this includes collections
-which will be created in the future and for which no explicit access
-levels are set for that user! Each newly created user has an initial
-collection wildcard of *No Access*.
-
-If you delete the wildcard, the system defaults to *No Access*.
-
-The `root` user has an initial collection wildcard of *Read/Write* in every database.
-
-When creating a user through
-[`db._createDatabase()`](../../../develop/javascript-api/@arangodb/db-object.md#db_createdatabasename--options--users),
-the access level of the user for this database is set to *Administrate* and the
-wildcard for all collections within this database is set to *Read/Write*.
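-
-A sketch of such a call in arangosh (the password is made up):
-
-```js
-// Creates the database and the user, granting the user Administrate on the
-// database and a Read/Write wildcard for its collections:
-db._createDatabase("example", {}, [{ username: "JohnSmith", passwd: "secret", active: true }]);
-```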
-
-If a user has the access level *Access* or *Administrate* for the `_system`
-database but a lower wildcard database access level or wildcard collection
-access level, then the `_system` database access level is granted for all
-collections that do not have an explicit access level defined.
-
-See [Permission Resolution](#permission-resolution) for details.
-
-{{< security >}}
-It is recommended to use explicitly defined access levels for all databases and
-collections instead of wildcard grants to avoid accidentally granting more
-permissions than intended.
-{{< /security >}}
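-
-Explicit collection grants and a collection wildcard can be set in arangosh
-like this (a sketch; the names are made up):
-
-```js
-var users = require("@arangodb/users");
-users.grantCollection("JohnSmith", "shop1", "*", "none");      // wildcard for shop1
-users.grantCollection("JohnSmith", "shop1", "products", "ro"); // explicit grant
-users.revokeCollection("JohnSmith", "shop1", "*");             // delete the wildcard again
-```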
-
-**Examples**
-
-Assume user *JohnSmith* has the following database access levels:
-
-| | Access level |
-|--------------------|--------------|
-| database `_system` | No Access |
-| database `*` | Access |
-
-And the following collection access levels:
-
-| | Access level |
-|-----------------------------------------|--------------|
-| database `*`, collection `*` | Read/Write |
-| database `shop1`, collection `products` | Read Only    |
-| database `shop1`, collection `*` | No Access |
-| database `shop2`, collection `reviews` | No Access |
-
-Then the user *JohnSmith* gets the following collection access levels:
-
-- database `shop1`, collection `products`: *Read Only*
-- database `shop1`, collection `customers`: *Read/Write*
-- database `shop2`, collection `reviews`: *No Access*
-
-Explanation:
-
-Database `shop1`, collection `products` directly matches a defined
-collection access level and the database access level is higher
-than *No Access*, leading to *Read Only* access for the collection.
-
-Database `shop1`, collection `customers` does not match a defined access
-level. There is a wildcard collection access level of *No Access*, but it cannot
-lower the access level granted by the wildcard combination of database `*`,
-collection `*`, leading to *Read/Write* access for the collection.
-
-Database `shop2` does not match a defined access level. However, the database
-matches the database wildcard access level of *Access*. The access level for all
-collections with no defined access level would be *Read/Write* because of the
-wildcard combination of database `*`, collection `*`, but the `reviews`
-collection has a defined access level of *No Access*, leading to no access for
-this collection.
-
-Assume user *JohnSmith* has the following database access levels:
-
-| | Access level |
-|--------------------|--------------|
-| database `_system` | Access |
-| database `shop2` | Administrate |
-| database `*` | No Access |
-
-And the following collection access levels:
-
-| | Access level |
-|------------------------------------------|--------------|
-| database `shop1`, collection `customers` | No Access |
-| database `shop1`, collection `*` | No Access |
-
-Then the user *JohnSmith* gets the following collection access levels:
-
-- database `shop1`, collection `products`: *Read Only*
-- database `shop1`, collection `customers`: *No Access*
-- database `shop2`, collection `reviews`: *Read/Write*
-
-Explanation:
-
-Database `shop1`, collection `products` does not match a defined access
-level. There is a wildcard collection access level of *No Access*, but it cannot
-lower the access level granted via the `_system` database, leading to
-*Read Only* access for the collection.
-
-Database `shop1`, collection `customers` directly matches a defined
-collection access level. The database access level is higher than *No Access*
-but the explicitly defined collection access level of *No Access* leads to no
-access for the collection.
-
-Database `shop2` has a defined access level of *Administrate*. No access level
-is defined for its collections, which is equivalent to a wildcard collection
-access level of *No Access*. However, the *Administrate* database access level
-leads to *Read/Write* access for all collections in the database, including the
-`reviews` collection.
-
-### Permission Resolution
-
-The access levels for databases and collections are resolved in the following way:
-
-For a database "*foo*":
-1. Check if there is a specific database grant for *foo*.
- If yes, use the granted access level.
-2. Choose the higher access level of:
- - A wildcard database grant (like `grantDatabase('user', '*', 'rw')`).
- - A database grant on the `_system` database.
-
-For a collection named "*bar*" in a database "*foo*":
-1. Check whether the effective access level for the database *foo* is higher than
- *No Access* (see above). If not, then you cannot access the collection *bar*.
-2. Check if there is a specific collection grant for *bar*.
- If yes, use the granted collection access level for *bar*.
-3. Choose the higher access level of:
- - A wildcard collection grant in the same database
- (like `grantCollection('user', 'foo', '*', 'rw')`).
- - A wildcard database grant (like `grantDatabase('user', '*', 'rw')`).
-  - The access level for the current database (like `grantDatabase('user', 'foo', 'rw')`).
- - The access level for the `_system` database.
-
-An exception to this are system collections, where only the access level for the
-database is used.
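-
-The following JavaScript sketch mirrors these rules and the worked examples
-above (illustrative only, not ArangoDB source code; grants use the
-`"rw"`/`"ro"`/`"none"` notation of the grant functions). The combination of
-database `*` and collection `*` is included in step 3, matching the first
-wildcard collection example above:
-
-```js
-var LEVELS = { "none": 0, "ro": 1, "rw": 2 };
-
-function higher(a, b) {
-  return LEVELS[a] >= LEVELS[b] ? a : b;
-}
-
-// dbGrants:  e.g. { "_system": "ro", "shop2": "rw", "*": "none" }
-// colGrants: e.g. { "shop1": { "customers": "none", "*": "none" } }
-function databaseLevel(dbGrants, db) {
-  if (dbGrants[db] !== undefined) {
-    return dbGrants[db];                        // 1. a specific grant wins
-  }
-  return higher(dbGrants["*"] || "none",        // 2. otherwise the higher of the
-                dbGrants["_system"] || "none"); //    wildcard and _system grants
-}
-
-function collectionLevel(dbGrants, colGrants, db, col) {
-  if (databaseLevel(dbGrants, db) === "none") {
-    return "none";                              // 1. no database access at all
-  }
-  var inDb = colGrants[db] || {};
-  if (inDb[col] !== undefined) {
-    return inDb[col];                           // 2. a specific collection grant wins
-  }
-  var combo = (colGrants["*"] || {})["*"] || "none"; // database `*`, collection `*`
-  return higher(higher(inDb["*"] || "none", combo),  // 3. otherwise the highest of the
-         higher(dbGrants[db] || "none",              //    wildcard collection grants and
-         higher(dbGrants["*"] || "none",             //    the database grants
-                dbGrants["_system"] || "none")));
-}
-```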
-
-### System Collections
-
-The access level for system collections cannot be changed. They follow
-different rules than user-defined collections and may change without further
-notice. The system collections follow these rules:
-
-| Collection | Access level |
-|--------------------------------------|--------------|
-| `_users` (in the `_system` database) | No Access |
-| `_queues` | Read-Only |
-| `_frontend` | Read/Write |
-| `*` (default) | *based on the current database* |
-
-All other system collections have access level *Read/Write* if the
-user has *Administrate* access to the database. They have access level
-*Read Only* if the user has *Access* to the database.
-
-To modify these system collections, you should always use the
-specialized APIs provided by ArangoDB. For example,
-no user has access to the `_users` collection in the `_system`
-database. All changes to the access levels must be done using the
-[`@arangodb/users` module of the JavaScript API](in-arangosh.md),
-the [`/_api/user` HTTP API endpoints](../../../develop/http-api/users.md),
-or the web interface.
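-
-For example, a password change goes through the users module rather than
-through the `_users` collection (a sketch; the username is made up):
-
-```js
-var users = require("@arangodb/users");
-users.update("JohnSmith", "newPassword"); // handled via the API, not by
-                                          // writing to _users directly
-```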
-
-### LDAP Users
-
-{{< tag "ArangoDB Enterprise Edition" "ArangoGraph" >}}
-
-ArangoDB supports LDAP as an external authentication system. For detailed
-information, please have a look at the
-[LDAP configuration guide](../../../components/arangodb-server/ldap.md).
-
-There are a few differences to *normal* ArangoDB users:
-- ArangoDB does not "*know*" LDAP users before they first authenticate.
- Calls to various APIs using endpoints in `_api/users/*` will **fail** until
-  the user first logs in.
-- Access levels of each user are periodically updated. By default, this
-  happens every *5 minutes*.
-- It is not possible to change permissions on LDAP users directly, only on
-  **roles**.
-- LDAP users cannot store per-user configuration data
-  (this affects, for example, custom settings in the graph viewer).
-
-To grant access to an LDAP user, you need to create *roles* within the
-ArangoDB server. A role is just a user with the `:role:` prefix in its name.
-Role users cannot log in as database users; the `:role:` prefix ensures this.
-Your LDAP users need to have at least one role; once users log in, they
-are automatically granted the union of all access rights of all their roles.
-Note that a lower access grant in one role is overridden by a higher
-access grant in a different role.
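-
-A role can be created and granted permissions like a regular user in arangosh
-(a sketch; the role name, database, and grant are made up):
-
-```js
-var users = require("@arangodb/users");
-users.save(":role:project-a", "", true);              // the role itself cannot log in
-users.grantDatabase(":role:project-a", "mydb", "rw"); // LDAP users with this role
-                                                      // get Read/Write on mydb
-```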
diff --git a/site/content/3.10/operations/backup-and-restore.md b/site/content/3.10/operations/backup-and-restore.md
deleted file mode 100644
index 942a1fbdbe..0000000000
--- a/site/content/3.10/operations/backup-and-restore.md
+++ /dev/null
@@ -1,351 +0,0 @@
----
-title: Backup and Restore
-menuTitle: Backup & Restore
-weight: 215
-description: >-
- Physical backups, logical backups with _arangodump_ and _arangorestore_,
- hot backups with _arangobackup_
----
-ArangoDB supports three backup methods:
-
-1. Physical (raw or "cold") backups
-2. Logical backups
-3. Hot backups
-
-These backup methods save the data which is in the database system. In addition,
-make sure to back up things like configuration files, startup scripts, Foxx
-services, access tokens, secrets, certificates, etc. and store them securely in
-a different location.
-
-Performing frequent backups is important and a recommended best practice that
-allows you to recover your data in case unexpected problems occur.
-Hardware failures, system crashes, or users mistakenly deleting data can always
-happen. Furthermore, while a big effort is put into the development and testing
-of ArangoDB (in all its deployment modes), ArangoDB, like any other software
-product, might include bugs or errors and data loss could occur.
-It is therefore important to regularly back up your data to be able to recover
-and get up and running again in case of serious problems.
-
-Creating backups of your data before an ArangoDB upgrade is also a best practice.
-
-{{< warning >}}
-Making use of a high availability deployment mode of ArangoDB, like Active Failover,
-Cluster, or Datacenter-to-Datacenter Replication, does not remove the need for
-taking frequent backups, which are recommended even when using such deployment modes.
-{{< /warning >}}
-
-## Physical backups
-
-Physical (raw or "cold") backups can be done when the ArangoDB Server is not running
-by making a raw copy of the ArangoDB data directory.
-
-Such backups are extremely fast as they only involve file copying.
-
-If ArangoDB runs in Active Failover or Cluster mode, it is necessary
-to copy the data directories of all the involved processes (_Agents_, _Coordinators_ and
-_DB-Servers_).
-
-{{< warning >}}
-It is extremely important that physical backups are taken only after all the ArangoDB
-processes have been shut down and the processes are not running anymore.
-Otherwise files might still be written to, likely resulting in a corrupt and incomplete backup.
-{{< /warning >}}
-
-It is not always possible to take a physical backup as this method requires a shutdown
-of the ArangoDB processes. However, on some occasions such backups are useful, often
-in conjunction with backups created using another backup method.
-
-## Logical Backups
-
-Logical backups can be created and restored with the tools
-[_arangodump_](../components/tools/arangodump/_index.md) and
-[_arangorestore_](../components/tools/arangorestore/_index.md).
-
-{{< tip >}}
-In order to speed up the _arangorestore_ performance in a Cluster environment,
-the [Fast Cluster Restore](../components/tools/arangorestore/fast-cluster-restore.md)
-procedure is recommended.
-{{< /tip >}}
-
-## Hot Backups
-
-{{< tag "ArangoDB Enterprise Edition" "ArangoGraph" >}}
-
-Hot backup and restore associated operations can be performed with the
-[_arangobackup_](../components/tools/arangobackup/_index.md) client tool and the
-[Hot Backup HTTP API](../develop/http-api/hot-backups.md).
-
-Many operations cannot afford downtime and thus require administrators and
-operators to create consistent freezes of the data during normal operation.
-Such use cases imply that near-instantaneous hot backups must be
-obtained in sync across, say, a cluster deployment. For this purpose, the
-hot backup mechanism was created.
-
-The process of creating hot backups is ideally an instantaneous event during
-normal operations that consists of a few subsequent steps behind the scenes:
-
-- Stop all write accesses to the entire installation using a write transaction lock.
-- Create a new local directory under `<data-dir>/backups/<timestamp>_<label>`.
-- Create hard links to the active database files in `<data-dir>` in the newly
-  created backup directory.
-- Release the write transaction lock to resume normal operation.
-- Report success of the operation.
-
-The above quite precisely describes the tasks in a single instance installation
-and could technically finish in under a millisecond. The unknown factor above is,
-of course, when the hot backup process is able to obtain the write transaction lock.
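-
-For instance, a hot backup can be triggered from arangosh via the
-[Hot Backup HTTP API](../develop/http-api/hot-backups.md) (a sketch; the label
-is made up):
-
-```js
-// Issue a POST request to the hot backup endpoint of the connected server:
-var result = arango.POST("/_admin/backup/create",
-                         JSON.stringify({ label: "before-upgrade" }));
-```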
-
-In an ArangoDB cluster, two more steps need to be integrated, while
-others just become slightly more involved. On the Coordinator tasked with the
-hot backup, the following is done:
-
-- Using the Agency, make sure that no two hot backups collide.
-- Obtain a dump of the Agency's `Plan` key.
-- Stop all write access to the **entire cluster** installation using a
-  global write transaction lock; this amounts to obtaining each local write
-  transaction lock on each DB-Server, all at the same time.
-- Getting all the locks on the DB-Servers is tried using successively growing
- time periods, and if not all local locks can be acquired during a period,
- all locks are released again to allow writes to continue. If it is not
- possible to acquire all local locks in the same period, and this continues
- for an extended, configurable amount of time, the Coordinator gives
- up. With the `allowInconsistent` option set to `true`, it proceeds instead
- to create a potentially non-consistent hot backup.
-- **On each DB-Server** create a new local directory under
-  `<data-dir>/backups/<timestamp>_<label>`.
-- **On each DB-Server** create hard links to the active database files
-  in `<data-dir>` in the newly created backup directory.
-- **On each DB-Server** store a redundant copy of the above Agency dump.
-- Release the global write transaction lock to resume normal operation.
-- Report success of the operation.
-
-Again, under good conditions, a complete hot backup can be obtained from a
-cluster with many DB-Servers within a very short time, in the range
-of that of a single server installation.
-
-### Technical Details
-
-- **The Global Write Transaction Lock**
-
- To create a consistent snapshot of an ArangoDB single server or
- cluster deployment, all transactions need to be suspended in order for the
- state of a deployment to be consistent. However, there is no way for ArangoDB
-  to know on its own when this time comes. This is why a hot backup needs to
- acquire a global write transaction lock in order to create the backup in a
- consistent state.
-
- On a single server instance, this lock is eventually obtained and the hot
- backup is then created within a very short amount of time.
-
- However, in a cluster, this process is more complex. One Coordinator tries to
- obtain the global write transaction lock on all _DB-Servers_ simultaneously.
- Depending on the activity in the cluster, it can take some time for the
- Coordinator to acquire all the locks the cluster needs. Grabbing all the
- necessary locks at once might not always be successful, leading to times
- when it seems like the cluster's write operations are suspended.
-
- This process can happen multiple times until all locks are obtained.
-  The system administrator can control the length of time during which each
-  attempt to obtain the lock is made; each retry prolongs the previous wait time
-  by 10% (which gives more time for the global write transaction lock to resolve).
-
-- **Agency Lock**
-
-  Less variable, but equally important, is obtaining a freeze on the
-  cluster's structure itself. This is done through the creation of a simple key
-  lock in the cluster's configuration to stop all ongoing background tasks,
-  which are there to handle failovers, shard movements, server removals, etc.
- Its role is also to prevent multiple simultaneous hot backup operations.
- The acquisition of this key is predictably done within a matter of a few seconds.
-
-- **Operation's Time Scope**
-
- Once the global write transaction lock is obtained, everything goes very quickly.
-  A new backup directory is created, the write-ahead log is flushed, and
-  hard links are made on the file system level to all persistent files.
- The duration is not affected by the amount of data in ArangoDB and is near
- instantaneous.
-
-- **Point in Time Recovery**
-
- One of the great advantages of the method is the consistent snapshot nature.
- It gives the operator of the database the ability to persist a true and
- complete time freeze at near zero impact on the ongoing operation.
- The recovery is easy and restores the entire ArangoDB installation to a
- desired snapshot.
-
-  Apart from the ability to create such snapshots, it offers a great and
-  easy-to-use opportunity to experiment with ArangoDB while having a means to
-  protect against data loss or corruption.
-
-- **Remote Upload and Download**
-
-  We have fully integrated the
-  [rclone](https://rclone.org/) sync for cloud storage. Rclone is a very
-  versatile inter-site sync facility, which opens up a vast field of transport
-  protocols and remote syncing APIs, from Amazon's S3 over Dropbox and WebDAV
-  all the way to the local file system and network storage.
-
-  One can use the upload and download functionalities to migrate entire cluster
-  installations in this way, copy cluster and single server snapshots all
-  over the world, and create an intuitive and easy-to-use quick-access safety
-  backbone of the data operation.
-
-  Rclone is open source and available under the MIT license. It is battle-tested
-  and has garnered close to 15k stars on GitHub, attesting to the confidence
-  of many users.
-
-### Hot Backup Limitations
-
-ArangoDB hot backups impose limitations with respect to storage engine,
-storage usage, upgrades, deployment scheme, etc. Please review the below
-list of limitations closely to determine which operations they might or might
-not be suited for.
-
-- **Global Scope**
-
- In order to be able to create hot backups instantaneously, they are created
- on the file system level and thus well below any structural entity related to
- databases, collections, indexes, users, etc.
-
- As a consequence, a hot backup is a backup of the entire ArangoDB single server
- or cluster. In other words, one cannot restore to an older hot backup of a
- single collection or database. With every restore, one restores the entire
- deployment including of course the `_system` database.
-
- Note that this applies in particular in the case that a certain user
- might have admin access for the `_system` database, but explicitly has
- no access to certain collections. The backup still extends across
- **all** collections!
-
- {{< danger >}}
- A restore to an earlier hot backup snapshot also reverts users, graphs,
-  Foxx apps - everything - back to the state at the time of the hot backup!
- {{< /danger >}}
-
-- **Cluster's Special Limitations**
-
- Creating hot backups can only be done while the internal structure of the
- cluster remains unaltered. The background of this limitation lies in the
- distributed nature and the asynchronicity of creation, alteration and
- dropping of cluster databases, collections and indexes.
-
- It must be ensured that for the hot backup no such changes are made to the
- cluster's inventory, as this could lead to inconsistent hot backups.
-
-- **Active Failover Special Limitations**
-
-  When restoring hot backups in Active Failover setups, it is necessary to
-  prevent a non-restored follower from becoming leader by temporarily enabling
-  the maintenance mode:
-
-  1. `curl -X PUT <endpoint>/_admin/cluster/maintenance -d'"on"'`
-  2. Restore the Hot Backup.
-  3. `curl -X PUT <endpoint>/_admin/cluster/maintenance -d'"off"'`
-
-  Substitute `<endpoint>` with the actual endpoint of the **leader**
-  single server instance.
-
-- **Restoring from a different version**
-
- Hot backups share the same limitations with respect to different versions
- as ArangoDB itself. This means that a hot backup created with some version
- `a.b.c` can without any limitations be restored on any version `a.b.d` with
- `d` not equal to `c`, that is, the patch level can be changed arbitrarily.
- With respect to minor versions (second number, `b`), one can only upgrade
- and **not downgrade**. That is, a hot backup created with a version `a.b.c`
- can be restored on a version `a.d.e` for `d` greater than `b` but not for `d`
- less than `b`. At this stage, we do not guarantee any compatibility between
- versions with a different major version number (first number).
-
-- **Identical Topology**
-
- Unlike dumps created with [_arangodump_](../components/tools/arangodump/_index.md) and restored
- with [_arangorestore_](../components/tools/arangorestore/_index.md),
- hot backups can only be restored to the same type and structure of deployment.
- This means that one cannot restore a 3-node ArangoDB cluster's hot backup to
- any other deployment than another 3-node ArangoDB cluster of the same version.
-
-- **Storage Space**
-
- Without the creation of hot backups, RocksDB keeps compacting the file system
- level files as the operation continues. Compacted files are subsequently
- deleted automatically. Every hot backup needs to hold on to the
- files as they were at the moment of the hot backup creation, thus preventing
- the deletions and consequently growing the storage space of the ArangoDB
-  data directory. That growth of course depends on the number of write operations
-  over time.
-
- This is a crucial factor for sustained operation and might require
- significantly higher storage reservation for ArangoDB instances involved and
- a much more fine grained monitoring of storage usage than before.
-
- Also note that in a cluster, each RocksDB instance is backed up
- individually and hence the overall storage space is the sum of all
- RocksDB instances (i.e., data which is replicated between instances is
- not de-duplicated for performance reasons).
-
-- **Global Transaction Lock**
-
- In order to be able to create consistent hot backups, it is mandatory to get
- a very brief global transaction lock across the entire installation.
- In single server deployments, constant invocation of very long running
- transactions could prevent that from ever happening during a timeout period.
- The same holds true for clusters, where this lock must now be obtained on all
- DB-Servers at the same time.
-
- Especially in the cluster, the result of these successively longer tries to
- obtain the global transaction lock might become visible in periods of apparent
-  dead time. Locks might be obtained on some machines and not on others, so
- that the process has to be retried over and over. Every unsuccessful try would
- then lead to the release of all partial locks.
-
- {{< info >}}
- The _arangobackup_ tool provides a `--force` option
- that can be used to abort ongoing write transactions and thus to more quickly
- obtain the global transaction lock.
- {{< /info >}}
-
-  At this stage, index creation constitutes a write transaction, which means
- that during index creation one cannot create a hot backup. We intend to lift
- this limitation in a future version.
-
-- **Services on Single Server**
-
- On a single server, the installed Foxx microservices are not backed up and are
- therefore also not restored. This is because in single server mode
- the service installation is done locally in the file system and does not
- track the information in the `_apps` collection.
-
- In a cluster, the Coordinators eventually restore the state of the
- services from the `_apps` and `_appbundles` collections after a backup is
- restored.
-
-- **Encryption at Rest**
-
- The hot backup simply takes a snapshot of the database files.
- If you use encryption at rest, then the backed up files are
- encrypted, with the encryption key that has been used in the
- instance which created the backup.
-
- Such an encrypted backup can only be restored to an instance using the
- same encryption key.
-
-- **Replication and Hot Backup**
-
- Hot backups are not automatically replicated between instances. This is
- true for both the Active Failover setup with 2 (or more) single servers
- and for the Datacenter-to-Datacenter Replication between clusters.
- Simply take hot backups on all instances.
-
- {{< info >}}
- The DC2DC replication needs to be stopped before restoring a Hot Backup.
-
- 1. Stop the DC2DC synchronization with `arangosync stop sync ...`.
- 2. Restore the Hot Backup.
- 3. Restart the DC2DC synchronization with `arangosync configure sync ...`.
- {{< /info >}}
-
-- **Known Issues**
-
- See the list of [Known Issues](../release-notes/version-3.10/known-issues-in-3-10.md#hot-backup).
diff --git a/site/content/3.10/operations/installation/_index.md b/site/content/3.10/operations/installation/_index.md
deleted file mode 100644
index 0678048be7..0000000000
--- a/site/content/3.10/operations/installation/_index.md
+++ /dev/null
@@ -1,77 +0,0 @@
----
-title: Installation
-menuTitle: Installation
-weight: 210
-description: >-
- You can install ArangoDB by downloading and running the official packages,
- as well as run ArangoDB using Docker images
----
-To install ArangoDB, as a first step, please download a package for your operating
-system from the official [Download](https://www.arangodb.com/download)
-page of the ArangoDB website.
-
-You can find packages for various operating systems, including _RPM_ and _Debian_
-packages as well as `tar.gz` archives for Linux. For macOS, only client tools `tar.gz`
-packages are available. For Windows, _Installers_ and `zip` archives are available.
-
-- [Linux](linux/_index.md)
-- [macOS](macos.md)
-- [Windows](windows.md)
-
-{{< tip >}}
-You can also use the official [Docker images](https://hub.docker.com/_/arangodb/)
-to run ArangoDB in containers on Linux, macOS, and Windows. For more information,
-see the [Docker](docker.md) section.
-{{< /tip >}}
-
-If you prefer to compile ArangoDB from source, please refer to the [Compiling](compiling/_index.md)
-section.
-
-For detailed information on how to deploy ArangoDB, once it has been installed,
-please refer to the [Deploy](../../deploy/_index.md) chapter.
-
-## Supported platforms and architectures
-
-Work with ArangoDB on Linux, macOS, and Windows, and run it in production on Linux.
-
-{{< info >}}
-ArangoDB requires systems with Little Endian byte order.
-{{< /info >}}
-
-{{< tip >}}
-[ArangoGraph Insights Platform](https://dashboard.arangodb.cloud/home?utm_source=docs&utm_medium=cluster_pages&utm_campaign=docs_traffic)
-is a fully-managed service and requires no installation. It's the easiest way
-to run ArangoDB in the cloud.
-{{< /tip >}}
-
-### Linux
-
-ArangoDB is available for the following architectures:
-
-- **x86-64**: The processor(s) must support the **x86-64** architecture with the
- **SSE 4.2** and **AVX** instruction set extensions (Intel Sandy Bridge or better,
- AMD Bulldozer or better, etc.).
-- **ARM**: The processor(s) must be 64-bit ARM chips (**AArch64**). The minimum
- requirement is **ARMv8** with **Neon** (SIMD extension).
-
-The official Linux release executables of ArangoDB require the operating system
-to use a page size of **4096 bytes** or less.
-
-### macOS
-
-ArangoDB is available for the following architectures:
-
-- **x86-64**: The processor(s) must support the **x86-64** architecture with the
- **SSE 4.2** and **AVX** instruction set extensions (Intel Sandy Bridge or better,
- AMD Bulldozer or better, etc.).
-- **ARM**: The processor(s) must be 64-bit Apple silicon (**M1** or later) based on
- ARM (**AArch64**).
-
-### Windows
-
-ArangoDB is available for the following architectures:
-
-- **x86-64**: The processor(s) must support the **x86-64** architecture with the
- **SSE 4.2** and **AVX** instruction set extensions (Intel Sandy Bridge or better,
- AMD Bulldozer or better, etc.).
-
\ No newline at end of file
diff --git a/site/content/3.10/operations/installation/compiling/compile-on-windows.md b/site/content/3.10/operations/installation/compiling/compile-on-windows.md
deleted file mode 100644
index c89c5d3347..0000000000
--- a/site/content/3.10/operations/installation/compiling/compile-on-windows.md
+++ /dev/null
@@ -1,190 +0,0 @@
----
-title: Compiling ArangoDB under Windows
-menuTitle: Compile on Windows
-weight: 10
-description: >-
- This guide describes how to compile ArangoDB 3.4 and onwards under Windows
----
-## Install chocolatey
-
-With ArangoDB 3.0, a complete cmake environment was introduced. This also
-streamlines the dependencies on Windows. We suggest using
-[chocolatey.org](https://chocolatey.org/) to install most of
-the dependencies. While most projects offer their own setup & install
-packages, chocolatey offers a simplified way to install them with less user
-interaction. You can even use chocolatey via
-[Ansible 2.7's winrm facility](https://docs.ansible.com/ansible/latest/user_guide/windows.html)
-to do unattended installations of some software on Windows.
-
-First install the choco package manager by pasting this tiny cmdlet into a
-command window with Administrator privileges (press Windows key, type `cmd`
-and hit Ctrl+Shift+Enter):
-
- @powershell -NoProfile -ExecutionPolicy Bypass -Command "iex ((new-object net.webclient).DownloadString('https://chocolatey.org/install.ps1'))" && SET PATH=%PATH%;%ALLUSERSPROFILE%\chocolatey\bin
-
-### Visual Studio and its Compiler
-
-Since choco currently fails to alter the environment for
-[Microsoft Visual Studio](https://www.visualstudio.com/en-us/products/visual-studio-community-vs.aspx),
-we suggest downloading and installing Visual Studio by hand.
-
-ArangoDB v3.7 requires Visual Studio 2019 v16.5.0 or later.
-
-{{< warning >}}
-You need to make sure that it installs the **Desktop development with C++** preset,
-else cmake will fail to detect it later on. Furthermore, the **Windows 8.1 SDK and UCRT SDK**
-optional component is required to be selected during Visual Studio installation, else V8
-will fail to compile later on.
-{{< /warning >}}
-
-After it has been installed successfully, start it once so that it can finish its setup.
-
-### More Dependencies
-
-Now you can invoke the choco package manager for an unattended install of the dependencies
-*(needs to be run with Administrator privileges again)*:
-
- choco install -y cmake.portable nsis python2 procdump windbg wget
-
-Then we need to install the [OpenSSL](https://openssl.org) library from its sources or using precompiled
-[Third Party OpenSSL Related Binary Distributions](https://wiki.openssl.org/index.php/Binaries).
-
-### Optional Dependencies
-
-_Remember that you need to run below commands with Administrator privileges!_
-
-If you want to check out the code with git, install it like this:
-
- choco install -y git
-
-You need to allow and
-[enable symlinks for your user](https://github.com/git-for-windows/git/wiki/Symbolic-Links#allowing-non-administrators-to-create-symbolic-links).
-
-If you intend to run the unit tests, you also need the following:
-
- choco install -y winflexbison ruby
-
-Close and reopen the Administrator command window in order to continue with the ruby devkit:
-
- choco install -y ruby2.devkit
-
-And manually install the requirements via the `Gemfile` fetched from the ArangoDB Git repository
-*(needs to be run with Administrator privileges)*:
-
- wget https://raw.githubusercontent.com/arangodb/arangodb/devel/tests/rb/HttpInterface/Gemfile
- setx PATH %PATH%;C:\tools\DevKit2\bin;C:\tools\DevKit2\mingw\bin
- gem install bundler
- bundler
-
-Note that the V8 build scripts and gyp aren't compatible with Python 3.x, hence you need Python 2!
-
-## Building ArangoDB
-
-Download and extract the release tarball from
-[www.arangodb.com/download/](https://www.arangodb.com/download/).
-
-Or clone the GitHub repository and check out the branch or tag you need (e.g. `devel`):
-
- git clone https://github.com/arangodb/arangodb.git -b devel
- cd arangodb
-
-Generate the Visual Studio project files, and verify that cmake discovered all components on your system:
-
- mkdir Build64
- cd Build64
- cmake -G "Visual Studio 15 2017 Win64" ..
-
-Note that in some cases cmake struggles to find the proper python interpreter
-(i.e. the cygwin one won't work). You can forcefully overrule it by appending:
-
- -DPYTHON_EXECUTABLE:FILEPATH=C:/Python27/python.exe
-
-You can now load these in the Visual Studio IDE or use cmake to start the build:
-
- cmake --build . --config RelWithDebInfo
-
-The binaries need the ICU datafile `icudt54l.dat`, which is automatically copied into the directory containing the
-executable.
-
-If you intend to use the machine for development purposes, it may be more practical to copy it to a common place:
-
- cd 3rdParty/V8/v*/third_party/icu/source/data/in && cp icudt*.dat /cygdrive/c/Windows/
-
-Then configure your environment (yes, this instruction is reminiscent of The
-Hitchhiker's Guide to the Galaxy...) so that `ICU_DATA` points to `c:\\Windows`:
-Open the Explorer, right-click `This PC` in the tree on the left, and choose
-`Properties`. In the window that opens, select `Advanced system settings`, then
-click `Environment Variables`. In the `System Variables` part of the popup,
-click `New` and set the variable name to `ICU_DATA` and the value to `c:\\Windows`.
-
-
-
-## Unit tests (Optional)
-
-The unit tests require a [cygwin](https://www.cygwin.com/) environment.
-
-You need at least `make` from cygwin. Cygwin also offers a `cmake`. Do **not** install the cygwin cmake.
-
-You should also issue these commands to generate user information for the cygwin commands:
-
- mkpasswd > /etc/passwd
- mkgroup > /etc/group
-
-Turning ACL off (noacl) for all mounts in cygwin fixes permission troubles that may appear in the build:
-
- # /etc/fstab
- #
- # This file is read once by the first process in a Cygwin process tree.
- # To pick up changes, restart all Cygwin processes. For a description
- # see https://cygwin.com/cygwin-ug-net/using.html#mount-table
-
- # noacl = Ignore Access Control List and let Windows handle permissions
- C:/cygwin64/bin /usr/bin ntfs binary,auto,noacl 0 0
- C:/cygwin64/lib /usr/lib ntfs binary,auto,noacl 0 0
- C:/cygwin64 / ntfs override,binary,auto,noacl 0 0
- none /cygdrive cygdrive binary,posix=0,user,noacl 0 0
-
-### Enable native symlinks for Cygwin and git
-
-Cygwin creates proprietary files as placeholders by default instead of
-actually symlinking files. The placeholders later tell Cygwin where to resolve
-paths to. However, it does not intercept every access to the placeholders, so
-third-party scripts may break. Windows Vista and above support real symlinks,
-and Cygwin can be configured to make use of them:
-
- # use actual symlinks to prevent documentation build errors
- # (requires elevated rights!)
- export CYGWIN="winsymlinks:native"
-
-Note that you must run Cygwin as administrator or change the Windows group
-policies to allow user accounts to create symlinks (`gpedit.msc` if available).
-
-By the way, you can create symlinks manually on Windows like this:
-
- mklink /H target/file.ext source/file.ext
- mklink /D target/path source/path
- mklink /J target/path source/path/for/junction
-
-And in Cygwin:
-
- ln -s source target
-
-### Running Unit tests
-
-You can then run the integration tests in the cygwin shell like this:
-
- Build64/bin/RelWithDebInfo/arangosh.exe \
- -c etc/relative/arangosh.conf \
- --log.level warning \
- --server.endpoint tcp://127.0.0.1:1024 \
- --javascript.execute UnitTests/unittest.js \
- -- \
- all \
- --build Build64 \
- --buildType RelWithDebInfo \
- --skipNondeterministic true \
- --skipTimeCritical true \
- --skipBoost true \
- --skipGeo true
-
-Additional options `--ruby c:/tools/ruby25/bin/ruby` and `--rspec c:/tools/ruby25/bin/rspec`
-should be used only if Ruby is not in the *PATH*.
diff --git a/site/content/3.10/operations/security/audit-logging.md b/site/content/3.10/operations/security/audit-logging.md
deleted file mode 100644
index 651d1917da..0000000000
--- a/site/content/3.10/operations/security/audit-logging.md
+++ /dev/null
@@ -1,287 +0,0 @@
----
-title: Audit logging
-menuTitle: Audit logging
-weight: 20
-description: >-
- Audit logs capture interactions with the database system and allow you to
- check who accessed it, what actions were performed, and how the system
- responded
-pageToc:
- maxHeadlineLevel: 3
----
-{{< tag "ArangoDB Enterprise Edition" >}}
-
-{{< tip >}}
-A similar feature is also available in the
-[ArangoGraph Insights Platform](../../arangograph/security-and-access-control/_index.md#using-an-audit-log).
-{{< /tip >}}
-
-## Configuration
-
-To enable audit logging, set the `--audit.output` startup option to either a
-file path (`file://`) or a syslog server (`syslog://`).
-
-For information about the startup options for audit logging, see
-[ArangoDB Server Options](../../components/arangodb-server/options.md#audit).
-
-## Log format
-
-The general format of audit logs is as follows:
-
-```
-<time-stamp> | <server> | <topic> | <username> | <database> | <client-ip> | <authentication> | <text1> | <text2> | ...
-```
-
-- `time-stamp`: When the event occurred. The timezone is GMT. This allows you to
- easily match log entries from servers in different timezones.
-
-- `server`: The server name. You can specify a custom name on startup with the
- [`--audit.hostname`](../../components/arangodb-server/options.md#--audithostname)
- startup option. Otherwise, the default hostname is used.
-
-- `topic`: The log topic of the entry. A topic can be suppressed via the
- `--log.level` startup option or the REST API.
-
-- `username`: The (authenticated or unauthenticated) name supplied by the client.
- A dash (`-`) indicates that no name was given by the client.
-
-- `database`: The database that was accessed. Note that there are no
- database-crossing queries. Each access is restricted to one database in ArangoDB.
-
-- `client-ip`: The source network address of the request.
-
-- `authentication`: The method used to authenticate the user.
-
-- Details about the request in the additional fields.
- Any additional fields (e.g. `text1` and `text2`) are determined by the type
- of log message. Most messages include a status of `ok` or `failed`.
-
-## Events
-
-Unless otherwise noted, all events are logged to their respective topics at the
-`info` level. To suppress events from a given topic, set the topic to the `warn`
-level or higher. By default, each topic is set to the most verbose level
-at which events are logged (either `debug` or `info`), so that all events are
-logged.
-
-### Authentication
-
-#### Unknown authentication methods
-
-```
-2016-10-03 15:44:23 | server1 | audit-authentication | n/a | database1 | 127.0.0.1:61525 | n/a | unknown authentication method | /_api/version
-```
-
-This message occurs when a request contains an `Authorization` header with
-an unknown authentication method. Typically, only `basic` and `bearer` are
-accepted.
-
-#### Missing credentials
-
-```
-2016-10-03 15:39:49 | server1 | audit-authentication | n/a | database1 | 127.0.0.1:61498 | n/a | credentials missing | /_api/version
-```
-
-This message occurs when authentication is enabled and a request omits an
-`Authorization` header. Note that this may naturally occur when making an
-initial request to e.g. log in or load the web interface. For this reason, such
-low-priority events are logged at the `debug` level.
-
-#### Wrong credentials
-
-```
-2016-10-03 17:21:22 | server1 | audit-authentication | root | database1 | 127.0.0.1:64214 | http jwt | user 'root' wrong credentials | /_open/auth
-```
-
-This message occurs when a user makes an attempt to log in with incorrect
-credentials, or passes a JWT with invalid credentials.
-
-Note that the user given in the fourth field is the user that requested the login.
-In general, it may be unavailable:
-
-```
-2016-10-03 15:47:26 | server1 | audit-authentication | n/a | database1 | 127.0.0.1:61528 | http basic | credentials wrong | /_api/version
-```
-
-#### JWT login succeeded
-
-```
-2016-10-03 17:21:22 | server1 | audit-authentication | root | database1 | 127.0.0.1:64214 | http jwt | user 'root' authenticated | /_open/auth
-```
-
-The message occurs when a user successfully logs in and is given a JWT token
-for further use.
-
-Note that the user given in the fourth field is the user that requested the login.
-
-### Authorization
-
-#### User not authorized to access database
-
-```
-2016-10-03 16:20:52 | server1 | audit-authorization | user1 | database2 | 127.0.0.1:62262 | http basic | not authorized | /_api/version
-```
-
-This message occurs when a user attempts to access a database in a manner in
-which they have not been granted access.
-
-### Databases
-
-#### Database created
-
-```
-2016-10-04 15:33:25 | server1 | audit-database | user1 | database1 | 127.0.0.1:56920 | http basic | create database 'database1' | ok | /_api/database
-```
-
-This message occurs whenever a user attempts to create a database. If
-successful, the status is `ok`, otherwise `failed`.
-
-#### Database dropped
-
-```
-2016-10-04 15:33:25 | server1 | audit-database | user1 | database1 | 127.0.0.1:56920 | http basic | delete database 'database1' | ok | /_api/database
-```
-
-This message occurs whenever a user attempts to drop a database. If
-successful, the status is `ok`, otherwise `failed`.
-
-### Collections
-
-#### Collection created
-
-```
-2016-10-05 17:35:57 | server1 | audit-collection | user1 | database1 | 127.0.0.1:51294 | http basic | create collection 'collection1' | ok | /_api/collection
-```
-
-This message occurs whenever a user attempts to create a collection. If
-successful, the status is `ok`, otherwise `failed`.
-
-#### Collection truncated
-
-```
-2016-10-05 17:36:08 | server1 | audit-collection | user1 | database1 | 127.0.0.1:51294 | http basic | truncate collection 'collection1' | ok | /_api/collection/collection1/truncate
-```
-
-This message occurs whenever a user attempts to truncate a collection. If
-successful, the status is `ok`, otherwise `failed`.
-
-#### Collection dropped
-
-```
-2016-10-05 17:36:30 | server1 | audit-collection | user1 | database1 | 127.0.0.1:51294 | http basic | delete collection 'collection1' | ok | /_api/collection/collection1
-```
-
-This message occurs whenever a user attempts to drop a collection. If
-successful, the status is `ok`, otherwise `failed`.
-
-### Indexes
-
-#### Index created
-
-```
-2016-10-05 18:19:40 | server1 | audit-collection | user1 | database1 | 127.0.0.1:52467 | http basic | create index in 'collection1' | ok | {"fields":["a"],"sparse":false,"type":"persistent","unique":false} | /_api/index?collection=collection1
-```
-
-This message occurs whenever a user attempts to create an index. If
-successful, the status is `ok`, otherwise `failed`.
-
-#### Index dropped
-
-```
-2016-10-05 18:18:28 | server1 | audit-collection | user1 | database1 | 127.0.0.1:52464 | http basic | drop index 'collection1/44051' | ok | /_api/index/collection1/44051
-```
-
-This message occurs whenever a user attempts to drop an index. If
-successful, the status is `ok`, otherwise `failed`.
-
-### Documents
-
-If statistics are enabled, the system will periodically perform several document
-operations on a few system collections. These low-priority operations are logged
-to the `audit-document` topic at the `debug` level.
-
-#### Single document read
-
-```
-2016-10-04 12:27:55 | server1 | audit-document | user1 | database1 | 127.0.0.1:53699 | http basic | read document in 'collection1' | ok | /_api/document/collection1
-```
-
-This message occurs whenever a user attempts to read a document. If
-successful, the status is `ok`, otherwise `failed`.
-
-#### Single document created
-
-```
-2016-10-04 12:27:55 | server1 | audit-document | user1 | database1 | 127.0.0.1:53699 | http basic | create document in 'collection1' | ok | /_api/document/collection1
-```
-
-This message occurs whenever a user attempts to create a document. If
-successful, the status is `ok`, otherwise `failed`.
-
-#### Single document replaced
-
-```
-2016-10-04 12:28:08 | server1 | audit-document | user1 | database1 | 127.0.0.1:53699 | http basic | replace document 'collection1/21456' | ok | /_api/document/collection1/21456?ignoreRevs=false
-```
-
-This message occurs whenever a user attempts to replace a document. If
-successful, the status is `ok`, otherwise `failed`.
-
-#### Single document updated
-
-```
-2016-10-04 12:28:15 | server1 | audit-document | user1 | database1 | 127.0.0.1:53699 | http basic | modify document 'collection1/21456' | ok | /_api/document/collection1/21456?keepNull=true&ignoreRevs=false
-```
-
-This message occurs whenever a user attempts to update a document. If
-successful, the status is `ok`, otherwise `failed`.
-
-#### Single document deleted
-
-```
-2016-10-04 12:28:23 | server1 | audit-document | user1 | database1 | 127.0.0.1:53699 | http basic | delete document 'collection1/21456' | ok | /_api/document/collection1/21456?ignoreRevs=false
-```
-
-This message occurs whenever a user attempts to delete a document. If
-successful, the status is `ok`, otherwise `failed`.
-
-### Queries
-
-```
-2016-10-06 12:12:10 | server1 | audit-document | user1 | database1 | 127.0.0.1:54232 | http basic | query document | ok | for i in collection1 return i | /_api/cursor
-```
-
-This message occurs whenever a user attempts to execute a query. If
-successful, the status is `ok`, otherwise `failed`.
-
-### Hot Backups
-
-There are three operations related to Hot Backups that are put into the
-audit log.
-
-#### Hot Backup created
-
-```
-2020-01-21 15:29:06 | tux | audit-hotbackup | root | n/a | (internal) | n/a | Hotbackup taken with ID 2020-01-21T15:29:06Z_a98422de-03ab-4b94-8ed9-e084bfd4bae1, result: 0
-```
-
-This message occurs whenever a user attempts to create a Hot Backup.
-If successful, the status is `0`, otherwise some numerical error code.
-
-#### Hot Backup restored
-
-```
-2020-01-21 15:29:42 | tux | audit-hotbackup | root | n/a | (internal) | n/a | Hotbackup restored with ID 2020-01-21T15.29.06Z_a98422de-03ab-4b94-8ed9-e084bfd4bae1, result: 0
-```
-
-This message occurs whenever a user attempts to restore from a Hot Backup.
-If successful, the status is `0`, otherwise some numerical error code.
-
-#### Hot Backup deleted
-
-```
-2020-01-21 15:32:37 | tux | audit-hotbackup | root | n/a | (internal) | n/a | Hotbackup deleted with ID 2020-01-21T15.32.27Z_cf1e3cb1-32c0-41d2-9a3f-528c9b43cbf9, result: 0
-```
-
-This message occurs whenever a user attempts to delete a Hot Backup.
-If successful, the status is `0`, otherwise some numerical error code.
diff --git a/site/content/3.10/operations/security/encryption-at-rest.md b/site/content/3.10/operations/security/encryption-at-rest.md
deleted file mode 100644
index 9821486e56..0000000000
--- a/site/content/3.10/operations/security/encryption-at-rest.md
+++ /dev/null
@@ -1,145 +0,0 @@
----
-title: Encryption at Rest
-menuTitle: Encryption at Rest
-weight: 15
-description: >-
- You can secure the physical storage media of an ArangoDB deployment by letting
- it encrypt the database directories
----
-{{< tag "ArangoDB Enterprise Edition" "ArangoGraph" >}}
-
-When you store sensitive data in your ArangoDB database, you want to protect
-that data under all circumstances. At runtime you will protect it with SSL
-transport encryption and strong authentication, but when the data is already
-on disk, you also need protection. That is where the Encryption feature comes
-in.
-
-The Encryption feature of ArangoDB will encrypt all data that ArangoDB is
-storing in your database before it is written to disk.
-
-The data is encrypted with AES-256-CTR, which is a strong encryption algorithm
-that is very suitable for multi-processor environments. This means that your
-data is safe, but your database is still fast, even under load.
-
-Hardware acceleration for encryption and decryption is automatically used if
-available. The required AES-NI instruction set (Advanced Encryption Standard
-New Instructions) is available on the majority of Intel and AMD processors from
-the last decade. The benefits over a software-only implementation are better
-performance and resistance to side-channel attacks.
-
-The encryption feature is supported by all ArangoDB deployment modes.
-
-{{< info >}}
-The ArangoGraph Insights Platform has encryption at rest as well as in transit
-enabled by default, and it cannot be disabled. For more information, see the
-[ArangoGraph documentation](../../arangograph/_index.md).
-{{< /info >}}
-
-## Limitations
-
-The encryption feature has the following limitations:
-
-- Encrypting a single collection is not supported: all the databases are
- encrypted.
-- It is not possible to enable encryption at runtime: if you have existing
- data you will need to take a backup first, then enable encryption and
- start your server on an empty data-directory, and finally restore your
- backup.
-
-## Encryption keys
-
-The encryption feature of ArangoDB requires a single 32-byte key per server.
-It is recommended to use a different key for each server (when operating in a
-cluster configuration).
-
-{{< security >}}
-Make sure to protect the encryption keys! That means:
-
-- Do not write them to persistent disks of your server(s); always store them on
- an in-memory (`tmpfs`) filesystem.
-
-- Transport your keys safely to your server(s). There are various tools for
- managing secrets like this (e.g.
- [vaultproject.io](https://www.vaultproject.io/)).
-
-- Store a copy of your key offline in a safe place. If you lose your key, there
- is NO way to get your data back.
-{{< /security >}}
-
-## Configuration
-
-To activate encryption of your database, you need to supply an encryption key
-to the server.
-
-Make sure to pass this option the very first time you start your database.
-You cannot encrypt a database that already exists.
-
-### Encryption key stored in file
-
-Pass the following option to `arangod`:
-
-```
-$ arangod --rocksdb.encryption-keyfile=/mytmpfs/mySecretKey ...
-```
-
-The file `/mytmpfs/mySecretKey` must contain the encryption key. This
-file must be secured, so that only `arangod` can access it. You should
-also ensure that in case someone steals the hardware, they will not be
-able to read the file. For example, by encrypting `/mytmpfs` or
-creating an in-memory file-system under `/mytmpfs`.
-
-### Encryption key generated by a program
-
-Pass the following option to `arangod`:
-
-```
-$ arangod --rocksdb.encryption-key-generator=path-to-my-generator ...
-```
-
-The program `path-to-my-generator` must output the encryption key on its
-standard output and exit.
-
-### Kubernetes encryption secret
-
-If you use _kube-arangodb_ then use the `spec.rocksdb.encryption.keySecretName`
-setting to specify the name of the Kubernetes secret to be used for encryption.
-See the [kube-arangodb documentation](https://arangodb.github.io/kube-arangodb/docs/api/ArangoDeployment.V1#specrocksdbencryptionkeysecretname).
-
-## Creating keys
-
-The encryption keyfile must contain 32 bytes of random data.
-
-You can create it with a command like this:
-
-```
-dd if=/dev/random bs=1 count=32 of=yourSecretKeyFile
-```
-
-For security, it is best to create these keys offline (away from your database
-servers) and directly store them in your secret management tool.
-
-## Rotating encryption keys
-
-ArangoDB supports rotating the user-supplied encryption at rest key.
-This is implemented via key indirection. At initial startup, the first found
-user-supplied key is used as the internal master key. Alternatively, the internal
-master key can be generated from random characters if the startup option
-`--rocksdb.encryption-gen-internal-key` is set to `true`.
-
-It is possible to change the user-supplied encryption at rest key via the
-[HTTP API](../../develop/http-api/security.md#encryption-at-rest). This API
-is disabled by default, but can be turned on by setting the startup option
-`--rocksdb.encryption-key-rotation` to `true`.
-
-To enable smooth rollout of new keys you can use the new option
-`--rocksdb.encryption-keyfolder` to provide a set of secrets.
-_arangod_ will then store the master key encrypted with the provided secrets.
-
-```
-$ arangod --rocksdb.encryption-keyfolder=/mytmpfs/mySecrets ...
-```
-
-To start an arangod instance, only one of the secrets needs to be correct;
-this guards against service interruptions during the rotation process.
-
-Please be aware that the encryption at rest key rotation is an **experimental**
-feature, and its APIs and behavior are still subject to change.
diff --git a/site/content/3.10/operations/upgrading/os-specific-information/macos.md b/site/content/3.10/operations/upgrading/os-specific-information/macos.md
deleted file mode 100644
index 3e010631cc..0000000000
--- a/site/content/3.10/operations/upgrading/os-specific-information/macos.md
+++ /dev/null
@@ -1,35 +0,0 @@
----
-title: Upgrading on macOS
-menuTitle: macOS
-weight: 10
-description: >-
- How to upgrade an ArangoDB single server installation installed via a DMG package
-aliases:
- - upgrading-on-macos
----
-If you installed ArangoDB on macOS using a _DMG_ package for a single server
-installation, follow the instructions below to upgrade the deployment.
-
-{{< warning >}}
-It is highly recommended to take a backup of your data using
-[_arangodump_](../../../components/tools/arangodump/_index.md) before
-upgrading ArangoDB.
-{{< /warning >}}
-
-## Upgrading via Package
-
-[Download](https://www.arangodb.com/download/) the latest
-ArangoDB macOS package and install it as usual by mounting the `.dmg` file.
-Drag and drop the `ArangoDB3-CLI` (Community Edition) or the `ArangoDB3e-CLI`
-(Enterprise Edition) file onto the shown `Applications` folder.
-You are asked if you want to replace the old file with the newer one.
-
-Select `Replace` to install the new ArangoDB version.
-
-## Upgrading more complex environments
-
-The procedure described above is a first step to upgrade more complex
-deployments such as
-[Cluster](../../../deploy/cluster/_index.md)
-and [Active Failover](../../../deploy/active-failover/_index.md).
diff --git a/site/content/3.10/operations/upgrading/os-specific-information/windows.md b/site/content/3.10/operations/upgrading/os-specific-information/windows.md
deleted file mode 100644
index b43686c6e1..0000000000
--- a/site/content/3.10/operations/upgrading/os-specific-information/windows.md
+++ /dev/null
@@ -1,124 +0,0 @@
----
-title: Upgrading on Windows
-menuTitle: Windows
-weight: 15
-description: >-
- How to upgrade a single server installation using an installer or zip archive
-aliases:
- - upgrading-on-windows
----
-As there are different ways to install ArangoDB on Windows, the upgrade
-method depends on the installation method that was used.
-
-In general, you need to:
-
-- Install (or unpack) the new ArangoDB binaries on the system
-- Upgrade the current database (or perform a restore)
-- Optionally (but recommended, unless there are specific reasons not to),
-  remove the old binaries from the system to keep it clean
-
-Some of the above steps may be done automatically, depending on your
-specific situation.
-
-{{< warning >}}
-It is highly recommended to take a backup of your data using
-[_arangodump_](../../../components/tools/arangodump/_index.md) before
-upgrading ArangoDB.
-{{< /warning >}}
-
-## Upgrading via the Installer
-
-If you have installed ArangoDB via the _Installer_, follow these steps to upgrade:
-
-- Download the new _Installer_ and run it.
-- The _Installer_ will ask if you want to update your current database: select
- the option "_Automatically update existing ArangoDB database_" so that the database
- files will be upgraded.
-
-{{< info >}}
-Upgrading via the Installer, when the old data is kept, preserves your
-password and choice of storage engine.
-{{< /info >}}
-
-- After installing the new package, you will have both packages installed.
-- You can uninstall the old one manually (make a copy of your old configuration
-  file first).
-
-{{< danger >}}
-When uninstalling the old package, please make sure the option
-"_Delete databases with uninstallation_" is **not** checked.
-{{< /danger >}}
-
-{{< danger >}}
-When upgrading, the Windows Installer does not use the old configuration file
-for the installed _Single Instance_ but a new (default) one ([Issue #3773](https://github.com/arangodb/arangodb/issues/3773)).
-To use the old configuration, you currently need to:
-- Stop the server
-- Replace the new with the old configuration file
-- Restart the server
-{{< /danger >}}
-
-## Manual upgrade of a 'ZIP archive' installation
-
-There are two ways to upgrade a _Single Instance_ that has been started
-from a _ZIP_ package:
-
-- In-Place upgrade
-- Logical upgrade
-
-### In-Place upgrade
-
-{{< info >}}
-This method is easier if:
-- You are using a data directory which is located outside of the directory
- created when extracting the _ZIP_ archive (data directory can be set via
- the server option *--database.directory*)
-- You are using a configuration file which is located outside of the directory
- created when extracting the _ZIP_ archive (a configuration file can be passed via
- the server option *--configuration*)
-{{< /info >}}
-
-Assuming that:
-- Your data directory is _directory1_ (e.g. "D:\arango\data")
-- Your configuration file is _file_ (e.g. "D:\arango\conf\arangod.conf")
-- Your old binaries are on _directory2_ (e.g. "C:\tools\arangodb-3.4.0")
-
-to perform the upgrade of a _Single Instance_:
-
-1. Download and extract the new _ZIP_ package into a new directory (e.g.
- _directory3_ "C:\tools\arangodb-3.4.1")
-2. Stop your old server
-3. Start the server again (this time using the binary located in _directory3_),
-   passing (see the sketch after this list):
- - _directory1_ as *--database.directory*,
- - _file_ as *--configuration*
- - the option *--database.auto-upgrade* (so that the old data directory will
- be upgraded)
-4. When the previous step is finished, the server stops automatically; you
-   can now start your server again as done in the previous step, but without
-   passing the *--database.auto-upgrade* option
-5. Optionally remove the old server package by dropping the corresponding
- directory when you are confident enough that all is working fine.
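-
-A hedged sketch of steps 3 and 4, using the example paths from above (the
-binary location inside the extracted archive is an assumption):
-
-```
-REM step 3: upgrade the existing data directory with the new binary
-C:\tools\arangodb-3.4.1\usr\bin\arangod.exe ^
-  --database.directory D:\arango\data ^
-  --configuration D:\arango\conf\arangod.conf ^
-  --database.auto-upgrade
-REM step 4: start again with the same options, but without --database.auto-upgrade
-```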
-
-### Logical upgrade
-
-To perform the upgrade of a _Single Instance_:
-
-1. Download the new package and extract it on a different location than the
- previous one
-2. Stop writes to the old server (e.g. block incoming connections)
-3. Take a backup of the data using _arangodump_ (see the sketch after this list)
-4. Stop the old server
-5. Optionally (depending on whether you modified the default configuration),
-   copy the old ArangoDB configuration file to the new server (or just edit
-   the new configuration file)
-6. Start the new server (with a fresh data directory, by default it will be
- inside the directory created when extracting the _ZIP_ archive)
-7. Restore the backup into the new server using _arangorestore_
-8. Re-enable the writes (e.g. allow incoming connections again)
-9. Optionally remove the old server package by dropping the corresponding
- directory when you are confident enough that all is working fine.
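-
-A hedged sketch of steps 3 and 7 (endpoints and directories are placeholders):
-
-```
-REM step 3: dump the data from the old server
-arangodump --server.endpoint tcp://127.0.0.1:8529 --output-directory D:\arango\backup
-REM step 7: restore the dump into the new server
-arangorestore --server.endpoint tcp://127.0.0.1:8529 --input-directory D:\arango\backup
-```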
diff --git a/site/content/3.10/release-notes/version-3.6/whats-new-in-3-6.md b/site/content/3.10/release-notes/version-3.6/whats-new-in-3-6.md
deleted file mode 100644
index 02b89c84bb..0000000000
--- a/site/content/3.10/release-notes/version-3.6/whats-new-in-3-6.md
+++ /dev/null
@@ -1,872 +0,0 @@
----
-title: Features and Improvements in ArangoDB 3.6
-menuTitle: What's New in 3.6
-weight: 5
-description: >-
- Multiple performance improvements to AQL queries, dynamic search expressions,
- a new cluster deployment mode
----
-The following list shows in detail which features have been added or improved in
-ArangoDB 3.6. ArangoDB 3.6 also contains several bug fixes that are not listed
-here.
-
-## AQL
-
-### Early pruning of non-matching documents
-
-Previously, AQL queries with filter conditions that could not be satisfied by
-any index required all documents to be copied from the storage engine into the
-AQL scope in order to be fed into the filter.
-
-An example query execution plan for such a query from ArangoDB 3.5 looks like this:
-
-```aql
-Query String (75 chars, cacheable: true):
- FOR doc IN test FILTER doc.value1 > 9 && doc.value2 == 'test854' RETURN doc
-
-Execution plan:
- Id NodeType Est. Comment
- 1 SingletonNode 1 * ROOT
- 2 EnumerateCollectionNode 100000 - FOR doc IN test /* full collection scan */
- 3 CalculationNode 100000 - LET #1 = ((doc.`value1` > 9) && (doc.`value2` == "test854"))
- 4 FilterNode 100000 - FILTER #1
- 5 ReturnNode 100000 - RETURN doc
-```
-
-ArangoDB 3.6 adds an optimizer rule `move-filters-into-enumerate` which allows
-applying the filter condition directly while scanning the documents, so copying
-of any documents that don't match the filter condition can be avoided.
-
-The query execution plan for the above query from 3.6 with that optimizer rule
-applied looks as follows:
-
-```aql
-Query String (75 chars, cacheable: true):
- FOR doc IN test FILTER doc.value1 > 9 && doc.value2 == 'test854' RETURN doc
-
-Execution plan:
- Id NodeType Est. Comment
- 1 SingletonNode 1 * ROOT
- 2 EnumerateCollectionNode 100000 - FOR doc IN test /* full collection scan */ FILTER ((doc.`value1` > 9) && (doc.`value2` == "test854")) /* early pruning */
- 5 ReturnNode 100000 - RETURN doc
-```
-
-Note that in this execution plan the scanning and filtering are combined in one
-node, so the copying of all non-matching documents from the storage engine into
-the AQL scope is completely avoided.
-
-This optimization will be beneficial if the filter condition is very selective
-and will filter out many documents, and if documents are large. In this case a
-lot of copying will be avoided.
-
-The optimizer rule also works if an index is used, but there are also filter
-conditions that cannot be satisfied by the index alone. Here is a 3.5 query
-execution plan for a query using a filter on an indexed value plus a filter on
-a non-indexed value:
-
-```aql
-Query String (101 chars, cacheable: true):
- FOR doc IN test FILTER doc.value1 > 10000 && doc.value1 < 30000 && doc.value2 == 'test854' RETURN
- doc
-
-Execution plan:
- Id NodeType Est. Comment
- 1 SingletonNode 1 * ROOT
- 6 IndexNode 26666 - FOR doc IN test /* hash index scan */
- 7 CalculationNode 26666 - LET #1 = (doc.`value2` == "test854")
- 4 FilterNode 26666 - FILTER #1
- 5 ReturnNode 26666 - RETURN doc
-
-Indexes used:
- By Name Type Collection Unique Sparse Selectivity Fields Ranges
- 6 idx_1649353982658740224 hash test false false 100.00 % [ `value1` ] ((doc.`value1` > 10000) && (doc.`value1` < 30000))
-```
-
-In 3.6, the same query will be executed using a combined index scan & filtering
-approach, again avoiding any copies of non-matching documents:
-
-```aql
-Query String (101 chars, cacheable: true):
- FOR doc IN test FILTER doc.value1 > 10000 && doc.value1 < 30000 && doc.value2 == 'test854' RETURN
- doc
-
-Execution plan:
- Id NodeType Est. Comment
- 1 SingletonNode 1 * ROOT
- 6 IndexNode 26666 - FOR doc IN test /* hash index scan */ FILTER (doc.`value2` == "test854") /* early pruning */
- 5 ReturnNode 26666 - RETURN doc
-
-Indexes used:
- By Name Type Collection Unique Sparse Selectivity Fields Ranges
- 6 idx_1649353982658740224 hash test false false 100.00 % [ `value1` ] ((doc.`value1` > 10000) && (doc.`value1` < 30000))
-```
-
-### Subquery Splicing Optimization
-
-In earlier versions of ArangoDB, on every execution of a subquery the following
-happened for each input row:
-
-- The subquery tree issued one initializeCursor cascade through all nodes
-- The subquery node pulled rows until the subquery node was empty for this input
-
-On subqueries with many results per input row (10000 or more) the above steps
-did not contribute significantly to query execution time. On subqueries with
-few results per input, there was a serious performance impact.
-
-Subquery splicing inlines the execution of subqueries using an optimizer rule
-called `splice-subqueries`. Only suitable queries can be spliced.
-A subquery becomes unsuitable if it contains a `LIMIT` node or a
-`COLLECT WITH COUNT INTO …` construct (but not due to a
-`COLLECT var = WITH COUNT INTO …`). A subquery *also* becomes
-unsuitable if it is contained in a (sub)query containing unsuitable parts
-*after* the subquery.
-
-Consider the following query to illustrate the difference.
-
-```aql
-FOR x IN c1
- LET firstJoin = (
- FOR y IN c2
- FILTER y._id == x.c2_id
- LIMIT 1
- RETURN y
- )
- LET secondJoin = (
- FOR z IN c3
- FILTER z.value == x.value
- RETURN z
- )
- RETURN { x, firstJoin, secondJoin }
-```
-
-The execution plan **without** subquery splicing:
-
-```aql
-Execution plan:
- Id NodeType Est. Comment
- 1 SingletonNode 1 * ROOT
- 2 EnumerateCollectionNode 0 - FOR x IN c1 /* full collection scan */
- 9 SubqueryNode 0 - LET firstJoin = ... /* subquery */
- 3 SingletonNode 1 * ROOT
- 18 IndexNode 0 - FOR y IN c2 /* primary index scan */
- 7 LimitNode 0 - LIMIT 0, 1
- 8 ReturnNode 0 - RETURN y
- 15 SubqueryNode 0 - LET secondJoin = ... /* subquery */
- 10 SingletonNode 1 * ROOT
- 11 EnumerateCollectionNode 0 - FOR z IN c3 /* full collection scan */
- 12 CalculationNode 0 - LET #11 = (z.`value` == x.`value`) /* simple expression */ /* collections used: z : c3, x : c1 */
- 13 FilterNode 0 - FILTER #11
- 14 ReturnNode 0 - RETURN z
- 16 CalculationNode 0 - LET #13 = { "x" : x, "firstJoin" : firstJoin, "secondJoin" : secondJoin } /* simple expression */ /* collections used: x : c1 */
- 17 ReturnNode 0 - RETURN #13
-
-Optimization rules applied:
- Id RuleName
- 1 use-indexes
- 2 remove-filter-covered-by-index
- 3 remove-unnecessary-calculations-2
-```
-
-Note in particular the `SubqueryNode`s, followed by a `SingletonNode` in
-both cases.
-
-When using the optimizer rule `splice-subqueries` the plan is as follows:
-
-```aql
-Execution plan:
- Id NodeType Est. Comment
- 1 SingletonNode 1 * ROOT
- 2 EnumerateCollectionNode 0 - FOR x IN c1 /* full collection scan */
- 9 SubqueryNode 0 - LET firstJoin = ... /* subquery */
- 3 SingletonNode 1 * ROOT
- 18 IndexNode 0 - FOR y IN c2 /* primary index scan */
- 7 LimitNode 0 - LIMIT 0, 1
- 8 ReturnNode 0 - RETURN y
- 19 SubqueryStartNode 0 - LET secondJoin = ( /* subquery begin */
- 11 EnumerateCollectionNode 0 - FOR z IN c3 /* full collection scan */
- 12 CalculationNode 0 - LET #11 = (z.`value` == x.`value`) /* simple expression */ /* collections used: z : c3, x : c1 */
- 13 FilterNode 0 - FILTER #11
- 20 SubqueryEndNode 0 - ) /* subquery end */
- 16 CalculationNode 0 - LET #13 = { "x" : x, "firstJoin" : firstJoin, "secondJoin" : secondJoin } /* simple expression */ /* collections used: x : c1 */
- 17 ReturnNode 0 - RETURN #13
-
-Optimization rules applied:
- Id RuleName
- 1 use-indexes
- 2 remove-filter-covered-by-index
- 3 remove-unnecessary-calculations-2
- 4 splice-subqueries
-```
-
-The first subquery is unsuitable for the optimization because it contains
-a `LIMIT` statement and is therefore not spliced. The second subquery is
-suitable and hence is spliced – which one can tell from the different node
-type `SubqueryStartNode` (beginning of spliced subquery). Note how it is
-not followed by a `SingletonNode`. The end of the spliced subquery is
-marked by a `SubqueryEndNode`.
-
-### Late document materialization (RocksDB)
-
-With the _late document materialization_ optimization ArangoDB tries to
-read only documents that are absolutely necessary to compute the query result,
-reducing the load on the storage engine. This is only supported for the RocksDB
-storage engine.
-
-In 3.6 the optimization can only be applied to queries retrieving data from a
-collection or an ArangoSearch View and that contain a `SORT`+`LIMIT`
-combination.
-
-For the collection case the optimization is possible if and only if:
-- there is an index of type `primary`, `hash`, `skiplist`, `persistent`
- or `edge` picked by the optimizer
-- all attribute accesses can be covered by indexed attributes
-
-```aql
-// Given we have a persistent index on attributes [ "foo", "bar", "baz" ]
-FOR d IN myCollection
-  FILTER d.foo == "someValue" // persistent index will be picked to optimize filtering
- SORT d.baz DESC // field "baz" will be read from index
- LIMIT 100 // only 100 documents will be materialized
- RETURN d
-```
-
-For the ArangoSearch View case the optimization is possible if and only if:
-- all attribute accesses can be covered by attributes stored in the View index
- (e.g. using `primarySort`)
-- the primary sort order optimization is not applied, because it voids the need
- for late document materialization
-
-```aql
-// Given primarySort is {"field": "foo", "asc": false}, i.e.
-// field "foo" covered by index but sort optimization not applied
-// (sort order of primarySort and SORT operation mismatch)
-FOR d IN myView
- SORT d.foo
- LIMIT 100 // only 100 documents will be materialized
- RETURN d
-```
-
-```aql
-// Given primarySort contains field "foo"
-FOR d IN myView
- SEARCH d.foo == "someValue"
- SORT BM25(d) DESC // BM25(d) will be evaluated by the View node above
- LIMIT 100 // only 100 documents will be materialized
- RETURN d
-```
-
-```aql
-// Given primarySort contains fields "foo" and "bar", and "bar" is not the
-// first field or at least not sorted by in descending order, i.e. the sort
-// optimization cannot be applied but late document materialization can be
-FOR d IN myView
- SEARCH d.foo == "someValue"
- SORT d.bar DESC // field "bar" will be read from the View
- LIMIT 100 // only 100 documents will be materialized
- RETURN d
-```
-
-The respective optimizer rules are called `late-document-materialization`
-(collection source) and `late-document-materialization-arangosearch`
-(ArangoSearch View source). If applied, you will find `MaterializeNode`s
-in [execution plans](../../aql/execution-and-performance/query-optimization.md#execution-nodes).
-
-### Parallelization of cluster AQL queries
-
-ArangoDB 3.6 can parallelize work in many cluster AQL queries when there are
-multiple DB-Servers involved. The parallelization is done in the
-*GatherNode*, which then can send parallel cluster-internal requests to the
-DB-Servers attached. The DB-Servers can then work fully in parallel
-for the different shards involved.
-
-When parallelization is used, one or multiple *GatherNode*s in a query's
-execution plan will be tagged with `parallel` as follows:
-
-```aql
- Id NodeType Site Est. Comment
- 1 SingletonNode DBS 1 * ROOT
- 2 EnumerateCollectionNode DBS 1000000 - FOR doc IN test /* full collection scan, 5 shard(s) */
- 6 RemoteNode COOR 1000000 - REMOTE
- 7 GatherNode COOR 1000000 - GATHER /* parallel */
- 3 ReturnNode COOR 1000000 - RETURN doc
-```
-
-Parallelization is currently restricted to certain types and parts of queries.
-*GatherNode*s will go into parallel mode only if the DB-Server query part
-above it (in terms of query execution plan layout) is a terminal part of the
-query. To trigger the optimization, there must not be other nodes of type
-*ScatterNode*, *GatherNode* or *DistributeNode* present in the query.
-
-Please note that the parallelization of AQL execution may lead to a different
-resource usage pattern for eligible AQL queries in the cluster. In isolation,
-queries are expected to complete faster with parallelization than when executing
-their work serially on all involved DB-Servers. However, working on
-multiple DB-Servers in parallel may also mean that more work than before
-is happening at the very same time. If this is not desired because of resource
-scarcity, there are options to control the parallelization:
-
-The startup option `--query.parallelize-gather-writes` can be used to control
-whether eligible write operation parts will be parallelized. This option
-defaults to `true`, meaning that eligible write operations are also parallelized
-by default. This can be turned off so that potential I/O overuse can be avoided
-for write operations when used together with a high replication factor.
-
-Additionally, the startup option `--query.optimizer-rules` can be used to
-globally toggle the usage of certain optimizer rules for all queries.
-By default, all optimizations are turned on. However, specific optimizations
-can be turned off using the option.
-
-For example, to turn off the parallelization entirely (including parallel
-gather writes), one can use the following configuration:
-
-```
---query.optimizer-rules "-parallelize-gather"
-```
-
-This toggle works for any other non-mandatory optimizer rules as well.
-To specify multiple optimizer rules, the option can be used multiple times, e.g.
-
-```
---query.optimizer-rules "-parallelize-gather" --query.optimizer-rules "-splice-subqueries"
-```
-
-You can still override which optimizer rules to use (or not use) on a
-per-query basis; `--query.optimizer-rules` merely defines a default. However,
-`--query.parallelize-gather-writes false` turns off parallel gather writes
-completely, and it cannot be re-enabled for individual queries.
-
-### Optimizations for simple UPDATE and REPLACE queries
-
-Cluster query execution plans for simple `UPDATE` and `REPLACE` queries that
-modify multiple documents and do not use `LIMIT` are now more efficient as
-several steps were removed. The existing optimizer rule
-`undistribute-remove-after-enum-coll` has been extended to cover these cases
-too, in case the collection is sharded by `_key` and the `UPDATE`/`REPLACE`
-operation is using the full document or the `_key` attribute to find it.
-
-For example, a query such as:
-
-```aql
-FOR doc IN test UPDATE doc WITH { updated: true } IN test
-```
-
-… is executed as follows in 3.5:
-
-```aql
- Id NodeType Site Est. Comment
- 1 SingletonNode DBS 1 * ROOT
- 3 CalculationNode DBS 1 - LET #3 = { "updated" : true }
- 2 EnumerateCollectionNode DBS 1000000 - FOR doc IN test /* full collection scan, 5 shard(s) */
- 11 RemoteNode COOR 1000000 - REMOTE
- 12 GatherNode COOR 1000000 - GATHER
- 5 DistributeNode COOR 1000000 - DISTRIBUTE /* create keys: false, variable: doc */
- 6 RemoteNode DBS 1000000 - REMOTE
- 4 UpdateNode DBS 0 - UPDATE doc WITH #3 IN test
- 7 RemoteNode COOR 0 - REMOTE
- 8 GatherNode COOR 0 - GATHER
-```
-
-In 3.6 the execution plan is streamlined to just:
-
-```aql
- Id NodeType Site Est. Comment
- 1 SingletonNode DBS 1 * ROOT
- 3 CalculationNode DBS 1 - LET #3 = { "updated" : true }
- 13 IndexNode DBS 1000000 - FOR doc IN test /* primary index scan, index only, projections: `_key`, 5 shard(s) */
- 4 UpdateNode DBS 0 - UPDATE doc WITH #3 IN test
- 7 RemoteNode COOR 0 - REMOTE
- 8 GatherNode COOR 0 - GATHER /* parallel */
-```
-
-As can be seen above, the benefit of applying the optimization is that the extra
-communication between the Coordinator and DB-Server is removed. This will
-mean less cluster-internal traffic and thus can result in faster execution.
-As an extra benefit, the optimization will also make the affected queries
-eligible for parallel execution. It is only applied in cluster deployments.
-
-The optimization will also work when a filter is involved:
-
-```aql
-Query String (79 chars, cacheable: false):
- FOR doc IN test FILTER doc.value == 4 UPDATE doc WITH { updated: true } IN test
-
-Execution plan:
- Id NodeType Site Est. Comment
- 1 SingletonNode DBS 1 * ROOT
- 5 CalculationNode DBS 1 - LET #5 = { "updated" : true }
- 2 EnumerateCollectionNode DBS 1000000 - FOR doc IN test /* full collection scan, projections: `_key`, `value`, 5 shard(s) */
- 3 CalculationNode DBS 1000000 - LET #3 = (doc.`value` == 4)
- 4 FilterNode DBS 1000000 - FILTER #3
- 6 UpdateNode DBS 0 - UPDATE doc WITH #5 IN test
- 9 RemoteNode COOR 0 - REMOTE
- 10 GatherNode COOR 0 - GATHER
-```
-
-### AQL Date functionality
-
-AQL now enforces a valid date range for working with date/time values.
-The valid date ranges for any AQL date/time function are:
-
-- for string date/time values: `"0000-01-01T00:00:00.000Z"` (including) up to
- `"9999-12-31T23:59:59.999Z"` (including)
-- for numeric date/time values: -62167219200000 (including) up to 253402300799999
- (including). These values are the numeric equivalents of
- `"0000-01-01T00:00:00.000Z"` and `"9999-12-31T23:59:59.999Z"`.
-
-Any date/time values outside the given range that are passed into an AQL date
-function will make the function return `null` and trigger a warning in the
-query, which can optionally be escalated to an error and stop the query.
-
-Any date/time operations that produce date/time outside the valid ranges stated
-above will make the function return `null` and trigger a warning too.
-An example of this is:
-
-```aql
-DATE_SUBTRACT("2018-08-22T10:49:00+02:00", 100000, "years")
-```
-
-The performance of AQL date operations that work on
-[date strings](../../aql/functions/date.md) has been improved
-compared to previous versions.
-
-Finally, ArangoDB 3.6 provides a new [AQL function](../../aql/functions/date.md#date_round)
-`DATE_ROUND()` to bin a date/time into a set of equal-distance buckets.
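-
-For example, to round a date/time down to a 15-minute bucket (a sketch based
-on the linked function documentation):
-
-```aql
-DATE_ROUND("2018-08-22T10:49:00.000Z", 15, "minutes")
-// "2018-08-22T10:45:00.000Z"
-```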
-
-### Miscellaneous AQL changes
-
-In addition, ArangoDB 3.6 provides the following new AQL functionality:
-
-- a function `GEO_AREA()` for [area calculations](../../aql/functions/geo.md#geo_area)
- (also added to v3.5.1)
-
-- a [`maxRuntime` query option](../../aql/how-to-invoke-aql/with-arangosh.md#maxruntime)
-  to restrict the execution to a given time in seconds (also added to v3.5.4).
-  Also see [HTTP interfaces for AQL queries](../../develop/http-api/queries/aql-queries.md#create-a-cursor)
-  and the sketch after this list.
-
-- a startup option `--query.optimizer-rules` to turn certain AQL query optimizer
- rules off (or on) by default. This can be used to turn off certain optimizations
- that would otherwise lead to undesired changes in server resource usage patterns.
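-
-A hedged sketch of passing `maxRuntime` through the HTTP cursor API (endpoint
-and credentials are placeholders):
-
-```
-curl -X POST --user root: http://localhost:8529/_api/cursor \
-  -d '{"query": "FOR doc IN test RETURN doc", "options": {"maxRuntime": 5.0}}'
-```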
-
-## ArangoSearch
-
-### Analyzers
-
-- Added UTF-8 support and the ability to mark the beginning/end of the sequence
-  to the [`ngram` Analyzer type](../../index-and-search/analyzers.md#ngram)
-  (see the sketch after this list).
-
- The following optional properties can be provided for an `ngram` Analyzer
- definition:
-
-  - `startMarker` : `<string>`, default: `""`\
-    this value will be prepended to _n_-grams at the beginning of the input sequence
-
-  - `endMarker` : `<string>`, default: `""`\
-    this value will be appended to _n_-grams at the end of the input sequence
-
-  - `streamType` : `"binary"|"utf8"`, default: `"binary"`\
-    type of the input stream (support for UTF-8 is new)
-
-- Added _edge n-gram_ support to the [`text` Analyzer type](../../index-and-search/analyzers.md#text).
- The input gets tokenized as usual, but then _n_-grams are generated from each
- token. UTF-8 encoding is assumed (whereas the `ngram` Analyzer has a
- configurable stream type and defaults to binary).
-
- The following optional properties can be provided for a `text`
- Analyzer definition:
-
- - `edgeNgram` (object, _optional_):
- - `min` (number, _optional_): minimal _n_-gram length
- - `max` (number, _optional_): maximal _n_-gram length
- - `preserveOriginal` (boolean, _optional_): include the original token
- if its length is less than *min* or greater than *max*
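-
-A hedged sketch of creating an `ngram` Analyzer with the new marker and stream
-properties via the Analyzer HTTP API (name, endpoint, and credentials are
-placeholders):
-
-```
-curl -X POST --user root: http://localhost:8529/_api/analyzer \
-  -d '{"name": "bigram", "type": "ngram", "properties": {"min": 2, "max": 2, "preserveOriginal": false, "streamType": "utf8", "startMarker": "^", "endMarker": "$"}}'
-```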
-
-### Dynamic search expressions with arrays
-
-ArangoSearch now accepts [SEARCH expressions](../../aql/high-level-operations/search.md#syntax)
-with array comparison operators in the form of:
-
-```
-<array> [ ALL|ANY|NONE ] [ <=|<|==|!=|>|>=|IN ] doc.<attribute>
-```
-
-i.e. the left-hand side operand is always an array, which can be dynamic.
-
-```aql
-LET tokens = TOKENS("some input", "text_en") // ["some", "input"]
-FOR doc IN myView SEARCH tokens ALL IN doc.title RETURN doc // dynamic conjunction
-FOR doc IN myView SEARCH tokens ANY IN doc.title RETURN doc // dynamic disjunction
-FOR doc IN myView SEARCH tokens NONE IN doc.title RETURN doc // dynamic negation
-FOR doc IN myView SEARCH tokens ALL > doc.title RETURN doc // dynamic conjunction with comparison
-FOR doc IN myView SEARCH tokens ANY <= doc.title RETURN doc // dynamic disjunction with comparison
-```
-
-In addition, both the `TOKENS()` and the `PHRASE()` functions were
-extended with array support for convenience.
-
-[TOKENS()](../../aql/functions/string.md#tokens) accepts recursive arrays of
-strings as the first argument:
-
-```aql
-TOKENS("quick brown fox", "text_en") // [ "quick", "brown", "fox" ]
-TOKENS(["quick brown", "fox"], "text_en") // [ ["quick", "brown"], ["fox"] ]
-TOKENS(["quick brown", ["fox"]], "text_en") // [ ["quick", "brown"], [["fox"]] ]
-```
-
-In most cases you will want to flatten the resulting array for further usage,
-because nested arrays are not accepted in `SEARCH` statements such as
-`<array> ALL IN doc.<attribute>`:
-
-```aql
-LET tokens = TOKENS(["quick brown", ["fox"]], "text_en") // [ ["quick", "brown"], [["fox"]] ]
-LET tokens_flat = FLATTEN(tokens, 2) // [ "quick", "brown", "fox" ]
-FOR doc IN myView SEARCH ANALYZER(tokens_flat ALL IN doc.title, "text_en") RETURN doc
-```
-
-[PHRASE()](../../aql/functions/arangosearch.md#phrase) accepts an array as the
-second argument:
-
-```aql
-FOR doc IN myView SEARCH PHRASE(doc.title, ["quick brown fox"], "text_en") RETURN doc
-FOR doc IN myView SEARCH PHRASE(doc.title, ["quick", "brown", "fox"], "text_en") RETURN doc
-
-LET tokens = TOKENS("quick brown fox", "text_en") // ["quick", "brown", "fox"]
-FOR doc IN myView SEARCH PHRASE(doc.title, tokens, "text_en") RETURN doc
-```
-
-It is equivalent to the more cumbersome and static form:
-
-```aql
-FOR doc IN myView SEARCH PHRASE(doc.title, "quick", 0, "brown", 0, "fox", "text_en") RETURN doc
-```
-
-You can optionally specify the number of _skipTokens_ in the array form before
-every string element:
-
-```aql
-FOR doc IN myView SEARCH PHRASE(doc.title, ["quick", 1, "fox", "jumps"], "text_en") RETURN doc
-```
-
-It is the same as the following:
-
-```aql
-FOR doc IN myView SEARCH PHRASE(doc.title, "quick", 1, "fox", 0, "jumps", "text_en") RETURN doc
-```
-
-### SmartJoins and Views
-
-ArangoSearch Views are now eligible for [SmartJoins](../../develop/smartjoins.md) in AQL,
-provided that their underlying collections are eligible too.
-
-All collections forming the View must be sharded equally. The other join
-operand can be a collection or another View.
-
-## OneShard
-
-{{< tag "ArangoDB Enterprise Edition" "ArangoGraph" >}}
-
-Not all use cases require horizontal scalability. In such cases, a OneShard
-deployment offers a practicable solution that enables significant performance
-improvements by massively reducing cluster-internal communication.
-
-A database created with OneShard enabled is limited to a single DB-Server node
-but still replicated synchronously to ensure resilience. This configuration
-allows running transactions with ACID guarantees on shard leaders.
-
-This setup is highly recommended for most graph use cases and join-heavy
-queries.
-
-Unlike in a (flexibly) sharded cluster, where the Coordinator distributes access
-to shards across different DB-Server nodes and collects and processes partial
-results, the Coordinator in a OneShard setup moves the query execution directly
-to the respective DB-Server for local query execution and receives only the
-final result. This can drastically reduce resource consumption and
-communication effort for the Coordinator.
-
-An entire cluster, selected databases or selected collections can be made
-eligible for the OneShard optimization. See [OneShard cluster architecture](../../deploy/oneshard.md)
-for details and usage examples.
-
-## HTTP API
-
-The following APIs have been expanded / changed:
-
-- [Database creation API](../../develop/http-api/databases.md#create-a-database),\
- HTTP route `POST /_api/database`
-
- The database creation API now handles the `replicationFactor`, `writeConcern`
-  and `sharding` attributes. All these attributes are optional, and only
-  meaningful in a cluster (see the sketch after this list).
-
- The values provided for the attributes `replicationFactor` and `writeConcern`
- will be used as default values when creating collections in that database,
-  allowing you to omit these attributes when creating collections. However, the
- values set here are just defaults for new collections in the database.
- The values can still be adjusted per collection when creating new collections
- in that database via the web UI, the arangosh or drivers.
-
- In an Enterprise Edition cluster, the `sharding` attribute can be given a
- value of `"single"`, which will make all new collections in that database use
- the same shard distribution and use one shard by default (OneShard
- configuration). This can still be overridden by setting the values of
- `numberOfShards` and `distributeShardsLike` when creating new collections in
- that database via the web UI, arangosh or drivers (unless the startup option
- `--cluster.force-one-shard` is enabled).
-
-- [Database properties API](../../develop/http-api/databases.md#get-information-about-the-current-database),\
- HTTP route `GET /_api/database/current`
-
- The database properties endpoint returns the new additional attributes
- `replicationFactor`, `writeConcern` and `sharding` in a cluster.
- A description of these attributes can be found above.
-
-- [Collection](../../develop/http-api/collections.md) / [Graph APIs](../../develop/http-api/graphs/named-graphs.md#management),\
- HTTP routes `POST /_api/collection`, `GET /_api/collection/{collection-name}/properties`
- and various `/_api/gharial/*` endpoints
-
- `minReplicationFactor` has been renamed to `writeConcern` for consistency.
- The old attribute name is still accepted and returned for compatibility.
-
-- [Hot Backup API](../../develop/http-api/hot-backups.md#create-a-backup),\
- HTTP route `POST /_admin/backup/create`
-
- New attribute `force`, see [Hot Backup](#hot-backup) below.
-
-- New [Metrics API](../../develop/http-api/monitoring/metrics.md#metrics-api),\
- HTTP route `GET /_admin/metrics`
-
- Returns the instance's current metrics in Prometheus format. The returned
-  document collects all instance metrics measured at any given time and
-  exposes them for collection by Prometheus.
-
- The new endpoint can be used instead of the additional tool
- [arangodb-exporter](https://github.com/arangodb-helper/arangodb-exporter).
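-
-A hedged sketch of creating a database with cluster defaults and OneShard
-sharding via the expanded database creation API (endpoint and credentials are
-placeholders):
-
-```
-curl -X POST --user root: http://localhost:8529/_api/database \
-  -d '{"name": "mydb", "options": {"replicationFactor": 2, "writeConcern": 2, "sharding": "single"}}'
-```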
-
-## Web interface
-
-The web interface now shows the shards of all collections (including system
-collections) in the shard distribution view. Displaying system collections here
-is necessary to access the prototype collections of a collection sharded via
-`distributeShardsLike` in case the prototype is a system collection and shall
-be moved to another server using the web interface.
-
-The web interface now also allows setting a default replication factor when
-creating a new database. This default replication factor will be used for all
-collections created in the new database, unless explicitly overridden.
-
-## Startup options
-
-### Metrics API option
-
-The new [option](../../components/arangodb-server/options.md#--serverexport-metrics-api)
-`--server.export-metrics-api` allows you to disable the metrics API by setting
-it to `false`; the API is enabled by default.
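-
-A hedged sketch of scraping the endpoint while it is enabled (endpoint and
-credentials are placeholders):
-
-```
-curl --user root: http://localhost:8529/_admin/metrics
-```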
-
-### OneShard cluster option
-
-The [option](../../components/arangodb-server/options.md#--clusterforce-one-shard)
-`--cluster.force-one-shard` enables the new OneShard feature for the entire
-cluster deployment. It forces the cluster into creating all future collections
-with only a single shard and using the same DB-Server as these collections'
-shard leader. All collections created this way will be eligible for specific
-AQL query optimizations that can improve query performance and provide advanced
-transactional guarantees.
-
-### Cluster upgrade option
-
-The new [option](../../components/arangodb-server/options.md#--clusterupgrade) `--cluster.upgrade`
-toggles the cluster upgrade mode for Coordinators. It supports the following
-values:
-
-- `auto`:
- perform a cluster upgrade and shut down afterwards if the startup option
- `--database.auto-upgrade` is set to true. Otherwise, don't perform an upgrade.
-
-- `disable`:
- never perform a cluster upgrade, regardless of the value of
- `--database.auto-upgrade`.
-
-- `force`:
- always perform a cluster upgrade and shut down, regardless of the value of
- `--database.auto-upgrade`.
-
-- `online`:
-  always perform a cluster upgrade but don't shut down afterwards.
-
-The default value is `auto`. The option only affects Coordinators. It does not
-have any effect on single servers, Agents, or DB-Servers.
-
-### Other cluster options
-
-The following [options](../../components/arangodb-server/options.md#cluster) have been added:
-
-- `--cluster.max-replication-factor`: maximum replication factor for new
- collections. A value of `0` means that there is no restriction.
- The default value is `10`.
-
-- `--cluster.min-replication-factor`: minimum replication factor for new
- collections. The default value is `1`. This option can be used to prevent the
- creation of collections that do not have any or enough replicas.
-
-- `--cluster.write-concern`: default write concern value used for new
- collections. This option controls the number of replicas that must
-  successfully acknowledge writes to a collection. If any write operation gets
-  fewer acknowledgements than configured here, the collection will go into
- read-only mode until the configured number of replicas are available again.
- The default value is `1`, meaning that writes to just the leader are
- sufficient. To ensure that there is at least one extra copy (i.e. one
- follower), set this option to `2`.
-
-- `--cluster.max-number-of-shards`: maximum number of shards allowed for new
- collections. A value of `0` means that there is no restriction.
- The default value is `1000`.
-
-Note that the above options only have an effect when set for Coordinators, and
-only for collections that are created after the options have been set. They do
-not affect already existing collections.
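-
-A hedged sketch of combining these Coordinator options (the values are
-examples only):
-
-```
-arangod --cluster.min-replication-factor 2 \
-        --cluster.write-concern 2 \
-        --cluster.max-number-of-shards 100
-```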
-
-Furthermore, the following network related [options](../../components/arangodb-server/options.md#network)
-have been added:
-
-- `--network.idle-connection-ttl`: default time-to-live for idle cluster-internal
- connections (in milliseconds). The default value is `60000`.
-
-- `--network.io-threads`: number of I/O threads for cluster-internal network
- requests. The default value is `2`.
-
-- `--network.max-open-connections`: maximum number of open network connections
- for cluster-internal requests. The default value is `1024`.
-
-- `--network.verify-hosts`: if set to `true`, this will verify peer certificates
- for cluster-internal requests when TLS is used. The default value is `false`.
-
-### RocksDB exclusive writes option
-
-The new option `--rocksdb.exclusive-writes` makes all writes to the
-RocksDB storage engine exclusive and therefore avoids write-write conflicts.
-This option was introduced to open a way to upgrade from the MMFiles to the
-RocksDB storage engine without modifying client application code. Otherwise,
-it is best avoided, as the use of exclusive locks on collections introduces a
-noticeable throughput penalty.
-
-Note that the MMFiles engine is deprecated from v3.6.0 on and will be removed
-in a future release. The same applies to this option, which is a stopgap
-measure only.
-
-### AQL options
-
-The new startup option `--query.optimizer-rules` can be used to selectively
-enable or disable AQL query optimizer rules by default. The option can be
-specified multiple times, and takes the same input as the query option of the
-same name.
-
-For example, to turn off the rule `use-indexes-for-sort`, use:
-
-```
---query.optimizer-rules "-use-indexes-for-sort"
-```
-
-The purpose of this [startup option](../../components/arangodb-server/options.md#--queryoptimizer-rules)
-is to be able to enable potential future experimental optimizer rules, which
-may be shipped in a disabled-by-default state.
-
-## Hot Backup
-
-- Force Backup
-
- When creating backups there is an additional option `--force` for
- [arangobackup](../../components/tools/arangobackup/examples.md) and in the HTTP API.
- This option **aborts** ongoing write transactions to obtain the global lock
- for creating the backup. Most likely this is _not_ what you want to do
- because it will abort valid ongoing write operations, but it makes sure that
- backups can be acquired more quickly. The force flag currently only aborts
-  [Stream Transactions](../../develop/http-api/transactions/stream-transactions.md) but not
- [JavaScript Transactions](../../develop/http-api/transactions/javascript-transactions.md).
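-
-  A hedged sketch with _arangobackup_ (the endpoint is a placeholder):
-
-  ```
-  arangobackup create --server.endpoint tcp://127.0.0.1:8529 --force
-  ```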
-
-- View Data
-
-  Hot Backups now include View data. Previously, the Views had to be rebuilt
-  after a restore. Now the Views are available immediately.
-
-## TLS v1.3
-
-Added support for TLS 1.3 for the [arangod server](../../components/arangodb-server/options.md#--sslprotocol)
-and the client tools (also added to v3.5.1).
-
-The arangod server can be started with option `--ssl.protocol 6` to make it require
-TLS 1.3 for incoming client connections. The server can be started with option
-`--ssl.protocol 5` to make it require TLS 1.2, as in previous versions of arangod.
-
-The default TLS protocol for the arangod server is now generic TLS
-(`--ssl.protocol 9`), which will allow the negotiation of the TLS version between
-the client and the server.
-
-All client tools also support TLS 1.3, by using the `--ssl.protocol 6` option when
-invoking them. The client tools will use TLS 1.2 by default, in order to be
-compatible with older versions of ArangoDB that may be contacted by these tools.
-
-To configure the TLS version for arangod instances started by the ArangoDB starter,
-one can use the `--all.ssl.protocol=VALUE` startup option for the ArangoDB starter,
-where VALUE is one of the following:
-
-- 4 = TLSv1
-- 5 = TLSv1.2
-- 6 = TLSv1.3
-- 9 = generic TLS
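-
-A hedged sketch of requiring TLS 1.3 for all instances launched by the starter
-(other required TLS options, such as the keyfile, are omitted):
-
-```
-arangodb --all.ssl.protocol=6
-```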
-
-Note: TLS v1.3 support has been added in ArangoDB v3.5.1 already, but the default TLS
-version in ArangoDB 3.5 was still TLS v1.2. ArangoDB v3.6 uses "generic TLS" as its
-default TLS version, which allows clients to negotiate the TLS version with the
-server, dynamically choosing the **highest** mutually supported version of TLS.
-
-## Miscellaneous
-
-- Remove operations for documents in the cluster will now use an optimization
-  if all sharding keys are specified. Should the sharding keys not match the
-  values in the actual document, a "not found" error will be returned.
-
-- [Collection names](../../concepts/data-structure/collections.md#collection-names)
- in ArangoDB can now be up to 256 characters long, instead of 64 characters in
- previous versions.
-
-- Disallow using `_id` or `_rev` as shard keys in clustered collections.
-
- Using these attributes for sharding was not supported before, but didn't trigger
-  any errors. Instead, collections were created silently using `_key` as
-  the shard key, without making the caller aware that an unsupported shard
-  key was used.
-
-- Make the scheduler enforce the configured queue lengths. The values of the
- options `--server.scheduler-queue-size`, `--server.prio1-size` and
- `--server.maximal-queue-size` will now be honored and not exceeded.
-
- The default queue sizes in the scheduler for requests buffering have
- also been changed as follows:
-
- ```
- request type before now
- -----------------------------------
- high priority 128 4096
- medium priority 1048576 4096
- low priority 4096 4096
- ```
-
- The queue sizes can still be adjusted at server start using the above-
- mentioned startup options.
-
-## Internal
-
-Release packages for Linux are now built using inter-procedural
-optimizations (IPO).
-
-We have moved from C++14 to C++17, which allows us to use some of the
-simplifications, features, and guarantees that this standard provides.
-To compile ArangoDB 3.6 from source, a compiler that supports C++17 is now
-required.
-
-The bundled JEMalloc memory allocator used in ArangoDB release packages has
-been upgraded from version 5.2.0 to version 5.2.1.
-
-The bundled version of the Boost library has been upgraded from 1.69.0 to
-1.71.0.
-
-The bundled version of xxhash has been upgraded from 0.5.1 to 0.7.2.
diff --git a/site/content/3.11/_index.md b/site/content/3.11/_index.md
deleted file mode 100644
index a50a8ab193..0000000000
--- a/site/content/3.11/_index.md
+++ /dev/null
@@ -1,42 +0,0 @@
----
-title: Recommended Resources
-menuTitle: '3.11'
-weight: 0
-layout: default
----
-{{< cloudbanner >}}
-
-{{< cards >}}
-
-{{% card title="What is ArangoDB?" link="about-arangodb/" %}}
-Get to know graphs, ArangoDB's use cases and features.
-{{% /card %}}
-
-{{% card title="Get started" link="get-started/" %}}
-Learn about ArangoDB's core concepts, how to interact with the database system,
-and get a server instance up and running.
-{{% /card %}}
-
-{{% card title="ArangoGraph Insights Platform" link="arangograph/" %}}
-Try out ArangoDB's fully-managed cloud offering for a faster time to value.
-{{% /card %}}
-
-{{% card title="AQL" link="aql/" %}}
-ArangoDB's Query Language AQL lets you use graphs, JSON documents, and search
-via a single, composable query language.
-{{% /card %}}
-
-{{% card title="Data Science" link="data-science/" %}}
-Discover the graph analytics and machine learning features of ArangoDB.
-{{% /card %}}
-
-{{% card title="Deploy" link="deploy/" %}}
-Find the right deployment mode and set up your ArangoDB instance.
-{{% /card %}}
-
-{{% card title="Develop" link="develop/" %}}
-See the in-depth feature and API documentation to start developing applications
-with ArangoDB as your backend.
-{{% /card %}}
-
-{{< /cards >}}
diff --git a/site/content/3.11/about-arangodb/_index.md b/site/content/3.11/about-arangodb/_index.md
deleted file mode 100644
index 9b96a70c37..0000000000
--- a/site/content/3.11/about-arangodb/_index.md
+++ /dev/null
@@ -1,75 +0,0 @@
----
-title: What is ArangoDB?
-menuTitle: About ArangoDB
-weight: 5
-description: >-
- ArangoDB is a scalable graph database system to drive value from connected
- data, faster
-aliases:
- - introduction
- - introduction/about-arangodb
----
-
-ArangoDB combines the analytical power of native graphs with an integrated
-search engine, JSON support, and a variety of data access patterns via a single,
-composable query language.
-
-ArangoDB is available in an open-source and a commercial [edition](features/_index.md).
-You can use it for on-premises deployments, as well as a fully managed
-cloud service, the [ArangoGraph Insights Platform](../arangograph/_index.md).
-
-## What are Graphs?
-
-Graphs are information networks comprised of nodes and relations.
-
-A social network is a common example of a graph. People are represented by nodes
-and their friendships by relations.
-
-Nodes are also called vertices (singular: vertex), and relations are edges that
-connect vertices.
-A vertex typically represents a specific entity (a person, a book, a sensor
-reading, etc.) and an edge defines how one entity relates to another.
-
-This paradigm of storing data feels natural because it closely matches the
-cognitive model of humans. It is an expressive data model that allows you to
-represent many problem domains and solve them with semantic queries and graph
-analytics.
-
-## Beyond Graphs
-
-Not everything is a graph use case. ArangoDB lets you equally work with
-structured, semi-structured, and unstructured data in the form of schema-free
-JSON objects, without having to connect these objects to form a graph.
-
-Depending on your needs, you may mix graphs and unconnected data.
-ArangoDB is designed from the ground up to support multiple data models with a
-single, composable query language.
-
-```aql
-FOR book IN Books
- FILTER book.title == "ArangoDB"
- FOR person IN 2..2 INBOUND book Sales, OUTBOUND People
- RETURN person.name
-```
-
-ArangoDB also comes with an integrated search engine for information retrieval,
-such as full-text search with relevance ranking.
-
-ArangoDB is written in C++ for high performance and built to work at scale, in
-the cloud or on-premises.
-
diff --git a/site/content/3.11/about-arangodb/features/_index.md b/site/content/3.11/about-arangodb/features/_index.md
deleted file mode 100644
index 0d303e5ba6..0000000000
--- a/site/content/3.11/about-arangodb/features/_index.md
+++ /dev/null
@@ -1,126 +0,0 @@
----
-title: Features and Capabilities
-menuTitle: Features
-weight: 20
-description: >-
- ArangoDB is a graph database with a powerful set of features for data management and analytics,
- supported by a rich ecosystem of integrations and drivers
-aliases:
- - ../introduction/features
----
-## On-premises versus Cloud
-
-### Fully managed cloud service
-
-The fully managed multi-cloud
-[ArangoGraph Insights Platform](https://dashboard.arangodb.cloud/home?utm_source=docs&utm_medium=cluster_pages&utm_campaign=docs_traffic)
-is the easiest and fastest way to get started. It runs the Enterprise Edition
-of ArangoDB, lets you deploy clusters with just a few clicks, and is operated
-by a dedicated team of ArangoDB engineers day and night. You can choose from a
-variety of support plans to meet your needs.
-
-- Supports many of the AWS and GCP cloud deployment regions
-- High availability featuring multi-region zone clusters, managed backups,
- and zero-downtime upgrades
-- Integrated monitoring, alerting, and log management
-- Highly secure with encryption at transit and at rest
-- Includes elastic scalability for all deployment models (OneShard and Sharded clusters)
-
-To learn more, go to the [ArangoGraph documentation](../../arangograph/_index.md).
-
-### Self-managed in the cloud
-
-ArangoDB can be self-deployed on AWS or other cloud platforms, too. However, when
-using a self-managed deployment, you take full control of managing the resources
-needed to run it in the cloud. This involves tasks such as configuring,
-provisioning, and monitoring the system. For more details, see
-[self-deploying ArangoDB in the cloud](../../deploy/in-the-cloud.md).
-
-ArangoDB supports Kubernetes through its official
-[Kubernetes Operator](../../deploy/kubernetes.md) that allows you to easily
-deploy and manage clusters within a Kubernetes environment.
-
-### On-premises
-
-Running ArangoDB on-premises means that ArangoDB is installed locally, on your
-organization's computers and servers, and involves managing all the necessary
-resources within the organization's environment, rather than using external
-services.
-
-You can install ArangoDB locally by downloading and running the
-[official packages](https://arangodb.com/download/) or run it using
-[Docker images](../../operations/installation/docker.md).
-
-You can deploy it on-premises as a
-[single server](../../deploy/single-instance/_index.md)
-or as a [cluster](../../deploy/cluster/_index.md)
-comprised of multiple nodes with synchronous replication and automatic failover
-for high availability and resilience. For the highest level of data safety,
-you can additionally set up off-site replication for your entire cluster
-([Datacenter-to-Datacenter Replication](../../deploy/arangosync/_index.md)).
-
-ArangoDB also integrates with Kubernetes, offering a
-[Kubernetes Operator](../../deploy/kubernetes.md) that lets you deploy in your
-Kubernetes cluster.
-
-## ArangoDB Editions
-
-### Community Edition
-
-ArangoDB is freely available in a **Community Edition** under the Apache 2.0
-open-source license. It is a fully-featured version without time or size
-restrictions and includes cluster support.
-
-- Open source under a permissive license
-- One database core for all graph, document, key-value, and search needs
-- A single composable query language for all data models
-- Extensible through microservices with custom REST APIs and user-definable
- query functions
-- Cluster deployments for high availability and resilience
-
-See all [Community Edition Features](community-edition.md).
-
-### Enterprise Edition
-
-ArangoDB is also available in a commercial version, called the
-**Enterprise Edition**. It includes additional features for performance and
-security, such as for scaling graphs and managing your data safely.
-
-- Includes all Community Edition features
-- Performance options to smartly shard and replicate graphs and datasets for
- optimal data locality
-- Multi-tenant deployment option for the transactional guarantees and
- performance of a single server
-- Enhanced data security with on-disk and backup encryption, key rotation,
- audit logging, and LDAP authentication
-- Incremental backups without downtime and off-site replication
-
-See all [Enterprise Edition Features](enterprise-edition.md).
-
-### Differences between the Editions
-
-| Community Edition | Enterprise Edition |
-|-------------------|--------------------|
-| Apache 2.0 License | Commercial License |
-| Sharding using consistent hashing on the default or custom shard keys | In addition, **smart sharding** for improved data locality |
-| Only hash-based graph sharding | **SmartGraphs** to intelligently shard large graph datasets and **EnterpriseGraphs** with an automatic sharding key selection |
-| Only regular collection replication without data locality optimizations | **SatelliteCollections** to replicate collections on all cluster nodes and data locality optimizations for queries |
-| No optimizations when querying sharded graphs and replicated collections together | **SmartGraphs using SatelliteCollections** to enable more local execution of graph queries |
-| Only regular graph replication without local execution optimizations | **SatelliteGraphs** to execute graph traversals locally on a cluster node |
-| Collections can be sharded alike but joins do not utilize co-location | **SmartJoins** for co-located joins in a cluster using identically sharded collections |
-| Graph traversals without parallel execution | **Parallel execution of traversal queries** with many start vertices |
-| Graph traversals always load full documents | **Traversal projections** optimize the data loading of AQL traversal queries if only a few document attributes are accessed |
-| Iterative graph processing (Pregel) for single servers | **Pregel graph processing for clusters** and single servers |
-| Inverted indexes and Views without support for search highlighting and nested search | **Search highlighting** for getting the substring positions of matches and **nested search** for matching arrays with all the conditions met by a single object |
-| Only standard Jaccard index calculation | **Jaccard similarity approximation** with MinHash for entity resolution, such as for finding duplicate records, based on how many common elements they have |{{% comment %}} Experimental feature
-| No fastText model support | Classification of text tokens and finding similar tokens using supervised **fastText word embedding models** |
-{{% /comment %}}
-| Only regular cluster deployments | **OneShard** deployment option to store all collections of a database on a single cluster node, to combine the performance of a single server and ACID semantics with a fault-tolerant cluster setup |
-| ACID transactions for multi-document / multi-collection queries on single servers, for single document operations in clusters, and for multi-document queries in clusters for collections with a single shard | In addition, ACID transactions for multi-collection queries using the OneShard feature |
-| Always read from leader shards in clusters | Optionally allow dirty reads to **read from followers** to scale reads |
-| TLS key and certificate rotation | In addition, **key rotation for JWT secrets** and **server name indication** (SNI) |
-| Built-in user management and authentication | Additional **LDAP authentication** option |
-| Only server logs | **Audit log** of server interactions |
-| No on-disk encryption | **Encryption at Rest** with hardware-accelerated on-disk encryption and key rotation |
-| Only regular backups | **Datacenter-to-Datacenter Replication** for disaster recovery |
-| Only unencrypted backups and basic data masking for backups | **Hot Backups**, **encrypted backups**, and **enhanced data masking** for backups |
diff --git a/site/content/3.11/about-arangodb/use-cases.md b/site/content/3.11/about-arangodb/use-cases.md
deleted file mode 100644
index fab9e86a90..0000000000
--- a/site/content/3.11/about-arangodb/use-cases.md
+++ /dev/null
@@ -1,164 +0,0 @@
----
-title: ArangoDB Use Cases
-menuTitle: Use Cases
-weight: 15
-description: >-
- ArangoDB is a database system with a large solution space because it combines
- graphs, documents, key-value, search engine, and machine learning all in one
-pageToc:
- maxHeadlineLevel: 2
-aliases:
- - ../introduction/use-cases
----
-## ArangoDB as a Graph Database
-
-ArangoDB as a graph database is a great fit for use cases like fraud detection,
-knowledge graphs, recommendation engines, identity and access management,
-network and IT operations, social media management, traffic management, and many
-more.
-
-### Fraud Detection
-
-{{< image src="../../images/icon-fraud-detection.png" alt="Fraud Detection icon" style="float: right; padding: 0 20px; margin-bottom: 20px;">}}
-
-Uncover illegal activities by discovering difficult-to-detect patterns.
-ArangoDB lets you look beyond individual data points in disparate data sources,
-allowing you to integrate and harmonize data to analyze activities and
-relationships all together, for a broader view of connection patterns, to detect
-complex fraudulent behavior such as fraud rings.
-
-### Recommendation Engine
-
-{{< image src="../../images/icon-recommendation-engine.png" alt="Recommendation Engine icon" style="float: right; padding: 0 20px; margin-bottom: 20px;">}}
-
-Suggest products, services, and information to users based on data relationships.
-For example, you can use ArangoDB together with PyTorch Geometric to build a
-[movie recommendation system](https://www.arangodb.com/2022/04/integrate-arangodb-with-pytorch-geometric-to-build-recommendation-systems/),
-by analyzing the movies users watched and then predicting links between users
-and movies with a graph neural network (GNN).
-
-### Network Management
-
-{{< image src="../../images/icon-network-management.png" alt="Network Management icon" style="float: right; padding: 0 20px; margin-bottom: 20px;">}}
-
-Reduce downtime by connecting and visualizing network, infrastructure, and code.
-Network devices and how they interconnect can naturally be modeled as a graph.
-Traversal algorithms let you explore the routes between different nodes, with the
-option to stop at subnet boundaries or to take things like the connection
-bandwidth into account when path-finding.
-
-### Customer 360
-
-{{< image src="../../images/icon-customer-360.png" alt="Customer 360 icon" style="float: right; padding: 0 20px; margin-bottom: 20px;">}}
-
-Gain a complete understanding of your customers by integrating multiple data
-sources and code. ArangoDB can act as the platform to merge and consolidate
-information in any shape, with the added ability to link related records and to
-track data origins using graph features.
-
-### Identity and Access Management
-
-{{< image src="../../images/icon-identity-management.png" alt="Identity Management icon" style="float: right; padding: 0 20px; margin-bottom: 20px;">}}
-
-Increase security and compliance by managing data access based on role and
-position. You can map out an organization chart as a graph and use ArangoDB to
-determine who is authorized to see which information. Put ArangoDB's graph
-capabilities to work to implement access control lists and permission
-inheritance.
-
-### Supply Chain
-
-{{< image src="../../images/icon-supply-chain.png" alt="Supply Chain icon" style="float: right; padding: 0 20px; margin-bottom: 20px;">}}
-
-Speed shipments by monitoring and optimizing the flow of goods through a
-supply chain. You can represent your inventory, supplier, and delivery
-information as a graph to understand what the possible sources of delays and
-disruptions are.
-
-## ArangoDB as a Document Database
-
-ArangoDB can be used as the backend for heterogeneous content management,
-e-commerce systems, Internet of Things applications, and more generally as a
-persistence layer for a broad range of services that benefit from an agile
-and scalable data store.
-
-### Content Management
-
-{{< image src="../../images/icon-content-management.png" alt="Content management icon" style="float: right; padding: 0 20px; margin-bottom: 20px;">}}
-
-Store information of any kind without upfront schema declaration. ArangoDB is
-schema-free, storing every data record as a self-contained document, allowing
-you to manage heterogeneous content with ease. Build the next (headless)
-content management system on top of ArangoDB.
-
-### E-Commerce Systems
-
-{{< image src="../../images/icon-e-commerce.png" alt="E-commerce icon" style="float: right; padding: 0 20px; margin-bottom: 20px;">}}
-
-ArangoDB combines data modeling freedom with strong consistency and resilience
-features to power online shops and ordering systems. Handle product catalog data
-with ease using any combination of free text and structured data, and process
-checkouts with the necessary transactional guarantees.
-
-### Internet of Things
-
-{{< image src="../../images/icon-internet-of-things.png" alt="Internet of things icon" style="float: right; padding: 0 20px; margin-bottom: 20px;">}}
-
-Collect sensor readings and other IoT data in ArangoDB for a single view of
-everything. Store all data points in the same system that also lets you run
-aggregation queries using sliding windows for efficient data analysis.
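-
-As a sketch of such a sliding-window aggregation, the following query computes
-a rolling average over the last five readings per result row using the AQL
-`WINDOW` operation (the `sensorData` collection and its attributes are
-illustrative):
-
-```aql
-FOR reading IN sensorData
-  SORT reading.timestamp
-  WINDOW { preceding: 4, following: 0 }
-  AGGREGATE rollingAvg = AVG(reading.value)
-  RETURN { time: reading.timestamp, rollingAvg }
-```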
-
-## ArangoDB as a Key-Value Database
-
-{{< image src="../../images/icon-key-value.png" alt="Key value icon" style="float: right; padding: 0 20px; margin-bottom: 20px;">}}
-
-Key-value stores are the simplest kind of database systems. Each record is
-stored as a block of data under a key that uniquely identifies the record.
-The data is opaque, which means the system doesn't know anything about the
-contained information; it simply stores it and can retrieve it for you via
-the identifiers.
-
-This paradigm is used at the heart of ArangoDB and allows it to scale well,
-but without the limitations of a pure key-value store. Every document has a
-`_key` attribute, which is either user-provided or automatically generated.
-You can create additional indexes and work with subsets of attributes as
-needed, which requires the system to be aware of the stored data structures,
-unlike pure key-value stores.
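-
-As a minimal sketch, a key-based lookup can be expressed in AQL with the
-`DOCUMENT()` function (the collection name `users` and the key are
-illustrative):
-
-```aql
-// Fetch a single document by its key, like a GET in a key-value store
-RETURN DOCUMENT("users/my-user-key")
-```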
-
-While ArangoDB can store binary data, it is not designed for
-binary large objects (BLOBs) and works best with small to medium-sized
-JSON objects.
-
-For more information about how ArangoDB persists data, see
-[Storage Engine](../components/arangodb-server/storage-engine.md).
-
-## ArangoDB as a Search Engine
-
-{{< image src="../../images/icon-search-engine.png" alt="Search engine icon" style="float: right; padding: 0 20px; margin-bottom: 20px;">}}
-
-ArangoDB has a natively integrated search engine for a broad range of
-information retrieval needs. It is powered by inverted indexes and can index
-full text, GeoJSON, and arbitrary JSON data. It supports various
-kinds of search patterns (tokens, phrases, wildcard, fuzzy, geo-spatial, etc.)
-and it can rank results by relevance and similarity using popular
-scoring algorithms.
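-
-For example, a relevance-ranked phrase search can be expressed in AQL like
-this (a sketch that assumes a View named `articlesView` whose `body` attribute
-is indexed with the built-in `text_en` Analyzer):
-
-```aql
-FOR doc IN articlesView
-  SEARCH PHRASE(doc.body, "graph database", "text_en")
-  SORT BM25(doc) DESC
-  LIMIT 10
-  RETURN doc.title
-```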
-
-It also features natural language processing (NLP) capabilities.
-{{% comment %}} Experimental feature
-and can classify or find similar terms using word embedding models.
-{{% /comment %}}
-
-For more information about the search engine, see [ArangoSearch](../index-and-search/arangosearch/_index.md).
-
-## ArangoDB for Machine Learning
-
-You can use ArangoDB as the foundation for machine learning based on graphs
-at enterprise scale. You can use it as a metadata store for model training
-parameters, run analytical algorithms in the database, or serve operative
-queries using data that you computed.
-
-ArangoDB integrates well into existing data infrastructures and provides
-connectors for popular machine learning frameworks and data processing
-ecosystems.
-
-
diff --git a/site/content/3.11/aql/examples-and-query-patterns/remove-vertex.md b/site/content/3.11/aql/examples-and-query-patterns/remove-vertex.md
deleted file mode 100644
index 60a845ad94..0000000000
--- a/site/content/3.11/aql/examples-and-query-patterns/remove-vertex.md
+++ /dev/null
@@ -1,81 +0,0 @@
----
-title: Remove vertices with AQL
-menuTitle: Remove vertex
-weight: 45
-description: >-
- Removing connected edges along with vertex documents directly in AQL is
- possible in a limited way
----
-Deleting vertices along with their associated edges is currently not handled
-directly by AQL, whereas the
-[graph management interface](../../graphs/general-graphs/management.md#remove-a-vertex)
-and the
-[REST API for the graph module](../../develop/http-api/graphs/named-graphs.md#remove-a-vertex)
-offer vertex deletion functionality.
-However, as shown in this example based on the
-[Knows Graph](../../graphs/example-graphs.md#knows-graph), a query for this
-use case can be created.
-
-
-
-When deleting vertex **eve** from the graph, we also want the edges
-`eve -> alice` and `eve -> bob` to be removed.
-The involved graph and its only edge collection have to be known. In this case,
-it is the graph **knows_graph** and the edge collection **knows**.
-
-This query deletes **eve** along with its adjacent edges:
-
-```aql
----
-name: GRAPHTRAV_removeVertex1
-description: ''
-dataset: knows_graph
----
-LET edgeKeys = (FOR v, e IN 1..1 ANY 'persons/eve' GRAPH 'knows_graph' RETURN e._key)
-LET r = (FOR key IN edgeKeys REMOVE key IN knows)
-REMOVE 'eve' IN persons
-```
-
-This query executes several actions:
-- use a graph traversal of depth 1 to get the `_key` of **eve's** adjacent edges
-- remove all of these edges from the `knows` collection
-- remove vertex **eve** from the `persons` collection
-
-The following query shows a different design to achieve the same result:
-
-```aql
----
-name: GRAPHTRAV_removeVertex2
-description: ''
-dataset: knows_graph
----
-LET edgeKeys = (FOR v, e IN 1..1 ANY 'persons/eve' GRAPH 'knows_graph'
- REMOVE e._key IN knows)
-REMOVE 'eve' IN persons
-```
-
-**Note**: The query has to be adjusted to match a graph with multiple
-vertex/edge collections.
-
-For example, the [City Graph](../../graphs/example-graphs.md#city-graph)
-contains several vertex collections (`germanCity` and `frenchCity`) and
-several edge collections (`frenchHighway`, `germanHighway`, and
-`internationalHighway`).
-
-
-
-To delete the city **Berlin**, all edge collections (`frenchHighway`,
-`germanHighway`, and `internationalHighway`) have to be considered. The
-**REMOVE** operation has to be applied to all edge collections with
-`OPTIONS { ignoreErrors: true }`. Without this option, the query stops as soon
-as it attempts to remove a non-existing key from a collection.
-
-```aql
----
-name: GRAPHTRAV_removeVertex3
-description: ''
-dataset: routeplanner
----
-LET edgeKeys = (FOR v, e IN 1..1 ANY 'germanCity/Berlin' GRAPH 'routeplanner' RETURN e._key)
-LET r = (FOR key IN edgeKeys REMOVE key IN internationalHighway
- OPTIONS { ignoreErrors: true } REMOVE key IN germanHighway
- OPTIONS { ignoreErrors: true } REMOVE key IN frenchHighway
- OPTIONS { ignoreErrors: true })
-REMOVE 'Berlin' IN germanCity
-```
diff --git a/site/content/3.11/aql/examples-and-query-patterns/traversals.md b/site/content/3.11/aql/examples-and-query-patterns/traversals.md
deleted file mode 100644
index b2521d48c2..0000000000
--- a/site/content/3.11/aql/examples-and-query-patterns/traversals.md
+++ /dev/null
@@ -1,118 +0,0 @@
----
-title: Combining AQL Graph Traversals
-menuTitle: Traversals
-weight: 40
-description: >-
- You can combine graph queries with other AQL features like geo-spatial search
----
-## Finding the start vertex via a geo query
-
-Our first example will locate the start vertex for a graph traversal via [a geo index](../../index-and-search/indexing/working-with-indexes/geo-spatial-indexes.md).
-We use the [City Graph](../../graphs/example-graphs.md#city-graph) and its geo indexes:
-
-
-
-```js
----
-name: COMBINING_GRAPH_01_create_graph
-description: ''
----
-var examples = require("@arangodb/graph-examples/example-graph");
-examples.loadGraph("routeplanner");
-~examples.dropGraph("routeplanner");
-```
-
-We search for all German cities within a range of 400 km around the former
-capital **Bonn**: **Hamburg** and **Cologne**.
-We won't find **Paris** since it is in the `frenchCity` collection.
-
-```aql
----
-name: COMBINING_GRAPH_02_show_geo
-description: ''
-dataset: routeplanner
-bindVars:
- {
- "bonn": [7.0998, 50.7340],
- "radius": 400000
- }
----
-FOR startCity IN germanCity
- FILTER GEO_DISTANCE(@bonn, startCity.geometry) < @radius
- RETURN startCity._key
-```
-
-Let's verify that the geo indexes are actually used:
-
-```aql
----
-name: COMBINING_GRAPH_03_explain_geo
-description: ''
-dataset: routeplanner
-explain: true
-bindVars:
- {
- "bonn": [7.0998, 50.7340],
- "radius": 400000
- }
----
-FOR startCity IN germanCity
- FILTER GEO_DISTANCE(@bonn, startCity.geometry) < @radius
- RETURN startCity._key
-```
-
-And now combine this with a graph traversal:
-
-```aql
----
-name: COMBINING_GRAPH_04_combine
-description: ''
-dataset: routeplanner
-bindVars:
- {
- "bonn": [7.0998, 50.7340],
- "radius": 400000
- }
----
-FOR startCity IN germanCity
- FILTER GEO_DISTANCE(@bonn, startCity.geometry) < @radius
- FOR v, e, p IN 1..1 OUTBOUND startCity
- GRAPH 'routeplanner'
- RETURN {startcity: startCity._key, traversedCity: v._key}
-```
-
-The geo index query returns the `startCity` documents (**Cologne** and
-**Hamburg**), which we then use as starting points for our graph traversal.
-For simplicity, we only return their direct neighbours. We format the result so
-that we can see from which `startCity` each traversal started.
-
-Alternatively, we could use a `LET` statement with a subquery to efficiently
-group the traversals by their `startCity`:
-
-```aql
----
-name: COMBINING_GRAPH_05_combine_let
-description: ''
-dataset: routeplanner
-bindVars:
- {
- "bonn": [7.0998, 50.7340],
- "radius": 400000
- }
----
-FOR startCity IN germanCity
- FILTER GEO_DISTANCE(@bonn, startCity.geometry) < @radius
- LET oneCity = (
- FOR v, e, p IN 1..1 OUTBOUND startCity
- GRAPH 'routeplanner' RETURN v._key
- )
- RETURN {startCity: startCity._key, connectedCities: oneCity}
-```
-
-Finally, we clean up again:
-
-```js
----
-name: COMBINING_GRAPH_06_cleanup
-description: ''
----
-~var examples = require("@arangodb/graph-examples/example-graph");
-~var g = examples.loadGraph("routeplanner");
-examples.dropGraph("routeplanner");
-```
diff --git a/site/content/3.11/aql/functions/arangosearch.md b/site/content/3.11/aql/functions/arangosearch.md
deleted file mode 100644
index 303eb0419f..0000000000
--- a/site/content/3.11/aql/functions/arangosearch.md
+++ /dev/null
@@ -1,1361 +0,0 @@
----
-title: ArangoSearch functions in AQL
-menuTitle: ArangoSearch
-weight: 5
-description: >-
-  ArangoSearch offers various AQL functions for search queries to control the
-  search context, filter results, and score matches
-pageToc:
- maxHeadlineLevel: 3
----
-You can form search expressions by composing ArangoSearch function calls,
-logical operators and comparison operators. This allows you to filter Views
-as well as to utilize inverted indexes to filter collections.
-
-The AQL [`SEARCH` operation](../high-level-operations/search.md) accepts search expressions,
-such as `PHRASE(doc.text, "foo bar", "text_en")`, for querying Views. You can
-combine ArangoSearch filter and context functions as well as operators like
-`AND` and `OR` to form complex search conditions. Similarly, the
-[`FILTER` operation](../high-level-operations/filter.md) accepts such search expressions
-when using [inverted indexes](../../index-and-search/indexing/working-with-indexes/inverted-indexes.md).
-
-Scoring functions allow you to rank matches and to sort results by relevance.
-They are limited to Views.
-
-Search highlighting functions let you retrieve the string positions of matches.
-They are limited to Views.
-
-You can also use most functions without an inverted index or a View and the
-`SEARCH` keyword, but then they are not accelerated by an index.
-
-See [Information Retrieval with ArangoSearch](../../index-and-search/arangosearch/_index.md) for an
-introduction.
-
-## Context Functions
-
-### ANALYZER()
-
-`ANALYZER(expr, analyzer) → retVal`
-
-Sets the Analyzer for the given search expression.
-
-{{< info >}}
-The `ANALYZER()` function is only applicable for queries against `arangosearch` Views.
-
-In queries against `search-alias` Views and inverted indexes, you don't need to
-specify Analyzers because every field can be indexed with a single Analyzer only
-and they are inferred from the index definition.
-{{< /info >}}
-
-The default Analyzer is `identity` for any search expression that is used for
-filtering `arangosearch` Views. This utility function can be used
-to wrap a complex expression to set a particular Analyzer. It also sets the
-Analyzer for all nested functions that require such an argument, avoiding the
-need to repeat the Analyzer parameter. If an Analyzer argument is passed to a
-nested function regardless, then it takes precedence over the Analyzer set via
-`ANALYZER()`.
-
-The `TOKENS()` function is an exception. It requires the Analyzer name to be
-passed in all cases, even if wrapped in an `ANALYZER()` call, because it is
-not an ArangoSearch function but a regular string function that can be used
-outside of `SEARCH` operations.
-
-- **expr** (expression): any valid search expression
-- **analyzer** (string): name of an [Analyzer](../../index-and-search/analyzers.md).
-- returns **retVal** (any): the expression result that it wraps
-
-#### Example: Using a custom Analyzer
-
-Assuming a View definition with an Analyzer whose name and type is `delimiter`:
-
-```json
-{
- "links": {
- "coll": {
- "analyzers": [ "delimiter" ],
- "includeAllFields": true,
- }
- },
- ...
-}
-```
-
-… with the Analyzer properties `{ "delimiter": "|" }` and an example document
-`{ "text": "foo|bar|baz" }` in the collection `coll`, the following query would
-return the document:
-
-```aql
-FOR doc IN viewName
- SEARCH ANALYZER(doc.text == "bar", "delimiter")
- RETURN doc
-```
-
-The expression `doc.text == "bar"` has to be wrapped by `ANALYZER()` in order
-to set the Analyzer to `delimiter`. Otherwise the expression would be evaluated
-with the default `identity` Analyzer. Not only would `"foo|bar|baz" == "bar"`
-not match, the View does not even process the indexed fields with the
-`identity` Analyzer. The following query would also return an empty result
-because of the Analyzer mismatch:
-
-```aql
-FOR doc IN viewName
- SEARCH doc.text == "foo|bar|baz"
- //SEARCH ANALYZER(doc.text == "foo|bar|baz", "identity")
- RETURN doc
-```
-
-#### Example: Setting the Analyzer context with and without `ANALYZER()`
-
-In the query below, the search expression is wrapped by `ANALYZER()` to set the
-`text_en` Analyzer for both `PHRASE()` functions:
-
-```aql
-FOR doc IN viewName
- SEARCH ANALYZER(PHRASE(doc.text, "foo") OR PHRASE(doc.text, "bar"), "text_en")
- RETURN doc
-```
-
-Without the usage of `ANALYZER()`:
-
-```aql
-FOR doc IN viewName
- SEARCH PHRASE(doc.text, "foo", "text_en") OR PHRASE(doc.text, "bar", "text_en")
- RETURN doc
-```
-
-#### Example: Analyzer precedence and specifics of the `TOKENS()` function
-
-In the following example `ANALYZER()` is used to set the Analyzer `text_en`,
-but in the second call to `PHRASE()` a different Analyzer is set (`identity`)
-which overrules `ANALYZER()`. Therefore, the `text_en` Analyzer is used to find
-the phrase *foo* and the `identity` Analyzer to find *bar*:
-
-```aql
-FOR doc IN viewName
- SEARCH ANALYZER(PHRASE(doc.text, "foo") OR PHRASE(doc.text, "bar", "identity"), "text_en")
- RETURN doc
-```
-
-Despite the wrapping `ANALYZER()` function, the Analyzer name cannot be
-omitted in calls to the `TOKENS()` function. Both occurrences of `text_en`
-are required to set the Analyzer for the expression `doc.text IN ...` and
-for the `TOKENS()` function itself. This is because the `TOKENS()` function
-is a regular string function that does not take the Analyzer context into
-account:
-
-```aql
-FOR doc IN viewName
- SEARCH ANALYZER(doc.text IN TOKENS("foo", "text_en"), "text_en")
- RETURN doc
-```
-
-### BOOST()
-
-`BOOST(expr, boost) → retVal`
-
-Override the boost value in the context of a search expression with the
-specified value, making it available for scorer functions. By default, the
-context has a boost value of `1.0`.
-
-- **expr** (expression): any valid search expression
-- **boost** (number): numeric boost value
-- returns **retVal** (any): the expression result that it wraps
-
-#### Example: Boosting a search sub-expression
-
-```aql
-FOR doc IN viewName
- SEARCH ANALYZER(BOOST(doc.text == "foo", 2.5) OR doc.text == "bar", "text_en")
- LET score = BM25(doc)
- SORT score DESC
- RETURN { text: doc.text, score }
-```
-
-Assuming a View with the following documents indexed and processed by the
-`text_en` Analyzer:
-
-```js
-{ "text": "foo bar" }
-{ "text": "foo" }
-{ "text": "bar" }
-{ "text": "foo baz" }
-{ "text": "baz" }
-```
-
-… the result of the above query would be:
-
-```json
-[
- {
- "text": "foo bar",
- "score": 2.787301540374756
- },
- {
- "text": "foo baz",
- "score": 1.6895781755447388
- },
- {
- "text": "foo",
- "score": 1.525835633277893
- },
- {
- "text": "bar",
- "score": 0.9913395643234253
- }
-]
-```
-
-## Filter Functions
-
-### EXISTS()
-
-{{< info >}}
-If you use `arangosearch` Views, the `EXISTS()` function only matches values if
-you set the **storeValues** link property to `"id"` in the View definition
-(the default is `"none"`).
-{{< /info >}}
-
-#### Testing for attribute presence
-
-`EXISTS(path)`
-
-Match documents where the attribute at `path` is present.
-
-- **path** (attribute path expression): the attribute to test in the document
-- returns nothing: the function evaluates to a boolean, but this value cannot be
- returned. The function can only be called in a search expression. It throws
- an error if used outside of a [`SEARCH` operation](../high-level-operations/search.md) or
- a `FILTER` operation that uses an inverted index.
-
-```aql
-FOR doc IN viewName
- SEARCH EXISTS(doc.text)
- RETURN doc
-```
-
-#### Testing for attribute type
-
-`EXISTS(path, type)`
-
-Match documents where the attribute at `path` is present _and_ is of the
-specified data type.
-
-- **path** (attribute path expression): the attribute to test in the document
-- **type** (string): data type to test for, can be one of:
- - `"null"`
- - `"bool"` / `"boolean"`
- - `"numeric"`
- - `"type"` (matches `null`, `boolean`, and `numeric` values)
- - `"string"`
- - `"analyzer"` (see below)
-- returns nothing: the function evaluates to a boolean, but this value cannot be
- returned. The function can only be called in a search expression. It throws
- an error if used outside of a [`SEARCH` operation](../high-level-operations/search.md) or
- a `FILTER` operation that uses an inverted index.
-
-```aql
-FOR doc IN viewName
- SEARCH EXISTS(doc.text, "string")
- RETURN doc
-```
-
-#### Testing for Analyzer index status
-
-`EXISTS(path, "analyzer", analyzer)`
-
-Match documents where the attribute at `path` is present _and_ was indexed
-by the specified `analyzer`.
-
-- **path** (attribute path expression): the attribute to test in the document
-- **type** (string): string literal `"analyzer"`
-- **analyzer** (string, _optional_): name of an [Analyzer](../../index-and-search/analyzers.md).
- Uses the Analyzer of a wrapping `ANALYZER()` call if not specified or
- defaults to `"identity"`
-- returns nothing: the function evaluates to a boolean, but this value cannot be
- returned. The function can only be called in a search expression. It throws
- an error if used outside of a [`SEARCH` operation](../high-level-operations/search.md) or
- a `FILTER` operation that uses an inverted index.
-
-```aql
-FOR doc IN viewName
- SEARCH EXISTS(doc.text, "analyzer", "text_en")
- RETURN doc
-```
-
-#### Testing for nested fields
-
-`EXISTS(path, "nested")`
-
-Match documents where the attribute at `path` is present _and_ is indexed
-as a nested field for [nested search with Views](../../index-and-search/arangosearch/nested-search.md)
-or [inverted indexes](../../index-and-search/indexing/working-with-indexes/inverted-indexes.md#nested-search-enterprise-edition).
-
-- **path** (attribute path expression): the attribute to test in the document
-- **type** (string): string literal `"nested"`
-- returns nothing: the function evaluates to a boolean, but this value cannot be
- returned. The function can only be called in a search expression. It throws
- an error if used outside of a [`SEARCH` operation](../high-level-operations/search.md) or
- a `FILTER` operation that uses an inverted index.
-
-**Examples**
-
-Only return documents from the View `viewName` whose `text` attribute is indexed
-as a nested field:
-
-```aql
-FOR doc IN viewName
- SEARCH EXISTS(doc.text, "nested")
- RETURN doc
-```
-
-Only return documents whose `attr` attribute and its nested `text` attribute are
-indexed as nested fields:
-
-```aql
-FOR doc IN viewName
- SEARCH doc.attr[? FILTER EXISTS(CURRENT.text, "nested")]
- RETURN doc
-```
-
-Only return documents from the collection `coll` whose `text` attribute is indexed
-as a nested field by an inverted index:
-
-```aql
-FOR doc IN coll OPTIONS { indexHint: "inv-idx", forceIndexHint: true }
- FILTER EXISTS(doc.text, "nested")
- RETURN doc
-```
-
-Only return documents whose `attr` attribute and its nested `text` attribute are
-indexed as nested fields:
-
-```aql
-FOR doc IN coll OPTIONS { indexHint: "inv-idx", forceIndexHint: true }
- FILTER doc.attr[? FILTER EXISTS(CURRENT.text, "nested")]
- RETURN doc
-```
-
-### IN_RANGE()
-
-`IN_RANGE(path, low, high, includeLow, includeHigh) → included`
-
-Match documents where the attribute at `path` is greater than (or equal to)
-`low` and less than (or equal to) `high`.
-
-You can use `IN_RANGE()` for searching more efficiently compared to an equivalent
-expression that combines two comparisons with a logical conjunction:
-
-- `IN_RANGE(path, low, high, true, true)` instead of `low <= value AND value <= high`
-- `IN_RANGE(path, low, high, true, false)` instead of `low <= value AND value < high`
-- `IN_RANGE(path, low, high, false, true)` instead of `low < value AND value <= high`
-- `IN_RANGE(path, low, high, false, false)` instead of `low < value AND value < high`
-
-`low` and `high` can be numbers or strings (technically also `null`, `true`
-and `false`), but the data type must be the same for both.
-
-{{< warning >}}
-The alphabetical order of characters is not taken into account by ArangoSearch,
-i.e. range queries in SEARCH operations against Views will not follow the
-language rules as per the defined Analyzer locale (except for the
-[`collation` Analyzer](../../index-and-search/analyzers.md#collation)) nor the server language
-(startup option `--default-language`)!
-Also see [Known Issues](../../release-notes/version-3.11/known-issues-in-3-11.md#arangosearch).
-{{< /warning >}}
-
-There is a corresponding [`IN_RANGE()` Miscellaneous Function](miscellaneous.md#in_range)
-that is used outside of `SEARCH` operations.
-
-- **path** (attribute path expression):
- the path of the attribute to test in the document
-- **low** (number\|string): minimum value of the desired range
-- **high** (number\|string): maximum value of the desired range
-- **includeLow** (bool): whether the minimum value shall be included in
- the range (left-closed interval) or not (left-open interval)
-- **includeHigh** (bool): whether the maximum value shall be included in
- the range (right-closed interval) or not (right-open interval)
-- returns **included** (bool): whether `value` is in the range
-
-If `low` and `high` are the same, but `includeLow` and/or `includeHigh` is set
-to `false`, then nothing will match. If `low` is greater than `high`, nothing
-will match either.
-
-#### Example: Using numeric ranges
-
-To match documents with the attribute `value >= 3` and `value <= 5` using the
-default `"identity"` Analyzer you would write the following query:
-
-```aql
-FOR doc IN viewName
- SEARCH IN_RANGE(doc.value, 3, 5, true, true)
- RETURN doc.value
-```
-
-This will also match documents which have an array of numbers as `value`
-attribute where at least one of the numbers is in the specified boundaries.
-
-#### Example: Using string ranges
-
-Using string boundaries and a text Analyzer allows you to match documents that
-have at least one token within the specified character range:
-
-```aql
-FOR doc IN valView
- SEARCH ANALYZER(IN_RANGE(doc.value, "a","f", true, false), "text_en")
- RETURN doc
-```
-
-This will match `{ "value": "bar" }` and `{ "value": "foo bar" }` because the
-_b_ of _bar_ is in the range (`"a" <= "b" < "f"`), but not `{ "value": "foo" }`
-because the _f_ of _foo_ is excluded (`high` is "f" but `includeHigh` is false).
-
-### MIN_MATCH()
-
-`MIN_MATCH(expr1, ... exprN, minMatchCount) → fulfilled`
-
-Match documents where at least `minMatchCount` of the specified
-search expressions are satisfied.
-
-There is a corresponding [`MIN_MATCH()` Miscellaneous function](miscellaneous.md#min_match)
-that is used outside of `SEARCH` operations.
-
-- **expr** (expression, _repeatable_): any valid search expression
-- **minMatchCount** (number): minimum number of search expressions that should
- be satisfied
-- returns **fulfilled** (bool): whether at least `minMatchCount` of the
- specified expressions are `true`
-
-#### Example: Matching a subset of search sub-expressions
-
-Assuming a View with a text Analyzer, you may use it to match documents where
-the attribute contains at least two out of three tokens:
-
-```aql
-LET t = TOKENS("quick brown fox", "text_en")
-FOR doc IN viewName
- SEARCH ANALYZER(MIN_MATCH(doc.text == t[0], doc.text == t[1], doc.text == t[2], 2), "text_en")
- RETURN doc.text
-```
-
-This will match `{ "text": "the quick brown fox" }` and `{ "text": "some brown fox" }`,
-but not `{ "text": "snow fox" }` which only fulfills one of the conditions.
-
-Note that you can also use the `AT LEAST` [array comparison operator](../high-level-operations/search.md#array-comparison-operators)
-in the specific case of matching a subset of tokens against a single attribute:
-
-```aql
-FOR doc IN viewName
- SEARCH ANALYZER(TOKENS("quick brown fox", "text_en") AT LEAST (2) == doc.text, "text_en")
- RETURN doc.text
-```
-
-### MINHASH_MATCH()
-
-`MINHASH_MATCH(path, target, threshold, analyzer) → fulfilled`
-
-Match documents with an approximate Jaccard similarity of at least the
-`threshold`, approximated with the specified `minhash` Analyzer.
-
-To only compute the MinHash signatures, see the
-[`MINHASH()` Miscellaneous function](miscellaneous.md#minhash).
-
-- **path** (attribute path expression\|string): the path of the attribute in
- a document or a string
-- **target** (string): the string to hash with the specified Analyzer and to
- compare against the stored attribute
-- **threshold** (number, _optional_): a value between `0.0` and `1.0`.
-- **analyzer** (string): the name of a [`minhash` Analyzer](../../index-and-search/analyzers.md#minhash).
-- returns **fulfilled** (bool): `true` if the approximate Jaccard similarity
- is greater than or equal to the specified threshold, `false` otherwise
-
-#### Example: Find documents with a text similar to a target text
-
-Assuming a View with a `minhash` Analyzer, you can use the stored
-MinHash signature to find candidates for the more expensive Jaccard similarity
-calculation:
-
-```aql
-LET target = "the quick brown fox jumps over the lazy dog"
-LET targetSignature = TOKENS(target, "myMinHash")
-
-FOR doc IN viewName
- SEARCH MINHASH_MATCH(doc.text, target, 0.5, "myMinHash") // approximation
- LET jaccard = JACCARD(targetSignature, TOKENS(doc.text, "myMinHash"))
- FILTER jaccard > 0.75
- SORT jaccard DESC
- RETURN doc.text
-```
-
-### NGRAM_MATCH()
-
-`NGRAM_MATCH(path, target, threshold, analyzer) → fulfilled`
-
-Match documents whose attribute value has an
-[_n_-gram similarity](https://webdocs.cs.ualberta.ca/~kondrak/papers/spire05.pdf)
-higher than the specified threshold compared to the target value.
-
-The similarity is calculated by counting how long the longest sequence of
-matching _n_-grams is, divided by the target's total _n_-gram count.
-Only fully matching _n_-grams are counted.
-
-The _n_-grams for both attribute and target are produced by the specified
-Analyzer. Increasing the _n_-gram length will increase accuracy, but reduce
-error tolerance. In most cases a size of 2 or 3 will be a good choice.
-
-Also see the String Functions
-[`NGRAM_POSITIONAL_SIMILARITY()`](string.md#ngram_positional_similarity)
-and [`NGRAM_SIMILARITY()`](string.md#ngram_similarity)
-for calculating _n_-gram similarity that cannot be accelerated by a View index.
-
-- **path** (attribute path expression\|string): the path of the attribute in
- a document or a string
-- **target** (string): the string to compare against the stored attribute
-- **threshold** (number, _optional_): a value between `0.0` and `1.0`. Defaults
- to `0.7` if none is specified.
-- **analyzer** (string): the name of an [Analyzer](../../index-and-search/analyzers.md).
-- returns **fulfilled** (bool): `true` if the evaluated _n_-gram similarity value
- is greater than or equal to the specified threshold, `false` otherwise
-
-{{< info >}}
-Use an Analyzer of type `ngram` with `preserveOriginal: false` and `min` equal
-to `max`. Otherwise, the similarity score calculated internally will be lower
-than expected.
-
-The Analyzer must have the `"position"` and `"frequency"` features enabled or
-the `NGRAM_MATCH()` function will not find anything.
-{{< /info >}}
-
-#### Example: Using a custom bigram Analyzer
-
-Given a View indexing an attribute `text`, a custom _n_-gram Analyzer `"bigram"`
-(`min: 2, max: 2, preserveOriginal: false, streamType: "utf8"`) and a document
-`{ "text": "quick red fox" }`, the following query would match it (with a
-threshold of `1.0`):
-
-```aql
-FOR doc IN viewName
- SEARCH NGRAM_MATCH(doc.text, "quick fox", "bigram")
- RETURN doc.text
-```
-
-The following will also match (note the low threshold value):
-
-```aql
-FOR doc IN viewName
- SEARCH NGRAM_MATCH(doc.text, "quick blue fox", 0.4, "bigram")
- RETURN doc.text
-```
-
-The following will not match (note the high threshold value):
-
-```aql
-FOR doc IN viewName
- SEARCH NGRAM_MATCH(doc.text, "quick blue fox", 0.9, "bigram")
- RETURN doc.text
-```
-
-#### Example: Using constant values
-
-`NGRAM_MATCH()` can be called with constant arguments, but for such calls the
-`analyzer` argument is mandatory (even for calls inside of a `SEARCH` clause):
-
-```aql
-FOR doc IN viewName
- SEARCH NGRAM_MATCH("quick fox", "quick blue fox", 0.9, "bigram")
- RETURN doc.text
-```
-
-```aql
-RETURN NGRAM_MATCH("quick fox", "quick blue fox", "bigram")
-```
-
-### PHRASE()
-
-`PHRASE(path, phrasePart, analyzer)`
-
-`PHRASE(path, phrasePart1, skipTokens1, ... phrasePartN, skipTokensN, analyzer)`
-
-`PHRASE(path, [ phrasePart1, skipTokens1, ... phrasePartN, skipTokensN ], analyzer)`
-
-Search for a phrase in the referenced attribute. It only matches documents in
-which the tokens appear in the specified order. To search for tokens in any
-order use [`TOKENS()`](string.md#tokens) instead.
-
-The phrase can be expressed as an arbitrary number of `phraseParts` separated
-by a *skipTokens* number of tokens (wildcards), either as separate arguments
-or as an array as the second argument.
-
-- **path** (attribute path expression): the attribute to test in the document
-- **phrasePart** (string\|array\|object): text to search for in the tokens.
- Can also be an [array](#example-using-phrase-with-an-array-of-tokens)
- comprised of string, array and [object tokens](#object-tokens), or tokens
- interleaved with numbers of `skipTokens`. The specified `analyzer` is applied
- to string and array tokens, but not for object tokens.
-- **skipTokens** (number, _optional_): amount of tokens to treat
- as wildcards
-- **analyzer** (string, _optional_): name of an [Analyzer](../../index-and-search/analyzers.md).
- Uses the Analyzer of a wrapping `ANALYZER()` call if not specified or
- defaults to `"identity"`
-- returns nothing: the function evaluates to a boolean, but this value cannot be
- returned. The function can only be called in a search expression. It throws
- an error if used outside of a [`SEARCH` operation](../high-level-operations/search.md) or
- a `FILTER` operation that uses an inverted index.
-
-{{< info >}}
-The selected Analyzer must have the `"position"` and `"frequency"` features
-enabled. The `PHRASE()` function will otherwise not find anything.
-{{< /info >}}
-
-#### Object tokens
-
-- `{IN_RANGE: [low, high, includeLow, includeHigh]}`:
- see [`IN_RANGE()`](#in_range). *low* and *high* can only be strings.
-- `{LEVENSHTEIN_MATCH: [token, maxDistance, transpositions, maxTerms, prefix]}`:
- - `token` (string): a string to search
- - `maxDistance` (number): maximum Levenshtein / Damerau-Levenshtein distance
- - `transpositions` (bool, _optional_): if set to `false`, a Levenshtein
- distance is computed, otherwise a Damerau-Levenshtein distance (default)
- - `maxTerms` (number, _optional_): consider only a specified number of the
- most relevant terms. One can pass `0` to consider all matched terms, but it may
- impact performance negatively. The default value is `64`.
- - `prefix` (string, _optional_): if defined, then a search for the exact
- prefix is carried out, using the matches as candidates. The Levenshtein /
- Damerau-Levenshtein distance is then computed for each candidate using the
- remainders of the strings. This option can improve performance in cases where
- there is a known common prefix. The default value is an empty string
- (introduced in v3.7.13, v3.8.1).
-- `{STARTS_WITH: [prefix]}`: see [`STARTS_WITH()`](#starts_with).
- Array brackets are optional
-- `{TERM: [token]}`: equal to `token` but without Analyzer tokenization.
- Array brackets are optional
-- `{TERMS: [token1, ..., tokenN]}`: one of `token1, ..., tokenN` can be found
- in specified position. Inside an array the object syntax can be replaced with
- the object field value, e.g., `[..., [token1, ..., tokenN], ...]`.
-- `{WILDCARD: [token]}`: see [`LIKE()`](#like).
- Array brackets are optional
-
-An array token inside an array can be used in the `TERMS` case only.
-
-Also see [Example: Using object tokens](#example-using-object-tokens).
-
-#### Example: Using a text Analyzer for a phrase search
-
-Given a View indexing an attribute `text` with the `"text_en"` Analyzer and a
-document `{ "text": "Lorem ipsum dolor sit amet, consectetur adipiscing elit" }`,
-the following query would match it:
-
-```aql
-FOR doc IN viewName
- SEARCH PHRASE(doc.text, "lorem ipsum", "text_en")
- RETURN doc.text
-```
-
-However, the following search expression does not match it, because the tokens
-`"ipsum"` and `"lorem"` do not appear in this order:
-
-```aql
-PHRASE(doc.text, "ipsum lorem", "text_en")
-```
-
-#### Example: Skip tokens for a proximity search
-
-To match `"ipsum"` and `"amet"` with any two tokens in between, you can use the
-following search expression:
-
-```aql
-PHRASE(doc.text, "ipsum", 2, "amet", "text_en")
-```
-
-The `skipTokens` value of `2` defines how many wildcard tokens have to appear
-between *ipsum* and *amet*. A `skipTokens` value of `0` means that the tokens
-must be adjacent. Negative values are allowed, but not very useful. These three
-search expressions are equivalent:
-
-```aql
-PHRASE(doc.text, "lorem ipsum", "text_en")
-PHRASE(doc.text, "lorem", 0, "ipsum", "text_en")
-PHRASE(doc.text, "ipsum", -1, "lorem", "text_en")
-```
-
-#### Example: Using `PHRASE()` with an array of tokens
-
-The `PHRASE()` function also accepts an array as second argument with
-`phrasePart` and `skipTokens` parameters as elements.
-
-```aql
-FOR doc IN myView SEARCH PHRASE(doc.title, ["quick brown fox"], "text_en") RETURN doc
-FOR doc IN myView SEARCH PHRASE(doc.title, ["quick", "brown", "fox"], "text_en") RETURN doc
-```
-
-This syntax variation enables the usage of computed expressions:
-
-```aql
-LET proximityCondition = [ "foo", ROUND(RAND()*10), "bar" ]
-FOR doc IN viewName
- SEARCH PHRASE(doc.text, proximityCondition, "text_en")
- RETURN doc
-```
-
-```aql
-LET tokens = TOKENS("quick brown fox", "text_en") // ["quick", "brown", "fox"]
-FOR doc IN myView SEARCH PHRASE(doc.title, tokens, "text_en") RETURN doc
-```
-
-The above example is equivalent to the more cumbersome and static form:
-
-```aql
-FOR doc IN myView SEARCH PHRASE(doc.title, "quick", 0, "brown", 0, "fox", "text_en") RETURN doc
-```
-
-You can optionally specify the number of skipTokens in the array form before
-every string element:
-
-```aql
-FOR doc IN myView SEARCH PHRASE(doc.title, ["quick", 1, "fox", "jumps"], "text_en") RETURN doc
-```
-
-It is the same as the following:
-
-```aql
-FOR doc IN myView SEARCH PHRASE(doc.title, "quick", 1, "fox", 0, "jumps", "text_en") RETURN doc
-```
-
-#### Example: Handling of arrays with no members
-
-Empty arrays are skipped:
-
-```aql
-FOR doc IN myView SEARCH PHRASE(doc.title, "quick", 1, [], 1, "jumps", "text_en") RETURN doc
-```
-
-The query is equivalent to:
-
-```aql
-FOR doc IN myView SEARCH PHRASE(doc.title, "quick", 2 "jumps", "text_en") RETURN doc
-```
-
-Providing only empty arrays is valid, but will yield no results.
-
-#### Example: Using object tokens
-
-Using object tokens `STARTS_WITH`, `WILDCARD`, `LEVENSHTEIN_MATCH`, `TERMS` and
-`IN_RANGE`:
-
-```aql
-FOR doc IN myView SEARCH PHRASE(doc.title,
- {STARTS_WITH: ["qui"]}, 0,
- {WILDCARD: ["b%o_n"]}, 0,
- {LEVENSHTEIN_MATCH: ["foks", 2]}, 0,
- {TERMS: ["jump", "run"]}, 0, // Analyzer not applied!
- {IN_RANGE: ["over", "through", true, false]},
- "text_en") RETURN doc
-```
-
-Note that the `text_en` Analyzer has stemming enabled, but for object tokens
-the Analyzer isn't applied. `{TERMS: ["jumps", "runs"]}` would not match the
-indexed (and stemmed!) attribute value. Therefore, the trailing `s` which would
-be stemmed away is removed from both words manually in the example.
-
-The above example is equivalent to:
-
-```aql
-FOR doc IN myView SEARCH PHRASE(doc.title,
-[
- {STARTS_WITH: "qui"}, 0,
- {WILDCARD: "b%o_n"}, 0,
- {LEVENSHTEIN_MATCH: ["foks", 2]}, 0,
- ["jumps", "runs"], 0, // Analyzer is applied using this syntax
- {IN_RANGE: ["over", "through", true, false]}
-], "text_en") RETURN doc
-```
-
-### STARTS_WITH()
-
-`STARTS_WITH(path, prefix) → startsWith`
-
-Match documents where the value of the attribute starts with `prefix`. If the
-attribute is processed by a tokenizing Analyzer (type `"text"` or `"delimiter"`)
-or if it is an array, then a single token/element starting with the prefix is
-sufficient to match the document.
-
-{{< warning >}}
-The alphabetical order of characters is not taken into account by ArangoSearch,
-i.e. range queries in SEARCH operations against Views will not follow the
-language rules as per the defined Analyzer locale (except for the
-[`collation` Analyzer](../../index-and-search/analyzers.md#collation)) nor the server language
-(startup option `--default-language`)!
-Also see [Known Issues](../../release-notes/version-3.11/known-issues-in-3-11.md#arangosearch).
-{{< /warning >}}
-
-There is a corresponding [`STARTS_WITH()` String function](string.md#starts_with)
-that is used outside of `SEARCH` operations.
-
-- **path** (attribute path expression): the path of the attribute to compare
- against in the document
-- **prefix** (string): a string to search at the start of the text
-- returns **startsWith** (bool): whether the specified attribute starts with
- the given prefix
-
----
-
-`STARTS_WITH(path, prefixes, minMatchCount) → startsWith`
-
-Match documents where the value of the attribute starts with one of the
-`prefixes`, or optionally with at least `minMatchCount` of the prefixes.
-
-- **path** (attribute path expression): the path of the attribute to compare
- against in the document
-- **prefixes** (array): an array of strings to search at the start of the text
-- **minMatchCount** (number, _optional_): minimum number of search prefixes
- that should be satisfied (see
- [example](#example-searching-for-one-or-multiple-prefixes)). The default is `1`
-- returns **startsWith** (bool): whether the specified attribute starts with at
- least `minMatchCount` of the given prefixes
-
-#### Example: Searching for an exact value prefix
-
-To match a document `{ "text": "lorem ipsum..." }` using a prefix and the
-`"identity"` Analyzer you can use it like this:
-
-```aql
-FOR doc IN viewName
- SEARCH STARTS_WITH(doc.text, "lorem ip")
- RETURN doc
-```
-
-#### Example: Searching for a prefix in text
-
-This query will match `{ "text": "lorem ipsum" }` as well as
-`{ "text": [ "lorem", "ipsum" ] }` given a View which indexes the `text`
-attribute and processes it with the `"text_en"` Analyzer:
-
-```aql
-FOR doc IN viewName
- SEARCH ANALYZER(STARTS_WITH(doc.text, "ips"), "text_en")
- RETURN doc.text
-```
-
-Note that it will not match `{ "text": "IPS (in-plane switching)" }` without
-modification to the query. The prefix was passed to `STARTS_WITH()` as-is,
-but the built-in `text_en` Analyzer used for indexing has stemming enabled.
-So the indexed values are the following:
-
-```aql
-RETURN TOKENS("IPS (in-plane switching)", "text_en")
-```
-
-```json
-[
- [
- "ip",
- "in",
- "plane",
- "switch"
- ]
-]
-```
-
-The *s* is removed from *ips*, which leads to the prefix *ips* not matching
-the indexed token *ip*. You may either create a custom text Analyzer with
-stemming disabled to avoid this issue, or apply stemming to the prefixes:
-
-```aql
-FOR doc IN viewName
- SEARCH ANALYZER(STARTS_WITH(doc.text, TOKENS("ips", "text_en")), "text_en")
- RETURN doc.text
-```
-
-#### Example: Searching for one or multiple prefixes
-
-The `STARTS_WITH()` function accepts an array of prefix alternatives of which
-only one has to match:
-
-```aql
-FOR doc IN viewName
- SEARCH ANALYZER(STARTS_WITH(doc.text, ["something", "ips"]), "text_en")
- RETURN doc.text
-```
-
-It will match a document `{ "text": "lorem ipsum" }` but also
-`{ "text": "that is something" }`, as at least one of the words start with a
-given prefix.
-
-The same query again, but with an explicit `minMatchCount`:
-
-```aql
-FOR doc IN viewName
- SEARCH ANALYZER(STARTS_WITH(doc.text, ["wrong", "ips"], 1), "text_en")
- RETURN doc.text
-```
-
-The number can be increased to require that at least this many prefixes are
-present:
-
-```aql
-FOR doc IN viewName
- SEARCH ANALYZER(STARTS_WITH(doc.text, ["lo", "ips", "something"], 2), "text_en")
- RETURN doc.text
-```
-
-This will still match `{ "text": "lorem ipsum" }` because at least two prefixes
-(`lo` and `ips`) are found, but not `{ "text": "that is something" }` which only
-contains one of the prefixes (`something`).
-
-### LEVENSHTEIN_MATCH()
-
-`LEVENSHTEIN_MATCH(path, target, distance, transpositions, maxTerms, prefix) → fulfilled`
-
-Match documents with a [Damerau-Levenshtein distance](https://en.wikipedia.org/wiki/Damerau%E2%80%93Levenshtein_distance)
-lower than or equal to `distance` between the stored attribute value and
-`target`. It can optionally match documents using a pure Levenshtein distance.
-
-See [`LEVENSHTEIN_DISTANCE()`](string.md#levenshtein_distance)
-if you want to calculate the edit distance of two strings.
-
-- **path** (attribute path expression\|string): the path of the attribute to
- compare against in the document or a string
-- **target** (string): the string to compare against the stored attribute
-- **distance** (number): the maximum edit distance, which can be between
- `0` and `4` if `transpositions` is `false`, and between `0` and `3` if
- it is `true`
-- **transpositions** (bool, _optional_): if set to `false`, a Levenshtein
- distance is computed, otherwise a Damerau-Levenshtein distance (default)
-- **maxTerms** (number, _optional_): consider only a specified number of the
- most relevant terms. One can pass `0` to consider all matched terms, but it may
- impact performance negatively. The default value is `64`.
-- **prefix** (string, _optional_): if defined, then a search for the exact
-  prefix is carried out, using the matches as candidates. The Levenshtein /
-  Damerau-Levenshtein distance is then computed for each candidate using
-  the `target` value and the remainders of the strings, which means that the
-  **prefix needs to be removed from `target`** (see
-  [example](#example-matching-with-prefix-search)). This option can improve
-  performance in cases where there is a known common prefix. The default value
-  is an empty string (introduced in v3.7.13, v3.8.1).
-- returns **fulfilled** (bool): `true` if the calculated distance is less than
-  or equal to *distance*, `false` otherwise
-
-#### Example: Matching with and without transpositions
-
-The Levenshtein distance between _quick_ and _quikc_ is `2` because it requires
-two operations to go from one to the other (remove _k_, insert _k_ at a
-different position).
-
-```aql
-FOR doc IN viewName
- SEARCH LEVENSHTEIN_MATCH(doc.text, "quikc", 2, false) // matches "quick"
- RETURN doc.text
-```
-
-The Damerau-Levenshtein distance is `1` (move _k_ to the end).
-
-```aql
-FOR doc IN viewName
- SEARCH LEVENSHTEIN_MATCH(doc.text, "quikc", 1) // matches "quick"
- RETURN doc.text
-```
-
-#### Example: Matching with prefix search
-
-Match documents with a Levenshtein distance of 1 with the prefix `qui`. The edit
-distance is calculated using the search term `kc` (`quikc` with the prefix `qui`
-removed) and the stored value without the prefix (e.g. `ck`). The prefix `qui`
-is constant.
-
-```aql
-FOR doc IN viewName
- SEARCH LEVENSHTEIN_MATCH(doc.text, "kc", 1, false, 64, "qui") // matches "quick"
- RETURN doc.text
-```
-
-You may compute the prefix and suffix from the input string as follows:
-
-```aql
-LET input = "quikc"
-LET prefixSize = 3
-LET prefix = LEFT(input, prefixSize)
-LET suffix = SUBSTRING(input, prefixSize)
-FOR doc IN viewName
- SEARCH LEVENSHTEIN_MATCH(doc.text, suffix, 1, false, 64, prefix) // matches "quick"
- RETURN doc.text
-```
-
-#### Example: Basing the edit distance on string length
-
-You may want to pick the maximum edit distance based on string length.
-If the stored attribute is the string _quick_ and the target string is
-_quicksands_, then the Levenshtein distance is 5, with 50% of the
-characters mismatching. If the inputs are _q_ and _qu_, then the distance
-is only 1, although it is also a 50% mismatch.
-
-```aql
-LET target = "input"
-LET targetLength = LENGTH(target)
-LET maxDistance = (targetLength > 5 ? 2 : (targetLength >= 3 ? 1 : 0))
-FOR doc IN viewName
- SEARCH LEVENSHTEIN_MATCH(doc.text, target, maxDistance, true)
- RETURN doc.text
-```
-
-### LIKE()
-
-`LIKE(path, search) → bool`
-
-Check whether the pattern `search` is contained in the attribute denoted by `path`,
-using wildcard matching.
-
-- `_`: A single arbitrary character
-- `%`: Zero, one or many arbitrary characters
-- `\\_`: A literal underscore
-- `\\%`: A literal percent sign
-
-{{< info >}}
-Literal backslashes require different amounts of escaping depending on the
-context:
-- `\` in bind variables (_Table_ view mode) in the web interface (automatically
- escaped to `\\` unless the value is wrapped in double quotes and already
- escaped properly)
-- `\\` in bind variables (_JSON_ view mode) and queries in the web interface
-- `\\` in bind variables in arangosh
-- `\\\\` in queries in arangosh
-- Double the amount compared to arangosh in shells that use backslashes for
-  escaping (`\\\\` in bind variables and `\\\\\\\\` in queries)
-{{< /info >}}
-
-Searching with the `LIKE()` function in the context of a `SEARCH` operation
-is backed by View indexes. The [String `LIKE()` function](string.md#like),
-on the other hand, is used in other contexts such as `FILTER` operations and
-cannot be accelerated by any sort of index. Another difference is that the
-ArangoSearch variant does not accept a third argument to enable
-case-insensitive matching. This can be controlled with Analyzers instead.
-
-- **path** (attribute path expression): the path of the attribute to compare
- against in the document
-- **search** (string): a search pattern that can contain the wildcard characters
- `%` (meaning any sequence of characters, including none) and `_` (any single
- character). Literal `%` and `_` must be escaped with backslashes.
-- returns **bool** (bool): `true` if the pattern is contained in the attribute
-  denoted by `path`, and `false` otherwise
-
-#### Example: Searching with wildcards
-
-```aql
-FOR doc IN viewName
- SEARCH ANALYZER(LIKE(doc.text, "foo%b_r"), "text_en")
- RETURN doc.text
-```
-
-`LIKE` can also be used in operator form:
-
-```aql
-FOR doc IN viewName
- SEARCH ANALYZER(doc.text LIKE "foo%b_r", "text_en")
- RETURN doc.text
-```
-
-## Geo functions
-
-The following functions can be accelerated by View indexes. There are
-corresponding [Geo Functions](geo.md) for the regular geo index
-type, but also general purpose functions such as GeoJSON constructors that can
-be used in conjunction with ArangoSearch.
-
-### GEO_CONTAINS()
-
-Introduced in: v3.8.0
-
-`GEO_CONTAINS(geoJsonA, geoJsonB) → bool`
-
-Checks whether the [GeoJSON object](geo.md#geojson) `geoJsonA`
-fully contains `geoJsonB` (every point in B is also in A).
-
-- **geoJsonA** (object\|array): first GeoJSON object or coordinate array
- (in longitude, latitude order)
-- **geoJsonB** (object\|array): second GeoJSON object or coordinate array
- (in longitude, latitude order)
-- returns **bool** (bool): `true` when every point in B is also contained in A,
- `false` otherwise
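-
-A minimal sketch, assuming a View `locationsView` whose GeoJSON `location`
-attribute is indexed with a custom Analyzer named `geojson` (both names are
-illustrative): find all documents whose location lies within a polygon.
-
-```aql
-LET area = GEO_POLYGON([
-  [6.02, 50.63], [6.24, 50.63], [6.24, 50.78], [6.02, 50.78], [6.02, 50.63]
-])
-FOR doc IN locationsView
-  SEARCH ANALYZER(GEO_CONTAINS(area, doc.location), "geojson")
-  RETURN doc
-```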
-
-### GEO_DISTANCE()
-
-Introduced in: v3.8.0
-
-`GEO_DISTANCE(geoJsonA, geoJsonB) → distance`
-
-Return the distance between two [GeoJSON objects](geo.md#geojson),
-measured from the `centroid` of each shape.
-
-- **geoJsonA** (object\|array): first GeoJSON object or coordinate array
- (in longitude, latitude order)
-- **geoJsonB** (object\|array): second GeoJSON object or coordinate array
- (in longitude, latitude order)
-- returns **distance** (number): the distance between the centroid points of
- the two objects on the reference ellipsoid
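-
-For example, to find documents within two kilometers of a point and sort them
-by distance (same illustrative `locationsView` and `geojson` Analyzer as
-above):
-
-```aql
-LET point = GEO_POINT(6.93, 50.94)
-FOR doc IN locationsView
-  SEARCH ANALYZER(GEO_DISTANCE(doc.location, point) < 2000, "geojson")
-  SORT GEO_DISTANCE(doc.location, point) ASC
-  RETURN doc
-```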
-
-### GEO_IN_RANGE()
-
-Introduced in: v3.8.0
-
-`GEO_IN_RANGE(geoJsonA, geoJsonB, low, high, includeLow, includeHigh) → bool`
-
-Checks whether the distance between two [GeoJSON objects](geo.md#geojson)
-lies within a given interval. The distance is measured from the `centroid` of
-each shape.
-
-- **geoJsonA** (object\|array): first GeoJSON object or coordinate array
- (in longitude, latitude order)
-- **geoJsonB** (object\|array): second GeoJSON object or coordinate array
- (in longitude, latitude order)
-- **low** (number): minimum value of the desired range
-- **high** (number): maximum value of the desired range
-- **includeLow** (bool, optional): whether the minimum value shall be included
- in the range (left-closed interval) or not (left-open interval). The default
- value is `true`
-- **includeHigh** (bool): whether the maximum value shall be included in the
- range (right-closed interval) or not (right-open interval). The default value
- is `true`
-- returns **bool** (bool): whether the evaluated distance lies within the range
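-
-For example, to match documents that are between one and two kilometers away
-from a point (same illustrative names as above):
-
-```aql
-LET point = GEO_POINT(6.93, 50.94)
-FOR doc IN locationsView
-  SEARCH ANALYZER(GEO_IN_RANGE(doc.location, point, 1000, 2000), "geojson")
-  RETURN doc
-```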
-
-### GEO_INTERSECTS()
-
-Introduced in: v3.8.0
-
-`GEO_INTERSECTS(geoJsonA, geoJsonB) → bool`
-
-Checks whether the [GeoJSON object](geo.md#geojson) `geoJsonA`
-intersects with `geoJsonB` (i.e. at least one point of B is in A or vice versa).
-
-- **geoJsonA** (object\|array): first GeoJSON object or coordinate array
- (in longitude, latitude order)
-- **geoJsonB** (object\|array): second GeoJSON object or coordinate array
- (in longitude, latitude order)
-- returns **bool** (bool): `true` if A and B intersect, `false` otherwise
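-
-For example, to find documents whose stored geometry intersects a given
-rectangle (same illustrative names as above):
-
-```aql
-LET rect = GEO_POLYGON([
-  [6.0, 50.6], [6.3, 50.6], [6.3, 50.8], [6.0, 50.8], [6.0, 50.6]
-])
-FOR doc IN locationsView
-  SEARCH ANALYZER(GEO_INTERSECTS(rect, doc.location), "geojson")
-  RETURN doc
-```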
-
-## Scoring Functions
-
-Scoring functions return a ranking value for the documents found by a
-[SEARCH operation](../high-level-operations/search.md). The better the documents match
-the search expression the higher the returned number.
-
-The first argument to any scoring function is always the document emitted by
-a `FOR` operation over an `arangosearch` View.
-
-To sort the result set by relevance, with the more relevant documents coming
-first, sort in **descending order** by the score (e.g. `SORT BM25(...) DESC`).
-
-You may calculate custom scores based on a scoring function using document
-attributes and numeric functions (e.g. `TFIDF(doc) * LOG(doc.value)`):
-
-```aql
-FOR movie IN imdbView
- SEARCH PHRASE(movie.title, "Star Wars", "text_en")
- SORT BM25(movie) * LOG(movie.runtime + 1) DESC
- RETURN movie
-```
-
-Sorting by more than one score is allowed. You may also sort by a mix of
-scores and attributes from multiple Views as well as collections:
-
-```aql
-FOR a IN viewA
- FOR c IN coll
- FOR b IN viewB
- SORT TFIDF(b), c.name, BM25(a)
- ...
-```
-
-### BM25()
-
-`BM25(doc, k, b) → score`
-
-Sorts documents using the
-[**Best Matching 25** algorithm](https://en.wikipedia.org/wiki/Okapi_BM25)
-(Okapi BM25).
-
-- **doc** (document): must be emitted by `FOR ... IN viewName`
-- **k** (number, _optional_): calibrates the text term frequency scaling.
- The value needs to be non-negative (`0.0` or higher), or the returned
- score is an undefined value that may cause unpredictable results.
- The default is `1.2`. A `k` value of `0` corresponds to a binary model
- (no term frequency), and a large value corresponds to using raw term frequency
-- **b** (number, _optional_): determines the scaling by the total text length.
- The value needs to be between `0.0` and `1.0` (inclusive), or the returned
- score is an undefined value that may cause unpredictable results.
- The default is `0.75`. At the extreme values of the coefficient `b`, BM25
- turns into the ranking functions known as:
- - BM11 for `b` = `1` (corresponds to fully scaling the term weight by the
- total text length)
- - BM15 for `b` = `0` (corresponds to no length normalization)
-- returns **score** (number): computed ranking value
-
-{{< info >}}
-The Analyzers used for indexing document attributes must have the `"frequency"`
-feature enabled. The `BM25()` function will otherwise return a score of 0.
-The Analyzers should have the `"norm"` feature enabled, too, or normalization
-will be disabled, which is not meaningful for BM25 and BM11. BM15 does not need
-the `"norm"` feature as it has no length normalization.
-{{< /info >}}
-
-#### Example: Sorting by default `BM25()` score
-
-Sorting by relevance with BM25 at default settings:
-
-```aql
-FOR doc IN viewName
- SEARCH ...
- SORT BM25(doc) DESC
- RETURN doc
-```
-
-#### Example: Sorting with tuned `BM25()` ranking
-
-Sorting by relevance, with double-weighted term frequency and with full text
-length normalization:
-
-```aql
-FOR doc IN viewName
- SEARCH ...
- SORT BM25(doc, 2.4, 1) DESC
- RETURN doc
-```
-
-### TFIDF()
-
-`TFIDF(doc, normalize) → score`
-
-Sorts documents using the
-[**term frequency–inverse document frequency** algorithm](https://en.wikipedia.org/wiki/TF-IDF)
-(TF-IDF).
-
-- **doc** (document): must be emitted by `FOR ... IN viewName`
-- **normalize** (bool, _optional_): specifies whether scores should be
- normalized. The default is `false`.
-- returns **score** (number): computed ranking value
-
-{{< info >}}
-The Analyzers used for indexing document attributes must have the `"frequency"`
-feature enabled. The `TFIDF()` function will otherwise return a score of 0.
-The Analyzers need to have the `"norm"` feature enabled, too, if you want to use
-`TFIDF()` with the `normalize` parameter set to `true`.
-{{< /info >}}
-
-#### Example: Sorting by default `TFIDF()` score
-
-Sort by relevance using the TF-IDF score:
-
-```aql
-FOR doc IN viewName
- SEARCH ...
- SORT TFIDF(doc) DESC
- RETURN doc
-```
-
-#### Example: Sorting by `TFIDF()` score with normalization
-
-Sort by relevance using a normalized TF-IDF score:
-
-```aql
-FOR doc IN viewName
- SEARCH ...
- SORT TFIDF(doc, true) DESC
- RETURN doc
-```
-
-#### Example: Sort by value and `TFIDF()`
-
-Sort by the value of the `text` attribute in ascending order, then by the TFIDF
-score in descending order where the attribute values are equivalent:
-
-```aql
-FOR doc IN viewName
- SEARCH ...
- SORT doc.text, TFIDF(doc) DESC
- RETURN doc
-```
-
-## Search Highlighting Functions
-
-{{< tag "ArangoDB Enterprise Edition" "ArangoGraph" >}}
-
-### OFFSET_INFO()
-
-`OFFSET_INFO(doc, paths) → offsetInfo`
-
-Returns the attribute paths and substring offsets of matched terms, phrases, or
-_n_-grams for search highlighting purposes.
-
-- **doc** (document): must be emitted by `FOR ... IN viewName`
-- **paths** (string\|array): a string or an array of strings, each describing an
- attribute and array element path you want to get the offsets for. Use `.` to
- access nested objects, and `[n]` with `n` being an array index to specify array
- elements. The attributes need to be indexed by Analyzers with the `offset`
- feature enabled.
-- returns **offsetInfo** (array): an array of objects, limited to a default of
- 10 offsets per path. Each object has the following attributes:
- - **name** (array): the attribute and array element path as an array of
- strings and numbers. You can pass this name to the
- [`VALUE()` function](document-object.md) to dynamically look up the value.
- - **offsets** (array): an array of arrays with the matched positions. Each
- inner array has two elements with the start offset and the length of a match.
-
- {{< warning >}}
- The offsets describe the positions in bytes, not characters. You may need
- to account for characters encoded using multiple bytes.
- {{< /warning >}}
-
----
-
-`OFFSET_INFO(doc, rules) → offsetInfo`
-
-- **doc** (document): must be emitted by `FOR ... IN viewName`
-- **rules** (array): an array of objects with the following attributes:
- - **name** (string): an attribute and array element path
- you want to get the offsets for. Use `.` to access nested objects,
- and `[n]` with `n` being an array index to specify array elements. The
- attributes need to be indexed by Analyzers with the `offset` feature enabled.
- - **options** (object): an object with the following attributes:
- - **maxOffsets** (number, _optional_): the total number of offsets to
- collect per path. Default: `10`.
- - **limits** (object, _optional_): an object with the following attributes:
-      - **term** (number, _optional_): the total number of term offsets to
-        collect per path. Default: 2<sup>32</sup>.
-      - **phrase** (number, _optional_): the total number of phrase offsets to
-        collect per path. Default: 2<sup>32</sup>.
-      - **ngram** (number, _optional_): the total number of _n_-gram offsets to
-        collect per path. Default: 2<sup>32</sup>.
-- returns **offsetInfo** (array): an array of objects, each with the following
- attributes:
- - **name** (array): the attribute and array element path as an array of
- strings and numbers. You can pass this name to the
-    [`VALUE()` function](document-object.md) to dynamically look up the value.
- - **offsets** (array): an array of arrays with the matched positions, capped
- to the specified limits. Each inner array has two elements with the start
- offset and the length of a match.
-
- {{< warning >}}
- The start offsets and lengths describe the positions in bytes, not characters.
- You may need to account for characters encoded using multiple bytes.
- {{< /warning >}}
-
-**Examples**
-
-Search a View and get the offset information for the matches:
-
-```js
----
-name: aqlOffsetInfo
-description: ''
----
-~db._create("food");
-~db.food.save({ name: "avocado", description: { en: "The avocado is a medium-sized, evergreen tree, native to the Americas." } });
-~db.food.save({ name: "tomato", description: { en: "The tomato is the edible berry of the tomato plant." } });
-~var analyzers = require("@arangodb/analyzers");
-~var analyzer = analyzers.save("text_en_offset", "text", { locale: "en", stopwords: [] }, ["frequency", "norm", "position", "offset"]);
-~db._createView("food_view", "arangosearch", { links: { food: { fields: { description: { fields: { en: { analyzers: ["text_en_offset"] } } } } } } });
-~assert(db._query(`FOR d IN food_view COLLECT WITH COUNT INTO c RETURN c`).toArray()[0] === 2);
-db._query(`
- FOR doc IN food_view
- SEARCH ANALYZER(TOKENS("avocado tomato", "text_en_offset") ANY == doc.description.en, "text_en_offset")
- RETURN OFFSET_INFO(doc, ["description.en"])`);
-~db._dropView("food_view");
-~db._drop("food");
-~analyzers.remove(analyzer.name);
-```
-
-For full examples, see [Search Highlighting](../../index-and-search/arangosearch/search-highlighting.md).
diff --git a/site/content/3.11/aql/functions/geo.md b/site/content/3.11/aql/functions/geo.md
deleted file mode 100644
index bc4558ee74..0000000000
--- a/site/content/3.11/aql/functions/geo.md
+++ /dev/null
@@ -1,964 +0,0 @@
----
-title: Geo-spatial functions in AQL
-menuTitle: Geo
-weight: 35
-description: >-
-  AQL supports functions for geo-spatial queries, and a subset of calls can be
- accelerated by geo-spatial indexes
----
-## Geo-spatial data representations
-
-You can model geo-spatial information in different ways using the data types
-available in ArangoDB. The recommended way is to use objects with **GeoJSON**
-geometry but you can also use **longitude and latitude coordinate pairs**
-for points. Both models are supported by
-[Geo-Spatial Indexes](../../index-and-search/indexing/working-with-indexes/geo-spatial-indexes.md).
-
-### Coordinate pairs
-
-Longitude and latitude coordinates are numeric values and can be stored in the
-following ways:
-
-- Coordinates using an array with two numbers in `[longitude, latitude]` order,
- for example, in a user-chosen attribute called `location`:
-
- ```json
- {
- "location": [ -73.983, 40.764 ]
- }
- ```
-
-- Coordinates using an array with two numbers in `[latitude, longitude]` order,
- for example, in a user-chosen attribute called `location`:
-
- ```json
- {
- "location": [ 40.764, -73.983 ]
- }
- ```
-
-- Coordinates using two separate numeric attributes, for example, in two
- user-chosen attributes called `lat` and `lng` as sub-attributes of a `location`
- attribute:
-
- ```json
- {
- "location": {
- "lat": 40.764,
- "lng": -73.983
- }
- }
- ```
-
-### GeoJSON
-
-GeoJSON is a geospatial data format based on JSON. It defines several different
-types of JSON objects and the way in which they can be combined to represent
-data about geographic shapes on the Earth's surface.
-
-Example of a document with a GeoJSON Point stored in a user-chosen attribute
-called `location` (with coordinates in `[longitude, latitude]` order):
-
-```json
-{
- "location": {
- "type": "Point",
- "coordinates": [ -73.983, 40.764 ]
- }
-}
-```
-
-GeoJSON uses a geographic coordinate reference system,
-World Geodetic System 1984 (WGS 84), and units of decimal degrees.
-
-Internally ArangoDB maps all coordinate pairs onto a unit sphere. Distances are
-projected onto a sphere with the Earth's *Volumetric mean radius* of *6371
-km*. ArangoDB implements a useful subset of the GeoJSON format
-[(RFC 7946)](https://tools.ietf.org/html/rfc7946).
-Feature Objects and the GeometryCollection type are not supported.
-Supported geometry object types are:
-
-- Point
-- MultiPoint
-- LineString
-- MultiLineString
-- Polygon
-- MultiPolygon
-
-#### Point
-
-A [GeoJSON Point](https://tools.ietf.org/html/rfc7946#section-3.1.2) is a
-[position](https://tools.ietf.org/html/rfc7946#section-3.1.1) comprised of
-a longitude and a latitude:
-
-```json
-{
- "type": "Point",
- "coordinates": [100.0, 0.0]
-}
-```
-
-#### MultiPoint
-
-A [GeoJSON MultiPoint](https://tools.ietf.org/html/rfc7946#section-3.1.7) is
-an array of positions:
-
-```json
-{
- "type": "MultiPoint",
- "coordinates": [
- [100.0, 0.0],
- [101.0, 1.0]
- ]
-}
-```
-
-#### LineString
-
-A [GeoJSON LineString](https://tools.ietf.org/html/rfc7946#section-3.1.4) is
-an array of two or more positions:
-
-```json
-{
- "type": "LineString",
- "coordinates": [
- [100.0, 0.0],
- [101.0, 1.0]
- ]
-}
-```
-
-#### MultiLineString
-
-A [GeoJSON MultiLineString](https://tools.ietf.org/html/rfc7946#section-3.1.5) is
-an array of LineString coordinate arrays:
-
-```json
-{
- "type": "MultiLineString",
- "coordinates": [
- [
- [100.0, 0.0],
- [101.0, 1.0]
- ],
- [
- [102.0, 2.0],
- [103.0, 3.0]
- ]
- ]
-}
-```
-
-#### Polygon
-
-A [GeoJSON Polygon](https://tools.ietf.org/html/rfc7946#section-3.1.6) consists
-of a series of closed `LineString` objects (ring-like). These *Linear Ring*
-objects consist of four or more coordinate pairs with the first and last
-coordinate pair being equal. The coordinates of a Polygon are an array of
-linear ring coordinate arrays. The first element in the array represents
-the exterior ring. Any subsequent elements represent interior rings
-(holes within the surface).
-
-The orientation of the first linear ring is crucial: the right-hand-rule
-is applied, so that the area to the left of the path of the linear ring
-(when walking on the surface of the Earth) is considered to be the
-"interior" of the polygon. All other linear rings must be contained
-within this interior. According to the GeoJSON standard, the subsequent
-linear rings must be oriented following the right-hand-rule, too,
-that is, they must run **clockwise** around the hole (viewed from
-above). However, ArangoDB is tolerant here (as suggested by the
-[GeoJSON standard](https://datatracker.ietf.org/doc/html/rfc7946#section-3.1.6)),
-all but the first linear ring are inverted if the orientation is wrong.
-
-In the end, a point is considered to be in the interior of the polygon,
-if and only if one has to cross an odd number of linear rings to reach the
-exterior of the polygon prescribed by the first linear ring.
-
-A number of additional rules apply (and are enforced by the GeoJSON
-parser):
-
-- A polygon must contain at least one linear ring, i.e., it must not be
- empty.
-- A linear ring may not be empty; it needs at least three _distinct_
- coordinate pairs, that is, at least 4 coordinate pairs (since the first and
- last must be the same).
-- No two edges of the linear rings in the polygon may intersect; in
-  particular, no linear ring may be self-intersecting.
-- Within the same linear ring, consecutive coordinate pairs may be the same,
- otherwise all coordinate pairs need to be distinct (except the first and last one).
-- Linear rings of a polygon must not share edges, but they may share coordinate pairs.
-- A linear ring defines two regions on the sphere. ArangoDB always
- interprets the region that lies to the left of the boundary ring (in
- the direction of its travel on the surface of the Earth) as the
- interior of the ring. This is in contrast to earlier versions of
- ArangoDB before 3.10, which always took the **smaller** of the two
-  regions as the interior. Therefore, from version 3.10 onward, you can have
- polygons whose outer ring encloses more than half the Earth's surface.
-- The interior rings must be contained in the (interior) of the outer ring.
-- Interior rings should follow the above rule for orientation
- (counterclockwise external rings, clockwise internal rings, interior
- always to the left of the line).
-
-Here is an example with no holes:
-
-```json
-{
- "type": "Polygon",
- "coordinates": [
- [
- [100.0, 0.0],
- [101.0, 0.0],
- [101.0, 1.0],
- [100.0, 1.0],
- [100.0, 0.0]
- ]
- ]
-}
-```
-
-Here is an example with a hole:
-
-```json
-{
- "type": "Polygon",
- "coordinates": [
- [
- [100.0, 0.0],
- [101.0, 0.0],
- [101.0, 1.0],
- [100.0, 1.0],
- [100.0, 0.0]
- ],
- [
- [100.8, 0.8],
- [100.8, 0.2],
- [100.2, 0.2],
- [100.2, 0.8],
- [100.8, 0.8]
- ]
- ]
-}
-```
-
-#### MultiPolygon
-
-A [GeoJSON MultiPolygon](https://tools.ietf.org/html/rfc7946#section-3.1.6) consists
-of multiple polygons. The "coordinates" member is an array of
-_Polygon_ coordinate arrays. See [above](#polygon) for the rules and
-the meaning of polygons.
-
-If the polygons in a MultiPolygon are disjoint, then a point is in the
-interior of the MultiPolygon if and only if it is
-contained in one of the polygons. If some polygon P2 in a MultiPolygon
-is contained in another polygon P1, then P2 is treated like a hole
-in P1 and containment of points is defined with the even-odd-crossings rule
-(see [Polygon](#polygon)).
-
-Additionally, the following rules apply and are enforced for
-MultiPolygons:
-
-- No two edges in the linear rings of the polygons of a MultiPolygon
- may intersect.
-- Polygons in the same MultiPolygon may not share edges, but they may share
- coordinate pairs.
-
-Example with two polygons, the second one with a hole:
-
-```json
-{
- "type": "MultiPolygon",
- "coordinates": [
- [
- [
- [102.0, 2.0],
- [103.0, 2.0],
- [103.0, 3.0],
- [102.0, 3.0],
- [102.0, 2.0]
- ]
- ],
- [
- [
- [100.0, 0.0],
- [101.0, 0.0],
- [101.0, 1.0],
- [100.0, 1.0],
- [100.0, 0.0]
- ],
- [
- [100.2, 0.2],
- [100.2, 0.8],
- [100.8, 0.8],
- [100.8, 0.2],
- [100.2, 0.2]
- ]
- ]
- ]
-}
-```
-
-## GeoJSON interpretation
-
-Note the following technical detail about GeoJSON: The
-[GeoJSON standard, Section 3.1.1 Position](https://datatracker.ietf.org/doc/html/rfc7946#section-3.1.1)
-prescribes that lines are **cartesian lines in cylindrical coordinates**
-(longitude/latitude). However, this definition is inconvenient in practice,
-since such lines are not geodesic on the surface of the Earth.
-Furthermore, the best available algorithms for geospatial computations on Earth
-typically use geodesic lines as the boundaries of polygons on Earth.
-
-Therefore, ArangoDB uses the **syntax of the GeoJSON** standard,
-but then interprets lines (and boundaries of polygons) as
-**geodesic lines (pieces of great circles) on Earth**. This is a
-violation of the GeoJSON standard, but serving a practical purpose.
-
-Note in particular that this can sometimes lead to unexpected results.
-Consider the following polygon (remember that GeoJSON has
-**longitude before latitude** in coordinate pairs):
-
-```json
-{ "type": "Polygon", "coordinates": [[
- [4, 54], [4, 47], [16, 47], [16, 54], [4, 54]
-]] }
-```
-
-
-
-It does not contain the point `[10, 47]` since the shortest path (geodesic)
-from `[4, 47]` to `[16, 47]` lies North relative to the parallel of latitude at
-47 degrees. In contrast, the polygon does contain the point `[10, 54]` as it
-lies South of the parallel of latitude at 54 degrees.
-
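-You can check both claims with a quick query (using the `GEO_CONTAINS()`
-function described further below):
-
-```aql
-LET polygon = {
-  type: "Polygon",
-  coordinates: [[ [4, 54], [4, 47], [16, 47], [16, 54], [4, 54] ]]
-}
-RETURN [
-  GEO_CONTAINS(polygon, [10, 47]), // false: the geodesic edge runs North of it
-  GEO_CONTAINS(polygon, [10, 54])  // true: the geodesic edge runs North of it
-]
-```
-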
-{{< info >}}
-ArangoDB versions before 3.10 performed an inconsistent special detection of
-"rectangle" polygons that versions from 3.10 onward no longer do. See
-[Legacy Polygons](../../index-and-search/indexing/working-with-indexes/geo-spatial-indexes.md#legacy-polygons).
-{{< /info >}}
-
-Furthermore, there is an issue with the interpretation of linear rings
-(boundaries of polygons) according to
-[GeoJSON standard, Section 3.1.6 Polygon](https://datatracker.ietf.org/doc/html/rfc7946#section-3.1.6).
-This section states explicitly:
-
-> A linear ring MUST follow the right-hand rule with respect to the
-> area it bounds, i.e., exterior rings are counter-clockwise, and
-> holes are clockwise.
-
-This rather misleading phrase means that when a linear ring is used as
-the boundary of a polygon, the "interior" of the polygon lies **to the left**
-of the boundary when one travels on the surface of the Earth and
-along the linear ring. For
-example, the polygon below travels **counter-clockwise** around the point
-`[10, 50]`, and thus the interior of the polygon contains this point and
-its surroundings, but not, for example, the North Pole and the South
-Pole.
-
-```json
-{ "type": "Polygon", "coordinates": [[
- [4, 54], [4, 47], [16, 47], [16, 54], [4, 54]
-]] }
-```
-
-
-
-On the other hand, the following polygon travels **clockwise** around the point
-`[10, 50]`, and thus its "interior" does not contain `[10, 50]`, but does
-contain the North Pole and the South Pole:
-
-```json
-{ "type": "Polygon", "coordinates": [[
- [4, 54], [16, 54], [16, 47], [4, 47], [4, 54]
-]] }
-```
-
-
-
-Remember that the "interior" is to the left of the given
-linear ring, so this second polygon is basically the complement on Earth
-of the previous polygon!
-
-ArangoDB versions before 3.10 did not follow this rule and always took the
-"smaller" connected component of the surface as the "interior" of the polygon.
-This made it impossible to specify polygons which covered more than half of the
-sphere. From version 3.10 onward, ArangoDB recognizes this correctly.
-See [Legacy Polygons](../../index-and-search/indexing/working-with-indexes/geo-spatial-indexes.md#legacy-polygons)
-for how to deal with this issue.
-
-## Geo utility functions
-
-The following helper functions **can** use geo indexes, but do not have to in
-all cases. You can use all of these functions in combination with each other,
-and if you have configured a geo index, it may be utilized; see
-[Geo Indexing](../../index-and-search/indexing/working-with-indexes/geo-spatial-indexes.md).
-
-### DISTANCE()
-
-`DISTANCE(latitude1, longitude1, latitude2, longitude2) → distance`
-
-Calculate the distance between two arbitrary points in meters (as birds
-would fly). The value is computed using the haversine formula, which is based
-on a spherical Earth model. It's fast to compute and is accurate to around 0.3%,
-which is sufficient for most use cases such as location-aware services.
-
-- **latitude1** (number): the latitude of the first point
-- **longitude1** (number): the longitude of the first point
-- **latitude2** (number): the latitude of the second point
-- **longitude2** (number): the longitude of the second point
-- returns **distance** (number): the distance between both points in **meters**
-
-```aql
-// Distance from Brandenburg Gate (Berlin) to ArangoDB headquarters (Cologne)
-DISTANCE(52.5163, 13.3777, 50.9322, 6.94) // 476918.89688380965 (~477km)
-
-// Sort a small number of documents based on distance to Central Park (New York)
-FOR doc IN coll // e.g. documents returned by a traversal
- SORT DISTANCE(doc.latitude, doc.longitude, 40.78, -73.97)
- RETURN doc
-```
-
-### GEO_CONTAINS()
-
-`GEO_CONTAINS(geoJsonA, geoJsonB) → bool`
-
-Checks whether the [GeoJSON object](#geojson) `geoJsonA`
-fully contains `geoJsonB` (every point in B is also in A). The object `geoJsonA`
-has to be of type _Polygon_ or _MultiPolygon_. For other types containment is
-not well-defined because of numerical stability problems.
-
-- **geoJsonA** (object): first GeoJSON object
-- **geoJsonB** (object): second GeoJSON object, or a coordinate array in
- `[longitude, latitude]` order
-- returns **bool** (bool): `true` if every point in B is also contained in A,
- otherwise `false`
-
-{{< info >}}
-ArangoDB follows and exposes the same behavior as the underlying
-S2 geometry library. As stated in the S2 documentation:
-
-> Point containment is defined such that if the sphere is subdivided
-> into faces (loops), every point is contained by exactly one face.
-> This implies that linear rings do not necessarily contain their vertices.
-
-As a consequence, a linear ring or polygon does not necessarily contain its
-boundary edges!
-{{< /info >}}
-
-You can optimize queries that contain a `FILTER` expression of the following
-form with an S2-based [geospatial index](../../index-and-search/indexing/working-with-indexes/geo-spatial-indexes.md):
-
-```aql
-FOR doc IN coll
- FILTER GEO_CONTAINS(geoJson, doc.geo)
- ...
-```
-
-In this example, you would create the index for the collection `coll`, on the
-attribute `geo`. You need to set the `geoJson` index option to `true`.
-The `geoJson` variable needs to evaluate to a valid GeoJSON object. Also note
-the argument order: the stored document attribute `doc.geo` is passed as the
-second argument. Passing it as the first argument, like
-`FILTER GEO_CONTAINS(doc.geo, geoJson)` to test whether `doc.geo` contains
-`geoJson`, cannot utilize the index.
-
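-A concrete sketch of this pattern, with an inline polygon and the collection
-name `coll` and attribute name `geo` as placeholders:
-
-```aql
-LET area = GEO_POLYGON([
-  [6.92, 50.92], [6.99, 50.92], [6.99, 50.97], [6.92, 50.97], [6.92, 50.92]
-])
-FOR doc IN coll
-  FILTER GEO_CONTAINS(area, doc.geo) // doc.geo as second argument
-  RETURN doc
-```
-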
-### GEO_DISTANCE()
-
-`GEO_DISTANCE(geoJsonA, geoJsonB, ellipsoid) → distance`
-
-Return the distance between two GeoJSON objects in meters, measured from the
-**centroid** of each shape. For a list of supported types, see the
-[GeoJSON](#geojson) section.
-
-- **geoJsonA** (object): first GeoJSON object, or a coordinate array in
- `[longitude, latitude]` order
-- **geoJsonB** (object): second GeoJSON object, or a coordinate array in
- `[longitude, latitude]` order
-- **ellipsoid** (string, *optional*): reference ellipsoid to use.
- Supported are `"sphere"` (default) and `"wgs84"`.
-- returns **distance** (number): the distance between the centroid points of
- the two objects on the reference ellipsoid in **meters**
-
-```aql
-LET polygon = {
- type: "Polygon",
- coordinates: [[[-11.5, 23.5], [-10.5, 26.1], [-11.2, 27.1], [-11.5, 23.5]]]
-}
-FOR doc IN collectionName
- LET distance = GEO_DISTANCE(doc.geometry, polygon) // calculates the distance
- RETURN distance
-```
-
-You can optimize queries that contain a `FILTER` expression of the following
-form with an S2-based [geospatial index](../../index-and-search/indexing/working-with-indexes/geo-spatial-indexes.md):
-
-```aql
-FOR doc IN coll
- FILTER GEO_DISTANCE(geoJson, doc.geo) <= limit
- ...
-```
-
-In this example, you would create the index for the collection `coll`, on the
-attribute `geo`. You need to set the `geoJson` index option to `true`.
-`geoJson` needs to evaluate to a valid GeoJSON object. `limit` must be a
-distance in meters; it cannot be an expression. An upper bound with `<` or
-`<=`, a lower bound with `>` or `>=`, or both, are equally supported.
-
-You can also optimize queries that use a `SORT` condition of the following form
-with a geospatial index:
-
-```aql
- SORT GEO_DISTANCE(geoJson, doc.geo)
-```
-
-The index covers returning matches from closest to furthest away, or vice versa.
-You may combine such a `SORT` with a `FILTER` expression that utilizes the
-geospatial index, too, via the [`GEO_DISTANCE()`](#geo_distance),
-[`GEO_CONTAINS()`](#geo_contains), and [`GEO_INTERSECTS()`](#geo_intersects)
-functions.
-
-### GEO_AREA()
-
-`GEO_AREA(geoJson, ellipsoid) → area`
-
-Return the area for a [Polygon](#polygon) or [MultiPolygon](#multipolygon)
-on a sphere with the average Earth radius, or an ellipsoid.
-
-- **geoJson** (object): a GeoJSON object
-- **ellipsoid** (string, *optional*): reference ellipsoid to use.
- Supported are `"sphere"` (default) and `"wgs84"`.
-- returns **area** (number): the area of the polygon in **square meters**
-
-```aql
-LET polygon = {
- type: "Polygon",
- coordinates: [[[-11.5, 23.5], [-10.5, 26.1], [-11.2, 27.1], [-11.5, 23.5]]]
-}
-RETURN GEO_AREA(polygon, "wgs84")
-```
-
-### GEO_EQUALS()
-
-`GEO_EQUALS(geoJsonA, geoJsonB) → bool`
-
-Checks whether two [GeoJSON objects](#geojson) are equal or not.
-
-- **geoJsonA** (object): first GeoJSON object.
-- **geoJsonB** (object): second GeoJSON object.
-- returns **bool** (bool): `true` if they are equal, otherwise `false`.
-
-```aql
-LET polygonA = GEO_POLYGON([
- [-11.5, 23.5], [-10.5, 26.1], [-11.2, 27.1], [-11.5, 23.5]
-])
-LET polygonB = GEO_POLYGON([
- [-11.5, 23.5], [-10.5, 26.1], [-11.2, 27.1], [-11.5, 23.5]
-])
-RETURN GEO_EQUALS(polygonA, polygonB) // true
-```
-
-```aql
-LET polygonA = GEO_POLYGON([
- [-11.1, 24.0], [-10.5, 26.1], [-11.2, 27.1], [-11.1, 24.0]
-])
-LET polygonB = GEO_POLYGON([
- [-11.5, 23.5], [-10.5, 26.1], [-11.2, 27.1], [-11.5, 23.5]
-])
-RETURN GEO_EQUALS(polygonA, polygonB) // false
-```
-
-### GEO_INTERSECTS()
-
-`GEO_INTERSECTS(geoJsonA, geoJsonB) → bool`
-
-Checks whether the [GeoJSON object](#geojson) `geoJsonA`
-intersects with `geoJsonB` (i.e. at least one point in B is also in A or vice versa).
-
-- **geoJsonA** (object): first GeoJSON object
-- **geoJsonB** (object): second GeoJSON object, or a coordinate array in
- `[longitude, latitude]` order
-- returns **bool** (bool): `true` if B intersects A, `false` otherwise
-
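-A small value-level sketch, reusing the rectangle-like polygon from the
-[GeoJSON interpretation](#geojson-interpretation) section:
-
-```aql
-LET polygon = GEO_POLYGON([
-  [4, 54], [4, 47], [16, 47], [16, 54], [4, 54]
-])
-// true, because part of the LineString lies inside the polygon
-RETURN GEO_INTERSECTS(polygon, GEO_LINESTRING([[10, 50], [20, 50]]))
-```
-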
-You can optimize queries that contain a `FILTER` expression of the following
-form with an S2-based [geospatial index](../../index-and-search/indexing/working-with-indexes/geo-spatial-indexes.md):
-
-```aql
-FOR doc IN coll
- FILTER GEO_INTERSECTS(geoJson, doc.geo)
- ...
-```
-
-In this example, you would create the index for the collection `coll`, on the
-attribute `geo`. You need to set the `geoJson` index option to `true`.
-`geoJson` needs to evaluate to a valid GeoJSON object. Also note
-the argument order: the stored document attribute `doc.geo` is passed as the
-second argument. Passing it as the first argument, like
-`FILTER GEO_INTERSECTS(doc.geo, geoJson)` to test whether `doc.geo` intersects
-`geoJson`, cannot utilize the index.
-
-### GEO_IN_RANGE()
-
-Introduced in: v3.8.0
-
-`GEO_IN_RANGE(geoJsonA, geoJsonB, low, high, includeLow, includeHigh) → bool`
-
-Checks whether the distance between two [GeoJSON objects](#geojson)
-lies within a given interval. The distance is measured from the **centroid** of
-each shape.
-
-- **geoJsonA** (object\|array): first GeoJSON object, or a coordinate array
- in `[longitude, latitude]` order
-- **geoJsonB** (object\|array): second GeoJSON object, or a coordinate array
- in `[longitude, latitude]` order
-- **low** (number): minimum value of the desired range
-- **high** (number): maximum value of the desired range
-- **includeLow** (bool, optional): whether the minimum value shall be included
- in the range (left-closed interval) or not (left-open interval). The default
- value is `true`
-- **includeHigh** (bool, optional): whether the maximum value shall be included in the
- range (right-closed interval) or not (right-open interval). The default value
- is `true`
-- returns **bool** (bool): whether the evaluated distance lies within the range
-
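-A minimal sketch, assuming a collection `coll` that stores GeoJSON geometry in
-a `geo` attribute; it returns documents whose geometry lies between 100 and
-16000 meters from the reference point:
-
-```aql
-FOR doc IN coll
-  FILTER GEO_IN_RANGE([6.96, 50.94], doc.geo, 100, 16000)
-  RETURN doc
-```
-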
-### IS_IN_POLYGON()
-
-Determine whether a point is inside a polygon.
-
-{{< warning >}}
-The `IS_IN_POLYGON()` AQL function is **deprecated** as of ArangoDB 3.4.0 in
-favor of the new [`GEO_CONTAINS()` AQL function](#geo_contains), which works with
-[GeoJSON](https://tools.ietf.org/html/rfc7946) Polygons and MultiPolygons.
-{{< /warning >}}
-
-`IS_IN_POLYGON(polygon, latitude, longitude) → bool`
-
-- **polygon** (array): an array of arrays with 2 elements each, representing the
- points of the polygon in the format `[latitude, longitude]`
-- **latitude** (number): the latitude of the point to search
-- **longitude** (number): the longitude of the point to search
-- returns **bool** (bool): `true` if the point (`[latitude, longitude]`) is
- inside the `polygon` or `false` if it's not. The result is undefined (can be
- `true` or `false`) if the specified point is exactly on a boundary of the
- polygon.
-
-```aql
-// checks if the point (latitude 4, longitude 7) is contained inside the polygon
-IS_IN_POLYGON( [ [ 0, 0 ], [ 0, 10 ], [ 10, 10 ], [ 10, 0 ] ], 4, 7 )
-```
-
----
-
-`IS_IN_POLYGON(polygon, coord, useLonLat) → bool`
-
-The 2nd parameter can alternatively be specified as an array with two values.
-
-By default, each array element in `polygon` is expected to be in the format
-`[latitude, longitude]`. This can be changed by setting the 3rd parameter to `true` to
-interpret the points as `[longitude, latitude]`. `coord` is then also interpreted in
-the same way.
-
-- **polygon** (array): an array of arrays with 2 elements each, representing the
- points of the polygon
-- **coord** (array): the point to search as a numeric array with two elements
-- **useLonLat** (bool, *optional*): if set to `true`, the coordinates in
- `polygon` and the coordinate pair `coord` are interpreted as
- `[longitude, latitude]` (like in GeoJSON). The default is `false` and the
- format `[latitude, longitude]` is expected.
-- returns **bool** (bool): `true` if the point `coord` is inside the `polygon`
- or `false` if it's not. The result is undefined (can be `true` or `false`) if
- the specified point is exactly on a boundary of the polygon.
-
-```aql
-// checks if the point (lat 4, lon 7) is contained inside the polygon
-IS_IN_POLYGON( [ [ 0, 0 ], [ 0, 10 ], [ 10, 10 ], [ 10, 0 ] ], [ 4, 7 ] )
-
-// checks if the point (lat 4, lon 7) is contained inside the polygon
-IS_IN_POLYGON( [ [ 0, 0 ], [ 10, 0 ], [ 10, 10 ], [ 0, 10 ] ], [ 7, 4 ], true )
-```
-
-## GeoJSON Constructors
-
-The following helper functions are available to easily create valid GeoJSON
-output. In all cases, you can write equivalent JSON yourself, but these
-functions help you make your AQL queries shorter and easier to read.
-
-### GEO_LINESTRING()
-
-`GEO_LINESTRING(points) → geoJson`
-
-Construct a GeoJSON LineString.
-Needs at least two longitude/latitude pairs.
-
-- **points** (array): an array of `[longitude, latitude]` pairs
-- returns **geoJson** (object): a valid GeoJSON LineString
-
-```aql
----
-name: aqlGeoLineString_1
-description: ''
----
-RETURN GEO_LINESTRING([
- [35, 10], [45, 45]
-])
-```
-
-### GEO_MULTILINESTRING()
-
-`GEO_MULTILINESTRING(points) → geoJson`
-
-Construct a GeoJSON MultiLineString.
-Needs at least two elements, each being a valid LineString coordinate array.
-
-- **points** (array): array of LineStrings
-- returns **geoJson** (object): a valid GeoJSON MultiLineString
-
-```aql
----
-name: aqlGeoMultiLineString_1
-description: ''
----
-RETURN GEO_MULTILINESTRING([
- [[100.0, 0.0], [101.0, 1.0]],
- [[102.0, 2.0], [101.0, 2.3]]
-])
-```
-
-### GEO_MULTIPOINT()
-
-`GEO_MULTIPOINT(points) → geoJson`
-
-Construct a GeoJSON MultiPoint. Needs at least two longitude/latitude pairs.
-
-- **points** (array): an array of `[longitude, latitude]` pairs
-- returns **geoJson** (object): a valid GeoJSON MultiPoint
-
-```aql
----
-name: aqlGeoMultiPoint_1
-description: ''
----
-RETURN GEO_MULTIPOINT([
- [35, 10], [45, 45]
-])
-```
-
-### GEO_POINT()
-
-`GEO_POINT(longitude, latitude) → geoJson`
-
-Construct a valid GeoJSON Point.
-
-- **longitude** (number): the longitude portion of the point
-- **latitude** (number): the latitude portion of the point
-- returns **geoJson** (object): a GeoJSON Point
-
-```aql
----
-name: aqlGeoPoint_1
-description: ''
----
-RETURN GEO_POINT(1.0, 2.0)
-```
-
-### GEO_POLYGON()
-
-`GEO_POLYGON(points) → geoJson`
-
-Construct a GeoJSON Polygon. Needs at least one array representing
-a linear ring. Each linear ring consists of an array with at least four
-longitude/latitude pairs. The first linear ring must be the outermost, while
-any subsequent linear rings are interpreted as holes.
-
-For details about the rules, see [GeoJSON polygons](#polygon).
-
-- **points** (array): an array of (arrays of) `[longitude, latitude]` pairs
-- returns **geoJson** (object\|null): a valid GeoJSON Polygon
-
-A validation step is performed using the S2 geometry library. If the
-validation is not successful, an AQL warning is issued and `null` is
-returned.
-
-Simple Polygon:
-
-```aql
----
-name: aqlGeoPolygon_1
-description: ''
----
-RETURN GEO_POLYGON([
- [0.0, 0.0], [7.5, 2.5], [0.0, 5.0], [0.0, 0.0]
-])
-```
-
-Advanced Polygon with a hole inside:
-
-```aql
----
-name: aqlGeoPolygon_2
-description: ''
----
-RETURN GEO_POLYGON([
- [[35, 10], [45, 45], [15, 40], [10, 20], [35, 10]],
- [[20, 30], [30, 20], [35, 35], [20, 30]]
-])
-```
-
-### GEO_MULTIPOLYGON()
-
-`GEO_MULTIPOLYGON(polygons) → geoJson`
-
-Construct a GeoJSON MultiPolygon. Needs at least two Polygons inside.
-See [`GEO_POLYGON()`](#geo_polygon) and [GeoJSON MultiPolygon](#multipolygon)
-for the rules of Polygon and MultiPolygon construction.
-
-- **polygons** (array): an array of arrays of arrays of `[longitude, latitude]` pairs
-- returns **geoJson** (object\|null): a valid GeoJSON MultiPolygon
-
-A validation step is performed using the S2 geometry library. If the
-validation is not successful, an AQL warning is issued and `null` is
-returned.
-
-A MultiPolygon consisting of a simple Polygon and a Polygon with a hole:
-
-```aql
----
-name: aqlGeoMultiPolygon_1
-description: ''
----
-RETURN GEO_MULTIPOLYGON([
- [
- [[40, 40], [20, 45], [45, 30], [40, 40]]
- ],
- [
- [[20, 35], [10, 30], [10, 10], [30, 5], [45, 20], [20, 35]],
- [[30, 20], [20, 15], [20, 25], [30, 20]]
- ]
-])
-```
-
-## Geo Index Functions
-
-{{< warning >}}
-The AQL functions `NEAR()`, `WITHIN()` and `WITHIN_RECTANGLE()` are
-deprecated starting from version 3.4.0.
-Please use the [Geo utility functions](#geo-utility-functions) instead.
-{{< /warning >}}
-
-AQL offers the following functions to filter data based on
-[geo indexes](../../index-and-search/indexing/working-with-indexes/geo-spatial-indexes.md). These functions require the collection
-to have at least one geo index. If no geo index can be found, calling this
-function will fail with an error at runtime. There is no error when explaining
-the query, however.
-
-### NEAR()
-
-{{< warning >}}
-`NEAR()` is a deprecated AQL function from version 3.4.0 on.
-Use [`DISTANCE()`](#distance) in a query like this instead:
-
-```aql
-FOR doc IN coll
- SORT DISTANCE(doc.latitude, doc.longitude, paramLatitude, paramLongitude) ASC
- RETURN doc
-```
-Assuming there exists a geo-type index on `latitude` and `longitude`, the
-optimizer will recognize it and accelerate the query.
-{{< /warning >}}
-
-`NEAR(coll, latitude, longitude, limit, distanceName) → docArray`
-
-Return at most *limit* documents from collection *coll* that are near
-*latitude* and *longitude*. The result contains at most *limit* documents,
-returned sorted by distance, with closest distances being returned first.
-Optionally, the distances in meters between the specified coordinate pair
-(*latitude* and *longitude*) and the stored coordinate pairs can be returned as
-well. To make use of that, the desired attribute name for the distance result
-has to be specified in the *distanceName* argument. The result documents will
-contain the distance value in an attribute of that name.
-
-- **coll** (collection): a collection
-- **latitude** (number): the latitude of the point to search
-- **longitude** (number): the longitude of the point to search
-- **limit** (number, *optional*): cap the result to at most this number of
- documents. The default is 100. If more documents than *limit* are found,
- it is undefined which ones will be returned.
-- **distanceName** (string, *optional*): include the distance (in meters)
- between the reference point and the stored point in the result, using the
- attribute name *distanceName*
-- returns **docArray** (array): an array of documents, sorted by distance
- (shortest distance first)
-
-### WITHIN()
-
-{{< warning >}}
-`WITHIN()` is a deprecated AQL function from version 3.4.0 on.
-Use [`DISTANCE()`](#distance) in a query like this instead:
-
-```aql
-FOR doc IN coll
- LET d = DISTANCE(doc.latitude, doc.longitude, paramLatitude, paramLongitude)
- FILTER d <= radius
- SORT d ASC
- RETURN doc
-```
-
-Assuming there exists a geo-type index on `latitude` and `longitude`, the
-optimizer will recognize it and accelerate the query.
-{{< /warning >}}
-
-`WITHIN(coll, latitude, longitude, radius, distanceName) → docArray`
-
-Return all documents from collection *coll* that are within a radius of *radius*
-around the specified coordinate pair (*latitude* and *longitude*). The documents
-returned are sorted by distance to the reference point, with the closest
-distances being returned first. Optionally, the distance (in meters) between the
-reference point and the stored point can be returned as well. To make
-use of that, an attribute name for the distance result has to be specified in
-the *distanceName* argument. The result documents will contain the distance
-value in an attribute of that name.
-
-- **coll** (collection): a collection
-- **latitude** (number): the latitude of the point to search
-- **longitude** (number): the longitude of the point to search
-- **radius** (number): radius in meters
-- **distanceName** (string, *optional*): include the distance (in meters)
- between the reference point and stored point in the result, using the
- attribute name *distanceName*
-- returns **docArray** (array): an array of documents, sorted by distance
- (shortest distance first)
-
-### WITHIN_RECTANGLE()
-
-{{< warning >}}
-`WITHIN_RECTANGLE()` is a deprecated AQL function from version 3.4.0 on. Use
-[`GEO_CONTAINS()`](#geo_contains) and a GeoJSON polygon instead - but note that
-this uses geodesic lines from version 3.10.0 onward
-(see [GeoJSON interpretation](#geojson-interpretation)):
-
-```aql
-LET rect = GEO_POLYGON([ [
- [longitude1, latitude1], // bottom-left
- [longitude2, latitude1], // bottom-right
- [longitude2, latitude2], // top-right
- [longitude1, latitude2], // top-left
- [longitude1, latitude1], // bottom-left
-] ])
-FOR doc IN coll
- FILTER GEO_CONTAINS(rect, [doc.longitude, doc.latitude])
- RETURN doc
-```
-
-Assuming there exists a geo-type index on `latitude` and `longitude`, the
-optimizer will recognize it and accelerate the query.
-{{< /warning >}}
-
-`WITHIN_RECTANGLE(coll, latitude1, longitude1, latitude2, longitude2) → docArray`
-
-Return all documents from collection *coll* that are positioned inside the
-bounding rectangle with the points (*latitude1*, *longitude1*) and (*latitude2*,
-*longitude2*). There is no guaranteed order in which the documents are returned.
-
-- **coll** (collection): a collection
-- **latitude1** (number): the latitude of the bottom-left point to search
-- **longitude1** (number): the longitude of the bottom-left point to search
-- **latitude2** (number): the latitude of the top-right point to search
-- **longitude2** (number): the longitude of the top-right point to search
-- returns **docArray** (array): an array of documents, in random order
diff --git a/site/content/3.11/aql/graphs/all-shortest-paths.md b/site/content/3.11/aql/graphs/all-shortest-paths.md
deleted file mode 100644
index a60da2eab8..0000000000
--- a/site/content/3.11/aql/graphs/all-shortest-paths.md
+++ /dev/null
@@ -1,197 +0,0 @@
----
-title: All Shortest Paths in AQL
-menuTitle: All Shortest Paths
-weight: 20
-description: >-
- Find all paths of shortest length between two vertices
----
-## General query idea
-
-This type of query finds all paths of shortest length between two given
-documents (*startVertex* and *targetVertex*) in your graph.
-
-Every returned path is a JSON object with two attributes:
-
-- An array containing the `vertices` on the path.
-- An array containing the `edges` on the path.
-
-**Example**
-
-A visual representation of the example graph:
-
-
-
-Each ellipse stands for a train station with the name of the city written inside
-of it. They are the vertices of the graph. Arrows represent train connections
-between cities and are the edges of the graph.
-
-Assuming that you want to go from **Carlisle** to **London** by train, the
-expected two shortest paths are:
-
-1. Carlisle – Birmingham – London
-2. Carlisle – York – London
-
-Another path that connects Carlisle and London is
-Carlisle – Glasgow – Edinburgh – York – London, but it has two more stops and
-is therefore not a path of the shortest length.
-
-## Syntax
-
-The syntax for All Shortest Paths queries is similar to the one for
-[Shortest Path](shortest-path.md), and there are also two options to
-either use a named graph or a set of edge collections. However, it only emits
-a path variable, whereas `SHORTEST_PATH` emits a vertex and an edge variable.
-
-### Working with named graphs
-
-```aql
-FOR path
- IN OUTBOUND|INBOUND|ANY ALL_SHORTEST_PATHS
- startVertex TO targetVertex
- GRAPH graphName
-```
-
-- `FOR`: Emits the variable **path** which contains one shortest path as an
- object, with the `vertices` and `edges` of the path.
-- `IN` `OUTBOUND|INBOUND|ANY`: Defines in which direction
- edges are followed (outgoing, incoming, or both)
-- `ALL_SHORTEST_PATHS`: The keyword to compute All Shortest Paths
-- **startVertex** `TO` **targetVertex** (both string\|object): The two vertices between
- which the paths are computed. This can be specified in the form of
-  an ID string or in the form of a document with the attribute `_id`. All other
- values result in a warning and an empty result. If one of the specified
- documents does not exist, the result is empty as well and there is no warning.
-- `GRAPH` **graphName** (string): The name identifying the named graph. Its vertex and
- edge collections are looked up for the path search.
-
-{{< info >}}
-All Shortest Paths traversals do not support edge weights.
-{{< /info >}}
-
-### Working with collection sets
-
-```aql
-FOR path
- IN OUTBOUND|INBOUND|ANY ALL_SHORTEST_PATHS
- startVertex TO targetVertex
- edgeCollection1, ..., edgeCollectionN
-```
-
-Instead of `GRAPH graphName` you can specify a list of edge collections.
-The involved vertex collections are determined by the edges of the given
-edge collections.
-
-### Traversing in mixed directions
-
-For All Shortest Paths with a list of edge collections, you can optionally specify the
-direction for some of the edge collections. Say, for example, you have three edge
-collections *edges1*, *edges2* and *edges3*, where in *edges2* the direction
-has no relevance, but in *edges1* and *edges3* the direction should be taken into
-account. In this case you can use `OUTBOUND` as a general search direction and `ANY`
-specifically for *edges2* as follows:
-
-```aql
-FOR path IN OUTBOUND ALL_SHORTEST_PATHS
- startVertex TO targetVertex
- edges1, ANY edges2, edges3
-```
-
-All collections in the list that do not specify their own direction use the
-direction defined after `IN` (here: `OUTBOUND`). This allows using a different
-direction for each collection in your path search.
-
-## Examples
-
-Load an example graph to get a named graph that reflects some possible
-train connections in Europe and North America:
-
-
-
-```js
----
-name: GRAPHASP_01_create_graph
-description: ''
----
-~addIgnoreCollection("places");
-~addIgnoreCollection("connections");
-var examples = require("@arangodb/graph-examples/example-graph");
-var graph = examples.loadGraph("kShortestPathsGraph");
-db.places.toArray();
-db.connections.toArray();
-```
-
-Suppose you want to query a route from **Carlisle** to **London**, and
-compare the outputs of `SHORTEST_PATH`, `K_SHORTEST_PATHS` and `ALL_SHORTEST_PATHS`.
-Note that `SHORTEST_PATH` returns any of the shortest paths, whereas
-`ALL_SHORTEST_PATHS` returns all of them. `K_SHORTEST_PATHS` returns the
-shortest paths first but continues with longer paths, until it has found all routes
-or reaches the defined limit (the number of paths).
-
-Using `SHORTEST_PATH` to get one shortest path:
-
-```aql
----
-name: GRAPHASP_01_Carlisle_to_London
-description: ''
-dataset: kShortestPathsGraph
----
-FOR v, e IN OUTBOUND SHORTEST_PATH 'places/Carlisle' TO 'places/London'
-GRAPH 'kShortestPathsGraph'
- RETURN { place: v.label }
-```
-
-Using `ALL_SHORTEST_PATHS` to get both shortest paths:
-
-```aql
----
-name: GRAPHASP_02_Carlisle_to_London
-description: ''
-dataset: kShortestPathsGraph
----
-FOR p IN OUTBOUND ALL_SHORTEST_PATHS 'places/Carlisle' TO 'places/London'
-GRAPH 'kShortestPathsGraph'
- RETURN { places: p.vertices[*].label }
-```
-
-Using `K_SHORTEST_PATHS` without a limit to get all paths in order of
-increasing length:
-
-```aql
----
-name: GRAPHASP_03_Carlisle_to_London
-description: ''
-dataset: kShortestPathsGraph
----
-FOR p IN OUTBOUND K_SHORTEST_PATHS 'places/Carlisle' TO 'places/London'
-GRAPH 'kShortestPathsGraph'
- RETURN { places: p.vertices[*].label }
-```
-
-If you ask for routes that don't exist, you get an empty result
-(from **Carlisle** to **Toronto**):
-
-```aql
----
-name: GRAPHASP_04_Carlisle_to_Toronto
-description: ''
-dataset: kShortestPathsGraph
----
-FOR p IN OUTBOUND ALL_SHORTEST_PATHS 'places/Carlisle' TO 'places/Toronto'
-GRAPH 'kShortestPathsGraph'
- RETURN {
- places: p.vertices[*].label
- }
-```
-
-And finally clean up by removing the named graph:
-
-```js
----
-name: GRAPHASP_99_drop_graph
-description: ''
----
-var examples = require("@arangodb/graph-examples/example-graph");
-examples.dropGraph("kShortestPathsGraph");
-~removeIgnoreCollection("places");
-~removeIgnoreCollection("connections");
-```
diff --git a/site/content/3.11/aql/graphs/k-paths.md b/site/content/3.11/aql/graphs/k-paths.md
deleted file mode 100644
index 582232dc9a..0000000000
--- a/site/content/3.11/aql/graphs/k-paths.md
+++ /dev/null
@@ -1,232 +0,0 @@
----
-title: k Paths in AQL
-menuTitle: k Paths
-weight: 30
-description: >-
- Find all paths between two vertices with a fixed range of path lengths
----
-## General query idea
-
-This type of query finds all paths between two given documents
-(*startVertex* and *targetVertex*) in your graph. The paths are restricted
-by a minimum and maximum length that you specify.
-
-Every such path is returned as a JSON object with two components:
-
-- an array containing the `vertices` on the path
-- an array containing the `edges` on the path
-
-**Example**
-
-Here is an example graph to explain how the k Paths algorithm works:
-
-
-
-Each ellipse stands for a train station with the name of the city written inside
-of it. They are the vertices of the graph. Arrows represent train connections
-between cities and are the edges of the graph. The numbers near the arrows
-describe how long it takes to get from one station to another. They are used
-as edge weights.
-
-Assume that you want to go from **Aberdeen** to **London** by train.
-
-You have a couple of alternatives:
-
-a) Straight way
-
- 1. Aberdeen
- 2. Leuchars
- 3. Edinburgh
- 4. York
- 5. London
-
-b) Detour at York
-
- 1. Aberdeen
- 2. Leuchars
- 3. Edinburgh
- 4. York
- 5. **Carlisle**
- 6. **Birmingham**
- 7. London
-
-c) Detour at Edinburgh
-
- 1. Aberdeen
- 2. Leuchars
- 3. Edinburgh
- 4. **Glasgow**
- 5. **Carlisle**
- 6. **Birmingham**
- 7. London
-
-d) Detour at Edinburgh to York
-
- 1. Aberdeen
- 2. Leuchars
- 3. Edinburgh
- 4. **Glasgow**
- 5. **Carlisle**
- 6. York
- 7. London
-
-Note that only paths that do not contain the same vertex twice are considered
-valid. The following alternative would visit Aberdeen twice and is **not**
-returned by the k Paths algorithm:
-
-1. Aberdeen
-2. **Inverness**
-3. **Aberdeen**
-4. Leuchars
-5. Edinburgh
-6. York
-7. London
-
-## Example Use Cases
-
-The use cases for k Paths are about the same as those for unweighted k Shortest Paths.
-The main difference is that k Shortest Paths enumerates all paths with
-**increasing length**. It stops as soon as a given number of paths is reached.
-k Paths enumerates all paths within a given **range of path lengths** instead,
-and is thereby upper-bounded.
-
-The k Paths traversal can be used as a foundation for several other algorithms:
-
-- **Transportation** of any kind (e.g. road traffic, network package routing)
-- **Flow problems**: You need to transfer items from A to B, which alternatives
- do you have? What is their capacity?
-
-## Syntax
-
-The syntax for k Paths queries is similar to the one for
-[k Shortest Paths](k-shortest-paths.md), with the addition that you define the
-minimum and maximum length of the path.
-
-{{< warning >}}
-It is highly recommended that you use a reasonable maximum path length or a
-**LIMIT** statement, as k Paths is a potentially expensive operation. It can
-return a large number of paths for large connected graphs.
-{{< /warning >}}
-
-### Working with named graphs
-
-```aql
-FOR path
- IN MIN..MAX OUTBOUND|INBOUND|ANY K_PATHS
- startVertex TO targetVertex
- GRAPH graphName
-```
-
-- `FOR`: Emits the variable **path** which contains one path as an object
- containing `vertices` and `edges` of the path.
-- `IN` `MIN..MAX`: The minimal and maximal depth for the traversal:
- - **min** (number, *optional*): Paths returned by this query
- have at least a length of this many edges.
- If not specified, it defaults to `1`. The minimal possible value is `0`.
- - **max** (number, *optional*): Paths returned by this query
- have at most a length of this many edges.
- If omitted, it defaults to the value of `min`. Thus, only the vertices and
- edges in the range of `min` are returned. You cannot specify `max` without `min`.
-- `OUTBOUND|INBOUND|ANY`: Defines in which direction
- edges are followed (outgoing, incoming, or both).
-- `K_PATHS`: The keyword to compute all paths with the specified lengths.
-- **startVertex** `TO` **targetVertex** (both string\|object): The two vertices
- between which the paths are computed. This can be specified in the form of
- a document identifier string or in the form of an object with the `_id`
- attribute. All other values lead to a warning and an empty result. This is
- also the case if one of the specified documents does not exist.
-- `GRAPH` **graphName** (string): The name identifying the named graph.
- Its vertex and edge collections are looked up for the path search.
-
-### Working with collection sets
-
-```aql
-FOR path
- IN MIN..MAX OUTBOUND|INBOUND|ANY K_PATHS
- startVertex TO targetVertex
- edgeCollection1, ..., edgeCollectionN
- [OPTIONS options]
-```
-
-Instead of `GRAPH graphName` you can specify a list of edge collections.
-The involved vertex collections are determined by the edges of the given
-edge collections.
-
-### Traversing in mixed directions
-
-For k paths with a list of edge collections you can optionally specify the
-direction for some of the edge collections. Say for example you have three edge
-collections *edges1*, *edges2* and *edges3*, where in *edges2* the direction
-has no relevance, but in *edges1* and *edges3* the direction should be taken
-into account. In this case you can use `OUTBOUND` as general search direction
-and `ANY` specifically for *edges2* as follows:
-
-```aql
-FOR path IN OUTBOUND K_PATHS
- startVertex TO targetVertex
- edges1, ANY edges2, edges3
-```
-
-All collections in the list that do not specify their own direction use the
-direction defined after `IN` (here: `OUTBOUND`). This allows using a different
-direction for each collection in your path search.
-
-## Examples
-
-You can load the `kShortestPathsGraph` example graph to get a named graph that
-reflects some possible train connections in Europe and North America.
-
-
-
-```js
----
-name: GRAPHKP_01_create_graph
-description: ''
----
-~addIgnoreCollection("places");
-~addIgnoreCollection("connections");
-var examples = require("@arangodb/graph-examples/example-graph");
-var graph = examples.loadGraph("kShortestPathsGraph");
-db.places.toArray();
-db.connections.toArray();
-```
-
-Suppose you want to query all routes from **Aberdeen** to **London**.
-
-```aql
----
-name: GRAPHKP_01_Aberdeen_to_London
-description: ''
-dataset: kShortestPathsGraph
----
-FOR p IN 1..10 OUTBOUND K_PATHS 'places/Aberdeen' TO 'places/London'
-GRAPH 'kShortestPathsGraph'
- RETURN { places: p.vertices[*].label, travelTimes: p.edges[*].travelTime }
-```
-
-If you ask for routes that don't exist, you get an empty result
-(from **Aberdeen** to **Toronto**):
-
-```aql
----
-name: GRAPHKP_02_Aberdeen_to_Toronto
-description: ''
-dataset: kShortestPathsGraph
----
-FOR p IN 1..10 OUTBOUND K_PATHS 'places/Aberdeen' TO 'places/Toronto'
-GRAPH 'kShortestPathsGraph'
- RETURN { places: p.vertices[*].label, travelTimes: p.edges[*].travelTime }
-```
-
-And finally clean up by removing the named graph:
-
-```js
----
-name: GRAPHKP_99_drop_graph
-description: ''
----
-var examples = require("@arangodb/graph-examples/example-graph");
-examples.dropGraph("kShortestPathsGraph");
-~removeIgnoreCollection("places");
-~removeIgnoreCollection("connections");
-```
diff --git a/site/content/3.11/aql/graphs/k-shortest-paths.md b/site/content/3.11/aql/graphs/k-shortest-paths.md
deleted file mode 100644
index 3a339ea9c9..0000000000
--- a/site/content/3.11/aql/graphs/k-shortest-paths.md
+++ /dev/null
@@ -1,308 +0,0 @@
----
-title: k Shortest Paths in AQL
-menuTitle: k Shortest Paths
-weight: 25
-description: >-
- Find a number of shortest paths in the order of increasing path length or weight
----
-## General query idea
-
-This type of query finds the first *k* paths in order of length
-(or weight) between two given documents (*startVertex* and *targetVertex*) in
-your graph.
-
-Every such path is returned as a JSON object with three components:
-
-- an array containing the `vertices` on the path
-- an array containing the `edges` on the path
-- the `weight` of the path, that is the sum of all edge weights
-
-If no `weightAttribute` is specified, the weight of the path is just its length.
-
-{{< youtube id="XdITulJFdVo" >}}
-
-**Example**
-
-Here is an example graph to explain how the k Shortest Paths algorithm works:
-
-
-
-Each ellipse stands for a train station with the name of the city written inside
-of it. They are the vertices of the graph. Arrows represent train connections
-between cities and are the edges of the graph. The numbers near the arrows
-describe how long it takes to get from one station to another. They are used
-as edge weights.
-
-Let us assume that you want to go from **Aberdeen** to **London** by train.
-
-You expect to see the following vertices on *the* shortest path, in this order:
-
-1. Aberdeen
-2. Leuchars
-3. Edinburgh
-4. York
-5. London
-
-By the way, the weight of the path is: 1.5 + 1.5 + 3.5 + 1.8 = **8.3**.
-
-Let us look at alternative paths next, for example because you know that the
-direct connection between York and London does not operate currently.
-An alternative path, which is slightly longer, goes like this:
-
-1. Aberdeen
-2. Leuchars
-3. Edinburgh
-4. York
-5. **Carlisle**
-6. **Birmingham**
-7. London
-
-Its weight is: 1.5 + 1.5 + 3.5 + 2.0 + 1.5 = **10.0**.
-
-Another route goes via Glasgow. There are seven stations on the path as well;
-however, it is quicker if you compare the edge weights:
-
-1. Aberdeen
-2. Leuchars
-3. Edinburgh
-4. **Glasgow**
-5. Carlisle
-6. Birmingham
-7. London
-
-The path weight is lower: 1.5 + 1.5 + 1.0 + 1.0 + 2.0 + 1.5 = **8.5**.
-
-## Syntax
-
-The syntax for k Shortest Paths queries is similar to the one for
-[Shortest Path](shortest-path.md), and there are also two options to
-either use a named graph or a set of edge collections. However, it only emits
-a path variable, whereas `SHORTEST_PATH` emits a vertex and an edge variable.
-
-{{< warning >}}
-It is highly recommended that you use a **LIMIT** statement, as
-k Shortest Paths is a potentially expensive operation. On large connected
-graphs it can return a large number of paths, or perform an expensive
-(but unsuccessful) search for more short paths.
-{{< /warning >}}
-
-### Working with named graphs
-
-```aql
-FOR path
- IN OUTBOUND|INBOUND|ANY K_SHORTEST_PATHS
- startVertex TO targetVertex
- GRAPH graphName
- [OPTIONS options]
- [LIMIT offset, count]
-```
-
-- `FOR`: Emits the variable **path** which contains one path as an object containing
- `vertices`, `edges`, and the `weight` of the path.
-- `IN` `OUTBOUND|INBOUND|ANY`: Defines in which direction
- edges are followed (outgoing, incoming, or both).
-- `K_SHORTEST_PATHS`: The keyword to compute k Shortest Paths
-- **startVertex** `TO` **targetVertex** (both string\|object): The two vertices between
- which the paths are computed. This can be specified in the form of
-  an ID string or in the form of a document with the attribute `_id`. All other
- values lead to a warning and an empty result. If one of the specified
- documents does not exist, the result is empty as well and there is no warning.
-- `GRAPH` **graphName** (string): The name identifying the named graph. Its vertex and
- edge collections are looked up by the path search.
-- `OPTIONS` **options** (object, *optional*):
- See the [path search options](#path-search-options).
-- `LIMIT` (see [LIMIT operation](../high-level-operations/limit.md), *optional*):
- the maximal number of paths to return. It is highly recommended to use
- a `LIMIT` for `K_SHORTEST_PATHS`.
-
-{{< info >}}
-k Shortest Paths traversals do not support negative weights. If a document
-attribute (as specified by `weightAttribute`) with a negative value is
-encountered during traversal, or if `defaultWeight` is set to a negative
-number, then the query is aborted with an error.
-{{< /info >}}
-
-### Working with collection sets
-
-```aql
-FOR path
- IN OUTBOUND|INBOUND|ANY K_SHORTEST_PATHS
- startVertex TO targetVertex
- edgeCollection1, ..., edgeCollectionN
- [OPTIONS options]
- [LIMIT offset, count]
-```
-
-Instead of `GRAPH graphName` you can specify a list of edge collections.
-The involved vertex collections are determined by the edges of the given
-edge collections.
-
-### Path search options
-
-You can optionally specify the following options to modify the execution of a
-graph path search. If you specify unknown options, query warnings are raised.
-
-#### `weightAttribute`
-
-A top-level edge attribute that should be used to read the edge weight (string).
-
-If the attribute does not exist or is not numeric, the `defaultWeight` is used
-instead.
-
-The attribute value must not be negative.
-
-#### `defaultWeight`
-
-This value is used as fallback if there is no `weightAttribute` in the
-edge document, or if it's not a number (number).
-
-The value must not be negative. The default is `1`.
-
-### Traversing in mixed directions
-
-For k shortest paths with a list of edge collections you can optionally specify the
-direction for some of the edge collections. Say for example you have three edge
-collections *edges1*, *edges2* and *edges3*, where in *edges2* the direction
-has no relevance, but in *edges1* and *edges3* the direction should be taken into
-account. In this case you can use `OUTBOUND` as general search direction and `ANY`
-specifically for *edges2* as follows:
-
-```aql
-FOR path IN OUTBOUND K_SHORTEST_PATHS
- startVertex TO targetVertex
- edges1, ANY edges2, edges3
-```
-
-All collections in the list that do not specify their own direction use the
-direction defined after `IN` (here: `OUTBOUND`). This allows you to use a different
-direction for each collection in your path search.
-
-## Examples
-
-You can load the `kShortestPathsGraph` example graph to get a named graph that
-reflects some possible train connections in Europe and North America.
-
-
-
-```js
----
-name: GRAPHKSP_01_create_graph
-description: ''
----
-~addIgnoreCollection("places");
-~addIgnoreCollection("connections");
-var examples = require("@arangodb/graph-examples/example-graph");
-var graph = examples.loadGraph("kShortestPathsGraph");
-db.places.toArray();
-db.connections.toArray();
-```
-
-Suppose you want to query a route from **Aberdeen** to **London**, and
-compare the outputs of `SHORTEST_PATH` and `K_SHORTEST_PATHS` with
-`LIMIT 1`. Note that while `SHORTEST_PATH` and `K_SHORTEST_PATHS` with
-`LIMIT 1` should return a path of the same length (or weight), they do
-not need to return the same path.
-
-Using `SHORTEST_PATH`:
-
-```aql
----
-name: GRAPHKSP_01_Aberdeen_to_London
-description: ''
-dataset: kShortestPathsGraph
----
-FOR v, e IN OUTBOUND SHORTEST_PATH 'places/Aberdeen' TO 'places/London'
-GRAPH 'kShortestPathsGraph'
- RETURN { place: v.label, travelTime: e.travelTime }
-```
-
-Using `K_SHORTEST_PATHS`:
-
-```aql
----
-name: GRAPHKSP_02_Aberdeen_to_London
-description: ''
-dataset: kShortestPathsGraph
----
-FOR p IN OUTBOUND K_SHORTEST_PATHS 'places/Aberdeen' TO 'places/London'
-GRAPH 'kShortestPathsGraph'
- LIMIT 1
- RETURN { places: p.vertices[*].label, travelTimes: p.edges[*].travelTime }
-```
-
-With `K_SHORTEST_PATHS`, you can ask for more than one option for a route:
-
-```aql
----
-name: GRAPHKSP_03_Aberdeen_to_London
-description: ''
-dataset: kShortestPathsGraph
----
-FOR p IN OUTBOUND K_SHORTEST_PATHS 'places/Aberdeen' TO 'places/London'
-GRAPH 'kShortestPathsGraph'
- LIMIT 3
- RETURN {
- places: p.vertices[*].label,
- travelTimes: p.edges[*].travelTime,
- travelTimeTotal: SUM(p.edges[*].travelTime)
- }
-```
-
-If you ask for routes that don't exist, you get an empty result
-(from **Aberdeen** to **Toronto**):
-
-```aql
----
-name: GRAPHKSP_04_Aberdeen_to_Toronto
-description: ''
-dataset: kShortestPathsGraph
----
-FOR p IN OUTBOUND K_SHORTEST_PATHS 'places/Aberdeen' TO 'places/Toronto'
-GRAPH 'kShortestPathsGraph'
- LIMIT 3
- RETURN {
- places: p.vertices[*].label,
- travelTimes: p.edges[*].travelTime,
- travelTimeTotal: SUM(p.edges[*].travelTime)
- }
-```
-
-You can use the `travelTime` attribute that connections have as edge weights to
-take into account which connections are quicker. A high default weight is set,
-to be used if an edge has no `travelTime` attribute (not the case with the
-example graph). This returns the top three routes with the fewest changes
-and favoring the least travel time for the connection **Saint Andrews**
-to **Cologne**:
-
-```aql
----
-name: GRAPHKSP_05_StAndrews_to_Cologne
-description: ''
-dataset: kShortestPathsGraph
----
-FOR p IN OUTBOUND K_SHORTEST_PATHS 'places/StAndrews' TO 'places/Cologne'
-GRAPH 'kShortestPathsGraph'
-OPTIONS {
- weightAttribute: 'travelTime',
- defaultWeight: 15
-}
- LIMIT 3
- RETURN {
- places: p.vertices[*].label,
- travelTimes: p.edges[*].travelTime,
- travelTimeTotal: SUM(p.edges[*].travelTime)
- }
-```
-
-And finally clean up by removing the named graph:
-
-```js
----
-name: GRAPHKSP_99_drop_graph
-description: ''
----
-var examples = require("@arangodb/graph-examples/example-graph");
-examples.dropGraph("kShortestPathsGraph");
-~removeIgnoreCollection("places");
-~removeIgnoreCollection("connections");
-```
diff --git a/site/content/3.11/aql/graphs/shortest-path.md b/site/content/3.11/aql/graphs/shortest-path.md
deleted file mode 100644
index bfc7e0fa5c..0000000000
--- a/site/content/3.11/aql/graphs/shortest-path.md
+++ /dev/null
@@ -1,228 +0,0 @@
----
-title: Shortest Path in AQL
-menuTitle: Shortest Path
-weight: 15
-description: >-
- Find one path of shortest length between two vertices
----
-## General query idea
-
-This type of query finds the shortest path between two given documents
-(*startVertex* and *targetVertex*) in your graph. If there are multiple
-shortest paths, the path with the lowest weight or a random one (in case
-of a tie) is returned.
-
-The shortest path search emits the following two variables for every step of
-the path:
-
-1. The vertex on this path.
-2. The edge pointing to it.
-
-### Example execution
-
-Let's take a look at a simple example to explain how it works.
-This is the graph that you are going to find a shortest path on:
-
-
-
-You can use the following parameters for the query:
-
-1. You start at the vertex **A**.
-2. You finish with the vertex **D**.
-
-So, obviously, you have the vertices **A**, **B**, **C** and **D** on the
-shortest path in exactly this order. Then, the shortest path statement
-returns the following pairs:
-
-| Vertex | Edge |
-|--------|-------|
-| A | null |
-| B | A → B |
-| C | B → C |
-| D | C → D |
-
-Note that the first edge is always `null` because there is no edge pointing
-to the *startVertex*.
-
-## Syntax
-
-The next step is to see how you can write a shortest path query.
-You have two options here: you can either use a named graph or a set of edge
-collections (anonymous graph).
-
-### Working with named graphs
-
-```aql
-FOR vertex[, edge]
- IN OUTBOUND|INBOUND|ANY SHORTEST_PATH
- startVertex TO targetVertex
- GRAPH graphName
- [OPTIONS options]
-```
-
-- `FOR`: Emits up to two variables:
- - **vertex** (object): The current vertex on the shortest path
- - **edge** (object, *optional*): The edge pointing to the vertex
-- `IN` `OUTBOUND|INBOUND|ANY`: Defines in which direction edges are followed
- (outgoing, incoming, or both)
-- **startVertex** `TO` **targetVertex** (both string\|object): The two vertices between
- which the shortest path is computed. This can be specified in the form of
- an ID string or in the form of a document with the attribute `_id`. All other
- values lead to a warning and an empty result. If one of the specified
- documents does not exist, the result is empty as well and there is no warning.
-- `GRAPH` **graphName** (string): The name identifying the named graph. Its vertex and
- edge collections are looked up for the path search.
-- `OPTIONS` **options** (object, *optional*):
- See the [path search options](#path-search-options).
-
-{{< info >}}
-Shortest Path traversals do not support negative weights. If a document
-attribute (as specified by `weightAttribute`) with a negative value is
-encountered during traversal, or if `defaultWeight` is set to a negative
-number, then the query is aborted with an error.
-{{< /info >}}
-
-### Working with collection sets
-
-```aql
-FOR vertex[, edge]
- IN OUTBOUND|INBOUND|ANY SHORTEST_PATH
- startVertex TO targetVertex
- edgeCollection1, ..., edgeCollectionN
- [OPTIONS options]
-```
-
-Instead of `GRAPH graphName` you may specify a list of edge collections (anonymous
-graph). The involved vertex collections are determined by the edges of the given
-edge collections. The rest of the behavior is similar to the named version.
-
-### Path search options
-
-You can optionally specify the following options to modify the execution of a
-graph path search. If you specify unknown options, query warnings are raised.
-
-#### `weightAttribute`
-
-A top-level edge attribute that should be used to read the edge weight (string).
-
-If the attribute does not exist or is not numeric, the `defaultWeight` is used
-instead.
-
-The attribute value must not be negative.
-
-#### `defaultWeight`
-
-This value is used as fallback if there is no `weightAttribute` in the
-edge document, or if it's not a number (number).
-
-The value must not be negative. The default is `1`.
-
-### Traversing in mixed directions
-
-For shortest path with a list of edge collections you can optionally specify the
-direction for some of the edge collections. Say for example you have three edge
-collections *edges1*, *edges2* and *edges3*, where in *edges2* the direction
-has no relevance, but in *edges1* and *edges3* the direction should be taken into
-account. In this case you can use `OUTBOUND` as general search direction and `ANY`
-specifically for *edges2* as follows:
-
-```aql
-FOR vertex IN OUTBOUND SHORTEST_PATH
- startVertex TO targetVertex
- edges1, ANY edges2, edges3
-```
-
-All collections in the list that do not specify their own direction use the
-direction defined after `IN` (here: `OUTBOUND`). This allows you to use a different
-direction for each collection in your path search.
-
-## Conditional shortest path
-
-The `SHORTEST_PATH` computation only finds an unconditioned shortest path.
-With this construct it is not possible to define a condition like: "Find the
-shortest path where all edges are of type *X*". If you want to do this, use a
-normal [Traversal](traversals.md) instead with the option
-`{order: "bfs"}` in combination with `LIMIT 1`.
-
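-For example, a minimal sketch using the `traversalGraph` from the examples
-below: find one shortest path from **A** to **D** that avoids all edges
-labeled `right_foo` (a label that exists in this example graph). The
-breadth-first order together with `LIMIT 1` makes the first match a shortest
-one:
-
-```js
-db._query(`
-  FOR v, e, p IN 1..10 OUTBOUND 'circles/A' GRAPH 'traversalGraph'
-    OPTIONS { order: "bfs" }
-    FILTER p.edges[*].label NONE == 'right_foo'
-    FILTER v._key == 'D'
-    LIMIT 1
-    RETURN p.vertices[*]._key
-`);
-```
-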
-Please also consider using [`WITH`](../high-level-operations/with.md) to specify the
-collections you expect to be involved.
-
-## Examples
-
-Creating a simple symmetric traversal demonstration graph:
-
-
-
-```js
----
-name: GRAPHSP_01_create_graph
-description: ''
----
-~addIgnoreCollection("circles");
-~addIgnoreCollection("edges");
-var examples = require("@arangodb/graph-examples/example-graph");
-var graph = examples.loadGraph("traversalGraph");
-db.circles.toArray();
-db.edges.toArray();
-```
-
-Start with the shortest path from **A** to **D** as above:
-
-```js
----
-name: GRAPHSP_02_A_to_D
-description: ''
----
-db._query(`
- FOR v, e IN OUTBOUND SHORTEST_PATH 'circles/A' TO 'circles/D' GRAPH 'traversalGraph'
- RETURN [v._key, e._key]
-`);
-
-db._query(`
- FOR v, e IN OUTBOUND SHORTEST_PATH 'circles/A' TO 'circles/D' edges
- RETURN [v._key, e._key]
-`);
-```
-
-You can see that expectations are fulfilled. You find the vertices in the
-correct ordering and the first edge is `null`, because no edge is pointing
-to the start vertex on this path.
-
-You can also compute shortest paths based on documents found in collections:
-
-```js
----
-name: GRAPHSP_03_A_to_D
-description: ''
----
-db._query(`
- FOR a IN circles
- FILTER a._key == 'A'
- FOR d IN circles
- FILTER d._key == 'D'
- FOR v, e IN OUTBOUND SHORTEST_PATH a TO d GRAPH 'traversalGraph'
- RETURN [v._key, e._key]
-`);
-
-db._query(`
- FOR a IN circles
- FILTER a._key == 'A'
- FOR d IN circles
- FILTER d._key == 'D'
- FOR v, e IN OUTBOUND SHORTEST_PATH a TO d edges
- RETURN [v._key, e._key]
-`);
-```
-
-And finally clean it up again:
-
-```js
----
-name: GRAPHSP_99_drop_graph
-description: ''
----
-var examples = require("@arangodb/graph-examples/example-graph");
-examples.dropGraph("traversalGraph");
-~removeIgnoreCollection("circles");
-~removeIgnoreCollection("edges");
-```
diff --git a/site/content/3.11/aql/graphs/traversals-explained.md b/site/content/3.11/aql/graphs/traversals-explained.md
deleted file mode 100644
index a211ae6087..0000000000
--- a/site/content/3.11/aql/graphs/traversals-explained.md
+++ /dev/null
@@ -1,85 +0,0 @@
----
-title: AQL graph traversals explained
-menuTitle: Traversals explained
-weight: 5
-description: >-
- Traversing a graph means following the edges connected to a start vertex and
- neighboring vertices up to a specified depth
----
-## General query idea
-
-A traversal starts at one specific document (*startVertex*) and follows all
-edges connected to this document. For all documents (*vertices*) that are
-targeted by these edges it will again follow all edges connected to them and
-so on. It is possible to define how many of these follow iterations should be
-executed at least (*min* depth) and at most (*max* depth).
-
-For all vertices that were visited during this process in the range between
-*min* depth and *max* depth iterations, you will get a result in the form of a
-set with three items:
-
-1. The visited vertex.
-2. The edge pointing to it.
-3. The complete path from startVertex to the visited vertex as object with an
- attribute *edges* and an attribute *vertices*, each a list of the corresponding
- elements. These lists are sorted, which means the first element in *vertices*
- is the *startVertex* and the last is the visited vertex, and the n-th element
- in *edges* connects the n-th element with the (n+1)-th element in *vertices*.
-
-## Example execution
-
-Let's take a look at a simple example to explain how it works.
-This is the graph that we are going to traverse:
-
-
-
-We use the following parameters for our query:
-
-1. We start at the vertex **A**.
-2. We use a *min* depth of 1.
-3. We use a *max* depth of 2.
-4. We follow edges in the `OUTBOUND` direction only
-
-
-
-Now it walks to one of the direct neighbors of **A**, say **B** (note: ordering
-is not guaranteed!):
-
-
-
-The query will remember the state (red circle) and will emit the first result
-**A** → **B** (black box). This also prevents the traverser from being trapped
-in cycles. Now it again visits one of the direct neighbors of **B**, say **E**:
-
-
-
-We have limited the query with a *max* depth of *2*, so it will not pick any
-neighbor of **E**, as the path from **A** to **E** already requires *2* steps.
-Instead, we will go back one level to **B** and continue with any other direct
-neighbor there:
-
-
-
-Again, after we have produced this result, we will step back to **B**.
-But there is no neighbor of **B** left that we have not yet visited.
-Hence we go another step back to **A** and continue with any other neighbor there.
-
-
-
-And identical to the iterations before we will visit **H**:
-
-
-
-And **J**:
-
-
-
-After these steps, there is no further result left. So altogether, this query
-has returned the following paths:
-
-1. **A** → **B**
-2. **A** → **B** → **E**
-3. **A** → **B** → **C**
-4. **A** → **G**
-5. **A** → **G** → **H**
-6. **A** → **G** → **J**
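-
-The walkthrough above corresponds to the following query (a sketch using the
-`traversalGraph` example graph, whose vertices match this description; the
-order of the returned paths is not guaranteed):
-
-```js
-db._query(`
-  FOR v, e, p IN 1..2 OUTBOUND 'circles/A' GRAPH 'traversalGraph'
-    RETURN CONCAT_SEPARATOR(' -> ', p.vertices[*]._key)
-`);
-```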
diff --git a/site/content/3.11/aql/graphs/traversals.md b/site/content/3.11/aql/graphs/traversals.md
deleted file mode 100644
index 0048d9c38f..0000000000
--- a/site/content/3.11/aql/graphs/traversals.md
+++ /dev/null
@@ -1,890 +0,0 @@
----
-title: Graph traversals in AQL
-menuTitle: Traversals
-weight: 10
-description: >-
- You can traverse named graphs and anonymous graphs with a native AQL
- language construct
----
-## Syntax
-
-There are two slightly different syntaxes for traversals in AQL, one for
-[named graphs](../../graphs/_index.md#named-graphs) and another to specify a
-[set of edge collections](#working-with-collection-sets)
-([anonymous graph](../../graphs/_index.md#anonymous-graphs)).
-
-### Working with named graphs
-
-The syntax for AQL graph traversals using named graphs is as follows
-(square brackets denote optional parts and `|` denotes alternatives):
-
-```aql
-FOR vertex[, edge[, path]]
- IN [min[..max]]
- OUTBOUND|INBOUND|ANY startVertex
- GRAPH graphName
- [PRUNE [pruneVariable = ]pruneCondition]
- [OPTIONS options]
-```
-
-- `FOR`: emits up to three variables:
- - **vertex** (object): the current vertex in a traversal
- - **edge** (object, *optional*): the current edge in a traversal
- - **path** (object, *optional*): representation of the current path with
- two members:
- - `vertices`: an array of all vertices on this path
- - `edges`: an array of all edges on this path
-- `IN` `min..max`: the minimal and maximal depth for the traversal:
- - **min** (number, *optional*): edges and vertices returned by this query
- start at the traversal depth of *min* (thus edges and vertices below it are
- not returned). If not specified, it defaults to 1. The minimal
- possible value is 0.
- - **max** (number, *optional*): up to *max* length paths are traversed.
- If omitted, *max* defaults to *min*. Thus only the vertices and edges in
- the range of *min* are returned. *max* cannot be specified without *min*.
-- `OUTBOUND|INBOUND|ANY`: follow outgoing, incoming, or edges pointing in either
- direction in the traversal. Note that this can't be replaced by a bind parameter.
-- **startVertex** (string\|object): a vertex where the traversal originates from.
- This can be specified in the form of an ID string or in the form of a document
- with the `_id` attribute. All other values lead to a warning and an empty
- result. If the specified document does not exist, the result is empty as well
- and there is no warning.
-- `GRAPH` **graphName** (string): the name identifying the named graph.
- Its vertex and edge collections are looked up. Note that the graph name
- is like a regular string, hence it must be enclosed by quote marks, like
- `GRAPH "graphName"`.
-- `PRUNE` **expression** (AQL expression, *optional*):
- An expression, like in a `FILTER` statement, which is evaluated in every step of
- the traversal, as early as possible. The semantics of this expression are as follows:
- - If the expression evaluates to `false`, the traversal continues on the current path.
- - If the expression evaluates to `true`, the traversal does not continue on the
- current path. However, the paths up to this point are considered as a result
- (they might still be post-filtered or ignored due to depth constraints).
- For example, a traversal over the graph `(A) -> (B) -> (C)` starting at `A`
- and pruning on `B` results in `(A)` and `(A) -> (B)` being valid paths,
- whereas `(A) -> (B) -> (C)` is not returned because it gets pruned on `B`.
-
- You can only use a single `PRUNE` clause per `FOR` traversal operation, but
- the prune expression can contain an arbitrary number of conditions using `AND`
- and `OR` statements for complex expressions. You can use the variables emitted
- by the `FOR` operation in the prune expression, as well as all variables
- defined before the traversal.
-
- You can optionally assign the prune expression to a variable like
- `PRUNE var = <condition>` to use the evaluated result elsewhere in the query,
- typically in a `FILTER` expression.
-
- See [Pruning](#pruning) for details.
-- `OPTIONS` **options** (object, *optional*): See the [traversal options](#traversal-options).
-
-### Working with collection sets
-
-The syntax for AQL graph traversals using collection sets is as follows
-(square brackets denote optional parts and `|` denotes alternatives):
-
-```aql
-[WITH vertexCollection1[, vertexCollection2[, vertexCollectionN]]]
-FOR vertex[, edge[, path]]
- IN [min[..max]]
- OUTBOUND|INBOUND|ANY startVertex
- edgeCollection1[, edgeCollection2[, edgeCollectionN]]
- [PRUNE [pruneVariable = ]pruneCondition]
- [OPTIONS options]
-```
-
-- `WITH`: Declaration of collections. Optional for single server instances, but
- required for [graph traversals in a cluster](#graph-traversals-in-a-cluster).
- Needs to be placed at the very beginning of the query.
- - **collections** (collection, *repeatable*): list of vertex collections that
- are involved in the traversal
-- **edgeCollections** (collection, *repeatable*): One or more edge collections
- to use for the traversal (instead of using a named graph with `GRAPH graphName`).
- Vertex collections are determined by the edges in the edge collections.
-
- You can override the default traversal direction by setting `OUTBOUND`,
- `INBOUND`, or `ANY` before any of the edge collections.
-
- If the same edge collection is specified multiple times, it behaves as if it
- were specified only once. Specifying the same edge collection is only allowed
- when the collections do not have conflicting traversal directions.
-
- Views cannot be used as edge collections.
-- See the [named graph variant](#working-with-named-graphs) for the remaining
- traversal parameters as well as the [traversal options](#traversal-options).
- The `edgeCollections` restriction option is redundant in this case.
-
-### Traversal options
-
-You can optionally specify the following options to modify the execution of a
-graph traversal. If you specify unknown options, query warnings are raised.
-
-#### `order`
-
-Specify which traversal algorithm to use (string):
-- `"bfs"` – the traversal is executed breadth-first. The results
- first contain all vertices at depth 1, then all vertices at depth 2 and so on.
-- `"dfs"` (default) – the traversal is executed depth-first. It
- first returns all paths from *min* depth to *max* depth for one vertex at
- depth 1, then for the next vertex at depth 1 and so on.
-- `"weighted"` – the traversal is a weighted traversal
- (introduced in v3.8.0). Paths are enumerated with increasing cost.
- Also see `weightAttribute` and `defaultWeight`. A returned path has an
- additional attribute `weight` containing the cost of the path after every
- step. The order of paths having the same cost is non-deterministic.
- Negative weights are not supported and abort the query with an error.
-
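-For example, a sketch of a weighted traversal over the train connection
-dataset that is also used in the pruning examples below; routes from
-**London** are enumerated by increasing total travel time:
-
-```js
-db._query(`
-  FOR v, e, p IN 1..3 OUTBOUND 'places/London' GRAPH 'kShortestPathsGraph'
-    OPTIONS { order: "weighted", weightAttribute: "travelTime", uniqueVertices: "path" }
-    RETURN CONCAT_SEPARATOR(" -- ", p.vertices[*].label)
-`);
-```
-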
-#### `bfs`
-
-Deprecated, use `order: "bfs"` instead.
-
-#### `uniqueVertices`
-
-Ensure vertex uniqueness (string):
-
-- `"path"` – it is guaranteed that there is no path returned with a duplicate vertex
-- `"global"` – it is guaranteed that each vertex is visited at most once during
- the traversal, no matter how many paths lead from the start vertex to this one.
- If you start with a *min* depth greater than 1, a vertex that was found before
- the *min* depth might not be returned at all (but it might still be part of a
- path). It is required to set `order: "bfs"` or `order: "weighted"` because
- with depth-first search the results would be unpredictable (see the sketch
- after this list). **Note:** Using this configuration, the result is no longer
- deterministic. If there are multiple paths from *startVertex* to *vertex*,
- one of them is picked. In case of a `weighted` traversal, the path with the
- lowest weight is picked, but in case of equal weights it is undefined which
- one is chosen.
-- `"none"` (default) – no uniqueness check is applied on vertices
-
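-A sketch of a global uniqueness traversal (as referenced in the `"global"`
-item above); note the required `order: "bfs"`:
-
-```js
-db._query(`
-  FOR v IN 1..3 OUTBOUND 'circles/A' GRAPH 'traversalGraph'
-    OPTIONS { order: "bfs", uniqueVertices: "global" }
-    RETURN v._key
-`);
-```
-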
-#### `uniqueEdges`
-
-Ensure edge uniqueness (string):
-
-- `"path"` (default) – it is guaranteed that there is no path returned with a
- duplicate edge
-- `"none"` – no uniqueness check is applied on edges. **Note:**
- Using this configuration, the traversal follows edges in cycles.
-
-#### `edgeCollections`
-
-Restrict edge collections the traversal may visit (string\|array).
-
-If omitted or an empty array is specified, then there are no restrictions.
-
-- A string parameter is treated as the equivalent of an array with a single
- element.
-- Each element of the array should be a string containing the name of an
- edge collection.
-
-#### `vertexCollections`
-
-Restrict vertex collections the traversal may visit (string\|array).
-
-If omitted or an empty array is specified, then there are no restrictions.
-
-- A string parameter is treated as the equivalent of an array with a single
- element.
-- Each element of the array should be a string containing the name of a
- vertex collection.
-- The starting vertex is always allowed, even if it does not belong to one
- of the collections specified by a restriction.
-
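-A sketch combining both restriction options, with the graph and collection
-names used in the examples on this page:
-
-```js
-db._query(`
-  FOR v IN 1..3 OUTBOUND 'circles/A' GRAPH 'traversalGraph'
-    OPTIONS { vertexCollections: ["circles"], edgeCollections: ["edges"] }
-    RETURN v._key
-`);
-```
-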
-#### `parallelism`
-
-{{< tag "ArangoDB Enterprise Edition" "ArangoGraph" >}}
-
-Parallelize traversal execution (number).
-
-If omitted or set to a value of `1`, the traversal execution is not parallelized.
-If set to a value greater than `1`, then up to that many worker threads can be
-used for concurrently executing the traversal. The value is capped by the number
-of available cores on the target machine.
-
-Parallelizing a traversal is normally useful when there are many inputs (start
-vertices) that the nested traversal can work on concurrently. This is often the
-case when a nested traversal is fed with several tens of thousands of start
-vertices, which can then be distributed randomly to worker threads for parallel
-execution.
-
-#### `maxProjections`
-
-{{< tag "ArangoDB Enterprise Edition" "ArangoGraph" >}}
-
-Specifies the number of document attributes per `FOR` loop to be used as
-projections (number). The default value is `5`.
-
-#### `weightAttribute`
-
-Specifies the name of an attribute that is used to look up the weight of an edge
-(string).
-
-If no attribute is specified or if it is not present in the edge document then
-the `defaultWeight` is used.
-
-The attribute value must not be negative.
-
-{{< info >}}
-Weighted traversals do not support negative weights. If a document
-attribute (as specified by `weightAttribute`) with a negative value is
-encountered during traversal, the query is aborted with an error.
-{{< /info >}}
-
-#### `defaultWeight`
-
-Specifies the default weight of an edge (number). The default value is `1`.
-
-The value must not be negative.
-
-{{< info >}}
-Weighted traversals do not support negative weights. If `defaultWeight` is set
-to a negative number, then the query is aborted with an error.
-{{< /info >}}
-
-### Traversing in mixed directions
-
-For traversals with a list of edge collections you can optionally specify the
-direction for some of the edge collections. Say for example you have three edge
-collections *edges1*, *edges2* and *edges3*, where in *edges2* the direction has
-no relevance but in *edges1* and *edges3* the direction should be taken into account.
-In this case you can use `OUTBOUND` as general traversal direction and `ANY`
-specifically for *edges2* as follows:
-
-```aql
-FOR vertex IN OUTBOUND
- startVertex
- edges1, ANY edges2, edges3
-```
-
-All collections in the list that do not specify their own direction use the
-direction defined after `IN`. This allows you to use a different direction for each
-collection in your traversal.
-
-### Graph traversals in a cluster
-
-Due to the nature of graphs, edges may reference vertices from arbitrary
-collections. Following the paths can thus involve documents from various
-collections and it is not possible to predict which are visited in a
-traversal. Which collections need to be loaded by the graph engine can only be
-determined at run time.
-
-Use the [`WITH` statement](../high-level-operations/with.md) to specify the collections you
-expect to be involved. This is required for traversals using collection sets
-in cluster deployments.
-
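-A minimal sketch of such a declaration, using the collections from the
-examples on this page:
-
-```js
-db._query(`
-  WITH circles
-  FOR v IN 1..3 OUTBOUND 'circles/A' edges
-    RETURN v._key
-`);
-```
-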
-## Pruning
-
-You can define stop conditions for graph traversals to return specific data and
-to improve the query performance. This is called _pruning_ and works by checking
-conditions during the traversal as opposed to filtering the results afterwards
-(post-filtering). This reduces the amount of data to be checked by stopping the
-traversal down specific paths early.
-
-{{< youtube id="4LVeeC0ciCQ" >}}
-
-You can specify one `PRUNE` expression per graph traversal, but it can contain
-an arbitrary number of conditions. You can use the vertex, edge, and path
-variables emitted by the traversal in a prune expression, as well as all other
-variables defined before the `FOR` operation. Note that `PRUNE` is an optional
-clause of the `FOR` operation and that the `OPTIONS` clause needs to be placed
-after `PRUNE`.
-
-```aql
----
-name: GRAPHTRAV_graphPruneExample1
-description: ''
-dataset: kShortestPathsGraph
----
-FOR v, e, p IN 0..10 OUTBOUND "places/Toronto" GRAPH "kShortestPathsGraph"
- PRUNE v.label == "Edmonton"
- OPTIONS { uniqueVertices: "path" }
- RETURN CONCAT_SEPARATOR(" -- ", p.vertices[*].label)
-```
-
-The above example shows a graph traversal using a
-[train station and connections dataset](../../graphs/example-graphs.md#k-shortest-paths-graph):
-
-
-
-The traversal starts at **Toronto** (bottom left), the traversal depth is
-limited to 10, and every station is only visited once. The traversal could
-continue up to **Vancouver** (bottom right) at depth 5, but it is stopped early
-on this path (the only path in this example) at **Edmonton** because of the
-prune expression.
-
-The traversal along paths is stopped as soon as the prune expression evaluates
-to `true` for a given path. The current depth is still included in the result,
-however. This can be seen in the query result of the example which includes the
-Edmonton vertex at which it stopped.
-
-The following example starts a traversal at **London** (middle right), with a
-depth between 2 and 3, and every station is only visited once. The station names
-as well as the travel times are returned:
-
-```aql
----
-name: GRAPHTRAV_graphPruneExample2
-description: ''
-dataset: kShortestPathsGraph
----
-FOR v, e, p IN 2..3 OUTBOUND "places/London" GRAPH "kShortestPathsGraph"
- OPTIONS { uniqueVertices: "path" }
- RETURN CONCAT_SEPARATOR(" -- ", INTERLEAVE(p.vertices[*].label, p.edges[*].travelTime))
-```
-
-The same example with an added prune expression, with vertex and edge conditions:
-
-```aql
----
-name: GRAPHTRAV_graphPruneExample3
-description: ''
-dataset: kShortestPathsGraph
----
-FOR v, e, p IN 2..3 OUTBOUND "places/London" GRAPH "kShortestPathsGraph"
- PRUNE v.label == "Carlisle" OR e.travelTime > 3
- OPTIONS { uniqueVertices: "path" }
- RETURN CONCAT_SEPARATOR(" -- ", INTERLEAVE(p.vertices[*].label, p.edges[*].travelTime))
-```
-
-If either the **Carlisle** vertex or an edge with a travel time of over three
-hours is encountered, the subsequent paths are pruned. In the example, this
-removes the train connections to **Birmingham**, **Glasgow**, and **York**,
-which come after **Carlisle**, as well as the connections to and via
-**Edinburgh** because of the 3.5-hour duration for the section from **York**
-to **Edinburgh**.
-
-If your graph consists of multiple vertex or edge collections, you can
-also prune as soon as you reach a certain collection, using a condition like
-`PRUNE IS_SAME_COLLECTION("stopCollection", v)`.
-
-If you want to only return the results of the depth at which the traversal
-stopped due to the prune expression, you can use a `FILTER` in addition. You can
-assign the evaluated result of a prune expression to a variable
-(`PRUNE var = <condition>`) and use it for filtering:
-
-```aql
----
-name: GRAPHTRAV_graphPruneExample4
-description: ''
-dataset: kShortestPathsGraph
----
-FOR v, e, p IN 2..3 OUTBOUND "places/London" GRAPH "kShortestPathsGraph"
- PRUNE cond = v.label == "Carlisle" OR e.travelTime > 3
- OPTIONS { uniqueVertices: "path" }
- FILTER cond
- RETURN CONCAT_SEPARATOR(" -- ", INTERLEAVE(p.vertices[*].label, p.edges[*].travelTime))
-```
-
-Only paths that end at **Carlisle** or with the last edge having a travel time
-of over three hours are returned. This excludes the connection to **Cologne**
-from the results compared to the previous query.
-
-If you want to exclude the depth at which the prune expression stopped the
-traversal, you can assign the expression to a variable and use its negated value
-in a `FILTER`:
-
-```aql
----
-name: GRAPHTRAV_graphPruneExample5
-description: ''
-dataset: kShortestPathsGraph
----
-FOR v, e, p IN 2..3 OUTBOUND "places/London" GRAPH "kShortestPathsGraph"
- PRUNE cond = v.label == "Carlisle" OR e.travelTime > 3
- OPTIONS { uniqueVertices: "path" }
- FILTER NOT cond
- RETURN CONCAT_SEPARATOR(" -- ", INTERLEAVE(p.vertices[*].label, p.edges[*].travelTime))
-```
-
-This only returns the connection to **Cologne**, which is the opposite of the
-previous example.
-
-You may combine the prune variable with arbitrary other conditions in a `FILTER`
-operation. For example, you can remove results where the last edge has a lower
-travel time than the second-to-last edge of the path:
-
-```aql
----
-name: GRAPHTRAV_graphPruneExample6
-description: ''
-dataset: kShortestPathsGraph
----
-FOR v, e, p IN 2..5 OUTBOUND "places/London" GRAPH "kShortestPathsGraph"
- PRUNE cond = v.label == "Carlisle" OR e.travelTime > 3
- OPTIONS { uniqueVertices: "path" }
- FILTER cond AND p.edges[-1].travelTime >= p.edges[-2].travelTime
- RETURN CONCAT_SEPARATOR(" -- ", INTERLEAVE(p.vertices[*].label, p.edges[*].travelTime))
-```
-
-{{< info >}}
-The prune expression is **evaluated at every step of the traversal**. This
-includes any traversal depths below the specified minimum depth, despite not
-becoming part of the result. It also includes depth 0, which is the start vertex
-and a `null` edge.
-
-If you add prune conditions using the edge variable, make sure to account for
-the edge at depth 0 being `null`, as it may accidentally stop the traversal
-immediately. This may not be apparent due to the depth constraints.
-{{< /info >}}
-
-The following example shows a graph traversal starting at **London**, with a
-traversal depth between 2 and 3, and every station is only visited once:
-
-```aql
----
-name: GRAPHTRAV_graphPruneExample7
-description: ''
-dataset: kShortestPathsGraph
----
-FOR v, e, p IN 2..3 OUTBOUND "places/London" GRAPH "kShortestPathsGraph"
- OPTIONS { uniqueVertices: "path" }
- RETURN CONCAT_SEPARATOR(" -- ", INTERLEAVE(p.vertices[*].label, p.edges[*].travelTime))
-```
-
-If you add prune conditions to stop the traversal if the station is **Glasgow**
-or the travel time is less than some number, no results are returned. This is even the
-case for a value of `2.5`, for which two paths exist that fulfill the criterion
-– to **Cologne** and **Carlisle**:
-
-```aql
----
-name: GRAPHTRAV_graphPruneExample8
-description: ''
-dataset: kShortestPathsGraph
----
-FOR v,e,p IN 2..3 OUTBOUND "places/London" GRAPH "kShortestPathsGraph"
- PRUNE v.label == "Glasgow" OR e.travelTime < 2.5
- OPTIONS { uniqueVertices: "path" }
- RETURN CONCAT_SEPARATOR(" -- ", INTERLEAVE(p.vertices[*].label, p.edges[*].travelTime))
-```
-
-The problem is that `null`, `false`, and `true` are all less than any number (`< 2.5`)
-because of AQL's [Type and value order](../fundamentals/type-and-value-order.md), and
-because the edge at depth 0 is always `null`. The prune condition is accidentally
-fulfilled at the start vertex, stopping the traversal too early. This similarly
-happens if you check an edge attribute for inequality (`!=`) and compare it to
-a string, for instance, which evaluates to `true` for the `null` value.
-
-The depth at which a traversal is stopped by pruning is considered as a result,
-but in the above example, the minimum depth of `2` filters the start vertex out.
-If you lower the minimum depth to `0`, you get **London** as the sole result.
-This confirms that the traversal stopped at the start vertex.
-
-To avoid this problem, exclude the `null` value. For example, you can use
-`e.travelTime > 0 AND e.travelTime < 2.5`, but more generic solutions are to
-exclude depth 0 from the check (`LENGTH(p.edges) > 0`) or to simply ignore the
-`null` edge (`e != null`):
-
-```aql
----
-name: GRAPHTRAV_graphPruneExample9
-description: ''
-dataset: kShortestPathsGraph
----
-FOR v,e,p IN 2..3 OUTBOUND "places/London" GRAPH "kShortestPathsGraph"
- PRUNE v.label == "Glasgow" OR (e != null AND e.travelTime < 2.5)
- OPTIONS { uniqueVertices: "path" }
- RETURN CONCAT_SEPARATOR(" -- ", INTERLEAVE(p.vertices[*].label, p.edges[*].travelTime))
-```
-
-{{< warning >}}
-You can use AQL functions in prune expressions but only those that can be
-executed on DB-Servers, regardless of your deployment mode. The following
-functions cannot be used in the expression:
-- `CALL()`
-- `APPLY()`
-- `DOCUMENT()`
-- `V8()`
-- `SCHEMA_GET()`
-- `SCHEMA_VALIDATE()`
-- `VERSION()`
-- `COLLECTIONS()`
-- `CURRENT_USER()`
-- `CURRENT_DATABASE()`
-- `COLLECTION_COUNT()`
-- `NEAR()`
-- `WITHIN()`
-- `WITHIN_RECTANGLE()`
-- `FULLTEXT()`
-- [User-defined functions (UDFs)](../user-defined-functions.md)
-{{< /warning >}}
-
-## Using filters
-
-All three variables emitted by the traversals can also be used in filter
-statements. For some of these filter statements the optimizer can detect that it
-is possible to prune paths of traversals earlier, hence filtered results are
-not emitted to the variables in the first place. This may significantly
-improve the performance of your query. Whenever a filter is not fulfilled,
-the complete set of `vertex`, `edge` and `path` is skipped. All paths
-with a length greater than the `max` depth are never computed.
-
-Filter conditions that are `AND`-combined can be optimized, but `OR`-combined
-conditions cannot.
-
-### Filtering on paths
-
-Filtering on paths allows for the second most powerful filtering and may have the
-second highest impact on performance. Using the path variable you can filter on
-specific iteration depths. You can filter for absolute positions in the path
-by specifying a positive number (which then qualifies for the optimizations),
-or relative positions to the end of the path by specifying a negative number.
-
-#### Filtering edges on the path
-
-This example traversal filters all paths where the start edge (index 0) has the
-attribute `theTruth` equal to `true`. The resulting paths are up to 5 items long:
-
-```aql
----
-name: GRAPHTRAV_graphFilterEdges
-description: ''
-dataset: traversalGraph
----
-FOR v, e, p IN 1..5 OUTBOUND 'circles/A' GRAPH 'traversalGraph'
- FILTER p.edges[0].theTruth == true
- RETURN { vertices: p.vertices[*]._key, edges: p.edges[*].label }
-```
-
-#### Filtering vertices on the path
-
-Similar to filtering the edges on the path, you can also filter the vertices:
-
-```aql
----
-name: GRAPHTRAV_graphFilterVertices
-description: ''
-dataset: traversalGraph
----
-FOR v, e, p IN 1..5 OUTBOUND 'circles/A' GRAPH 'traversalGraph'
- FILTER p.vertices[1]._key == "G"
- RETURN { vertices: p.vertices[*]._key, edges: p.edges[*].label }
-```
-
-#### Combining several filters
-
-You can combine filters in any way you like:
-
-```aql
----
-name: GRAPHTRAV_graphFilterCombine
-description: ''
-dataset: traversalGraph
----
-FOR v, e, p IN 1..5 OUTBOUND 'circles/A' GRAPH 'traversalGraph'
- FILTER p.edges[0].theTruth == true
- AND p.edges[1].theFalse == false
- FILTER p.vertices[1]._key == "G"
- RETURN { vertices: p.vertices[*]._key, edges: p.edges[*].label }
-```
-
-The query filters all paths where the first edge has the attribute
-`theTruth` equal to `true`, the first vertex is `"G"` and the second edge has
-the attribute `theFalse` equal to `false`. The resulting paths are up to
-5 items long.
-
-**Note**: Despite the `min` depth of 1, this only returns results of
-depth 2. This is because for all results in depth 1, the second edge does not
-exist and hence cannot fulfill the condition here.
-
-#### Filter on the entire path
-
-With the help of array comparison operators, filters can also be defined
-on the entire path, for example, that `ALL` edges should have `theTruth == true`:
-
-```aql
----
-name: GRAPHTRAV_graphFilterEntirePath
-description: ''
-dataset: traversalGraph
----
-FOR v, e, p IN 1..5 OUTBOUND 'circles/A' GRAPH 'traversalGraph'
- FILTER p.edges[*].theTruth ALL == true
- RETURN { vertices: p.vertices[*]._key, edges: p.edges[*].label }
-```
-
-Or `NONE` of the edges should have `theTruth == true`:
-
-```aql
----
-name: GRAPHTRAV_graphFilterPathEdges
-description: ''
-dataset: traversalGraph
----
-FOR v, e, p IN 1..5 OUTBOUND 'circles/A' GRAPH 'traversalGraph'
- FILTER p.edges[*].theTruth NONE == true
- RETURN { vertices: p.vertices[*]._key, edges: p.edges[*].label }
-```
-
-Both examples above are recognized by the optimizer and can potentially use
-indexes other than the edge index.
-
-It is also possible to define that at least one edge on the path has to fulfill the condition:
-
-```aql
----
-name: GRAPHTRAV_graphFilterPathAnyEdge
-description: ''
-dataset: traversalGraph
----
-FOR v, e, p IN 1..5 OUTBOUND 'circles/A' GRAPH 'traversalGraph'
- FILTER p.edges[*].theTruth ANY == true
- RETURN { vertices: p.vertices[*]._key, edges: p.edges[*].label }
-```
-
-It is guaranteed that at least one, but potentially more edges fulfill the condition.
-All of the above filters can be defined on vertices in the exact same way.
-
-### Filtering on the path vs. filtering on vertices or edges
-
-Filtering on the path influences the iteration on your graph. If certain
-conditions aren't met, the traversal may stop continuing along this path.
-
-In contrast, filters on the vertex or the edge only express whether you're
-interested in the actual value of these documents. Thus, they influence the
-list of returned documents (if you return `v` or `e`), similar to specifying a
-non-null `min` value. If you specify a `min` value of 2, the traversal over the
-first two nodes of these paths still has to be executed; you just won't see
-them in your result array.
-
-Likewise, with filters on vertices or edges, the traverser still has to walk
-along these nodes, since you may be interested in documents further down the path.
-
-### Examples
-
-Create a simple symmetric traversal demonstration graph:
-
-
-
-```js
----
-name: GRAPHTRAV_01_create_graph
-description: ''
----
-~addIgnoreCollection("circles");
-~addIgnoreCollection("edges");
-var examples = require("@arangodb/graph-examples/example-graph");
-var graph = examples.loadGraph("traversalGraph");
-db.circles.toArray();
-db.edges.toArray();
-print("once you don't need them anymore, clean them up:");
-examples.dropGraph("traversalGraph");
-```
-
-To get started, we select the full graph. For a better overview, we only return
-the vertex IDs:
-
-```aql
----
-name: GRAPHTRAV_02_traverse_all_a
-description: ''
-dataset: traversalGraph
----
-FOR v IN 1..3 OUTBOUND 'circles/A' GRAPH 'traversalGraph'
- RETURN v._key
-```
-
-```aql
----
-name: GRAPHTRAV_02_traverse_all_b
-description: ''
-dataset: traversalGraph
----
-FOR v IN 1..3 OUTBOUND 'circles/A' edges RETURN v._key
-```
-
-We can nicely see that it is heading for the first outer vertex, then goes back
-to the branch to descend into the next tree. After that, it returns to our start
-node to descend again. As we can see, both queries return the same result; the
-first one uses the named graph, the second uses the edge collections directly.
-
-Now we only want the elements of a specific depth (min = max = 2), the ones that
-are right behind the fork:
-
-```aql
----
-name: GRAPHTRAV_03_traverse_3a
-description: ''
-dataset: traversalGraph
----
-FOR v IN 2..2 OUTBOUND 'circles/A' GRAPH 'traversalGraph'
- RETURN v._key
-```
-
-```aql
----
-name: GRAPHTRAV_03_traverse_3b
-description: ''
-dataset: traversalGraph
----
-FOR v IN 2 OUTBOUND 'circles/A' GRAPH 'traversalGraph'
- RETURN v._key
-```
-
-As you can see, we can express this in two ways: with or without the `max` depth
-parameter.
-
-### Filter examples
-
-Now let's start to add some filters. We want to cut off the branch on the right
-side of the graph. We may filter in two ways:
-
-- we know the vertex at depth 1 has `_key` == `G`
-- we know the `label` attribute of the edge connecting **A** to **G** is `right_foo`
-
-```aql
----
-name: GRAPHTRAV_04_traverse_4a
-description: ''
-dataset: traversalGraph
----
-FOR v, e, p IN 1..3 OUTBOUND 'circles/A' GRAPH 'traversalGraph'
- FILTER p.vertices[1]._key != 'G'
- RETURN v._key
-```
-
-```aql
----
-name: GRAPHTRAV_04_traverse_4b
-description: ''
-dataset: traversalGraph
----
-FOR v, e, p IN 1..3 OUTBOUND 'circles/A' GRAPH 'traversalGraph'
- FILTER p.edges[0].label != 'right_foo'
- RETURN v._key
-```
-
-As we can see, all vertices behind **G** are skipped in both queries.
-The first filters on the vertex `_key`, the second on an edge label.
-Note again, as soon as a filter is not fulfilled for any of the three elements
-`v`, `e` or `p`, the complete set of these is excluded from the result.
-
-We also may combine several filters, for instance to filter out the right branch
-(**G**), and the **E** branch:
-
-```aql
----
-name: GRAPHTRAV_05_traverse_5a
-description: ''
-dataset: traversalGraph
----
-FOR v,e,p IN 1..3 OUTBOUND 'circles/A' GRAPH 'traversalGraph'
- FILTER p.vertices[1]._key != 'G'
- FILTER p.edges[1].label != 'left_blub'
- RETURN v._key
-```
-
-```aql
----
-name: GRAPHTRAV_05_traverse_5b
-description: ''
-dataset: traversalGraph
----
-FOR v,e,p IN 1..3 OUTBOUND 'circles/A' GRAPH 'traversalGraph'
- FILTER p.vertices[1]._key != 'G' AND p.edges[1].label != 'left_blub'
- RETURN v._key
-```
-
-As you can see, combining two `FILTER` statements with an `AND` has the same result.
-
-## Comparing OUTBOUND / INBOUND / ANY
-
-All our previous examples traversed the graph in `OUTBOUND` edge direction.
-You may, however, also want to traverse in the reverse direction (`INBOUND`) or
-both (`ANY`). Since `circles/A` only has outbound edges, we start our queries
-from `circles/E`:
-
-```aql
----
-name: GRAPHTRAV_06_traverse_6a
-description: ''
-dataset: traversalGraph
----
-FOR v IN 1..3 OUTBOUND 'circles/E' GRAPH 'traversalGraph'
- RETURN v._key
-```
-
-```aql
----
-name: GRAPHTRAV_06_traverse_6b
-description: ''
-dataset: traversalGraph
----
-FOR v IN 1..3 INBOUND 'circles/E' GRAPH 'traversalGraph'
- RETURN v._key
-```
-
-```aql
----
-name: GRAPHTRAV_06_traverse_6c
-description: ''
-dataset: traversalGraph
----
-FOR v IN 1..3 ANY 'circles/E' GRAPH 'traversalGraph'
- RETURN v._key
-```
-
-The first traversal only walks in the forward (`OUTBOUND`) direction.
-Therefore, from **E** we can only see **F**. Walking in reverse direction
-(`INBOUND`), we see the path to **A**: **B** → **A**.
-
-Walking in forward and reverse direction (`ANY`) we can see a more diverse result.
-First of all, we see the simple paths to **F** and **A**. However, these vertices
-have edges in other directions and they are traversed.
-
-**Note**: The traverser may use identical edges multiple times. For instance,
-if it walks from **E** to **F**, it continues to walk from **F** to **E**
-using the same edge once again. Due to this, we see duplicate nodes in the result.
-
-Please note that the direction can't be passed in by a bind parameter.
-
-## Use the AQL explainer for optimizations
-
-Now let's have a look at what the optimizer does behind the curtain and inspect
-traversal queries using [the explainer](../execution-and-performance/query-optimization.md):
-
-```aql
----
-name: GRAPHTRAV_07_traverse_7
-description: ''
-dataset: traversalGraph
-explain: true
----
-FOR v,e,p IN 1..3 OUTBOUND 'circles/A' GRAPH 'traversalGraph'
- LET localScopeVar = RAND() > 0.5
- FILTER p.edges[0].theTruth != localScopeVar
- RETURN v._key
-```
-
-```aql
----
-name: GRAPHTRAV_07_traverse_8
-description: ''
-dataset: traversalGraph
-explain: true
----
-FOR v,e,p IN 1..3 OUTBOUND 'circles/A' GRAPH 'traversalGraph'
- FILTER p.edges[0].label == 'right_foo'
- RETURN v._key
-```
-
-We now see two queries: In one we add a `localScopeVar` variable, which is outside
-the scope of the traversal itself; it is not known inside the traverser.
-Therefore, this filter can only be executed after the traversal, which may be
-undesired in large graphs. The second query on the other hand only operates on the
-path, and therefore this condition can be used during the execution of the traversal.
-Paths that are filtered out by this condition won't be processed at all.
-
-And finally clean it up again:
-
-```js
----
-name: GRAPHTRAV_99_drop_graph
-description: ''
----
-var examples = require("@arangodb/graph-examples/example-graph");
-~examples.loadGraph("traversalGraph");
-examples.dropGraph("traversalGraph");
-```
-
-If this traversal is not powerful enough for your needs, for example, if you
-cannot describe your conditions as AQL filter statements, then you might want
-to have a look at
-the [edge collection methods](../../develop/javascript-api/@arangodb/collection-object.md#edge-documents)
-in the JavaScript API.
-
-Also see how to [combine graph traversals](../examples-and-query-patterns/traversals.md).
diff --git a/site/content/3.11/aql/how-to-invoke-aql/with-arangosh.md b/site/content/3.11/aql/how-to-invoke-aql/with-arangosh.md
deleted file mode 100644
index 20c0a0b70f..0000000000
--- a/site/content/3.11/aql/how-to-invoke-aql/with-arangosh.md
+++ /dev/null
@@ -1,786 +0,0 @@
----
-title: Executing AQL queries from _arangosh_
-menuTitle: with arangosh
-weight: 5
-description: >-
- How to run queries, set bind parameters, and obtain the resulting and
- additional information using the JavaScript API
-# Undocumented on purpose:
-# db._query(<queryString>, <bindVars>, <mainOptions>, { forceOneShardAttributeValue: "..."} )
----
-In the ArangoDB shell, you can use the `db._query()` and `db._createStatement()`
-methods to execute AQL queries. This chapter also describes
-how to use bind parameters, counting, statistics and cursors.
-
-## With `db._query()`
-
-`db._query(<queryString>) → cursor`
-
-You can execute queries with the `_query()` method of the `db` object.
-This runs the specified query in the context of the currently
-selected database and returns the query results in a cursor.
-You can print the results of the cursor using its `toArray()` method:
-
-```js
----
-name: 01_workWithAQL_all
-description: ''
----
-~addIgnoreCollection("mycollection")
-var coll = db._create("mycollection")
-var doc = db.mycollection.save({ _key: "testKey", Hello : "World" })
-db._query('FOR my IN mycollection RETURN my._key').toArray()
-```
-
-### `db._query()` bind parameters
-
-`db._query(<queryString>, <bindVars>) → cursor`
-
-To pass bind parameters into a query, you can specify a second argument when
-calling the `_query()` method:
-
-```js
----
-name: 02_workWithAQL_bindValues
-description: ''
----
-db._query('FOR c IN @@collection FILTER c._key == @key RETURN c._key', {
- '@collection': 'mycollection',
- 'key': 'testKey'
-}).toArray();
-```
-
-### ES6 template strings
-
-`` aql`` ``
-
-It is also possible to use ES6 template strings for generating AQL queries. There is
-a template string generator function named `aql`.
-
-The following example demonstrates what the template string function generates:
-
-```js
----
-name: 02_workWithAQL_aqlTemplateString
-description: ''
----
-var key = 'testKey';
-aql`FOR c IN mycollection FILTER c._key == ${key} RETURN c._key`
-```
-
-The next example directly uses the generated result to execute a query:
-
-```js
----
-name: 02_workWithAQL_aqlQuery
-description: ''
----
-var key = 'testKey';
-db._query(
- aql`FOR c IN mycollection FILTER c._key == ${key} RETURN c._key`
-).toArray();
-```
-
-Arbitrary JavaScript expressions can be used in queries that are generated with the
-`aql` template string generator. Collection objects are handled automatically:
-
-```js
----
-name: 02_workWithAQL_aqlCollectionQuery
-description: ''
----
-var key = 'testKey';
-db._query(aql`FOR doc IN ${ db.mycollection } RETURN doc`).toArray();
-```
-
-Note: data-modification AQL queries normally do not return a result unless the
-AQL query contains a `RETURN` operation at the top-level. Without a `RETURN`
-operation, the `toArray()` method returns an empty array.
-
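-For example, a data-modification query returns the affected documents if you
-add a top-level `RETURN` (a sketch; `NEW` refers to each inserted document):
-
-```js
-db._query(`
-  INSERT { Hello: "AQL" } INTO mycollection
-  RETURN NEW
-`).toArray();
-```
-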
-### Statistics and extra information
-
-`cursor.getExtra() → queryInfo`
-
-It is always possible to retrieve statistics for a query with the `getExtra()` method:
-
-```js
----
-name: 03_workWithAQL_getExtra
-description: ''
----
-db._query(`
- FOR i IN 1..100
- INSERT { _key: CONCAT('test', TO_STRING(i)) } INTO mycollection
-`).getExtra();
-```
-
-The meaning of the statistics values is described in
-[Query statistics](../execution-and-performance/query-statistics.md).
-
-Query warnings are also reported here. If you design queries on the shell,
-be sure to check for warnings.
-
-### Main query options
-
-`db._query(<queryString>, <bindVars>, <mainOptions>, <subOptions>) → cursor`
-
-You can pass the main options as the third argument to `db._query()` if you
-also pass a fourth argument with the sub options (can be an empty object `{}`).
-
-#### `count`
-
-Whether the number of documents in the result set should be calculated on the
-server side and returned in the `count` attribute of the result. Calculating the
-`count` attribute might have a performance impact for some queries, so this
-option is turned off by default, and the count is only returned when requested.
-
-If enabled, you can get the count by calling the `count()` method of the cursor.
-You can also count the number of results on the client side, for example, using
-`cursor.toArray().length`.
-
-```js
----
-name: 02_workWithAQL_count
-description: ''
----
-var cursor = db._query(
- 'FOR i IN 1..42 RETURN i',
- {},
- { count: true },
- {}
-);
-cursor.count();
-cursor.toArray().length;
-```
-
-#### `batchSize`
-
-The maximum number of result documents to be transferred from the server to the
-client in one roundtrip. If this attribute is not set, a server-controlled
-default value is used. A `batchSize` value of `0` is disallowed.
-
-```js
----
-name: 02_workWithAQL_batchSize
-description: ''
----
-db._query(
- 'FOR i IN 1..3 RETURN i',
- {},
- { batchSize: 2 },
- {}
-).toArray(); // full result retrieved in two batches
-```
-
-#### `ttl`
-
-The time-to-live for the cursor (in seconds). If the result set is small enough
-(less than or equal to `batchSize`), then results are returned right away.
-Otherwise, they are stored in memory and are accessible via the cursor with
-respect to the `ttl`. The cursor is removed on the server automatically after
-the specified amount of time. This is useful to ensure garbage collection of
-cursors that are not fully fetched by clients. If not set, a server-defined
-value is used (default: 30 seconds).
-
-```js
----
-name: 02_workWithAQL_ttl
-description: ''
----
-db._query(
- 'FOR i IN 1..20 RETURN i',
- {},
- { ttl: 5, batchSize: 10 },
- {}
-).toArray(); // Each batch needs to be fetched within 5 seconds
-```
-
-#### `memoryLimit`
-
-To set a memory limit for the query, pass `options` to the `_query()` method.
-The memory limit specifies the maximum number of bytes that the query is
-allowed to use. When a single AQL query reaches the specified limit value,
-the query is aborted with a *resource limit exceeded* exception. In a
-cluster, the memory accounting is done per shard, so the limit value is
-effectively a memory limit per query per shard.
-
-```js
----
-name: 02_workWithAQL_memoryLimit
-description: ''
----
-db._query(
- 'FOR i IN 1..100000 SORT i RETURN i',
- {},
- { memoryLimit: 100000 }
-).toArray(); // xpError(ERROR_RESOURCE_LIMIT)
-```
-
-If no memory limit is specified, then the server default value (controlled by
-the `--query.memory-limit` startup option) is used for restricting the maximum amount
-of memory the query can use. A memory limit value of `0` means that the maximum
-amount of memory for the query is not restricted.
-
-### Query sub options
-
-`db._query(<queryString>, <bindVars>, <subOptions>) → cursor`
-
-`db._query(<queryString>, <bindVars>, <mainOptions>, <subOptions>) → cursor`
-
-You can pass the sub options as the third argument to `db._query()` if you don't
-provide main options, or as fourth argument if you do.
-
-#### `fullCount`
-
-If you set `fullCount` to `true` and the query contains a `LIMIT` operation,
-then the result has an extra attribute with the sub-attributes `stats` and
-`fullCount`, like `{ ... , "extra": { "stats": { "fullCount": 123 } } }`. The
-`fullCount` attribute contains the number of documents in the result before
-the last top-level `LIMIT` in the query was applied. You can use it to count
-the number of documents that match certain filter criteria while only
-returning a subset of them, in one go. It is thus similar to MySQL's
-`SQL_CALC_FOUND_ROWS` hint. Note that setting the option disables a few
-`LIMIT` optimizations and may lead to more documents being processed, making
-queries run longer. The `fullCount` attribute is only present in the result if
-the query has a top-level `LIMIT` operation and the `LIMIT` operation is
-actually used in the query.
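-
-For example, the following is a minimal sketch of reading the `fullCount`
-value from the cursor's extra information (the collection name `coll` is a
-placeholder):
-
-```js
-var cursor = db._query(
-  'FOR doc IN coll FILTER doc.value > 10 LIMIT 5 RETURN doc',
-  {},
-  { fullCount: true }
-);
-cursor.toArray().length; // at most 5 documents
-cursor.getExtra().stats.fullCount; // number of matches before the last LIMIT
-```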
-
-#### `failOnWarning`
-
-If you set `failOnWarning` to `true`, the query throws an exception and
-aborts as soon as a warning occurs. You should use this option in development
-to catch errors early. If set to `false`, warnings don't propagate to
-exceptions and are returned with the query results. There is also a
-`--query.fail-on-warning` startup option for setting the default value for
-`failOnWarning`, so that you don't need to set it on a per-query level.
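-
-A quick way to see the difference is a query that produces a warning, such as
-a division by zero:
-
-```js
-// Without the option, the warning is returned with the result
-var cursor = db._query('RETURN 1 / 0', {}, { failOnWarning: false });
-cursor.toArray(); // [ null ]
-cursor.getExtra().warnings; // contains the "division by zero" warning
-
-// With the option enabled, the same query throws an exception
-db._query('RETURN 1 / 0', {}, { failOnWarning: true });
-```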
-
-#### `cache`
-
-Whether the [AQL query results cache](../execution-and-performance/caching-query-results.md)
-shall be used for adding as well as for retrieving results.
-
-If the query cache mode is set to `demand` and you set the `cache` query option
-to `true` for a query, then its query result is cached if it's eligible for
-caching. If the query cache mode is set to `on`, query results are automatically
-cached if they are eligible for caching unless you set the `cache` option to `false`.
-
-If you set the `cache` option to `false`, then any query cache lookup is skipped
-for the query. If you set it to `true`, the query cache is checked for a cached
-result **if** the query cache mode is either set to `on` or `demand`.
-
-```js
----
-name: 02_workWithAQL_cache
-description: ''
----
-var resultCache = require("@arangodb/aql/cache");
-resultCache.properties({ mode: "demand" });
-~resultCache.clear();
-db._query("FOR i IN 1..5 RETURN i", {}, { cache: true }); // Adds result to cache
-db._query("FOR i IN 1..5 RETURN i", {}, { cache: true }); // Retrieves result from cache
-db._query("FOR i IN 1..5 RETURN i", {}, { cache: false }); // Bypasses the cache
-```
-
-#### `fillBlockCache`
-
-If you set `fillBlockCache` to `true` or don't specify it, the query stores the
-data it reads via the RocksDB storage engine in the RocksDB block cache. This is
-usually the desired behavior. You can set the option to `false` for queries that are
-known to either read a lot of data that would thrash the block cache, or for queries
-that read data known to be outside of the hot set. By setting the option
-to `false`, data read by the query does not make it into the RocksDB block cache if
-it is not already in there, thus leaving more room for the actual hot set.
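-
-For example, a scan over cold data that shouldn't displace the hot set could
-be issued like this (the collection name `logs` is a placeholder):
-
-```js
-db._query(
-  'FOR doc IN logs RETURN doc.timestamp',
-  {},
-  { fillBlockCache: false }
-);
-```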
-
-#### `profile`
-
-If you set `profile` to `true` or `1`, extra timing information is returned for the query.
-The timing information is accessible via the `getExtra()` method of the query
-result. If set to `2`, the query includes execution statistics per query plan
-execution node in the `stats.nodes` sub-attribute of the `extra` return attribute.
-Additionally, the query plan is returned in the `extra.plan` sub-attribute.
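-
-For example, to inspect the per-node statistics and the plan of a query:
-
-```js
-var cursor = db._query(
-  'FOR i IN 1..1000 FILTER i % 2 == 0 RETURN i',
-  {},
-  { profile: 2 }
-);
-var extra = cursor.getExtra();
-extra.profile;     // timings of the individual query phases
-extra.stats.nodes; // execution statistics per query plan execution node
-extra.plan;        // the execution plan
-```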
-
-#### `maxWarningCount`
-
-The `maxWarningCount` option limits the number of warnings that are returned by the query if
-`failOnWarning` is not set to `true`. The default value is `10`.
-
-#### `maxNumberOfPlans`
-
-The `maxNumberOfPlans` option limits the number of query execution plans the optimizer
-creates at most. Reducing the number of query execution plans may speed up query plan
-creation and optimization for complex queries, but normally there is no need to adjust
-this value.
-
-#### `optimizer`
-
-Options related to the query optimizer.
-
-- `rules`: A list of to-be-included or to-be-excluded optimizer rules can be put into
- this attribute, telling the optimizer to include or exclude specific rules. To disable
- a rule, prefix its name with a `-`, to enable a rule, prefix it with a `+`. There is also
- a pseudo-rule `all`, which matches all optimizer rules. `-all` disables all rules.
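-
-For example, to disable all optional optimizer rules and then selectively
-re-enable the `use-indexes` rule:
-
-```js
-db._query(
-  'FOR i IN 1..100 FILTER i > 50 RETURN i',
-  {},
-  { optimizer: { rules: ["-all", "+use-indexes"] } }
-).toArray();
-```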
-
-#### `allowRetry`
-
-Set this option to `true` to make it possible to retry fetching the latest batch
-from a cursor.
-
-{{< info >}}
-This feature cannot be used on the server side, such as in [Foxx](../../develop/foxx-microservices/_index.md), as
-there is no client connection and no batching.
-{{< /info >}}
-
-If retrieving a result batch fails because of a connection issue, you can ask
-for that batch again using the `POST /_api/cursor/<cursor-id>/<batch-id>`
-endpoint. The first batch has an ID of `1` and the value is incremented by 1
-with every batch. Every result response except the last one also includes a
-`nextBatchId` attribute, indicating the ID of the batch that follows the current one.
-You can remember and use this batch ID should retrieving the next batch fail.
-
-You can only request the latest batch again (or the next batch).
-Earlier batches are not kept on the server-side.
-Requesting a batch again does not advance the cursor.
-
-You can also call this endpoint with the next batch identifier, i.e. the value
-returned in the `nextBatchId` attribute of a previous request. This advances the
-cursor and returns the results of the next batch. This is only supported if there
-are more results in the cursor (i.e. `hasMore` is `true` in the latest batch).
-
-From v3.11.1 onward, you may use the `POST /_api/cursor/<cursor-id>/<batch-id>`
-endpoint even if the `allowRetry` attribute is `false` to fetch the next batch,
-but you cannot request a batch again unless you set it to `true`.
-
-To allow refetching of the last batch of the query, the server cannot
-automatically delete the cursor. After the first attempt of fetching the last
-batch, the server would normally delete the cursor to free up resources. As you
-might need to reattempt the fetch, it needs to keep the final batch when the
-`allowRetry` option is enabled. Once you have successfully received the last batch,
-you should call the `DELETE /_api/cursor/<cursor-id>` endpoint so that the
-server doesn't unnecessarily keep the batch until the cursor times out
-(`ttl` query option).
-
-#### `stream`
-
-Set `stream` to `true` to execute the query in a **streaming** fashion.
-The query result is not stored on the server, but calculated on the fly.
-
-{{< warning >}}
-Long-running queries need to hold the collection locks for as long as the query
-cursor exists. It is advisable to **only** use this option on short-running
-queries **or** without exclusive locks.
-{{< /warning >}}
-
-If set to `false`, the query is executed right away in its entirety.
-In that case, the query results are either returned right away (if the result
-set is small enough), or stored on the arangod instance and can be accessed
-via the cursor API.
-
-The default value is `false`.
-
-{{< info >}}
-The query options `cache`, `count` and `fullCount` don't work on streaming
-queries. Additionally, query statistics, profiling data, and warnings are only
-available after the query has finished and are delivered as part of the last batch.
-{{< /info >}}
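-
-For example, a query over a large collection can be executed in a streaming
-fashion like this (the collection name `observations` is a placeholder):
-
-```js
-var cursor = db._query(
-  'FOR doc IN observations RETURN doc',
-  {},
-  { stream: true }
-);
-while (cursor.hasNext()) {
-  cursor.next(); // batches are calculated on the fly while iterating
-}
-```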
-
-#### `maxRuntime`
-
-The query has to be executed within the given runtime or it is killed.
-The value is specified in seconds. The default value is `0.0` (no timeout).
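-
-For example, to kill a query if it takes longer than half a second:
-
-```js
-db._query(
-  'FOR i IN 1..10000000 SORT i RETURN i',
-  {},
-  { maxRuntime: 0.5 }
-); // fails with an error if the time limit is exceeded
-```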
-
-#### `maxDNFConditionMembers`
-
-Introduced in: v3.11.0
-
-A threshold for the maximum number of `OR` sub-nodes in the internal
-representation of an AQL `FILTER` condition.
-
-You can use this option to limit the computation time and memory usage when
-converting complex AQL `FILTER` conditions into the internal DNF
-(disjunctive normal form) format. `FILTER` conditions with a lot of logical
-branches (`AND`, `OR`, `NOT`) can otherwise take a large amount of processing
-time and memory.
-
-Once the threshold value is reached during the DNF conversion of a `FILTER`
-condition, the conversion is aborted, and the query continues with a simplified
-internal representation of the condition, which **cannot be used for index lookups**.
-
-You can also set the threshold globally instead of per query with the
-[`--query.max-dnf-condition-members` startup option](../../components/arangodb-server/options.md#--querymax-dnf-condition-members).
-
-#### `maxNodesPerCallstack`
-
-The number of execution nodes in the query plan after which stack splitting
-is performed to avoid a potential stack overflow.
-Defaults to the configured value of the startup option
-`--query.max-nodes-per-callstack`.
-
-This option is only useful for testing and debugging and normally does not need
-any adjustment.
-
-#### `maxTransactionSize`
-
-The transaction size limit in bytes.
-
-#### `intermediateCommitSize`
-
-The maximum total size of operations after which an intermediate
-commit is performed automatically.
-
-#### `intermediateCommitCount`
-
-The maximum number of operations after which an intermediate
-commit is performed automatically.
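-
-The three transaction-related options above are mainly relevant for
-data-modification queries. A minimal sketch of combining them (the collection
-name `coll` is a placeholder):
-
-```js
-db._query(
-  'FOR i IN 1..1000000 INSERT { value: i } INTO coll',
-  {},
-  {
-    maxTransactionSize: 128 * 1024 * 1024, // fail if exceeding 128 MiB
-    intermediateCommitCount: 100000        // commit every 100,000 operations
-  }
-);
-```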
-
-#### `spillOverThresholdMemoryUsage`
-
-Introduced in: v3.10.0
-
-This option allows queries to store intermediate and final results temporarily
-on disk if the amount of memory used (in bytes) exceeds the specified value.
-This is used for decreasing the memory usage during the query execution.
-
-This option only has an effect on queries that use the `SORT` operation but
-without a `LIMIT`, and if you enable the spillover feature by setting a path
-for the directory to store the temporary data in with the
-[`--temp.intermediate-results-path` startup option](../../components/arangodb-server/options.md#--tempintermediate-results-path).
-
-Default value: 128MB.
-
-{{< info >}}
-Spilling data from RAM onto disk is an experimental feature and is turned off
-by default. The query results are still built up entirely in RAM on Coordinators
-and single servers for non-streaming queries. To avoid the buildup of
-the entire query result in RAM, use a streaming query (see the
-[`stream`](#stream) option).
-{{< /info >}}
-
-#### `spillOverThresholdNumRows`
-
-Introduced in: v3.10.0
-
-This option allows queries to store intermediate and final results temporarily
-on disk if the number of rows produced by the query exceeds the specified value.
-This is used for decreasing the memory usage during the query execution. In a
-query that iterates over a collection that contains documents, each row is a
-document, and in a query that iterates over temporary values
-(i.e. `FOR i IN 1..100`), each row is one of such temporary values.
-
-This option only has an effect on queries that use the `SORT` operation but
-without a `LIMIT`, and if you enable the spillover feature by setting a path
-for the directory to store the temporary data in with the
-[`--temp.intermediate-results-path` startup option](../../components/arangodb-server/options.md#--tempintermediate-results-path).
-
-Default value: `5000000` rows.
-
-{{< info >}}
-Spilling data from RAM onto disk is an experimental feature and is turned off
-by default. The query results are still built up entirely in RAM on Coordinators
-and single servers for non-streaming queries. To avoid the buildup of
-the entire query result in RAM, use a streaming query (see the
-[`stream`](#stream) option).
-{{< /info >}}
-
-#### `allowDirtyReads`
-
-{{< tag "ArangoDB Enterprise Edition" "ArangoGraph" >}}
-
-Introduced in: v3.10.0
-
-If you set this option to `true` and execute the query against a cluster
-deployment, then the Coordinator is allowed to read from any shard replica and
-not only from the leader. See [Read from followers](../../develop/http-api/documents.md#read-from-followers)
-for details.
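-
-For example, to permit potentially stale reads from follower shards (the
-collection name `coll` is a placeholder):
-
-```js
-db._query(
-  'FOR doc IN coll RETURN doc',
-  {},
-  { allowDirtyReads: true }
-).toArray();
-```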
-
-#### `skipInaccessibleCollections`
-
-{{< tag "ArangoDB Enterprise Edition" "ArangoGraph" >}}
-
-Let AQL queries (especially graph traversals) treat collections to which a
-user has **no access** rights as if these collections were empty.
-Instead of returning a *forbidden access* error, your queries execute normally.
-This is intended to help with certain use cases: a graph contains several
-collections, and different users execute AQL queries on that graph. You can
-naturally limit the accessible results by changing the access rights of users
-on collections.
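-
-For example, a traversal over a graph with restricted collections could be
-run like this (the graph name `examplegraph` and the start vertex are
-placeholders):
-
-```js
-db._query(
-  'FOR v IN 1..2 ANY "persons/alice" GRAPH "examplegraph" RETURN v',
-  {},
-  { skipInaccessibleCollections: true }
-).toArray(); // inaccessible collections are treated as if they were empty
-```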
-
-#### `satelliteSyncWait`
-
-{{< tag "ArangoDB Enterprise Edition" "ArangoGraph" >}}
-
-Configure how much time a DB-Server has to bring the SatelliteCollections
-involved in the query into sync. The default value is `60.0` seconds.
-When the maximum time is reached, the query is stopped.
-
-## With `db._createStatement()` (ArangoStatement)
-
-The `_query()` method is a shorthand for creating an `ArangoStatement` object,
-executing it and iterating over the resulting cursor. If more control over the
-result set iteration is needed, it is recommended to first create an
-`ArangoStatement` object as follows:
-
-```js
----
-name: 04_workWithAQL_statements1
-description: ''
----
-stmt = db._createStatement( { "query": "FOR i IN [ 1, 2 ] RETURN i * 2" } );
-```
-
-To execute the query, use the `execute()` method of the _statement_ object:
-
-```js
----
-name: 05_workWithAQL_statements2
-description: ''
----
-~var stmt = db._createStatement( { "query": "FOR i IN [ 1, 2 ] RETURN i * 2" } );
-cursor = stmt.execute();
-```
-
-You can pass a number to the `execute()` method to specify a batch size value.
-The server returns at most this many results in one roundtrip.
-The batch size cannot be adjusted after the query is first executed.
-
-**Note**: There is no need to explicitly call the execute method if another
-means of fetching the query results is chosen. The following two approaches
-lead to the same result:
-
-```js
----
-name: executeQueryNoBatchSize
-description: ''
----
-~db._create("users");
-~db.users.save({ name: "Gerhard" });
-~db.users.save({ name: "Helmut" });
-~db.users.save({ name: "Angela" });
-var result = db.users.all().toArray();
-print(result);
-
-var q = db._query("FOR x IN users RETURN x");
-result = [ ];
-while (q.hasNext()) {
- result.push(q.next());
-}
-print(result);
-~db._drop("users")
-```
-
-The following two alternatives both use a batch size and return the same
-result:
-
-```js
----
-name: executeQueryBatchSize
-description: ''
----
-~db._create("users");
-~db.users.save({ name: "Gerhard" });
-~db.users.save({ name: "Helmut" });
-~db.users.save({ name: "Angela" });
-var result = [ ];
-var q = db.users.all();
-q.execute(1);
-while(q.hasNext()) {
- result.push(q.next());
-}
-print(result);
-
-result = [ ];
-q = db._query("FOR x IN users RETURN x", {}, { batchSize: 1 });
-while (q.hasNext()) {
- result.push(q.next());
-}
-print(result);
-~db._drop("users")
-```
-
-### Cursors
-
-Once the query is executed, the query results are available in a cursor.
-The cursor can return all its results at once using the `toArray()` method.
-This is a shortcut that you can use if you want to access the full result
-set without iterating over it yourself.
-
-```js
----
-name: 05_workWithAQL_statements3
-description: ''
----
-~var stmt = db._createStatement( { "query": "FOR i IN [ 1, 2 ] RETURN i * 2" } );
-~var cursor = stmt.execute();
-cursor.toArray();
-```
-
-Cursors can also be used to iterate over the result set document-by-document.
-To do so, use the `hasNext()` and `next()` methods of the cursor:
-
-```js
----
-name: 05_workWithAQL_statements4
-description: ''
----
-~var stmt = db._createStatement( { "query": "FOR i IN [ 1, 2 ] RETURN i * 2" } );
-~var c = stmt.execute();
-while (c.hasNext()) {
- require("@arangodb").print(c.next());
-}
-```
-
-Please note that you can iterate over the results of a cursor only once, and that
-the cursor will be empty when you have fully iterated over it. To iterate over
-the results again, the query needs to be re-executed.
-
-Additionally, the iteration can be done in a forward-only fashion. There is no
-backwards iteration or random access to elements in a cursor.
-
-### ArangoStatement parameters binding
-
-To execute an AQL query using bind parameters, you need to create a statement first
-and then bind the parameters to it before execution:
-
-```js
----
-name: 05_workWithAQL_statements5
-description: ''
----
-var stmt = db._createStatement( { "query": "FOR i IN [ @one, @two ] RETURN i * 2" } );
-stmt.bind("one", 1);
-stmt.bind("two", 2);
-cursor = stmt.execute();
-```
-
-The cursor results can then be dumped or iterated over as usual, e.g.:
-
-```js
----
-name: 05_workWithAQL_statements6
-description: ''
----
-~var stmt = db._createStatement( { "query": "FOR i IN [ @one, @two ] RETURN i * 2" } );
-~stmt.bind("one", 1);
-~stmt.bind("two", 2);
-~var cursor = stmt.execute();
-cursor.toArray();
-```
-
-or
-
-```js
----
-name: 05_workWithAQL_statements7
-description: ''
----
-~var stmt = db._createStatement( { "query": "FOR i IN [ @one, @two ] RETURN i * 2" } );
-~stmt.bind("one", 1);
-~stmt.bind("two", 2);
-~var cursor = stmt.execute();
-while (cursor.hasNext()) {
- require("@arangodb").print(cursor.next());
-}
-```
-
-Please note that bind parameters can also be passed into the `_createStatement()`
-method directly, making it a bit more convenient:
-
-```js
----
-name: 05_workWithAQL_statements8
-description: ''
----
-stmt = db._createStatement({
- "query": "FOR i IN [ @one, @two ] RETURN i * 2",
- "bindVars": {
- "one": 1,
- "two": 2
- }
-});
-```
-
-### Counting with a cursor
-
-Cursors also optionally provide the total number of results. By default, they do not.
-To make the server return the total number of results, you may set the `count` attribute to
-`true` when creating a statement:
-
-```js
----
-name: 05_workWithAQL_statements9
-description: ''
----
-stmt = db._createStatement( {
- "query": "FOR i IN [ 1, 2, 3, 4 ] RETURN i",
- "count": true } );
-```
-
-After executing this query, you can use the `count` method of the cursor to get the
-number of total results from the result set:
-
-```js
----
-name: 05_workWithAQL_statements10
-description: ''
----
-~var stmt = db._createStatement( { "query": "FOR i IN [ 1, 2, 3, 4 ] RETURN i", "count": true } );
-var cursor = stmt.execute();
-cursor.count();
-```
-
-Please note that the `count` method returns nothing if you did not specify the `count`
-attribute when creating the query.
-
-This is intentional so that the server may apply optimizations when executing
-the query and construct the result set incrementally. Incremental creation of
-the result sets is not possible if all of the results need to be shipped to
-the client anyway. Therefore, the client has the choice to specify `count` and
-retrieve the total number of results for a query (and disable potential
-incremental result set creation on the server), or to not retrieve the total
-number of results and allow the server to apply optimizations.
-
-Please note that at the moment the server always creates the full result set for each query, so
-specifying or omitting the `count` attribute currently does not have any impact on query execution.
-This may change in the future. Future versions of ArangoDB may create result sets incrementally
-on the server-side and may be able to apply optimizations if a result set is not fully fetched by
-a client.
-
-### Using cursors to obtain additional information on internal timings
-
-Cursors can also optionally provide statistics of the internal execution phases. By default, they do not.
-To find out how long parsing, optimization, instantiation, and execution took,
-make the server return these timings by setting the `profile` attribute to
-`true` when creating a statement:
-
-```js
----
-name: 06_workWithAQL_statements11
-description: ''
----
-stmt = db._createStatement({
- query: "FOR i IN [ 1, 2, 3, 4 ] RETURN i",
- options: {"profile": true}});
-```
-
-After executing this query, you can use the `getExtra()` method of the cursor to get the
-produced statistics:
-
-```js
----
-name: 06_workWithAQL_statements12
-description: ''
----
-~var stmt = db._createStatement( { "query": "FOR i IN [ 1, 2, 3, 4 ] RETURN i", options: {"profile": true}} );
-var cursor = stmt.execute();
-cursor.getExtra();
-```
-
-## Query validation with `db._parse()`
-
-The `_parse()` method of the `db` object can be used to parse and validate a
-query syntactically, without actually executing it.
-
-```js
----
-name: 06_workWithAQL_statements13
-description: ''
----
-db._parse( "FOR i IN [ 1, 2 ] RETURN i" );
-```
diff --git a/site/content/3.11/arangograph/_index.md b/site/content/3.11/arangograph/_index.md
deleted file mode 100644
index 0e07d4c600..0000000000
--- a/site/content/3.11/arangograph/_index.md
+++ /dev/null
@@ -1,38 +0,0 @@
----
-title: ArangoGraph Insights Platform
-menuTitle: ArangoGraph
-weight: 65
-description: >-
- The ArangoGraph Insights Platform provides the entire functionality of
- ArangoDB as a service, without the need to run or manage databases yourself
-aliases:
- - arangograph/changelog
----
-The [ArangoGraph Insights Platform](https://dashboard.arangodb.cloud/home?utm_source=docs&utm_medium=cluster_pages&utm_campaign=docs_traffic),
-formerly called Oasis, provides ArangoDB databases as a Service (DBaaS).
-It enables you to use the entire functionality of an ArangoDB cluster
-deployment without the need to run or manage the system yourself.
-
-The ArangoGraph Insights Platform...
-
-- runs your databases in data centers of the cloud provider
- of your choice: Google Cloud Platform (GCP) or Amazon Web Services (AWS).
- This optimizes performance and reduces cost.
-
-- ensures that your databases are always available and
- healthy by monitoring them 24/7.
-
-- ensures that your databases are kept up to date by
- installing new versions without service interruption.
-
-- ensures that your data is safe by providing encryption &
- audit logs and making frequent data backups.
-
-- guarantees that your data always remains your property and
- access to it is protected with industry standard safeguards.
-
-For more information, see
-[dashboard.arangodb.cloud](https://dashboard.arangodb.cloud/home?utm_source=docs&utm_medium=cluster_pages&utm_campaign=docs_traffic)
-
-For a quick start guide, see
-[Use ArangoDB in the Cloud](../get-started/set-up-a-cloud-instance.md).
diff --git a/site/content/3.11/arangograph/api/_index.md b/site/content/3.11/arangograph/api/_index.md
deleted file mode 100644
index ee4f21371f..0000000000
--- a/site/content/3.11/arangograph/api/_index.md
+++ /dev/null
@@ -1,37 +0,0 @@
----
-title: The ArangoGraph API
-menuTitle: ArangoGraph API
-weight: 60
-description: >-
- Interface to control all resources inside ArangoGraph in a scriptable manner
-aliases:
- - arangograph-api
----
-The [ArangoGraph Insights Platform](https://dashboard.arangodb.cloud/home?utm_source=docs&utm_medium=cluster_pages&utm_campaign=docs_traffic)
-comes with its own API. This API enables you to control all
-resources inside ArangoGraph in a scriptable manner. Typical use cases are spinning
-up ArangoGraph deployments during continuous integration and infrastructure as code.
-
-The ArangoGraph API…
-
-- is a well-specified API that uses
- [Protocol Buffers](https://developers.google.com/protocol-buffers/)
- as interface definition and [gRPC](https://grpc.io/) as
- underlying protocol.
-
-- allows for automatic generation of clients for a large list of languages.
- A Go client is available out of the box.
-
-- uses API keys for authentication. API keys impersonate a user and inherit
- the permissions of that user.
-
-- is also available as a command-line tool called [oasisctl](../oasisctl/_index.md).
-
-- is also available as a
- [Terraform plugin](https://github.com/arangodb-managed/terraform-provider-oasis/).
- This plugin makes integration of ArangoGraph in infrastructure as code projects
- very simple. To learn more, refer to the [plugin documentation](https://registry.terraform.io/providers/arangodb-managed/oasis/latest/docs).
-
-Also see:
-- [github.com/arangodb-managed/apis](https://github.com/arangodb-managed/apis/)
-- [API definitions](https://arangodb-managed.github.io/apis/index.html)
diff --git a/site/content/3.11/arangograph/api/get-started.md b/site/content/3.11/arangograph/api/get-started.md
deleted file mode 100644
index b4ea00e39d..0000000000
--- a/site/content/3.11/arangograph/api/get-started.md
+++ /dev/null
@@ -1,481 +0,0 @@
----
-title: Get started with the ArangoGraph API and Oasisctl
-menuTitle: Get started with Oasisctl
-weight: 10
-description: >-
- A tutorial that guides you through the ArangoGraph API as well as the Oasisctl
- command-line tool
-aliases:
- - ../arangograph-api/getting-started
----
-This tutorial shows you how to do the following:
-
-- Generate an API key and authenticate with Oasisctl
-- View information related to your organizations, projects, and deployments
-- Configure, create and delete a deployment
-
-With Oasisctl, the general command structure is to execute commands such as:
-
-```
-oasisctl list deployments
-```
-
-This command lists all deployments available to the authenticated user, and we
-will explore it in more detail later. Most commands also have associated
-`--flags` that are required or provide additional options, as is common for
-many command-line utilities. If you aren't already familiar with this, follow
-along, as there are many examples in this guide that will familiarize you with
-this command structure and the use of flags, along with how to use Oasisctl to
-access the ArangoGraph API.
-
-Note: A good rule of thumb for all variables, resource names, and identifiers
-is to **assume they are all case-sensitive** when used with Oasisctl.
-
-## API Authentication
-
-### Generating an API Key
-
-The first step to using the ArangoGraph API is to generate an API key. To generate a
-key you will need to be signed into your account at
-[dashboard.arangodb.cloud](https://dashboard.arangodb.cloud/home?utm_source=docs&utm_medium=cluster_pages&utm_campaign=docs_traffic).
-Once you are signed in, hover over the profile icon in the top right corner.
-
-Click _My API keys_.
-
-This will bring you to your API key management screen. From this screen you can
-create, revoke, and delete API keys.
-
-Click the _New API key_ button.
-
-The pop-up box that follows has a few options for customizing the access level
-of this API key.
-
-The options you have available include:
-
-- Limit access to 1 organization or all organizations this user has access to
-- Set an expiration time, specified in number of hours
-- Limit key to read-only access
-
-Once you have configured the API key access options, you will be presented with
-your API key ID and API key secret. It is very important that you capture the
-API key secret before clicking the close button. There is no way to retrieve
-the API key secret after closing this pop-up window.
-
-Once you have securely stored your API key ID and secret, click close.
-
-That is all there is to setting up API access to your ArangoGraph organizations.
-
-### Authenticating with Oasisctl
-
-Now that you have API access, it is time to log in with Oasisctl.
-
-Running the Oasisctl utility without any arguments is the equivalent of
-including the `--help` flag. This shows all of the top-level commands available,
-and you can continue exploring each command by typing the command name
-followed by the `--help` flag to see the options available for that command.
-
-Let’s start with doing that for the login command:
-
-```bash
-oasisctl login --help
-```
-
-You should see an output similar to this:
-
-This shows that two additional flags are available, aside from the help flag.
-
-- `--key-id`
-- `--key-secret`
-
-These require the values we received when creating the API key. Once you run
-this command, you will receive an authentication token that can be used for the
-remainder of the session.
-
-```bash
-oasisctl login \
- --key-id cncApiKeyId \
- --key-secret 873-secret-key-id
-```
-
-Upon successful login you should receive an authentication token:
-
-Depending on your environment, you could instead store this token for easier
-access. For example:
-
-With Linux:
-
-```bash
-export OASIS_TOKEN=$(oasisctl login --key-id cncApiKeyId --key-secret 873-secret-key-id)
-```
-
-Or Windows Powershell:
-
-```powershell
-setx OASIS_TOKEN (oasisctl login --key-id cncApiKeyId --key-secret 873-secret-key-id)
-```
-
-In the coming sections you will see how to authenticate with this token when
-using other commands that require authentication.
-
-## Viewing and Managing Organizations and Deployments
-
-### Format
-
-This section covers the basics of retrieving information from the ArangoGraph API.
-Depending on the data you are requesting from the ArangoGraph API, reading it
-in the command line can become difficult. To make the output easier to read
-for humans and your applications, Oasisctl offers two options for
-formatting the data received:
-- Table
-- JSON
-
-You can define the format of the data by supplying the `--format` flag along
-with your preferred format, like so:
-
-```bash
-oasisctl --format json
-```
-
-### Viewing Information with the List Command
-
-This section will cover the two main functions of retrieving data with the
-ArangoGraph API. These are:
-
-- `list` - List resources
-- `get` - Get information
-
-Before you can jump right into making new deployments, you need to be aware of
-what resources you have available. This is where the list command comes in.
-List serves as a way to retrieve general information; you can see all of the
-available list options by accessing its help output.
-
-```bash
-oasisctl list --help
-```
-
-This should output a screen similar to:
-
-As you can see, you can get information on anything you would need about your
-ArangoGraph organizations, deployments, and access control. To start, let’s take a
-look at a few examples of listing information and then getting more details on
-our results.
-
-### List Organizations
-
-One of the first pieces of information you may be interested in is the
-organizations you have access to. This is useful to know because most commands
-require an explicit declaration of the organization you are interacting with.
-To find this, use list to list your available organizations:
-
-```bash
-oasisctl list organizations --format json
-```
-
-Once you have your available organizations, you can refer to your desired
-organization using its name or ID.
-
-Note: You may also notice the `url` attribute. This is for internal use only
-and should not be treated as a publicly accessible path.
-
-### List Projects
-
-Once you have the organization name that you wish to interact with, the next
-step is to list the available projects within that organization. Do this by
-following the same command structure as before, exchanging `organizations`
-for `projects` and this time providing the desired organization name
-with the `--organization-id` flag.
-
-```bash
-oasisctl list projects \
- --organization-id "ArangoGraph Organization" \
- --format json
-```
-
-This will return information on all projects that the authenticated user has
-access to.
-
-### List Deployments
-
-Things start getting a bit more interesting with information related to
-deployments. Now that you have obtained an organization ID and a project ID,
-you can list all of the associated deployments for that project.
-
-```bash
-oasisctl list deployments \
- --organization-id "ArangoGraph Organization" \
- --project-id "Getting Started with ArangoGraph" \
- --format json
-```
-
-This provides some basic details for all of the deployments associated with the
-project. Namely, it provides a deployment ID, which we can use to start making
-modifications to the deployment or to get more detailed information with the
-`get` command.
-
-### Using the Get Command
-
-In Oasisctl, you use the get command to obtain more detailed information about
-any of your available resources. It follows the same command structure as the
-previous commands but typically requires a bit more information. For example,
-to get more information on a specific deployment, you need to know at
-least:
-
-- Organization ID
-- Project ID
-- Deployment ID
-
-To get more information about our example deployment, we would need to execute
-the following command:
-
-```bash
-oasisctl get deployment \
- --organization-id "ArangoGraph Organization" \
- --project-id "Getting Started with ArangoGraph" \
- --deployment-id "abc123DeploymentID" \
- --format json
-```
-
-This returns quite a bit more information about the deployment, including more
-detailed server information, the endpoint URL where you can access the web interface,
-and optionally the root user password.
-
-### Node Size ID
-
-We won't be exploring every flag available for creating a deployment, but it
-is a good idea to explore the concept of the node size ID value. This is an
-identifier that is unique to each provider (Google, AWS) and indicates
-the CPU and memory. Depending on the provider and region, this can also
-determine the available disk sizes for your deployment. In other words, it is
-pretty important to know which `node-size-id` your deployment will be using.
-
-The command you execute will depend on the available providers and regions
-for your organization, but here is an example command that lists the available
-options in the US West region for the Google Cloud Platform:
-
-```bash
-oasisctl list nodesizes \
- --organization-id "ArangoGraph Organization" \
- --provider-id "Google Cloud Platform" \
- --region-id gcp-us-west2
-```
-
-The output will be similar to this:
-
-It is important to note that you can scale up to a larger disk size, but you
-are unable to scale down your deployment's disk size. The only way to revert
-to a lower disk size is to destroy and recreate your deployment.
-
-Once you have decided what your starting deployment needs are, you can
-reference your decision with the ID value of the corresponding configuration.
-So, for our example, we will be choosing the c4-a4 configuration. The
-availability and options are different for each provider and region, so be
-sure to confirm the node size options before creating a new deployment.
-
-### Challenge
-
-You can use this combination of listing and getting to obtain all of the
-information you want for your ArangoGraph organizations. We only explored a few of
-the commands available, but you can explore them all within the utility by
-utilizing the `--help` flag, or you can see all of the available options
-in the [documentation](../oasisctl/options.md).
-
-Something that might be useful practice before moving on is getting the rest
-of the information that you need to create a deployment. Here is a list of
-items that won't have defaults available when you attempt to create your
-first deployment and that you will need to supply:
-
-- CA Certificate ID (name)
-- IP Allowlist ID (id) (optional)
-- Node Size ID (id)
-- Node Disk Size (GB disk size dependent on Node Size ID)
-- Organization ID (name)
-- Project ID (name)
-- Region ID (name)
-
-Try looking up that information to get more familiar with how to find
-information with Oasisctl. When in doubt use the `--help` flag with any
-command.
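-
-For instance, commands along these lines can be used to look up several of
-the values above (check `oasisctl list --help` for the exact set of
-subcommands available in your version):
-
-```bash
-# Cloud providers available to your organization
-oasisctl list providers --organization-id "ArangoGraph Organization"
-
-# Regions offered by a specific provider
-oasisctl list regions \
-  --organization-id "ArangoGraph Organization" \
-  --provider-id "Google Cloud Platform"
-
-# CA certificates of a project
-oasisctl list cacertificates \
-  --organization-id "ArangoGraph Organization" \
-  --project-id "Getting Started with ArangoGraph"
-```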
-
-## Creating Resources
-
-Now that you have seen how to obtain information about your available
-resources, it's time to use those skills to create your own deployment.
-To create resources with Oasisctl, you use the create command.
-To see all the possible options you can start with the following command:
-
-```bash
-oasisctl create --help
-```
-
-### Create a Deployment
-
-To take a look at all of the options available when creating a deployment,
-the best place to start is with our trusty help command.
-
-```bash
-oasisctl create deployment --help
-```
-
-As you can see, there are a lot of default options but also a few that require
-some knowledge of our pre-existing resources. Attempting to create a deployment
-without one of the required options will return an error indicating which value
-is missing or invalid.
-
-Once you have collected all of the necessary information, creating a
-deployment is simply a matter of supplying the values along with the
-appropriate flags. This command will create a deployment:
-
-```bash
-oasisctl create deployment \
- --region-id gcp-us-west2 \
- --node-size-id c4-a4 \
- --node-disk-size 10 \
- --version 3.9.2 \
- --cacertificate-id OasisCert \
- --organization-id "ArangoGraph Organization" \
- --project-id "Getting Started with ArangoGraph" \
- --name "First Oasisctl Deployment" \
- --description "The first deployment created using the awesome Oasisctl utility!"
-```
-
-If everything went according to plan, you should see similar output:
-
-### Wait on Deployment Status
-
-When you create a deployment, it begins the process of _bootstrapping_, which
-is getting the deployment ready for use. This should happen quickly. To see if
-it is ready for use, you can run the wait command using the ID of the newly
-created deployment, shown at the top of the information you received above.
-
-```bash
-oasisctl wait deployment \
- --deployment-id hmkuedzw9oavvjmjdo0i
-```
-
-Once you receive a response of _Deployment Ready_, your deployment is indeed
-ready to use. You can get some new details by running the get command.
-
-```bash
-oasisctl get deployment \
- --organization-id "ArangoGraph Organization" \
- --deployment-id hmkuedzw9oavvjmjdo0i
-```
-
-Once the deployment is ready, you will get two new pieces of information: the
-endpoint URL, and the Bootstrapped-At value that indicates the time it became
-available. If you would like to log in to the web interface to verify that
-your server is in fact up and running, you will need to supply the
-`--show-root-password` flag along with the get command; this flag does not
-take a value.
-
-### The Update Command
-
-The inevitable time comes when something about your deployment must change,
-and this is where the update command comes in. You can use update to change a
-number of things, including the groups, policies, and roles for user access
-control. You can also update some of your deployment information or, for our
-situation, add an IP Allowlist if you didn't add one during creation.
-
-There are, of course, many options available, and it is always recommended to
-start with the `--help` flag to read about all of them.
-
-### Update a Deployment
-
-This section will show an example of how to update a deployment to use a
-pre-existing allowlist. To add an IP Allowlist after the fact, we are really
-just updating the IP Allowlist value, which is currently empty. In order to
-update the IP Allowlist of a deployment, you must first create an allowlist;
-then you can simply reference its ID like so:
-
-```bash
-oasisctl update deployment \
- --deployment-id hmkuedzw9oavvjmjdo0i \
- --ipallowlist-id abc123AllowlistID
-```
-
-You should receive a response with the deployment information and an indication
-at the top that the deployment was updated.
-
-You can use the update command to update everything about your deployments as
-well. If you run:
-
-```bash
-oasisctl update deployment --help
-```
-
-You will see the full list of options available that will allow you to scale
-your deployment as needed.
-
-## Delete a Deployment
-
-There may come a day when you need to delete a resource. The process for this
-follows right along with the conventions for the other commands detailed
-throughout this guide.
-
-### The Delete Command
-
-For the final example in this guide, we will delete the deployment that has
-been created. This only requires the deployment ID and the permissions to
-delete the deployment.
-
-```bash
-oasisctl delete deployment \
- --deployment-id hmkuedzw9oavvjmjdo0i
-```
-
-Once the deployment has been deleted, you can confirm it is gone by listing
-your deployments.
-
-```bash
-oasisctl list deployments \
- --organization-id "ArangoGraph Organization" \
- --format json
-```
-
-## Next Steps
-
-As promised, this guide covered the basics of using Oasisctl with the
-ArangoGraph API. While we primarily focused on viewing and managing
-deployments, there is also a lot more to explore, including:
-
-- Organization Invites Management
-- Backups
-- API Key Management
-- Certificate Management
-- User Access Control
-
-You can check out all these features and further details on the ones discussed
-in this guide in the documentation.
diff --git a/site/content/3.11/arangograph/api/set-up-a-connection.md b/site/content/3.11/arangograph/api/set-up-a-connection.md
deleted file mode 100644
index 7cbc2b76e2..0000000000
--- a/site/content/3.11/arangograph/api/set-up-a-connection.md
+++ /dev/null
@@ -1,108 +0,0 @@
----
-title: Get started with the ArangoGraph API
-menuTitle: Get started with the API
-weight: 5
-description: >-
- Quick start guide on how to set up a connection to the ArangoGraph API
-aliases:
- - ../arangograph-api/getting-started-with-the-api
----
-The instructions below are a quick start guide on how to set up a connection to the ArangoGraph API.
-
-All examples below will use the Go programming language.
-Since the ArangoGraph API uses gRPC with protocol buffers,
-all examples can be easily translated to many different languages.
-
-## Prerequisites
-
-Make sure that you have already [signed up for ArangoGraph](https://dashboard.arangodb.cloud/home?utm_source=docs&utm_medium=cluster_pages&utm_campaign=docs_traffic).
-
-## Creating an API key
-
-1. Go to [dashboard.arangodb.cloud](https://dashboard.arangodb.cloud/home?utm_source=docs&utm_medium=cluster_pages&utm_campaign=docs_traffic) and login.
-2. Click the user icon in the top-right of the dashboard.
-3. Select __My API keys__
-4. Click __New API key__
-5. Click __Create__ to select the default settings.
-6. You'll now see a dialog showing the __API key ID__ and
- the __API key Secret__. This is the only time you will see
- the secret, so make sure to store it in a safe place.
-
-## Create an access token with your API key
-
-```go
-import (
-    "context"
-    "crypto/tls"
-    "fmt"
-
-    "google.golang.org/grpc"
-    "google.golang.org/grpc/credentials"
-
-    "github.com/arangodb-managed/apis/common/auth"
-    common "github.com/arangodb-managed/apis/common/v1"
-    data "github.com/arangodb-managed/apis/data/v1"
-    iam "github.com/arangodb-managed/apis/iam/v1"
-)
-
-...
-
-// Set up a TLS-secured connection to the API.
-// Note: the gRPC endpoint is host:port, not an https:// URL.
-tc := credentials.NewTLS(&tls.Config{})
-conn, err := grpc.Dial("api.cloud.arangodb.com:443",
-    grpc.WithTransportCredentials(tc))
-if err != nil {
-    // handle error
-}
-
-// Create client for IAM service
-iamc := iam.NewIAMServiceClient(conn)
-
-// Call AuthenticateAPIKey to create a token
-resp, err := iamc.AuthenticateAPIKey(ctx,
-    &iam.AuthenticateAPIKeyRequest{
-        Id:     keyID,
-        Secret: keySecret,
-    })
-if err != nil {
-    // handle error
-}
-token := resp.GetToken()
-```
-
-## Make an authenticated API call
-
-We're going to list all deployments in a project.
-The connection and token created in the previous sample are re-used.
-
-The authentication token is passed as a standard `bearer` token to the call.
-In Go, there is a helper method (`WithAccessToken`) to create a context using
-an authentication token.
-
-```go
-// Create client for Data service
-datac := data.NewDataServiceClient(conn)
-
-// Prepare context with authentication token
-ctx := auth.WithAccessToken(context.Background(), token)
-
-// Call list deployments
-list, err := datac.ListDeployments(ctx,
-    &common.ListOptions{ContextId: myProjectID})
-if err != nil {
-    // handle error
-}
-for _, depl := range list.GetItems() {
-    fmt.Printf("Found deployment with id %s\n", depl.GetId())
-}
-```
-
-## API Errors
-
-All API methods return errors as gRPC error codes.
-
-The `github.com/arangodb-managed/apis/common/v1` package contains several helpers to check for common errors.
-
-```go
-if common.IsNotFound(err) {
- // Error is caused by a not-found situation
-}
-```
diff --git a/site/content/3.11/arangograph/backups.md b/site/content/3.11/arangograph/backups.md
deleted file mode 100644
index e4adcd0a0e..0000000000
--- a/site/content/3.11/arangograph/backups.md
+++ /dev/null
@@ -1,172 +0,0 @@
----
-title: Backups in ArangoGraph
-menuTitle: Backups
-weight: 50
-description: >-
- You can manually create backups or use a backup policy to schedule periodic
- backups, and both ways allow you to store your backups in multiple regions simultaneously
----
-## How to create backups
-
-To back up data in ArangoGraph for an ArangoDB installation, navigate to the
-**Backups** section of the deployment you created previously.
-
-There are two ways to create backups. Create periodic backups using a
-**Backup policy**, or create a backup manually.
-Both ways allow you to create [backups in multiple regions](#multi-region-backups)
-as well.
-
-### Periodic backups
-
-Periodic backups are created on a given schedule. To see when the next backup
-is due, observe the schedule section.
-
-When a new deployment is created, a default **Backup policy** is created for it
-as well. This policy creates backups every two hours. To edit this policy
-(or any policy), highlight it in the row above and hit the pencil icon.
-
-These backups are not automatically uploaded. To enable this, use the
-**Upload backup to storage** option and choose a retention period that
-specifies how long backups are retained after creation.
-
-If the **Upload backup to storage** option is enabled for a backup policy,
-you can then create backups in different regions than the default one.
-The regions where the default backup is copied are shown in the
-**Additional regions** column in the **Policies** section.
-
-### Manual backups
-
-It's also possible to create a backup on demand. To do this, click **Back up now**.
-
-If you want to manually copy a backup to a different region than the default
-one, first ensure that the **Upload backup to storage** option is enabled.
-Then, highlight the backup row and use the
-**Copy backup to a different region** button from the **Actions** column.
-
-The source backup ID from
-which the copy is created is displayed in the **Copied from Backup** column.
-
-
-### Uploading backups
-
-By default, a backup is not uploaded to the cloud, instead it remains on the
-servers of the deployment. To make a backup that is resilient against server
-(disk) failures, upload the backup to cloud storage.
-
-When the **Upload backup to cloud storage** option is enabled, the backup is
-preserved for a long time and does not occupy any disk space on the servers.
-This also allows copying the backup to different regions and it can be
-configured in the **Multiple region backup** section.
-
-Uploaded backups are
-required for [cloning](#how-to-clone-deployments-using-backups).
-
-#### Best practices for uploading backups
-
-When utilizing the **Upload backup to cloud storage** feature, a recommended
-approach is to implement a backup strategy that balances granularity and storage
-efficiency.
-
-One effective strategy involves creating a combination of backup intervals and
-retention periods. For instance, consider the following example:
-
-1. Perform a backup every 4 hours with a retention period of 24 hours. This
- provides frequent snapshots of your data, allowing you to recover recent
- changes.
-2. Perform a backup every day with a retention period of a week. Daily backups
- offer a broader time range for recovery, enabling you to restore data from
- any point within the past week.
-3. Perform a backup every week with a retention period of a month. Weekly
- backups allow you to recover from more extensive data.
-4. Perform a backup every month with a retention period of a year. Monthly
- backups provide a long-term perspective, enabling you to restore data from
- any month within the past year.
-
-This backup strategy offers good granularity, providing multiple recovery
-options for different timeframes. By implementing this approach, you have a
-total number of backups that is considerably lower in comparison to other
-alternatives such as having hourly backups with a retention period of a year.
-
-## Multi-region backups
-
-Using the multi-region backup feature, you can store backups in multiple regions
-simultaneously either manually or automatically as part of a **Backup policy**.
-If the region where a backup was created goes down, the backup is still
-available in other regions, significantly improving reliability.
-
-Multiple region backup is only available when the
-**Upload backup to cloud storage** option is enabled.
-
-## How to restore backups
-
-To restore a database from a backup, highlight the desired backup and click the restore icon.
-
-{{< warning >}}
-All current data will be lost when restoring. To make sure that new data that
-has been inserted after the backup creation is not lost, create a new
-backup before using the **Restore Backup** feature.
-
-During restore, the deployment is temporarily not available.
-{{< /warning >}}
-
-
-## How to clone deployments using backups
-
-Creating a deployment from a backup allows you to duplicate an existing
-deployment with all its data, for example, to create a test environment or to
-move to a different cloud provider or region within ArangoGraph.
-
-{{< info >}}
-This feature is only available if the backup you wish to clone has been
-uploaded to cloud storage.
-{{< /info >}}
-
-{{< info >}}
-The cloned deployment will have the exact same features as the previous
-deployment, including node size and model. The cloud provider and the region
-can stay the same, or you can select a different one.
-To restore a deployment as quickly as possible, it is recommended to create
-the new deployment in the same region as where the backup resides to avoid
-cross-region data transfer.
-The data contained in the backup will be restored to this new deployment.
-
-The *root password* for this deployment will be different.
-{{< /info >}}
-
-1. Highlight the backup you wish to clone from and hit **Clone backup to new deployment**.
-
- 
-
-2. Choose whether the clone should be created using the current provider and in
- the same region as the backup or using a different provider, a different region,
- or both.
-
- 
-
-3. The view should navigate to the new deployment being bootstrapped.
-
- 
-
-This feature is also available through [oasisctl](oasisctl/_index.md).
diff --git a/site/content/3.11/arangograph/data-loader/_index.md b/site/content/3.11/arangograph/data-loader/_index.md
deleted file mode 100644
index 7955fcb47a..0000000000
--- a/site/content/3.11/arangograph/data-loader/_index.md
+++ /dev/null
@@ -1,62 +0,0 @@
----
-title: Load your data into ArangoGraph
-menuTitle: Data Loader
-weight: 22
-description: >-
- Load your data into ArangoGraph and transform it into richly-connected graph
- structures, without needing to write any code or deploy any infrastructure
----
-
-ArangoGraph provides different ways of loading your data into the platform,
-based on your migration use case.
-
-## Transform data into a graph
-
-The ArangoGraph Data Loader allows you to transform existing data from CSV file
-formats into data that can be analyzed by the ArangoGraph platform.
-
-You provide your data in CSV format, a common format used for exports of data
-from various systems. Then, using a no-code editor, you can model the schema of
-this data and the relationships between them. This allows you to ingest your
-existing datasets into your ArangoGraph database, without the need for any
-development effort.
-
-You can get started in a few easy steps.
-
-1. **Create database**:
- Choose an existing database or create a new one and enter a name for your new graph.
-
-2. **Add files**:
- Drag and drop your data files in CSV format.
-
-3. **Design your graph**:
- Model your graph schema by adding nodes and connecting them via edges.
-
-4. **Import data**:
- Once you are ready, save and start the import. The resulting graph is an
- [EnterpriseGraph](../../graphs/enterprisegraphs/_index.md) with its
- corresponding collections, available in your ArangoDB web interface.
-
-Follow this [working example](../data-loader/example.md) to see how easy it is
-to transform existing data into a graph.
-
-## Import data to the cloud
-
-To import data from various files into collections **without creating a graph**,
-get the ArangoDB client tools for your operating system from the
-[download page](https://arangodb.com/download-major/).
-
-- To import data to ArangoGraph from an existing ArangoDB instance, see
- [arangodump](../../components/tools/arangodump/) and
- [arangorestore](../../components/tools/arangorestore/).
-- To import pre-existing data in JSON, CSV, or TSV format, see
- [arangoimport](../../components/tools/arangoimport/).
-
-## How to access the Data Loader
-
-1. If you do not have a deployment yet, [create a deployment](../deployments/_index.md#how-to-create-a-new-deployment) first.
-2. Open the deployment you want to load data into.
-3. In the **Load Data** section, click the **Load your data** button.
-4. Select your migration use case.
-
-
\ No newline at end of file
diff --git a/site/content/3.11/arangograph/data-loader/add-files.md b/site/content/3.11/arangograph/data-loader/add-files.md
deleted file mode 100644
index 114b588e40..0000000000
--- a/site/content/3.11/arangograph/data-loader/add-files.md
+++ /dev/null
@@ -1,59 +0,0 @@
----
-title: Add files into Data Loader
-menuTitle: Add files
-weight: 5
-description: >-
- Provide your set of files in CSV format containing the data to be imported
----
-
-The Data Loader allows you to upload your data files in CSV format into
-ArangoGraph and then use these data sources to design a graph using the
-built-in graph designer.
-
-## Upload your files
-
-You can upload your CSV files in the following ways:
-
-- Drag and drop your files in the designated area.
-- Click the **Browse files** button and select the files you want to add.
-
-You have the option to either upload several files collectively as a batch or
-add them individually. Furthermore, you can add more files later on.
-After a file has been uploaded, you can expand it to preview both the header and
-the first row of data within the file.
-
-In case you upload CSV files without fields, they will not be available for
-manipulation.
-
-Once the files are uploaded, you can start [designing your graph](../data-loader/design-graph.md).
-
-### File formatting limitations
-
-Ensure that the files you upload are correctly formatted. Otherwise, errors may
-occur, the upload may fail, or the data may not be correctly mapped.
-
-The following restrictions and limitations apply:
-
-- The only supported file format is CSV. If you submit an invalid file format,
- the upload of that specific file will be prevented.
-- It is required that all CSV files have a header row. If you upload a file
- without a header, the first row of data is treated as the header. To avoid
- losing the first row of the data, make sure to include headers in your files.
-- The CSV file should have unique header names. It is not possible to have two
- columns with the same name within the same file.
-
-For more details, see the [File validation](../data-loader/import.md#file-validation) section.
-
-### Upload limits
-
-Note that there is a cumulative file upload limit of 1GB. This means that the
-combined size of all files you upload should not exceed 1GB. If the total size
-of the uploaded files surpasses this limit, the upload may not be successful.
-
-## Delete files
-
-You can remove uploaded files by clicking the **Delete file** button in the
-**Your files** panel. Please keep in mind that in order to delete a file,
-you must first remove all graph associations related to it.
\ No newline at end of file
diff --git a/site/content/3.11/arangograph/data-loader/design-graph.md b/site/content/3.11/arangograph/data-loader/design-graph.md
deleted file mode 100644
index b1c5eaf3af..0000000000
--- a/site/content/3.11/arangograph/data-loader/design-graph.md
+++ /dev/null
@@ -1,68 +0,0 @@
----
-title: Design your graph
-menuTitle: Design graph
-weight: 10
-description: >-
- Design your graph database schema using the integrated graph modeler in the Data Loader
----
-
-Based on the data you have uploaded, you can start designing your graph.
-The graph designer allows you to create a schema using nodes and edges.
-Once this is done, you can save and start the import. The resulting
-[EnterpriseGraph](../../graphs/enterprisegraphs/_index.md) and the
-corresponding collections are created in your ArangoDB database instance.
-
-## How to add a node
-
-Nodes are the main objects in your data model and include the attributes of the
-objects.
-
-1. To create a new node, click the **Add node** button.
-2. In the graph designer, click on the newly created node to view the **Node details**.
-3. In the **Node details** panel, fill in the following fields:
- - For **Node label**, enter a name you want to use for the node.
- - For **File**, select a file from the list to associate it with the node.
- - For **Primary Identifier**, select a field from the list. This is used to
- reference the nodes when you define relations with edges.
- - For **File Headers**, select one or more attributes from the list.
-
-
-
-## How to connect nodes
-
-Nodes can be connected by edges to express and categorize the relations between
-them. A relation always has a direction, going from one node to another. You can
-define this direction in the graph designer by dragging your cursor from one
-particular node to another.
-
-To connect two nodes, you can use the **Connect node(s)** button. Click on any
-node to self-reference it or drag it to connect it to another node. Alternatively,
-when you select a node, a plus sign will appear, allowing you to directly add a
-new node with an edge.
-
-{{< tip >}}
-To quickly recenter your elements on the canvas, you can use the **Center View**
-button located in the bottom right corner. This brings your nodes and edges back
-into focus.
-{{< /tip >}}
-
-The edge needs to be associated with a file and must have a label. Note that a
-node and an edge cannot have the same label.
-
-Follow the steps below to add details to an edge.
-
-1. Click on an edge in the graph designer.
-2. In the **Edit Edge** panel, fill in the following fields:
- - For **Edge label**, enter a name you want to use for the edge.
- - For **Relation file**, select a file from the list to associate it with the edge.
- - To define how the relation points from one node to another, select the
- corresponding relation file header for both the origin file (`_from`) and the
- destination file (`_to`).
- - For **File Headers**, select one or more attributes from the list.
-
-
-
-## How to delete elements
-
-To remove a node or an edge, simply select it in the graph designer and click the
-**Delete** icon.
\ No newline at end of file
diff --git a/site/content/3.11/arangograph/data-loader/example.md b/site/content/3.11/arangograph/data-loader/example.md
deleted file mode 100644
index 46fdd1b38e..0000000000
--- a/site/content/3.11/arangograph/data-loader/example.md
+++ /dev/null
@@ -1,103 +0,0 @@
----
-title: Data Loader Example
-menuTitle: Example
-weight: 20
-description: >-
- Follow this complete working example to see how easy it is to transform existing
- data into a graph and get insights from the connected entities
----
-
-To transform your data into a graph, you need to have CSV files with entities
-representing the nodes and a corresponding CSV file representing the edges.
-
-This example uses a sample data set of two files, `airports.csv`, and `flights.csv`.
-These files are used to create a graph showing flights arriving at and departing
-from various cities.
-You can download the files from [GitHub](https://github.com/arangodb/example-datasets/tree/master/Data%20Loader).
-
-The `airports.csv` contains rows of airport entries, which are the future nodes
-in your graph. The `flights.csv` contains rows of flight entries, which are the
-future edges connecting the nodes.
-
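-For illustration, the first lines of the two files could look as follows
-(simplified and illustrative; the actual sample files contain more columns):
-
-`airports.csv`:
-
-```csv
-AirportID,Name,City,Country
-GKA,Goroka Airport,Goroka,Papua New Guinea
-MAG,Madang Airport,Madang,Papua New Guinea
-```
-
-`flights.csv`:
-
-```csv
-source airport,destination airport,airline
-GKA,MAG,PX
-```
-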
-The whole process can be broken down into these steps:
-
-1. **Database and graph setup**: Begin by choosing an existing database or
- create a new one and enter a name for your new graph.
-2. **Add files**: Upload the CSV files to the Data Loader web interface. You can
- simply drag and drop them or upload them through the file browser window.
-3. **Design graph**: Design your graph schema by adding nodes and edges and map
- data from the uploaded files to them. This allows creating the corresponding
- documents and collections for your graph.
-4. **Import data**: Import the data and start using your newly created
- [EnterpriseGraph](../../graphs/enterprisegraphs/_index.md) and its
- corresponding collections.
-
-## Step 1: Create a database and choose the graph name
-
-Start by creating a new database and adding a name for your graph.
-
-
-
-## Step 2: Add files
-
-Upload your CSV files to the Data Loader web interface. You can drag and drop
-them or upload them via a file browser window.
-
-
-
-See also [Add files into Data Loader](../data-loader/add-files.md).
-
-## Step 3: Design graph schema
-
-Once the files are added, you can start designing the graph schema. This example
-uses a simple graph consisting of:
-- Two nodes (`origin_airport` and `destination_airport`)
-- One directed edge going from the origin airport to the destination one
- representing a flight
-
-Click **Add node** to create the nodes and connect them with edges.
-
-Next, for each of the nodes and edges, you need to create a mapping to the
-corresponding file and headers.
-
-For nodes, the **Node label** becomes the node collection name and the
-**Primary identifier** is used to populate the `_key` attribute of documents.
-You can also select any additional headers to be included as document attributes.
-
-In this example, two node collections have been created (`origin_airport` and
-`destination_airport`), and the `AirportID` header is used to create the `_key`
-attribute for documents in both node collections. The header preview makes it
-easy to select the headers you want to use.
-
-
-
-For edges, the **Edge label** becomes the edge collection name. Then, you
-need to specify how edges connect nodes. You can do this by selecting the
-*from* and *to* nodes to give a direction to the edge.
-In this example, the `source airport` header has been selected as a source and
-the `destination airport` header as a target for the edge.
-
-
-
-Note that the values of the source and target for the edge correspond to the
-**Primary identifier** (`_key` attribute) of the nodes. In this case, it is the
-airport code (e.g., GKA) used as the `_key` in the node documents and in the source
-and destination headers to configure the edges.
-
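-For instance, an edge document created by the import could look like the
-following sketch, where `_from` and `_to` combine the collection name with the
-node's `_key` (the `MAG` destination and the `airline` attribute are
-hypothetical):
-
-```json
-{
-  "_from": "origin_airport/GKA",
-  "_to": "destination_airport/MAG",
-  "airline": "PX"
-}
-```
-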
-See also [Design your graph in the Data Loader](../data-loader/design-graph.md).
-
-## Step 4: Import and see the resulting graph
-
-After all the mapping is done, all you need to do is click
-**Save and start import**. The report provides an overview of the files
-processed and the documents created, as well as a link to your new graph.
-See also [Start import](../data-loader/import.md).
-
-
-
-Finally, click **See your new graph** to open the ArangoDB web interface and
-explore your new collections and graph.
-
-
-
-Happy graphing!
\ No newline at end of file
diff --git a/site/content/3.11/arangograph/data-loader/import.md b/site/content/3.11/arangograph/data-loader/import.md
deleted file mode 100644
index 1589244278..0000000000
--- a/site/content/3.11/arangograph/data-loader/import.md
+++ /dev/null
@@ -1,72 +0,0 @@
----
-title: Start the import
-menuTitle: Start import
-weight: 15
-description: >-
- Once the data files are provided and the graph is designed, you can start the import
----
-
-Before starting the actual import, make sure that:
-- You have selected a database for import or created a new one;
-- You have provided a valid name for your graph;
-- You have created at least one node;
-- You have created at least one edge;
-- You have uploaded at least one file;
-- Every file is related to at least one node or edge;
-- Every node and edge is linked to a file;
-- Every node and edge has a unique label;
-- Every node has a primary identifier selected;
-- Every edge has an origin and destination file header selected.
-
-To continue with the import, click the **Save and start import** button. The data
-importer provides an overview of the collections that have been created from
-the data provided in the files.
-
-To access your newly created graph in the ArangoDB web interface, click the
-**See your new graph** button.
-
-## File validation
-
-Once the import has started, the files that you have provided are being validated.
-If the validation process detects parsing errors in any of the files, the import
-is temporarily paused and the validation errors are shown. You can get a full
-report by clicking the **See full report** button.
-
-At this point, you can:
-- Continue with the import without addressing the errors. The CSV files are
-  still included in the import, but the invalid rows are skipped and excluded
-  from it.
-- Revisit the problematic file(s), resolve the issues, and then re-upload the
- file(s) again.
-
-{{< tip >}}
-To ensure the integrity of your data, it is recommended to address all the errors
-detected during the validation process.
-{{< /tip >}}
-
-### Validation errors and their meanings
-
-#### Invalid Quotation Mark
-
-This error indicates issues with quotation marks in the CSV data.
-It can occur due to improper use of quotes.
-
-#### Missing Quotation Marks
-
-This error occurs when quotation marks are missing or improperly placed in the
-CSV data, potentially affecting data enclosure.
-
-#### Insufficient Data Fields
-
-This error occurs when a CSV row has fewer fields than expected. It may indicate
-missing or improperly formatted data.
-
-#### Excessive Data Fields
-
-This error occurs when a CSV row has more fields than expected, possibly due to
-extra data or formatting issues.
-
-#### Unidentifiable Field Separator
-
-This error suggests that the parser could not identify the field separator
-character in the CSV data.
\ No newline at end of file
diff --git a/site/content/3.11/arangograph/deployments/_index.md b/site/content/3.11/arangograph/deployments/_index.md
deleted file mode 100644
index b8dd98d490..0000000000
--- a/site/content/3.11/arangograph/deployments/_index.md
+++ /dev/null
@@ -1,301 +0,0 @@
----
-title: Deployments in ArangoGraph
-menuTitle: Deployments
-weight: 20
-description: >-
- How to create and manage deployments in ArangoGraph
----
-An ArangoGraph deployment is an ArangoDB cluster or single server, configured
-as you choose.
-
-Each deployment belongs to a project, which belongs to an organization in turn.
-You can have any number of deployments under one project.
-
-**Organizations → Projects → Deployments**
-
-
-
-## How to create a new deployment
-
-1. If you do not have a project yet,
- [create a project](../projects.md#how-to-create-a-new-project) first.
-2. In the main navigation, click __Deployments__.
-3. Click the __New deployment__ button.
-4. Select the project you want to create the deployment for.
-5. Set up your deployment. The configuration options are described below.
-
-{{< info >}}
-Deployments contain exactly **one policy**. Within that policy, you can define
-role bindings to regulate access control on a deployment level.
-{{< /info >}}
-
-### In the **General** section
-
-- Enter the __Name__ and optionally a __Short description__ for the deployment.
-- Select the __Provider__ and __Region__ of the provider.
- {{< warning >}}
- Once a deployment has been created, it is not possible to change the
- provider and region anymore.
- {{< /warning >}}
-
-
-
-### In the **Sizing** section
-
-- Choose a __Model__ for the deployment:
-
- - __OneShard__ deployments are suitable when your data set fits in a single node.
- They are ideal for graph use cases. This model has a fixed number of 3 nodes.
-
- - __Sharded__ deployments are suitable when your data set is larger than a single
- node. The data will be sharded across multiple nodes. You can select the
- __Number of nodes__ for this deployment model. The more nodes you have, the
- higher the replication factor can be.
-
- - __Single Server__ deployments are suitable when you want to try out ArangoDB without
- the need for high availability or scalability. The deployment will contain a
- single server only. Your data will not be replicated and your deployment can
- be restarted at any time.
-
-- Select a __NODE SIZE__ from the list of available options. Each option is a
- combination of vCPUs, memory, and disk space per node.
-
-
-
-### In the **Advanced** section
-
-- Select the __DB Version__.
- If you don't know which DB version to select, use the version selected by default.
-- Select the desired __Support Plan__. Click the link below the field to get
- more information about the different support plans.
-- In the __Certificate__ field:
- - The default certificate created for your project is selected automatically.
- - If you have no default certificate, or want to use a new certificate,
- create a new certificate by typing the desired name for it and hitting
- enter or clicking __Create "\"__ when done.
- - Or, if you already have multiple certificates, select the desired one.
-- _Optional but strongly recommended:_ In the __IP allowlist__ field, select the
-  desired allowlist if you want to limit access to your deployment to certain
-  IP ranges. To create an allowlist, navigate to your project and select the
- __IP allowlists__ tab. See [How to manage IP allowlists](../projects.md#how-to-manage-ip-allowlists)
- for details.
- {{< security >}}
-  For any kind of production deployment, it is strongly advised to use an IP allowlist.
- {{< /security >}}
-- Select a __Deployment Profile__. Profile options are only available on request.
-
-
-
-### In the **Summary** panel
-
-1. Review the configuration, and if you're okay with the setup, press the
- __Create deployment__ button.
-2. You are taken to the deployment overview page.
- **Note:** Your deployment is being bootstrapped at that point. This process
- takes a few minutes. Once the deployment is ready, you receive a confirmation
- email.
-
-## How to access your deployment
-
-1. In the main navigation, click the __Dashboard__ icon and then click __Projects__.
-2. In the __Projects__ page, click the project for
- which you created a deployment earlier.
-3. Alternatively, you can access your deployment by clicking __Deployments__ in the
- dashboard navigation. This page shows all deployments from all projects.
- Click the name of the deployment you want to view.
-4. For each deployment in your project, you see the status. While your new
- deployment is being set up, it displays the __bootstrapping__ status.
-5. Press the __View__ button to show the deployment page.
-6. When a deployment displays a status of __OK__, you can access it.
-7. Click the __Open database UI__ button or the database UI link to open
- the dashboard of your new ArangoDB deployment.
-
-At this point your ArangoDB deployment is available for you to use — **Have fun!**
-
-If you have disabled the [auto-login option](#auto-login-to-database-ui) to the
-database web interface, you need to follow the additional steps outlined below
-to access your deployment:
-
-1. Click the copy icon next to the root password. This copies the deployment
- root password to your clipboard. You can also click the view icon to unmask
- the root password to see it.
- {{< security >}}
- Do not use the root username/password for everyday operations. It is recommended
- to use them only to create other user accounts with appropriate permissions.
- {{< /security >}}
-2. Click the __Open database UI__ button or the database UI link to open
- the dashboard of your new ArangoDB deployment.
-3. In the __username__ field type `root`, and in the __password__ field paste the
- password that you copied earlier.
-4. Press the __Login__ button.
-5. Press the __Select DB: \_system__ button.
-
-{{< info >}}
-Each deployment is accessible on two ports:
-
-- Port `8529` is the standard port recommended for use by web browsers.
-- Port `18529` is the alternate port that is recommended for use by automated services.
-
-The difference between these ports is the certificate used. If you enable
-__Use well-known certificate__, the certificate used on port `8529` is well-known
-and automatically accepted by most web browsers. The certificate used on port
-`18529` is a self-signed certificate. For securing automated services, the use of
-a self-signed certificate is recommended. Read more on the
-[Certificates](../security-and-access-control/x-509-certificates.md) page.
-{{< /info >}}
-
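-As a quick check, you can request the server version on both ports with `curl`
-(a sketch with a placeholder endpoint; `-k` skips certificate verification and
-is only a stopgap for the self-signed certificate on port `18529`):
-
-```bash
-# Port 8529: the well-known certificate is accepted automatically
-curl -u root https://<your-deployment>.arangodb.cloud:8529/_api/version
-
-# Port 18529: self-signed certificate, so skip verification or pin the CA
-curl -k -u root https://<your-deployment>.arangodb.cloud:18529/_api/version
-```
-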
-## Password settings
-
-### How to enable the automatic root user password rotation
-
-Password rotation refers to changing passwords regularly, a security best
-practice that reduces vulnerability to password-based attacks and exploits
-by limiting how long passwords are valid. The ArangoGraph Insights Platform
-can automatically change the `root` user password of an ArangoDB deployment
-periodically to improve security.
-
-1. Navigate to the __Deployment__ for which you want to enable an automatic
- password rotation for the root user.
-2. In the __Quick start__ section, click the button with the __gear__ icon next to the
- __ROOT PASSWORD__.
-3. In the __Password Settings__ dialog, turn the automatic password rotation on
- and click the __Confirm__ button.
-
- 
-4. You can expand the __Root password__ panel to see when the password was
- rotated last. The rotation takes place every three months.
-
-### Auto login to database UI
-
-ArangoGraph provides the ability to automatically log in to your database using
-your existing ArangoGraph credentials. This not only provides a seamless
-experience, sparing you from managing multiple sets of credentials,
-but also improves the overall security of your database. As your credentials
-are shared between ArangoGraph and your database, you can benefit from
-end-to-end audit traceability for a given user, as well as integration with
-ArangoGraph SSO.
-
-You can enable this feature in the **Password Settings** dialog. Note that it
-may take a few minutes to become active.
-Once enabled, you no longer have to fill in the `root` user and password of
-your ArangoDB deployment.
-
-{{< info >}}
-If you use the auto login feature with AWS
-[private endpoints](../deployments/private-endpoints.md), it is recommended
-to switch off the `custom DNS` setting.
-{{< /info >}}
-
-This feature can be disabled at any time. You may wish to consider explicitly
-disabling this feature in the following situations:
-- Your workflow requires you to access the database UI using different accounts
- with differing permission sets, as you cannot switch database users when
- automatic login is enabled.
-- You need to give individuals access to a database's UI without giving them
- any access to ArangoGraph. Note, however, that it's possible to only give an
- ArangoGraph user database UI access, without other ArangoGraph permissions.
-
-{{< warning >}}
-When the auto login feature is enabled, users cannot edit their permissions on
-the ArangoDB database web interface as all permissions are managed by the
-ArangoGraph platform.
-{{< /warning >}}
-
-Before getting started, make sure you are signed into ArangoGraph as a user
-with one of the following permissions in your project:
-- `data.deployment.full-access`
-- `data.deployment.read-only-access`
-
-Organization owners have these permissions enabled by default.
-The `deployment-full-access-user` and `deployment-read-only-user` roles which
-contain these permissions can also be granted to other members of the
-organization. See how to create a
-[role binding](../security-and-access-control/_index.md#how-to-view-edit-or-remove-role-bindings-of-a-policy).
-
-{{< warning >}}
-This feature is only available on port `443`.
-{{< /warning >}}
-
-## How to edit a deployment
-
-You can modify a deployment's configuration, including the ArangoDB version
-in use and the memory size, or even switch from a OneShard deployment to a
-Sharded one if your data set no longer fits in a single node.
-
-{{< tip >}}
-To edit an existing deployment, you must have the necessary set of permissions
-attached to your role. Read more about [roles and permissions](../security-and-access-control/_index.md#roles).
-{{< /tip >}}
-
-1. In the main navigation, click **Deployments** and select an existing
- deployment from the list, or click **Projects**, select a project, and then
- select a deployment.
-2. In the **Quick start** section, click the **Edit** button.
-3. In the **General** section, you can do the following:
- - Change the deployment name
- - Change the deployment description
-4. In the **Sizing** section, you can do the following:
- - Change **OneShard** deployments into **Sharded** deployments. To do so,
- select **Sharded** in the **Model** dropdown list. You can select the
- number of nodes for your deployment. This can also be modified later on.
- {{< warning >}}
- You cannot switch from **Sharded** back to **OneShard**.
- {{< /warning >}}
- - Change **Single Server** deployments into **OneShard** or **Sharded** deployments.
- {{< warning >}}
- You cannot switch from **Sharded** or **OneShard** back to **Single Server**.
- {{< /warning >}}
- - Scale up or down the node size.
- {{< warning >}}
- When scaling up or down the size in AWS deployments, the new value gets locked
- and cannot be changed again until the cloud provider rate limit is reset.
- {{< /warning >}}
-5. In the **Advanced** section, you can do the following:
- - Upgrade the ArangoDB version that is currently being used. See also
-     [Upgrades and Versioning](upgrades-and-versioning.md).
- - Select a different certificate.
- - Add or remove an IP allowlist.
- - Select a deployment profile.
-6. All changes are reflected in the **Summary** panel. Review the new
- configuration and click **Save changes**.
-
-## How to connect a driver to your deployment
-
-[ArangoDB drivers](../../develop/drivers/_index.md) allow you to use your ArangoGraph
-deployment as a database system for your applications. Drivers act as interfaces
-between different programming languages and ArangoDB, enabling you to
-connect to and manipulate ArangoDB deployments from within compiled programs
-or scripting languages.
-
-To get started, open a deployment.
-In the **Quick start** section, click the **Connecting drivers** button and
-select your programming language. The code snippets provide examples on how to
-connect to your instance.
-
-{{< tip >}}
-Note that the ArangoGraph Insights Platform runs deployments in a cluster
-configuration. To achieve the best possible availability, your client
-application has to handle connection failures by retrying operations if needed.
-{{< /tip >}}
-
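-Independent of any particular driver, a minimal connectivity check from the
-ArangoDB Shell might look like this sketch (the endpoint is a placeholder for
-your deployment's endpoint):
-
-```bash
-arangosh \
-  --server.endpoint ssl://<your-deployment>.arangodb.cloud:8529 \
-  --server.username root \
-  --server.database _system
-```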
-
-
-## How to delete a deployment
-
-{{< danger >}}
-Deleting a deployment deletes all its data and backups.
-This operation is **irreversible**. Please proceed with caution.
-{{< /danger >}}
-
-1. In the main navigation, in the __Projects__ section, click the project that
- holds the deployment you wish to delete.
-2. In the __Deployments__ page, click the deployment you wish to delete.
-3. Click the __Delete/Lock__ entry in the navigation.
-4. Click the __Delete deployment__ button.
-5. In the modal dialog, confirm the deletion by entering `Delete!` into the
- designated text field.
-6. Confirm the deletion by pressing the __Yes__ button.
-7. You will be taken back to the deployments page of the project.
- The deployment being deleted will display the __Deleting__ status until it has
- been successfully removed.
diff --git a/site/content/3.11/arangograph/deployments/private-endpoints.md b/site/content/3.11/arangograph/deployments/private-endpoints.md
deleted file mode 100644
index c3400b9711..0000000000
--- a/site/content/3.11/arangograph/deployments/private-endpoints.md
+++ /dev/null
@@ -1,168 +0,0 @@
----
-title: Private endpoint deployments in ArangoGraph
-menuTitle: Private endpoints
-weight: 5
-description: >-
- Use the private endpoint feature to isolate your deployments and increase
- security
----
-This topic describes how to create a private endpoint deployment and
-securely deploy to various cloud providers such as Google Cloud Platform (GCP)
-and Amazon Web Services (AWS). Follow the steps outlined below to get started.
-
-{{< tip >}}
-In AWS, private endpoints should be located in the same region.
-{{< /tip >}}
-
-{{< info >}}
-For more information about the certificates used for private endpoints, please
-refer to the [How to manage certificates](../security-and-access-control/x-509-certificates.md)
-section.
-{{< /info >}}
-
-## Google Cloud Platform (GCP)
-
-Google Cloud Platform (GCP) offers a feature called
-[Private Service Connect](https://cloud.google.com/vpc/docs/private-service-connect)
-that allows private consumption of services across VPC networks that belong to
-different groups, teams, projects, or organizations. You can publish and consume
-services using the defined IP addresses which are internal to your VPC network.
-
-In ArangoGraph, you can
-[create a regular deployment](_index.md#how-to-create-a-new-deployment)
-and change it to a private endpoint deployment afterwards.
-
-Such a deployment is no longer reachable from the internet, except via
-the ArangoGraph dashboard for administration. To revert to a public deployment,
-please contact support via **Request help** in the help menu.
-
-To configure a private endpoint for GCP, you need to provide your Google project
-names. ArangoGraph then configures a **Private Endpoint Service** that automatically
-connects to private endpoints that are created for those projects.
-
-After the creation of the **Private Endpoint Service**, you should receive a
-service attachment that you need during the creation of your private endpoint(s).
-
-1. Open the deployment you want to change.
-2. In the **Quick start** section, click the **Edit** button with an ellipsis (`…`)
- icon.
-3. Click **Change to private endpoint** in the menu.
- 
-4. In the configuration wizard, click **Next** to enter your configuration details.
-5. Enter one or more Google project names. You can also add them later in the summary view.
- Click **Next**.
- 
-6. Configure custom DNS names. This step is optional and disabled by default.
- Note that, once enabled, this setting is immutable and cannot be reverted.
- Click **Next** to continue.
- {{< info >}}
- By default, your private endpoint is available to all VPCs that connect to it
- at `https://-pe.arangodb.cloud` with the
- [well-known certificate](../security-and-access-control/x-509-certificates.md#well-known-x509-certificates).
- If the custom DNS is enabled, you will be responsible for the DNS of your
- private endpoints.
- {{< /info >}}
- 
-7. Click **Confirm Settings** to change the deployment.
-8. Back in the **Overview** page, scroll down to the **Private Endpoint** section
- that is now displayed to see the connection status and to change the
- configuration.
-9. ArangoGraph configures a **Private Endpoint Service**. As soon as the
- **Service Attachment** is ready, you can use it to configure the Private
- Service Connect in your VPC.
-
-{{< tip >}}
-When you create a private endpoint in ArangoGraph, both endpoints (the regular
-one and the new private one) are available for two hours. During this time period,
-you can switch your application to the new private endpoint. After this period,
-the old endpoint is not available anymore.
-{{< /tip >}}
-
-## Amazon Web Services (AWS)
-
-AWS offers a feature called [AWS PrivateLink](https://aws.amazon.com/privatelink)
-that enables you to privately connect your Virtual Private Cloud (VPC) to
-services, without exposure to the internet. You can control the specific API
-endpoints, sites, and services that are reachable from your VPC.
-
-Amazon VPC allows you to launch AWS resources into a
-virtual network that you have defined. It closely resembles a traditional
-network that you would normally operate, with the benefits of using the AWS
-scalable infrastructure.
-
-In ArangoGraph, you can
-[create a regular deployment](_index.md#how-to-create-a-new-deployment) and change it
-to a private endpoint deployment afterwards.
-
-The ArangoDB private endpoint deployment is no longer exposed to the public
-internet, except via the ArangoGraph dashboard for administration. To revert
-it to a public deployment, please contact the support team via **Request help**
-in the help menu.
-
-To configure a private endpoint for AWS, you need to provide the AWS principals related
-to your VPC. The ArangoGraph Insights Platform configures a **Private Endpoint Service**
-that automatically connects to private endpoints that are created in those principals.
-
-1. Open the deployment you want to change.
-2. In the **Quick start** section, click the **Edit** button with an ellipsis (`…`)
- icon.
-3. Click **Change to private endpoint** in the menu.
- 
-4. In the configuration wizard, click **Next** to enter your configuration details.
-5. Click **Add Principal** to start configuring the AWS principal(s).
-   You need to enter a valid account, which is your 12-digit AWS account ID.
- Adding usernames or role names is optional. You can also
- skip this step and add them later from the summary view.
- {{< info >}}
- Principals cannot be changed anymore once a connection has been established.
- {{< /info >}}
- {{< warning >}}
- To verify your endpoint service in AWS, you must use the same principal as
- configured in ArangoGraph. Otherwise, the service name cannot be verified.
- {{< /warning >}}
- 
-6. Configure custom DNS names. This step is optional and disabled by default;
- you can also add or change them later from the summary view.
- Click **Next** to continue.
- {{< info >}}
- By default, your private endpoint is available to all VPCs that connect to it
- at `https://-pe.arangodb.cloud` with the well-known certificate.
- If the custom DNS is enabled, you will be responsible for the DNS of your
- private endpoints.
- {{< /info >}}
- 
-7. Confirm that you want to use a private endpoint for your deployment by
- clicking **Confirm Settings**.
-8. Back in the **Overview** page, scroll down to the **Private Endpoint** section
- that is now displayed to see the connection status and change the
- configuration, if needed.
- 
- {{< info >}}
- Note that
- [Availability Zones](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#concepts-availability-zones)
- are independently mapped for each AWS account. The physical location of a
- zone may differ from one account to another account. To coordinate
- Availability Zones across AWS accounts, you must use the
- [Availability Zone ID](https://docs.aws.amazon.com/ram/latest/userguide/working-with-az-ids.html).
- {{< /info >}}
-
- {{< tip >}}
- To learn more or request help from the ArangoGraph support team, click **Help**
- in the top right corner of the **Private Endpoint** section.
- {{< /tip >}}
-9. ArangoGraph configures a **Private Endpoint Service**. As soon as this is available,
- you can use it in the AWS portal to create an interface endpoint to connect
- to your endpoint service. For more details, see
- [How to connect to an endpoint](https://docs.aws.amazon.com/vpc/latest/privatelink/create-endpoint-service.html#share-endpoint-service).
-
-{{< tip >}}
-To establish connectivity and enable traffic flow, make sure you add a route
-from the originating machine to the interface endpoint.
-{{< /tip >}}
-
-{{< tip >}}
-When you create a private endpoint in ArangoGraph, both endpoints (the regular
-one and the new private one) are available for two hours. During this time period,
-you can switch your application to the new private endpoint. After this period,
-the old endpoint is not available anymore.
-{{< /tip >}}
diff --git a/site/content/3.11/arangograph/deployments/upgrades-and-versioning.md b/site/content/3.11/arangograph/deployments/upgrades-and-versioning.md
deleted file mode 100644
index 211d271c92..0000000000
--- a/site/content/3.11/arangograph/deployments/upgrades-and-versioning.md
+++ /dev/null
@@ -1,92 +0,0 @@
----
-title: Upgrades and Versioning in ArangoGraph
-menuTitle: Upgrades and Versioning
-weight: 10
-description: >-
- Select which version of ArangoDB you want to use within your ArangoGraph
- deployment and choose when to roll out your upgrades
----
-{{< info >}}
-Please note that this policy comes into effect in April 2023.
-{{< /info >}}
-
-## Release Definitions
-
-The following definitions are used for release types of ArangoDB within ArangoGraph:
-
-| Release | Introduces | Contains breaking changes |
-|----------|-------------|----------------------------|
-| **Major** (`X.y.z`) | Major new features and functionalities | Likely large changes |
-| **Minor** (`x.Y.z`) | Some new features or improvements | Likely small changes |
-| **Patch** (`x.y.Z`) | Essential fixes and improvements | Small changes in exceptional circumstances |
-
-## Release Channels
-
-When creating a deployment in ArangoGraph, you can select the minor version
-of ArangoDB that your deployment is going to use. This minor version is in the
-format `Major.Minor` and indicates the major and minor version of ArangoDB that
-is used in this deployment, for example `3.10` or `3.9`.
-
-To provide secure and reliable service, databases are deployed on the latest
-patch version available for the selected minor version. For example, if `3.10` is
-selected and `3.10.3` is the latest version of ArangoDB available for the `3.10`
-minor version, then the deployment is initially using ArangoDB `3.10.3`.
-
-## Upgrades
-
-### Manual Upgrades
-
-At any time, you can change the release channel of your deployment to a later
-release channel, but not to an earlier one. For example, if you are using `3.10`
-then you can change your deployment’s release channel to `3.11`, but you would
-not be able to change the release channel to `3.9`.
-See [how to edit a deployment](_index.md#how-to-edit-a-deployment).
-
-Upon changing your release channel, an upgrade process for your deployment is
-initiated to upgrade your running database to the latest patch release of your
-selected release channel. You can use this mechanism to upgrade your deployments
-at a time that suits you, prior to the forced upgrade when your release channel
-is no longer available.
-
-### Automatic Upgrades
-
-#### Major Versions (`X.y.z`)
-
-The potential disruption of a major version upgrade requires additional testing
-of any applications connecting to your ArangoGraph deployment. As a result, when
-a new major version is released on the ArangoGraph platform, an email is sent out
-to inform you of this release.
-
-If the ArangoDB version that you are currently using is no longer available on the
-ArangoGraph platform, you are forced to upgrade to the next available version.
-Prior to the removal of the version, an email is sent out to inform you of this
-forced upgrade.
-
-#### Minor Versions (`x.Y.z`)
-
-Although minor upgrades are not expected to cause significant compatibility
-changes like major versions, they may still require additional planning and
-validation.
-
-This is why minor upgrades are treated in the same manner as major upgrades
-within ArangoGraph. When a new minor version is released on the ArangoGraph
-platform, an email is sent out to inform you of this release.
-
-If the ArangoDB version that you are currently using is no longer available on the
-ArangoGraph platform, you are forced to upgrade to the next available version.
-Prior to the removal of the version, an email is sent out to inform you of this
-forced upgrade.
-
-#### Patch Versions (`x.y.Z`)
-
-Upgrades between patch versions are transparent, with no significant disruption
-to your applications. As such, you can expect to be automatically upgraded to
-the latest patch version of your selected minor version shortly after it becomes
-available in ArangoGraph.
-
-ArangoGraph aims to give approximately one week's notice prior to upgrading your
-deployments to the latest patch release, although in exceptional circumstances
-(such as a critical security issue) the upgrade may be triggered with less than
-one week's notice.
-The upgrade is carried out automatically. However, if you need the upgrade to be
-deferred temporarily, contact the ArangoGraph Support team to request that.
diff --git a/site/content/3.11/arangograph/migrate-to-the-cloud.md b/site/content/3.11/arangograph/migrate-to-the-cloud.md
deleted file mode 100644
index 8a3f4a9802..0000000000
--- a/site/content/3.11/arangograph/migrate-to-the-cloud.md
+++ /dev/null
@@ -1,259 +0,0 @@
----
-title: Cloud Migration Tool
-menuTitle: Migrate to the cloud
-weight: 30
-description: >-
- Migrating data from bare metal servers to the cloud with minimal downtime
-draft: true
----
-The `arangosync-migration` tool allows you to easily move from on-premises to
-the cloud while ensuring a smooth transition with minimal downtime.
-Start the cloud migration, let the tool do the job and, at the same time,
-keep your local cluster up and running.
-
-Some of the key benefits of the cloud migration tool include:
-- Safety comes first - pre-checks and potential failures are carefully handled.
-- Your data is secure and fully encrypted.
-- Ease-of-use with a live migration while your local cluster is still in use.
-- Get access to what a cloud-based fully managed service has to offer:
- high availability and reliability, elastic scalability, and much more.
-
-## Downloading the tool
-
-The `arangosync-migration` tool is available to download for the following
-operating systems:
-
-**Linux**
-- [AMD64 (x86_64) architecture](https://download.arangodb.com/arangosync-migration/linux/amd64/arangosync-migration)
-- [ARM64 (AArch64) architecture](https://download.arangodb.com/arangosync-migration/linux/arm64/arangosync-migration)
-
-**macOS / Darwin**
-- [AMD64 (x86_64) architecture](https://download.arangodb.com/arangosync-migration/darwin/amd64/arangosync-migration)
-- [ARM64 (AArch64) architecture](https://download.arangodb.com/arangosync-migration/darwin/arm64/arangosync-migration)
-
-**Windows**
-- [AMD64 (x86_64) architecture](https://download.arangodb.com/arangosync-migration/windows/amd64/arangosync-migration.exe)
-- [ARM64 (AArch64) architecture](https://download.arangodb.com/arangosync-migration/windows/arm64/arangosync-migration.exe)
-
-For macOS as well as other Unix-based operating systems, run the following
-command to make sure you can execute the binary:
-
-```bash
-chmod 755 ./arangosync-migration
-```
-
-## Prerequisites
-
-Before getting started, make sure the following prerequisites are in place:
-
-- Go to the [ArangoGraph Insights Platform](https://dashboard.arangodb.cloud/home)
-  and sign in. If you don’t have an account yet, sign up to create one.
-
-- Generate an ArangoGraph API key and API secret. See a detailed guide on
- [how to create an API key](api/set-up-a-connection.md#creating-an-api-key).
-
-{{< info >}}
-The cloud migration tool is only available for clusters.
-{{< /info >}}
-
-### Setting up the target deployment in ArangoGraph
-
-Continue by [creating a new ArangoGraph deployment](deployments/_index.md#how-to-create-a-new-deployment)
-or choose an existing one.
-
-The target deployment in ArangoGraph requires specific configuration rules to be
-set up before the migration can start:
-
-- **Configuration settings**: The target deployment must be compatible with the
- source data cluster. This includes the ArangoDB version that is being used,
- the DB-Servers count, and disk space.
-- **Deployment region and cloud provider**: Choose the closest region to your
- data cluster. This factor can speed up your migration to the cloud.
-
-After setting up your ArangoGraph deployment, wait for a few minutes for it to become
-fully operational.
-
-{{< info >}}
-Note that Developer mode deployments are not supported.
-{{< /info >}}
-
-## Running the migration tool
-
-The `arangosync-migration` tool provides a set of commands that allow you to:
-- start the migration process
-- check whether your source and target clusters are fully compatible
-- get the current status of the migration process
-- stop or abort the migration process
-- switch the local cluster to read-only mode
-
-### Starting the migration process
-
-To start the migration process, run the following command:
-
-```bash
-arangosync-migration start
-```
-
-The `start` command runs some pre-checks. Among other things, it measures
-the disk space occupied by your ArangoDB cluster. If you are using the
-same data volume for ArangoDB servers and other data as well, the measurements
-can be incorrect. Provide the `--source.ignore-metrics` option to overcome this.
-
-You also have the option of doing a `--check-only` run without starting the
-actual migration. If specified, this checks whether your local cluster and
-target deployment are compatible without sending any data to ArangoGraph.
-
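-For example, a compatibility check without starting the migration could look
-like this (using the same connection options as a regular `start` run):
-
-```bash
-./arangosync-migration start --check-only \
-    --source.endpoint=$COORDINATOR_ENDPOINT \
-    --source.jwt-secret=/path-to/jwt-secret.file \
-    --arango-graph.api-key=$ARANGO_GRAPH_API_KEY \
-    --arango-graph.api-secret=$ARANGO_GRAPH_API_SECRET \
-    --arango-graph.deployment-id=$ARANGO_GRAPH_DEPLOYMENT_ID
-```
-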
-Once the migration starts, the local cluster enters monitoring mode and the
-synchronization status is displayed in real-time. If you don't want to see the
-status, you can terminate this process, as the underlying agent process
-continues to work. If something goes wrong, restarting the same command restores
-the replication state.
-
-To restart the migration, first `stop` or `stop --abort` the migration. Then,
-start it again using the `start` command.
-
-{{< warning >}}
-Starting the migration creates a full copy of all data from the source cluster
-to the target deployment in ArangoGraph. All data that has previously existed in the
-target deployment will be lost.
-{{< /warning >}}
-
-### During the migration
-
-The following takes place during an active migration:
-- The source data cluster remains usable.
-- The target deployment in ArangoGraph is switched to read-only mode.
-- Your root user password is not copied to the target deployment in ArangoGraph.
- To get your root password, select the target deployment from the ArangoGraph
- Dashboard and go to the **Overview** tab. All other users are fully synchronized.
-
-{{< warning >}}
-The migration tool increases the CPU and memory usage of the server you are
-running it on. Depending on your ArangoDB usage pattern, it may take a lot of CPU
-to handle the replication. You can stop the migration process anytime
-if you see any problems.
-{{< /warning >}}
-
-```bash
-./arangosync-migration start \
- --source.endpoint=$COORDINATOR_ENDPOINT \
- --source.jwt-secret=/path-to/jwt-secret.file \
- --arango-graph.api-key=$ARANGO_GRAPH_API_KEY \
- --arango-graph.api-secret=$ARANGO_GRAPH_API_SECRET \
- --arango-graph.deployment-id=$ARANGO_GRAPH_DEPLOYMENT_ID
-```
-
-### How long does it take?
-
-The total time required to complete the migration depends on how much data you
-have and how often write operations are executed during the process.
-
-You can also track the progress by checking the **Migration status** section of
-your target deployment in the ArangoGraph dashboard.
-
-
-
-### Getting the current status
-
-To print the current status of the migration, run the following command:
-
-```bash
-./arangosync-migration status \
- --arango-graph.api-key=$ARANGO_GRAPH_API_KEY \
- --arango-graph.api-secret=$ARANGO_GRAPH_API_SECRET \
- --arango-graph.deployment-id=$ARANGO_GRAPH_DEPLOYMENT_ID
-```
-
-You can also add the `--watch` option to start monitoring the status in real-time.
-
-### Stopping the migration process
-
-The `arangosync-migration stop` command stops the migration and terminates
-the migration agent process.
-
-If replication is running normally, the command waits until all shards are
-in sync. The local cluster is then switched into read-only mode.
-After all shards are in sync and the migration has stopped, the target deployment
-is switched into the mode specified in the `--source.server-mode` option. If no
-option is specified, it defaults to read/write mode.
-
-```bash
-./arangosync-migration stop \
- --arango-graph.api-key=$ARANGO_GRAPH_API_KEY \
- --arango-graph.api-secret=$ARANGO_GRAPH_API_SECRET \
- --arango-graph.deployment-id=$ARANGO_GRAPH_DEPLOYMENT_ID
-```
-
-The additional `--abort` option is supported. If specified, the `stop` command
-no longer checks whether both deployments are in sync and stops all
-migration-related processes as soon as possible.
-
-### Switching the local cluster to read-only mode
-
-The `arangosync-migration set-server-mode` command allows you to switch
-[read-only mode](../develop/http-api/administration.md#set-the-server-mode-to-read-only-or-default)
-on and off for your local cluster.
-
-In read-only mode, all write operations fail with error code `1004`
-(ERROR_READ_ONLY).
-Creating or dropping databases and collections also fails with
-error code `11` (ERROR_FORBIDDEN).
-
-```bash
-./arangosync-migration set-server-mode \
- --source.endpoint=$COORDINATOR_ENDPOINT \
- --source.jwt-secret=/path-to/jwt-secret.file \
- --source.server-mode=readonly
-```
-
-The `--source.server-mode` option allows you to specify the desired server mode.
-Allowed values are `readonly` or `default`.
-
-### Supported environment variables
-
-The `arangosync-migration` tool supports the following environment variables:
-
-- `$ARANGO_GRAPH_API_KEY`
-- `$ARANGO_GRAPH_API_SECRET`
-- `$ARANGO_GRAPH_DEPLOYMENT_ID`
-
-Using these environment variables is highly recommended to ensure a secure way
-of providing sensitive data to the application.
-
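-For instance, you might set them like this before running the tool (the values
-are placeholders):
-
-```bash
-export ARANGO_GRAPH_API_KEY="<api-key-id>"
-export ARANGO_GRAPH_API_SECRET="<api-key-secret>"
-export ARANGO_GRAPH_DEPLOYMENT_ID="<deployment-id>"
-```
-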
-### Restrictions and limitations
-
-When running the migration, ensure that your target deployment has at least
-the same amount of resources (CPU, RAM) as your local cluster. Otherwise, the
-migration process might get stuck or require manual intervention. This is closely
-connected to the type of data you have and how it is distributed between shards
-and collections.
-
-In general, the most important parameters are:
-- Total number of leader shards
-- The amount of data in bytes per collection
-
-Both parameters can be retrieved from the ArangoDB Web Interface.
-
-The `arangosync-migration` tool supports migrating large datasets of up to
-5 TB of data and 3800 leader shards, as well as collections as big as 250 GB.
-
-If you have any questions, please
-[reach out to us](https://www.arangodb.com/contact).
-
-## Cloud migration workflow for minimal downtime
-
-1. Download and start the `arangosync-migration` tool. The target deployment
- is switched into read-only mode automatically.
-2. Wait until all shards are in sync. You can use the `status` or the `start`
- command with the same parameters to track that.
-3. Optionally, when all shards are in-sync, you can switch your applications
- to use the endpoint of the ArangoGraph deployment, but note that it stays in
- read-only mode until the migration process is fully completed.
-4. Stop the migration using the `stop` subcommand. The following steps are executed:
- - The source data cluster is switched into read-only mode.
- - It waits until all shards are synchronized.
- - The target deployment is switched into default read/write mode.
-
- {{< info >}}
- If you switched the source data cluster into read-only mode,
- you can switch it back to default (read/write) mode using the
- `set-server-mode` subcommand.
- {{< /info >}}
diff --git a/site/content/3.11/arangograph/monitoring-and-metrics.md b/site/content/3.11/arangograph/monitoring-and-metrics.md
deleted file mode 100644
index 2b9ede4b4a..0000000000
--- a/site/content/3.11/arangograph/monitoring-and-metrics.md
+++ /dev/null
@@ -1,137 +0,0 @@
----
-title: Monitoring & Metrics in ArangoGraph
-menuTitle: Monitoring & Metrics
-weight: 40
-description: >-
- ArangoGraph provides various built-in tools and integrations to help you
- monitor your deployment
----
-The ArangoGraph Insights Platform provides integrated charts, metrics, and logs
-to help you monitor your deployment. This allows you to track your deployment's
-performance, resource utilization, and overall status.
-
-The key features include:
-- **Built-in monitoring**: Get immediate access to monitoring capabilities for
- your deployments without any additional setup.
-- **Chart-based metrics representation**: Visualize the usage of the DB-Servers
- and Coordinators over a selected timeframe.
-- **Integration with Prometheus and Grafana**: Connect your metrics to Prometheus
- and Grafana for in-depth visualization and analysis.
-
-To get started, select an existing deployment from within a project and
-click **Monitoring** in the navigation.
-
-
-
-## Built-in monitoring and metrics
-
-### In the **Servers** section
-
-The **Servers** section offers an overview of the DB-Servers, Coordinators,
-and Agents used in your deployment. It provides essential details such as each
-server's ID and type, the running ArangoDB version, as well as their memory,
-CPU, and disk usage.
-
-If you need to restart a server, you can do so by using the
-**Gracefully restart this server** action button. This shuts down all services
-normally, allowing ongoing operations to finish gracefully before the restart
-occurs.
-
-Additionally, you can access detailed logs via the **Logs** button. This allows
-you to apply filters to obtain logs from all server types or select specific ones
-(e.g., only Coordinators or only DB-Servers) within a timeframe. To download the
-logs, click the **Save** button.
-
-
-
-### In the **Metrics** section
-
-The **Metrics** section displays charts depicting the
-resource utilization of DB-Servers and Coordinators within a specified timeframe.
-
-You can select one or more DB-Servers and choose **CPU**, **Memory**, or **Disk**
-to visualize their respective usage. The search box enables you to easily find
-a server by its ID, which is particularly useful when you have a large number
-of servers and need to quickly find a particular one among many.
-
-Similarly, you can repeat the process for Coordinators to see the **CPU** and
-**Memory** usage.
-
-
-
-## Connect with Prometheus and Grafana
-
-The ArangoGraph Insights Platform provides metrics for each deployment in a
-[Prometheus](https://prometheus.io/)-compatible format.
-You can use these metrics to gather detailed insights into the current
-and previous states of your deployment.
-Once metrics are collected by Prometheus, you can inspect them using tools
-such as [Grafana](https://grafana.com/oss/grafana/).
-
-
-
-### Metrics tokens
-
-The **Metrics tokens** section allows you to create a new metrics token,
-which is required for connecting to Prometheus.
-
-1. To create a metrics token, click **New metrics token**.
-2. For **Name**, enter a name for the metrics token.
-3. Optionally, you can also enter a **Short description**.
-4. Select the **Lifetime** of the metrics token.
-5. Click **Create**.
-
-
-
-### How to connect Prometheus
-
-1. In the **Metrics** section, click **Connect Prometheus**.
-2. Create the `prometheus.yml` file with the following content:
- ```yaml
- global:
- scrape_interval: 60s
- scrape_configs:
- - job_name: 'deployment'
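-       # Paste a metrics token created in the "Metrics tokens" section here: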
- bearer_token: ''
- scheme: 'https'
- static_configs:
- - targets: ['6775e7d48152.arangodb.cloud:8829']
- tls_config:
- insecure_skip_verify: true
- ```
-3. Start Prometheus with the following command:
- ```sh
- docker run -d \
- -p 9090:9090 -p 3000:3000 --name prometheus \
- -v $(pwd)/prometheus.yml:/etc/prometheus/prometheus.yml:ro \
- prom/prometheus
- ```
- {{< info >}}
-   This command also opens port 3000 for Grafana. In a production environment,
-   this is not needed, and it is not recommended to keep the port open.
- {{< /info >}}
-
-### How to connect Grafana
-
-1. Start Grafana with the following command:
- ```sh
- docker run -d \
- --network container:prometheus \
- grafana/grafana
- ```
-2. Go to `localhost:3000` and log in with the following credentials:
- - For username, enter *admin*.
- - For password, enter *admin*.
-
- {{< tip >}}
- After the initial login, make sure to change your password.
- {{< /tip >}}
-
-3. To add a data source, click **Add your first data source** and then do the following:
- - Select **Prometheus**.
- - For **HTTP URL**, enter `http://localhost:9090`.
- - Click **Save & Test**.
-4. To add a dashboard, open the menu and click **Create** and then **Import**.
-5. Download the [Grafana dashboard for ArangoGraph](https://github.com/arangodb-managed/grafana-dashboards).
-6. Copy the contents of the `main.json` file into the **Import via panel json** field in Grafana.
-7. Click **Load**.
diff --git a/site/content/3.11/arangograph/my-account.md b/site/content/3.11/arangograph/my-account.md
deleted file mode 100644
index e79415060a..0000000000
--- a/site/content/3.11/arangograph/my-account.md
+++ /dev/null
@@ -1,171 +0,0 @@
----
-title: My Account in ArangoGraph
-menuTitle: My Account
-weight: 35
-description: >-
- How to manage your user account, your organizations, and your API keys in ArangoGraph
----
-You can access information related to your account via the __User Toolbar__.
-The toolbar is in the top right corner in the ArangoGraph dashboard and
-accessible from every view. There are two elements:
-
-- __Question mark icon__: Help
-- __User icon__: My Account
-
-
-
-## Overview
-
-### How to view my account
-
-1. Hover over or click the user icon of the __User Toolbar__ in the top right corner.
-2. Click __Overview__ in the __My account__ section.
-3. The __Overview__ displays your name, email address, company and when the
- account was created.
-
-### How to edit the profile of my account
-
-1. Hover over or click the user icon in the __User Toolbar__ in the top right corner.
-2. Click __Overview__ in the __My account__ section.
-3. Click the __Edit__ button.
-4. Change your personal information and __Save__.
-
-
-
-## Organizations
-
-### How to view my organizations
-
-1. Hover over or click the user icon of the __User Toolbar__ in the top right corner.
-2. Click __My organizations__ in the __My account__ section.
-3. Your organizations are listed in a table.
- Click the organization name or the eye icon in the __Actions__ column to
- jump to the organization overview.
-
-### How to create a new organization
-
-1. Hover over or click the user icon of the __User Toolbar__ in the top right corner.
-2. Click __My organizations__ in the __My account__ section.
-3. Click the __New organization__ button.
-4. Enter a name and a description for the new organization and click the
- __Create__ button.
-
-{{< info >}}
-The free-to-try tier is limited to a single organization.
-{{< /info >}}
-
-### How to delete an organization
-
-{{< danger >}}
-Removing an organization deletes all of its projects and deployments.
-This operation cannot be undone and **all deployment data will be lost**.
-Please proceed with caution.
-{{< /danger >}}
-
-1. Hover over or click the user icon of the __User Toolbar__ in the top right corner.
-2. Click __My organizations__ in the __My account__ section.
-3. Click the __recycle bin__ icon in the __Actions__ column.
-4. Enter `Delete!` to confirm and click __Yes__.
-
-{{< info >}}
-If you are no longer a member of any organization, then a new organization is
-created for you when you log in again.
-{{< /info >}}
-
-
-
-## Invites
-
-Invitations are requests to join organizations. You can accept or reject
-pending invites.
-
-### How to create invites
-
-See [Users and Groups: How to add a new member to the organization](organizations/users-and-groups.md#how-to-add-a-new-member-to-the-organization)
-
-### How to respond to my invites
-
-#### I am not a member of an organization yet
-
-1. Once invited, you will receive an email asking you to join your
- ArangoGraph organization.
- 
-2. Click the __View my organization invite__ link in the email. You will be
- asked to log in or to create a new account.
-3. To sign up for a new account, click the __Start Free__ button or the
- __Sign up__ link in the header navigation.
- 
-4. After successfully signing up, you will receive a verification email.
-5. Click the __Verify my email address__ link in the email. It takes you back
- to the ArangoGraph Insights Platform site.
- 
-6. After successfully logging in, you can accept or reject the invite to
- join your organization.
- 
-7. After accepting the invite, you become a member of your organization and
- will be granted access to the organization and its related projects and
- deployments.
-
-#### I am already a member of an organization
-
-1. Once invited, you will receive an email asking you to join your
- ArangoGraph organization, as well as a notification in the ArangoGraph dashboard.
-2. Click the __View my organization invites__ link in the email, or hover over the
- user icon in the top right corner of the dashboard and click
- __My organization invites__.
- 
-3. On the __Invites__ tab of the __My account__ view, you can accept or reject
- pending invitations, as well as see past invitations that you accepted or
- rejected. Click the button with a checkmark icon to join the organization.
- 
-
-## API Keys
-
-API keys are authentication tokens intended to be used for scripting.
-They allow a script to authenticate on behalf of a user.
-
-An API key consists of a key and a secret. You need both to complete
-authentication.
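-
-For example, Oasisctl (the ArangoGraph command-line tool) can exchange an API
-key and secret for an access token that other commands and scripts can use
-(the identifiers below are placeholders):
-
-```
-oasisctl login \
-  --key-id <your-api-key-id> \
-  --key-secret <your-api-key-secret>
-```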
-
-### How to view my API keys
-
-1. Hover over or click the user icon in the __User Toolbar__ in the top right corner.
-2. Click __My API keys__ in the __My account__ section.
-3. Information about your API keys is listed in the __My API keys__ section.
-
-### How to create a new API key
-
-1. Hover over or click the user icon in the __User Toolbar__ in the top right corner.
-2. Click __My API keys__ in the __My account__ section.
-3. Click the __New API key__ button.
-4. Optionally limit the API key to a specific organization.
-5. Optionally specify in the __Time to live__ field after how many hours the
- API key should expire.
-6. Optionally limit the API key to read-only APIs.
-7. Click the __Create__ button.
-8. Copy the API key ID and Secret, then click the __Close__ button.
-
-{{< security >}}
-The secret is only shown once at creation time.
-You have to store it in a safe place.
-{{< /security >}}
-
-### How to revoke or delete an API key
-
-1. Hover over or click the user icon in the __User Toolbar__ in the top right corner.
-2. Click __My API keys__ in the __My account__ section.
-3. Click an icon in the __Actions__ column:
- - __Counter-clockwise arrow__ icon: Revoke API key
- - __Recycle bin__ icon: Delete API key
-4. Click the __Yes__ button to confirm.
-
-{{% comment %}}
-TODO: Copy to clipboard button
-Access token that should expire after 1 hour unless renewed, might get removed as it's confusing.
-{{% /comment %}}
diff --git a/site/content/3.11/arangograph/notebooks.md b/site/content/3.11/arangograph/notebooks.md
deleted file mode 100644
index b581dc44d8..0000000000
--- a/site/content/3.11/arangograph/notebooks.md
+++ /dev/null
@@ -1,170 +0,0 @@
----
-title: ArangoGraph Notebooks
-menuTitle: Notebooks
-weight: 25
-description: >-
- How to create and manage colocated Jupyter Notebooks within ArangoGraph
----
-{{< info >}}
-This documentation describes the beta version of the Notebooks feature and is
-subject to change. The beta version is free for all.
-{{< /info >}}
-
-The ArangoGraph Notebook is a JupyterLab notebook embedded in the ArangoGraph
-Insights Platform. The notebook integrates seamlessly with the platform,
-automatically connecting to ArangoGraph services, including ArangoDB and the
-ArangoML platform services. This makes it much easier to leverage these
-resources without having to download any data locally or to remember user IDs,
-passwords, and endpoint URLs.
-
-The ArangoGraph Notebook has built-in [ArangoGraph Magic Commands](#arangograph-magic-commands)
-that answer questions like:
-- What ArangoDB database am I connected to at the moment?
-- What data does the ArangoDB instance contain?
-- How can I access certain documents?
-- How do I create a graph?
-
-The ArangoGraph Notebook also pre-installs [python-arango](https://docs.python-arango.com/en/main/)
-and the ArangoML connectors for [PyG](https://github.com/arangoml/pyg-adapter),
-[DGL](https://github.com/arangoml/dgl-adapter), and
-[CuGraph](https://github.com/arangoml/cugraph-adapter), as well as the
-[FastGraphML](https://github.com/arangoml/fastgraphml) library, so you can
-immediately start accessing data in ArangoDB and developing GraphML models
-with your favorite GraphML libraries and GPUs.
-
-## How to create a new notebook
-
-1. Open the deployment in which you want to create the notebook.
-2. Go to the **Data science** section and click the **Create Notebook** button.
-3. Enter a name and optionally a description for your new notebook.
-4. Select a configuration model from the dropdown menu. Click **Save**.
-5. The notebook's phase is set to **Initializing**. Once the phase changes to
- **Running**, the notebook's endpoint is accessible.
-6. Click the **Open notebook** button to access your notebook.
-7. To access your notebook, you need to be signed in to ArangoGraph as a user
- with the `notebook.notebook.execute` permission in your project. Organization
- owners have this permission enabled by default. The `notebook-executor` role,
- which contains this permission, can also be granted to other members of the
- organization via roles. See how to create a
- [role binding](security-and-access-control/_index.md#how-to-view-edit-or-remove-role-bindings-of-a-policy).
-
-{{< info >}}
-Depending on the tier your organization belongs to, different limitations apply:
-- On-Demand and Committed: you can create up to three notebooks per deployment.
-- Free-to-try: you can only create one notebook per deployment.
-{{< /info >}}
-
-{{< info >}}
-Notebooks in the beta version have a fixed disk size of 10 GB.
-{{< /info >}}
-
-## How to edit a notebook
-
-1. Select the notebook that you want to change from the **Notebooks** tab.
-2. Click **Edit notebook**. You can modify its name and description.
-3. To pause a notebook, click the **Pause notebook** button. You can resume it
- at any time. The notebook's phase is updated accordingly.
-
-## How to delete a notebook
-
-1. Select the notebook that you want to remove from the **Notebooks** tab.
-2. Click the **Delete notebook** button.
-
-## Getting Started notebook
-
-To get a better understanding of how to interact with your ArangoDB database
-cluster, use the ArangoGraph Getting Started template.
-The ArangoGraph Notebook automatically connects to the ArangoDB service
-endpoint, so you can immediately start interacting with it.
-
-1. Log in to the notebook you have created by using your deployment's root password.
-2. Select the `GettingStarted.ipynb` template from the file browser.
-
-## ArangoGraph Magic Commands
-
-The following magic commands are available for interacting with your deployment.
-Single-line commands have a `%` prefix and multi-line commands have a `%%` prefix.
-
-**Database Commands**
-
-- `%listDatabases` - lists the databases on the database server.
-- `%whichDatabase` - returns the database name you are connected to.
-- `%createDatabase databaseName` - creates a database.
-- `%selectDatabase databaseName` - selects a database as the current database.
-- `%useDatabase databaseName` - uses a database as the current database;
- alias for `%selectDatabase`.
-- `%getDatabase databaseName` - gets a database. Used for assigning a database,
- e.g. `studentDB` = `%getDatabase student_database`.
-- `%deleteDatabase databaseName` - deletes the database.
-
-**Graph Commands**
-
-- `%listGraphs` - lists the graphs defined in the currently selected database.
-- `%whichGraph` - returns the graph name that is currently selected.
-- `%createGraph graphName` - creates a named graph.
-- `%selectGraph graphName` - selects the graph as the current graph.
-- `%useGraph graphName` - uses the graph as the current graph;
- alias for `%selectGraph`.
-- `%getGraph graphName` - gets the graph for variable assignment,
- e.g. `studentGraph` = `%getGraph student-graph`.
-- `%deleteGraph graphName` - deletes a graph.
-
-**Collection Commands**
-
-- `%listCollections` - lists the collections in the currently selected database.
-- `%whichCollection` - returns the collection name that is currently selected.
-- `%createCollection collectionName` - creates a collection.
-- `%selectCollection collectionName` - selects a collection as the current collection.
-- `%useCollection collectionName` - uses the collection as the current collection;
- alias for `%selectCollection`.
-- `%getCollection collectionName` - gets a collection for variable assignment,
- e.g. `student` = `%getCollection Student`.
-- `%createEdgeCollection` - creates an edge collection.
-- `%createVertexCollection` - creates a vertex collection.
-- `%createEdgeDefinition` - creates an edge definition.
-- `%deleteCollection collectionName` - deletes the collection.
-- `%truncateCollection collectionName` - truncates the collection.
-- `%sampleCollection collectionName` - returns a random document from the collection.
- If no collection is specified, then it uses the selected collection.
-
-**Document Commands**
-
-- `%insertDocument jsonDocument` - inserts the document into the currently selected collection.
-- `%replaceDocument jsonDocument` - replaces the document in the currently selected collection.
-- `%updateDocument jsonDocument` - updates the document in the currently selected collection.
-- `%deleteDocument jsonDocument` - deletes the document from the currently selected collection.
-- `%%importBulk jsonDocumentArray` - imports an array of documents into the currently selected collection.
-
-**AQL Commands**
-
-- `%aql single-line_aql_query` - executes a single-line AQL query.
-- `%%aqlm multi-line_aql_query` - executes a multi-line AQL query.
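-
-**Example**
-
-A minimal session that combines several of the commands above might look like
-this (the collection name and document are just examples):
-
-```
-%createCollection Students
-%selectCollection Students
-%insertDocument {"name": "Alice", "age": 23}
-%aql FOR s IN Students RETURN s
-```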
-
-**Variables**
-
-- `_endpoint` - the endpoint (URL) of the ArangoDB Server.
-- `_system` - the system database used for creating, listing, and deleting databases.
-- `_db` - the selected (current) database. To select a different database, use `%selectDatabase`.
-- `_graph` - the selected (current) graph. To select a different graph, use `%selectGraph`.
-- `_collection` - the selected (current) collection. To select a different collection, use `%selectCollection`.
-- `_user` - the current user.
-
-You can use these variables directly, for example, `_db.collections()` to list
-collections or `_system.databases` to list databases.
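-
-For example, the following notebook cell prints the collection names of the
-current database via the pre-installed python-arango driver (a sketch that
-assumes the default environment is unchanged):
-
-```
-cols = _db.collections()
-print([c["name"] for c in cols])
-```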
-
-You can also create your own variable assignments, such as:
-
-- `schoolDB` = `%getDatabase schoolDB`
-- `school_graph` = `%getGraph school_graph`
-- `student` = `%getCollection Student`
-
-**Reset environment**
-
-If any of the above variables has been unintentionally changed, you can revert
-all of them to their default state with `reset_environment()`.
diff --git a/site/content/3.11/arangograph/oasisctl/_index.md b/site/content/3.11/arangograph/oasisctl/_index.md
deleted file mode 100644
index 9d22a68e31..0000000000
--- a/site/content/3.11/arangograph/oasisctl/_index.md
+++ /dev/null
@@ -1,18 +0,0 @@
----
-title: Oasisctl
-menuTitle: Oasisctl
-weight: 65
-description: >-
- Command-line client tool for managing the ArangoGraph Insights Platform
----
-Oasisctl is a command-line tool for using the [ArangoGraph API](../api/_index.md).
-It makes it easy to integrate ArangoGraph into all kinds of (bash) scripts.
-It is also a good example of how to use the API.
-
-See [Getting started with Oasisctl](../api/get-started.md) for a
-tutorial.
-
-Oasisctl is available for Linux, macOS and Windows.
-Download and source code:
-
-[github.com/arangodb-managed/oasisctl](https://github.com/arangodb-managed/oasisctl/)
diff --git a/site/content/3.11/arangograph/oasisctl/accept/_index.md b/site/content/3.11/arangograph/oasisctl/accept/_index.md
deleted file mode 100644
index e9c0e05a01..0000000000
--- a/site/content/3.11/arangograph/oasisctl/accept/_index.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl Accept
-menuTitle: Accept
-weight: 2
----
-## oasisctl accept
-
-Accept invites
-
-```
-oasisctl accept [flags]
-```
-
-## Options
-```
- -h, --help help for accept
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl](../options.md) - ArangoGraph Insights Platform
-* [oasisctl accept organization](accept-organization.md) - Accept organization related invites
-
diff --git a/site/content/3.11/arangograph/oasisctl/accept/accept-organization-invite.md b/site/content/3.11/arangograph/oasisctl/accept/accept-organization-invite.md
deleted file mode 100644
index 3f52fbb67b..0000000000
--- a/site/content/3.11/arangograph/oasisctl/accept/accept-organization-invite.md
+++ /dev/null
@@ -1,30 +0,0 @@
----
-title: Oasisctl Accept Organization Invite
-menuTitle: Accept Organization Invite
-weight: 2
----
-## oasisctl accept organization invite
-
-Accept an organization invite the authenticated user has access to
-
-```
-oasisctl accept organization invite [flags]
-```
-
-## Options
-```
- -h, --help help for invite
- -i, --invite-id string Identifier of the organization invite
- -o, --organization-id string Identifier of the organization
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl accept organization](accept-organization.md) - Accept organization related invites
-
diff --git a/site/content/3.11/arangograph/oasisctl/accept/accept-organization.md b/site/content/3.11/arangograph/oasisctl/accept/accept-organization.md
deleted file mode 100644
index f4d5310a16..0000000000
--- a/site/content/3.11/arangograph/oasisctl/accept/accept-organization.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl Accept Organization
-menuTitle: Accept Organization
-weight: 1
----
-## oasisctl accept organization
-
-Accept organization related invites
-
-```
-oasisctl accept organization [flags]
-```
-
-## Options
-```
- -h, --help help for organization
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl accept](_index.md) - Accept invites
-* [oasisctl accept organization invite](accept-organization-invite.md) - Accept an organization invite the authenticated user has access to
-
diff --git a/site/content/3.11/arangograph/oasisctl/add/_index.md b/site/content/3.11/arangograph/oasisctl/add/_index.md
deleted file mode 100644
index c44318f848..0000000000
--- a/site/content/3.11/arangograph/oasisctl/add/_index.md
+++ /dev/null
@@ -1,30 +0,0 @@
----
-title: Oasisctl Add
-menuTitle: Add
-weight: 3
----
-## oasisctl add
-
-Add resources
-
-```
-oasisctl add [flags]
-```
-
-## Options
-```
- -h, --help help for add
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl](../options.md) - ArangoGraph Insights Platform
-* [oasisctl add auditlog](add-auditlog.md) - Add auditlog resources
-* [oasisctl add group](add-group.md) - Add group resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/add/add-auditlog-destination.md b/site/content/3.11/arangograph/oasisctl/add/add-auditlog-destination.md
deleted file mode 100644
index 378e456833..0000000000
--- a/site/content/3.11/arangograph/oasisctl/add/add-auditlog-destination.md
+++ /dev/null
@@ -1,37 +0,0 @@
----
-title: Oasisctl Add Audit Log Destination
-menuTitle: Add Audit Log Destination
-weight: 2
----
-## oasisctl add auditlog destination
-
-Add a destination to an auditlog.
-
-```
-oasisctl add auditlog destination [flags]
-```
-
-## Options
-```
- -i, --auditlog-id string Identifier of the auditlog
- --destination-excluded-topics strings Do not send audit events with these topics to this destination.
- --destination-https-client-certificate-pem string PEM encoded public key of the client certificate.
- --destination-https-client-key-pem string PEM encoded private key of the client certificate.
- --destination-https-headers strings A key=value formatted list of headers for the request. Repeating headers are allowed.
- --destination-https-trusted-server-ca-pem string PEM encoded public key of the CA used to sign the server TLS certificate. If empty, a well known CA is expected.
- --destination-https-url string URL of the server to POST to. Scheme must be HTTPS.
- --destination-type string Type of destination. Possible values are: "cloud", "https-post"
- -h, --help help for destination
- -o, --organization-id string Identifier of the organization
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl add auditlog](add-auditlog.md) - Add auditlog resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/add/add-auditlog.md b/site/content/3.11/arangograph/oasisctl/add/add-auditlog.md
deleted file mode 100644
index dca21fda97..0000000000
--- a/site/content/3.11/arangograph/oasisctl/add/add-auditlog.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl Add Audit Log
-menuTitle: Add Audit Log
-weight: 1
----
-## oasisctl add auditlog
-
-Add auditlog resources
-
-```
-oasisctl add auditlog [flags]
-```
-
-## Options
-```
- -h, --help help for auditlog
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl add](_index.md) - Add resources
-* [oasisctl add auditlog destination](add-auditlog-destination.md) - Add a destination to an auditlog.
-
diff --git a/site/content/3.11/arangograph/oasisctl/add/add-group-members.md b/site/content/3.11/arangograph/oasisctl/add/add-group-members.md
deleted file mode 100644
index db2677d276..0000000000
--- a/site/content/3.11/arangograph/oasisctl/add/add-group-members.md
+++ /dev/null
@@ -1,31 +0,0 @@
----
-title: Oasisctl Add Group Members
-menuTitle: Add Group Members
-weight: 4
----
-## oasisctl add group members
-
-Add members to group
-
-```
-oasisctl add group members [flags]
-```
-
-## Options
-```
- -g, --group-id string Identifier of the group to add members to
- -h, --help help for members
- -o, --organization-id string Identifier of the organization
- -u, --user-emails strings A comma separated list of user email addresses
-```
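-
-For example, to add two users to a group in one call (identifiers and email
-addresses are placeholders):
-
-```
-oasisctl add group members \
-  --group-id <group-id> \
-  --organization-id <organization-id> \
-  --user-emails alice@example.com,bob@example.com
-```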
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl add group](add-group.md) - Add group resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/add/add-group.md b/site/content/3.11/arangograph/oasisctl/add/add-group.md
deleted file mode 100644
index a51b0b7102..0000000000
--- a/site/content/3.11/arangograph/oasisctl/add/add-group.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl Add Group
-menuTitle: Add Group
-weight: 3
----
-## oasisctl add group
-
-Add group resources
-
-```
-oasisctl add group [flags]
-```
-
-## Options
-```
- -h, --help help for group
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl add](_index.md) - Add resources
-* [oasisctl add group members](add-group-members.md) - Add members to group
-
diff --git a/site/content/3.11/arangograph/oasisctl/auditlog/_index.md b/site/content/3.11/arangograph/oasisctl/auditlog/_index.md
deleted file mode 100644
index fd71b3b653..0000000000
--- a/site/content/3.11/arangograph/oasisctl/auditlog/_index.md
+++ /dev/null
@@ -1,31 +0,0 @@
----
-title: Oasisctl Audit Log
-menuTitle: Audit Log
-weight: 4
----
-## oasisctl auditlog
-
-AuditLog resources
-
-```
-oasisctl auditlog [flags]
-```
-
-## Options
-```
- -h, --help help for auditlog
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl](../options.md) - ArangoGraph Insights Platform
-* [oasisctl auditlog attach](auditlog-attach.md) - Attach a project to an audit log
-* [oasisctl auditlog detach](auditlog-detach.md) - Detach a project from an auditlog
-* [oasisctl auditlog get](auditlog-get.md) - Audit log get resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/auditlog/auditlog-attach.md b/site/content/3.11/arangograph/oasisctl/auditlog/auditlog-attach.md
deleted file mode 100644
index 7ffcd50360..0000000000
--- a/site/content/3.11/arangograph/oasisctl/auditlog/auditlog-attach.md
+++ /dev/null
@@ -1,31 +0,0 @@
----
-title: Oasisctl Audit Log Attach
-menuTitle: Audit Log Attach
-weight: 1
----
-## oasisctl auditlog attach
-
-Attach a project to an audit log
-
-```
-oasisctl auditlog attach [flags]
-```
-
-## Options
-```
- -i, --auditlog-id string Identifier of the auditlog to attach to.
- -h, --help help for attach
- -o, --organization-id string Identifier of the organization
- -p, --project-id string Identifier of the project
-```
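-
-For example (identifiers are placeholders):
-
-```
-oasisctl auditlog attach \
-  --auditlog-id <auditlog-id> \
-  --organization-id <organization-id> \
-  --project-id <project-id>
-```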
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl auditlog](_index.md) - AuditLog resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/auditlog/auditlog-detach.md b/site/content/3.11/arangograph/oasisctl/auditlog/auditlog-detach.md
deleted file mode 100644
index 4043614c32..0000000000
--- a/site/content/3.11/arangograph/oasisctl/auditlog/auditlog-detach.md
+++ /dev/null
@@ -1,30 +0,0 @@
----
-title: Oasisctl Audit Log Detach
-menuTitle: Audit Log Detach
-weight: 2
----
-## oasisctl auditlog detach
-
-Detach a project from an auditlog
-
-```
-oasisctl auditlog detach [flags]
-```
-
-## Options
-```
- -h, --help help for detach
- -o, --organization-id string Identifier of the organization
- -p, --project-id string Identifier of the project
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl auditlog](_index.md) - AuditLog resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/auditlog/auditlog-get-attached-project.md b/site/content/3.11/arangograph/oasisctl/auditlog/auditlog-get-attached-project.md
deleted file mode 100644
index b4c2a69666..0000000000
--- a/site/content/3.11/arangograph/oasisctl/auditlog/auditlog-get-attached-project.md
+++ /dev/null
@@ -1,30 +0,0 @@
----
-title: Oasisctl Audit Log Get Attached Project
-menuTitle: Audit Log Get Attached Project
-weight: 5
----
-## oasisctl auditlog get attached project
-
-Get the audit log attached to a project
-
-```
-oasisctl auditlog get attached project [flags]
-```
-
-## Options
-```
- -h, --help help for project
- -o, --organization-id string Identifier of the organization
- -p, --project-id string Identifier of the project
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl auditlog get attached](auditlog-get-attached.md) - Audit get attached resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/auditlog/auditlog-get-attached.md b/site/content/3.11/arangograph/oasisctl/auditlog/auditlog-get-attached.md
deleted file mode 100644
index 004400da64..0000000000
--- a/site/content/3.11/arangograph/oasisctl/auditlog/auditlog-get-attached.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl Audit Log Get Attached
-menuTitle: Audit Log Get Attached
-weight: 4
----
-## oasisctl auditlog get attached
-
-Audit get attached resources
-
-```
-oasisctl auditlog get attached [flags]
-```
-
-## Options
-```
- -h, --help help for attached
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl auditlog get](auditlog-get.md) - Audit log get resources
-* [oasisctl auditlog get attached project](auditlog-get-attached-project.md) - Get the audit log attached to a project
-
diff --git a/site/content/3.11/arangograph/oasisctl/auditlog/auditlog-get.md b/site/content/3.11/arangograph/oasisctl/auditlog/auditlog-get.md
deleted file mode 100644
index d3b170c666..0000000000
--- a/site/content/3.11/arangograph/oasisctl/auditlog/auditlog-get.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl Audit Log Get
-menuTitle: Audit Log Get
-weight: 3
----
-## oasisctl auditlog get
-
-Audit log get resources
-
-```
-oasisctl auditlog get [flags]
-```
-
-## Options
-```
- -h, --help help for get
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl auditlog](_index.md) - AuditLog resources
-* [oasisctl auditlog get attached](auditlog-get-attached.md) - Audit get attached resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/backup/_index.md b/site/content/3.11/arangograph/oasisctl/backup/_index.md
deleted file mode 100644
index 2a19e74e37..0000000000
--- a/site/content/3.11/arangograph/oasisctl/backup/_index.md
+++ /dev/null
@@ -1,30 +0,0 @@
----
-title: Oasisctl Backup
-menuTitle: Backup
-weight: 5
----
-## oasisctl backup
-
-Backup commands
-
-```
-oasisctl backup [flags]
-```
-
-## Options
-```
- -h, --help help for backup
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl](../options.md) - ArangoGraph Insights Platform
-* [oasisctl backup copy](backup-copy.md) - Copy a backup from source backup to given region
-* [oasisctl backup download](backup-download.md) - Download a backup
-
diff --git a/site/content/3.11/arangograph/oasisctl/backup/backup-copy.md b/site/content/3.11/arangograph/oasisctl/backup/backup-copy.md
deleted file mode 100644
index b492b5d92b..0000000000
--- a/site/content/3.11/arangograph/oasisctl/backup/backup-copy.md
+++ /dev/null
@@ -1,30 +0,0 @@
----
-title: Oasisctl Backup Copy
-menuTitle: Backup Copy
-weight: 1
----
-## oasisctl backup copy
-
-Copy a backup from source backup to given region
-
-```
-oasisctl backup copy [flags]
-```
-
-## Options
-```
- -h, --help help for copy
- --region-id string Identifier of the region where the new backup is to be created
- --source-backup-id string Identifier of the source backup
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl backup](_index.md) - Backup commands
-
diff --git a/site/content/3.11/arangograph/oasisctl/backup/backup-download.md b/site/content/3.11/arangograph/oasisctl/backup/backup-download.md
deleted file mode 100644
index b458b57b48..0000000000
--- a/site/content/3.11/arangograph/oasisctl/backup/backup-download.md
+++ /dev/null
@@ -1,32 +0,0 @@
----
-title: Oasisctl Backup Download
-menuTitle: Backup Download
-weight: 2
----
-## oasisctl backup download
-
-Download a backup
-
-## Synopsis
-Download a backup from the cloud storage to the local deployment disks, so it can be restored.
-
-```
-oasisctl backup download [flags]
-```
-
-## Options
-```
- -h, --help help for download
- -i, --id string Identifier of the backup
-```
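-
-For example (the identifier is a placeholder):
-
-```
-oasisctl backup download --id <backup-id>
-```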
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl backup](_index.md) - Backup commands
-
diff --git a/site/content/3.11/arangograph/oasisctl/clone/_index.md b/site/content/3.11/arangograph/oasisctl/clone/_index.md
deleted file mode 100644
index 7edaf4178e..0000000000
--- a/site/content/3.11/arangograph/oasisctl/clone/_index.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl Clone
-menuTitle: Clone
-weight: 6
----
-## oasisctl clone
-
-Clone resources
-
-```
-oasisctl clone [flags]
-```
-
-## Options
-```
- -h, --help help for clone
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl](../options.md) - ArangoGraph Insights Platform
-* [oasisctl clone deployment](clone-deployment.md) - Clone deployment resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/clone/clone-deployment-backup.md b/site/content/3.11/arangograph/oasisctl/clone/clone-deployment-backup.md
deleted file mode 100644
index 5f98d22b91..0000000000
--- a/site/content/3.11/arangograph/oasisctl/clone/clone-deployment-backup.md
+++ /dev/null
@@ -1,33 +0,0 @@
----
-title: Oasisctl Clone Deployment Backup
-menuTitle: Clone Deployment Backup
-weight: 2
----
-## oasisctl clone deployment backup
-
-Clone a deployment from a backup.
-
-```
-oasisctl clone deployment backup [flags]
-```
-
-## Options
-```
- --accept Accept the current terms and conditions.
- -b, --backup-id string Clone a deployment from a backup using the backup's ID.
- -h, --help help for backup
- -o, --organization-id string Identifier of the organization to create the clone in
- -p, --project-id string An optional identifier of the project to create the clone in
- -r, --region-id string An optionally defined region in which the new deployment should be created in.
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl clone deployment](clone-deployment.md) - Clone deployment resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/clone/clone-deployment.md b/site/content/3.11/arangograph/oasisctl/clone/clone-deployment.md
deleted file mode 100644
index 0b76d1fec8..0000000000
--- a/site/content/3.11/arangograph/oasisctl/clone/clone-deployment.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl Clone Deployment
-menuTitle: Clone Deployment
-weight: 1
----
-## oasisctl clone deployment
-
-Clone deployment resources
-
-```
-oasisctl clone deployment [flags]
-```
-
-## Options
-```
- -h, --help help for deployment
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl clone](_index.md) - Clone resources
-* [oasisctl clone deployment backup](clone-deployment-backup.md) - Clone a deployment from a backup.
-
diff --git a/site/content/3.11/arangograph/oasisctl/completion.md b/site/content/3.11/arangograph/oasisctl/completion.md
deleted file mode 100644
index 9cd58cd4f6..0000000000
--- a/site/content/3.11/arangograph/oasisctl/completion.md
+++ /dev/null
@@ -1,39 +0,0 @@
----
-title: Oasisctl Completion
-menuTitle: Completion
-weight: 7
----
-## oasisctl completion
-
-Generates bash completion scripts
-
-## Synopsis
-To load completion, run:
-
- . <(oasisctl completion [bash|fish|powershell|zsh])
-
-To configure your bash shell to load completions for each session, add to your bashrc:
-
- # ~/.bashrc or ~/.profile
- . <(oasisctl completion bash)
-
-
-```
-oasisctl completion [flags]
-```
-
-## Options
-```
- -h, --help help for completion
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl](options.md) - ArangoGraph Insights Platform
-
diff --git a/site/content/3.11/arangograph/oasisctl/create/_index.md b/site/content/3.11/arangograph/oasisctl/create/_index.md
deleted file mode 100644
index 87ef865918..0000000000
--- a/site/content/3.11/arangograph/oasisctl/create/_index.md
+++ /dev/null
@@ -1,42 +0,0 @@
----
-title: Oasisctl Create
-menuTitle: Create
-weight: 8
----
-## oasisctl create
-
-Create resources
-
-```
-oasisctl create [flags]
-```
-
-## Options
-```
- -h, --help help for create
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl](../options.md) - ArangoGraph Insights Platform
-* [oasisctl create apikey](create-apikey.md) - Create a new API key
-* [oasisctl create auditlog](create-auditlog.md) - Create an auditlog
-* [oasisctl create backup](create-backup.md) - Create backup ...
-* [oasisctl create cacertificate](create-cacertificate.md) - Create a new CA certificate
-* [oasisctl create deployment](create-deployment.md) - Create a new deployment
-* [oasisctl create example](create-example.md) - Create example ...
-* [oasisctl create group](create-group.md) - Create a new group
-* [oasisctl create ipallowlist](create-ipallowlist.md) - Create a new IP allowlist
-* [oasisctl create metrics](create-metrics.md) - Create metrics resources
-* [oasisctl create notebook](create-notebook.md) - Create a new notebook
-* [oasisctl create organization](create-organization.md) - Create a new organization
-* [oasisctl create private](create-private.md) - Create private resources
-* [oasisctl create project](create-project.md) - Create a new project
-* [oasisctl create role](create-role.md) - Create a new role
-
diff --git a/site/content/3.11/arangograph/oasisctl/create/create-apikey.md b/site/content/3.11/arangograph/oasisctl/create/create-apikey.md
deleted file mode 100644
index 1177d5cc67..0000000000
--- a/site/content/3.11/arangograph/oasisctl/create/create-apikey.md
+++ /dev/null
@@ -1,30 +0,0 @@
----
-title: Oasisctl Create API Key
-menuTitle: Create API Key
-weight: 1
----
-## oasisctl create apikey
-
-Create a new API key
-
-```
-oasisctl create apikey [flags]
-```
-
-## Options
-```
- -h, --help help for apikey
- -o, --organization-id string If set, the newly created API key will grant access to this organization only
- --readonly If set, the newly created API key will grant readonly access only
-```
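-
-For example, to create a key that grants read-only access to a single
-organization (the identifier is a placeholder):
-
-```
-oasisctl create apikey \
-  --organization-id <organization-id> \
-  --readonly
-```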
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl create](_index.md) - Create resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/create/create-auditlog.md b/site/content/3.11/arangograph/oasisctl/create/create-auditlog.md
deleted file mode 100644
index 5863da66e8..0000000000
--- a/site/content/3.11/arangograph/oasisctl/create/create-auditlog.md
+++ /dev/null
@@ -1,32 +0,0 @@
----
-title: Oasisctl Create Audit Log
-menuTitle: Create Audit Log
-weight: 2
----
-## oasisctl create auditlog
-
-Create an auditlog
-
-```
-oasisctl create auditlog [flags]
-```
-
-## Options
-```
- --default If set, this AuditLog is the default for the organization.
- --description string Description of the audit log.
- -h, --help help for auditlog
- --name string Name of the audit log.
- -o, --organization-id string Identifier of the organization
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl create](_index.md) - Create resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/create/create-backup-policy.md b/site/content/3.11/arangograph/oasisctl/create/create-backup-policy.md
deleted file mode 100644
index 99f899b951..0000000000
--- a/site/content/3.11/arangograph/oasisctl/create/create-backup-policy.md
+++ /dev/null
@@ -1,50 +0,0 @@
----
-title: Oasisctl Create Backup Policy
-menuTitle: Create Backup Policy
-weight: 4
----
-## oasisctl create backup policy
-
-Create a new backup policy
-
-```
-oasisctl create backup policy [flags]
-```
-
-## Options
-```
- --additional-region-ids strings Add backup to the specified addition regions
- --day-of-the-month int32 Run the backup on the specified day of the month (1-31) (default 1)
- --deployment-id string ID of the deployment
- --description string Description of the backup policy
- --email-notification string Email notification setting (Never|FailureOnly|Always)
- --every-interval-hours int32 Schedule should run with an interval of the specified hours (1-23)
- --friday If set, a backup will be created on Fridays.
- -h, --help help for policy
- --hours int32 Hours part of the time of day (0-23)
- --minutes int32 Minutes part of the time of day (0-59)
- --minutes-offset int32 Schedule should run with specific minutes offset (0-59)
- --monday If set, a backup will be created on Mondays
- --name string Name of the deployment
- --paused The policy is paused
- --retention-period int Backups created by this policy will be automatically deleted after the specified retention period. A value of 0 means that backup will never be deleted.
- --saturday If set, a backup will be created on Saturdays
- --schedule-type string Schedule of the policy (Hourly|Daily|Monthly)
- --sunday If set, a backup will be created on Sundays
- --thursday If set, a backup will be created on Thursdays
- --time-zone string The time-zone this time of day applies to (empty means UTC). Names MUST be exactly as defined in RFC-822. (default "UTC")
- --tuesday If set, a backup will be created on Tuesdays
- --upload The backup should be uploaded
- --wednesday If set, a backup will be created on Wednesdays
-```
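-
-For example, the following sketch creates a policy that backs up a deployment
-on weekdays at 02:00 UTC and uploads the result (identifiers are placeholders):
-
-```
-oasisctl create backup policy \
-  --deployment-id <deployment-id> \
-  --name nightly-weekday-backup \
-  --schedule-type Daily \
-  --hours 2 --minutes 0 \
-  --monday --tuesday --wednesday --thursday --friday \
-  --upload
-```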
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl create backup](create-backup.md) - Create backup ...
-
diff --git a/site/content/3.11/arangograph/oasisctl/create/create-backup.md b/site/content/3.11/arangograph/oasisctl/create/create-backup.md
deleted file mode 100644
index 8ca544206c..0000000000
--- a/site/content/3.11/arangograph/oasisctl/create/create-backup.md
+++ /dev/null
@@ -1,34 +0,0 @@
----
-title: Oasisctl Create Backup
-menuTitle: Create Backup
-weight: 3
----
-## oasisctl create backup
-
-Create backup ...
-
-```
-oasisctl create backup [flags]
-```
-
-## Options
-```
- --auto-deleted-at int Time (h) until auto delete of the backup
- --deployment-id string ID of the deployment
- --description string Description of the backup
- -h, --help help for backup
- --name string Name of the deployment
- --upload The backup should be uploaded
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl create](_index.md) - Create resources
-* [oasisctl create backup policy](create-backup-policy.md) - Create a new backup policy
-
diff --git a/site/content/3.11/arangograph/oasisctl/create/create-cacertificate.md b/site/content/3.11/arangograph/oasisctl/create/create-cacertificate.md
deleted file mode 100644
index b27b6e7db8..0000000000
--- a/site/content/3.11/arangograph/oasisctl/create/create-cacertificate.md
+++ /dev/null
@@ -1,33 +0,0 @@
----
-title: Oasisctl Create CA Certificate
-menuTitle: Create CA Certificate
-weight: 5
----
-## oasisctl create cacertificate
-
-Create a new CA certificate
-
-```
-oasisctl create cacertificate [flags]
-```
-
-## Options
-```
- --description string Description of the CA certificate
- -h, --help help for cacertificate
- --lifetime duration Lifetime of the CA certificate.
- --name string Name of the CA certificate
- -o, --organization-id string Identifier of the organization to create the CA certificate in
- -p, --project-id string Identifier of the project to create the CA certificate in
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl create](_index.md) - Create resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/create/create-deployment.md b/site/content/3.11/arangograph/oasisctl/create/create-deployment.md
deleted file mode 100644
index c9f633fd99..0000000000
--- a/site/content/3.11/arangograph/oasisctl/create/create-deployment.md
+++ /dev/null
@@ -1,54 +0,0 @@
----
-title: Oasisctl Create Deployment
-menuTitle: Create Deployment
-weight: 6
----
-## oasisctl create deployment
-
-Create a new deployment
-
-```
-oasisctl create deployment [flags]
-```
-
-## Options
-```
- --accept Accept the current terms and conditions.
- -c, --cacertificate-id string Identifier of the CA certificate to use for the deployment
- --coordinator-memory-size int32 Set memory size of Coordinators for flexible deployments (GiB) (default 4)
- --coordinators int32 Set number of Coordinators for flexible deployments (default 3)
- --custom-image string Set a custom image to use for the deployment. Only available for selected customers.
- --dbserver-disk-size int32 Set disk size of DB-Servers for flexible deployments (GiB) (default 32)
- --dbserver-memory-size int32 Set memory size of DB-Servers for flexible deployments (GiB) (default 4)
- --dbservers int32 Set number of DB-Servers for flexible deployments (default 3)
- --deployment-profile-id string Set the Deployment Profile to use for this deployment.
- --description string Description of the deployment
- --disable-foxx-authentication Disable authentication of requests to Foxx application.
- --disk-performance-id string Set the disk performance to use for this deployment.
- --drop-vst-support Drop VST protocol support to improve resilience.
- -h, --help help for deployment
- -i, --ipallowlist-id string Identifier of the IP allowlist to use for the deployment
- --is-platform-authentication-enabled Enable platform authentication for deployment.
- --max-node-disk-size int32 Set maximum disk size for nodes for autoscaler (GiB)
- --model string Set model of the deployment (default "oneshard")
- --name string Name of the deployment
- --node-count int32 Set the number of desired nodes (default 3)
- --node-disk-size int32 Set disk size for nodes (GiB)
- --node-size-id string Set the node size to use for this deployment
- --notification-email-address strings Set email address(-es) that will be used for notifications related to this deployment.
- -o, --organization-id string Identifier of the organization to create the deployment in
- -p, --project-id string Identifier of the project to create the deployment in
- -r, --region-id string Identifier of the region to create the deployment in
- --version string Version of ArangoDB to use for the deployment (default "default")
-```
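-
-For example, a minimal invocation that relies on the default `oneshard` model
-(identifiers are placeholders):
-
-```
-oasisctl create deployment \
-  --name my-deployment \
-  --organization-id <organization-id> \
-  --project-id <project-id> \
-  --region-id <region-id> \
-  --accept
-```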
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl create](_index.md) - Create resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/create/create-example-installation.md b/site/content/3.11/arangograph/oasisctl/create/create-example-installation.md
deleted file mode 100644
index 121d13dccd..0000000000
--- a/site/content/3.11/arangograph/oasisctl/create/create-example-installation.md
+++ /dev/null
@@ -1,32 +0,0 @@
----
-title: Oasisctl Create Example Installation
-menuTitle: Create Example Installation
-weight: 8
----
-## oasisctl create example installation
-
-Create a new example dataset installation
-
-```
-oasisctl create example installation [flags]
-```
-
-## Options
-```
- -d, --deployment-id string Identifier of the deployment to list installations for
- -e, --example-dataset-id string ID of the example dataset
- -h, --help help for installation
- -o, --organization-id string Identifier of the organization
- -p, --project-id string Identifier of the project
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl create example](create-example.md) - Create example ...
-
diff --git a/site/content/3.11/arangograph/oasisctl/create/create-example.md b/site/content/3.11/arangograph/oasisctl/create/create-example.md
deleted file mode 100644
index 5b1a50cf0e..0000000000
--- a/site/content/3.11/arangograph/oasisctl/create/create-example.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl Create Example
-menuTitle: Create Example
-weight: 7
----
-## oasisctl create example
-
-Create example ...
-
-```
-oasisctl create example [flags]
-```
-
-## Options
-```
- -h, --help help for example
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl create](_index.md) - Create resources
-* [oasisctl create example installation](create-example-installation.md) - Create a new example dataset installation
-
diff --git a/site/content/3.11/arangograph/oasisctl/create/create-group.md b/site/content/3.11/arangograph/oasisctl/create/create-group.md
deleted file mode 100644
index d28e7ec7d2..0000000000
--- a/site/content/3.11/arangograph/oasisctl/create/create-group.md
+++ /dev/null
@@ -1,31 +0,0 @@
----
-title: Oasisctl Create Group
-menuTitle: Create Group
-weight: 9
----
-## oasisctl create group
-
-Create a new group
-
-```
-oasisctl create group [flags]
-```
-
-## Options
-```
- --description string Description of the group
- -h, --help help for group
- --name string Name of the group
- -o, --organization-id string Identifier of the organization to create the group in
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl create](_index.md) - Create resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/create/create-ipallowlist.md b/site/content/3.11/arangograph/oasisctl/create/create-ipallowlist.md
deleted file mode 100644
index 07f4308515..0000000000
--- a/site/content/3.11/arangograph/oasisctl/create/create-ipallowlist.md
+++ /dev/null
@@ -1,34 +0,0 @@
----
-title: Oasisctl Create IP Allowlist
-menuTitle: Create IP Allowlist
-weight: 10
----
-## oasisctl create ipallowlist
-
-Create a new IP allowlist
-
-```
-oasisctl create ipallowlist [flags]
-```
-
-## Options
-```
- --cidr-range strings List of CIDR ranges from which deployments are accessible
- --description string Description of the IP allowlist
- -h, --help help for ipallowlist
- --name string Name of the IP allowlist
- -o, --organization-id string Identifier of the organization to create the IP allowlist in
- -p, --project-id string Identifier of the project to create the IP allowlist in
- --remote-inspection-allowed If set, remote connectivity checks by the Oasis platform are allowed
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl create](_index.md) - Create resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/create/create-metrics-token.md b/site/content/3.11/arangograph/oasisctl/create/create-metrics-token.md
deleted file mode 100644
index a5e0e9a9dd..0000000000
--- a/site/content/3.11/arangograph/oasisctl/create/create-metrics-token.md
+++ /dev/null
@@ -1,34 +0,0 @@
----
-title: Oasisctl Create Metrics Token
-menuTitle: Create Metrics Token
-weight: 12
----
-## oasisctl create metrics token
-
-Create a new metrics access token
-
-```
-oasisctl create metrics token [flags]
-```
-
-## Options
-```
- -d, --deployment-id string Identifier of the deployment to create the token for
- --description string Description of the token
- -h, --help help for token
- --lifetime duration Lifetime of the token.
- --name string Name of the token
- -o, --organization-id string Identifier of the organization to create the token in
- -p, --project-id string Identifier of the project to create the token in
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl create metrics](create-metrics.md) - Create metrics resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/create/create-metrics.md b/site/content/3.11/arangograph/oasisctl/create/create-metrics.md
deleted file mode 100644
index d504981b04..0000000000
--- a/site/content/3.11/arangograph/oasisctl/create/create-metrics.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl Create Metrics
-menuTitle: Create Metrics
-weight: 11
----
-## oasisctl create metrics
-
-Create metrics resources
-
-```
-oasisctl create metrics [flags]
-```
-
-## Options
-```
- -h, --help help for metrics
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl create](_index.md) - Create resources
-* [oasisctl create metrics token](create-metrics-token.md) - Create a new metrics access token
-
diff --git a/site/content/3.11/arangograph/oasisctl/create/create-notebook.md b/site/content/3.11/arangograph/oasisctl/create/create-notebook.md
deleted file mode 100644
index 8e1f2dcd53..0000000000
--- a/site/content/3.11/arangograph/oasisctl/create/create-notebook.md
+++ /dev/null
@@ -1,35 +0,0 @@
----
-title: Oasisctl Create Notebook
-menuTitle: Create Notebook
-weight: 13
----
-## oasisctl create notebook
-
-Create a new notebook
-
-```
-oasisctl create notebook [flags]
-```
-
-## Options
-```
- -d, --deployment-id string Identifier of the deployment that the notebook has to run next to
- --description string Description of the notebook
- -s, --disk-size int32 Disk size in GiB that has to be attached to given notebook
- -h, --help help for notebook
- -n, --name string Name of the notebook
- -m, --notebook-model string Identifier of the notebook model that the notebook has to use
- -o, --organization-id string Identifier of the organization to create the notebook in
- -p, --project-id string Identifier of the project to create the notebook in
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl create](_index.md) - Create resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/create/create-organization-invite.md b/site/content/3.11/arangograph/oasisctl/create/create-organization-invite.md
deleted file mode 100644
index 3fbe04a1fe..0000000000
--- a/site/content/3.11/arangograph/oasisctl/create/create-organization-invite.md
+++ /dev/null
@@ -1,30 +0,0 @@
----
-title: Oasisctl Create Organization Invite
-menuTitle: Create Organization Invite
-weight: 15
----
-## oasisctl create organization invite
-
-Create a new invite to an organization
-
-```
-oasisctl create organization invite [flags]
-```
-
-## Options
-```
- --email string Email address of the person to invite
- -h, --help help for invite
- -o, --organization-id string Identifier of the organization to create the invite in
-```
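-
-For example, to invite a user by email (the identifier is a placeholder):
-
-```
-oasisctl create organization invite \
-  --email colleague@example.com \
-  --organization-id <organization-id>
-```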
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl create organization](create-organization.md) - Create a new organization
-
diff --git a/site/content/3.11/arangograph/oasisctl/create/create-organization.md b/site/content/3.11/arangograph/oasisctl/create/create-organization.md
deleted file mode 100644
index 8ca1b9065b..0000000000
--- a/site/content/3.11/arangograph/oasisctl/create/create-organization.md
+++ /dev/null
@@ -1,31 +0,0 @@
----
-title: Oasisctl Create Organization
-menuTitle: Create Organization
-weight: 14
----
-## oasisctl create organization
-
-Create a new organization
-
-```
-oasisctl create organization [flags]
-```
-
-## Options
-```
- --description string Description of the organization
- -h, --help help for organization
- --name string Name of the organization
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl create](_index.md) - Create resources
-* [oasisctl create organization invite](create-organization-invite.md) - Create a new invite to an organization
-
diff --git a/site/content/3.11/arangograph/oasisctl/create/create-private-endpoint-service.md b/site/content/3.11/arangograph/oasisctl/create/create-private-endpoint-service.md
deleted file mode 100644
index 01999a99ce..0000000000
--- a/site/content/3.11/arangograph/oasisctl/create/create-private-endpoint-service.md
+++ /dev/null
@@ -1,38 +0,0 @@
----
-title: Oasisctl Create Private Endpoint Service
-menuTitle: Create Private Endpoint Service
-weight: 18
----
-## oasisctl create private endpoint service
-
-Create a Private Endpoint Service attached to an existing deployment
-
-```
-oasisctl create private endpoint service [flags]
-```
-
-## Options
-```
- --alternate-dns-name strings DNS names used for the deployment in the private network
- --aws-principal strings List of AWS Principals from which a Private Endpoint can be created (Format: [/Role/|/User/])
- --azure-client-subscription-id strings List of Azure subscription IDs from which a Private Endpoint can be created
- -d, --deployment-id string Identifier of the deployment that the private endpoint service is connected to
- --description string Description of the private endpoint service
- --enable-private-dns Enable private DNS for the deployment (applicable for AWS and GKE only)
- --gcp-project strings List of GCP projects from which a Private Endpoint can be created
- -h, --help help for service
- --name string Name of the private endpoint service
- -o, --organization-id string Identifier of the organization
- -p, --project-id string Identifier of the project
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl create private endpoint](create-private-endpoint.md) -
-
diff --git a/site/content/3.11/arangograph/oasisctl/create/create-private-endpoint.md b/site/content/3.11/arangograph/oasisctl/create/create-private-endpoint.md
deleted file mode 100644
index cac7dbfcb7..0000000000
--- a/site/content/3.11/arangograph/oasisctl/create/create-private-endpoint.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl Create Private Endpoint
-menuTitle: Create Private Endpoint
-weight: 17
----
-## oasisctl create private endpoint
-
-
-
-```
-oasisctl create private endpoint [flags]
-```
-
-## Options
-```
- -h, --help help for endpoint
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl create private](create-private.md) - Create private resources
-* [oasisctl create private endpoint service](create-private-endpoint-service.md) - Create a Private Endpoint Service attached to an existing deployment
-
diff --git a/site/content/3.11/arangograph/oasisctl/create/create-private.md b/site/content/3.11/arangograph/oasisctl/create/create-private.md
deleted file mode 100644
index 3cb40e735b..0000000000
--- a/site/content/3.11/arangograph/oasisctl/create/create-private.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl Create Private
-menuTitle: Create Private
-weight: 16
----
-## oasisctl create private
-
-Create private resources
-
-```
-oasisctl create private [flags]
-```
-
-## Options
-```
- -h, --help help for private
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl create](_index.md) - Create resources
-* [oasisctl create private endpoint](create-private-endpoint.md) -
-
diff --git a/site/content/3.11/arangograph/oasisctl/create/create-project.md b/site/content/3.11/arangograph/oasisctl/create/create-project.md
deleted file mode 100644
index 59d997efb7..0000000000
--- a/site/content/3.11/arangograph/oasisctl/create/create-project.md
+++ /dev/null
@@ -1,31 +0,0 @@
----
-title: Oasisctl Create Project
-menuTitle: Create Project
-weight: 19
----
-## oasisctl create project
-
-Create a new project
-
-```
-oasisctl create project [flags]
-```
-
-## Options
-```
- --description string Description of the project
- -h, --help help for project
- --name string Name of the project
- -o, --organization-id string Identifier of the organization to create the project in
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl create](_index.md) - Create resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/create/create-role.md b/site/content/3.11/arangograph/oasisctl/create/create-role.md
deleted file mode 100644
index 52cec0a672..0000000000
--- a/site/content/3.11/arangograph/oasisctl/create/create-role.md
+++ /dev/null
@@ -1,32 +0,0 @@
----
-title: Oasisctl Create Role
-menuTitle: Create Role
-weight: 20
----
-## oasisctl create role
-
-Create a new role
-
-```
-oasisctl create role [flags]
-```
-
-## Options
-```
- --description string Description of the role
- -h, --help help for role
- --name string Name of the role
- -o, --organization-id string Identifier of the organization to create the role in
- -p, --permission strings Permissions granted by the role
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl create](_index.md) - Create resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/delete/_index.md b/site/content/3.11/arangograph/oasisctl/delete/_index.md
deleted file mode 100644
index 75b76a80b6..0000000000
--- a/site/content/3.11/arangograph/oasisctl/delete/_index.md
+++ /dev/null
@@ -1,41 +0,0 @@
----
-title: Oasisctl Delete
-menuTitle: Delete
-weight: 9
----
-## oasisctl delete
-
-Delete resources
-
-```
-oasisctl delete [flags]
-```
-
-## Options
-```
- -h, --help help for delete
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl](../options.md) - ArangoGraph Insights Platform
-* [oasisctl delete apikey](delete-apikey.md) - Delete an API key with given identifier
-* [oasisctl delete auditlog](delete-auditlog.md) - Delete an auditlog
-* [oasisctl delete backup](delete-backup.md) - Delete a backup for a given ID.
-* [oasisctl delete cacertificate](delete-cacertificate.md) - Delete a CA certificate the authenticated user has access to
-* [oasisctl delete deployment](delete-deployment.md) - Delete a deployment the authenticated user has access to
-* [oasisctl delete example](delete-example.md) - Delete example ...
-* [oasisctl delete group](delete-group.md) - Delete a group the authenticated user has access to
-* [oasisctl delete ipallowlist](delete-ipallowlist.md) - Delete an IP allowlist the authenticated user has access to
-* [oasisctl delete metrics](delete-metrics.md) - Delete metrics resources
-* [oasisctl delete notebook](delete-notebook.md) - Delete a notebook
-* [oasisctl delete organization](delete-organization.md) - Delete an organization the authenticated user has access to
-* [oasisctl delete project](delete-project.md) - Delete a project the authenticated user has access to
-* [oasisctl delete role](delete-role.md) - Delete a role the authenticated user has access to
-
diff --git a/site/content/3.11/arangograph/oasisctl/delete/delete-apikey.md b/site/content/3.11/arangograph/oasisctl/delete/delete-apikey.md
deleted file mode 100644
index d18236eac3..0000000000
--- a/site/content/3.11/arangograph/oasisctl/delete/delete-apikey.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl Delete API Key
-menuTitle: Delete API Key
-weight: 1
----
-## oasisctl delete apikey
-
-Delete an API key with given identifier
-
-```
-oasisctl delete apikey [flags]
-```
-
-## Options
-```
- -i, --apikey-id string Identifier of the API key to delete
- -h, --help help for apikey
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl delete](_index.md) - Delete resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/delete/delete-auditlog-archive-events.md b/site/content/3.11/arangograph/oasisctl/delete/delete-auditlog-archive-events.md
deleted file mode 100644
index d337652b7b..0000000000
--- a/site/content/3.11/arangograph/oasisctl/delete/delete-auditlog-archive-events.md
+++ /dev/null
@@ -1,30 +0,0 @@
----
-title: Oasisctl Delete Audit Log Archive Events
-menuTitle: Delete Audit Log Archive Events
-weight: 4
----
-## oasisctl delete auditlog archive events
-
-Delete auditlog archive events
-
-```
-oasisctl delete auditlog archive events [flags]
-```
-
-## Options
-```
- -i, --auditlog-archive-id string Identifier of the auditlog archive to delete events from.
- -h, --help help for events
- --to string Remove events created before this timestamp.
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl delete auditlog archive](delete-auditlog-archive.md) - Delete an auditlog archive
-
diff --git a/site/content/3.11/arangograph/oasisctl/delete/delete-auditlog-archive.md b/site/content/3.11/arangograph/oasisctl/delete/delete-auditlog-archive.md
deleted file mode 100644
index 59153bfbdd..0000000000
--- a/site/content/3.11/arangograph/oasisctl/delete/delete-auditlog-archive.md
+++ /dev/null
@@ -1,30 +0,0 @@
----
-title: Oasisctl Delete Audit Log Archive
-menuTitle: Delete Audit Log Archive
-weight: 3
----
-## oasisctl delete auditlog archive
-
-Delete an auditlog archive
-
-```
-oasisctl delete auditlog archive [flags]
-```
-
-## Options
-```
- -i, --auditlog-archive-id string Identifier of the auditlog archive to delete.
- -h, --help help for archive
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl delete auditlog](delete-auditlog.md) - Delete an auditlog
-* [oasisctl delete auditlog archive events](delete-auditlog-archive-events.md) - Delete auditlog archive events
-
diff --git a/site/content/3.11/arangograph/oasisctl/delete/delete-auditlog-destination.md b/site/content/3.11/arangograph/oasisctl/delete/delete-auditlog-destination.md
deleted file mode 100644
index 6dcb135925..0000000000
--- a/site/content/3.11/arangograph/oasisctl/delete/delete-auditlog-destination.md
+++ /dev/null
@@ -1,33 +0,0 @@
----
-title: Oasisctl Delete Audit Log Destination
-menuTitle: Delete Audit Log Destination
-weight: 5
----
-## oasisctl delete auditlog destination
-
-Delete a destination from an auditlog
-
-```
-oasisctl delete auditlog destination [flags]
-```
-
-## Options
-```
- -i, --auditlog-id string Identifier of the auditlog to delete.
- -h, --help help for destination
- --index int Index of the destination to remove. Only one delete option can be specified. (default -1)
- -o, --organization-id string Identifier of the organization.
- --type string Type of the destination to remove. This will remove ALL destinations with that type.
- --url string An optional URL which will be used to delete a single destination instead of all.
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl delete auditlog](delete-auditlog.md) - Delete an auditlog
-
diff --git a/site/content/3.11/arangograph/oasisctl/delete/delete-auditlog.md b/site/content/3.11/arangograph/oasisctl/delete/delete-auditlog.md
deleted file mode 100644
index 6895de151f..0000000000
--- a/site/content/3.11/arangograph/oasisctl/delete/delete-auditlog.md
+++ /dev/null
@@ -1,32 +0,0 @@
----
-title: Oasisctl Delete Audit Log
-menuTitle: Delete Audit Log
-weight: 2
----
-## oasisctl delete auditlog
-
-Delete an auditlog
-
-```
-oasisctl delete auditlog [flags]
-```
-
-## Options
-```
- -i, --auditlog-id string Identifier of the auditlog to delete.
- -h, --help help for auditlog
- -o, --organization-id string Identifier of the organization
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl delete](_index.md) - Delete resources
-* [oasisctl delete auditlog archive](delete-auditlog-archive.md) - Delete an auditlog archive
-* [oasisctl delete auditlog destination](delete-auditlog-destination.md) - Delete a destination from an auditlog
-
diff --git a/site/content/3.11/arangograph/oasisctl/delete/delete-backup-policy.md b/site/content/3.11/arangograph/oasisctl/delete/delete-backup-policy.md
deleted file mode 100644
index 99e8ac2deb..0000000000
--- a/site/content/3.11/arangograph/oasisctl/delete/delete-backup-policy.md
+++ /dev/null
@@ -1,31 +0,0 @@
----
-title: Oasisctl Delete Backup Policy
-menuTitle: Delete Backup Policy
-weight: 7
----
-## oasisctl delete backup policy
-
-Delete a backup policy for a given ID.
-
-```
-oasisctl delete backup policy [flags]
-```
-
-## Options
-```
- -h, --help help for policy
- -i, --id string Identifier of the backup policy
- -o, --organization-id string Identifier of the organization
- -p, --project-id string Identifier of the project
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl delete backup](delete-backup.md) - Delete a backup for a given ID.
-
diff --git a/site/content/3.11/arangograph/oasisctl/delete/delete-backup.md b/site/content/3.11/arangograph/oasisctl/delete/delete-backup.md
deleted file mode 100644
index cf116f93a1..0000000000
--- a/site/content/3.11/arangograph/oasisctl/delete/delete-backup.md
+++ /dev/null
@@ -1,30 +0,0 @@
----
-title: Oasisctl Delete Backup
-menuTitle: Delete Backup
-weight: 6
----
-## oasisctl delete backup
-
-Delete a backup for a given ID.
-
-```
-oasisctl delete backup [flags]
-```
-
-## Options
-```
- -h, --help help for backup
- -i, --id string Identifier of the backup
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl delete](_index.md) - Delete resources
-* [oasisctl delete backup policy](delete-backup-policy.md) - Delete a backup policy for a given ID.
-
diff --git a/site/content/3.11/arangograph/oasisctl/delete/delete-cacertificate.md b/site/content/3.11/arangograph/oasisctl/delete/delete-cacertificate.md
deleted file mode 100644
index aad85c751b..0000000000
--- a/site/content/3.11/arangograph/oasisctl/delete/delete-cacertificate.md
+++ /dev/null
@@ -1,31 +0,0 @@
----
-title: Oasisctl Delete CA Certificate
-menuTitle: Delete CA Certificate
-weight: 8
----
-## oasisctl delete cacertificate
-
-Delete a CA certificate the authenticated user has access to
-
-```
-oasisctl delete cacertificate [flags]
-```
-
-## Options
-```
- -c, --cacertificate-id string Identifier of the CA certificate
- -h, --help help for cacertificate
- -o, --organization-id string Identifier of the organization
- -p, --project-id string Identifier of the project
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl delete](_index.md) - Delete resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/delete/delete-deployment.md b/site/content/3.11/arangograph/oasisctl/delete/delete-deployment.md
deleted file mode 100644
index 15450ecb9b..0000000000
--- a/site/content/3.11/arangograph/oasisctl/delete/delete-deployment.md
+++ /dev/null
@@ -1,31 +0,0 @@
----
-title: Oasisctl Delete Deployment
-menuTitle: Delete Deployment
-weight: 9
----
-## oasisctl delete deployment
-
-Delete a deployment the authenticated user has access to
-
-```
-oasisctl delete deployment [flags]
-```
-
-## Options
-```
- -d, --deployment-id string Identifier of the deployment
- -h, --help help for deployment
- -o, --organization-id string Identifier of the organization
- -p, --project-id string Identifier of the project
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl delete](_index.md) - Delete resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/delete/delete-example-installation.md b/site/content/3.11/arangograph/oasisctl/delete/delete-example-installation.md
deleted file mode 100644
index 569152227e..0000000000
--- a/site/content/3.11/arangograph/oasisctl/delete/delete-example-installation.md
+++ /dev/null
@@ -1,32 +0,0 @@
----
-title: Oasisctl Delete Example Installation
-menuTitle: Delete Example Installation
-weight: 11
----
-## oasisctl delete example installation
-
-Delete an example datasets installation
-
-```
-oasisctl delete example installation [flags]
-```
-
-## Options
-```
- -d, --deployment-id string Identifier of the deployment to list installations for
- -h, --help help for installation
- --installation-id string The ID of the installation to delete.
- -o, --organization-id string Identifier of the organization
- -p, --project-id string Identifier of the project
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl delete example](delete-example.md) - Delete example ...
-
diff --git a/site/content/3.11/arangograph/oasisctl/delete/delete-example.md b/site/content/3.11/arangograph/oasisctl/delete/delete-example.md
deleted file mode 100644
index 9518b2d7d1..0000000000
--- a/site/content/3.11/arangograph/oasisctl/delete/delete-example.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl Delete Example
-menuTitle: Delete Example
-weight: 10
----
-## oasisctl delete example
-
-Delete example ...
-
-```
-oasisctl delete example [flags]
-```
-
-## Options
-```
- -h, --help help for example
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl delete](_index.md) - Delete resources
-* [oasisctl delete example installation](delete-example-installation.md) - Delete an example datasets installation
-
diff --git a/site/content/3.11/arangograph/oasisctl/delete/delete-group-members.md b/site/content/3.11/arangograph/oasisctl/delete/delete-group-members.md
deleted file mode 100644
index ae6dc82a96..0000000000
--- a/site/content/3.11/arangograph/oasisctl/delete/delete-group-members.md
+++ /dev/null
@@ -1,31 +0,0 @@
----
-title: Oasisctl Delete Group Members
-menuTitle: Delete Group Members
-weight: 13
----
-## oasisctl delete group members
-
-Delete members from group
-
-```
-oasisctl delete group members [flags]
-```
-
-## Options
-```
- -g, --group-id string Identifier of the group to delete members from
- -h, --help help for members
- -o, --organization-id string Identifier of the organization
- -u, --user-emails strings A comma separated list of user email addresses
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl delete group](delete-group.md) - Delete a group the authenticated user has access to
-
diff --git a/site/content/3.11/arangograph/oasisctl/delete/delete-group.md b/site/content/3.11/arangograph/oasisctl/delete/delete-group.md
deleted file mode 100644
index 4f6fe7d91c..0000000000
--- a/site/content/3.11/arangograph/oasisctl/delete/delete-group.md
+++ /dev/null
@@ -1,31 +0,0 @@
----
-title: Oasisctl Delete Group
-menuTitle: Delete Group
-weight: 12
----
-## oasisctl delete group
-
-Delete a group the authenticated user has access to
-
-```
-oasisctl delete group [flags]
-```
-
-## Options
-```
- -g, --group-id string Identifier of the group
- -h, --help help for group
- -o, --organization-id string Identifier of the organization
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl delete](_index.md) - Delete resources
-* [oasisctl delete group members](delete-group-members.md) - Delete members from group
-
diff --git a/site/content/3.11/arangograph/oasisctl/delete/delete-ipallowlist.md b/site/content/3.11/arangograph/oasisctl/delete/delete-ipallowlist.md
deleted file mode 100644
index 1806667457..0000000000
--- a/site/content/3.11/arangograph/oasisctl/delete/delete-ipallowlist.md
+++ /dev/null
@@ -1,31 +0,0 @@
----
-title: Oasisctl Delete IP Allowlist
-menuTitle: Delete IP Allowlist
-weight: 14
----
-## oasisctl delete ipallowlist
-
-Delete an IP allowlist the authenticated user has access to
-
-```
-oasisctl delete ipallowlist [flags]
-```
-
-## Options
-```
- -h, --help help for ipallowlist
- -i, --ipallowlist-id string Identifier of the IP allowlist
- -o, --organization-id string Identifier of the organization
- -p, --project-id string Identifier of the project
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl delete](_index.md) - Delete resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/delete/delete-metrics-token.md b/site/content/3.11/arangograph/oasisctl/delete/delete-metrics-token.md
deleted file mode 100644
index c18927e996..0000000000
--- a/site/content/3.11/arangograph/oasisctl/delete/delete-metrics-token.md
+++ /dev/null
@@ -1,32 +0,0 @@
----
-title: Oasisctl Delete Metrics Token
-menuTitle: Delete Metrics Token
-weight: 16
----
-## oasisctl delete metrics token
-
-Delete a metrics token for a deployment
-
-```
-oasisctl delete metrics token [flags]
-```
-
-## Options
-```
- -d, --deployment-id string Identifier of the deployment
- -h, --help help for token
- -o, --organization-id string Identifier of the organization
- -p, --project-id string Identifier of the project
- -t, --token-id string Identifier of the metrics token
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl delete metrics](delete-metrics.md) - Delete metrics resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/delete/delete-metrics.md b/site/content/3.11/arangograph/oasisctl/delete/delete-metrics.md
deleted file mode 100644
index 36052afbce..0000000000
--- a/site/content/3.11/arangograph/oasisctl/delete/delete-metrics.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl Delete Metrics
-menuTitle: Delete Metrics
-weight: 15
----
-## oasisctl delete metrics
-
-Delete metrics resources
-
-```
-oasisctl delete metrics [flags]
-```
-
-## Options
-```
- -h, --help help for metrics
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl delete](_index.md) - Delete resources
-* [oasisctl delete metrics token](delete-metrics-token.md) - Delete a metrics token for a deployment
-
diff --git a/site/content/3.11/arangograph/oasisctl/delete/delete-notebook.md b/site/content/3.11/arangograph/oasisctl/delete/delete-notebook.md
deleted file mode 100644
index 3992653923..0000000000
--- a/site/content/3.11/arangograph/oasisctl/delete/delete-notebook.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl Delete Notebook
-menuTitle: Delete Notebook
-weight: 17
----
-## oasisctl delete notebook
-
-Delete a notebook
-
-```
-oasisctl delete notebook [flags]
-```
-
-## Options
-```
- -h, --help help for notebook
- -n, --notebook-id string Identifier of the notebook
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl delete](_index.md) - Delete resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/delete/delete-organization-invite.md b/site/content/3.11/arangograph/oasisctl/delete/delete-organization-invite.md
deleted file mode 100644
index dae7596f39..0000000000
--- a/site/content/3.11/arangograph/oasisctl/delete/delete-organization-invite.md
+++ /dev/null
@@ -1,30 +0,0 @@
----
-title: Oasisctl Delete Organization Invite
-menuTitle: Delete Organization Invite
-weight: 19
----
-## oasisctl delete organization invite
-
-Delete an organization invite the authenticated user has access to
-
-```
-oasisctl delete organization invite [flags]
-```
-
-## Options
-```
- -h, --help help for invite
- -i, --invite-id string Identifier of the organization invite
- -o, --organization-id string Identifier of the organization
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl delete organization](delete-organization.md) - Delete an organization the authenticated user has access to
-
diff --git a/site/content/3.11/arangograph/oasisctl/delete/delete-organization-members.md b/site/content/3.11/arangograph/oasisctl/delete/delete-organization-members.md
deleted file mode 100644
index c3d4151366..0000000000
--- a/site/content/3.11/arangograph/oasisctl/delete/delete-organization-members.md
+++ /dev/null
@@ -1,30 +0,0 @@
----
-title: Oasisctl Delete Organization Members
-menuTitle: Delete Organization Members
-weight: 20
----
-## oasisctl delete organization members
-
-Delete members from organization
-
-```
-oasisctl delete organization members [flags]
-```
-
-## Options
-```
- -h, --help help for members
- -o, --organization-id string Identifier of the organization
- -u, --user-emails strings A comma separated list of user email addresses
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl delete organization](delete-organization.md) - Delete an organization the authenticated user has access to
-
diff --git a/site/content/3.11/arangograph/oasisctl/delete/delete-organization.md b/site/content/3.11/arangograph/oasisctl/delete/delete-organization.md
deleted file mode 100644
index 362056323c..0000000000
--- a/site/content/3.11/arangograph/oasisctl/delete/delete-organization.md
+++ /dev/null
@@ -1,31 +0,0 @@
----
-title: Oasisctl Delete Organization
-menuTitle: Delete Organization
-weight: 18
----
-## oasisctl delete organization
-
-Delete an organization the authenticated user has access to
-
-```
-oasisctl delete organization [flags]
-```
-
-## Options
-```
- -h, --help help for organization
- -o, --organization-id string Identifier of the organization
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl delete](_index.md) - Delete resources
-* [oasisctl delete organization invite](delete-organization-invite.md) - Delete an organization invite the authenticated user has access to
-* [oasisctl delete organization members](delete-organization-members.md) - Delete members from organization
-
diff --git a/site/content/3.11/arangograph/oasisctl/delete/delete-project.md b/site/content/3.11/arangograph/oasisctl/delete/delete-project.md
deleted file mode 100644
index 9b45160be9..0000000000
--- a/site/content/3.11/arangograph/oasisctl/delete/delete-project.md
+++ /dev/null
@@ -1,30 +0,0 @@
----
-title: Oasisctl Delete Project
-menuTitle: Delete Project
-weight: 21
----
-## oasisctl delete project
-
-Delete a project the authenticated user has access to
-
-```
-oasisctl delete project [flags]
-```
-
-## Options
-```
- -h, --help help for project
- -o, --organization-id string Identifier of the organization
- -p, --project-id string Identifier of the project
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl delete](_index.md) - Delete resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/delete/delete-role.md b/site/content/3.11/arangograph/oasisctl/delete/delete-role.md
deleted file mode 100644
index c8bcbb67f2..0000000000
--- a/site/content/3.11/arangograph/oasisctl/delete/delete-role.md
+++ /dev/null
@@ -1,30 +0,0 @@
----
-title: Oasisctl Delete Role
-menuTitle: Delete Role
-weight: 22
----
-## oasisctl delete role
-
-Delete a role the authenticated user has access to
-
-```
-oasisctl delete role [flags]
-```
-
-## Options
-```
- -h, --help help for role
- -o, --organization-id string Identifier of the organization
- -r, --role-id string Identifier of the role
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl delete](_index.md) - Delete resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/disable/_index.md b/site/content/3.11/arangograph/oasisctl/disable/_index.md
deleted file mode 100644
index 916ad96f01..0000000000
--- a/site/content/3.11/arangograph/oasisctl/disable/_index.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl Disable
-menuTitle: Disable
-weight: 10
----
-## oasisctl disable
-
-Disable some settings related to deployment
-
-```
-oasisctl disable [flags]
-```
-
-## Options
-```
- -h, --help help for disable
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl](../options.md) - ArangoGraph Insights Platform
-* [oasisctl disable scheduled-root-password-rotation](disable-scheduled-root-password-rotation.md) - Disable scheduled root password rotation
-
diff --git a/site/content/3.11/arangograph/oasisctl/disable/disable-scheduled-root-password-rotation.md b/site/content/3.11/arangograph/oasisctl/disable/disable-scheduled-root-password-rotation.md
deleted file mode 100644
index 52ac637431..0000000000
--- a/site/content/3.11/arangograph/oasisctl/disable/disable-scheduled-root-password-rotation.md
+++ /dev/null
@@ -1,31 +0,0 @@
----
-title: Oasisctl Disable Scheduled-Root-Password-Rotation
-menuTitle: Disable Scheduled-Root-Password-Rotation
-weight: 1
----
-## oasisctl disable scheduled-root-password-rotation
-
-Disable scheduled root password rotation
-
-```
-oasisctl disable scheduled-root-password-rotation [flags]
-```
-
-## Options
-```
- -d, --deployment-id string Identifier of the deployment
- -h, --help help for scheduled-root-password-rotation
- -o, --organization-id string Identifier of the organization
- -p, --project-id string Identifier of the project
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl disable](_index.md) - Disable some settings related to deployment
-
diff --git a/site/content/3.11/arangograph/oasisctl/enable/_index.md b/site/content/3.11/arangograph/oasisctl/enable/_index.md
deleted file mode 100644
index 61a3b03d10..0000000000
--- a/site/content/3.11/arangograph/oasisctl/enable/_index.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl Enable
-menuTitle: Enable
-weight: 11
----
-## oasisctl enable
-
-Enable some settings related to deployment
-
-```
-oasisctl enable [flags]
-```
-
-## Options
-```
- -h, --help help for enable
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl](../options.md) - ArangoGraph Insights Platform
-* [oasisctl enable scheduled-root-password-rotation](enable-scheduled-root-password-rotation.md) - Enable scheduled root password rotation
-
diff --git a/site/content/3.11/arangograph/oasisctl/enable/enable-scheduled-root-password-rotation.md b/site/content/3.11/arangograph/oasisctl/enable/enable-scheduled-root-password-rotation.md
deleted file mode 100644
index 8628abc79c..0000000000
--- a/site/content/3.11/arangograph/oasisctl/enable/enable-scheduled-root-password-rotation.md
+++ /dev/null
@@ -1,31 +0,0 @@
----
-title: Oasisctl Enable Scheduled-Root-Password-Rotation
-menuTitle: Enable Scheduled-Root-Password-Rotation
-weight: 1
----
-## oasisctl enable scheduled-root-password-rotation
-
-Enable scheduled root password rotation
-
-```
-oasisctl enable scheduled-root-password-rotation [flags]
-```
-
-## Options
-```
- -d, --deployment-id string Identifier of the deployment
- -h, --help help for scheduled-root-password-rotation
- -o, --organization-id string Identifier of the organization
- -p, --project-id string Identifier of the project
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl enable](_index.md) - Enable some settings related to deployment
-
diff --git a/site/content/3.11/arangograph/oasisctl/generate-docs.md b/site/content/3.11/arangograph/oasisctl/generate-docs.md
deleted file mode 100644
index f1d83f8437..0000000000
--- a/site/content/3.11/arangograph/oasisctl/generate-docs.md
+++ /dev/null
@@ -1,31 +0,0 @@
----
-title: Oasisctl Generate Documentation
-menuTitle: Generate Documentation
-weight: 12
----
-## oasisctl generate-docs
-
-Generate output
-
-```
-oasisctl generate-docs [flags]
-```
-
-## Options
-```
- -h, --help help for generate-docs
- -l, --link-file-ext string What file extensions the links should point to
- -o, --output-dir string Output directory (default "./docs")
- -r, --replace-underscore-with string Replace the underscore in links with the given character
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl](options.md) - ArangoGraph Insights Platform
-
diff --git a/site/content/3.11/arangograph/oasisctl/get/_index.md b/site/content/3.11/arangograph/oasisctl/get/_index.md
deleted file mode 100644
index 20021e7831..0000000000
--- a/site/content/3.11/arangograph/oasisctl/get/_index.md
+++ /dev/null
@@ -1,47 +0,0 @@
----
-title: Oasisctl Get
-menuTitle: Get
-weight: 13
----
-## oasisctl get
-
-Get information
-
-```
-oasisctl get [flags]
-```
-
-## Options
-```
- -h, --help help for get
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl](../options.md) - ArangoGraph Insights Platform
-* [oasisctl get auditlog](get-auditlog.md) - Get auditlog archive
-* [oasisctl get backup](get-backup.md) - Get a backup
-* [oasisctl get cacertificate](get-cacertificate.md) - Get a CA certificate the authenticated user has access to
-* [oasisctl get deployment](get-deployment.md) - Get a deployment the authenticated user has access to
-* [oasisctl get example](get-example.md) - Get a single example dataset
-* [oasisctl get group](get-group.md) - Get a group the authenticated user has access to
-* [oasisctl get ipallowlist](get-ipallowlist.md) - Get an IP allowlist the authenticated user has access to
-* [oasisctl get metrics](get-metrics.md) - Get metrics information
-* [oasisctl get notebook](get-notebook.md) - Get a notebook
-* [oasisctl get organization](get-organization.md) - Get an organization the authenticated user is a member of
-* [oasisctl get policy](get-policy.md) - Get a policy the authenticated user has access to
-* [oasisctl get private](get-private.md) - Get private information
-* [oasisctl get project](get-project.md) - Get a project the authenticated user has access to
-* [oasisctl get provider](get-provider.md) - Get a provider the authenticated user has access to
-* [oasisctl get region](get-region.md) - Get a region the authenticated user has access to
-* [oasisctl get role](get-role.md) - Get a role the authenticated user has access to
-* [oasisctl get self](get-self.md) - Get information about the authenticated user
-* [oasisctl get server](get-server.md) - Get server information
-* [oasisctl get tandc](get-tandc.md) - Get current terms and conditions or get one by ID
-
diff --git a/site/content/3.11/arangograph/oasisctl/get/get-auditlog-archive.md b/site/content/3.11/arangograph/oasisctl/get/get-auditlog-archive.md
deleted file mode 100644
index 546b9a55c4..0000000000
--- a/site/content/3.11/arangograph/oasisctl/get/get-auditlog-archive.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl Get Audit Log Archive
-menuTitle: Get Audit Log Archive
-weight: 2
----
-## oasisctl get auditlog archive
-
-Get auditlog archive
-
-```
-oasisctl get auditlog archive [flags]
-```
-
-## Options
-```
- -i, --auditlog-id string Identifier of the auditlog
- -h, --help help for archive
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl get auditlog](get-auditlog.md) - Get auditlog archive
-
diff --git a/site/content/3.11/arangograph/oasisctl/get/get-auditlog-events.md b/site/content/3.11/arangograph/oasisctl/get/get-auditlog-events.md
deleted file mode 100644
index 44c9088765..0000000000
--- a/site/content/3.11/arangograph/oasisctl/get/get-auditlog-events.md
+++ /dev/null
@@ -1,35 +0,0 @@
----
-title: Oasisctl Get Audit Log Events
-menuTitle: Get Audit Log Events
-weight: 3
----
-## oasisctl get auditlog events
-
-Get auditlog events
-
-```
-oasisctl get auditlog events [flags]
-```
-
-## Options
-```
- --auditlog-archive-id string If set, include only events from this AuditLogArchive
- -i, --auditlog-id string Identifier of the auditlog
- --excluded-topics strings If non-empty, leave out events with one of these topics. This takes priority over included
- --from string Request events created at or after this timestamp
- -h, --help help for events
- --included-topics strings If non-empty, only request events with one of these topics
- --limit int Limit the number of audit log events. Defaults to 0, meaning no limit
- --to string Request events created before this timestamp
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl get auditlog](get-auditlog.md) - Get auditlog archive
-
diff --git a/site/content/3.11/arangograph/oasisctl/get/get-auditlog.md b/site/content/3.11/arangograph/oasisctl/get/get-auditlog.md
deleted file mode 100644
index 025710b835..0000000000
--- a/site/content/3.11/arangograph/oasisctl/get/get-auditlog.md
+++ /dev/null
@@ -1,32 +0,0 @@
----
-title: Oasisctl Get Audit Log
-menuTitle: Get Audit Log
-weight: 1
----
-## oasisctl get auditlog
-
-Get auditlog archive
-
-```
-oasisctl get auditlog [flags]
-```
-
-## Options
-```
- -i, --auditlog-id string Identifier of the auditlog
- -h, --help help for auditlog
- -o, --organization-id string Identifier of the organization
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl get](_index.md) - Get information
-* [oasisctl get auditlog archive](get-auditlog-archive.md) - Get auditlog archive
-* [oasisctl get auditlog events](get-auditlog-events.md) - Get auditlog events
-
diff --git a/site/content/3.11/arangograph/oasisctl/get/get-backup-policy.md b/site/content/3.11/arangograph/oasisctl/get/get-backup-policy.md
deleted file mode 100644
index 916ad22e61..0000000000
--- a/site/content/3.11/arangograph/oasisctl/get/get-backup-policy.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl Get Backup Policy
-menuTitle: Get Backup Policy
-weight: 5
----
-## oasisctl get backup policy
-
-Get an existing backup policy
-
-```
-oasisctl get backup policy [flags]
-```
-
-## Options
-```
- -h, --help help for policy
- -i, --id string Identifier of the backup policy (Id|Name|Url)
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl get backup](get-backup.md) - Get a backup
-
diff --git a/site/content/3.11/arangograph/oasisctl/get/get-backup.md b/site/content/3.11/arangograph/oasisctl/get/get-backup.md
deleted file mode 100644
index 2792a98b02..0000000000
--- a/site/content/3.11/arangograph/oasisctl/get/get-backup.md
+++ /dev/null
@@ -1,30 +0,0 @@
----
-title: Oasisctl Get Backup
-menuTitle: Get Backup
-weight: 4
----
-## oasisctl get backup
-
-Get a backup
-
-```
-oasisctl get backup [flags]
-```
-
-## Options
-```
- -h, --help help for backup
- -i, --id string Identifier of the backup
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl get](_index.md) - Get information
-* [oasisctl get backup policy](get-backup-policy.md) - Get an existing backup policy
-
diff --git a/site/content/3.11/arangograph/oasisctl/get/get-cacertificate.md b/site/content/3.11/arangograph/oasisctl/get/get-cacertificate.md
deleted file mode 100644
index 0be6d11e44..0000000000
--- a/site/content/3.11/arangograph/oasisctl/get/get-cacertificate.md
+++ /dev/null
@@ -1,31 +0,0 @@
----
-title: Oasisctl Get CA Certificate
-menuTitle: Get CA Certificate
-weight: 6
----
-## oasisctl get cacertificate
-
-Get a CA certificate the authenticated user has access to
-
-```
-oasisctl get cacertificate [flags]
-```
-
-## Options
-```
- -c, --cacertificate-id string Identifier of the CA certificate
- -h, --help help for cacertificate
- -o, --organization-id string Identifier of the organization
- -p, --project-id string Identifier of the project
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl get](_index.md) - Get information
-
diff --git a/site/content/3.11/arangograph/oasisctl/get/get-deployment.md b/site/content/3.11/arangograph/oasisctl/get/get-deployment.md
deleted file mode 100644
index ab8d86e3d3..0000000000
--- a/site/content/3.11/arangograph/oasisctl/get/get-deployment.md
+++ /dev/null
@@ -1,32 +0,0 @@
----
-title: Oasisctl Get Deployment
-menuTitle: Get Deployment
-weight: 7
----
-## oasisctl get deployment
-
-Get a deployment the authenticated user has access to
-
-```
-oasisctl get deployment [flags]
-```
-
-## Options
-```
- -d, --deployment-id string Identifier of the deployment
- -h, --help help for deployment
- -o, --organization-id string Identifier of the organization
- -p, --project-id string Identifier of the project
- --show-root-password show the root password of the database
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl get](_index.md) - Get information
-
diff --git a/site/content/3.11/arangograph/oasisctl/get/get-example-installation.md b/site/content/3.11/arangograph/oasisctl/get/get-example-installation.md
deleted file mode 100644
index 4190e8e288..0000000000
--- a/site/content/3.11/arangograph/oasisctl/get/get-example-installation.md
+++ /dev/null
@@ -1,32 +0,0 @@
----
-title: Oasisctl Get Example Installation
-menuTitle: Get Example Installation
-weight: 9
----
-## oasisctl get example installation
-
-Get a single example dataset installation
-
-```
-oasisctl get example installation [flags]
-```
-
-## Options
-```
- -d, --deployment-id string Identifier of the deployment to list installations for
- -h, --help help for installation
- --installation-id string The ID of the installation to get.
- -o, --organization-id string Identifier of the organization
- -p, --project-id string Identifier of the project
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl get example](get-example.md) - Get a single example dataset
-
diff --git a/site/content/3.11/arangograph/oasisctl/get/get-example.md b/site/content/3.11/arangograph/oasisctl/get/get-example.md
deleted file mode 100644
index 1238d443ed..0000000000
--- a/site/content/3.11/arangograph/oasisctl/get/get-example.md
+++ /dev/null
@@ -1,30 +0,0 @@
----
-title: Oasisctl Get Example
-menuTitle: Get Example
-weight: 8
----
-## oasisctl get example
-
-Get a single example dataset
-
-```
-oasisctl get example [flags]
-```
-
-## Options
-```
- -e, --example-dataset-id string ID of the example dataset
- -h, --help help for example
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl get](_index.md) - Get information
-* [oasisctl get example installation](get-example-installation.md) - Get a single example dataset installation
-
diff --git a/site/content/3.11/arangograph/oasisctl/get/get-group.md b/site/content/3.11/arangograph/oasisctl/get/get-group.md
deleted file mode 100644
index 9b8e72e16b..0000000000
--- a/site/content/3.11/arangograph/oasisctl/get/get-group.md
+++ /dev/null
@@ -1,30 +0,0 @@
----
-title: Oasisctl Get Group
-menuTitle: Get Group
-weight: 10
----
-## oasisctl get group
-
-Get a group the authenticated user has access to
-
-```
-oasisctl get group [flags]
-```
-
-## Options
-```
- -g, --group-id string Identifier of the group
- -h, --help help for group
- -o, --organization-id string Identifier of the organization
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl get](_index.md) - Get information
-
diff --git a/site/content/3.11/arangograph/oasisctl/get/get-ipallowlist.md b/site/content/3.11/arangograph/oasisctl/get/get-ipallowlist.md
deleted file mode 100644
index 379c324604..0000000000
--- a/site/content/3.11/arangograph/oasisctl/get/get-ipallowlist.md
+++ /dev/null
@@ -1,31 +0,0 @@
----
-title: Oasisctl Get IP Allowlist
-menuTitle: Get IP Allowlist
-weight: 11
----
-## oasisctl get ipallowlist
-
-Get an IP allowlist the authenticated user has access to
-
-```
-oasisctl get ipallowlist [flags]
-```
-
-## Options
-```
- -h, --help help for ipallowlist
- -i, --ipallowlist-id string Identifier of the IP allowlist
- -o, --organization-id string Identifier of the organization
- -p, --project-id string Identifier of the project
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl get](_index.md) - Get information
-
diff --git a/site/content/3.11/arangograph/oasisctl/get/get-metrics-token.md b/site/content/3.11/arangograph/oasisctl/get/get-metrics-token.md
deleted file mode 100644
index 6226b02793..0000000000
--- a/site/content/3.11/arangograph/oasisctl/get/get-metrics-token.md
+++ /dev/null
@@ -1,32 +0,0 @@
----
-title: Oasisctl Get Metrics Token
-menuTitle: Get Metrics Token
-weight: 13
----
-## oasisctl get metrics token
-
-Get a metrics token
-
-```
-oasisctl get metrics token [flags]
-```
-
-## Options
-```
- -d, --deployment-id string Identifier of the deployment
- -h, --help help for token
- -o, --organization-id string Identifier of the organization
- -p, --project-id string Identifier of the project
- -t, --token-id string Identifier of the metrics token
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl get metrics](get-metrics.md) - Get metrics information
-
diff --git a/site/content/3.11/arangograph/oasisctl/get/get-metrics.md b/site/content/3.11/arangograph/oasisctl/get/get-metrics.md
deleted file mode 100644
index f2699417aa..0000000000
--- a/site/content/3.11/arangograph/oasisctl/get/get-metrics.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl Get Metrics
-menuTitle: Get Metrics
-weight: 12
----
-## oasisctl get metrics
-
-Get metrics information
-
-```
-oasisctl get metrics [flags]
-```
-
-## Options
-```
- -h, --help help for metrics
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl get](_index.md) - Get information
-* [oasisctl get metrics token](get-metrics-token.md) - Get a metrics token
-
diff --git a/site/content/3.11/arangograph/oasisctl/get/get-notebook.md b/site/content/3.11/arangograph/oasisctl/get/get-notebook.md
deleted file mode 100644
index 8526fb293a..0000000000
--- a/site/content/3.11/arangograph/oasisctl/get/get-notebook.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl Get Notebook
-menuTitle: Get Notebook
-weight: 14
----
-## oasisctl get notebook
-
-Get a notebook
-
-```
-oasisctl get notebook [flags]
-```
-
-## Options
-```
- -h, --help help for notebook
- -n, --notebook-id string Identifier of the notebook
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl get](_index.md) - Get information
-
diff --git a/site/content/3.11/arangograph/oasisctl/get/get-organization-authentication-providers.md b/site/content/3.11/arangograph/oasisctl/get/get-organization-authentication-providers.md
deleted file mode 100644
index da20b01a1a..0000000000
--- a/site/content/3.11/arangograph/oasisctl/get/get-organization-authentication-providers.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl Get Organization Authentication Providers
-menuTitle: Get Organization Authentication Providers
-weight: 17
----
-## oasisctl get organization authentication providers
-
-Get which authentication providers are allowed for accessing a specific organization
-
-```
-oasisctl get organization authentication providers [flags]
-```
-
-## Options
-```
- -h, --help help for providers
- -o, --organization-id string Identifier of the organization
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl get organization authentication](get-organization-authentication.md) - Get authentication specific information for an organization
-
diff --git a/site/content/3.11/arangograph/oasisctl/get/get-organization-authentication.md b/site/content/3.11/arangograph/oasisctl/get/get-organization-authentication.md
deleted file mode 100644
index cd16e2841b..0000000000
--- a/site/content/3.11/arangograph/oasisctl/get/get-organization-authentication.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl Get Organization Authentication
-menuTitle: Get Organization Authentication
-weight: 16
----
-## oasisctl get organization authentication
-
-Get authentication specific information for an organization
-
-```
-oasisctl get organization authentication [flags]
-```
-
-## Options
-```
- -h, --help help for authentication
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl get organization](get-organization.md) - Get an organization the authenticated user is a member of
-* [oasisctl get organization authentication providers](get-organization-authentication-providers.md) - Get which authentication providers are allowed for accessing a specific organization
-
diff --git a/site/content/3.11/arangograph/oasisctl/get/get-organization-email-domain-restrictions.md b/site/content/3.11/arangograph/oasisctl/get/get-organization-email-domain-restrictions.md
deleted file mode 100644
index 400ad06087..0000000000
--- a/site/content/3.11/arangograph/oasisctl/get/get-organization-email-domain-restrictions.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl Get Organization Email Domain Restrictions
-menuTitle: Get Organization Email Domain Restrictions
-weight: 20
----
-## oasisctl get organization email domain restrictions
-
-Get which email domain restrictions are placed on accessing a specific organization
-
-```
-oasisctl get organization email domain restrictions [flags]
-```
-
-## Options
-```
- -h, --help help for restrictions
- -o, --organization-id string Identifier of the organization
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl get organization email domain](get-organization-email-domain.md) - Get email domain specific information for an organization
-
diff --git a/site/content/3.11/arangograph/oasisctl/get/get-organization-email-domain.md b/site/content/3.11/arangograph/oasisctl/get/get-organization-email-domain.md
deleted file mode 100644
index 305097e72f..0000000000
--- a/site/content/3.11/arangograph/oasisctl/get/get-organization-email-domain.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl Get Organization Email Domain
-menuTitle: Get Organization Email Domain
-weight: 19
----
-## oasisctl get organization email domain
-
-Get email domain specific information for an organization
-
-```
-oasisctl get organization email domain [flags]
-```
-
-## Options
-```
- -h, --help help for domain
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl get organization email](get-organization-email.md) - Get email specific information for an organization
-* [oasisctl get organization email domain restrictions](get-organization-email-domain-restrictions.md) - Get which email domain restrictions are placed on accessing a specific organization
-
diff --git a/site/content/3.11/arangograph/oasisctl/get/get-organization-email.md b/site/content/3.11/arangograph/oasisctl/get/get-organization-email.md
deleted file mode 100644
index 5ca941ffcd..0000000000
--- a/site/content/3.11/arangograph/oasisctl/get/get-organization-email.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl Get Organization Email
-menuTitle: Get Organization Email
-weight: 18
----
-## oasisctl get organization email
-
-Get email specific information for an organization
-
-```
-oasisctl get organization email [flags]
-```
-
-## Options
-```
- -h, --help help for email
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl get organization](get-organization.md) - Get an organization the authenticated user is a member of
-* [oasisctl get organization email domain](get-organization-email-domain.md) - Get email domain specific information for an organization
-
diff --git a/site/content/3.11/arangograph/oasisctl/get/get-organization-invite.md b/site/content/3.11/arangograph/oasisctl/get/get-organization-invite.md
deleted file mode 100644
index 093ed06c05..0000000000
--- a/site/content/3.11/arangograph/oasisctl/get/get-organization-invite.md
+++ /dev/null
@@ -1,30 +0,0 @@
----
-title: Oasisctl Get Organization Invite
-menuTitle: Get Organization Invite
-weight: 21
----
-## oasisctl get organization invite
-
-Get an organization invite the authenticated user has access to
-
-```
-oasisctl get organization invite [flags]
-```
-
-## Options
-```
- -h, --help help for invite
- -i, --invite-id string Identifier of the organization invite
- -o, --organization-id string Identifier of the organization
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl get organization](get-organization.md) - Get an organization the authenticated user is a member of
-
diff --git a/site/content/3.11/arangograph/oasisctl/get/get-organization.md b/site/content/3.11/arangograph/oasisctl/get/get-organization.md
deleted file mode 100644
index b05c6201ab..0000000000
--- a/site/content/3.11/arangograph/oasisctl/get/get-organization.md
+++ /dev/null
@@ -1,32 +0,0 @@
----
-title: Oasisctl Get Organization
-menuTitle: Get Organization
-weight: 15
----
-## oasisctl get organization
-
-Get an organization the authenticated user is a member of
-
-```
-oasisctl get organization [flags]
-```
-
-## Options
-```
- -h, --help help for organization
- -o, --organization-id string Identifier of the organization
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl get](_index.md) - Get information
-* [oasisctl get organization authentication](get-organization-authentication.md) - Get authentication specific information for an organization
-* [oasisctl get organization email](get-organization-email.md) - Get email specific information for an organization
-* [oasisctl get organization invite](get-organization-invite.md) - Get an organization invite the authenticated user has access to
-
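For context, a typical invocation of the removed `oasisctl get organization` command, using only the flags documented above and a hypothetical organization ID as placeholder:

```
oasisctl get organization --organization-id <org-id> --format json
```
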
diff --git a/site/content/3.11/arangograph/oasisctl/get/get-policy.md b/site/content/3.11/arangograph/oasisctl/get/get-policy.md
deleted file mode 100644
index 599e5601cb..0000000000
--- a/site/content/3.11/arangograph/oasisctl/get/get-policy.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl Get Policy
-menuTitle: Get Policy
-weight: 22
----
-## oasisctl get policy
-
-Get a policy the authenticated user has access to
-
-```
-oasisctl get policy [flags]
-```
-
-## Options
-```
- -h, --help help for policy
- -u, --url string URL of the resource to inspect the policy for
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl get](_index.md) - Get information
-
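Similarly, a minimal sketch of the removed `oasisctl get policy` command; `<resource-url>` is a hypothetical placeholder for the resource whose policy is inspected:

```
oasisctl get policy --url <resource-url>
```
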
diff --git a/site/content/3.11/arangograph/oasisctl/get/get-private-endpoint-service.md b/site/content/3.11/arangograph/oasisctl/get/get-private-endpoint-service.md
deleted file mode 100644
index a9c56b8b0f..0000000000
--- a/site/content/3.11/arangograph/oasisctl/get/get-private-endpoint-service.md
+++ /dev/null
@@ -1,31 +0,0 @@
----
-title: Oasisctl Get Private Endpoint Service
-menuTitle: Get Private Endpoint Service
-weight: 25
----
-## oasisctl get private endpoint service
-
-Get a Private Endpoint Service the authenticated user has access to
-
-```
-oasisctl get private endpoint service [flags]
-```
-
-## Options
-```
- -d, --deployment-id string Identifier of the deployment that the private endpoint service is connected to
- -h, --help help for service
- -o, --organization-id string Identifier of the organization
- -p, --project-id string Identifier of the project
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl get private endpoint](get-private-endpoint.md) - Get private endpoint information
-
diff --git a/site/content/3.11/arangograph/oasisctl/get/get-private-endpoint.md b/site/content/3.11/arangograph/oasisctl/get/get-private-endpoint.md
deleted file mode 100644
index 38afeb2dd8..0000000000
--- a/site/content/3.11/arangograph/oasisctl/get/get-private-endpoint.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl Get Private Endpoint
-menuTitle: Get Private Endpoint
-weight: 24
----
-## oasisctl get private endpoint
-
-Get private endpoint information
-
-```
-oasisctl get private endpoint [flags]
-```
-
-## Options
-```
- -h, --help help for endpoint
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl get private](get-private.md) - Get private information
-* [oasisctl get private endpoint service](get-private-endpoint-service.md) - Get a Private Endpoint Service the authenticated user has access to
-
diff --git a/site/content/3.11/arangograph/oasisctl/get/get-private.md b/site/content/3.11/arangograph/oasisctl/get/get-private.md
deleted file mode 100644
index e84921fd32..0000000000
--- a/site/content/3.11/arangograph/oasisctl/get/get-private.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl Get Private
-menuTitle: Get Private
-weight: 23
----
-## oasisctl get private
-
-Get private information
-
-```
-oasisctl get private [flags]
-```
-
-## Options
-```
- -h, --help help for private
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl get](_index.md) - Get information
-* [oasisctl get private endpoint](get-private-endpoint.md) - Get private endpoint information
-
diff --git a/site/content/3.11/arangograph/oasisctl/get/get-project.md b/site/content/3.11/arangograph/oasisctl/get/get-project.md
deleted file mode 100644
index 5bfb087e53..0000000000
--- a/site/content/3.11/arangograph/oasisctl/get/get-project.md
+++ /dev/null
@@ -1,30 +0,0 @@
----
-title: Oasisctl Get Project
-menuTitle: Get Project
-weight: 26
----
-## oasisctl get project
-
-Get a project the authenticated user has access to
-
-```
-oasisctl get project [flags]
-```
-
-## Options
-```
- -h, --help help for project
- -o, --organization-id string Identifier of the organization
- -p, --project-id string Identifier of the project
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl get](_index.md) - Get information
-
diff --git a/site/content/3.11/arangograph/oasisctl/get/get-provider.md b/site/content/3.11/arangograph/oasisctl/get/get-provider.md
deleted file mode 100644
index da7d632e1b..0000000000
--- a/site/content/3.11/arangograph/oasisctl/get/get-provider.md
+++ /dev/null
@@ -1,30 +0,0 @@
----
-title: Oasisctl Get Provider
-menuTitle: Get Provider
-weight: 27
----
-## oasisctl get provider
-
-Get a provider the authenticated user has access to
-
-```
-oasisctl get provider [flags]
-```
-
-## Options
-```
- -h, --help help for provider
- -o, --organization-id string Optional identifier of the organization
- -p, --provider-id string Identifier of the provider
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl get](_index.md) - Get information
-
diff --git a/site/content/3.11/arangograph/oasisctl/get/get-region.md b/site/content/3.11/arangograph/oasisctl/get/get-region.md
deleted file mode 100644
index 25ca81e867..0000000000
--- a/site/content/3.11/arangograph/oasisctl/get/get-region.md
+++ /dev/null
@@ -1,31 +0,0 @@
----
-title: Oasisctl Get Region
-menuTitle: Get Region
-weight: 28
----
-## oasisctl get region
-
-Get a region the authenticated user has access to
-
-```
-oasisctl get region [flags]
-```
-
-## Options
-```
- -h, --help help for region
- -o, --organization-id string Optional identifier of the organization
- -p, --provider-id string Identifier of the provider
- -r, --region-id string Identifier of the region
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl get](_index.md) - Get information
-
diff --git a/site/content/3.11/arangograph/oasisctl/get/get-role.md b/site/content/3.11/arangograph/oasisctl/get/get-role.md
deleted file mode 100644
index 898605e245..0000000000
--- a/site/content/3.11/arangograph/oasisctl/get/get-role.md
+++ /dev/null
@@ -1,30 +0,0 @@
----
-title: Oasisctl Get Role
-menuTitle: Get Role
-weight: 29
----
-## oasisctl get role
-
-Get a role the authenticated user has access to
-
-```
-oasisctl get role [flags]
-```
-
-## Options
-```
- -h, --help help for role
- -o, --organization-id string Identifier of the organization
- -r, --role-id string Identifier of the role
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl get](_index.md) - Get information
-
diff --git a/site/content/3.11/arangograph/oasisctl/get/get-self.md b/site/content/3.11/arangograph/oasisctl/get/get-self.md
deleted file mode 100644
index 26d48ad423..0000000000
--- a/site/content/3.11/arangograph/oasisctl/get/get-self.md
+++ /dev/null
@@ -1,28 +0,0 @@
----
-title: Oasisctl Get Self
-menuTitle: Get Self
-weight: 30
----
-## oasisctl get self
-
-Get information about the authenticated user
-
-```
-oasisctl get self [flags]
-```
-
-## Options
-```
- -h, --help help for self
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl get](_index.md) - Get information
-
diff --git a/site/content/3.11/arangograph/oasisctl/get/get-server-status.md b/site/content/3.11/arangograph/oasisctl/get/get-server-status.md
deleted file mode 100644
index 302fb17a1d..0000000000
--- a/site/content/3.11/arangograph/oasisctl/get/get-server-status.md
+++ /dev/null
@@ -1,31 +0,0 @@
----
-title: Oasisctl Get Server Status
-menuTitle: Get Server Status
-weight: 32
----
-## oasisctl get server status
-
-Get the status of servers for a deployment
-
-```
-oasisctl get server status [flags]
-```
-
-## Options
-```
- -d, --deployment-id string Identifier of the deployment
- -h, --help help for status
- -o, --organization-id string Identifier of the organization
- -p, --project-id string Identifier of the project
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl get server](get-server.md) - Get server information
-
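A sketch of how the removed `oasisctl get server status` command was typically invoked, assuming hypothetical organization, project, and deployment IDs:

```
oasisctl get server status \
  --organization-id <org-id> \
  --project-id <project-id> \
  --deployment-id <deployment-id>
```
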
diff --git a/site/content/3.11/arangograph/oasisctl/get/get-server.md b/site/content/3.11/arangograph/oasisctl/get/get-server.md
deleted file mode 100644
index ad54b9dfd2..0000000000
--- a/site/content/3.11/arangograph/oasisctl/get/get-server.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl Get Server
-menuTitle: Get Server
-weight: 31
----
-## oasisctl get server
-
-Get server information
-
-```
-oasisctl get server [flags]
-```
-
-## Options
-```
- -h, --help help for server
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl get](_index.md) - Get information
-* [oasisctl get server status](get-server-status.md) - Get the status of servers for a deployment
-
diff --git a/site/content/3.11/arangograph/oasisctl/get/get-tandc.md b/site/content/3.11/arangograph/oasisctl/get/get-tandc.md
deleted file mode 100644
index c33b546252..0000000000
--- a/site/content/3.11/arangograph/oasisctl/get/get-tandc.md
+++ /dev/null
@@ -1,30 +0,0 @@
----
-title: Oasisctl Get Terms & Conditions
-menuTitle: Get Terms & Conditions
-weight: 33
----
-## oasisctl get tandc
-
-Get current terms and conditions or get one by ID
-
-```
-oasisctl get tandc [flags]
-```
-
-## Options
-```
- -h, --help help for tandc
- -o, --organization-id string Identifier of the organization
- -t, --terms-and-conditions-id string Identifier of the terms and conditions to get
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl get](_index.md) - Get information
-
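Per the flags documented above, the removed `oasisctl get tandc` command could fetch either the current terms and conditions or a specific version by ID; the ID below is a hypothetical placeholder:

```
oasisctl get tandc --terms-and-conditions-id <tandc-id>
```
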
diff --git a/site/content/3.11/arangograph/oasisctl/import.md b/site/content/3.11/arangograph/oasisctl/import.md
deleted file mode 100644
index 385375d640..0000000000
--- a/site/content/3.11/arangograph/oasisctl/import.md
+++ /dev/null
@@ -1,46 +0,0 @@
----
-title: Oasisctl Import
-menuTitle: Import
-weight: 14
----
-## oasisctl import
-
-Import data from a local database or from another remote database into an Oasis deployment.
-
-```
-oasisctl import [flags]
-```
-
-## Options
-```
- -b, --batch-size int The number of documents to write at once. (default 4096)
- -d, --destination-deployment-id string Destination deployment id to import data into. It can be provided instead of address, username and password.
- --excluded-collection strings A list of collection names which should be excluded. Exclusion takes priority over inclusion.
- --excluded-database strings A list of database names which should be excluded. Exclusion takes priority over inclusion.
- --excluded-graph strings A list of graph names which should be excluded. Exclusion takes priority over inclusion.
- --excluded-view strings A list of view names which should be excluded. Exclusion takes priority over inclusion.
- -f, --force Force the copy automatically overwriting everything at destination.
- -h, --help help for import
- --included-collection strings A list of collection names which should be included. If provided, only these collections will be copied.
- --included-database strings A list of database names which should be included. If provided, only these databases will be copied.
- --included-graph strings A list of graph names which should be included. If provided, only these graphs will be copied.
- --included-view strings A list of view names which should be included. If provided, only these views will be copied.
- -r, --max-retries int The maximum number of retry attempts. Increasing this number also increases the exponential backoff timer. (default 9)
- -m, --maximum-parallel-collections int Maximum number of collections being copied in parallel. (default 10)
- --no-progress-bar Disable the progress bar but still have partial progress output.
- --query-ttl duration Cursor TTL defined as a duration. (default 2h0m0s)
- --source-address string Source database address to copy data from.
- --source-password string Source database password if required.
- --source-username string Source database username if required.
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl](options.md) - ArangoGraph Insights Platform
-
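As an illustration of the removed `oasisctl import` reference above, here is a minimal sketch of copying a single database from a local server into a deployment. The address, credentials, and IDs are hypothetical placeholders; only flags documented above are used:

```
oasisctl import \
  --source-address http://localhost:8529 \
  --source-username root \
  --source-password <password> \
  --destination-deployment-id <deployment-id> \
  --included-database myDatabase
```
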
diff --git a/site/content/3.11/arangograph/oasisctl/list/_index.md b/site/content/3.11/arangograph/oasisctl/list/_index.md
deleted file mode 100644
index b8a7496441..0000000000
--- a/site/content/3.11/arangograph/oasisctl/list/_index.md
+++ /dev/null
@@ -1,56 +0,0 @@
----
-title: Oasisctl List
-menuTitle: List
-weight: 15
----
-## oasisctl list
-
-List resources
-
-```
-oasisctl list [flags]
-```
-
-## Options
-```
- -h, --help help for list
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl](../options.md) - ArangoGraph Insights Platform
-* [oasisctl list apikeys](list-apikeys.md) - List all API keys created for the current user
-* [oasisctl list arangodb](list-arangodb.md) - List ArangoDB information
-* [oasisctl list auditlog](list-auditlog.md) - List resources for auditlogs
-* [oasisctl list auditlogs](list-auditlogs.md) - List auditlogs
-* [oasisctl list backup](list-backup.md) - A list command for various backup resources
-* [oasisctl list backups](list-backups.md) - List backups
-* [oasisctl list cacertificates](list-cacertificates.md) - List all CA certificates of the given project
-* [oasisctl list cpusizes](list-cpusizes.md) - List CPU sizes
-* [oasisctl list deployments](list-deployments.md) - List all deployments of the given project
-* [oasisctl list diskperformances](list-diskperformances.md) - List disk performances
-* [oasisctl list effective](list-effective.md) - List effective information
-* [oasisctl list example](list-example.md) - List example ...
-* [oasisctl list examples](list-examples.md) - List all example datasets
-* [oasisctl list group](list-group.md) - List group resources
-* [oasisctl list groups](list-groups.md) - List all groups of the given organization
-* [oasisctl list ipallowlists](list-ipallowlists.md) - List all IP allowlists of the given project
-* [oasisctl list metrics](list-metrics.md) - List metrics resources
-* [oasisctl list nodesizes](list-nodesizes.md) - List node sizes
-* [oasisctl list notebookmodels](list-notebookmodels.md) - List notebook models
-* [oasisctl list notebooks](list-notebooks.md) - List notebooks
-* [oasisctl list organization](list-organization.md) - List organization resources
-* [oasisctl list organizations](list-organizations.md) - List all organizations the authenticated user is a member of
-* [oasisctl list permissions](list-permissions.md) - List the known permissions
-* [oasisctl list projects](list-projects.md) - List all projects of the given organization
-* [oasisctl list providers](list-providers.md) - List all providers the authenticated user has access to
-* [oasisctl list regions](list-regions.md) - List all regions of the given provider
-* [oasisctl list roles](list-roles.md) - List all roles of the given organization
-* [oasisctl list servers](list-servers.md) - List servers information
-
diff --git a/site/content/3.11/arangograph/oasisctl/list/list-apikeys.md b/site/content/3.11/arangograph/oasisctl/list/list-apikeys.md
deleted file mode 100644
index 44984cb38b..0000000000
--- a/site/content/3.11/arangograph/oasisctl/list/list-apikeys.md
+++ /dev/null
@@ -1,28 +0,0 @@
----
-title: Oasisctl List API Keys
-menuTitle: List API Keys
-weight: 1
----
-## oasisctl list apikeys
-
-List all API keys created for the current user
-
-```
-oasisctl list apikeys [flags]
-```
-
-## Options
-```
- -h, --help help for apikeys
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl list](_index.md) - List resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/list/list-arangodb-versions.md b/site/content/3.11/arangograph/oasisctl/list/list-arangodb-versions.md
deleted file mode 100644
index 22411cf8a8..0000000000
--- a/site/content/3.11/arangograph/oasisctl/list/list-arangodb-versions.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl List ArangoDB Versions
-menuTitle: List ArangoDB Versions
-weight: 3
----
-## oasisctl list arangodb versions
-
-List all supported ArangoDB versions
-
-```
-oasisctl list arangodb versions [flags]
-```
-
-## Options
-```
- -h, --help help for versions
- -o, --organization-id string Optional identifier of the organization
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl list arangodb](list-arangodb.md) - List ArangoDB information
-
diff --git a/site/content/3.11/arangograph/oasisctl/list/list-arangodb.md b/site/content/3.11/arangograph/oasisctl/list/list-arangodb.md
deleted file mode 100644
index 04445b917d..0000000000
--- a/site/content/3.11/arangograph/oasisctl/list/list-arangodb.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl List ArangoDB
-menuTitle: List ArangoDB
-weight: 2
----
-## oasisctl list arangodb
-
-List ArangoDB information
-
-```
-oasisctl list arangodb [flags]
-```
-
-## Options
-```
- -h, --help help for arangodb
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl list](_index.md) - List resources
-* [oasisctl list arangodb versions](list-arangodb-versions.md) - List all supported ArangoDB versions
-
diff --git a/site/content/3.11/arangograph/oasisctl/list/list-auditlog-archives.md b/site/content/3.11/arangograph/oasisctl/list/list-auditlog-archives.md
deleted file mode 100644
index efe237a2b6..0000000000
--- a/site/content/3.11/arangograph/oasisctl/list/list-auditlog-archives.md
+++ /dev/null
@@ -1,30 +0,0 @@
----
-title: Oasisctl List Audit Log Archives
-menuTitle: List Audit Log Archives
-weight: 5
----
-## oasisctl list auditlog archives
-
-List auditlog archives
-
-```
-oasisctl list auditlog archives [flags]
-```
-
-## Options
-```
- -i, --auditlog-id string Identifier of the auditlog
- -h, --help help for archives
- -o, --organization-id string Identifier of the organization
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl list auditlog](list-auditlog.md) - List resources for auditlogs
-
diff --git a/site/content/3.11/arangograph/oasisctl/list/list-auditlog-destinations.md b/site/content/3.11/arangograph/oasisctl/list/list-auditlog-destinations.md
deleted file mode 100644
index f6fc395ce0..0000000000
--- a/site/content/3.11/arangograph/oasisctl/list/list-auditlog-destinations.md
+++ /dev/null
@@ -1,30 +0,0 @@
----
-title: Oasisctl List Audit Log Destinations
-menuTitle: List Audit Log Destinations
-weight: 6
----
-## oasisctl list auditlog destinations
-
-List auditlog destinations
-
-```
-oasisctl list auditlog destinations [flags]
-```
-
-## Options
-```
- --auditlog-id string Identifier of the auditlog to list the destinations for
- -h, --help help for destinations
- -o, --organization-id string Identifier of the organization
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl list auditlog](list-auditlog.md) - List resources for auditlogs
-
diff --git a/site/content/3.11/arangograph/oasisctl/list/list-auditlog.md b/site/content/3.11/arangograph/oasisctl/list/list-auditlog.md
deleted file mode 100644
index 4a86f9969e..0000000000
--- a/site/content/3.11/arangograph/oasisctl/list/list-auditlog.md
+++ /dev/null
@@ -1,30 +0,0 @@
----
-title: Oasisctl List Audit Log
-menuTitle: List Audit Log
-weight: 4
----
-## oasisctl list auditlog
-
-List resources for auditlogs
-
-```
-oasisctl list auditlog [flags]
-```
-
-## Options
-```
- -h, --help help for auditlog
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl list](_index.md) - List resources
-* [oasisctl list auditlog archives](list-auditlog-archives.md) - List auditlog archives
-* [oasisctl list auditlog destinations](list-auditlog-destinations.md) - List auditlog destinations
-
diff --git a/site/content/3.11/arangograph/oasisctl/list/list-auditlogs.md b/site/content/3.11/arangograph/oasisctl/list/list-auditlogs.md
deleted file mode 100644
index 83e17ba2e2..0000000000
--- a/site/content/3.11/arangograph/oasisctl/list/list-auditlogs.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl List Audit Logs
-menuTitle: List Audit Logs
-weight: 7
----
-## oasisctl list auditlogs
-
-List auditlogs
-
-```
-oasisctl list auditlogs [flags]
-```
-
-## Options
-```
- -h, --help help for auditlogs
- -o, --organization-id string Identifier of the organization
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl list](_index.md) - List resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/list/list-backup-policies.md b/site/content/3.11/arangograph/oasisctl/list/list-backup-policies.md
deleted file mode 100644
index ec1b895990..0000000000
--- a/site/content/3.11/arangograph/oasisctl/list/list-backup-policies.md
+++ /dev/null
@@ -1,30 +0,0 @@
----
-title: Oasisctl List Backup Policies
-menuTitle: List Backup Policies
-weight: 9
----
-## oasisctl list backup policies
-
-List backup policies
-
-```
-oasisctl list backup policies [flags]
-```
-
-## Options
-```
- --deployment-id string The ID of the deployment to list backup policies for
- -h, --help help for policies
- --include-deleted If set, the result includes all backup policies, including those marked as deleted but not yet removed from the system
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl list backup](list-backup.md) - A list command for various backup resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/list/list-backup.md b/site/content/3.11/arangograph/oasisctl/list/list-backup.md
deleted file mode 100644
index 3c0b2d78a8..0000000000
--- a/site/content/3.11/arangograph/oasisctl/list/list-backup.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl List Backup
-menuTitle: List Backup
-weight: 8
----
-## oasisctl list backup
-
-A list command for various backup resources
-
-```
-oasisctl list backup [flags]
-```
-
-## Options
-```
- -h, --help help for backup
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl list](_index.md) - List resources
-* [oasisctl list backup policies](list-backup-policies.md) - List backup policies
-
diff --git a/site/content/3.11/arangograph/oasisctl/list/list-backups.md b/site/content/3.11/arangograph/oasisctl/list/list-backups.md
deleted file mode 100644
index ace03c781e..0000000000
--- a/site/content/3.11/arangograph/oasisctl/list/list-backups.md
+++ /dev/null
@@ -1,31 +0,0 @@
----
-title: Oasisctl List Backups
-menuTitle: List Backups
-weight: 10
----
-## oasisctl list backups
-
-List backups
-
-```
-oasisctl list backups [flags]
-```
-
-## Options
-```
- --deployment-id string The ID of the deployment to list backups for
- --from string Request backups that are created at or after this timestamp
- -h, --help help for backups
- --to string Request backups that are created before this timestamp
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl list](_index.md) - List resources
-
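A hedged example of the removed `oasisctl list backups` command restricted to a time window; the deployment ID is a placeholder, and the RFC 3339 timestamp format is an assumption, as the page above does not specify one:

```
oasisctl list backups \
  --deployment-id <deployment-id> \
  --from "2024-01-01T00:00:00Z" \
  --to "2024-02-01T00:00:00Z"
```
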
diff --git a/site/content/3.11/arangograph/oasisctl/list/list-cacertificates.md b/site/content/3.11/arangograph/oasisctl/list/list-cacertificates.md
deleted file mode 100644
index 903063bb34..0000000000
--- a/site/content/3.11/arangograph/oasisctl/list/list-cacertificates.md
+++ /dev/null
@@ -1,30 +0,0 @@
----
-title: Oasisctl List CA Certificates
-menuTitle: List CA Certificates
-weight: 11
----
-## oasisctl list cacertificates
-
-List all CA certificates of the given project
-
-```
-oasisctl list cacertificates [flags]
-```
-
-## Options
-```
- -h, --help help for cacertificates
- -o, --organization-id string Identifier of the organization
- -p, --project-id string Identifier of the project
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl list](_index.md) - List resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/list/list-cpusizes.md b/site/content/3.11/arangograph/oasisctl/list/list-cpusizes.md
deleted file mode 100644
index 85188eac3b..0000000000
--- a/site/content/3.11/arangograph/oasisctl/list/list-cpusizes.md
+++ /dev/null
@@ -1,31 +0,0 @@
----
-title: Oasisctl List CPU Sizes
-menuTitle: List CPU Sizes
-weight: 12
----
-## oasisctl list cpusizes
-
-List CPU sizes
-
-```
-oasisctl list cpusizes [flags]
-```
-
-## Options
-```
- -h, --help help for cpusizes
- -o, --organization-id string Identifier of the organization
- -p, --project-id string Identifier of the project
- --provider-id string Identifier of the provider
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl list](_index.md) - List resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/list/list-deployments.md b/site/content/3.11/arangograph/oasisctl/list/list-deployments.md
deleted file mode 100644
index 66b3d739d2..0000000000
--- a/site/content/3.11/arangograph/oasisctl/list/list-deployments.md
+++ /dev/null
@@ -1,30 +0,0 @@
----
-title: Oasisctl List Deployments
-menuTitle: List Deployments
-weight: 13
----
-## oasisctl list deployments
-
-List all deployments of the given project
-
-```
-oasisctl list deployments [flags]
-```
-
-## Options
-```
- -h, --help help for deployments
- -o, --organization-id string Identifier of the organization
- -p, --project-id string Identifier of the project
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl list](_index.md) - List resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/list/list-diskperformances.md b/site/content/3.11/arangograph/oasisctl/list/list-diskperformances.md
deleted file mode 100644
index ddbd5714c0..0000000000
--- a/site/content/3.11/arangograph/oasisctl/list/list-diskperformances.md
+++ /dev/null
@@ -1,33 +0,0 @@
----
-title: Oasisctl List Disk Performances
-menuTitle: List Disk Performances
-weight: 14
----
-## oasisctl list diskperformances
-
-List disk performances
-
-```
-oasisctl list diskperformances [flags]
-```
-
-## Options
-```
- --dbserver-disk-size int32 The disk size of DB-Servers (GiB) (default 32)
- -h, --help help for diskperformances
- --node-size-id string Identifier of the node size
- -o, --organization-id string Identifier of the organization
- --provider-id string Identifier of the provider
- -r, --region-id string Identifier of the region
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl list](_index.md) - List resources
-
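An illustrative invocation of the removed `oasisctl list diskperformances` command; all IDs are hypothetical placeholders, and the 64 GiB value simply overrides the documented 32 GiB default disk size:

```
oasisctl list diskperformances \
  --organization-id <org-id> \
  --provider-id <provider-id> \
  --region-id <region-id> \
  --node-size-id <node-size-id> \
  --dbserver-disk-size 64
```
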
diff --git a/site/content/3.11/arangograph/oasisctl/list/list-effective-permissions.md b/site/content/3.11/arangograph/oasisctl/list/list-effective-permissions.md
deleted file mode 100644
index 394cc1006e..0000000000
--- a/site/content/3.11/arangograph/oasisctl/list/list-effective-permissions.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl List Effective Permissions
-menuTitle: List Effective Permissions
-weight: 16
----
-## oasisctl list effective permissions
-
-List the effective permissions the authenticated user has for a given URL
-
-```
-oasisctl list effective permissions [flags]
-```
-
-## Options
-```
- -h, --help help for permissions
- -u, --url string URL of the resource to get effective permissions for
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl list effective](list-effective.md) - List effective information
-
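For context, the removed `oasisctl list effective permissions` command took a resource URL; the value below is a hypothetical placeholder, since the page above does not document the URL scheme:

```
oasisctl list effective permissions --url <resource-url> --format json
```
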
diff --git a/site/content/3.11/arangograph/oasisctl/list/list-effective.md b/site/content/3.11/arangograph/oasisctl/list/list-effective.md
deleted file mode 100644
index 431f601dc1..0000000000
--- a/site/content/3.11/arangograph/oasisctl/list/list-effective.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl List Effective
-menuTitle: List Effective
-weight: 15
----
-## oasisctl list effective
-
-List effective information
-
-```
-oasisctl list effective [flags]
-```
-
-## Options
-```
- -h, --help help for effective
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl list](_index.md) - List resources
-* [oasisctl list effective permissions](list-effective-permissions.md) - List the effective permissions the authenticated user has for a given URL
-
diff --git a/site/content/3.11/arangograph/oasisctl/list/list-example-installations.md b/site/content/3.11/arangograph/oasisctl/list/list-example-installations.md
deleted file mode 100644
index 5a9167f5b9..0000000000
--- a/site/content/3.11/arangograph/oasisctl/list/list-example-installations.md
+++ /dev/null
@@ -1,31 +0,0 @@
----
-title: Oasisctl List Example Installations
-menuTitle: List Example Installations
-weight: 18
----
-## oasisctl list example installations
-
-List all example dataset installations for a deployment
-
-```
-oasisctl list example installations [flags]
-```
-
-## Options
-```
- -d, --deployment-id string Identifier of the deployment to list installations for
- -h, --help help for installations
- -o, --organization-id string Identifier of the organization
- -p, --project-id string Identifier of the project
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl list example](list-example.md) - List example ...
-
diff --git a/site/content/3.11/arangograph/oasisctl/list/list-example.md b/site/content/3.11/arangograph/oasisctl/list/list-example.md
deleted file mode 100644
index e389b299c2..0000000000
--- a/site/content/3.11/arangograph/oasisctl/list/list-example.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl List Example
-menuTitle: List Example
-weight: 17
----
-## oasisctl list example
-
-List example ...
-
-```
-oasisctl list example [flags]
-```
-
-## Options
-```
- -h, --help help for example
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl list](_index.md) - List resources
-* [oasisctl list example installations](list-example-installations.md) - List all example dataset installations for a deployment
-
diff --git a/site/content/3.11/arangograph/oasisctl/list/list-examples.md b/site/content/3.11/arangograph/oasisctl/list/list-examples.md
deleted file mode 100644
index 1cc1d11b86..0000000000
--- a/site/content/3.11/arangograph/oasisctl/list/list-examples.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl List Examples
-menuTitle: List Examples
-weight: 19
----
-## oasisctl list examples
-
-List all example datasets
-
-```
-oasisctl list examples [flags]
-```
-
-## Options
-```
- -h, --help help for examples
- -o, --organization-id string Identifier of the organization
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl list](_index.md) - List resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/list/list-group-members.md b/site/content/3.11/arangograph/oasisctl/list/list-group-members.md
deleted file mode 100644
index 6bc87e0b73..0000000000
--- a/site/content/3.11/arangograph/oasisctl/list/list-group-members.md
+++ /dev/null
@@ -1,30 +0,0 @@
----
-title: Oasisctl List Group Members
-menuTitle: List Group Members
-weight: 21
----
-## oasisctl list group members
-
-List members of a group the authenticated user is a member of
-
-```
-oasisctl list group members [flags]
-```
-
-## Options
-```
- -g, --group-id string Identifier of the group
- -h, --help help for members
- -o, --organization-id string Identifier of the organization
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl list group](list-group.md) - List group resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/list/list-group.md b/site/content/3.11/arangograph/oasisctl/list/list-group.md
deleted file mode 100644
index 28f5caa79d..0000000000
--- a/site/content/3.11/arangograph/oasisctl/list/list-group.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl List Group
-menuTitle: List Group
-weight: 20
----
-## oasisctl list group
-
-List group resources
-
-```
-oasisctl list group [flags]
-```
-
-## Options
-```
- -h, --help help for group
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl list](_index.md) - List resources
-* [oasisctl list group members](list-group-members.md) - List members of a group the authenticated user is a member of
-
diff --git a/site/content/3.11/arangograph/oasisctl/list/list-groups.md b/site/content/3.11/arangograph/oasisctl/list/list-groups.md
deleted file mode 100644
index 8908ae0fb3..0000000000
--- a/site/content/3.11/arangograph/oasisctl/list/list-groups.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl List Groups
-menuTitle: List Groups
-weight: 22
----
-## oasisctl list groups
-
-List all groups of the given organization
-
-```
-oasisctl list groups [flags]
-```
-
-## Options
-```
- -h, --help help for groups
- -o, --organization-id string Identifier of the organization
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl list](_index.md) - List resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/list/list-ipallowlists.md b/site/content/3.11/arangograph/oasisctl/list/list-ipallowlists.md
deleted file mode 100644
index 33ef91495d..0000000000
--- a/site/content/3.11/arangograph/oasisctl/list/list-ipallowlists.md
+++ /dev/null
@@ -1,30 +0,0 @@
----
-title: Oasisctl List IP Allowlists
-menuTitle: List IP Allowlists
-weight: 23
----
-## oasisctl list ipallowlists
-
-List all IP allowlists of the given project
-
-```
-oasisctl list ipallowlists [flags]
-```
-
-## Options
-```
- -h, --help help for ipallowlists
- -o, --organization-id string Identifier of the organization
- -p, --project-id string Identifier of the project
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl list](_index.md) - List resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/list/list-metrics-tokens.md b/site/content/3.11/arangograph/oasisctl/list/list-metrics-tokens.md
deleted file mode 100644
index ce1713add8..0000000000
--- a/site/content/3.11/arangograph/oasisctl/list/list-metrics-tokens.md
+++ /dev/null
@@ -1,31 +0,0 @@
----
-title: Oasisctl List Metrics Tokens
-menuTitle: List Metrics Tokens
-weight: 25
----
-## oasisctl list metrics tokens
-
-List all metrics tokens of the given deployment
-
-```
-oasisctl list metrics tokens [flags]
-```
-
-## Options
-```
- -d, --deployment-id string Identifier of the deployment
- -h, --help help for tokens
- -o, --organization-id string Identifier of the organization
- -p, --project-id string Identifier of the project
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl list metrics](list-metrics.md) - List metrics resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/list/list-metrics.md b/site/content/3.11/arangograph/oasisctl/list/list-metrics.md
deleted file mode 100644
index fe3a321be3..0000000000
--- a/site/content/3.11/arangograph/oasisctl/list/list-metrics.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl List Metrics
-menuTitle: List Metrics
-weight: 24
----
-## oasisctl list metrics
-
-List metrics resources
-
-```
-oasisctl list metrics [flags]
-```
-
-## Options
-```
- -h, --help help for metrics
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl list](_index.md) - List resources
-* [oasisctl list metrics tokens](list-metrics-tokens.md) - List all metrics tokens of the given deployment
-
diff --git a/site/content/3.11/arangograph/oasisctl/list/list-nodesizes.md b/site/content/3.11/arangograph/oasisctl/list/list-nodesizes.md
deleted file mode 100644
index 60c0bc9d56..0000000000
--- a/site/content/3.11/arangograph/oasisctl/list/list-nodesizes.md
+++ /dev/null
@@ -1,33 +0,0 @@
----
-title: Oasisctl List Node Sizes
-menuTitle: List Node Sizes
-weight: 26
----
-## oasisctl list nodesizes
-
-List node sizes
-
-```
-oasisctl list nodesizes [flags]
-```
-
-## Options
-```
- -h, --help help for nodesizes
- --model string Identifier of the model (default "oneshard")
- -o, --organization-id string Identifier of the organization
- -p, --project-id string Identifier of the project
- --provider-id string Identifier of the provider
- -r, --region-id string Identifier of the region
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl list](_index.md) - List resources
-
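A sketch of the removed `oasisctl list nodesizes` invocation combining the documented flags; all IDs are hypothetical placeholders, and `oneshard` is simply the documented default model:

```
oasisctl list nodesizes \
  --organization-id <org-id> \
  --provider-id <provider-id> \
  --region-id <region-id> \
  --model oneshard
```
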
diff --git a/site/content/3.11/arangograph/oasisctl/list/list-notebookmodels.md b/site/content/3.11/arangograph/oasisctl/list/list-notebookmodels.md
deleted file mode 100644
index cdca9cb6a5..0000000000
--- a/site/content/3.11/arangograph/oasisctl/list/list-notebookmodels.md
+++ /dev/null
@@ -1,31 +0,0 @@
----
-title: Oasisctl List Notebook Models
-menuTitle: List Notebook Models
-weight: 27
----
-## oasisctl list notebookmodels
-
-List notebook models
-
-```
-oasisctl list notebookmodels [flags]
-```
-
-## Options
-```
- -d, --deployment-id string Identifier of the deployment that the notebook must run next to
- -h, --help help for notebookmodels
- -o, --organization-id string Identifier of the organization that the deployment is in
- -p, --project-id string Identifier of the project that the deployment is in
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl list](_index.md) - List resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/list/list-notebooks.md b/site/content/3.11/arangograph/oasisctl/list/list-notebooks.md
deleted file mode 100644
index 29aa77467f..0000000000
--- a/site/content/3.11/arangograph/oasisctl/list/list-notebooks.md
+++ /dev/null
@@ -1,31 +0,0 @@
----
-title: Oasisctl List Notebooks
-menuTitle: List Notebooks
-weight: 28
----
-## oasisctl list notebooks
-
-List notebooks
-
-```
-oasisctl list notebooks [flags]
-```
-
-## Options
-```
- -d, --deployment-id string Identifier of the deployment that the notebooks run next to
- -h, --help help for notebooks
- -o, --organization-id string Identifier of the organization that has notebooks
- -p, --project-id string Identifier of the project that has notebooks
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl list](_index.md) - List resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/list/list-organization-invites.md b/site/content/3.11/arangograph/oasisctl/list/list-organization-invites.md
deleted file mode 100644
index d3fbe58668..0000000000
--- a/site/content/3.11/arangograph/oasisctl/list/list-organization-invites.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl List Organization Invites
-menuTitle: List Organization Invites
-weight: 30
----
-## oasisctl list organization invites
-
-List invites of an organization the authenticated user is a member of
-
-```
-oasisctl list organization invites [flags]
-```
-
-## Options
-```
- -h, --help help for invites
- -o, --organization-id string Identifier of the organization
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl list organization](list-organization.md) - List organization resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/list/list-organization-members.md b/site/content/3.11/arangograph/oasisctl/list/list-organization-members.md
deleted file mode 100644
index f143d66886..0000000000
--- a/site/content/3.11/arangograph/oasisctl/list/list-organization-members.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl List Organization Members
-menuTitle: List Organization Members
-weight: 31
----
-## oasisctl list organization members
-
-List members of an organization the authenticated user is a member of
-
-```
-oasisctl list organization members [flags]
-```
-
-## Options
-```
- -h, --help help for members
- -o, --organization-id string Identifier of the organization
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl list organization](list-organization.md) - List organization resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/list/list-organization.md b/site/content/3.11/arangograph/oasisctl/list/list-organization.md
deleted file mode 100644
index c41e4a9750..0000000000
--- a/site/content/3.11/arangograph/oasisctl/list/list-organization.md
+++ /dev/null
@@ -1,30 +0,0 @@
----
-title: Oasisctl List Organization
-menuTitle: List Organization
-weight: 29
----
-## oasisctl list organization
-
-List organization resources
-
-```
-oasisctl list organization [flags]
-```
-
-## Options
-```
- -h, --help help for organization
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl list](_index.md) - List resources
-* [oasisctl list organization invites](list-organization-invites.md) - List invites of an organization the authenticated user is a member of
-* [oasisctl list organization members](list-organization-members.md) - List members of an organization the authenticated user is a member of
-
diff --git a/site/content/3.11/arangograph/oasisctl/list/list-organizations.md b/site/content/3.11/arangograph/oasisctl/list/list-organizations.md
deleted file mode 100644
index 7cde4a6da1..0000000000
--- a/site/content/3.11/arangograph/oasisctl/list/list-organizations.md
+++ /dev/null
@@ -1,28 +0,0 @@
----
-title: Oasisctl List Organizations
-menuTitle: List Organizations
-weight: 32
----
-## oasisctl list organizations
-
-List all organizations the authenticated user is a member of
-
-```
-oasisctl list organizations [flags]
-```
-
-## Options
-```
- -h, --help help for organizations
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl list](_index.md) - List resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/list/list-permissions.md b/site/content/3.11/arangograph/oasisctl/list/list-permissions.md
deleted file mode 100644
index db4c2bd43c..0000000000
--- a/site/content/3.11/arangograph/oasisctl/list/list-permissions.md
+++ /dev/null
@@ -1,28 +0,0 @@
----
-title: Oasisctl List Permissions
-menuTitle: List Permissions
-weight: 33
----
-## oasisctl list permissions
-
-List the known permissions
-
-```
-oasisctl list permissions [flags]
-```
-
-## Options
-```
- -h, --help help for permissions
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl list](_index.md) - List resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/list/list-projects.md b/site/content/3.11/arangograph/oasisctl/list/list-projects.md
deleted file mode 100644
index 959e80a2fa..0000000000
--- a/site/content/3.11/arangograph/oasisctl/list/list-projects.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl List Projects
-menuTitle: List Projects
-weight: 34
----
-## oasisctl list projects
-
-List all projects of the given organization
-
-```
-oasisctl list projects [flags]
-```
-
-## Options
-```
- -h, --help help for projects
- -o, --organization-id string Identifier of the organization
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl list](_index.md) - List resources
-
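The removed list pages above describe a two-step discovery flow: organizations first, then the projects they contain. A minimal sketch using only the flags documented above; the organization identifier is a placeholder, not a real resource:

```
# Hypothetical discovery flow; org123 is a placeholder identifier.
oasisctl list organizations
oasisctl list projects --organization-id org123
```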
diff --git a/site/content/3.11/arangograph/oasisctl/list/list-providers.md b/site/content/3.11/arangograph/oasisctl/list/list-providers.md
deleted file mode 100644
index 1b9c90f744..0000000000
--- a/site/content/3.11/arangograph/oasisctl/list/list-providers.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl List Providers
-menuTitle: List Providers
-weight: 35
----
-## oasisctl list providers
-
-List all providers the authenticated user has access to
-
-```
-oasisctl list providers [flags]
-```
-
-## Options
-```
- -h, --help help for providers
-  -o, --organization-id string   Optional identifier of the organization
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl list](_index.md) - List resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/list/list-regions.md b/site/content/3.11/arangograph/oasisctl/list/list-regions.md
deleted file mode 100644
index 083b85a4a5..0000000000
--- a/site/content/3.11/arangograph/oasisctl/list/list-regions.md
+++ /dev/null
@@ -1,30 +0,0 @@
----
-title: Oasisctl List Regions
-menuTitle: List Regions
-weight: 36
----
-## oasisctl list regions
-
-List all regions of the given provider
-
-```
-oasisctl list regions [flags]
-```
-
-## Options
-```
- -h, --help help for regions
-  -o, --organization-id string   Optional identifier of the organization
- -p, --provider-id string Identifier of the provider
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl list](_index.md) - List resources
-
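The provider and region pages combine the same way: look up the providers visible to you, then the regions one of them offers. The provider identifier below is a placeholder:

```
# Hypothetical provider/region lookup; provider123 is a placeholder.
oasisctl list providers
oasisctl list regions --provider-id provider123
```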
diff --git a/site/content/3.11/arangograph/oasisctl/list/list-roles.md b/site/content/3.11/arangograph/oasisctl/list/list-roles.md
deleted file mode 100644
index ffa2a3ee89..0000000000
--- a/site/content/3.11/arangograph/oasisctl/list/list-roles.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl List Roles
-menuTitle: List Roles
-weight: 37
----
-## oasisctl list roles
-
-List all roles of the given organization
-
-```
-oasisctl list roles [flags]
-```
-
-## Options
-```
- -h, --help help for roles
- -o, --organization-id string Identifier of the organization
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl list](_index.md) - List resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/list/list-servers.md b/site/content/3.11/arangograph/oasisctl/list/list-servers.md
deleted file mode 100644
index f1e3a5f636..0000000000
--- a/site/content/3.11/arangograph/oasisctl/list/list-servers.md
+++ /dev/null
@@ -1,28 +0,0 @@
----
-title: Oasisctl List Servers
-menuTitle: List Servers
-weight: 38
----
-## oasisctl list servers
-
-List server information
-
-```
-oasisctl list servers [flags]
-```
-
-## Options
-```
- -h, --help help for servers
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl list](_index.md) - List resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/lock/_index.md b/site/content/3.11/arangograph/oasisctl/lock/_index.md
deleted file mode 100644
index 1b432aa982..0000000000
--- a/site/content/3.11/arangograph/oasisctl/lock/_index.md
+++ /dev/null
@@ -1,34 +0,0 @@
----
-title: Oasisctl Lock
-menuTitle: Lock
-weight: 16
----
-## oasisctl lock
-
-Lock resources
-
-```
-oasisctl lock [flags]
-```
-
-## Options
-```
- -h, --help help for lock
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl](../options.md) - ArangoGraph Insights Platform
-* [oasisctl lock cacertificate](lock-cacertificate.md) - Lock a CA certificate, so it cannot be deleted
-* [oasisctl lock deployment](lock-deployment.md) - Lock a deployment, so it cannot be deleted
-* [oasisctl lock ipallowlist](lock-ipallowlist.md) - Lock an IP allowlist, so it cannot be deleted
-* [oasisctl lock organization](lock-organization.md) - Lock an organization, so it cannot be deleted
-* [oasisctl lock policy](lock-policy.md) - Lock a backup policy
-* [oasisctl lock project](lock-project.md) - Lock a project, so it cannot be deleted
-
diff --git a/site/content/3.11/arangograph/oasisctl/lock/lock-cacertificate.md b/site/content/3.11/arangograph/oasisctl/lock/lock-cacertificate.md
deleted file mode 100644
index 274471190b..0000000000
--- a/site/content/3.11/arangograph/oasisctl/lock/lock-cacertificate.md
+++ /dev/null
@@ -1,31 +0,0 @@
----
-title: Oasisctl Lock CA Certificate
-menuTitle: Lock CA Certificate
-weight: 1
----
-## oasisctl lock cacertificate
-
-Lock a CA certificate, so it cannot be deleted
-
-```
-oasisctl lock cacertificate [flags]
-```
-
-## Options
-```
- -c, --cacertificate-id string Identifier of the CA certificate
- -h, --help help for cacertificate
- -o, --organization-id string Identifier of the organization
- -p, --project-id string Identifier of the project
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl lock](_index.md) - Lock resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/lock/lock-deployment.md b/site/content/3.11/arangograph/oasisctl/lock/lock-deployment.md
deleted file mode 100644
index 3a64c29d17..0000000000
--- a/site/content/3.11/arangograph/oasisctl/lock/lock-deployment.md
+++ /dev/null
@@ -1,31 +0,0 @@
----
-title: Oasisctl Lock Deployment
-menuTitle: Lock Deployment
-weight: 2
----
-## oasisctl lock deployment
-
-Lock a deployment, so it cannot be deleted
-
-```
-oasisctl lock deployment [flags]
-```
-
-## Options
-```
- -d, --deployment-id string Identifier of the deployment
- -h, --help help for deployment
- -o, --organization-id string Identifier of the organization
- -p, --project-id string Identifier of the project
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl lock](_index.md) - Lock resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/lock/lock-ipallowlist.md b/site/content/3.11/arangograph/oasisctl/lock/lock-ipallowlist.md
deleted file mode 100644
index 9f4460b2e2..0000000000
--- a/site/content/3.11/arangograph/oasisctl/lock/lock-ipallowlist.md
+++ /dev/null
@@ -1,31 +0,0 @@
----
-title: Oasisctl Lock IP Allowlist
-menuTitle: Lock IP Allowlist
-weight: 3
----
-## oasisctl lock ipallowlist
-
-Lock an IP allowlist, so it cannot be deleted
-
-```
-oasisctl lock ipallowlist [flags]
-```
-
-## Options
-```
- -h, --help help for ipallowlist
- -i, --ipallowlist-id string Identifier of the IP allowlist
- -o, --organization-id string Identifier of the organization
- -p, --project-id string Identifier of the project
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl lock](_index.md) - Lock resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/lock/lock-organization.md b/site/content/3.11/arangograph/oasisctl/lock/lock-organization.md
deleted file mode 100644
index e65abeea81..0000000000
--- a/site/content/3.11/arangograph/oasisctl/lock/lock-organization.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl Lock Organization
-menuTitle: Lock Organization
-weight: 4
----
-## oasisctl lock organization
-
-Lock an organization, so it cannot be deleted
-
-```
-oasisctl lock organization [flags]
-```
-
-## Options
-```
- -h, --help help for organization
- -o, --organization-id string Identifier of the organization
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl lock](_index.md) - Lock resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/lock/lock-policy.md b/site/content/3.11/arangograph/oasisctl/lock/lock-policy.md
deleted file mode 100644
index 8b70ed3617..0000000000
--- a/site/content/3.11/arangograph/oasisctl/lock/lock-policy.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl Lock Policy
-menuTitle: Lock Policy
-weight: 5
----
-## oasisctl lock policy
-
-Lock a backup policy
-
-```
-oasisctl lock policy [flags]
-```
-
-## Options
-```
- -d, --backup-policy-id string Identifier of the backup policy
- -h, --help help for policy
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl lock](_index.md) - Lock resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/lock/lock-project.md b/site/content/3.11/arangograph/oasisctl/lock/lock-project.md
deleted file mode 100644
index f71ac58f82..0000000000
--- a/site/content/3.11/arangograph/oasisctl/lock/lock-project.md
+++ /dev/null
@@ -1,30 +0,0 @@
----
-title: Oasisctl Lock Project
-menuTitle: Lock Project
-weight: 6
----
-## oasisctl lock project
-
-Lock a project, so it cannot be deleted
-
-```
-oasisctl lock project [flags]
-```
-
-## Options
-```
- -h, --help help for project
- -o, --organization-id string Identifier of the organization
- -p, --project-id string Identifier of the project
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl lock](_index.md) - Lock resources
-
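The lock subcommands above all follow the same shape: pass the identifiers of the resource to protect it against deletion. A sketch assuming all three identifiers are known (the values are placeholders); the unlock subcommands mirror these calls with the same flags:

```
# Hypothetical lock of a deployment; all identifiers are placeholders.
oasisctl lock deployment \
  --organization-id org123 \
  --project-id prj123 \
  --deployment-id dep123
```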
diff --git a/site/content/3.11/arangograph/oasisctl/login.md b/site/content/3.11/arangograph/oasisctl/login.md
deleted file mode 100644
index a507d3e942..0000000000
--- a/site/content/3.11/arangograph/oasisctl/login.md
+++ /dev/null
@@ -1,36 +0,0 @@
----
-title: Oasisctl Login
-menuTitle: Login
-weight: 17
----
-## oasisctl login
-
-Login to ArangoDB Oasis using an API key
-
-## Synopsis
-To authenticate in a script environment, run:
-
- export OASIS_TOKEN=$(oasisctl login --key-id= --key-secret=)
-
-
-```
-oasisctl login [flags]
-```
-
-## Options
-```
- -h, --help help for login
- -i, --key-id string API key identifier
- -s, --key-secret string API key secret
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl](options.md) - ArangoGraph Insights Platform
-
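The synopsis above deliberately leaves the key values empty. A sketch of the scripted flow it describes, assuming the generated token is picked up from the `OASIS_TOKEN` environment variable and using placeholder credentials:

```
# Placeholder API key credentials; never commit real ones.
export OASIS_TOKEN=$(oasisctl login --key-id=myKeyId --key-secret=myKeySecret)
oasisctl list organizations   # later calls authenticate via OASIS_TOKEN
```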
diff --git a/site/content/3.11/arangograph/oasisctl/logs.md b/site/content/3.11/arangograph/oasisctl/logs.md
deleted file mode 100644
index 71f2555f94..0000000000
--- a/site/content/3.11/arangograph/oasisctl/logs.md
+++ /dev/null
@@ -1,35 +0,0 @@
----
-title: Oasisctl Logs
-menuTitle: Logs
-weight: 18
----
-## oasisctl logs
-
-Get logs of the servers of a deployment the authenticated user has access to
-
-```
-oasisctl logs [flags]
-```
-
-## Options
-```
- -d, --deployment-id string Identifier of the deployment
- --end string End fetching logs at this timestamp (pass timestamp or duration before now)
-      --format string    Format of the log output, one of text or json (default "text")
- -h, --help help for logs
- -l, --limit int Limit the number of log lines
- -o, --organization-id string Identifier of the organization
- -p, --project-id string Identifier of the project
- -r, --role string Limit logs to servers with given role only (agents|coordinators|dbservers)
- --start string Start fetching logs from this timestamp (pass timestamp or duration before now)
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl](options.md) - ArangoGraph Insights Platform
-
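A hypothetical combination of the flags documented above: the most recent Coordinator logs of one deployment. The deployment identifier is a placeholder, and the `1h` duration syntax for `--start` is an assumption based on the flag description:

```
# Fetch up to 500 Coordinator log lines from the last hour (assumed
# duration syntax); dep123 is a placeholder deployment identifier.
oasisctl logs --deployment-id dep123 --role coordinators \
  --start 1h --limit 500
```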
diff --git a/site/content/3.11/arangograph/oasisctl/options.md b/site/content/3.11/arangograph/oasisctl/options.md
deleted file mode 100644
index 75823ecb85..0000000000
--- a/site/content/3.11/arangograph/oasisctl/options.md
+++ /dev/null
@@ -1,57 +0,0 @@
----
-description: Command-line client tool for managing ArangoGraph
-title: ArangoGraph Shell oasisctl
-menuTitle: Options
-weight: 1
----
-## oasisctl
-
-ArangoGraph Insights Platform
-
-## Synopsis
-ArangoGraph Insights Platform (formerly called Oasis): The Managed Cloud for ArangoDB
-
-```
-oasisctl [flags]
-```
-
-## Options
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- -h, --help help for oasisctl
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl accept](accept/_index.md) - Accept invites
-* [oasisctl add](add/_index.md) - Add resources
-* [oasisctl auditlog](auditlog/_index.md) - AuditLog resources
-* [oasisctl backup](backup/_index.md) - Backup commands
-* [oasisctl clone](clone/_index.md) - Clone resources
-* [oasisctl completion](completion.md) - Generates bash completion scripts
-* [oasisctl create](create/_index.md) - Create resources
-* [oasisctl delete](delete/_index.md) - Delete resources
-* [oasisctl disable](disable/_index.md) - Disable some settings related to deployment
-* [oasisctl enable](enable/_index.md) - Enable some settings related to deployment
-* [oasisctl generate-docs](generate-docs.md) - Generate output
-* [oasisctl get](get/_index.md) - Get information
-* [oasisctl import](import.md) - Import data from a local database or from another remote database into an Oasis deployment.
-* [oasisctl list](list/_index.md) - List resources
-* [oasisctl lock](lock/_index.md) - Lock resources
-* [oasisctl login](login.md) - Login to ArangoDB Oasis using an API key
-* [oasisctl logs](logs.md) - Get logs of the servers of a deployment the authenticated user has access to
-* [oasisctl pause](pause/_index.md) - Pause resources
-* [oasisctl rebalance](rebalance/_index.md) - Rebalance resources
-* [oasisctl reject](reject/_index.md) - Reject invites
-* [oasisctl renew](renew/_index.md) - Renew keys & tokens
-* [oasisctl resume](resume/_index.md) - Resume resources
-* [oasisctl revoke](revoke/_index.md) - Revoke keys & tokens
-* [oasisctl rotate](rotate/_index.md) - Rotate resources
-* [oasisctl top](top.md) - Show the most important server metrics
-* [oasisctl unlock](unlock/_index.md) - Unlock resources
-* [oasisctl update](update/_index.md) - Update resources
-* [oasisctl upgrade](upgrade.md) - Upgrade Oasisctl tool
-* [oasisctl version](version.md) - Show the current version of this tool
-* [oasisctl wait](wait/_index.md) - Wait for a status change
-
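The three global flags on this root page apply to every subcommand. A sketch showing them spelled out explicitly; in practice the endpoint default already matches, and the token usually comes from `oasisctl login`:

```
# Explicit global flags; $OASIS_TOKEN as produced by `oasisctl login`.
oasisctl list organizations \
  --endpoint api.cloud.arangodb.com \
  --format json \
  --token "$OASIS_TOKEN"
```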
diff --git a/site/content/3.11/arangograph/oasisctl/pause/_index.md b/site/content/3.11/arangograph/oasisctl/pause/_index.md
deleted file mode 100644
index ce02e840c5..0000000000
--- a/site/content/3.11/arangograph/oasisctl/pause/_index.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl Pause
-menuTitle: Pause
-weight: 19
----
-## oasisctl pause
-
-Pause resources
-
-```
-oasisctl pause [flags]
-```
-
-## Options
-```
- -h, --help help for pause
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl](../options.md) - ArangoGraph Insights Platform
-* [oasisctl pause notebook](pause-notebook.md) - Pause a notebook
-
diff --git a/site/content/3.11/arangograph/oasisctl/pause/pause-notebook.md b/site/content/3.11/arangograph/oasisctl/pause/pause-notebook.md
deleted file mode 100644
index 0db646d81b..0000000000
--- a/site/content/3.11/arangograph/oasisctl/pause/pause-notebook.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl Pause Notebook
-menuTitle: Pause Notebook
-weight: 1
----
-## oasisctl pause notebook
-
-Pause a notebook
-
-```
-oasisctl pause notebook [flags]
-```
-
-## Options
-```
- -h, --help help for notebook
- -n, --notebook-id string Identifier of the notebook
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl pause](_index.md) - Pause resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/rebalance/_index.md b/site/content/3.11/arangograph/oasisctl/rebalance/_index.md
deleted file mode 100644
index c1532b7f91..0000000000
--- a/site/content/3.11/arangograph/oasisctl/rebalance/_index.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl Rebalance
-menuTitle: Rebalance
-weight: 20
----
-## oasisctl rebalance
-
-Rebalance resources
-
-```
-oasisctl rebalance [flags]
-```
-
-## Options
-```
- -h, --help help for rebalance
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl](../options.md) - ArangoGraph Insights Platform
-* [oasisctl rebalance deployment](rebalance-deployment.md) - Rebalance deployment resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/rebalance/rebalance-deployment-shards.md b/site/content/3.11/arangograph/oasisctl/rebalance/rebalance-deployment-shards.md
deleted file mode 100644
index 706b6339e9..0000000000
--- a/site/content/3.11/arangograph/oasisctl/rebalance/rebalance-deployment-shards.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl Rebalance Deployment Shards
-menuTitle: Rebalance Deployment Shards
-weight: 2
----
-## oasisctl rebalance deployment shards
-
-Rebalance shards of a deployment
-
-```
-oasisctl rebalance deployment shards [flags]
-```
-
-## Options
-```
- -d, --deployment-id string Identifier of the deployment
- -h, --help help for shards
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl rebalance deployment](rebalance-deployment.md) - Rebalance deployment resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/rebalance/rebalance-deployment.md b/site/content/3.11/arangograph/oasisctl/rebalance/rebalance-deployment.md
deleted file mode 100644
index 7759314ec5..0000000000
--- a/site/content/3.11/arangograph/oasisctl/rebalance/rebalance-deployment.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl Rebalance Deployment
-menuTitle: Rebalance Deployment
-weight: 1
----
-## oasisctl rebalance deployment
-
-Rebalance deployment resources
-
-```
-oasisctl rebalance deployment [flags]
-```
-
-## Options
-```
- -h, --help help for deployment
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl rebalance](_index.md) - Rebalance resources
-* [oasisctl rebalance deployment shards](rebalance-deployment-shards.md) - Rebalance shards of a deployment
-
diff --git a/site/content/3.11/arangograph/oasisctl/reject/_index.md b/site/content/3.11/arangograph/oasisctl/reject/_index.md
deleted file mode 100644
index 69cff60ece..0000000000
--- a/site/content/3.11/arangograph/oasisctl/reject/_index.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl Reject
-menuTitle: Reject
-weight: 21
----
-## oasisctl reject
-
-Reject invites
-
-```
-oasisctl reject [flags]
-```
-
-## Options
-```
- -h, --help help for reject
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl](../options.md) - ArangoGraph Insights Platform
-* [oasisctl reject organization](reject-organization.md) - Reject organization related invites
-
diff --git a/site/content/3.11/arangograph/oasisctl/reject/reject-organization-invite.md b/site/content/3.11/arangograph/oasisctl/reject/reject-organization-invite.md
deleted file mode 100644
index d43ecfca52..0000000000
--- a/site/content/3.11/arangograph/oasisctl/reject/reject-organization-invite.md
+++ /dev/null
@@ -1,30 +0,0 @@
----
-title: Oasisctl Reject Organization Invite
-menuTitle: Reject Organization Invite
-weight: 2
----
-## oasisctl reject organization invite
-
-Reject an organization invite the authenticated user has access to
-
-```
-oasisctl reject organization invite [flags]
-```
-
-## Options
-```
- -h, --help help for invite
- -i, --invite-id string Identifier of the organization invite
- -o, --organization-id string Identifier of the organization
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl reject organization](reject-organization.md) - Reject organization related invites
-
diff --git a/site/content/3.11/arangograph/oasisctl/reject/reject-organization.md b/site/content/3.11/arangograph/oasisctl/reject/reject-organization.md
deleted file mode 100644
index c688b02cd1..0000000000
--- a/site/content/3.11/arangograph/oasisctl/reject/reject-organization.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl Reject Organization
-menuTitle: Reject Organization
-weight: 1
----
-## oasisctl reject organization
-
-Reject organization related invites
-
-```
-oasisctl reject organization [flags]
-```
-
-## Options
-```
- -h, --help help for organization
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl reject](_index.md) - Reject invites
-* [oasisctl reject organization invite](reject-organization-invite.md) - Reject an organization invite the authenticated user has access to
-
diff --git a/site/content/3.11/arangograph/oasisctl/renew/_index.md b/site/content/3.11/arangograph/oasisctl/renew/_index.md
deleted file mode 100644
index b140a835de..0000000000
--- a/site/content/3.11/arangograph/oasisctl/renew/_index.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl Renew
-menuTitle: Renew
-weight: 22
----
-## oasisctl renew
-
-Renew keys & tokens
-
-```
-oasisctl renew [flags]
-```
-
-## Options
-```
- -h, --help help for renew
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl](../options.md) - ArangoGraph Insights Platform
-* [oasisctl renew apikey](renew-apikey.md) - Renew API keys & tokens
-
diff --git a/site/content/3.11/arangograph/oasisctl/renew/renew-apikey-token.md b/site/content/3.11/arangograph/oasisctl/renew/renew-apikey-token.md
deleted file mode 100644
index e6a1798243..0000000000
--- a/site/content/3.11/arangograph/oasisctl/renew/renew-apikey-token.md
+++ /dev/null
@@ -1,31 +0,0 @@
----
-title: Oasisctl Renew API Key Token
-menuTitle: Renew API Key Token
-weight: 2
----
-## oasisctl renew apikey token
-
-Renew an API key token
-
-## Synopsis
-Renew the token (resulting from API key authentication)
-
-```
-oasisctl renew apikey token [flags]
-```
-
-## Options
-```
- -h, --help help for token
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl renew apikey](renew-apikey.md) - Renew API keys & tokens
-
diff --git a/site/content/3.11/arangograph/oasisctl/renew/renew-apikey.md b/site/content/3.11/arangograph/oasisctl/renew/renew-apikey.md
deleted file mode 100644
index 14c1b7ec4d..0000000000
--- a/site/content/3.11/arangograph/oasisctl/renew/renew-apikey.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl Renew API Key
-menuTitle: Renew API Key
-weight: 1
----
-## oasisctl renew apikey
-
-Renew API keys & tokens
-
-```
-oasisctl renew apikey [flags]
-```
-
-## Options
-```
- -h, --help help for apikey
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl renew](_index.md) - Renew keys & tokens
-* [oasisctl renew apikey token](renew-apikey-token.md) - Renew an API key token
-
diff --git a/site/content/3.11/arangograph/oasisctl/resume/_index.md b/site/content/3.11/arangograph/oasisctl/resume/_index.md
deleted file mode 100644
index 78485175c1..0000000000
--- a/site/content/3.11/arangograph/oasisctl/resume/_index.md
+++ /dev/null
@@ -1,30 +0,0 @@
----
-title: Oasisctl Resume
-menuTitle: Resume
-weight: 23
----
-## oasisctl resume
-
-Resume resources
-
-```
-oasisctl resume [flags]
-```
-
-## Options
-```
- -h, --help help for resume
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl](../options.md) - ArangoGraph Insights Platform
-* [oasisctl resume deployment](resume-deployment.md) - Resume a paused deployment the authenticated user has access to
-* [oasisctl resume notebook](resume-notebook.md) - Resume a notebook
-
diff --git a/site/content/3.11/arangograph/oasisctl/resume/resume-deployment.md b/site/content/3.11/arangograph/oasisctl/resume/resume-deployment.md
deleted file mode 100644
index 7cbc18ef00..0000000000
--- a/site/content/3.11/arangograph/oasisctl/resume/resume-deployment.md
+++ /dev/null
@@ -1,31 +0,0 @@
----
-title: Oasisctl Resume Deployment
-menuTitle: Resume Deployment
-weight: 1
----
-## oasisctl resume deployment
-
-Resume a paused deployment the authenticated user has access to
-
-```
-oasisctl resume deployment [flags]
-```
-
-## Options
-```
- -d, --deployment-id string Identifier of the deployment
- -h, --help help for deployment
- -o, --organization-id string Identifier of the organization
- -p, --project-id string Identifier of the project
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl resume](_index.md) - Resume resources
-
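A minimal sketch of resuming a paused deployment with the flags above; the identifier is a placeholder, and depending on your setup the organization and project flags may also be required:

```
# dep123 is a placeholder deployment identifier.
oasisctl resume deployment --deployment-id dep123
```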
diff --git a/site/content/3.11/arangograph/oasisctl/resume/resume-notebook.md b/site/content/3.11/arangograph/oasisctl/resume/resume-notebook.md
deleted file mode 100644
index 17add47562..0000000000
--- a/site/content/3.11/arangograph/oasisctl/resume/resume-notebook.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl Resume Notebook
-menuTitle: Resume Notebook
-weight: 2
----
-## oasisctl resume notebook
-
-Resume a notebook
-
-```
-oasisctl resume notebook [flags]
-```
-
-## Options
-```
- -h, --help help for notebook
- -n, --notebook-id string Identifier of the notebook
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl resume](_index.md) - Resume resources
-
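The notebook lifecycle documented here and under the pause command reduces to two symmetric calls; the notebook identifier is a placeholder:

```
# Hypothetical pause/resume cycle for a notebook.
oasisctl pause notebook --notebook-id nb123
oasisctl resume notebook --notebook-id nb123
```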
diff --git a/site/content/3.11/arangograph/oasisctl/revoke/_index.md b/site/content/3.11/arangograph/oasisctl/revoke/_index.md
deleted file mode 100644
index 80ad7af060..0000000000
--- a/site/content/3.11/arangograph/oasisctl/revoke/_index.md
+++ /dev/null
@@ -1,30 +0,0 @@
----
-title: Oasisctl Revoke
-menuTitle: Revoke
-weight: 24
----
-## oasisctl revoke
-
-Revoke keys & tokens
-
-```
-oasisctl revoke [flags]
-```
-
-## Options
-```
- -h, --help help for revoke
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl](../options.md) - ArangoGraph Insights Platform
-* [oasisctl revoke apikey](revoke-apikey.md) - Revoke an API key with given identifier
-* [oasisctl revoke metrics](revoke-metrics.md) - Revoke keys & tokens
-
diff --git a/site/content/3.11/arangograph/oasisctl/revoke/revoke-apikey-token.md b/site/content/3.11/arangograph/oasisctl/revoke/revoke-apikey-token.md
deleted file mode 100644
index 795b5e5605..0000000000
--- a/site/content/3.11/arangograph/oasisctl/revoke/revoke-apikey-token.md
+++ /dev/null
@@ -1,31 +0,0 @@
----
-title: Oasisctl Revoke API Key Token
-menuTitle: Revoke API Key Token
-weight: 2
----
-## oasisctl revoke apikey token
-
-Revoke an API key token
-
-## Synopsis
-Revoke the token (resulting from API key authentication)
-
-```
-oasisctl revoke apikey token [flags]
-```
-
-## Options
-```
- -h, --help help for token
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl revoke apikey](revoke-apikey.md) - Revoke an API key with given identifier
-
diff --git a/site/content/3.11/arangograph/oasisctl/revoke/revoke-apikey.md b/site/content/3.11/arangograph/oasisctl/revoke/revoke-apikey.md
deleted file mode 100644
index 5c15ef927a..0000000000
--- a/site/content/3.11/arangograph/oasisctl/revoke/revoke-apikey.md
+++ /dev/null
@@ -1,30 +0,0 @@
----
-title: Oasisctl Revoke API Key
-menuTitle: Revoke API Key
-weight: 1
----
-## oasisctl revoke apikey
-
-Revoke an API key with given identifier
-
-```
-oasisctl revoke apikey [flags]
-```
-
-## Options
-```
- -i, --apikey-id string Identifier of the API key to revoke
- -h, --help help for apikey
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl revoke](_index.md) - Revoke keys & tokens
-* [oasisctl revoke apikey token](revoke-apikey-token.md) - Revoke an API key token
-
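A sketch of revoking a key by its identifier, for example after a suspected leak; the value is a placeholder:

```
# key123 is a placeholder API key identifier.
oasisctl revoke apikey --apikey-id key123
```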
diff --git a/site/content/3.11/arangograph/oasisctl/revoke/revoke-metrics-token.md b/site/content/3.11/arangograph/oasisctl/revoke/revoke-metrics-token.md
deleted file mode 100644
index 0876f21606..0000000000
--- a/site/content/3.11/arangograph/oasisctl/revoke/revoke-metrics-token.md
+++ /dev/null
@@ -1,32 +0,0 @@
----
-title: Oasisctl Revoke Metrics Token
-menuTitle: Revoke Metrics Token
-weight: 4
----
-## oasisctl revoke metrics token
-
-Revoke a metrics token for a deployment
-
-```
-oasisctl revoke metrics token [flags]
-```
-
-## Options
-```
- -d, --deployment-id string Identifier of the deployment
- -h, --help help for token
- -o, --organization-id string Identifier of the organization
- -p, --project-id string Identifier of the project
- -t, --token-id string Identifier of the metrics token
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl revoke metrics](revoke-metrics.md) - Revoke keys & tokens
-
diff --git a/site/content/3.11/arangograph/oasisctl/revoke/revoke-metrics.md b/site/content/3.11/arangograph/oasisctl/revoke/revoke-metrics.md
deleted file mode 100644
index 638a17df00..0000000000
--- a/site/content/3.11/arangograph/oasisctl/revoke/revoke-metrics.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl Revoke Metrics
-menuTitle: Revoke Metrics
-weight: 3
----
-## oasisctl revoke metrics
-
-Revoke keys & tokens
-
-```
-oasisctl revoke metrics [flags]
-```
-
-## Options
-```
- -h, --help help for metrics
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl revoke](_index.md) - Revoke keys & tokens
-* [oasisctl revoke metrics token](revoke-metrics-token.md) - Revoke a metrics token for a deployment
-
diff --git a/site/content/3.11/arangograph/oasisctl/rotate/_index.md b/site/content/3.11/arangograph/oasisctl/rotate/_index.md
deleted file mode 100644
index e24096c868..0000000000
--- a/site/content/3.11/arangograph/oasisctl/rotate/_index.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl Rotate
-menuTitle: Rotate
-weight: 25
----
-## oasisctl rotate
-
-Rotate resources
-
-```
-oasisctl rotate [flags]
-```
-
-## Options
-```
- -h, --help help for rotate
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl](../options.md) - ArangoGraph Insights Platform
-* [oasisctl rotate deployment](rotate-deployment.md) - Rotate deployment resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/rotate/rotate-deployment-server.md b/site/content/3.11/arangograph/oasisctl/rotate/rotate-deployment-server.md
deleted file mode 100644
index 5d281d1ae4..0000000000
--- a/site/content/3.11/arangograph/oasisctl/rotate/rotate-deployment-server.md
+++ /dev/null
@@ -1,32 +0,0 @@
----
-title: Oasisctl Rotate Deployment Server
-menuTitle: Rotate Deployment Server
-weight: 2
----
-## oasisctl rotate deployment server
-
-Rotate a single server of a deployment
-
-```
-oasisctl rotate deployment server [flags]
-```
-
-## Options
-```
- -d, --deployment-id string Identifier of the deployment
- -h, --help help for server
- -o, --organization-id string Identifier of the organization
- -p, --project-id string Identifier of the project
- -s, --server-id strings Identifier of the deployment server
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl rotate deployment](rotate-deployment.md) - Rotate deployment resources
-
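`--server-id` is declared as `strings`, so it can plausibly be repeated to rotate several servers in one call. A hypothetical rolling restart under that assumption, with placeholder identifiers:

```
# Rotate two servers of one deployment; all identifiers are placeholders.
oasisctl rotate deployment server \
  --deployment-id dep123 \
  --server-id srv1 --server-id srv2
```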
diff --git a/site/content/3.11/arangograph/oasisctl/rotate/rotate-deployment.md b/site/content/3.11/arangograph/oasisctl/rotate/rotate-deployment.md
deleted file mode 100644
index de899d924d..0000000000
--- a/site/content/3.11/arangograph/oasisctl/rotate/rotate-deployment.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl Rotate Deployment
-menuTitle: Rotate Deployment
-weight: 1
----
-## oasisctl rotate deployment
-
-Rotate deployment resources
-
-```
-oasisctl rotate deployment [flags]
-```
-
-## Options
-```
- -h, --help help for deployment
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl rotate](_index.md) - Rotate resources
-* [oasisctl rotate deployment server](rotate-deployment-server.md) - Rotate a single server of a deployment
-
diff --git a/site/content/3.11/arangograph/oasisctl/top.md b/site/content/3.11/arangograph/oasisctl/top.md
deleted file mode 100644
index d89a83ebfe..0000000000
--- a/site/content/3.11/arangograph/oasisctl/top.md
+++ /dev/null
@@ -1,31 +0,0 @@
----
-title: Oasisctl Top
-menuTitle: Top
-weight: 26
----
-## oasisctl top
-
-Show the most important server metrics
-
-```
-oasisctl top [flags]
-```
-
-## Options
-```
- -d, --deployment-id string Identifier of the deployment
- -h, --help help for top
- -o, --organization-id string Identifier of the organization
- -p, --project-id string Identifier of the project
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl](options.md) - ArangoGraph Insights Platform
-
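A minimal sketch of the server metrics view for a single deployment; the identifier is a placeholder:

```
# dep123 is a placeholder deployment identifier.
oasisctl top --deployment-id dep123
```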
diff --git a/site/content/3.11/arangograph/oasisctl/unlock/_index.md b/site/content/3.11/arangograph/oasisctl/unlock/_index.md
deleted file mode 100644
index 2c376ce6fd..0000000000
--- a/site/content/3.11/arangograph/oasisctl/unlock/_index.md
+++ /dev/null
@@ -1,34 +0,0 @@
----
-title: Oasisctl Unlock
-menuTitle: Unlock
-weight: 27
----
-## oasisctl unlock
-
-Unlock resources
-
-```
-oasisctl unlock [flags]
-```
-
-## Options
-```
- -h, --help help for unlock
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl](../options.md) - ArangoGraph Insights Platform
-* [oasisctl unlock cacertificate](unlock-cacertificate.md) - Unlock a CA certificate, so it can be deleted
-* [oasisctl unlock deployment](unlock-deployment.md) - Unlock a deployment, so it can be deleted
-* [oasisctl unlock ipallowlist](unlock-ipallowlist.md) - Unlock an IP allowlist, so it can be deleted
-* [oasisctl unlock organization](unlock-organization.md) - Unlock an organization, so it can be deleted
-* [oasisctl unlock policy](unlock-policy.md) - Unlock a backup policy
-* [oasisctl unlock project](unlock-project.md) - Unlock a project, so it can be deleted
-
diff --git a/site/content/3.11/arangograph/oasisctl/unlock/unlock-cacertificate.md b/site/content/3.11/arangograph/oasisctl/unlock/unlock-cacertificate.md
deleted file mode 100644
index 418fb91ae6..0000000000
--- a/site/content/3.11/arangograph/oasisctl/unlock/unlock-cacertificate.md
+++ /dev/null
@@ -1,31 +0,0 @@
----
-title: Oasisctl Unlock CA Certificate
-menuTitle: Unlock CA Certificate
-weight: 1
----
-## oasisctl unlock cacertificate
-
-Unlock a CA certificate, so it can be deleted
-
-```
-oasisctl unlock cacertificate [flags]
-```
-
-## Options
-```
- -c, --cacertificate-id string Identifier of the CA certificate
- -h, --help help for cacertificate
- -o, --organization-id string Identifier of the organization
- -p, --project-id string Identifier of the project
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl unlock](_index.md) - Unlock resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/unlock/unlock-deployment.md b/site/content/3.11/arangograph/oasisctl/unlock/unlock-deployment.md
deleted file mode 100644
index 6d870921e6..0000000000
--- a/site/content/3.11/arangograph/oasisctl/unlock/unlock-deployment.md
+++ /dev/null
@@ -1,31 +0,0 @@
----
-title: Oasisctl Unlock Deployment
-menuTitle: Unlock Deployment
-weight: 2
----
-## oasisctl unlock deployment
-
-Unlock a deployment, so it can be deleted
-
-```
-oasisctl unlock deployment [flags]
-```
-
-## Options
-```
- -d, --deployment-id string Identifier of the deployment
- -h, --help help for deployment
- -o, --organization-id string Identifier of the organization
- -p, --project-id string Identifier of the project
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl unlock](_index.md) - Unlock resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/unlock/unlock-ipallowlist.md b/site/content/3.11/arangograph/oasisctl/unlock/unlock-ipallowlist.md
deleted file mode 100644
index 36f8fdbaed..0000000000
--- a/site/content/3.11/arangograph/oasisctl/unlock/unlock-ipallowlist.md
+++ /dev/null
@@ -1,31 +0,0 @@
----
-title: Oasisctl Unlock IP Allowlist
-menuTitle: Unlock IP Allowlist
-weight: 3
----
-## oasisctl unlock ipallowlist
-
-Unlock an IP allowlist, so it can be deleted
-
-```
-oasisctl unlock ipallowlist [flags]
-```
-
-## Options
-```
- -h, --help help for ipallowlist
- -i, --ipallowlist-id string Identifier of the IP allowlist
- -o, --organization-id string Identifier of the organization
- -p, --project-id string Identifier of the project
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl unlock](_index.md) - Unlock resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/unlock/unlock-organization.md b/site/content/3.11/arangograph/oasisctl/unlock/unlock-organization.md
deleted file mode 100644
index bfc70efccd..0000000000
--- a/site/content/3.11/arangograph/oasisctl/unlock/unlock-organization.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl Unlock Organization
-menuTitle: Unlock Organization
-weight: 4
----
-## oasisctl unlock organization
-
-Unlock an organization, so it can be deleted
-
-```
-oasisctl unlock organization [flags]
-```
-
-## Options
-```
- -h, --help help for organization
- -o, --organization-id string Identifier of the organization
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl unlock](_index.md) - Unlock resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/unlock/unlock-policy.md b/site/content/3.11/arangograph/oasisctl/unlock/unlock-policy.md
deleted file mode 100644
index 2646b5af51..0000000000
--- a/site/content/3.11/arangograph/oasisctl/unlock/unlock-policy.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl Unlock Policy
-menuTitle: Unlock Policy
-weight: 5
----
-## oasisctl unlock policy
-
-Unlock a backup policy
-
-```
-oasisctl unlock policy [flags]
-```
-
-## Options
-```
- -d, --backup-policy-id string Identifier of the backup policy
- -h, --help help for policy
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl unlock](_index.md) - Unlock resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/unlock/unlock-project.md b/site/content/3.11/arangograph/oasisctl/unlock/unlock-project.md
deleted file mode 100644
index 211e810283..0000000000
--- a/site/content/3.11/arangograph/oasisctl/unlock/unlock-project.md
+++ /dev/null
@@ -1,30 +0,0 @@
----
-title: Oasisctl Unlock Project
-menuTitle: Unlock Project
-weight: 6
----
-## oasisctl unlock project
-
-Unlock a project, so it can be deleted
-
-```
-oasisctl unlock project [flags]
-```
-
-## Options
-```
- -h, --help help for project
- -o, --organization-id string Identifier of the organization
- -p, --project-id string Identifier of the project
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl unlock](_index.md) - Unlock resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/update/_index.md b/site/content/3.11/arangograph/oasisctl/update/_index.md
deleted file mode 100644
index 0d1501cbe5..0000000000
--- a/site/content/3.11/arangograph/oasisctl/update/_index.md
+++ /dev/null
@@ -1,41 +0,0 @@
----
-title: Oasisctl Update
-menuTitle: Update
-weight: 28
----
-## oasisctl update
-
-Update resources
-
-```
-oasisctl update [flags]
-```
-
-## Options
-```
- -h, --help help for update
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl](../options.md) - ArangoGraph Insights Platform
-* [oasisctl update auditlog](update-auditlog.md) - Update an auditlog
-* [oasisctl update backup](update-backup.md) - Update a backup
-* [oasisctl update cacertificate](update-cacertificate.md) - Update a CA certificate the authenticated user has access to
-* [oasisctl update deployment](update-deployment.md) - Update a deployment the authenticated user has access to
-* [oasisctl update group](update-group.md) - Update a group the authenticated user has access to
-* [oasisctl update ipallowlist](update-ipallowlist.md) - Update an IP allowlist the authenticated user has access to
-* [oasisctl update metrics](update-metrics.md) - Update metrics resources
-* [oasisctl update notebook](update-notebook.md) - Update notebook
-* [oasisctl update organization](update-organization.md) - Update an organization the authenticated user has access to
-* [oasisctl update policy](update-policy.md) - Update a policy
-* [oasisctl update private](update-private.md) - Update private resources
-* [oasisctl update project](update-project.md) - Update a project the authenticated user has access to
-* [oasisctl update role](update-role.md) - Update a role the authenticated user has access to
-
diff --git a/site/content/3.11/arangograph/oasisctl/update/update-auditlog.md b/site/content/3.11/arangograph/oasisctl/update/update-auditlog.md
deleted file mode 100644
index 8c92aa1c12..0000000000
--- a/site/content/3.11/arangograph/oasisctl/update/update-auditlog.md
+++ /dev/null
@@ -1,33 +0,0 @@
----
-title: Oasisctl Update Audit Log
-menuTitle: Update Audit Log
-weight: 1
----
-## oasisctl update auditlog
-
-Update an auditlog
-
-```
-oasisctl update auditlog [flags]
-```
-
-## Options
-```
- -i, --auditlog-id string Identifier of the auditlog to update.
- --default If set, this AuditLog is the default for the organization.
- --description string Description of the audit log.
- -h, --help help for auditlog
- --name string Name of the audit log.
- -o, --organization-id string Identifier of the organization
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl update](_index.md) - Update resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/update/update-backup-policy.md b/site/content/3.11/arangograph/oasisctl/update/update-backup-policy.md
deleted file mode 100644
index cad0d2417f..0000000000
--- a/site/content/3.11/arangograph/oasisctl/update/update-backup-policy.md
+++ /dev/null
@@ -1,50 +0,0 @@
----
-title: Oasisctl Update Backup Policy
-menuTitle: Update Backup Policy
-weight: 3
----
-## oasisctl update backup policy
-
-Update a backup policy
-
-```
-oasisctl update backup policy [flags]
-```
-
-## Options
-```
-      --additional-region-ids strings   Add backups to the specified additional regions
- -d, --backup-policy-id string Identifier of the backup policy
- --day-of-the-month int32 Run the backup on the specified day of the month (1-31) (default 1)
- --description string Description of the backup
- --email-notification string Email notification setting (Never|FailureOnly|Always)
- --every-interval-hours int32 Schedule should run with an interval of the specified hours (1-23)
- --friday If set, a backup will be created on Fridays. Set to false explicitly to clear the flag.
- -h, --help help for policy
- --hours int32 Hours part of the time of day (0-23)
- --minutes int32 Minutes part of the time of day (0-59)
- --minutes-offset int32 Schedule should run with specific minutes offset (0-59)
- --monday If set, a backup will be created on Mondays. Set to false explicitly to clear the flag.
- --name string Name of the deployment
- --paused The policy is paused. Set to false explicitly to clear the flag.
-      --retention-period int   Backups created by this policy will be automatically deleted after the specified retention period. A value of 0 means that backups will never be deleted.
- --saturday If set, a backup will be created on Saturdays. Set to false explicitly to clear the flag.
- --schedule-type string Schedule of the policy (Hourly|Daily|Monthly)
- --sunday If set, a backup will be created on Sundays. Set to false explicitly to clear the flag.
- --thursday If set, a backup will be created on Thursdays. Set to false explicitly to clear the flag.
- --time-zone string The time-zone this time of day applies to (empty means UTC). Names MUST be exactly as defined in RFC-822. (default "UTC")
- --tuesday If set, a backup will be created on Tuesdays. Set to false explicitly to clear the flag.
- --upload The backup should be uploaded. Set to false explicitly to clear the flag.
- --wednesday If set, a backup will be created on Wednesdays. Set to false explicitly to clear the flag.
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl update backup](update-backup.md) - Update a backup
-
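A hypothetical schedule change built only from the flags listed above: switch a policy to a daily backup at 02:30 UTC. The policy identifier is a placeholder:

```
# pol123 is a placeholder backup policy identifier.
oasisctl update backup policy \
  --backup-policy-id pol123 \
  --schedule-type Daily \
  --hours 2 --minutes 30 --time-zone UTC
```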
diff --git a/site/content/3.11/arangograph/oasisctl/update/update-backup.md b/site/content/3.11/arangograph/oasisctl/update/update-backup.md
deleted file mode 100644
index 9ce085b61b..0000000000
--- a/site/content/3.11/arangograph/oasisctl/update/update-backup.md
+++ /dev/null
@@ -1,34 +0,0 @@
----
-title: Oasisctl Update Backup
-menuTitle: Update Backup
-weight: 2
----
-## oasisctl update backup
-
-Update a backup
-
-```
-oasisctl update backup [flags]
-```
-
-## Options
-```
- --auto-deleted-at int Time (h) until auto delete of the backup
- -d, --backup-id string Identifier of the backup
- --description string Description of the backup
- -h, --help help for backup
- --name string Name of the backup
-      --upload   The backup should be uploaded
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl update](_index.md) - Update resources
-* [oasisctl update backup policy](update-backup-policy.md) - Update a backup policy
-
diff --git a/site/content/3.11/arangograph/oasisctl/update/update-cacertificate.md b/site/content/3.11/arangograph/oasisctl/update/update-cacertificate.md
deleted file mode 100644
index 1b97fe7a45..0000000000
--- a/site/content/3.11/arangograph/oasisctl/update/update-cacertificate.md
+++ /dev/null
@@ -1,34 +0,0 @@
----
-title: Oasisctl Update CA Certificate
-menuTitle: Update CA Certificate
-weight: 4
----
-## oasisctl update cacertificate
-
-Update a CA certificate the authenticated user has access to
-
-```
-oasisctl update cacertificate [flags]
-```
-
-## Options
-```
- -c, --cacertificate-id string Identifier of the CA certificate
- --description string Description of the CA certificate
- -h, --help help for cacertificate
- --name string Name of the CA certificate
- -o, --organization-id string Identifier of the organization
- -p, --project-id string Identifier of the project
- --use-well-known-certificate Sets the usage of a well known certificate ie. Let's Encrypt
-```
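-
-For example, to switch a CA certificate to a well-known certificate such as
-Let's Encrypt (identifiers are placeholders):
-
-```
-oasisctl update cacertificate -o <organization-id> -p <project-id> \
-  -c <cacertificate-id> --use-well-known-certificate
-```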
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl update](_index.md) - Update resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/update/update-deployment.md b/site/content/3.11/arangograph/oasisctl/update/update-deployment.md
deleted file mode 100644
index b7c36cace2..0000000000
--- a/site/content/3.11/arangograph/oasisctl/update/update-deployment.md
+++ /dev/null
@@ -1,52 +0,0 @@
----
-title: Oasisctl Update Deployment
-menuTitle: Update Deployment
-weight: 5
----
-## oasisctl update deployment
-
-Update a deployment the authenticated user has access to
-
-```
-oasisctl update deployment [flags]
-```
-
-## Options
-```
- -c, --cacertificate-id string Identifier of the CA certificate to use for the deployment
- --coordinator-memory-size int32 Set memory size of Coordinators for flexible deployments (GiB) (default 4)
- --coordinators int32 Set number of Coordinators for flexible deployments (default 3)
- --custom-image string Set a custom image to use for the deployment. Only available for selected customers.
- --dbserver-disk-size int32 Set disk size of DB-Servers for flexible deployments (GiB) (default 32)
- --dbserver-memory-size int32 Set memory size of DB-Servers for flexible deployments (GiB) (default 4)
- --dbservers int32 Set number of DB-Servers for flexible deployments (default 3)
- -d, --deployment-id string Identifier of the deployment
- --description string Description of the deployment
- --disable-foxx-authentication Disable authentication of requests to Foxx application.
- --disk-performance-id string Set the disk performance to use for this deployment.
- --drop-vst-support Drop VST protocol support to improve resilience.
- -h, --help help for deployment
- -i, --ipallowlist-id string Identifier of the IP allowlist to use for the deployment
- --is-platform-authentication-enabled Enable platform authentication for deployment.
- --max-node-disk-size int32 Set maximum disk size for nodes for autoscaler (GiB)
- --model string Set model of the deployment (default "oneshard")
- --name string Name of the deployment
- --node-count int32 Set the number of desired nodes (default 3)
- --node-disk-size int32 Set disk size for nodes (GiB)
- --node-size-id string Set the node size to use for this deployment
- --notification-email-address strings Set email address(-es) that will be used for notifications related to this deployment.
- -o, --organization-id string Identifier of the organization
- -p, --project-id string Identifier of the project
- --version string Version of ArangoDB to use for the deployment
-```
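-
-For example, to scale a deployment to five nodes with 64 GiB disks each
-(the identifier is a placeholder):
-
-```
-oasisctl update deployment -d <deployment-id> --node-count 5 --node-disk-size 64
-```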
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl update](_index.md) - Update resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/update/update-group.md b/site/content/3.11/arangograph/oasisctl/update/update-group.md
deleted file mode 100644
index 7021923d4c..0000000000
--- a/site/content/3.11/arangograph/oasisctl/update/update-group.md
+++ /dev/null
@@ -1,32 +0,0 @@
----
-title: Oasisctl Update Group
-menuTitle: Update Group
-weight: 6
----
-## oasisctl update group
-
-Update a group the authenticated user has access to
-
-```
-oasisctl update group [flags]
-```
-
-## Options
-```
- --description string Description of the group
- -g, --group-id string Identifier of the group
- -h, --help help for group
- --name string Name of the group
- -o, --organization-id string Identifier of the organization
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl update](_index.md) - Update resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/update/update-ipallowlist.md b/site/content/3.11/arangograph/oasisctl/update/update-ipallowlist.md
deleted file mode 100644
index 089d41026c..0000000000
--- a/site/content/3.11/arangograph/oasisctl/update/update-ipallowlist.md
+++ /dev/null
@@ -1,36 +0,0 @@
----
-title: Oasisctl Update IP Allowlist
-menuTitle: Update IP Allowlist
-weight: 7
----
-## oasisctl update ipallowlist
-
-Update an IP allowlist the authenticated user has access to
-
-```
-oasisctl update ipallowlist [flags]
-```
-
-## Options
-```
- --add-cidr-range strings List of CIDR ranges to add to the IP allowlist
- --description string Description of the CA certificate
- -h, --help help for ipallowlist
- -i, --ipallowlist-id string Identifier of the IP allowlist
- --name string Name of the CA certificate
- -o, --organization-id string Identifier of the organization
- -p, --project-id string Identifier of the project
- --remote-inspection-allowed If set, remote connectivity checks by the Oasis platform are allowed
- --remove-cidr-range strings List of CIDR ranges to remove from the IP allowlist
-```
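-
-For example, to add one CIDR range and remove another in a single call
-(the identifier is a placeholder and the ranges are illustrative):
-
-```
-oasisctl update ipallowlist -i <ipallowlist-id> \
-  --add-cidr-range 203.0.113.0/24 --remove-cidr-range 198.51.100.0/24
-```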
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl update](_index.md) - Update resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/update/update-metrics-token.md b/site/content/3.11/arangograph/oasisctl/update/update-metrics-token.md
deleted file mode 100644
index 2ff4a301aa..0000000000
--- a/site/content/3.11/arangograph/oasisctl/update/update-metrics-token.md
+++ /dev/null
@@ -1,34 +0,0 @@
----
-title: Oasisctl Update Metrics Token
-menuTitle: Update Metrics Token
-weight: 9
----
-## oasisctl update metrics token
-
-Update a metrics token
-
-```
-oasisctl update metrics token [flags]
-```
-
-## Options
-```
- -d, --deployment-id string Identifier of the deployment
- --description string Description of the CA certificate
- -h, --help help for token
- --name string Name of the CA certificate
- -o, --organization-id string Identifier of the organization
- -p, --project-id string Identifier of the project
- -t, --token-id string Identifier of the metrics token
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl update metrics](update-metrics.md) - Update metrics resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/update/update-metrics.md b/site/content/3.11/arangograph/oasisctl/update/update-metrics.md
deleted file mode 100644
index d8fc683f1e..0000000000
--- a/site/content/3.11/arangograph/oasisctl/update/update-metrics.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl Update Metrics
-menuTitle: Update Metrics
-weight: 8
----
-## oasisctl update metrics
-
-Update metrics resources
-
-```
-oasisctl update metrics [flags]
-```
-
-## Options
-```
- -h, --help help for metrics
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl update](_index.md) - Update resources
-* [oasisctl update metrics token](update-metrics-token.md) - Update a metrics token
-
diff --git a/site/content/3.11/arangograph/oasisctl/update/update-notebook.md b/site/content/3.11/arangograph/oasisctl/update/update-notebook.md
deleted file mode 100644
index 2b6fee7bb0..0000000000
--- a/site/content/3.11/arangograph/oasisctl/update/update-notebook.md
+++ /dev/null
@@ -1,33 +0,0 @@
----
-title: Oasisctl Update Notebook
-menuTitle: Update Notebook
-weight: 10
----
-## oasisctl update notebook
-
-Update notebook
-
-```
-oasisctl update notebook [flags]
-```
-
-## Options
-```
- -d, --description string Description of the notebook
- -s, --disk-size int32 Notebook disk size in GiB
- -h, --help help for notebook
- --name string Name of the notebook
- -n, --notebook-id string Identifier of the notebook
- -m, --notebook-model string Identifier of the notebook model
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl update](_index.md) - Update resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/update/update-organization-authentication-providers.md b/site/content/3.11/arangograph/oasisctl/update/update-organization-authentication-providers.md
deleted file mode 100644
index 8d8d9be5de..0000000000
--- a/site/content/3.11/arangograph/oasisctl/update/update-organization-authentication-providers.md
+++ /dev/null
@@ -1,34 +0,0 @@
----
-title: Oasisctl Update Organization Authentication Providers
-menuTitle: Update Organization Authentication Providers
-weight: 13
----
-## oasisctl update organization authentication providers
-
-Update allowed authentication providers for an organization the authenticated user has access to
-
-```
-oasisctl update organization authentication providers [flags]
-```
-
-## Options
-```
- --enable-github If set, allow access from user accounts authentication through Github
- --enable-google If set, allow access from user accounts authentication through Google
- --enable-microsoft If set, allow access from user accounts authentication through Microsoft
- --enable-sso If set, allow access from user accounts authentication through single sign on (sso)
- --enable-username-password If set, allow access from user accounts authentication through username-password
- -h, --help help for providers
- -o, --organization-id string Identifier of the organization
-```
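-
-For example, to allow authentication through Google and single sign on
-(the identifier is a placeholder; whether unset flags keep their current
-value is not guaranteed here, so set all providers explicitly if in doubt):
-
-```
-oasisctl update organization authentication providers -o <organization-id> \
-  --enable-google --enable-sso
-```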
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl update organization authentication](update-organization-authentication.md) - Update authentication settings for an organization
-
diff --git a/site/content/3.11/arangograph/oasisctl/update/update-organization-authentication.md b/site/content/3.11/arangograph/oasisctl/update/update-organization-authentication.md
deleted file mode 100644
index 328b81b297..0000000000
--- a/site/content/3.11/arangograph/oasisctl/update/update-organization-authentication.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl Update Organization Authentication
-menuTitle: Update Organization Authentication
-weight: 12
----
-## oasisctl update organization authentication
-
-Update authentication settings for an organization
-
-```
-oasisctl update organization authentication [flags]
-```
-
-## Options
-```
- -h, --help help for authentication
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl update organization](update-organization.md) - Update an organization the authenticated user has access to
-* [oasisctl update organization authentication providers](update-organization-authentication-providers.md) - Update allowed authentication providers for an organization the authenticated user has access to
-
diff --git a/site/content/3.11/arangograph/oasisctl/update/update-organization-email-domain-restrictions.md b/site/content/3.11/arangograph/oasisctl/update/update-organization-email-domain-restrictions.md
deleted file mode 100644
index 6d860fa8d6..0000000000
--- a/site/content/3.11/arangograph/oasisctl/update/update-organization-email-domain-restrictions.md
+++ /dev/null
@@ -1,30 +0,0 @@
----
-title: Oasisctl Update Organization Email Domain Restrictions
-menuTitle: Update Organization Email Domain Restrictions
-weight: 16
----
-## oasisctl update organization email domain restrictions
-
-Update which domain restrictions are placed on accessing a specific organization
-
-```
-oasisctl update organization email domain restrictions [flags]
-```
-
-## Options
-```
- -d, --allowed-domain strings Allowed email domains for users of the organization
- -h, --help help for restrictions
- -o, --organization-id string Identifier of the organization
-```
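-
-For example, to restrict access to users with email addresses from two
-company domains (the identifier and domains are placeholders):
-
-```
-oasisctl update organization email domain restrictions -o <organization-id> \
-  --allowed-domain example.com --allowed-domain example.org
-```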
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl update organization email domain](update-organization-email-domain.md) - Update email domain specific information for an organization
-
diff --git a/site/content/3.11/arangograph/oasisctl/update/update-organization-email-domain.md b/site/content/3.11/arangograph/oasisctl/update/update-organization-email-domain.md
deleted file mode 100644
index 57f79b6fbb..0000000000
--- a/site/content/3.11/arangograph/oasisctl/update/update-organization-email-domain.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl Update Organization Email Domain
-menuTitle: Update Organization Email Domain
-weight: 15
----
-## oasisctl update organization email domain
-
-Update email domain specific information for an organization
-
-```
-oasisctl update organization email domain [flags]
-```
-
-## Options
-```
- -h, --help help for domain
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl update organization email](update-organization-email.md) - Update email specific information for an organization
-* [oasisctl update organization email domain restrictions](update-organization-email-domain-restrictions.md) - Update which domain restrictions are placed on accessing a specific organization
-
diff --git a/site/content/3.11/arangograph/oasisctl/update/update-organization-email.md b/site/content/3.11/arangograph/oasisctl/update/update-organization-email.md
deleted file mode 100644
index 89f05ed737..0000000000
--- a/site/content/3.11/arangograph/oasisctl/update/update-organization-email.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl Update Organization Email
-menuTitle: Update Organization Email
-weight: 14
----
-## oasisctl update organization email
-
-Update email specific information for an organization
-
-```
-oasisctl update organization email [flags]
-```
-
-## Options
-```
- -h, --help help for email
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl update organization](update-organization.md) - Update an organization the authenticated user has access to
-* [oasisctl update organization email domain](update-organization-email-domain.md) - Update email domain specific information for an organization
-
diff --git a/site/content/3.11/arangograph/oasisctl/update/update-organization.md b/site/content/3.11/arangograph/oasisctl/update/update-organization.md
deleted file mode 100644
index 670d291842..0000000000
--- a/site/content/3.11/arangograph/oasisctl/update/update-organization.md
+++ /dev/null
@@ -1,33 +0,0 @@
----
-title: Oasisctl Update Organization
-menuTitle: Update Organization
-weight: 11
----
-## oasisctl update organization
-
-Update an organization the authenticated user has access to
-
-```
-oasisctl update organization [flags]
-```
-
-## Options
-```
- --description string Description of the organization
- -h, --help help for organization
- --name string Name of the organization
- -o, --organization-id string Identifier of the organization
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl update](_index.md) - Update resources
-* [oasisctl update organization authentication](update-organization-authentication.md) - Update authentication settings for an organization
-* [oasisctl update organization email](update-organization-email.md) - Update email specific information for an organization
-
diff --git a/site/content/3.11/arangograph/oasisctl/update/update-policy-add-binding.md b/site/content/3.11/arangograph/oasisctl/update/update-policy-add-binding.md
deleted file mode 100644
index df89601244..0000000000
--- a/site/content/3.11/arangograph/oasisctl/update/update-policy-add-binding.md
+++ /dev/null
@@ -1,32 +0,0 @@
----
-title: Oasisctl Update Policy Add Binding
-menuTitle: Update Policy Add Binding
-weight: 19
----
-## oasisctl update policy add binding
-
-Add a role binding to a policy
-
-```
-oasisctl update policy add binding [flags]
-```
-
-## Options
-```
- --group-id strings Identifiers of the groups to add bindings for
- -h, --help help for binding
- -r, --role-id string Identifier of the role to bind to
- -u, --url string URL of the resource to update the policy for
- --user-id strings Identifiers of the users to add bindings for
-```
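-
-For example, to bind the predefined `deployment-viewer` role to a group
-(the URL and group identifier are placeholders):
-
-```
-oasisctl update policy add binding -u <resource-url> -r deployment-viewer \
-  --group-id <group-id>
-```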
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl update policy add](update-policy-add.md) - Add to a policy
-
diff --git a/site/content/3.11/arangograph/oasisctl/update/update-policy-add.md b/site/content/3.11/arangograph/oasisctl/update/update-policy-add.md
deleted file mode 100644
index 42e655fe7c..0000000000
--- a/site/content/3.11/arangograph/oasisctl/update/update-policy-add.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl Update Policy Add
-menuTitle: Update Policy Add
-weight: 18
----
-## oasisctl update policy add
-
-Add to a policy
-
-```
-oasisctl update policy add [flags]
-```
-
-## Options
-```
- -h, --help help for add
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl update policy](update-policy.md) - Update a policy
-* [oasisctl update policy add binding](update-policy-add-binding.md) - Add a role binding to a policy
-
diff --git a/site/content/3.11/arangograph/oasisctl/update/update-policy-delete-binding.md b/site/content/3.11/arangograph/oasisctl/update/update-policy-delete-binding.md
deleted file mode 100644
index 602bc93e93..0000000000
--- a/site/content/3.11/arangograph/oasisctl/update/update-policy-delete-binding.md
+++ /dev/null
@@ -1,32 +0,0 @@
----
-title: Oasisctl Update Policy Delete Binding
-menuTitle: Update Policy Delete Binding
-weight: 21
----
-## oasisctl update policy delete binding
-
-Delete a role binding from a policy
-
-```
-oasisctl update policy delete binding [flags]
-```
-
-## Options
-```
- --group-id strings Identifiers of the groups to delete bindings for
- -h, --help help for binding
- -r, --role-id string Identifier of the role to delete bind for
- -u, --url string URL of the resource to update the policy for
- --user-id strings Identifiers of the users to delete bindings for
-```
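-
-For example, to remove a user's binding to a role (all values are placeholders):
-
-```
-oasisctl update policy delete binding -u <resource-url> -r <role-id> \
-  --user-id <user-id>
-```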
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl update policy delete](update-policy-delete.md) - Delete from a policy
-
diff --git a/site/content/3.11/arangograph/oasisctl/update/update-policy-delete.md b/site/content/3.11/arangograph/oasisctl/update/update-policy-delete.md
deleted file mode 100644
index dec2527590..0000000000
--- a/site/content/3.11/arangograph/oasisctl/update/update-policy-delete.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl Update Policy Delete
-menuTitle: Update Policy Delete
-weight: 20
----
-## oasisctl update policy delete
-
-Delete from a policy
-
-```
-oasisctl update policy delete [flags]
-```
-
-## Options
-```
- -h, --help help for delete
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl update policy](update-policy.md) - Update a policy
-* [oasisctl update policy delete binding](update-policy-delete-binding.md) - Delete a role binding from a policy
-
diff --git a/site/content/3.11/arangograph/oasisctl/update/update-policy.md b/site/content/3.11/arangograph/oasisctl/update/update-policy.md
deleted file mode 100644
index 132c0b4123..0000000000
--- a/site/content/3.11/arangograph/oasisctl/update/update-policy.md
+++ /dev/null
@@ -1,30 +0,0 @@
----
-title: Oasisctl Update Policy
-menuTitle: Update Policy
-weight: 17
----
-## oasisctl update policy
-
-Update a policy
-
-```
-oasisctl update policy [flags]
-```
-
-## Options
-```
- -h, --help help for policy
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl update](_index.md) - Update resources
-* [oasisctl update policy add](update-policy-add.md) - Add to a policy
-* [oasisctl update policy delete](update-policy-delete.md) - Delete from a policy
-
diff --git a/site/content/3.11/arangograph/oasisctl/update/update-private-endpoint-service.md b/site/content/3.11/arangograph/oasisctl/update/update-private-endpoint-service.md
deleted file mode 100644
index 81aa0917f6..0000000000
--- a/site/content/3.11/arangograph/oasisctl/update/update-private-endpoint-service.md
+++ /dev/null
@@ -1,38 +0,0 @@
----
-title: Oasisctl Update Private Endpoint Service
-menuTitle: Update Private Endpoint Service
-weight: 24
----
-## oasisctl update private endpoint service
-
-Update a Private Endpoint Service attached to an existing deployment
-
-```
-oasisctl update private endpoint service [flags]
-```
-
-## Options
-```
- --alternate-dns-name strings DNS names used for the deployment in the private network
- --aws-principal strings List of AWS Principals from which a Private Endpoint can be created (Format: [/Role/|/User/])
- --azure-client-subscription-id strings List of Azure subscription IDs from which a Private Endpoint can be created
- -d, --deployment-id string Identifier of the deployment that the private endpoint service is connected to
- --description string Description of the private endpoint service
- --enable-private-dns Enable private DNS (applicable for AWS only)
- --gcp-project strings List of GCP projects from which a Private Endpoint can be created
- -h, --help help for service
- --name string Name of the private endpoint service
- -o, --organization-id string Identifier of the organization
- -p, --project-id string Identifier of the project
-```
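-
-For example, to set the list of Azure subscriptions from which a Private
-Endpoint can be created (identifiers are placeholders):
-
-```
-oasisctl update private endpoint service -d <deployment-id> \
-  --azure-client-subscription-id <subscription-id>
-```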
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl update private endpoint](update-private-endpoint.md) -
-
diff --git a/site/content/3.11/arangograph/oasisctl/update/update-private-endpoint.md b/site/content/3.11/arangograph/oasisctl/update/update-private-endpoint.md
deleted file mode 100644
index a66ead3924..0000000000
--- a/site/content/3.11/arangograph/oasisctl/update/update-private-endpoint.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl Update Private Endpoint
-menuTitle: Update Private Endpoint
-weight: 23
----
-## oasisctl update private endpoint
-
-
-
-```
-oasisctl update private endpoint [flags]
-```
-
-## Options
-```
- -h, --help help for endpoint
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl update private](update-private.md) - Update private resources
-* [oasisctl update private endpoint service](update-private-endpoint-service.md) - Update a Private Endpoint Service attached to an existing deployment
-
diff --git a/site/content/3.11/arangograph/oasisctl/update/update-private.md b/site/content/3.11/arangograph/oasisctl/update/update-private.md
deleted file mode 100644
index 8c414612ac..0000000000
--- a/site/content/3.11/arangograph/oasisctl/update/update-private.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl Update Private
-menuTitle: Update Private
-weight: 22
----
-## oasisctl update private
-
-Update private resources
-
-```
-oasisctl update private [flags]
-```
-
-## Options
-```
- -h, --help help for private
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl update](_index.md) - Update resources
-* [oasisctl update private endpoint](update-private-endpoint.md) -
-
diff --git a/site/content/3.11/arangograph/oasisctl/update/update-project.md b/site/content/3.11/arangograph/oasisctl/update/update-project.md
deleted file mode 100644
index 0965a3684e..0000000000
--- a/site/content/3.11/arangograph/oasisctl/update/update-project.md
+++ /dev/null
@@ -1,32 +0,0 @@
----
-title: Oasisctl Update Project
-menuTitle: Update Project
-weight: 25
----
-## oasisctl update project
-
-Update a project the authenticated user has access to
-
-```
-oasisctl update project [flags]
-```
-
-## Options
-```
- --description string Description of the project
- -h, --help help for project
- --name string Name of the project
- -o, --organization-id string Identifier of the organization
- -p, --project-id string Identifier of the project
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl update](_index.md) - Update resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/update/update-role.md b/site/content/3.11/arangograph/oasisctl/update/update-role.md
deleted file mode 100644
index 58d7f2e8ab..0000000000
--- a/site/content/3.11/arangograph/oasisctl/update/update-role.md
+++ /dev/null
@@ -1,34 +0,0 @@
----
-title: Oasisctl Update Role
-menuTitle: Update Role
-weight: 26
----
-## oasisctl update role
-
-Update a role the authenticated user has access to
-
-```
-oasisctl update role [flags]
-```
-
-## Options
-```
- --add-permission strings Permissions to add to the role
- --description string Description of the role
- -h, --help help for role
- --name string Name of the role
- -o, --organization-id string Identifier of the organization
- --remove-permission strings Permissions to remove from the role
- -r, --role-id string Identifier of the role
-```
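-
-For example, to grant a role one permission and revoke another (identifiers
-are placeholders and the permission names are merely illustrative):
-
-```
-oasisctl update role -o <organization-id> -r <role-id> \
-  --add-permission backup.backup.list --remove-permission backup.backup.create
-```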
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl update](_index.md) - Update resources
-
diff --git a/site/content/3.11/arangograph/oasisctl/upgrade.md b/site/content/3.11/arangograph/oasisctl/upgrade.md
deleted file mode 100644
index 8d77aa3ecf..0000000000
--- a/site/content/3.11/arangograph/oasisctl/upgrade.md
+++ /dev/null
@@ -1,33 +0,0 @@
----
-title: Oasisctl Upgrade
-menuTitle: Upgrade
-weight: 29
----
-## oasisctl upgrade
-
-Upgrade Oasisctl tool
-
-## Synopsis
-Check the latest, compatible version and upgrade this tool to that.
-
-```
-oasisctl upgrade [flags]
-```
-
-## Options
-```
- -d, --dry-run Do an upgrade without applying the version.
- -f, --force Force an upgrade even if the versions match.
- -h, --help help for upgrade
-```
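-
-For example, to run through an upgrade without applying the new version:
-
-```
-oasisctl upgrade --dry-run
-```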
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl](options.md) - ArangoGraph Insights Platform
-
diff --git a/site/content/3.11/arangograph/oasisctl/version.md b/site/content/3.11/arangograph/oasisctl/version.md
deleted file mode 100644
index e8e5ee7c8b..0000000000
--- a/site/content/3.11/arangograph/oasisctl/version.md
+++ /dev/null
@@ -1,28 +0,0 @@
----
-title: Oasisctl Version
-menuTitle: Version
-weight: 30
----
-## oasisctl version
-
-Show the current version of this tool
-
-```
-oasisctl version [flags]
-```
-
-## Options
-```
- -h, --help help for version
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl](options.md) - ArangoGraph Insights Platform
-
diff --git a/site/content/3.11/arangograph/oasisctl/wait/_index.md b/site/content/3.11/arangograph/oasisctl/wait/_index.md
deleted file mode 100644
index 1ccac25259..0000000000
--- a/site/content/3.11/arangograph/oasisctl/wait/_index.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Oasisctl Wait
-menuTitle: Wait
-weight: 31
----
-## oasisctl wait
-
-Wait for a status change
-
-```
-oasisctl wait [flags]
-```
-
-## Options
-```
- -h, --help help for wait
-```
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl](../options.md) - ArangoGraph Insights Platform
-* [oasisctl wait deployment](wait-deployment.md) - Wait for a deployment to reach the ready status
-
diff --git a/site/content/3.11/arangograph/oasisctl/wait/wait-deployment.md b/site/content/3.11/arangograph/oasisctl/wait/wait-deployment.md
deleted file mode 100644
index ddc2c82d76..0000000000
--- a/site/content/3.11/arangograph/oasisctl/wait/wait-deployment.md
+++ /dev/null
@@ -1,32 +0,0 @@
----
-title: Oasisctl Wait Deployment
-menuTitle: Wait Deployment
-weight: 1
----
-## oasisctl wait deployment
-
-Wait for a deployment to reach the ready status
-
-```
-oasisctl wait deployment [flags]
-```
-
-## Options
-```
- -d, --deployment-id string Identifier of the deployment
- -h, --help help for deployment
- -o, --organization-id string Identifier of the organization
- -p, --project-id string Identifier of the project
- -t, --timeout duration How long to wait for the deployment to reach the ready status (default 20m0s)
-```
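-
-For example, to block until a deployment is ready, waiting at most 30 minutes
-(the identifier is a placeholder):
-
-```
-oasisctl wait deployment -d <deployment-id> -t 30m
-```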
-
-## Options Inherited From Parent Commands
-```
- --endpoint string API endpoint of the ArangoDB Oasis (default "api.cloud.arangodb.com")
- --format string Output format (table|json) (default "table")
- --token string Token used to authenticate at ArangoDB Oasis
-```
-
-## See also
-* [oasisctl wait](_index.md) - Wait for a status change
-
diff --git a/site/content/3.11/arangograph/organizations/_index.md b/site/content/3.11/arangograph/organizations/_index.md
deleted file mode 100644
index 083b746dda..0000000000
--- a/site/content/3.11/arangograph/organizations/_index.md
+++ /dev/null
@@ -1,111 +0,0 @@
----
-title: Organizations in ArangoGraph
-menuTitle: Organizations
-weight: 10
-description: >-
- How to manage organizations and what type of packages ArangoGraph offers
----
-An ArangoGraph organization is a container for projects. An organization
-typically represents a (commercial) entity such as a company, a company division,
-an institution, or a non-profit organization.
-
-**Organizations → Projects → Deployments**
-
-Users can be members of one or more organizations. However, you can only be a
-member of one _Free-to-try_ tier organization at a time.
-
-## How to switch between my organizations
-
-1. The first entry in the main navigation (with a double arrow icon) indicates
- the current organization.
-2. Click it to bring up a dropdown menu to select another organization of which you
- are a member.
-3. The overview will open for the selected organization, showing the number of
- projects, the tier and when it was created.
-
-
-
-
-
-## ArangoGraph Packages
-
-With the ArangoGraph Insights Platform, your organization can choose one of the
-following packages.
-
-### Free Trial
-
-ArangoGraph comes with a free-to-try tier that lets you test ArangoGraph for
-free for 14 days. You can get started quickly, without needing to enter a
-credit card.
-
-The free trial gives you access to:
-- One small deployment (4GB) in a region of your choice for 14 days
-- Local backups
-- One ArangoGraph Notebook for learning and data science
-
-After the trial period, your deployment will be deleted automatically.
-
-### On-Demand
-
-Add a payment method to gain access to ArangoGraph's full feature set.
-Pay monthly via a credit card for what you actually use.
-
-This package unlocks all ArangoGraph functionality, including:
-- Multiple and larger deployments
-- Backups to cloud storage, with multi-region support
-- Enhanced security features such as Private Endpoints
-
-### Committed
-
-Commit up-front for a year and pay via the Sales team. This package provides
-the same flexibility as On-Demand, but at a lower price.
-
-In addition, you gain access to:
-- 24/7 Premium Support
-- ArangoDB Professional Services Engagements
-- Ability to transact via the AWS and GCP marketplaces
-
-To take advantage of this, you need to get in touch with the ArangoDB
-team. [Contact us](https://www.arangodb.com/contact/) for more details.
-
-## How to unlock all features
-
-You can unlock all features in ArangoGraph at any time by adding your billing
-details and a payment method. As soon as you have added a payment method, all
-ArangoGraph functionalities are immediately unlocked. From that point on, your
-deployments will no longer expire and you can create more and larger deployments.
-
-See [Billing: How to add billing details / payment methods](billing.md)
-
-
-
-## How to create a new organization
-
-See [My Account: How to create a new organization](../my-account.md#how-to-create-a-new-organization)
-
-## How to restrict access to an organization
-
-If you want to restrict access to an organization, you can do it by specifying which authentication providers are accepted for users trying to access the organization. For more information, refer to the [Access Control](../security-and-access-control/_index.md#restricting-access-to-organizations) section.
-
-## How to delete the current organization
-
-{{< danger >}}
-Removing an organization implies the deletion of projects and deployments.
-This operation cannot be undone and **all deployment data will be lost**.
-Please proceed with caution.
-{{< /danger >}}
-
-1. Click **Overview** in the **Organization** section of the main navigation.
-2. Open the **Danger zone** tab.
-3. Click the **Delete organization** button.
-4. Enter `Delete!` to confirm and click **Yes**.
-
-{{< info >}}
-If you are no longer a member of any organization, then a new organization is
-created for you when you log in again.
-{{< /info >}}
-
-{{< tip >}}
-If the organization has a locked resource (a project or a deployment), you need to [unlock](../security-and-access-control/_index.md#locked-resources)
-that resource first to be able to delete the organization.
-{{< /tip >}}
diff --git a/site/content/3.11/arangograph/organizations/billing.md b/site/content/3.11/arangograph/organizations/billing.md
deleted file mode 100644
index 9b892b5500..0000000000
--- a/site/content/3.11/arangograph/organizations/billing.md
+++ /dev/null
@@ -1,36 +0,0 @@
----
-title: Billing in ArangoGraph
-menuTitle: Billing
-weight: 10
-description: >-
- How to manage billing details and payment methods in ArangoGraph
----
-## How to add billing details
-
-1. In the main navigation menu, click the **Organization** icon.
-2. Click **Billing** in the **Organization** section.
-3. In the **Billing Details** section, click **Edit**.
-4. Enter your company name, billing address, and EU VAT identification number (if applicable).
-5. Optionally, enter the email address(es) to which invoices should be sent
-   automatically.
-6. Click **Save**.
-
-
-
-## How to add a payment method
-
-1. In the main navigation menu, click the **Organization** icon.
-2. Click **Billing** in the **Organization** section.
-3. In the **Payment methods** section, click **Add**.
-4. Fill out the form with your credit card details. Currently, a credit card is the only available payment method.
-5. Click **Save**.
-
-
-
-{{% comment %}}
-TODO: Need screenshot with invoice
-
-### How to view invoices
-
-
-{{% /comment %}}
diff --git a/site/content/3.11/arangograph/organizations/credits-and-usage.md b/site/content/3.11/arangograph/organizations/credits-and-usage.md
deleted file mode 100644
index 34dafb8488..0000000000
--- a/site/content/3.11/arangograph/organizations/credits-and-usage.md
+++ /dev/null
@@ -1,147 +0,0 @@
----
-title: Credits & Usage in ArangoGraph
-menuTitle: Credits & Usage
-weight: 15
-description: >-
- Credits give you access to a flexible prepaid model, so you can allocate them
- across multiple deployments as needed
----
-{{< info >}}
-Credits are only available if your organization has signed up for
-ArangoGraph's [Committed](../organizations/_index.md#committed) package.
-{{< /info >}}
-
-The ArangoGraph credit model is a versatile prepaid model that allows you to
-purchase credits and use them in a flexible way, based on what you have running
-in ArangoGraph.
-
-Instead of purchasing a particular deployment for a year, you can purchase a
-number of ArangoGraph credits that expire a year after purchase. These credits
-are then consumed over that time period, based on the deployments you run
-in ArangoGraph.
-
-For example, a OneShard (three nodes) A64 deployment consumes more credits per
-hour than a smaller deployment such as A8. If you run multiple deployments,
-such as pre-production environments or deployments for different use cases,
-each of them consumes
-from the same credit balance. However, if you are not running any deployments
-and do not have any backup storage, then none of your credits will be consumed.
-
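-As a purely illustrative calculation (actual per-hour rates differ per node
-size and are shown in the **Pricing** section), if one A8 node consumed
-1 credit per hour, a OneShard deployment with three A8 nodes running for
-30 days would consume:
-
-```
-3 nodes x 1 credit/hour x 24 hours x 30 days = 2,160 credits
-```
-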
-{{< tip >}}
-To purchase credits for your organization, you need to get in touch with the
-ArangoDB team. [Contact us](https://www.arangodb.com/contact/) for more details.
-{{< /tip >}}
-
-There are a number of benefits that ArangoGraph credits provide:
-- **Adaptability**: The pre-paid credit model allows you to adapt your usage to
- changing project requirements or fluctuating workloads. By enabling the use of
- credits for various instance types and sizes, you can easily adjust your
- resource allocation.
-- **Efficient handling of resources**: With the ability to purchase credits in
- advance, you can better align your needs in terms of resources and costs.
- You can purchase credits in bulk and then allocate them as needed.
-- **Workload Optimization**: By having a clear view of credit consumption and
- remaining balance, you can identify inefficiencies to further optimize your
- infrastructure, resulting in cost savings and better performance.
-
-## How to view the credit usage
-
-1. In the main navigation, click the **Organization** icon.
-2. Click **Credits & Usage** in the **Organization** section.
-3. In the **Credits & Usage** page, you can:
- - See the remaining credit balance.
- - Track your total credit balance.
- - See a projection of when you will run out of credits, based on the last 30 days of usage.
- - Get a detailed consumption report in PDF format that shows:
- - The number of credits you had at the start of the month.
- - The number of credits consumed in the month.
- - The number of credits remaining.
- - The number of credits consumed for each deployment.
-
-
-
-## FAQs
-
-### Are there any configuration constraints for using the credits?
-
-No. Credits are designed to be used completely flexibly. You can use all of your
-credits for multiple small deployments (e.g., A8) or you can use them for a single
-large deployment (e.g., A256), or even multiple large deployments, as long as you
-have enough credits remaining.
-
-### What is the flexibility of moving up or down in configuration size of the infrastructure?
-
-You can move up in configuration size at any point by editing your deployment
-within ArangoGraph, but only once every 6 hours to allow for in-place disk
-expansion.
-
-### Is there a limit to how many deployments I can use my credits on?
-
-There is no specific limit to the number of deployments you can use your credits
-on. The credit model is designed to provide you with the flexibility to allocate
-credits across multiple deployments as needed. This enables you to effectively
-manage and distribute your resources according to your specific requirements and
-priorities. However, it is essential to monitor your credit consumption to ensure
-that you have sufficient credits to cover your deployments.
-
-### Do the credits I purchase expire?
-
-Yes, credits expire 1 year after purchase. You should ensure that you consume
-all of these credits within the year.
-
-### Can I make multiple purchases of credits within a year?
-
-As an organization’s usage of ArangoGraph grows, particularly in the initial
-phases of application development and early production release, it is common
-to purchase a smaller credit package that is later supplemented by a larger
-credit package part-way through the initial credit expiry term.
-In this case, all sets of credits will be available for ArangoGraph consumption
-as a single credit balance. The credits with the earlier expiry date are consumed
-first to avoid credit expiry where possible.
-
-### Can I purchase a specific number of credits (e.g., 3361 or 4185)?
-
-ArangoGraph offers a variety of predefined credit packages designed to
-accommodate different needs and stages of the application lifecycle.
-For any credit purchasing needs, please [contact us](https://www.arangodb.com/contact/)
-and we are happy to help find an appropriate package for you.
-
-### How quickly will the credits I purchase be consumed?
-
-The rate at which your purchased credits will be consumed depends on several
-factors, including the type and size of instances you deploy, the amount of
-resources used, and the duration of usage. Each machine size has an hourly credit
-consumption rate, and the overall rate of credit consumption will increase for
-larger sizes or for more machines/deployments. Credits will also be consumed for
-any variable usage charges such as outbound network traffic and backup storage.
-
-### How can I see how many credits I have remaining?
-
-All details about credits, including how many credits have been purchased,
-how many remain, and how they are being consumed are available in the
-**Credits & Usage** page within the ArangoGraph web interface.
-
-### I have a large sharded deployment, how do I know how many credits it will consume?
-
-If you are using credits, then you will be able to see how many credits your
-configured deployment will consume when [creating](../deployments/_index.md#how-to-create-a-new-deployment)
-or [editing a deployment](../deployments/_index.md#how-to-edit-a-deployment).
-
-You can download a detailed consumption report in the
-[**Credits & Usage** section](#how-to-view-the-credit-usage). It shows you the
-number of credits consumed by any deployment you are creating or editing.
-
-All users can see the credit price of each node size in the **Pricing** section.
-
-### What happens if I run out of credits?
-
-If you run out of credits, your access to ArangoGraph's services and resources
-will be temporarily suspended until you purchase additional credits.
-
-### Can I buy credits for a short time period (e.g. 2 months)?
-
-No, you cannot buy credits with an expiry of less than 12 months.
-If you require credits for a shorter time frame, such as 2 months, you can still
-purchase one of the standard credit packages and consume the credits as needed
-during that time. You may opt for a smaller credit package that aligns with your
-expected usage during the desired period, rather than the full year’s expected usage.
-Although the credits will have a longer expiration period, this allows you to have
-the flexibility of utilizing the remaining credits for any future needs.
\ No newline at end of file
diff --git a/site/content/3.11/arangograph/organizations/users-and-groups.md b/site/content/3.11/arangograph/organizations/users-and-groups.md
deleted file mode 100644
index abed36697b..0000000000
--- a/site/content/3.11/arangograph/organizations/users-and-groups.md
+++ /dev/null
@@ -1,125 +0,0 @@
----
-title: Users and Groups in ArangoGraph
-menuTitle: Users & Groups
-weight: 5
-description: >-
- How to manage individual members and user groups in ArangoGraph
----
-## Users, groups & members
-
-When you use ArangoGraph, you are logged in as a user.
-A user has properties such as a name and email address.
-Most importantly, a user serves as the identity of a person.
-
-A user is a member of one or more organizations in ArangoGraph.
-You can become a member of an organization in the following ways:
-
-- Create a new organization. You will become the first member and owner of that
- organization.
-- Be invited to join an organization. Once the invited user accepts the
-  invitation, they become a member of the organization.
-
-If the number of members of an organization becomes large, it helps to group
-users. In ArangoGraph, a group is part of an organization and contains
-a list of users. All users of a group must be members of the owning organization.
-
-In the **People** section of the dashboard you can manage users, groups and
-invites for the organization.
-
-To edit permissions of members see [Access Control](../security-and-access-control/_index.md).
-
-## Members
-
-Members are the users that can access an organization.
-
-
-
-### How to add a new member to the organization
-
-1. In the main navigation, click the __Organization__ icon.
-2. Click __Members__ in the __People__ section.
-3. Optionally, click the __Invites__ entry.
-4. Click the __Invite new member__ button.
-5. In the form that appears, enter the email address of the person you want to
- invite.
-6. Click the __Create__ button.
-7. An email with an organization invite will now be sent to the specified
- email address.
-8. After accepting the invite, the person will be added to the organization
- [members](#members).
-
-
-
-### How to respond to an organization invite
-
-See [My Account: How to respond to my invites](../my-account.md#how-to-respond-to-my-invites)
-
-### How to remove a member from the organization
-
-1. Click __Members__ in the __People__ section of the main navigation.
-2. Delete a member by pressing the __recycle bin__ icon in the __Actions__ column.
-3. Confirm the deletion in the dialog that pops up.
-
-{{< info >}}
-You cannot delete members who are organization owners.
-{{< /info >}}
-
-### How to make a member an organization owner
-
-1. Click __Members__ in the __People__ section of the main navigation.
-2. You can convert a member to an organization owner by pressing the __Key__ icon
- in the __Actions__ column.
-3. You can convert a member back to a normal user by pressing the __User__ icon
- in the __Actions__ column.
-
-## Groups
-
-A group is a defined set of members. Groups can then be bound to roles. These
-bindings contribute to the respective organization, project or deployment policy.
-
-
-
-### How to create a new group
-
-1. Click __Groups__ in the __People__ section of the main navigation.
-2. Press the __New group__ button.
-3. Enter a name and optionally a description for your new group.
-4. Select the members you want to be part of the group.
-5. Press the __Create__ button.
-
-
-
-### How to view, edit or remove a group
-
-1. Click __Groups__ in the __People__ section of the main navigation.
-2. Click an icon in the __Actions__ column:
- - __Eye__: View group
- - __Pencil__: Edit group
- - __Recycle bin__: Delete group
-
-You can also click a group name to view it. There are buttons to __Edit__ and
-__Delete__ the currently viewed group.
-
-
-
-{{< info >}}
-The groups __Organization members__ and __Organization owners__ are virtual groups
-and cannot be changed. They always reflect the current set of organization
-members and owners.
-{{< /info >}}
-
-## Invites
-
-### How to create a new organization invite
-
-See [How to add a new member to the organization](#how-to-add-a-new-member-to-the-organization)
-
-### How to view the status of invitations
-
-1. Click __Invites__ in the __People__ section of the main navigation.
-2. The created invites are displayed, grouped by status __Pending__,
- __Accepted__ and __Rejected__.
-3. You may delete pending invites by clicking the __recycle bin__ icon in the
- __Actions__ column.
-
-
diff --git a/site/content/3.11/arangograph/projects.md b/site/content/3.11/arangograph/projects.md
deleted file mode 100644
index f4efd27833..0000000000
--- a/site/content/3.11/arangograph/projects.md
+++ /dev/null
@@ -1,82 +0,0 @@
----
-title: Projects in ArangoGraph
-menuTitle: Projects
-weight: 15
-description: >-
- How to manage projects and IP allowlists in ArangoGraph
----
-ArangoGraph projects can represent organizational units such as teams,
-product groups, or environments (e.g., staging vs. production). You can have any
-number of projects under one organization.
-
-**Organizations → Projects → Deployments**
-
-Projects are containers for related deployments, certificates, and IP allowlists.
-Projects also come with their own policy for access control. You can have any
-number of deployments under one project.
-
-
-
-## How to create a new project
-
-1. In the main navigation, click the __Dashboard__ icon.
-2. Click __Projects__ in the __Dashboard__ section.
-3. Click the __New project__ button.
-4. Enter a name and optionally a description for your new project.
-5. Click the __Create__ button.
-6. You will be taken to the project page.
-7. To change the name or description, click either at the top of the page.
-
-
-
-
-
-{{< info >}}
-Projects contain exactly **one policy**. Within that policy, you can define
-role bindings to regulate access control on a project level.
-{{< /info >}}
-
-## How to create a new deployment
-
-See [Deployments: How to create a new deployment](deployments/_index.md#how-to-create-a-new-deployment)
-
-## How to delete a project
-
-{{< danger >}}
-Deleting a project will delete contained deployments, certificates & IP allowlists.
-This operation is **irreversible**.
-{{< /danger >}}
-
-1. Click __Projects__ in the __Dashboard__ section of the main navigation.
-2. Click the __recycle bin__ icon in the __Actions__ column of the project to be deleted.
-3. Enter `Delete!` to confirm and click __Yes__.
-
-{{< tip >}}
-If the project has a locked deployment, you need to [unlock](security-and-access-control/_index.md#locked-resources)
-it first to be able to delete the project.
-{{< /tip >}}
-
-## How to manage IP allowlists
-
-IP allowlists let you limit access to your deployment to certain IP ranges.
-Using them is optional, but strongly recommended.
-
-You can create an allowlist as part of a project.
-
-1. Click a project name in the __Projects__ section of the main navigation.
-2. Click the __Security__ entry.
-3. In the __IP allowlists__ section, click:
- - The __New IP allowlist__ button to create a new allowlist.
- When creating or editing a list, you can add comments
- in the __Allowed CIDR ranges (1 per line)__ section.
- Everything after `//` or `#` is considered a comment until the end of the line.
- - A name or the __eye__ icon in the __Actions__ column to view the allowlist.
- - The __pencil__ icon to edit the allowlist.
- You can also view the allowlist and click the __Edit__ button.
- - The __recycle bin__ icon to delete the allowlist.
-
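-For example, an allowlist with inline comments could look like this
-(the ranges are illustrative):
-
-```
-203.0.113.0/24   // Office network
-198.51.100.0/24  # VPN gateway
-```
-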
-## How to manage role bindings
-
-See:
-- [Access Control: How to view, edit or remove role bindings of a policy](security-and-access-control/_index.md#how-to-view-edit-or-remove-role-bindings-of-a-policy)
-- [Access Control: How to add a role binding to a policy](security-and-access-control/_index.md#how-to-add-a-role-binding-to-a-policy)
diff --git a/site/content/3.11/arangograph/security-and-access-control/_index.md b/site/content/3.11/arangograph/security-and-access-control/_index.md
deleted file mode 100644
index fa37f9af13..0000000000
--- a/site/content/3.11/arangograph/security-and-access-control/_index.md
+++ /dev/null
@@ -1,699 +0,0 @@
----
-title: Security and access control in ArangoGraph
-menuTitle: Security and Access Control
-weight: 45
-description: >-
- This guide explains which access control concepts are available in
- ArangoGraph and how to use them
----
-The ArangoGraph Insights Platform has a structured set of resources that are subject to security and
-access control:
-
-- Organizations
-- Projects
-- Deployments
-
-For each of these resources, you can perform various operations.
-For example, you can create a project in an organization and create a deployment
-inside a project.
-
-## Locked resources
-
-In ArangoGraph, you can lock the resources to prevent accidental deletion. When
-a resource is locked, it cannot be deleted and must be unlocked first.
-
-The hierarchical structure of the resources (organization-project-deployment)
-is used in the locking functionality: if a child resource is locked
-(for example, a deployment), you cannot delete the parent project without
-unlocking that deployment first.
-
-{{< info >}}
-If you lock a backup policy of a deployment, or an IP allowlist, CA certificate,
-or IAM provider of a project, it is still possible to delete
-the corresponding parent resource without unlocking those child resources first.
-{{< /info >}}
-
-## Policy
-
-Various actions in ArangoGraph require different permissions, which can be
-granted to users via **roles**.
-
-The association of a member with a role is called a **role binding**.
-All role bindings of a resource comprise a **policy**.
-
-Roles can be bound on the organization, project, and deployment level (listed
-from the highest to the lowest level, with lower levels inheriting permissions
-from their parents). This means that there is a unique policy per resource (an organization,
-a project, or a deployment).
-
-For example, an organization has exactly one policy,
-which binds roles to members of the organization. These bindings are used to
-give the users permissions to perform operations in this organization.
-This is useful when, as an organization owner, you need to extend the permissions
-for an organization member.
-
-{{< info >}}
-Permissions linked to predefined roles vary between organization owners and
-organization members. If you need to extend permissions for an organization
-member, you can create a new role binding. The complete list of roles and
-their respective permissions for both organization owners and members can be
-viewed on the **Policy** page of an organization within the ArangoGraph dashboard.
-{{< /info >}}
-
-### How to view, edit, or remove role bindings of a policy
-
-Decide whether you want to edit the policy for an organization, a project,
-or a deployment:
-
-- **Organization**: In the main navigation, click the __Organization__ icon and
- then click __Policy__.
-- **Project**: In the main navigation, click the __Dashboard__ icon, then click
- __Projects__, click the name of the desired project, and finally click __Policy__.
-- **Deployment**: In the main navigation, click the __Dashboard__ icon, then
- click __Deployments__, click the name of the desired deployment, and finally
- click __Policy__.
-
-To delete a role binding, click the **Recycle Bin** icon in the **Actions** column.
-
-{{< info >}}
-Currently, you cannot edit a role binding; you can only delete it.
-{{< /info >}}
-
-
-
-### How to add a role binding to a policy
-
-1. Navigate to the **Policy** tab of an organization, a project or a deployment.
-2. Click the **New role binding** button.
-3. Select one or more users and/or groups.
-4. Select one or more roles you want to assign to the specified members.
-5. Click **Create**.
-
-
-
-## Roles
-
-Operations on resources in ArangoGraph require zero or more permissions, where
-zero means that only authentication is required. Since the number of
-permissions is large and very fine-grained, it is not practical to assign
-permissions directly to users. Instead, ArangoGraph uses **roles**.
-
-A role is a set of permissions. Roles can be bound to groups (preferably)
-or individual users. You can create such bindings for the respective organization,
-project, or deployment policy.
-
-There are predefined roles, but you can also create custom ones.
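-
-For example, you can list the roles of an organization, including the
-predefined ones, with [Oasisctl](../oasisctl/_index.md), where
-`<organization-id>` is a placeholder for your organization identifier:
-
-```
-oasisctl list roles --organization-id <organization-id>
-```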
-
-
-
-### Predefined roles
-
-Predefined roles are created by ArangoGraph and group related permissions together.
-An example of a predefined role is `deployment-viewer`. This role
-contains all permissions needed to view deployments in a project.
-
-Predefined roles cannot be deleted. Note that permissions linked to predefined
-roles vary between organization owners and organization members.
-
-{{% comment %}}
-Command to generate below list with (Git)Bash:
-
-export OASIS_TOKEN=''
-./oasisctl list roles --organization-id <organization-id> --format json | jq -r '.[] | select(.predefined == true) | "**\(.description)** (`\(.id)`):\n\(.permissions | split(", ") | map("- `\(.)`\n") | join(""))"'
-{{% /comment %}}
-
-{{< details summary="List of predefined roles and their permissions" >}}
-
-{{< tip >}}
-The roles below are described following this pattern:
-
-**Role description** (`role ID`):
-- `Permission`
-{{< /tip >}}
-
-**Audit Log Admin** (`auditlog-admin`):
-- `audit.auditlog.create`
-- `audit.auditlog.delete`
-- `audit.auditlog.get`
-- `audit.auditlog.list`
-- `audit.auditlog.set-default`
-- `audit.auditlog.test-https-post-destination`
-- `audit.auditlog.update`
-
-**Audit Log Archive Admin** (`auditlog-archive-admin`):
-- `audit.auditlogarchive.delete`
-- `audit.auditlogarchive.get`
-- `audit.auditlogarchive.list`
-
-**Audit Log Archive Viewer** (`auditlog-archive-viewer`):
-- `audit.auditlogarchive.get`
-- `audit.auditlogarchive.list`
-
-**Audit Log Attachment Admin** (`auditlog-attachment-admin`):
-- `audit.auditlogattachment.create`
-- `audit.auditlogattachment.delete`
-- `audit.auditlogattachment.get`
-
-**Audit Log Attachment Viewer** (`auditlog-attachment-viewer`):
-- `audit.auditlogattachment.get`
-
-**Audit Log Event Admin** (`auditlog-event-admin`):
-- `audit.auditlogevent.delete`
-- `audit.auditlogevents.get`
-
-**Audit Log Event Viewer** (`auditlog-event-viewer`):
-- `audit.auditlogevents.get`
-
-**Audit Log Viewer** (`auditlog-viewer`):
-- `audit.auditlog.get`
-- `audit.auditlog.list`
-
-**Backup Administrator** (`backup-admin`):
-- `backup.backup.copy`
-- `backup.backup.create`
-- `backup.backup.delete`
-- `backup.backup.download`
-- `backup.backup.get`
-- `backup.backup.list`
-- `backup.backup.restore`
-- `backup.backup.update`
-- `backup.feature.get`
-- `data.deployment.restore-backup`
-
-**Backup Viewer** (`backup-viewer`):
-- `backup.backup.get`
-- `backup.backup.list`
-- `backup.feature.get`
-
-**Backup Policy Administrator** (`backuppolicy-admin`):
-- `backup.backuppolicy.create`
-- `backup.backuppolicy.delete`
-- `backup.backuppolicy.get`
-- `backup.backuppolicy.list`
-- `backup.backuppolicy.update`
-- `backup.feature.get`
-
-**Backup Policy Viewer** (`backuppolicy-viewer`):
-- `backup.backuppolicy.get`
-- `backup.backuppolicy.list`
-- `backup.feature.get`
-
-**Billing Administrator** (`billing-admin`):
-- `billing.config.get`
-- `billing.config.set`
-- `billing.invoice.get`
-- `billing.invoice.get-preliminary`
-- `billing.invoice.get-statistics`
-- `billing.invoice.list`
-- `billing.organization.get`
-- `billing.paymentmethod.create`
-- `billing.paymentmethod.delete`
-- `billing.paymentmethod.get`
-- `billing.paymentmethod.get-default`
-- `billing.paymentmethod.list`
-- `billing.paymentmethod.set-default`
-- `billing.paymentmethod.update`
-- `billing.paymentprovider.list`
-
-**Billing Viewer** (`billing-viewer`):
-- `billing.config.get`
-- `billing.invoice.get`
-- `billing.invoice.get-preliminary`
-- `billing.invoice.get-statistics`
-- `billing.invoice.list`
-- `billing.organization.get`
-- `billing.paymentmethod.get`
-- `billing.paymentmethod.get-default`
-- `billing.paymentmethod.list`
-- `billing.paymentprovider.list`
-
-**CA Certificate Administrator** (`cacertificate-admin`):
-- `crypto.cacertificate.create`
-- `crypto.cacertificate.delete`
-- `crypto.cacertificate.get`
-- `crypto.cacertificate.list`
-- `crypto.cacertificate.set-default`
-- `crypto.cacertificate.update`
-
-**CA Certificate Viewer** (`cacertificate-viewer`):
-- `crypto.cacertificate.get`
-- `crypto.cacertificate.list`
-
-**Dataloader Administrator** (`dataloader-admin`):
-- `dataloader.deployment.import`
-
-**Deployment Administrator** (`deployment-admin`):
-- `data.cpusize.list`
-- `data.deployment.create`
-- `data.deployment.create-test-database`
-- `data.deployment.delete`
-- `data.deployment.get`
-- `data.deployment.list`
-- `data.deployment.pause`
-- `data.deployment.rebalance-shards`
-- `data.deployment.resume`
-- `data.deployment.rotate-server`
-- `data.deployment.update`
-- `data.deployment.update-scheduled-root-password-rotation`
-- `data.deploymentfeatures.get`
-- `data.deploymentmodel.list`
-- `data.deploymentprice.calculate`
-- `data.diskperformance.list`
-- `data.limits.get`
-- `data.nodesize.list`
-- `data.presets.list`
-- `monitoring.logs.get`
-- `monitoring.metrics.get`
-- `notification.deployment-notification.list`
-- `notification.deployment-notification.mark-as-read`
-- `notification.deployment-notification.mark-as-unread`
-
-**Deployment Content Administrator** (`deployment-content-admin`):
-- `data.cpusize.list`
-- `data.deployment.create-test-database`
-- `data.deployment.get`
-- `data.deployment.list`
-- `data.deploymentcredentials.get`
-- `data.deploymentfeatures.get`
-- `data.deploymentmodel.list`
-- `data.deploymentprice.calculate`
-- `data.diskperformance.list`
-- `data.limits.get`
-- `data.nodesize.list`
-- `data.presets.list`
-- `monitoring.logs.get`
-- `monitoring.metrics.get`
-- `notification.deployment-notification.list`
-- `notification.deployment-notification.mark-as-read`
-- `notification.deployment-notification.mark-as-unread`
-
-**Deployment Full Access User** (`deployment-full-access-user`):
-- `data.deployment.full-access`
-
-**Deployment Read Only User** (`deployment-read-only-user`):
-- `data.deployment.read-only-access`
-
-**Deployment Viewer** (`deployment-viewer`):
-- `data.cpusize.list`
-- `data.deployment.get`
-- `data.deployment.list`
-- `data.deploymentfeatures.get`
-- `data.deploymentmodel.list`
-- `data.deploymentprice.calculate`
-- `data.diskperformance.list`
-- `data.limits.get`
-- `data.nodesize.list`
-- `data.presets.list`
-- `monitoring.metrics.get`
-- `notification.deployment-notification.list`
-- `notification.deployment-notification.mark-as-read`
-- `notification.deployment-notification.mark-as-unread`
-
-**Deployment Profile Viewer** (`deploymentprofile-viewer`):
-- `deploymentprofile.deploymentprofile.list`
-
-**Example Datasets Viewer** (`exampledataset-viewer`):
-- `example.exampledataset.get`
-- `example.exampledataset.list`
-
-**Example Dataset Installation Administrator** (`exampledatasetinstallation-admin`):
-- `example.exampledatasetinstallation.create`
-- `example.exampledatasetinstallation.delete`
-- `example.exampledatasetinstallation.get`
-- `example.exampledatasetinstallation.list`
-- `example.exampledatasetinstallation.update`
-
-**Example Dataset Installation Viewer** (`exampledatasetinstallation-viewer`):
-- `example.exampledatasetinstallation.get`
-- `example.exampledatasetinstallation.list`
-
-**Group Administrator** (`group-admin`):
-- `iam.group.create`
-- `iam.group.delete`
-- `iam.group.get`
-- `iam.group.list`
-- `iam.group.update`
-
-**Group Viewer** (`group-viewer`):
-- `iam.group.get`
-- `iam.group.list`
-
-**IAM provider Administrator** (`iamprovider-admin`):
-- `security.iamprovider.create`
-- `security.iamprovider.delete`
-- `security.iamprovider.get`
-- `security.iamprovider.list`
-- `security.iamprovider.set-default`
-- `security.iamprovider.update`
-
-**IAM provider Viewer** (`iamprovider-viewer`):
-- `security.iamprovider.get`
-- `security.iamprovider.list`
-
-**IP allowlist Administrator** (`ipwhitelist-admin`):
-- `security.ipallowlist.create`
-- `security.ipallowlist.delete`
-- `security.ipallowlist.get`
-- `security.ipallowlist.list`
-- `security.ipallowlist.update`
-
-**IP allowlist Viewer** (`ipwhitelist-viewer`):
-- `security.ipallowlist.get`
-- `security.ipallowlist.list`
-
-**Metrics Administrator** (`metrics-admin`):
-- `metrics.endpoint.get`
-- `metrics.token.create`
-- `metrics.token.delete`
-- `metrics.token.get`
-- `metrics.token.list`
-- `metrics.token.revoke`
-- `metrics.token.update`
-
-**Migration Administrator** (`migration-admin`):
-- `replication.deploymentmigration.create`
-- `replication.deploymentmigration.delete`
-- `replication.deploymentmigration.get`
-
-**MLServices Admin** (`mlservices-admin`):
-- `ml.mlservices.get`
-
-**Notebook Administrator** (`notebook-admin`):
-- `notebook.model.list`
-- `notebook.notebook.create`
-- `notebook.notebook.delete`
-- `notebook.notebook.get`
-- `notebook.notebook.list`
-- `notebook.notebook.pause`
-- `notebook.notebook.resume`
-- `notebook.notebook.update`
-
-**Notebook Executor** (`notebook-executor`):
-- `notebook.notebook.execute`
-
-**Notebook Viewer** (`notebook-viewer`):
-- `notebook.model.list`
-- `notebook.notebook.get`
-- `notebook.notebook.list`
-
-**Organization Administrator** (`organization-admin`):
-- `billing.organization.get`
-- `resourcemanager.organization-invite.create`
-- `resourcemanager.organization-invite.delete`
-- `resourcemanager.organization-invite.get`
-- `resourcemanager.organization-invite.list`
-- `resourcemanager.organization-invite.update`
-- `resourcemanager.organization.delete`
-- `resourcemanager.organization.get`
-- `resourcemanager.organization.update`
-
-**Organization Viewer** (`organization-viewer`):
-- `billing.organization.get`
-- `resourcemanager.organization-invite.get`
-- `resourcemanager.organization-invite.list`
-- `resourcemanager.organization.get`
-
-**Policy Administrator** (`policy-admin`):
-- `iam.policy.get`
-- `iam.policy.update`
-
-**Policy Viewer** (`policy-viewer`):
-- `iam.policy.get`
-
-**Prepaid Deployment Viewer** (`prepaid-deployment-viewer`):
-- `prepaid.prepaiddeployment.get`
-- `prepaid.prepaiddeployment.list`
-
-**Private Endpoint Service Administrator** (`privateendpointservice-admin`):
-- `network.privateendpointservice.create`
-- `network.privateendpointservice.get`
-- `network.privateendpointservice.get-by-deployment-id`
-- `network.privateendpointservice.get-feature`
-- `network.privateendpointservice.update`
-
-**Private Endpoint Service Viewer** (`privateendpointservice-viewer`):
-- `network.privateendpointservice.get`
-- `network.privateendpointservice.get-by-deployment-id`
-- `network.privateendpointservice.get-feature`
-
-**Project Administrator** (`project-admin`):
-- `resourcemanager.project.create`
-- `resourcemanager.project.delete`
-- `resourcemanager.project.get`
-- `resourcemanager.project.list`
-- `resourcemanager.project.update`
-
-**Project Viewer** (`project-viewer`):
-- `resourcemanager.project.get`
-- `resourcemanager.project.list`
-
-**Replication Administrator** (`replication-admin`):
-- `replication.deployment.clone-from-backup`
-- `replication.deploymentreplication.get`
-- `replication.deploymentreplication.update`
-- `replication.migration-forwarder.upgrade-connection`
-
-**Role Administrator** (`role-admin`):
-- `iam.role.create`
-- `iam.role.delete`
-- `iam.role.get`
-- `iam.role.list`
-- `iam.role.update`
-
-**Role Viewer** (`role-viewer`):
-- `iam.role.get`
-- `iam.role.list`
-
-**SCIM Administrator** (`scim-admin`):
-- `scim.user.add`
-- `scim.user.delete`
-- `scim.user.get`
-- `scim.user.list`
-- `scim.user.update`
-
-**User Administrator** (`user-admin`):
-- `iam.user.get-personal-data`
-- `iam.user.update`
-
-{{< /details >}}
-
-### How to create a custom role
-
-1. In the main navigation menu, click **Access Control**.
-2. On the **Roles** tab, click **New role**.
-3. Enter a name and optionally a description for the new role.
-4. Select the required permissions.
-5. Click **Create**.
-
-
-
-### How to view, edit or remove a custom role
-
-1. In the main navigation menu, click **Access Control**.
-2. On the **Roles** tab, click:
- - A role name or the **eye** icon in the **Actions** column to view the role.
- - The **pencil** icon in the **Actions** column to edit the role.
- You can also view a role and click the **Edit** button in the detail view.
- - The **recycle bin** icon to delete the role.
- You can also view a role and click the **Delete** button in the detail view.
-
-## Permissions
-
-Each operation done on a resource requires zero or more **permissions**, where
-zero means that only authentication is required.
-A permission is a constant string such as `resourcemanager.project.create`,
-following this schema: `<api>.<kind>.<verb>`.
-
-Permissions are solely defined by the ArangoGraph API.
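-
-You can retrieve the full list of permissions via Oasisctl:
-
-```
-oasisctl list permissions
-```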
-
-{{% comment %}}
-Retrieved with the below command, with manual adjustments:
-oasisctl list permissions
-
-Note that if the tier is "internal", there is an `internal-dashboard` API that should be excluded from the list below!
-{{% /comment %}}
-
-| API | Kind | Verbs
-|:--------------------|:-----------------------------|:-------------------------------------------
-| `audit` | `auditlogarchive` | `delete`, `get`, `list`
-| `audit` | `auditlogattachment` | `create`, `delete`, `get`
-| `audit` | `auditlogevents` | `get`
-| `audit` | `auditlogevent` | `delete`
-| `audit` | `auditlog` | `create`, `delete`, `get`, `list`, `set-default`, `test-https-post-destination`, `update`
-| `backup` | `backuppolicy` | `create`, `delete`, `get`, `list`, `update`
-| `backup` | `backup` | `copy`, `create`, `delete`, `download`, `get`, `list`, `restore`, `update`
-| `backup` | `feature` | `get`
-| `billing` | `config` | `get`, `set`
-| `billing` | `invoice` | `get`, `get-preliminary`, `get-statistics`, `list`
-| `billing` | `organization` | `get`
-| `billing` | `paymentmethod` | `create`, `delete`, `get`, `get-default`, `list`, `set-default`, `update`
-| `billing` | `paymentprovider` | `list`
-| `crypto` | `cacertificate` | `create`, `delete`, `get`, `list`, `set-default`, `update`
-| `dataloader` | `deployment` | `import`
-| `data` | `cpusize` | `list`
-| `data` | `deploymentcredentials` | `get`
-| `data` | `deploymentfeatures` | `get`
-| `data` | `deploymentmodel` | `list`
-| `data` | `deploymentprice` | `calculate`
-| `data` | `deployment` | `create`, `create-test-database`, `delete`, `full-access`, `get`, `list`, `pause`, `read-only-access`, `rebalance-shards`, `restore-backup`, `resume`, `rotate-server`, `update`, `update-scheduled-root-password-rotation`
-| `data` | `diskperformance` | `list`
-| `data` | `limits` | `get`
-| `data` | `nodesize` | `list`
-| `data` | `presets` | `list`
-| `deploymentprofile` | `deploymentprofile` | `list`
-| `example` | `exampledatasetinstallation` | `create`, `delete`, `get`, `list`, `update`
-| `example` | `exampledataset` | `get`, `list`
-| `iam` | `group` | `create`, `delete`, `get`, `list`, `update`
-| `iam` | `policy` | `get`, `update`
-| `iam` | `role` | `create`, `delete`, `get`, `list`, `update`
-| `iam` | `user` | `get-personal-data`, `update`
-| `metrics` | `endpoint` | `get`
-| `metrics` | `token` | `create`, `delete`, `get`, `list`, `revoke`, `update`
-| `ml` | `mlservices` | `get`
-| `monitoring` | `logs` | `get`
-| `monitoring` | `metrics` | `get`
-| `network` | `privateendpointservice` | `create`, `get`, `get-by-deployment-id`, `get-feature`, `update`
-| `notebook` | `model` | `list`
-| `notebook` | `notebook` | `create`, `delete`, `execute`, `get`, `list`, `pause`, `resume`, `update`
-| `notification` | `deployment-notification` | `list`, `mark-as-read`, `mark-as-unread`
-| `prepaid` | `prepaiddeployment` | `get`, `list`
-| `replication` | `deploymentmigration` | `create`, `delete`, `get`
-| `replication` | `deploymentreplication` | `get`, `update`
-| `replication` | `deployment` | `clone-from-backup`
-| `replication` | `migration-forwarder` | `upgrade-connection`
-| `resourcemanager` | `organization-invite` | `create`, `delete`, `get`, `list`, `update`
-| `resourcemanager` | `organization` | `delete`, `get`, `update`
-| `resourcemanager` | `project` | `create`, `delete`, `get`, `list`, `update`
-| `scim` | `user` | `add`, `delete`, `get`, `list`, `update`
-| `security` | `iamprovider` | `create`, `delete`, `get`, `list`, `set-default`, `update`
-| `security` | `ipallowlist` | `create`, `delete`, `get`, `list`, `update`
-
-### Permission inheritance
-
-Each resource (organization, project, deployment) has its own policy, but this does not mean that you have to
-repeat role bindings in all these policies.
-
-Once you assign a role to a user (or group of users) in a policy at one level,
-all the permissions of this role are inherited in lower levels -
-permissions are inherited downwards from an organization to its projects and
-from a project to its deployments.
-
-For more general permissions, which you want to be propagated to other levels,
-add a role for a user/group at the organization level.
-For example, if you bind the `deployment-viewer` role to user `John` in the
-organization policy, `John` will have the role permissions in all projects of
-that organization and all deployments of the projects.
-
-For more restrictive permissions, which you don't necessarily want to be
-propagated to other levels, add a role at the project or even deployment level.
-For example, if you bind the `deployment-viewer` role to user `John`
-in a project, `John` will have the role permissions in
-this project as well as in all the deployments of it, but not
-in other projects of the parent organization.
-
-**Inheritance example**
-
-- Let's assume you have a group called "Deployers" which includes users who deal with deployments.
-- Then you create a role "Deployment Viewer", containing
- `data.deployment.get` and `data.deployment.list` permissions.
-- You can now add a role binding of the "Deployers" group to the "Deployment Viewer" role.
-- If you add the binding to an organization policy, members of this group
- will be granted the defined permissions for the organization, all its projects and all its deployments.
-- If you add the role binding to a policy of project ABC, members of this group will be granted
- the defined permissions for project ABC only and its deployments, but not for
- other projects and their deployments.
-- If you add the role binding to a policy of deployment X, members of this
- group will be granted the defined permissions for deployment X only, and not
- any other deployment of the parent project or any other project of the organization.
-
-The "Deployment Viewer" role is effective for the following entities depending
-on which policy the binding is added to:
-
-| Role binding added to → Role effective on ↓ | Organization policy | Project ABC's policy | Deployment X's policy of project ABC |
-|:---|:---:|:---:|:---:|
-| Organization, its projects and deployments | ✓ | — | — |
-| Project ABC and its deployments | ✓ | ✓ | — |
-| Project DEF and its deployments | ✓ | — | — |
-| Deployment X of project ABC | ✓ | ✓ | ✓ |
-| Deployment Y of project ABC | ✓ | ✓ | — |
-| Deployment Z of project DEF | ✓ | — | — |
-
-## Restricting access to organizations
-
-To enhance security, you can implement the following restrictions via [Oasisctl](../oasisctl/_index.md):
-
-1. Limit allowed authentication providers.
-2. Specify an allowed domain list.
-
-{{< info >}}
-Note that users who do not meet the restrictions will not be granted permissions for any resource in
-the organization. These users can still be members of the organization.
-{{< /info >}}
-
-Using the first option, you can limit which **authentication providers** are
-accepted for users trying to access an organization in ArangoGraph.
-The following commands are available to configure this option; a usage sketch follows the list:
-
-- `oasisctl get organization authentication providers` - allows you to see which
- authentication providers are enabled for accessing a specific organization
-- `oasisctl update organization authentication providers` - allows you to update
- a list of authentication providers for an organization to which the
- authenticated user has access
-  - `--enable-github` - if set, allow access from user accounts authenticated via GitHub
- - `--enable-google` - if set, allow access from user accounts authenticated via Google
- - `--enable-microsoft` - if set, allow access from user accounts authenticated via Microsoft
- - `--enable-username-password` - if set, allow access from user accounts
- authenticated via a username/password
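-
-For example, to only allow sign-ins via Google and username/password for an
-organization, the update command could look like the following sketch
-(assuming the `--organization-id` flag, with `<organization-id>` as a
-placeholder for your organization identifier):
-
-```
-oasisctl update organization authentication providers \
-  --organization-id <organization-id> \
-  --enable-google=true \
-  --enable-username-password=true \
-  --enable-github=false \
-  --enable-microsoft=false
-```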
-
-Using the second option, you can configure a **list of domains**, and only users
-with email addresses from the specified domains will be able to access an
-organization. The following commands are available to configure this option:
-
-- `oasisctl get organization email domain restrictions -o <organization-id>` -
-  allows you to see which domains are in the allowed list for a specific organization
-- `oasisctl update organization email domain restrictions -o <organization-id> --allowed-domain=<domain> --allowed-domain=<domain>` -
-  allows you to update the list of allowed domains for a specific organization
-- `oasisctl update organization email domain restrictions -o <organization-id> --allowed-domain=` -
-  allows you to reset the list and accept any domain for accessing a specific organization
-
-## Using an audit log
-
-{{< info >}}
-To enable the audit log feature, get in touch with the ArangoGraph team via **Request Help**, available in the left sidebar menu of the ArangoGraph Dashboard.
-{{< /info >}}
-
-To have a better overview of the events happening in your ArangoGraph organization,
-you can set up an audit log, which will track and log auditing information for you.
-The audit log is created on the organization level, then you can use the log for
-projects belonging to that organization.
-
-***To create an audit log***
-
-1. In the main navigation menu, click **Access Control** in the **Organization** section.
-2. Open the **Audit logs** tab and click the **New audit log** button.
-3. In the dialog, fill out the following settings:
-
- - **Name** - enter a name for your audit log.
- - **Description** - enter an optional description for your audit log.
- - **Destinations** - specify one or several destinations to which you want to
- upload the audit log. If you choose **Upload to cloud**, the log will be
- available on the **Audit logs** tab of your organization. To send the log
- entries to your custom destination, specify a destination URL with
- authentication parameters (the **HTTP destination** option).
-
- {{< info >}}
- The **Upload to cloud** option is not available for the free-to-try tier.
- {{< /info >}}
-
- - **Excluded topics** - select topics that will not be included in the log.
-     Note that some topics are excluded by default (for example, `audit-document`).
-
- {{< warning >}}
- Enabling the audit log for all events will have a negative impact on performance.
- {{< /warning >}}
-
-   - **Confirmation** - confirm that you are aware that logging audit events increases the price of your deployments.
-
- 
-
-4. Click **Create** to add the audit log. You can now use it in the projects
- belonging to your organization.
diff --git a/site/content/3.11/arangograph/security-and-access-control/single-sign-on/_index.md b/site/content/3.11/arangograph/security-and-access-control/single-sign-on/_index.md
deleted file mode 100644
index 1004352974..0000000000
--- a/site/content/3.11/arangograph/security-and-access-control/single-sign-on/_index.md
+++ /dev/null
@@ -1,104 +0,0 @@
----
-title: Single Sign-On (SSO) in ArangoGraph
-menuTitle: Single Sign-On
-weight: 10
-description: >-
- ArangoGraph supports **Single Sign-On** (SSO) authentication using
- **Security Assertion Markup language 2.0** (SAML 2.0)
----
-{{< info >}}
-To enable the Single Sign-On (SSO) feature, get in touch with the ArangoGraph
-team via **Request Help**, available in the left sidebar menu of the
-ArangoGraph Dashboard.
-{{< /info >}}
-
-## About SAML 2.0
-
-The Security Assertion Markup language 2.0 (SAML 2.0) is an open standard created
-to provide cross-domain single sign-on (SSO). It allows you to authenticate in
-multiple web applications by using a single set of login credentials.
-
-SAML SSO works by transferring user authentication data from the identity
-provider (IdP) to the service provider (SP) through an exchange of digitally
-signed XML documents.
-
-## IdP-initiated versus SP-initiated SSO
-
-There are generally two methods for starting Single Sign-On:
-
-- **Identity Provider Initiated** (IdP-initiated):
- You log into the Identity Provider and are then redirected to ArangoGraph.
-- **Service Provider Initiated** (SP-initiated):
- You access the ArangoGraph site which then redirects you to the
- Identity Provider for authentication.
-
-**ArangoGraph only supports SP-initiated SSO** because IdP-Initiated SSO is
-vulnerable to Man-in-the-Middle attacks. In order to initiate the SSO login
-process, you must start at ArangoGraph.
-
-## Configure SAML 2.0 using Okta
-
-You can enable SSO for your ArangoGraph organization using Okta as an Identity
-Provider (IdP). For more information about Okta, please refer to the
-[Okta Documentation](https://help.okta.com/en-us/Content/index.htm?cshid=csh-index).
-
-### Create the SAML app integration in Okta
-
-1. Sign in to your Okta account and select **Applications** from the left sidebar menu.
-2. Click **Create App Integration**.
-3. In the **Create a new app integration** dialog, select **SAML 2.0**.
-
- 
-4. In the **General Settings**, specify a name for your integration and click **Next**.
-
- 
-5. Configure the SAML settings:
- - For **Single sign-on URL**, use `https://auth.arangodb.com/login/callback?connection=ORG_ID`
- - For **Audience URI (SP Entity ID)**, use `urn:auth0:arangodb:ORG_ID`
-
- 
-
-6. Replace **ORG_ID** with your organization identifier from the
- ArangoGraph Dashboard. To find your organization ID, go to the **User Toolbar**
- in the top right corner, which is accessible from every view of the Dashboard,
- and click **My organizations**.
-
- If, for example, your organization ID is 14587062, here are the values you
- would use when configuring the SAML settings:
- - `https://auth.arangodb.com/login/callback?connection=14587062`
- - `urn:auth0:arangodb:14587062`
-
- 
-7. In the **Attribute Statements** section, add custom attributes as seen in the image below:
- - email: `user.email`
- - given_name: `user.firstName`
- - family_name: `user.lastName`
- - picture: `user.profileUrl`
-
- This step consists of a mapping between the ArangoGraph attribute names and
- Okta attribute names. The values of these attributes are automatically filled
- in based on the users list that is defined in Okta.
-
- 
-8. Click **Next**.
-9. In the **Configure feedback** section, select **I'm an Okta customer adding an internal app**.
-10. Click **Finish**. The SAML app integration is now created.
-
-### SAML Setup
-
-After creating the app integration, you must perform the SAML setup to finalize
-the SSO configuration.
-
-1. Go to the **SAML Signing Certificates** section, displayed under the **Sign On** tab.
-2. Click **View SAML setup instructions**.
-
- 
-3. The setup instructions include the following items:
- - **Identity Provider Single Sign-On URL**
- - **Identity Provider Issuer**
- - **X.509 Certificate**
-4. Copy the IdP settings, download the certificate using the
- **Download X.509 certificate** button, and share them with the ArangoGraph
- team via an ArangoGraph Support Ticket in order to complete the SSO
- configuration.
-
diff --git a/site/content/3.11/arangograph/security-and-access-control/single-sign-on/scim-provisioning.md b/site/content/3.11/arangograph/security-and-access-control/single-sign-on/scim-provisioning.md
deleted file mode 100644
index 8cf40b8009..0000000000
--- a/site/content/3.11/arangograph/security-and-access-control/single-sign-on/scim-provisioning.md
+++ /dev/null
@@ -1,76 +0,0 @@
----
-title: SCIM Provisioning
-menuTitle: SCIM Provisioning
-weight: 5
-description: >-
- How to enable SCIM provisioning with Okta for your ArangoGraph project
----
-ArangoGraph supports the
-**System for Cross-domain Identity Management** (SCIM) provisioning standard
-to control and manage member access in ArangoGraph organizations.
-This enables you to propagate any user access changes to ArangoGraph by using
-the dedicated API.
-
-{{< info >}}
-To enable the SCIM feature, get in touch with the ArangoGraph team via
-**Request Help**, available in the left sidebar menu of the ArangoGraph Dashboard.
-{{< /info >}}
-
-## About SCIM
-
-[SCIM](https://www.rfc-editor.org/rfc/rfc7644), or the System
-for Cross-domain Identity Management [specification](http://www.simplecloud.info/),
-is an open standard designed to manage user identity information.
-SCIM provides a defined schema for representing users, and a RESTful
-API to run CRUD operations on these user resources.
-
-The SCIM specification expects the following operations so that the SSO system
-can sync the information about user resources in real time:
-
-- `GET /Users` - List all users.
-- `GET /Users/:user_id` - Get details for a given user ID.
-- `POST /Users` - Invite a new user to ArangoGraph.
-- `PUT /Users/:user_id` - Update a given user ID.
-- `DELETE /Users/:user_id` - Delete a specified user ID.
-
-ArangoGraph organization administrators can generate an API key for a specific organization.
-The API token consists of a key and a secret. Using this key and secret as the
-Basic Authentication Header (Basic Auth) in SCIM provisioning, you can access the APIs and
-manage the user resources.
-
-To learn how to generate a new API key in the ArangoGraph Dashboard, see the
-[API Keys](../../my-account.md#api-keys) section.
-
-{{< info >}}
-When creating an API key, it is required to select an organization from the
-list.
-{{< /info >}}
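-
-For example, with an API key ID and secret used as the Basic Auth credentials
-(`<api-key-id>` and `<api-key-secret>` are placeholders), listing all users
-could look like this sketch:
-
-```
-curl -u "<api-key-id>:<api-key-secret>" \
-  https://dashboard.arangodb.cloud/api/scim/v1/Users
-```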
-
-## Enable SCIM provisioning in Okta
-
-To enable SCIM provisioning, you first need to create an SSO integration that
-supports the SCIM provisioning feature.
-
-1. To enable SCIM provisioning for your integration, go to the **General** tab.
-2. In the **App Settings** section, select **Enable SCIM provisioning**.
-3. Navigate to the **Provisioning** tab. The SCIM connection settings are
- displayed under **Settings > Integration**.
-4. Fill in the following fields:
- - For **SCIM connector base URL**, use `https://dashboard.arangodb.cloud/api/scim/v1`
- - For **Unique identifier field for users**, use `userName`
-5. For **Supported provisioning actions**, enable the following:
- - **Import New Users and Profile Updates**
- - **Push New Users**
- - **Push Profile Updates**
-6. From the **Authentication Mode** menu, select the **Basic Auth** option.
- To authenticate using this mode, you need to provide the username and password
- for the account that handles the SCIM actions - in this case ArangoGraph.
-7. Go to the ArangoGraph Dashboard and create a new API key ID and Secret.
-
- 
-
- Make sure to select one organization from the list and do not set any
- value in the **Time to live** field. For more information,
- see [How to create a new API key](../../my-account.md#how-to-create-a-new-api-key).
-8. Use these authentication tokens as username and password when using the
- **Basic Auth** mode and click **Save**.
diff --git a/site/content/3.11/arangograph/security-and-access-control/x-509-certificates.md b/site/content/3.11/arangograph/security-and-access-control/x-509-certificates.md
deleted file mode 100644
index d8d694a139..0000000000
--- a/site/content/3.11/arangograph/security-and-access-control/x-509-certificates.md
+++ /dev/null
@@ -1,178 +0,0 @@
----
-title: X.509 Certificates in ArangoGraph
-menuTitle: X.509 Certificates
-weight: 5
-description: >-
- X.509 certificates in ArangoGraph are utilized for encrypted remote administration.
- The communication with and between the servers of an ArangoGraph deployment is
- encrypted using the TLS protocol
----
-X.509 certificates are digital certificates that are used to verify the
-authenticity of a website, user, or organization using a public key infrastructure
-(PKI). They are used in various applications, including SSL/TLS encryption,
-which is the basis for HTTPS - the primary protocol for securing communication
-and data transfer over a network.
-
-The X.509 certificate format is a standard defined by the
-[International Telecommunication Union (ITU)](https://www.itu.int/en/Pages/default.aspx)
-and contains information such as the name of the certificate holder, the public
-key associated with the certificate, the certificate's issuer, and the
-certificate's expiration date. An X.509 certificate can be signed by a
-certificate authority (CA) or self-signed.
-
-ArangoGraph is using:
-- **well-known X.509 certificates** created by
-[Let's Encrypt](https://letsencrypt.org/)
-- **self-signed X.509 certificates** created by the ArangoGraph platform
-
-## Certificate chains
-
-A certificate chain, also called the chain of trust, is a hierarchical structure
-that links together a series of digital certificates. The trust in the chain is
-established by verifying the identity of the issuer of each certificate in the
-chain. The root of the chain is a trusted third-party, such as a certificate
-authority (CA). The CA issues a certificate to an organization, which in turn
-can issue certificates to servers and other entities.
-
-For example, when you visit a website with an SSL/TLS certificate, the browser
-checks the chain of trust to verify the authenticity of the digital certificate.
-The browser checks to see if the root certificate is trusted, and if it is, it
-trusts the chain of certificates that lead to the end-entity certificate.
-If any of the certificates in the chain are invalid, expired, or revoked, the
-browser does not trust the digital certificate.
-
-## X.509 certificates in ArangoGraph
-
-Each ArangoGraph deployment is accessible on different port numbers:
-- default port `8529`, `443`
-- high port `18529`
-
-Each ArangoGraph Notebook is accessible on different port numbers:
-- default port `8840`, `443`
-- high port `18840`
-
-Metrics are accessible on different port numbers:
-- default port `8829`, `443`
-- high port `18829`
-
-The distinction between these port numbers is in the certificate used for the
-TLS connection.
-
-{{< info >}}
-The default ports (`8529` and `443`) always serve the well-known certificate.
-The [auto login to database UI](../deployments/_index.md#auto-login-to-database-ui)
-feature is only available on the `443` port and is enabled by default.
-{{< /info >}}
-
-### Well-known X.509 certificates
-
-**Well-known X.509 certificates** created by
-[Let's Encrypt](https://letsencrypt.org/) are used on the
-default ports, `8529` and `443`.
-
-This type of certificate has a lifetime of 5 years and is rotated automatically.
-It is recommended to use well-known certificates, as this eases access of a
-deployment in your browser.
-
-{{< info >}}
-The well-known certificate is a wildcard certificate and cannot contain
-Subject Alternative Names (SANs). To include a SAN field, please use the
-self-signed certificate option.
-{{< /info >}}
-
-### Self-signed X.509 certificates
-
-**Self-signed X.509 certificates** are used on the high ports, e.g. `18529`.
-This type of certificate has a lifetime of 1 year, and it is created by the
-ArangoGraph platform. It is also rotated automatically before the expiration
-date.
-
-{{< info >}}
-Unless you switch off the **Use well-known certificate** option in the
-certificate generation, both the default and high port serve the same
-self-signed certificate.
-{{< /info >}}
-
-### Subject Alternative Name (SAN)
-
-The Subject Alternative Name (SAN) is an extension to the X.509 specification
-that allows you to specify additional host names for a single SSL certificate.
-
-When using [private endpoints](../deployments/private-endpoints.md),
-you can specify custom domain names. Note that these are added **only** to
-the self-signed certificate as Subject Alternative Name (SAN).
-
-## How to create a new certificate
-
-1. Click a project name in the **Projects** section of the main navigation.
-2. Click **Security**.
-3. In the **Certificates** section, click:
- - The **New certificate** button to create a new certificate.
- - A name or the **eye** icon in the **Actions** column to view a certificate.
- The dialog that opens provides commands for installing and uninstalling
- the certificate through a console.
- - The **pencil** icon to edit a certificate.
- You can also view a certificate and click the **Edit** button.
- - The **tag** icon to make the certificate the new default.
- - The **recycle bin** icon to delete a certificate.
-
-
-
-## How to install a certificate
-
-Certificates that have the **Use well-known certificate** option enabled do
-not need any installation and are supported by almost all web browsers
-automatically.
-
-When creating a self-signed certificate that has the **Use well-known certificate**
-option disabled, the certificate needs to be installed on your local machine as
-well. This operation varies between operating systems. To install a self-signed
-certificate on your local machine, open the certificate and follow the
-installation instructions.
-
-
-
-
-
-You can also extract the information from all certificates in the chain using the
-`openssl` tool.
-
-- For **well-known certificates**, run the following command:
- ```
-  openssl s_client -showcerts -servername <123456abcdef>.arangodb.cloud -connect <123456abcdef>.arangodb.cloud:8529
-  ```
-
-- For **self-signed certificates**, run the following command:
-  ```
-  openssl s_client -showcerts -servername <123456abcdef>.arangodb.cloud -connect <123456abcdef>.arangodb.cloud:18529
-  ```
-
-`<123456abcdef>` is a placeholder that needs to be replaced with the
-unique ID that is part of your ArangoGraph deployment endpoint URL.
-
-## How to connect to your application
-
-[ArangoDB drivers](../../develop/drivers/_index.md), also called connectors, allow you to
-easily connect ArangoGraph deployments to your application.
-
-1. Navigate to **Deployments** and click the **View** button to show the
- deployment page.
-2. In the **Quick start** section, click the **Connecting drivers** button.
-3. Select your programming language, e.g. Go, Java, Python, etc.
-4. Follow the examples to connect a driver to your deployment. They include
- code examples on how to use certificates in your application.
-
-
-
-## Certificate Rotation
-
-Every certificate has a self-signed root certificate that is going to expire.
-When certificates that are used in existing deployments are about to expire,
-an automatic rotation of the certificates is triggered. This means that the
-certificate is cloned (all existing settings are copied over to a new certificate)
-and all affected deployments then start using the cloned certificate.
-
-Based on the type of certificate used, you may also need to install the new
-certificate on your local machine. For example, self-signed certificates require
-installation. To prevent any downtime, it is recommended to manually create a
-new certificate and apply the required changes prior to the expiration date.
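-
-To check the expiration date of the currently served self-signed certificate
-yourself, you can use the `openssl` tool, for example (`<123456abcdef>` is
-again a placeholder for the unique ID of your deployment):
-
-```
-openssl s_client -connect <123456abcdef>.arangodb.cloud:18529 \
-  -servername <123456abcdef>.arangodb.cloud </dev/null 2>/dev/null \
-  | openssl x509 -noout -enddate
-```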
diff --git a/site/content/3.11/components/arangodb-server/_index.md b/site/content/3.11/components/arangodb-server/_index.md
deleted file mode 100644
index 82da2f3a5f..0000000000
--- a/site/content/3.11/components/arangodb-server/_index.md
+++ /dev/null
@@ -1,21 +0,0 @@
----
-title: ArangoDB Server
-menuTitle: ArangoDB Server
-weight: 170
-description: >-
- The ArangoDB daemon (arangod) is the central server binary that can run in
- different modes for a variety of setups like single server and clusters
----
-The ArangoDB server is the core component of ArangoDB. The executable file to
-run it is named `arangod`. The `d` stands for daemon. A daemon is a long-running
-background process that answers requests for services.
-
-The server process serves the various client connections to the server via the
-TCP/HTTP protocol. It also provides a [web interface](../web-interface/_index.md).
-
-_arangod_ can run in different modes for a variety of setups like single server
-and clusters. It differs between the [Community Edition](../../about-arangodb/features/community-edition.md)
-and [Enterprise Edition](../../about-arangodb/features/enterprise-edition.md).
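-
-For example, a single server can be started by pointing the binary at a data
-directory (a minimal sketch; the endpoint and path are example values):
-
-```
-arangod --server.endpoint tcp://127.0.0.1:8529 /var/lib/arangodb3
-```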
-
-See [Administration](../../operations/administration/_index.md) for server configuration
-and [Deploy](../../deploy/_index.md) for operation mode details.
diff --git a/site/content/3.11/components/arangodb-server/ldap.md b/site/content/3.11/components/arangodb-server/ldap.md
deleted file mode 100644
index b773edf61e..0000000000
--- a/site/content/3.11/components/arangodb-server/ldap.md
+++ /dev/null
@@ -1,563 +0,0 @@
----
-title: ArangoDB Server LDAP Options
-menuTitle: LDAP
-weight: 10
-description: >-
- LDAP authentication options in the ArangoDB server
----
-{{< tag "ArangoDB Enterprise Edition" "ArangoGraph" >}}
-
-## Basic Concepts
-
-The basic idea is that one can keep the user authentication setup for
-an ArangoDB instance (single or cluster) outside of ArangoDB in an LDAP
-server. A crucial feature of this is that one can add and withdraw users
-and permissions by only changing the LDAP server and in particular
-without touching the ArangoDB instance. Changes are effective in
-ArangoDB within a few minutes.
-
-Since there are many different possible LDAP setups, we must support a
-variety of possibilities for authentication and authorization. Here is
-a short overview:
-
-To map ArangoDB user names to LDAP users there are two authentication
-methods called "simple" and "search". In the "simple" method the LDAP bind
-user is derived from the ArangoDB user name by prepending a prefix and
-appending a suffix. For example, a user "alice" could be mapped to the
-distinguished name `uid=alice,dc=arangodb,dc=com` to perform the LDAP
-bind and authentication.
-See [Simple authentication method](#simple-authentication-method)
-below for details and configuration options.
-
-In the "search" method there are two phases. In Phase 1 a generic
-read-only admin LDAP user account is used to bind to the LDAP server
-first and search for an LDAP user matching the ArangoDB user name. In
-Phase 2, the actual authentication is then performed against the LDAP
-user that was found in phase 1. Both methods are sensible and
-recommended for production use.
-See [Search authentication method](#search-authentication-method)
-below for details and configuration options.
-
-Once the user is authenticated, there are now two methods for
-authorization: (a) "roles attribute" and (b) "roles search".
-
-In method (a) ArangoDB acquires a list of roles the authenticated LDAP
-user has from the LDAP server. The actual access rights to databases
-and collections for these roles are configured in ArangoDB itself.
-Users effectively have the union of all access rights of all roles
-they have. This method is probably the most common one for production use
-cases. It combines the advantages of managing users and roles outside of
-ArangoDB in the LDAP server with the fine grained access control within
-ArangoDB for the individual roles. See [Roles attribute](#roles-attribute)
-below for details about method (a) and for the associated configuration
-options.
-
-Method (b) is very similar and only differs from (a) in the way the
-actual list of roles of a user is derived from the LDAP server.
-See [Roles search](#roles-search) below for details about method (b)
-and for the associated configuration options.
-
-## Fundamental options
-
-The fundamental options for specifying how to access the LDAP server are
-the following:
-
- - `--ldap.enabled` this is a boolean option which must be set to
- `true` to activate the LDAP feature
- - `--ldap.server` is a string specifying the host name or IP address
- of the LDAP server
- - `--ldap.port` is an integer specifying the port the LDAP server is
-   running on; the default is `389`
- - `--ldap.basedn` specifies the base distinguished name under which
- the search takes place (can alternatively be set via `--ldap.url`)
- - `--ldap.binddn` and `--ldap.bindpasswd` are distinguished name and
- password for a read-only LDAP user to which ArangoDB can bind to
- search the LDAP server. Note that it is necessary to configure these
- for both the "simple" and "search" authentication methods, since
- even in the "simple" method, ArangoDB occasionally has to refresh
- the authorization information from the LDAP server
- even if the user session persists and no new authentication is
- needed! It is, however, allowed to leave both empty, but then the
- LDAP server must be readable with anonymous access.
- - `--ldap.refresh-rate` is a floating point value in seconds. The
- default is 300, which means that ArangoDB refreshes the
- authorization information for authenticated users after at most 5
- minutes. This means that changes in the LDAP server like removed
- users or added or removed roles for a user are effective after
- at most 5 minutes.
-
-Note that the `--ldap.server` and `--ldap.port` options can
-alternatively be specified in the `--ldap.url` string together with
-other configuration options. For details see Section "LDAP URLs" below.
-
-Here is an example on how to configure the connection to the LDAP server,
-with anonymous bind:
-
-```
---ldap.enabled=true \
---ldap.server=ldap.arangodb.com \
---ldap.basedn=dc=arangodb,dc=com
-```
-
-With this configuration ArangoDB binds anonymously to the LDAP server
-on host `ldap.arangodb.com` on the default port 389 and executes all searches
-under the base distinguished name `dc=arangodb,dc=com`.
-
-If you need a dedicated user to read from LDAP, here is an example:
-
-```
---ldap.enabled=true \
---ldap.server=ldap.arangodb.com \
---ldap.basedn=dc=arangodb,dc=com \
---ldap.binddn=uid=arangoadmin,dc=arangodb,dc=com \
---ldap.bindpasswd=supersecretpassword
-```
-
-The connection is identical, but the searches are executed with the
-distinguished name given in `binddn`.
-
-Note here:
-The given user (or the anonymous one) needs at least read access on
-all user objects to find them and in the case of Roles search
-also read access on the objects storing the roles.
-
-At this point, ArangoDB can connect to a given LDAP server,
-but it is not yet able to authenticate users properly with it.
-For this, pick one of the following two authentication methods.
-
-### LDAP URLs
-
-As an alternative, you can set the values of multiple LDAP-related
-configuration options at once by specifying a single LDAP URL. Here is an example:
-
-```
---ldap.url ldap://ldap.arangodb.com:1234/dc=arangodb,dc=com?uid?sub
-```
-
-This one option has the combined effect of setting the following:
-
-```
---ldap.server=ldap.arangodb.com \
---ldap.port=1234 \
---ldap.basedn=dc=arangodb,dc=com \
---ldap.searchAttribute=uid \
---ldap.searchScope=sub
-```
-
-That is, the LDAP URL consists of the LDAP `server` and `port`, a `basedn`, a
-`searchAttribute`, and a `searchScope` which can be one of `base`, `one`, or
-`sub`. There is also the possibility to use the `ldaps` protocol as in:
-
-```
---ldap.url ldaps://ldap.arangodb.com:636/dc=arangodb,dc=com?uid?sub
-```
-
-This does exactly the same as the one above, except that it uses the
-LDAP over TLS protocol. This is a non-standard method which does not
-involve using the STARTTLS protocol. Note that this does not work in the
-Windows version! We suggest using the `ldap` protocol and STARTTLS
-as described in the next section.
-
-### TLS options
-
-{{< warning >}}
-TLS is not supported in the Windows version of ArangoDB!
-{{< /warning >}}
-
-To configure the usage of encrypted TLS to communicate with the LDAP server
-the following options are available:
-
-- `--ldap.tls`: the main switch to activate TLS. It can either be
- `true` (use TLS) or `false` (do not use TLS). It is switched
- off by default. If you switch this on and do not use the `ldaps`
- protocol via the [LDAP URL](#ldap-urls), then ArangoDB
- uses the `STARTTLS` protocol to initiate TLS. This is the
- recommended approach.
-- `--ldap.tls-version`: the minimal TLS version that ArangoDB should accept.
- Available versions are `1.0`, `1.1` and `1.2`. The default is `1.2`. If
- your LDAP server does not support Version 1.2, you have to change
- this setting.
-- `--ldap.tls-cert-check-strategy`: strategy to validate the LDAP server
- certificate. Available strategies are `never`, `hard`,
- `demand`, `allow` and `try`. The default is `hard`.
-- `--ldap.tls-cacert-file`: a file path to one or more (concatenated)
- certificate authority certificates in PEM format.
-  By default, no file path is configured. This certificate
- is used to validate the server response.
-- `--ldap.tls-cacert-dir`: a directory path to certificate authority certificates in
- [c_rehash](https://www.openssl.org/docs/man3.0/man1/c_rehash.html)
-  format. By default, no directory path is configured.
-
-Assuming you have the TLS CAcert file that is given to the server at
-`/path/to/certificate.pem`, here is an example on how to configure TLS:
-
-```
---ldap.tls true \
---ldap.tls-cacert-file /path/to/certificate.pem
-```
-
-You can use TLS with any of the following authentication mechanisms.
-
-### Secondary server options (`ldap2`)
-
-The `ldap.*` options configure the primary LDAP server. It is possible to
-configure a secondary server with the `ldap2.*` options to use it as a
-fail-over for the case that the primary server is not reachable, but also to
-let the primary servers handle some users and the secondary others.
-
-Instead of `--ldap.<option>` you need to specify `--ldap2.<option>`.
-Authentication/authorization first checks the primary LDAP server.
-If this server cannot authenticate a user, it tries the secondary one.
-
-It is possible to specify a file containing all users that the primary
-LDAP server is handling by specifying the option `--ldap.responsible-for`.
-This file must contain the usernames line-by-line. This is also supported for
-the secondary server, which can be used to exclude certain users completely.
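-
-As a sketch, assuming each `--ldap.<option>` has an `--ldap2.<option>`
-counterpart as described above, a combined fail-over configuration could look
-like this (the server names and the file path are example values):
-
-```
---ldap.enabled=true \
---ldap.server=ldap1.arangodb.com \
---ldap.basedn=dc=arangodb,dc=com \
---ldap.responsible-for=/path/to/primary-users.txt \
---ldap2.enabled=true \
---ldap2.server=ldap2.arangodb.com \
---ldap2.basedn=dc=arangodb,dc=com
-```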
-
-### Esoteric options
-
-The following options can be used to configure advanced aspects of LDAP
-connectivity (see the example after the following list):
-
-- `--ldap.serialized`: whether or not calls into the underlying LDAP library should be serialized.
- This option can be used to work around thread-unsafe LDAP library functionality.
-- `--ldap.serialize-timeout`: sets the timeout value that is used when waiting to enter the
- LDAP library call serialization lock. This is only meaningful when `--ldap.serialized` has been
- set to `true`.
-- `--ldap.retries`: number of tries to attempt a connection. Setting this to values greater than
- one will make ArangoDB retry to contact the LDAP server in case no connection can be made
- initially.
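-
-As an illustration, working around a thread-unsafe LDAP library could look
-like this sketch (the values are arbitrary examples):
-
-```
---ldap.serialized=true \
---ldap.serialize-timeout=5 \
---ldap.retries=3
-```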
-
-Please note that some of the following options are platform-specific and may not work
-with all LDAP servers reliably:
-
-- `--ldap.restart`: whether or not the LDAP library should implicitly restart connections
-- `--ldap.referrals`: whether or not the LDAP library should implicitly chase referrals
-
-The following options can be used to adjust the LDAP configuration on Linux and macOS
-platforms only; they do not work on Windows:
-
-- `--ldap.debug`: turn on internal OpenLDAP library output (warning: prints to stdout).
-- `--ldap.timeout`: timeout value (in seconds) for synchronous LDAP API calls (a value of 0
- means default timeout).
-- `--ldap.network-timeout`: timeout value (in seconds) after which network operations
- following the initial connection return in case of no activity (a value of 0 means default timeout).
-- `--ldap.async-connect`: whether or not the connection to the LDAP library is done
- asynchronously.
-
-## Authentication methods
-
-In order to authenticate users in LDAP we have two options available.
-We need to pick exactly one of them.
-
-### Simple authentication method
-
-The simple authentication method is used if and only if both the
-`--ldap.prefix` and `--ldap.suffix` configuration options are specified
-and are non-empty. In all other cases the
-["search" authentication method](#search-authentication-method) is used.
-
-In the "simple" method the LDAP bind user is derived from the ArangoDB
-user name by prepending the value of the `--ldap.prefix` configuration
-option and by appending the value of the `--ldap.suffix` configuration
-option. For example, an ArangoDB user "alice" would be mapped to the
-distinguished name `uid=alice,dc=arangodb,dc=com` to perform the LDAP
-bind and authentication, if `--ldap.prefix` is set to `uid=` and
-`--ldap.suffix` is set to `,dc=arangodb,dc=com`.
-
-ArangoDB binds to the LDAP server and authenticates with the
-distinguished name and the password provided by the client. If
-the LDAP server successfully verifies the password then the user is
-authenticated.
-
-If you want to use this method, add the following example to your
-ArangoDB configuration together with the fundamental configuration:
-
-```
---ldap.prefix uid= \
---ldap.suffix ,dc=arangodb,dc=com
-```
-
-This method authenticates an LDAP user with the distinguished name
-`{PREFIX}{USERNAME}{SUFFIX}`, in this case for the ArangoDB user `alice`.
-It searches for: `uid=alice,dc=arangodb,dc=com`.
-This distinguished name is used as `{USER}` for the roles later on.
-
-### Search authentication method
-
-The search authentication method is used if at least one of the two
-options `--ldap.prefix` and `--ldap.suffix` is empty or not specified.
-ArangoDB uses the LDAP user credentials given by the `--ldap.binddn` and
-`--ldap.bindpasswd` to perform a search for LDAP users.
-In this case, the values of the options `--ldap.basedn`,
-`--ldap.search-attribute`, `--ldap.search-filter` and `--ldap.search-scope`
-are used in the following way:
-
-- `--ldap.search-scope` is an LDAP search scope with possible values
- `base` (just search the base distinguished name),
- `sub` (recursive search under the base distinguished name) or
- `one` (search the base's immediate children) (default: `sub`)
-- `--ldap.search-filter` is an LDAP filter expression which limits the
- set of LDAP users being considered (default: `objectClass=*` which
- means all objects). The placeholder `{USER}` is replaced by the
- supplied username.
-- `--ldap.search-attribute` specifies the attribute in the user objects
- which is used to match the ArangoDB user name (default: `uid`)
-
-Here is an example on how to configure the search method.
-Assume we have users like the following stored in LDAP:
-
-```
-dn: uid=alice,dc=arangodb,dc=com
-uid: alice
-objectClass: inetOrgPerson
-objectClass: organizationalPerson
-objectClass: top
-objectClass: person
-```
-
-Here, `uid` is the username used in ArangoDB. If we only search
-for objects of type `person`, then we can add the following to our
-fundamental LDAP configuration:
-
-```
---ldap.search-attribute=uid \
---ldap.search-filter=objectClass=person
-```
-
-This uses the `sub` search scope by default and finds
-all `person` objects where the `uid` is equal to the given username.
-From these, the `dn` is extracted and used as `{USER}` in
-the roles later on.
-
-## Fetching roles for a user
-
-After authentication, the next step is to derive authorization
-information from the authenticated LDAP user.
-In order to fetch the roles and thereby the access rights
-for a user, we again have two possible options and need to pick
-one of them. We can combine each authentication method
-with each role method.
-In any case, a user can have zero, one, or multiple roles.
-If a user has no role, the user does not get any access
-to ArangoDB at all.
-If a user has multiple roles with different rights,
-then the rights are combined and the *strongest*
-right wins. Example:
-
-- `alice` has the roles `project-a` and `project-b`.
-- `project-a` has no access to collection `BData`.
-- `project-b` has `rw` access to collection `BData`.
-- Hence, `alice` has `rw` access to `BData`.
-
-Note that the actual database and collection access rights
-are configured in ArangoDB itself by roles in the users module.
-The role name is always prefixed with `:role:`, e.g.: `:role:project-a`
-and `:role:project-b` respectively. You can use the normal user
-permissions tools in the Web interface or `arangosh` to configure these.
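-
-As a minimal sketch (assuming a database named `mydb` and the `@arangodb/users`
-module), the rights for a role could be granted in `arangosh` like this:
-
-```js
-const users = require("@arangodb/users");
-users.save(":role:project-b", null, true);                       // create the role if it does not exist yet
-users.grantDatabase(":role:project-b", "mydb", "rw");            // database-level access
-users.grantCollection(":role:project-b", "mydb", "BData", "rw"); // collection-level access
-```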
-
-### Roles attribute
-
-The most important method for this is to read off the roles an LDAP
-user is associated with from an attribute in the LDAP user object.
-If the
-
-```
---ldap.roles-attribute-name
-```
-
-configuration option is set, then the value of that
-option is the name of the attribute being used.
-
-Here is the example to add to the overall configuration:
-
-```
---ldap.roles-attribute-name=role
-```
-
-If we have the user stored like the following in LDAP:
-
-```
-dn: uid=alice,dc=arangodb,dc=com
-uid: alice
-objectClass: inetOrgPerson
-objectClass: organizationalPerson
-objectClass: top
-objectClass: person
-role: project-a
-role: project-b
-```
-
-Then the request grants the roles `project-a` and `project-b`
-for the user `alice` after successful authentication,
-as they are stored in the `role` attribute on the user object.
-
-### Roles search
-
-An alternative method for authorization is to conduct a search in the
-LDAP server for LDAP objects representing roles a user has. If the
-
-```
---ldap.roles-search=<search-expression>
-```
-
-configuration option
-is given, then the string `{USER}` in `<search-expression>` is replaced
-with the distinguished name of the authenticated LDAP user and the
-resulting search expression is used to match distinguished names of
-LDAP objects representing roles of that user.
-
-Example:
-
-```
---ldap.roles-search '(&(objectClass=groupOfUniqueNames)(uniqueMember={USER}))'
-```
-
-After an LDAP user is found and authenticated as described in the
-authentication section above, the `{USER}` in the search expression
-is replaced by its distinguished name, e.g. `uid=alice,dc=arangodb,dc=com`,
-and thus with the above search expression the actual search expression
-ends up being:
-
-```
-(&(objectClass=groupOfUniqueNames)(uniqueMember=uid=alice,dc=arangodb,dc=com))
-```
-
-This search finds all objects of `groupOfUniqueNames` where
-at least one `uniqueMember` has the `dn` of `alice`.
-The list of results of that search would be the list of roles given by
-the values of the `dn` attributes of the found role objects.
-
-### Role transformations and filters
-
-For both of the above authorization methods there are further
-configuration options to tune the role lookup. In this section we
-describe these further options:
-
-- `--ldap.roles-include` can be used to specify a regular expression
- that is used to filter roles. Only roles that match the regular
- expression are used.
-
-- `--ldap.roles-exclude` can be used to specify a regular expression
- that is used to filter roles. Only roles that do not match the regular
- expression are used.
-
-- `--ldap.roles-transformation` can be used to specify a regular
- expression and replacement text as `/re/text/`. This regular
- expression is applied to the role name found. This is especially
- useful in the roles-search variant to extract the real role name
- out of the `dn` value.
-
-- `--ldap.superuser-role` can be used to specify the role associated
- with the superuser. Any user belonging to this role gains superuser
- status. This role is checked after applying the roles-transformation
- expression.
-
-Example:
-
-```
---ldap.roles-include "^arangodb"
-```
-
-This setting only considers roles that start with `arangodb`.
-
-```
---ldap.roles-exclude=disabled
-```
-
-This setting only considers roles that do not contain the word `disabled`.
-
-```
---ldap.superuser-role "arangodb-admin"
-```
-
-Anyone belonging to the group `arangodb-admin` becomes a superuser.
-
-The roles-transformation deserves a larger example. Assume we are using
-roles search and have stored roles in the following way:
-
-```
-dn: cn=project-a,dc=arangodb,dc=com
-objectClass: top
-objectClass: groupOfUniqueNames
-uniqueMember: uid=alice,dc=arangodb,dc=com
-uniqueMember: uid=bob,dc=arangodb,dc=com
-cn: project-a
-description: Internal project A
-
-dn: cn=project-b,dc=arangodb,dc=com
-objectClass: top
-objectClass: groupOfUniqueNames
-uniqueMember: uid=alice,dc=arangodb,dc=com
-uniqueMember: uid=charlie,dc=arangodb,dc=com
-cn: project-b
-description: External project B
-```
-
-In this case, we find `cn=project-a,dc=arangodb,dc=com` as one
-role of `alice`. However, we actually want to configure a role name
-`:role:project-a`, which is easier to read and maintain for our
-administrators.
-
-If we now apply the following transformation:
-
-```
---ldap.roles-transformation=/^cn=([^,]*),.*$/$1/
-```
-
-The regular expression extracts `project-a` and `project-b`, respectively,
-from the `dn` attribute.
-
-In combination with `--ldap.superuser-role`, we could make all
-`project-a` members ArangoDB superusers by using:
-
-```
---ldap.roles-transformation=/^cn=([^,]*),.*$/$1/ \
---ldap.superuser-role=project-a
-```
-
-## Complete configuration examples
-
-This section presents complete examples of working LDAP configurations
-for ArangoDB.
-All of the following are combinations of the details described above.
-
-**Simple authentication with roles search, using an anonymous LDAP user**
-
-This example connects to the LDAP server with an anonymous read-only
-user. We use the simple authentication mode (`prefix` + `suffix`)
-to authenticate users and apply a role search for `groupOfUniqueNames` objects
-where the user is a `uniqueMember`. Furthermore, we extract only the `cn`
-part of the distinguished role name.
-
-```
---ldap.enabled=true \
---ldap.server=ldap.arangodb.com \
---ldap.basedn=dc=arangodb,dc=com \
---ldap.prefix uid= \
---ldap.suffix ,dc=arangodb,dc=com \
---ldap.roles-search '(&(objectClass=groupOfUniqueNames)(uniqueMember={USER}))' \
---ldap.roles-transformation=/^cn=([^,]*),.*$/$1/ \
---ldap.superuser-role=project-a
-```
-
-**Search authentication with roles attribute, using an LDAP admin user with TLS enabled**
-
-This example connects to the LDAP server with the distinguished name and
-password of an admin user.
-Furthermore, we activate TLS and provide a certificate file to validate server responses.
-We use search authentication, searching for the `uid` attribute of `person` objects.
-These `person` objects have `role` attribute(s) containing the role(s) of a user.
-
-```
---ldap.enabled=true \
---ldap.server=ldap.arangodb.com \
---ldap.basedn=dc=arangodb,dc=com \
---ldap.binddn=uid=arangoadmin,dc=arangodb,dc=com \
---ldap.bindpasswd=supersecretpassword \
---ldap.tls true \
---ldap.tls-cacert-file /path/to/certificate.pem \
---ldap.search-attribute=uid \
---ldap.search-filter=objectClass=person \
---ldap.roles-attribute-name=role
-```
diff --git a/site/content/3.11/components/tools/_index.md b/site/content/3.11/components/tools/_index.md
deleted file mode 100644
index 72bf4118a4..0000000000
--- a/site/content/3.11/components/tools/_index.md
+++ /dev/null
@@ -1,35 +0,0 @@
----
-title: Tools
-menuTitle: Tools
-weight: 180
-description: >-
- ArangoDB ships with command-line tools like for accessing server instances
- programmatically, deploying clusters, creating backups, and importing data
----
-A full ArangoDB installation package contains the [ArangoDB server](../arangodb-server/_index.md)
-(`arangod`) as well as the following client tools:
-
-| Executable name | Brief description |
-|-----------------|-------------------|
-| `arangosh` | [ArangoDB shell](arangodb-shell/_index.md). A client that implements a read-eval-print loop (REPL) and provides functions to access and administrate the ArangoDB server.
-| `arangodb` | [ArangoDB Starter](arangodb-starter/_index.md) for easy deployment of ArangoDB instances.
-| `arangodump` | Tool to [create backups](arangodump/_index.md) of an ArangoDB database.
-| `arangorestore` | Tool to [load backups](arangorestore/_index.md) back into an ArangoDB database.
-| `arangobackup` | Tool to [perform hot backup operations](arangobackup/_index.md) on an ArangoDB installation.
-| `arangoimport` | [Bulk importer](arangoimport/_index.md) for the ArangoDB server. It supports JSON and CSV.
-| `arangoexport` | [Bulk exporter](arangoexport/_index.md) for the ArangoDB server. It supports JSON, CSV and XML.
-| `arangobench` | [Benchmark and test tool](arangobench/_index.md). It can be used for performance and server function testing.
-| `arangovpack` | Utility to validate and [convert VelocyPack](arangovpack/_index.md) and JSON data.
-| `arangoinspect` | [Inspection tool](arangoinspect/_index.md) that gathers server setup information.
-
-A client installation package comes without the `arangod` server executable and
-the ArangoDB Starter.
-
-Additional tools which are available separately:
-
-| Name | Brief description |
-|-----------------|-------------------|
-| [Foxx CLI](foxx-cli/_index.md) | Command line tool for managing and developing Foxx services
-| [kube-arangodb](../../deploy/kubernetes.md) | Operators to manage Kubernetes deployments
-| [Oasisctl](../../arangograph/oasisctl/_index.md) | Command-line tool for managing the ArangoGraph Insights Platform
-| [ArangoDB Datasets](arango-datasets.md) | A Python package for loading sample datasets into ArangoDB
diff --git a/site/content/3.11/components/tools/arangodump/examples.md b/site/content/3.11/components/tools/arangodump/examples.md
deleted file mode 100644
index 473f550e1c..0000000000
--- a/site/content/3.11/components/tools/arangodump/examples.md
+++ /dev/null
@@ -1,317 +0,0 @@
----
-title: _arangodump_ Examples
-menuTitle: Examples
-weight: 5
-description: ''
----
-_arangodump_ can be invoked from the command line by executing the following command:
-
-```
-arangodump --output-directory "dump"
-```
-
-This connects to an ArangoDB server and dumps all non-system collections from
-the default database (`_system`) into an output directory named `dump`.
-Invoking _arangodump_ fails if the output directory already exists. This is
-an intentional security measure to prevent you from accidentally overwriting already
-dumped data. If you are positive that you want to overwrite data in the output
-directory, you can use the parameter `--overwrite true` to confirm this:
-
-```
-arangodump --output-directory "dump" --overwrite true
-```
-
-_arangodump_ connects to the `_system` database by default using the default
-endpoint. To override the endpoint, or specify a different user, use one of the
-following startup options:
-
-- `--server.endpoint <endpoint>`: endpoint to connect to
-- `--server.username <username>`: username
-- `--server.password <password>`: password to use (omit this and you'll be prompted for the
-  password)
-- `--server.authentication <bool>`: whether or not to use authentication
-
-If you want to connect to a different database or dump all databases, you can additionally
-use the following startup options:
-
-- `--all-databases true`: dump all databases (the user must have access to all
-  databases, and `--server.database` must not be specified)
-- `--server.database <database-name>`: name of the database to connect to
-
-Note that the specified user must have access to the databases.
-
-Here's an example of dumping data from a non-standard endpoint, using a dedicated
-[database name](../../../concepts/data-structure/databases.md#database-names):
-
-```
-arangodump \
- --server.endpoint tcp://192.168.173.13:8531 \
- --server.username backup \
- --server.database mydb \
- --output-directory "dump"
-```
-
-In contrast to the above call, `--server.database` must not be specified when dumping
-all databases using `--all-databases true`:
-
-```
-arangodump \
- --server.endpoint tcp://192.168.173.13:8531 \
- --server.username backup \
- --all-databases true \
- --output-directory "dump-multiple"
-```
-
-When finished, _arangodump_ prints out a summary line with some aggregate
-statistics about what it did, e.g.:
-
-```
-Processed 43 collection(s), wrote 408173500 byte(s) into datafiles, sent 88 batch(es)
-```
-
-Also, more than one endpoint can be provided, such as:
-
-```
-arangodump \
- --server.endpoint tcp://192.168.173.13:8531 \
- --server.endpoint tcp://192.168.173.13:8532 \
- --server.username backup \
- --all-databases true \
- --output-directory "dump-multiple"
-```
-
-By default, _arangodump_ dumps both structural information and documents from all
-non-system collections. To adjust this, there are the following command-line
-arguments:
-
-- `--dump-data <bool>`: set to `true` to include documents in the dump. Set to `false`
-  to exclude documents. The default value is `true`.
-- `--include-system-collections <bool>`: whether or not to include system collections
-  in the dump. The default value is `false`. **Set to _true_ if you are using named
-  graphs that you are interested in restoring.**
-
-For example, to only dump structural information of all collections (including system
-collections), use:
-
-```
-arangodump --dump-data false --include-system-collections true --output-directory "dump"
-```
-
-To restrict the dump to just specific collections, use the `--collection` option.
-You can specify it multiple times if required:
-
-```
-arangodump --collection myusers --collection myvalues --output-directory "dump"
-```
-
-Structural information for a collection is saved in files with name pattern
-`<collection-name>.structure.json`. Each structure file contains a JSON object
-with these attributes:
-- `parameters`: contains the collection properties
-- `indexes`: contains the collection indexes
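-
-A minimal sketch of what such a structure file may contain (the values are
-illustrative; the exact set of attributes varies with the server version and
-the collection settings):
-
-```json
-{
-  "parameters": { "name": "myusers", "waitForSync": false },
-  "indexes": [ { "id": "0", "type": "primary", "fields": ["_key"] } ]
-}
-```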
-
-Document data for a collection is saved in files with name pattern
-`<collection-name>.data.json`. Each line in a data file is a document
-insertion/update or deletion marker, along with some metadata.
-
-## Cluster Backup
-
-The _arangodump_ tool supports sharding and can be used to back up data from a Cluster.
-Simply point it to one of the _Coordinators_ and it
-behaves exactly as described above, working on sharded collections
-in the Cluster.
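-
-For example, a sketch using an illustrative Coordinator endpoint:
-
-```
-arangodump \
-  --server.endpoint tcp://coordinator1.example.com:8529 \
-  --output-directory "dump"
-```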
-
-Please see the [Limitations](limitations.md).
-
-As above, the output is one structure description file and one data
-file per sharded collection. Note that the data in the data file is
-sorted first by shards and within each shard by ascending timestamp. The
-structural information of the collection contains the number of shards
-and the shard keys.
-
-Note that the version of the arangodump client tool needs to match the
-version of the ArangoDB server it connects to.
-
-### Dumping collections with sharding prototypes
-
-Collections may be created with the shard distribution identical to an existing
-prototypical collection (see [`distributeShardsLike`](../../../develop/javascript-api/@arangodb/db-object.md#db_createcollection-name--properties--type--options));
-i.e. shards are distributed in the very same pattern as in the prototype collection.
-Such collections cannot be dumped without the referenced prototype collection;
-otherwise, _arangodump_ yields an error.
-
-```
-arangodump --collection clonedCollection --output-directory "dump"
-
-ERROR [f7ff5] {dump} An error occurred: Collection clonedCollection's shard distribution is based on that of collection prototypeCollection, which is not dumped along.
-```
-
-You need to dump the prototype collection as well:
-
-```
-arangodump --collection clonedCollection --collection prototypeCollection --output-directory "dump"
-
-...
-INFO [66c0e] {dump} Processed 2 collection(s) from 1 database(s) in 0.132990 s total time. Wrote 0 bytes into datafiles, sent 6 batch(es) in total.
-```
-
-## Encryption
-
-{{< tag "ArangoDB Enterprise Edition" "ArangoGraph" >}}
-
-You can encrypt dumps using an encryption keyfile, which must contain exactly 32
-bytes of data (required by the AES block cipher).
-
-The keyfile can be created by an external program, or, on Linux, by using a command
-like the following:
-
-```
-dd if=/dev/random bs=1 count=32 of=yourSecretKeyFile
-```
-
-For security reasons, it is best to create these keys offline (away from your
-database servers) and directly store them in your secret management
-tool.
-
-In order to create an encrypted backup, add the `--encryption.keyfile`
-option when invoking _arangodump_, in addition to any other option you
-are already using. The following example assumes that your secret key
-is stored in `~/SECRET-KEY`:
-
-```
-arangodump --collection "secret-collection" dump --encryption.keyfile ~/SECRET-KEY
-```
-
-Note that _arangodump_ does not store the key anywhere. It is the responsibility
-of the user to find a safe place for the key. However, _arangodump_ stores
-the used encryption method in a file named `ENCRYPTION` in the dump directory.
-That way _arangorestore_ can later find out whether it is dealing with an
-encrypted dump or not.
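-
-For illustration, the `ENCRYPTION` file simply names the cipher that was used,
-for example:
-
-```
-cat dump/ENCRYPTION
-aes-256-ctr
-```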
-
-Trying to restore the encrypted dump without specifying the key fails
-and _arangorestore_ reports an error:
-
-```
-arangorestore --collection "secret-collection" dump --create-collection true
-...
-the dump data seems to be encrypted with aes-256-ctr, but no key information was specified to decrypt the dump
-it is recommended to specify either `--encryption.keyfile` or `--encryption.key-generator` when invoking arangorestore with an encrypted dump
-```
-
-It is required to use the exact same key when restoring the data. Again this is
-done by providing the `--encryption.keyfile` parameter:
-
-```
-arangorestore --collection "secret-collection" dump --create-collection true --encryption.keyfile ~/SECRET-KEY
-```
-
-Using a different key leads to the backup being non-recoverable.
-
-Note that encrypted backups can be used together with the already existing
-RocksDB encryption-at-rest feature.
-
-## Compression
-
-`--compress-output`
-
-Data can optionally be dumped in a compressed format to save space on disk.
-The `--compress-output` option cannot be used together with [Encryption](#encryption).
-
-If compression is enabled, no `.data.json` files are written. Instead, the
-collection data gets compressed using the Gzip algorithm and for each collection
-a `.data.json.gz` file is written. Metadata files such as `.structure.json` and
-`.view.json` do not get compressed.
-
-```
-arangodump --output-directory "dump" --compress-output
-```
-
-Compressed dumps can be restored with _arangorestore_, which automatically
-detects whether the data is compressed or not based on the file extension.
-
-```
-arangorestore --input-directory "dump"
-```
-
-## Dump output format
-
-Introduced in: v3.8.0
-
-Since its inception, _arangodump_ wrapped each dumped document into an extra
-JSON envelope, as follows:
-
-```json
-{"type":2300,"key":"test","data":{"_key":"test","_rev":..., ...}}
-```
-
-This original dump format was useful for the MMFiles storage engine,
-which could use different `type` values in its datafiles.
-However, the RocksDB storage engine only uses `"type":2300` (document) when
-dumping data, so the JSON wrapper provides no further benefit except
-compatibility with older versions of ArangoDB.
-
-In case a dump taken with v3.8.0 or higher is known to never be used in older
-ArangoDB versions, the JSON envelopes can be turned off. The startup option
-`--envelope` controls this. The option defaults to `true`, meaning dumped
-documents are wrapped in envelopes, which makes new dumps compatible with
-older versions of ArangoDB.
-
-If that is not needed, the `--envelope` option can be set to `false`.
-In this case, the dump files only contain the raw documents, without any
-envelopes around them:
-
-```json
-{"_key":"test","_rev":..., ...}
-```
-
-Disabling the envelopes can **reduce dump sizes** a lot, especially if documents
-are small on average and the relative cost of the envelopes is high. Omitting
-the envelopes can also help to **save a bit on memory usage and bandwidth** for
-building up the dump results and sending them over the wire.
-
-As a bonus, turning off the envelopes turns _arangodump_ into a fast, concurrent
-JSONL exporter for one or multiple collections:
-
-```
-arangodump --collection "collection" --threads 8 --envelope false --compress-output false dump
-```
-
-The JSONL format is also supported by _arangoimport_ natively.
-
-{{< warning >}}
-Dumps created with the `--envelope false` setting cannot be restored into any
-ArangoDB versions older than v3.8.0!
-{{< /warning >}}
-
-## Threads
-
-_arangodump_ can use multiple threads for dumping database data in
-parallel. To speed up the dump of a database with multiple collections, it is
-often beneficial to increase the number of _arangodump_ threads.
-The number of threads can be controlled via the `--threads` option. The default
-value is the maximum of `2` and the number of available CPU cores.
-
-The default value of the `--threads` option is determined dynamically from the
-number of available CPU cores. If fewer than `3` CPU cores are available, a
-threads value of `2` is used. Otherwise, the number of threads is set to the
-number of available CPU cores.
-
-For example:
-
-- If a system has 8 cores, then max(2,8) = 8, i.e. 8 threads are used.
-- If it has 1 core, then max(2,1) = 2, i.e. 2 threads are used.
-
-_arangodump_ versions prior to v3.8.0 distribute dump jobs for individual
-collections to concurrent worker threads, which is optimal for dumping many
-collections of approximately the same size, but does not help for dumping only
-a few large collections or a few collections with many shards.
-
-Since v3.8.0, _arangodump_ can also dispatch dump jobs for individual shards of
-each collection, allowing higher parallelism if there are many shards to dump
-but only few collections. Keep in mind that even when concurrently dumping the
-data from multiple shards of the same collection in parallel, the individual
-shards' results are still written into a single result file for the collection.
-With a massive number of concurrent dump threads, some contention on that shared
-file should be expected. Also note that when dumping the data of multiple shards
-from the same collection, each thread's results are written to the result
-file in a non-deterministic order. This should not be a problem when restoring
-such a dump, as _arangorestore_ does not assume any order of input.
diff --git a/site/content/3.11/components/tools/arangodump/maskings.md b/site/content/3.11/components/tools/arangodump/maskings.md
deleted file mode 100644
index 415262ccd5..0000000000
--- a/site/content/3.11/components/tools/arangodump/maskings.md
+++ /dev/null
@@ -1,1050 +0,0 @@
----
-title: _arangodump_ Data Maskings
-menuTitle: Maskings
-weight: 15
-description: >-
-  `arangodump` supports obfuscating and redacting information when dumping,
-  allowing you to share dumps with third parties without exposing sensitive data
----
-The masking feature allows you to define how sensitive data shall be dumped.
-It is possible to exclude collections entirely, limit the dump to the
-structural information of a collection (name, indexes, sharding etc.)
-or to obfuscate certain fields for a dump.
-
-You can make use of the feature by specifying a configuration file using the
-`--maskings` startup option when invoking `arangodump`.
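-
-For example (assuming the configuration file is named `maskings.json`):
-
-```
-arangodump --maskings maskings.json --output-directory "dump"
-```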
-
-A JSON configuration file is used to define which collections and fields to mask
-and how. The general structure of the configuration file looks like this:
-
-```js
-{
-  "<collection-name>": {
-    "type": "<masking-type>",
-    "maskings": [ // if masking-type is "masked"
-      { "path": "<attribute-path>", "type": "<masking-function>", ... }, // rule 1
-      { "path": "<attribute-path>", "type": "<masking-function>", ... }, // rule 2
-      ...
-    ]
-  },
-  "<collection-name-2>": { ... },
-  "<collection-name-3>": { ... },
-  "*": { ... }
-}
-```
-
-At the top level, there is an object with collection names. The masking to be
-applied to the respective collection is defined by the `type` sub-attribute.
-If the `type` is `"masked"`, then a sibling `maskings` attribute is available
-to define rules for obfuscating documents.
-
-Using `"*"` as collection name defines a default behavior for collections
-not listed explicitly.
-
-## Masking Types
-
-`type` is a string describing how to mask the given collection.
-Possible values are:
-
-- `"exclude"`: the collection is ignored completely and not even the
- structure data is dumped.
-
-- `"structure"`: only the collection structure is dumped, but no data at all
- (the file `.data.json` or `.data.json.gz`
- respectively is still created, but will not contain data).
-
-- `"masked"`: the collection structure and all data is dumped. However, the data
- is subject to obfuscation defined in the attribute `maskings`. It is an array
- of objects, with one object per masking rule. Each object needs at least a
- `path` and a `type` attribute to [define which field to mask](#path) and which
- [masking function](#masking-functions) to apply. Depending on the
- masking type, there may exist additional attributes to control the masking
- function behavior.
-
-- `"full"`: the collection structure and all data is dumped. No masking is
- applied to this collection at all.
-
-**Example**
-
-```json
-{
- "private": {
- "type": "exclude"
- },
-
- "temperature": {
- "type": "full"
- },
-
- "log": {
- "type": "structure"
- },
-
- "person": {
- "type": "masked",
- "maskings": [
- {
- "path": "name",
- "type": "xifyFront",
- "unmaskedLength": 2
- },
- {
- "path": ".security_id",
- "type": "xifyFront",
- "unmaskedLength": 2
- }
- ]
- }
-}
-```
-
-- The collection called _private_ is completely ignored.
-- Only the structure of the collection _log_ is dumped, but not the data itself.
-- The structure and data of the _temperature_ collection are dumped without any
-  obfuscation of document attributes.
-- The collection _person_ is dumped completely but with maskings applied:
-  - The _name_ field is masked if it occurs at the top level.
- - It also masks fields with the name _security_id_ anywhere in the document.
-  - The masking function is of type [_xifyFront_](#xify-front) in both cases.
-    The additional setting `unmaskedLength` is specific to _xifyFront_.
-- All additional collections that might exist in the targeted database are
-  ignored (like the collection _private_), as there is no attribute key
-  `"*"` to specify a different default type for the remaining collections.
-
-### Masking vs. dump-data option
-
-_arangodump_ also supports a very coarse masking with the option
-`--dump-data false`, which leaves out all data for the dump.
-
-You can either use `--maskings` or `--dump-data false`, but not both.
-
-### Masking vs. collection option
-
-_arangodump_ also supports a very coarse masking with the option
-`--collection`. This restricts the collections that are
-dumped to the ones explicitly listed.
-
-It is possible to combine `--maskings` and `--collection`.
-This takes the intersection of exportable collections.
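-
-For example, a sketch that dumps only the `person` collection and applies any
-maskings defined for it in `maskings.json` (both names are illustrative):
-
-```
-arangodump --collection person --maskings maskings.json --output-directory "dump"
-```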
-
-## Path
-
-`path` defines which field to obfuscate. There can only be a single
-path per masking, but an unlimited number of maskings per collection.
-
-```json
-{
- "collection1": {
- "type": "masked",
- "maskings": [
- {
- "path": "attr1",
- "type": "random"
- },
- {
- "path": "attr2",
- "type": "randomString"
- },
- ...
- ]
- },
- "collection2": {
- "type": "masked",
- "maskings": [
- {
- "path": "attr3",
- "type": "random"
- }
- ]
- },
- ...
-}
-```
-
-Top-level **system attributes** (`_key`, `_id`, `_rev`, `_from`, `_to`) are
-never masked.
-
-To mask a top-level attribute value, the path is simply the attribute
-name, for instance `"name"` to mask the value `"foobar"`:
-
-```json
-{
- "_key": "1234",
- "name": "foobar"
-}
-```
-
-The path to a nested attribute `name` with a top-level attribute `person`
-as its parent is `"person.name"` (here: `"foobar"`):
-
-```json
-{
- "_key": "1234",
- "person": {
- "name": "foobar"
- }
-}
-```
-
-Example masking definition:
-
-```json
-{
-  "<collection-name>": {
-    "type": "masked",
-    "maskings": [
-      {
-        "path": "person.name",
-        "type": "<masking-function>"
-      }
-    ]
-  }
-}
-```
-
-If the path starts with a `.` then it matches any path ending in `name`.
-For example, `.name` matches the field `name` of all leaf attributes
-in the document. Leaf attributes are attributes whose value is `null`,
-`true`, `false`, or of data type `string`, `number` or `array`.
-That means, it matches `name` at the top level as well as at any nested level
-(e.g. `foo.bar.name`), but not nested objects themselves.
-
-On the other hand, `name` only matches leaf attributes
-at top level. `person.name` matches the attribute `name` of a leaf
-in the top-level object `person`. If the value at `person.name` is itself
-an object, then the masking settings for this path are ignored, because it
-is not a leaf attribute.
-
-If the attribute value is an **array** then the masking is applied to
-**all array elements individually**.
-
-The special path `*` matches **all** leaf nodes of a document.
-
-If you have an attribute key that contains a dot (like `{ "name.with.dots": … }`)
-or a top-level attribute with a single asterisk as full name (`{ "*": … }`)
-then you need to quote the name in ticks or backticks:
-
-- `"path": "´name.with.dots´"`
-- `` "path": "`name.with.dots`" ``
-- `"path": "´*´"`
-- `` "path": "`*`" ``
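-
-For instance, a masking rule for such an attribute could look like this
-(a sketch using the _Random String_ function):
-
-```json
-{
-  "path": "`name.with.dots`",
-  "type": "randomString"
-}
-```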
-
-**Example**
-
-The following configuration replaces the value of the `name`
-attribute with an "xxxx"-masked string:
-
-```json
-{
- "type": "xifyFront",
- "path": ".name",
- "unmaskedLength": 2
-}
-```
-
-The document:
-
-```json
-{
- "name": "top-level-name",
- "age": 42,
- "nicknames" : [ { "name": "hugo" }, "egon" ],
- "other": {
- "name": [ "emil", { "secret": "superman" } ]
- }
-}
-```
-
-… is changed as follows:
-
-```json
-{
- "name": "xxxxxxxxxxxxme",
- "age": 42,
- "nicknames" : [ { "name": "xxgo" }, "egon" ],
- "other": {
- "name": [ "xxil", { "secret": "superman" } ]
- }
-}
-```
-
-The values `"egon"` and `"superman"` are not replaced, because they
-are not values of an attribute named `name`.
-
-### Nested objects and arrays
-
-If you specify a path and the attribute value is an array then the
-masking decision is applied to each element of the array as if this
-was the value of the attribute. This applies to arrays inside the array too.
-
-If the attribute value is an object, then it is ignored and the attribute
-does not get masked. To mask nested fields, specify the full path for each
-leaf attribute.
-
-{{< tip >}}
-If some documents have an attribute `mail` with a string as value, but other
-documents store a nested object under the same attribute name, then make sure
-to set up proper masking for the latter case as well: sub-attributes are not
-masked if a masking is only configured for the attribute `mail`
-but not for its nested attributes.
-
-You can use the special path `"*"` to **match all leaf attributes** in the
-document.
-{{< /tip >}}
-
-**Examples**
-
-Masking `mail` with the _Xify Front_ function:
-
-```json
-{
-  "<collection-name>": {
- "type": "masked",
- "maskings": [
- {
- "path": "mail",
- "type": "xifyFront"
- }
- ]
- }
-}
-```
-
-… converts this document:
-
-```json
-{
- "mail" : "mail address"
-}
-```
-
-… into:
-
-```json
-{
- "mail" : "xxil xxxxxxss"
-}
-```
-
-because `mail` is a leaf attribute. The document:
-
-```json
-{
- "mail" : [
- "address one",
- "address two",
- [
- "address three"
- ]
- ]
-}
-```
-
-… is converted into:
-
-```json
-{
- "mail" : [
- "xxxxxss xne",
- "xxxxxss xwo",
- [
- "xxxxxss xxxee"
- ]
- ]
-}
-```
-
-… because the masking is applied to each array element individually
-including the elements of the sub-array. The document:
-
-```json
-{
- "mail" : {
- "address" : "mail address"
- }
-}
-```
-
-… is not masked because `mail` is not a leaf attribute.
-To mask the mail address, you could use the paths `mail.address`
-or `.address` in the masking definition:
-
-```json
-{
-  "<collection-name>": {
- "type": "masked",
- "maskings": [
- {
- "path": ".address",
- "type": "xifyFront"
- }
- ]
- }
-}
-```
-
-A catch-all `"path": "*"` would apply to the nested `address` attribute too,
-but it would mask all other string attributes as well, which may not be what
-you want. The syntax `"path": "mail.*"` to only match the sub-attributes of the
-top-level `mail` attribute is not supported.
-
-### Rule precedence
-
-Masking rules may overlap, for instance if you specify the same path multiple
-times, or if you define a rule for a specific field but also one which matches
-all leaf attributes of the same name.
-
-In such cases, the precedence is determined by the order in which the rules are
-defined in the masking configuration file, giving priority to the first matching
-rule (i.e. the rule above the other ambiguous ones).
-
-```json
-  "<collection-name>": {
- "": {
- "type": "masked",
- "maskings": [
- {
- "path": "address",
- "type": "xifyFront"
- },
- {
- "path": ".address",
- "type": "random"
- }
- ]
- }
-}
-```
-
-The above masking definition obfuscates the top-level attribute `address` with
-the `xifyFront` function, whereas all nested attributes with the name `address`
-use the `random` masking function. If the rules are defined in reverse
-order, however, then all attributes called `address` are obfuscated using
-`random`. The second, overlapping rule is effectively ignored:
-
-```json
-{
-  "<collection-name>": {
- "type": "masked",
- "maskings": [
- {
- "path": ".address",
- "type": "random"
- },
- {
- "path": "address",
- "type": "xifyFront"
- }
- ]
- }
-}
-```
-
-This behavior also applies to the catch-all path `"*"`, which means it should
-generally be placed below all other rules for a collection so that it is used
-for all unspecified attribute paths. Otherwise, all document attributes are
-processed by a single masking function, ignoring any other rules below it.
-
-```json
-{
-  "<collection-name>": {
- "type": "masked",
- "maskings": [
- {
- "path": "address",
- "type": "random"
- },
- {
- "path": ".address",
- "type": "xifyFront"
- },
- {
- "path": "*",
- "type": "email"
- }
- ]
- }
-}
-```
-
-## Masking Functions
-
-{{< tag "ArangoDB Enterprise Edition" "ArangoGraph" >}}
-
-- [Xify Front](#xify-front)
-- [Zip](#zip)
-- [Datetime](#datetime)
-- [Integer Number](#integer-number)
-- [Decimal Number](#decimal-number)
-- [Credit Card Number](#credit-card-number)
-- [Phone Number](#phone-number)
-- [Email Address](#email-address)
-
-The masking functions:
-
-- [Random String](#random-string)
-- [Random](#random)
-
-… are available in the Community Edition as well as the Enterprise Edition.
-
-### Random String
-
-This masking type replaces string attribute values with an anonymized string.
-It is not guaranteed that the replacement string has the same length as the
-original. Attributes whose values are not strings
-are not modified.
-
-A hash of the original string is computed. If the original string is
-shorter than the hash, then the hash is used as-is. This results in a longer
-replacement string. If the original string is longer than the hash, then the
-hash characters are repeated as many times as needed to reach the full
-original string length.
-
-Masking settings:
-
-- `path` (string): which field to mask
-- `type` (string): masking function name `"randomString"`
-
-**Example**
-
-```json
-{
- "path": ".name",
- "type": "randomString"
-}
-```
-
-The above masking setting applies to all leaf attributes named `name`
-at any level. A document like:
-
-```json
-{
- "_key" : "1234",
- "name" : [
- "My Name",
- {
- "other" : "Hallo Name"
- },
- [
- "Name One",
- "Name Two"
- ],
- true,
- false,
- null,
- 1.0,
- 1234,
- "This is a very long name"
- ],
- "deeply": {
- "nested": {
- "name": "John Doe",
- "not-a-name": "Pizza"
- }
- }
-}
-```
-
-… is converted to:
-
-```json
-{
- "_key": "1234",
- "name": [
- "+y5OQiYmp/o=",
- {
- "other": "Hallo Name"
- },
- [
- "ihCTrlsKKdk=",
- "yo/55hfla0U="
- ],
- true,
- false,
- null,
- 1.0,
- 1234,
- "hwjAfNe5BGw=hwjAfNe5BGw="
- ],
- "deeply": {
- "nested": {
- "name": "55fHctEM/wY=",
- "not-a-name": "Pizza"
- }
- }
-}
-```
-
-### Random
-
-This masking type substitutes leaf attribute values of all data types with
-random values of the same kind:
-
-- Strings are replaced with [random strings](#random-string).
-- Numbers are replaced with random integer or decimal numbers, depending on
- the original value (but not keeping sign or scientific notation).
- The generated numbers are between -1000 and 1000.
-- Booleans are randomly replaced with `true` or `false`.
-- `null` values remain `null`.
-
-Masking settings:
-
-- `path` (string): which field to mask
-- `type` (string): masking function name `"random"`
-
-**Examples**
-
-```json
-{
- "collection": {
- "type": "masked",
- "maskings": [
- {
- "path": "*",
- "type": "random"
- }
- ]
- }
-}
-```
-
-Using the above masking configuration, all leaf attributes of the documents in
-_collection_ are randomized. A possible input document:
-
-```json
-{
- "_key" : "1121535",
- "_id" : "coll/1121535",
- "_rev" : "_Z3AKGjW--_",
- "nullValue" : null,
- "bool" : true,
- "int" : 1,
- "decimal" : 2.34,
- "string" : "hello",
- "array" : [
- null,
- false,
- true,
- 0,
- -123,
- 0.45,
- 6e7,
- -0.8e-3,
- "nine",
- "Lorem ipsum sit dolor amet.",
- [
- false,
- false
- ],
- {
- "obj" : "nested"
- }
- ]
-}
-```
-
-… could result in an output like this:
-
-```json
-{
- "_key": "1121535",
- "_id": "coll/1121535",
- "_rev": "_Z3AKGjW--_",
- "nullValue": null,
- "bool": false,
- "int": -900,
- "decimal": -4.27,
- "string": "etxfOC+K0HM=",
- "array": [
- null,
- true,
- false,
- 754,
- -692,
- 2.64,
- 834,
- 1.69,
- "NGf7NKGrMYw=",
- "G0czIlvaGw4=G0czIlvaGw4=G0c",
- [
- false,
- true
- ],
- {
- "obj": "eCGe36xiRho="
- }
- ]
-}
-```
-
-### Xify Front
-
-This masking type replaces the front characters with `x` and
-blanks. Alphanumeric characters, `_` and `-` are replaced by `x`,
-everything else is replaced by a blank.
-
-Masking settings:
-
-- `path` (string): which field to mask
-- `type` (string): masking function name `"xifyFront"`
-- `unmaskedLength` (number, _default: `2`_): how many characters to
-  leave as-is on the right-hand side of each word, as an integer value
-- `hash` (bool, _default: `false`_): whether to append a hash value to the
- masked string to avoid possible unique constraint violations caused by
- the obfuscation
-- `seed` (integer, _default: `0`_): used as secret for computing the hash.
- A value of `0` means a random seed
-
-**Examples**
-
-```json
-{
-  "<collection-name>": {
- "type": "masked",
- "maskings": [
- {
- "path": ".name",
- "type": "xifyFront",
- "unmaskedLength": 2
- }
- ]
- }
-}
-```
-
-This affects attributes with key `"name"` at any level by masking all
-alphanumeric characters of a word except the last two characters. Words of
-length 1 and 2 remain unmasked. If the attribute value is not a string but
-boolean or numeric, then the result is `"xxxx"` (fixed length).
-`null` values remain `null`.
-
-```json
-{
- "name": "This is a test!Do you agree?",
- "bool": true,
- "number": 1.23,
- "null": null
-}
-```
-
-… becomes:
-
-```json
-{
- "name": "xxis is a xxst Do xou xxxee ",
- "bool": "xxxx",
- "number": "xxxx",
- "null": null
-}
-```
-
-There is a catch: if you have an index on the attribute, the masking
-might reduce the index efficiency or even cause errors in case of a
-unique index.
-
-```json
-{
- "path": ".name",
- "type": "xifyFront",
- "unmaskedLength": 2,
- "hash": true
-}
-```
-
-This adds a hash at the end of the string.
-
-```
-"This is a test!Do you agree?"
-```
-
-… becomes
-
-```
-"xxis is a xxst Do xou xxxee NAATm8c9hVQ="
-```
-
-Note that the hash is based on a random secret that is different for
-each run. This avoids dictionary attacks that could be used to guess
-values based on pre-computations with dictionaries.
-
-If you need reproducible results, i.e. hashes that do not change between
-different runs of _arangodump_, you need to specify a secret as seed,
-a number which must not be `0`.
-
-```json
-{
- "path": ".name",
- "type": "xifyFront",
- "unmaskedLength": 2,
- "hash": true,
- "seed": 246781478647
-}
-```
-
-### Zip
-
-This masking type replaces a zip code with a random one.
-It uses the following rules:
-
-- If a character of the original zip code is a digit, it is replaced
- by a random digit.
-- If a character of the original zip code is a letter, it
- is replaced by a random letter keeping the case.
-- If the attribute value is not a string then the default value is used.
-
-Note that this generates random zip codes. Therefore there is a
-chance that the same zip code value is generated multiple times, which can
-cause unique constraint violations if a unique index is or will be
-used on the zip code attribute.
-
-Masking settings:
-
-- `path` (string): which field to mask
-- `type` (string): masking function name `"zip"`
-- `default` (string, _default: `"12345"`_): if the input field is not of
- data type `string`, then this value is used
-
-**Examples**
-
-```json
-{
- "path": ".code",
-  "type": "zip"
-}
-```
-
-This replaces real zip codes stored in fields called `code` at any level
-with random ones. `"12345"` is used as fallback value.
-
-```json
-{
- "path": ".code",
- "type": "zip",
- "default": "abcdef"
-}
-```
-
-If the original zip code is:
-
-```
-50674
-```
-
-… it is replaced by e.g.:
-
-```
-98146
-```
-
-If the original zip code is:
-
-```
-SA34-EA
-```
-
-… it is replaced by e.g.:
-
-```
-OW91-JI
-```
-
-If the original zip code is `null`, `true`, `false` or a number, then the
-user-defined default value of `"abcdef"` is used.
-
-### Datetime
-
-This masking type replaces the value of the attribute with a random
-date between two configured dates in a customizable format.
-
-Masking settings:
-
-- `path` (string): which field to mask
-- `type` (string): masking function name `"datetime"`
-- `begin` (string, _default: `"1970-01-01T00:00:00.000"`_):
- earliest point in time to return. Date time string in ISO 8601 format.
-- `end` (string, _default: now_):
- latest point in time to return. Date time string in ISO 8601 format.
- In case a partial date time string is provided (e.g. `2010-06` without day
- and time) the earliest date and time is assumed (`2010-06-01T00:00:00.000`).
- The default value is the current system date and time.
-- `format` (string, _default: `""`_): the formatting string format is
- described in [`DATE_FORMAT()`](../../../aql/functions/date.md#date_format).
- If no format is specified, then the result is an empty string.
-
-**Example**
-
-```json
-{
- "path": "eventDate",
- "type": "datetime",
- "begin" : "2019-01-01",
- "end": "2019-12-31",
-  "format": "%yyyy-%mm-%dd"
-}
-```
-
-The above example masks the field `eventDate` by returning a random date time
-string in the range of January 1st to December 31st, 2019, using a format
-like `2019-06-17`.
-
-### Integer Number
-
-This masking type replaces the value of the attribute with a random
-integer number. It replaces the value even if it is a string,
-Boolean, or `null`.
-
-Masking settings:
-
-- `path` (string): which field to mask
-- `type` (string): masking function name `"integer"`
-- `lower` (number, _default: `-100`_): smallest integer value to return
-- `upper` (number, _default: `100`_): largest integer value to return
-
-**Example**
-
-```json
-{
- "path": "count",
- "type": "integer",
- "lower" : -100,
- "upper": 100
-}
-```
-
-This masks the field `count` with a random number between
--100 and 100 (inclusive).
-
-### Decimal Number
-
-This masking type replaces the value of the attribute with a random
-floating point number. It replaces the value even if it is a string,
-Boolean, or `null`.
-
-Masking settings:
-
-- `path` (string): which field to mask
-- `type` (string): masking function name `"decimal"`
-- `lower` (number, _default: `-1`_): smallest floating point value to return
-- `upper` (number, _default: `1`_): largest floating point value to return
-- `scale` (number, _default: `2`_): maximal amount of digits in the
- decimal fraction part
-
-**Examples**
-
-```json
-{
- "path": "rating",
- "type": "decimal",
- "lower" : -0.3,
- "upper": 0.3
-}
-```
-
-This masks the field `rating` with a random floating point number between
--0.3 and +0.3 (inclusive). By default, the decimal has a scale of 2.
-That means, it has at most 2 digits after the dot.
-
-The configuration:
-
-```json
-{
- "path": "rating",
- "type": "decimal",
- "lower" : -0.3,
- "upper": 0.3,
- "scale": 3
-}
-```
-
-… generates numbers with at most 3 decimal digits.
-
-### Credit Card Number
-
-This masking type replaces the value of the attribute with a random
-credit card number (as an integer number).
-See [Luhn algorithm](https://en.wikipedia.org/wiki/Luhn_algorithm)
-for details.
-
-Masking settings:
-
-- `path` (string): which field to mask
-- `type` (string): masking function name `"creditCard"`
-
-**Example**
-
-```json
-{
- "path": "ccNumber",
- "type": "creditCard"
-}
-```
-
-This generates a random credit card number to mask field `ccNumber`,
-e.g. `4111111414443302`.
-
-### Phone Number
-
-This masking type replaces a phone number with a random one.
-It uses the following rules:
-
-- If a character of the original number is a digit
- it is replaced by a random digit.
-- If it is a letter it is replaced by a random letter.
-- All other characters are left unchanged.
-- If the attribute value is not a string it is replaced by the
- default value.
-
-Masking settings:
-
-- `path` (string): which field to mask
-- `type` (string): masking function name `"phone"`
-- `default` (string, _default: `"+1234567890"`_): if the input field
- is not of data type `string`, then this value is used
-
-**Examples**
-
-```json
-{
- "path": "phone.landline",
- "type": "phone"
-}
-```
-
-This replaces an existing phone number with a random one, for instance
-`"+31 66-77-88-xx"` might get substituted by `"+75 10-79-52-sb"`.
-
-```json
-{
- "path": "phone.landline",
- "type": "phone",
- "default": "+49 12345 123456789"
-}
-```
-
-This masks a phone number as before, but falls back to a different default
-phone number in case the input value is not a string.
-
-### Email Address
-
-This masking type takes an email address, computes a hash value and
-splits it into three equal parts `AAAA`, `BBBB`, and `CCCC`. The
-resulting email address is in the format `AAAA.BBBB@CCCC.invalid`.
-The hash is based on a random secret that is different for each run.
-
-Masking settings:
-
-- `path` (string): which field to mask
-- `type` (string): masking function name `"email"`
-
-**Example**
-
-```json
-{
- "path": ".email",
- "type": "email"
-}
-```
-
-This masks every leaf attribute `email` with a random email address
-similar to `"EHwG.3AOg@hGU=.invalid"`.
diff --git a/site/content/3.11/components/web-interface/_index.md b/site/content/3.11/components/web-interface/_index.md
deleted file mode 100644
index 12863abcb6..0000000000
--- a/site/content/3.11/components/web-interface/_index.md
+++ /dev/null
@@ -1,16 +0,0 @@
----
-title: Web Interface
-menuTitle: Web Interface
-weight: 175
-description: >-
- ArangoDB has a graphical user interface you can access with your browser
----
-The ArangoDB server (*arangod*) comes with a built-in web interface for
-administration. It lets you manage databases, collections, documents,
-users, graphs and more. You can also run and explain queries in a
-convenient way. Statistics and server status are provided as well.
-
-The web interface (also referred to as Web UI, frontend or *Aardvark*) can be accessed with a
-browser under the URL `http://localhost:8529` with default server settings.
-
-
diff --git a/site/content/3.11/components/web-interface/cluster.md b/site/content/3.11/components/web-interface/cluster.md
deleted file mode 100644
index 91ae4bd075..0000000000
--- a/site/content/3.11/components/web-interface/cluster.md
+++ /dev/null
@@ -1,78 +0,0 @@
----
-title: Cluster
-menuTitle: Cluster
-weight: 10
-description: ''
----
-The web interface differs for cluster deployments and single-server instances.
-Instead of a single [Dashboard](dashboard.md), there
-is a **CLUSTER** and a **NODES** section.
-
-Furthermore, the **REPLICATION** and **LOGS** sections are not available.
-You can access the logs of individual Coordinators and DB-Servers via the
-**NODES** section.
-
-The cluster section displays statistics about the general cluster performance.
-
-
-
-Statistics:
-
- - Available and missing Coordinators
- - Available and missing DB-Servers
- - Memory usage (percent)
- - Current connections
- - Data (bytes)
- - HTTP (bytes)
- - Average request time (seconds)
-
-## Nodes
-
-### Overview
-
-The overview shows available and missing Coordinators and DB-Servers.
-
-
-
-Functions:
-
-- Coordinator Dashboard: Clicking on a Coordinator opens a statistics dashboard.
-
-Information (Coordinator / DB-Servers):
-
-- Name
-- Endpoint
-- Last Heartbeat
-- Status
-- Health
-
-### Shards
-
-The shard section displays all available sharded collections.
-
-
-
-Functions:
-
-- Move Shard Leader: Clicking on the leader database of a shard opens a move shard dialog. Shards can be
-  transferred to all available DB-Servers, except the leading DB-Server or an available follower.
-- Move Shard Follower: Clicking on a follower database of a shard opens a move shard dialog. Shards can be
-  transferred to all available DB-Servers, except the leading DB-Server or an available follower.
-
-Information (collection):
-
-- Shard
-- Leader (green state: sync is complete)
-- Followers
-
-### Rebalance Shards
-
-The rebalance shards section displays a button for rebalancing shards.
-A newly added DB-Server does not have any shards yet. With the rebalance functionality,
-the cluster starts to rebalance shards, including onto empty DB-Servers.
-You can specify the maximum number of shards that can be moved in each
-operation by using the `--cluster.max-number-of-move-shards` startup option
-of _arangod_ (the default value is `10`).
-When the button is clicked, the number of scheduled move shard operations is
-shown, or, if rebalancing is unnecessary, a message indicates that no move
-operations have been scheduled.
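-
-For example, to allow more shard movements per rebalance operation than the
-default, _arangod_ could be started with (an illustrative value):
-
-```
-arangod --cluster.max-number-of-move-shards 25
-```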
diff --git a/site/content/3.11/components/web-interface/collections.md b/site/content/3.11/components/web-interface/collections.md
deleted file mode 100644
index d53138f83e..0000000000
--- a/site/content/3.11/components/web-interface/collections.md
+++ /dev/null
@@ -1,72 +0,0 @@
----
-title: Collections
-menuTitle: Collections
-weight: 15
-description: ''
----
-The collections section displays all available collections. From here you can
-create new collections and jump into a collection for details (click on a
-collection tile).
-
-
-
-Functions:
-
- - A: Toggle filter properties
- - B: Search collection by name
- - D: Create collection
- - C: Filter properties
- - H: Show collection details (click tile)
-
-Information:
-
- - E: Collection type
- - F: Collection state (unloaded, loaded, ...)
- - G: Collection name
-
-## Collection
-
-
-
-There are four view categories:
-
-1. Content:
- - Create a document
- - Delete a document
- - Filter documents
- - Download documents
- - Upload documents
-
-2. Indexes:
- - Create indexes
- - Delete indexes
-
-3. Info:
- - Detailed collection information and statistics
-
-4. Settings:
- - Configure name, journal size, index buckets, wait for sync
- - Delete collection
- - Truncate collection
- - Unload/Load collection
- - Save modified properties (name, journal size, index buckets, wait for sync)
-
-Additional information:
-
-Upload format:
-
-I. Line-wise
-
-```js
-{ "_key": "key1", ... }
-{ "_key": "key2", ... }
-```
-
-II. JSON documents in a list
-
-```js
-[
- { "_key": "key1", ... },
- { "_key": "key2", ... }
-]
-```
diff --git a/site/content/3.11/components/web-interface/dashboard.md b/site/content/3.11/components/web-interface/dashboard.md
deleted file mode 100644
index aac4f439ae..0000000000
--- a/site/content/3.11/components/web-interface/dashboard.md
+++ /dev/null
@@ -1,37 +0,0 @@
----
-title: Dashboard
-menuTitle: Dashboard
-weight: 5
-description: ''
----
-The **DASHBOARD** section provides statistics which are polled regularly from the
-ArangoDB server.
-
-
-
-There is a different interface for [Cluster](cluster.md) deployments.
-
-Statistics:
-
- - Requests per second
- - Request types
- - Number of client connections
- - Transfer size
- - Transfer size (distribution)
- - Average request time
- - Average request time (distribution)
-
-System Resources:
-
-- Number of threads
-- Memory
-- Virtual size
-- Major page faults
-- Used CPU time
-
-Replication:
-
-- Replication state
-- Totals
-- Ticks
-- Progress
diff --git a/site/content/3.11/components/web-interface/document.md b/site/content/3.11/components/web-interface/document.md
deleted file mode 100644
index 2ff9addb44..0000000000
--- a/site/content/3.11/components/web-interface/document.md
+++ /dev/null
@@ -1,21 +0,0 @@
----
-title: Document
-menuTitle: Document
-weight: 20
-description: ''
----
-The document section offers an editor which lets you edit documents and edges of a collection.
-
-
-
-Functions:
-
- - Edit document
- - Save document
- - Delete document
- - Switch between Tree/Code - Mode
- - Create a new document
-
-Information:
-
- - Displays: `_id`, `_rev`, `_key` properties
diff --git a/site/content/3.11/components/web-interface/graphs.md b/site/content/3.11/components/web-interface/graphs.md
deleted file mode 100644
index dafa51f9f8..0000000000
--- a/site/content/3.11/components/web-interface/graphs.md
+++ /dev/null
@@ -1,194 +0,0 @@
----
-title: Graphs in the web interface
-menuTitle: Graphs
-weight: 30
-description: >-
- You can create and manage named graphs in the web interface, as well as
- visually explore graphs with the graph viewer
----
-The **GRAPHS** section of the web interface lists the _named graphs_ stored in
-ArangoDB (EnterpriseGraphs, SmartGraphs, SatelliteGraphs, General Graphs) and
-lets you create new named graphs as well as view and edit the settings of
-existing named graphs. It also provides a viewer facility for visualizing
-subsets of a graph or an entire graph.
-
-
-
-## Create a named graph
-
-1. In the **GRAPHS** section, click the first card with the label **Add Graph**.
-2. Select a tab depending on which type of named graph you want to create.
- The **SatelliteGraph**, **SmartGraph**, and **EnterpriseGraph** tabs are
- only available for cluster deployments using the Enterprise Edition.
- For non-cluster deployments and in the Community Edition, only the
- **Examples** and **GeneralGraph** tabs are available.
-3. Fill in the fields of the dialog. Required fields have an asterisk (`*`)
- in their label. Hover over the gray circle with a white `i` in it next to
- a field to show the tooltip with an explanation.
-4. Click the **Create** button to create the named graph.
-
-For more information about the different types of named graphs and how to
-create them, see [Graphs](../../graphs/_index.md).
-
-## View and edit the settings of a named graph
-
-1. In the **GRAPHS** section, click the _gear_ icon in the top right corner
- of a graph's card.
-2. The setting dialog opens. You can only edit certain fields. Fields that
- cannot be modified are grayed out.
-3. Click the **Cancel** button or outside of the dialog to close it without
- saving any changes. Click **Save** to save changes.
-
-## Delete a named graph
-
-1. In the **GRAPHS** section, click the _gear_ icon in the top right corner
- of a graph's card.
-2. Click the **Delete** button.
-3. Optional: Tick the **also drop collections?** checkbox if you want to
- delete the vertex and edge collections of the graph as well and not the
- graph definition only. This deletes the collections with all the documents
- they contain and is irreversible!
-4. Confirm the deletion by clicking the **Yes** button.
-
-## Graph viewer
-
-The graph viewer opens if you click a graph's card in the **GRAPHS** section.
-It randomly selects a start node and displays its neighborhood. By default,
-up to 250 nodes that are directly connected to the start node as well as
-their direct neighbors are selected. You can select one or more start nodes
-and change the depth and the limit in the settings panel. You can also load
-the entire graph via the toolbar, but only use this with small graphs.
-
-
-
-### Viewport
-
-The main area of the graph viewer is used for displaying the graph. You can
-interact with it in the following ways:
-
-- Left-click a node or edge to select it. The document ID and the names of the
- document's top-level attributes are displayed at the bottom of the viewport.
- Hover an attribute name to view the attribute value as a tooltip.
-- Left-click and drag nodes if you want to re-arrange them.
-- Left-click and drag to move the entire graph within the viewport.
-- Right-click to access the [Context menus](#context-menus).
-- Use the [Toolbar](#toolbar), for example, to access the graph viewer **Settings**.
-- See the number of the currently displayed nodes and edges, and how long it
- took to load the graph. This is displayed at the bottom of the viewport.
-
-### Toolbar
-
-The toolbar at the top shows you the name of the graph and offers the following
-actions and a toggle for the settings panel:
-
-- Take a screenshot (_camera_ icon)
-- Enter fullscreen (_rectangle corners_ icon)
-- Load full graph (_cloud download_ icon)
-- Switch to the old graph viewer (_clock with an arrow_ icon)
-- Search nodes (_magnifier_ icon)
-- Settings (_gear_ icon)
-
-### Settings
-
-The settings panel is divided into three collapsible sections and lets you
-configure what to show of the graph and how.
-
-**General**
-
-- **Start node**: One or more document IDs to start the traversal from for
- displaying (a subset of) the graph. If no start node is specified, the
- graph viewer picks a random node.
-- **Layout**: The graph layout algorithm for finding a sensible arrangement and
- visualizing the graph in 2D.
- - **forceAtlas2**: Assigns positions to nodes based on the principles of
- physical simulation, using repulsive and attractive forces. Works best with
- medium-sized graphs.
- - **hierarchical**: Arranges the graph uniformly to display a hierarchical
- structure (for example, a tree) that avoids edge crossings and overlaps.
- Works best with small graphs.
-- **Depth**: The traversal depth for displaying the Start node's neighborhood.
- The default depth is **2**.
-- **Limit**: The maximum number of nodes to display, even if the maximum depth
- is not reached. The default is **250** nodes. Set it to **0** for no limit.
-
-**Nodes**
-
-- **Node label**: The document attribute to use for node labels.
- The default is `_key`.
-- **Default node color**: The color for nodes if no color attribute is set or
- as a fallback if the document does not have this attribute.
-- **Color nodes by collection**: Give nodes stored in the same collection the
- same, randomly chosen color. Disables the default node color and the node
- color attribute field.
-- **Node color attribute**: A document attribute to use for assigning a
- node color. Nodes with the same attribute value get the same, randomly
- chosen color.
-- **Show collection name**: Whether to include the document's collection name
- in the node label.
-- **Size by connections**: Scale nodes based on the number of inbound and
- outbound edges. Disables the sizing attribute field.
-- **Sizing attribute**: A document attribute to use for scaling nodes. Attribute values need to be numeric.
-
-**Edges**
-
-- **Edge label**: The document attribute to use for edge labels.
- The default is none.
-- **Default edge color**: The color for edges if no color attribute is set or
- as a fallback if the document does not have this attribute.
-- **Color edges by collection**: Give edges stored in the same collection the
- same, randomly chosen color. Disables the default edge color and the edge
- color attribute field.
-- **Edge color attribute**: A document attribute to use for assigning an
- edge color. Edges with the same attribute value get the same, randomly
- chosen color.
-- **Show collection name**: Whether to include the document's collection name
- in the edge label.
-- **Show edge direction**: Whether to display arrow heads on edge ends to show
- which way edges are pointing.
-- **Type**: The style for edge lines or arcs.
- Can be **solid**, **dashed**, or **dotted**.
-
-**Actions**
-
-- **Restore defaults**: Reset the settings.
-- **Apply**: Traverse and lay out the graph according to the settings.
-
-### Context menus
-
-You can click the right mouse button to access the context menus. You can take
-different actions depending on where you click.
-
-**Background**
-
-If you right-click a blank area anywhere in the graph viewer, you get the
-options to create a node or edge.
-
-- **Add node to database**: Opens a dialog that lets you specify a document key,
- select a collection to store the node in, and to set any document attributes.
-- **Add edge to database**: Enables the _Add edge mode_. Left-click a node and
- drag the edge to the end node. A dialog opens that lets you specify a
- document key, select a collection to store the edge in, and to set any
- document attributes.
-
-**Node**
-
-If you right-click a node, the connected edges are highlighted and you get the
-following options:
-
-- **Delete Node**: Opens a confirmation dialog for removing the document from
- the collection it is stored in.
- You can optionally **Delete connected edges too**.
-- **Edit Node**: Opens a dialog for editing the document attributes.
-- **Expand Node**: Follow this node's inbound and outbound edges and display
- its direct neighbors in addition to the already shown graph.
-- **Set as Start Node**: Change the start node to this node and render the
- graph according to the settings.
-- **Pin Node**: Locks the position of the node.
-
-**Edge**
-
-If you right-click an edge, you get the following options:
-
-- **Delete edge**: Opens a confirmation dialog for removing the document from
- the collection it is stored in.
-- **Edit edge**: Opens a dialog for editing the document attributes.
diff --git a/site/content/3.11/components/web-interface/logs.md b/site/content/3.11/components/web-interface/logs.md
deleted file mode 100644
index f9ddcc007b..0000000000
--- a/site/content/3.11/components/web-interface/logs.md
+++ /dev/null
@@ -1,20 +0,0 @@
----
-title: Logs
-menuTitle: Logs
-weight: 45
-description: ''
----
-The logs section displays all available log entries. Log entries can be
-filtered by their log level.
-
-
-
-Functions:
-
- - Filter log entries by log level (all, info, error, warning, debug)
-
-Information:
-
- - Loglevel
- - Date
- - Message
diff --git a/site/content/3.11/components/web-interface/queries.md b/site/content/3.11/components/web-interface/queries.md
deleted file mode 100644
index c263e2e6b0..0000000000
--- a/site/content/3.11/components/web-interface/queries.md
+++ /dev/null
@@ -1,117 +0,0 @@
----
-title: Query View
-menuTitle: Queries
-weight: 25
-description: ''
----
-The query view offers you three different subviews:
-
-- Editor
-- Running Queries
-- Slow Query History
-
-## AQL Query Editor
-
-The web interface offers an AQL Query Editor:
-
-
-
-The editor is split into two parts, the query editor pane and the bind
-parameter pane.
-
-The left pane is your regular query input field, where you can edit and then
-execute or explain your queries. By default, the entered bind parameters are
-automatically recognized and shown in the bind parameter table in the right
-pane, where you can easily edit them.
-
-The input fields are equipped with type detection. This means you don't have to
-use quote marks around strings, just write them as-is. Numbers are treated
-as numbers, *true* and *false* as booleans, and *null* as a null-type value.
-Square brackets can be used to define arrays, and curly braces for objects
-(keys and values have to be surrounded by double quotes). This is mostly what
-you want. But if you want to force something to be treated as a string, use
-quotation marks for the value:
-
-```js
-123 // interpreted as number
-"123" // interpreted as string
-
-["foo", "bar", 123, true] // interpreted as array
-['foo', 'bar', 123, true] // interpreted as string
-```
-
-If you are used to working with JSON, you may want to switch the bind parameter
-editor to JSON mode by clicking on the upper right toggle button. You can then
-edit the bind parameters in raw JSON format.
-
-### Custom Queries
-
-To save the current query, use the *Save* button in the top left corner of
-the editor or use the shortcut (see below).
-
-
-
-By pressing the *Queries* button in the top left corner of the editor, you
-activate the custom queries view. Here you can select a previously stored custom
-query or one of our query examples.
-
-Click on a query title to get a code preview. In addition, there are action
-buttons to:
-
-- Copy to editor
-- Explain query
-- Run query
-- Delete query
-
-For the built-in example queries, there is only *Copy to editor* available.
-
-To export or import queries to and from JSON, you can use the buttons on the
-right-hand side.
-
-### Result
-
-
-
-Each query you execute or explain opens up a new result box, so you are able
-to fire up multiple queries and view their results at the same time. Every query
-result box gives you detailed query information and of course the query result
-itself. The result boxes can be dismissed individually, or altogether using the
-*Remove results* button. The toggle button in the top right corner of each box
-switches back and forth between the *Result* view and the *AQL* query with its
-bind parameters.
-
-### Spotlight
-
-
-
-The spotlight feature opens up a modal view. There you can find all AQL keywords,
-AQL functions, and collections (filtered by their type) to help you be more
-productive in writing your queries. Spotlight can be opened by the magic wand icon
-in the toolbar or via a shortcut (see below).
-
-### AQL Editor Shortcuts
-
-| Key combination | Action |
-|:----------------|:-------|
-| `Ctrl`/`Cmd` + `Return` | Execute query
-| `Ctrl`/`Cmd` + `Alt` + `Return` | Execute selected query
-| `Ctrl`/`Cmd` + `Shift` + `Return` | Explain query
-| `Ctrl`/`Cmd` + `Shift` + `S` | Save query
-| `Ctrl`/`Cmd` + `Shift` + `C` | Toggle comments
-| `Ctrl`/`Cmd` + `Z` | Undo last change
-| `Ctrl`/`Cmd` + `Shift` + `Z` | Redo last change
-| `Shift` + `Alt` + `Up` | Increase font size
-| `Shift` + `Alt` + `Down` | Decrease font size
-| `Ctrl` + `Space` | Open up the spotlight search
-
-## Running Queries
-
-
-
-The *Running Queries* tab gives you a compact overview of all running queries.
-By clicking the red minus button, you can abort the execution of a running query.
-
-## Slow Query History
-
-
-
-The *Slow Query History* tab gives you a compact overview of all past slow queries.
diff --git a/site/content/3.11/components/web-interface/services.md b/site/content/3.11/components/web-interface/services.md
deleted file mode 100644
index 3bae62eb84..0000000000
--- a/site/content/3.11/components/web-interface/services.md
+++ /dev/null
@@ -1,50 +0,0 @@
----
-title: Services
-menuTitle: Services
-weight: 35
-description: ''
----
-The services section displays all installed Foxx applications. You can create
-new services or open a detailed view of a chosen service.
-
-
-
-## Create Service
-
-There are four different ways to create a new service:
-
-1. Create a service via a zip file
-2. Create a service via a GitHub repository
-3. Create a service via the official ArangoDB store
-4. Create a blank service from scratch
-
-
-
-## Service View
-
-This section offers detailed information about a specific service.
-
-
-
-There are four view categories:
-
-1. Info:
- - Displays name, short description, license, version, mode (production, development)
- - Offers a button to go to the services interface (if available)
-
-2. API:
-   - Displays the API as Swagger UI
-   - Displays the API as raw JSON
-
-3. Readme:
-   - Displays the service's manual (if available)
-
-4. Settings:
- - Download service as zip file
- - Run service tests (if available)
- - Run service scripts (if available)
- - Configure dependencies (if available)
- - Change service parameters (if available)
- - Change mode (production, development)
- - Replace the service
- - Delete the service
diff --git a/site/content/3.11/components/web-interface/users.md b/site/content/3.11/components/web-interface/users.md
deleted file mode 100644
index 3ecc4fc927..0000000000
--- a/site/content/3.11/components/web-interface/users.md
+++ /dev/null
@@ -1,40 +0,0 @@
----
-title: Managing Users in the Web Interface
-menuTitle: Users
-weight: 40
-description: ''
----
-ArangoDB users are globally stored in the `_system` database and can only be
-managed while logged on to this database. There you can find the *Users* section:
-
-
-
-## General
-
-Select a user to bring up the *General* tab with the username, name, and active
-status, as well as options to delete the user or change the password.
-
-
-
-## Permissions
-
-Select a user and go to the *Permissions* tab. You will see a list of databases
-and their corresponding database access level for that user.
-
-
-
-Please note that the server access level follows from the access level on
-the `_system` database. Furthermore, the default database access level
-for this user appears in the artificial row with the database name `*`.
-
-Below this table is another one for the collection category access
-levels. At first, it shows the list of databases, too. If you click on a
-database, the list of collections in that database opens and you can see
-the defined collection access levels for each collection of that database
-(which can all be unselected, meaning that nothing is explicitly set).
-The default access levels for this user and database appear in the
-artificial row with the collection name `*`.
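-
-If you prefer to manage access levels programmatically rather than through the
-web interface, the following is a minimal sketch using the `python-arango`
-driver (the endpoint, credentials, user, database, and collection names are
-placeholders):
-
-```py
-from arango import ArangoClient
-
-# Users are managed in the `_system` database
-client = ArangoClient(hosts="http://localhost:8529")
-sys_db = client.db("_system", username="root", password="password")
-
-# Read the database access level of a user
-print(sys_db.permission("jane", database="myDB"))
-
-# Grant read/write access to a database and read-only access to a collection
-sys_db.update_permission("jane", permission="rw", database="myDB")
-sys_db.update_permission("jane", permission="ro", database="myDB", collection="myColl")
-```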
-
-{{< info >}}
-Also see [Managing Users](../../operations/administration/user-management/_index.md) about access levels.
-{{< /info >}}
diff --git a/site/content/3.11/data-science/_index.md b/site/content/3.11/data-science/_index.md
deleted file mode 100644
index 32d6450b82..0000000000
--- a/site/content/3.11/data-science/_index.md
+++ /dev/null
@@ -1,133 +0,0 @@
----
-title: Data Science
-menuTitle: Data Science
-weight: 115
-description: >-
- ArangoDB lets you apply analytics and machine learning to graph data at scale
-aliases:
- - data-science/overview
----
-ArangoDB's Graph Analytics and GraphML capabilities provide various solutions
-in data science and data analytics. Multiple data science personas within the
-engineering space can make use of ArangoDB's set of tools and technologies that
-enable analytics and machine learning on graph data.
-
-ArangoDB, as the foundation for GraphML, comes with the following key features:
-
-- **Scalable**: designed to support true scalability with high performance for
- enterprise use cases.
-- **Simple Ingestion**: easy integration in existing data infrastructure with
- connectors to all leading data processing and data ecosystems.
-- **Open Source**: extensibility and community.
-- **NLP Support**: built-in text processing, search, and similarity ranking.
-
-
-
-## Graph Analytics vs. GraphML
-
-This section classifies the complexity of the questions you can answer,
-from a simple query that returns the path from one node to another, to more
-complex tasks like node classification, link prediction, and graph
-classification.
-
-### Graph Queries
-
-When you run an AQL traversal query on a graph, it can go from a vertex to
-multiple edges, and the edges indicate what the next connected vertices are.
-Graph queries can also determine the shortest paths between vertices.
-
-Graph queries can answer questions like _**Who can introduce me to person X**_?
-
-
-
-See [Graphs in AQL](../aql/graphs/_index.md) for the supported graph queries.
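-
-As an illustration, the following is a minimal sketch of such a traversal using
-the `python-arango` driver (the connection details, collection names, and
-document keys are hypothetical):
-
-```py
-from arango import ArangoClient
-
-db = ArangoClient(hosts="http://localhost:8529").db(
-    "myDB", username="root", password="password"
-)
-
-# Who can introduce me to person X? Traverse 1 to 2 steps outbound
-# over a hypothetical `knows` edge collection and keep the paths
-# that end at person X.
-query = """
-FOR v, e, p IN 1..2 OUTBOUND 'persons/me' knows
-  FILTER v._key == 'personX'
-  RETURN p.vertices[*].name
-"""
-for path in db.aql.execute(query):
-    print(path)
-```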
-
-### Graph Analytics
-
-Graph analytics, or graph algorithms, is what you run on a graph if you want
-to know aggregate information about your graph while analyzing it as a whole.
-
-Graph analytics can answer questions like _**Who are the most connected persons**_?
-
-
-
-ArangoDB offers _Graph Analytics Engines_ to run algorithms such as
-connected components, label propagation, and PageRank on your data. This feature
-is available for the ArangoGraph Insights Platform. See
-[Graph Analytics](graph-analytics.md) for details.
-
-### GraphML
-
-When applying machine learning to a graph, you can predict connections, get
-better product recommendations, and also classify vertices, edges, and graphs.
-
-GraphML can answer questions like:
-- _**Is there a connection between person X and person Y?**_
-- _**Will a customer churn?**_
-- _**Is this particular transaction anomalous?**_
-
-
-
-For ArangoDB's enterprise-ready, graph-powered machine learning offering,
-see [ArangoGraphML](arangographml/_index.md).
-
-## Use Cases
-
-This section contains an overview of different use cases where Graph Analytics
-and GraphML can be applied.
-
-### GraphML
-
-By leveraging more of your data, GraphML outperforms conventional deep learning
-methods and can **solve graph problems of high computational complexity**, such as:
-- Drug discovery, repurposing, and predicting adverse effects.
-- Personalized product/service recommendation.
-- Supply chain and logistics.
-
-With GraphML, you can also **predict relationships and structures**, such as:
-- Predict molecules for treating diseases (precision medicine).
-- Predict fraudulent behavior, credit risk, and the purchase of products or services.
-- Predict relationships among customers and accounts.
-
-ArangoDB uses well-known GraphML frameworks like
-[Deep Graph Library](https://www.dgl.ai)
-and [PyTorch Geometric](https://pytorch-geometric.readthedocs.io/en/latest/)
-and connects to these external machine learning libraries. When coupled with
-ArangoDB, they are essentially integrated with your graph dataset.
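-
-As an illustration, the following is a minimal sketch of exporting a named
-graph to PyTorch Geometric, assuming the `adbpyg-adapter` package is installed
-(the connection details and graph name are hypothetical):
-
-```py
-from arango import ArangoClient
-from adbpyg_adapter import ADBPyG_Adapter
-
-db = ArangoClient(hosts="http://localhost:8529").db(
-    "myDB", username="root", password="password"
-)
-
-# Convert the named ArangoDB graph into a PyTorch Geometric data object
-adapter = ADBPyG_Adapter(db)
-pyg_graph = adapter.arangodb_graph_to_pyg("myGraph")
-print(pyg_graph)
-```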
-
-## Example: ArangoFlix
-
-ArangoFlix is a complete movie recommendation application that predicts missing
-links between a user and the movies they have not watched yet.
-
-This [interactive tutorial](https://colab.research.google.com/github/arangodb/interactive_tutorials/blob/master/notebooks/Integrate_ArangoDB_with_PyG.ipynb)
-demonstrates how to integrate ArangoDB with PyTorch Geometric to
-build recommendation systems using Graph Neural Networks (GNNs).
-
-The full ArangoFlix demo website is accessible from the ArangoGraph Insights Platform,
-the managed cloud for ArangoDB. You can open the demo website that connects to
-your running database from the **Examples** tab of your deployment.
-
-{{< tip >}}
-You can try out the ArangoGraph Insights Platform free of charge for 14 days.
-Sign up at [dashboard.arangodb.cloud](https://dashboard.arangodb.cloud/home?utm_source=docs&utm_medium=cluster_pages&utm_campaign=docs_traffic).
-{{< /tip >}}
-
-The ArangoFlix demo uses five different recommendation methods:
-- Content-Based using AQL
-- Collaborative Filtering using AQL
-- Content-Based using ML
-- Matrix Factorization
-- Graph Neural Networks
-
-
-
-The ArangoFlix website not only offers an example of how the user recommendations
-might look in real life, but it also provides information on the recommendation method,
-an AQL query, a custom graph visualization for each movie, and more.
-
-## Sample datasets
-
-If you want to try out ArangoDB's data science features, you may use the
-[`arango_datasets` Python package](../components/tools/arango-datasets.md)
-to load sample datasets into a deployment.
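-
-For example, a minimal sketch using the package (the connection details are
-placeholders, and `list_datasets` is assumed to return the available dataset
-names):
-
-```py
-from arango import ArangoClient
-from arango_datasets.datasets import Datasets
-
-db = ArangoClient(hosts="http://localhost:8529").db(
-    "myDB", username="root", password="password"
-)
-
-# List the available sample datasets, then load one into the database
-datasets = Datasets(db)
-print(datasets.list_datasets())
-datasets.load("OPEN_INTELLIGENCE_ANGOLA")
-```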
diff --git a/site/content/3.11/data-science/arangograph-notebooks.md b/site/content/3.11/data-science/arangograph-notebooks.md
deleted file mode 100644
index 34ca9529be..0000000000
--- a/site/content/3.11/data-science/arangograph-notebooks.md
+++ /dev/null
@@ -1,22 +0,0 @@
----
-title: ArangoGraph Notebooks
-menuTitle: ArangoGraph Notebooks
-weight: 130
-description: >-
- Colocated Jupyter Notebooks within the ArangoGraph Insights Platform
----
-{{< tip >}}
-ArangoGraph Notebooks don't include the ArangoGraphML services.
-To enable the ArangoGraphML services,
-[get in touch](https://www.arangodb.com/contact/)
-with the ArangoDB team.
-{{< /tip >}}
-
-The ArangoGraph Notebook is a JupyterLab notebook embedded in the
-[ArangoGraph Insights Platform](https://dashboard.arangodb.cloud/home?utm_source=docs&utm_medium=cluster_pages&utm_campaign=docs_traffic).
-The notebook integrates seamlessly with the platform,
-automatically connecting to ArangoGraph services and ArangoDB.
-This makes it much easier to leverage these resources without having
-to download any data locally or to remember user IDs, passwords, and endpoint URLs.
-
-For more information, see the [Notebooks](../arangograph/notebooks.md) documentation.
diff --git a/site/content/3.11/data-science/arangographml/_index.md b/site/content/3.11/data-science/arangographml/_index.md
deleted file mode 100644
index e8d6ea4137..0000000000
--- a/site/content/3.11/data-science/arangographml/_index.md
+++ /dev/null
@@ -1,181 +0,0 @@
----
-title: ArangoGraphML
-menuTitle: ArangoGraphML
-weight: 125
-description: >-
- Enterprise-ready, graph-powered machine learning as a cloud service or self-managed
-aliases:
- - graphml
----
-Traditional Machine Learning (ML) overlooks the connections and relationships
-between data points, which is where graph machine learning excels. However,
-accessibility to GraphML has been limited to sizable enterprises equipped with
-specialized teams of data scientists. ArangoGraphML simplifies the utilization of GraphML,
-enabling a broader range of personas to extract profound insights from their data.
-
-## How GraphML works
-
-Graph machine learning leverages the inherent structure of graph data, where
-entities (nodes) and their relationships (edges) form a network. Unlike
-traditional ML, which primarily operates on tabular data, GraphML applies
-specialized algorithms like Graph Neural Networks (GNNs), node embeddings, and
-link prediction to uncover complex patterns and insights.
-
-1. **Graph Construction**:
- Raw data is transformed into a graph structure, defining nodes and edges based
- on real-world relationships.
-2. **Featurization**:
- Nodes and edges are enriched with features that help in training predictive models.
-3. **Model Training**:
- Machine learning techniques are applied on GNNs to identify patterns and make predictions.
-4. **Inference & Insights**:
- The trained model is used to classify nodes, detect anomalies, recommend items,
- or predict future connections.
-
-ArangoGraphML streamlines these steps, providing an intuitive and scalable
-framework to integrate GraphML into various applications, from fraud detection
-to recommendation systems.
-
-
-
-
-
-It is no longer necessary to understand the complexities involved with graph
-machine learning, thanks to the accessibility of the ArangoML package.
-Solutions with ArangoGraphML only require input from a user about
-their data, and the ArangoGraphML managed service handles the rest.
-
-The platform comes preloaded with all the tools needed to prepare your graph
-for machine learning, high-accuracy training, and persisting predictions back
-to the database for application use.
-
-## Supported Tasks
-
-### Node Classification
-
-Node classification is a **supervised learning** task where the goal is to
-predict the label of a node based on both its own features and its relationships
-within the graph. It requires a set of labeled nodes to train a model, which then
-classifies unlabeled nodes based on learned patterns.
-
-**How it works in ArangoGraphML**
-
-- A portion of the nodes in a graph is labeled for training.
-- The model learns patterns from both **node features** and
- **structural relationships** (neighboring nodes and connections).
-- It predicts labels for unlabeled nodes based on these learned patterns.
-
-**Example Use Cases**
-
-- **Fraud Detection in Financial Networks**
- - **Problem:** Fraudsters often create multiple accounts or interact within
- suspicious clusters to evade detection.
- - **Solution:** A transaction graph is built where nodes represent users and
- edges represent transactions. The model learns patterns from labeled
- fraudulent and legitimate users, detecting hidden fraud rings based on
- **both user attributes and transaction relationships**.
-
-- **Customer Segmentation in E-Commerce & Social Media**
- - **Problem:** Businesses need to categorize customers based on purchasing
- behavior and engagement.
- - **Solution:** A graph is built where nodes represent customers and edges
- represent interactions (purchases, reviews, social connections). The model
- predicts the category of each user based on how similar they are to other users
- **not just by their personal data, but also by how they are connected to others**.
-
-- **Disease Classification in Biomedical Networks**
- - **Problem:** Identifying proteins or genes associated with a disease.
- - **Solution:** A protein interaction graph is built where nodes are proteins
- and edges represent biochemical interactions. The model classifies unknown
- proteins based on their interactions with known disease-related proteins,
- rather than just their individual properties.
-
-### Node Embedding Generation
-
-Node embedding is an **unsupervised learning** technique that converts nodes
-into numerical vector representations, preserving their **structural relationships**
-within the graph. Unlike simple feature aggregation, node embeddings
-**capture the influence of neighboring nodes and graph topology**, making
-them powerful for downstream tasks like clustering, anomaly detection,
-and link prediction. These combinations can provide valuable insights.
-Consider using [ArangoDB's Vector Search](https://arangodb.com/2024/11/vector-search-in-arangodb-practical-insights-and-hands-on-examples/)
-capabilities to find similar nodes based on their embeddings.
-
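-As a minimal sketch, once embeddings are stored on your documents, similar
-nodes can be retrieved with an AQL query via the `python-arango` driver (the
-collection name, embedding attribute, and vector size are placeholders):
-
-```py
-from arango import ArangoClient
-
-db = ArangoClient(hosts="http://localhost:8529").db(
-    "myDB", username="root", password="password"
-)
-
-# Rank nodes by cosine similarity to a reference embedding vector
-query = """
-FOR doc IN Items
-  LET score = COSINE_SIMILARITY(doc.embedding, @reference)
-  SORT score DESC
-  LIMIT 5
-  RETURN { item: doc._key, score: score }
-"""
-reference = [0.1] * 128  # placeholder embedding vector
-for match in db.aql.execute(query, bind_vars={"reference": reference}):
-    print(match)
-```
-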
-**Feature Embeddings versus Node Embeddings**
-
-**Feature Embeddings** are vector representations derived from the attributes or
-features associated with nodes. These embeddings aim to capture the inherent
-characteristics of the data. For example, in a social network, a
-feature embedding might encode user attributes like age, location, and
-interests. Techniques like **Word2Vec**, **TF-IDF**, or **autoencoders** are
-commonly used to generate such embeddings.
-
-In the context of graphs, **Node Embeddings** are a
-**combination of a node's feature embedding and the structural information from its connected edges**.
-Essentially, they aggregate both the node's attributes and the connectivity patterns
-within the graph. This fusion helps capture not only the individual properties of
-a node but also its position and role within the network.
-
-**How it works in ArangoGraphML**
-
-- The model learns an embedding (a vector representation) for each node based on its
- **position within the graph and its connections**.
-- It **does not rely on labeled data** – instead, it captures structural patterns
- through graph traversal and aggregation of neighbor information.
-- These embeddings can be used for similarity searches, clustering, and predictive tasks.
-
-**Example Use Cases**
-
-- **Recommendation Systems (E-commerce & Streaming Platforms)**
- - **Problem:** Platforms like Amazon, Netflix, and Spotify need to recommend products,
- movies, or songs.
- - **Solution:** A user-item interaction graph is built where nodes are users
- and products, and edges represent interactions (purchases, ratings, listens).
- **Embeddings encode relationships**, allowing the system to recommend similar
- items based on user behavior and network influence rather than just individual
- preferences.
-
-- **Anomaly Detection in Cybersecurity & Finance**
- - **Problem:** Detecting unusual activity (e.g., cyber attacks, money laundering)
- in complex networks.
- - **Solution:** A network of IP addresses, users, and transactions is represented as
- a graph. Nodes with embeddings that significantly deviate from normal patterns
- are flagged as potential threats. The key advantage here is that anomalies are
- detected based on **network structure, not just individual activity logs**.
-
-- **Link Prediction (Social & Knowledge Graphs)**
- - **Problem:** Predicting new relationships, such as suggesting friends on
- social media or forecasting research paper citations.
- - **Solution:** A social network graph is created where nodes are users, and
- edges represent friendships. **Embeddings capture the likelihood of
- connections forming based on shared neighborhoods and structural
- similarities, even if users have never interacted before**.
-
-### Key Differences
-
-| Feature | Node Classification | Node Embedding Generation |
-|-----------------------|---------------------|----------------------------|
-| **Learning Type** | Supervised | Unsupervised |
-| **Input Data** | Labeled nodes | Graph structure & features |
-| **Output** | Predicted labels | Node embeddings (vectors) |
-| **Key Advantage** | Learns labels based on node connections and attributes | Learns structural patterns and node relationships |
-| **Use Cases** | Fraud detection, customer segmentation, disease classification | Recommendations, anomaly detection, link prediction |
-
-ArangoGraphML provides the infrastructure to efficiently train and apply these
-models, helping users extract meaningful insights from complex graph data.
-
-## Metrics and Compliance
-
-ArangoGraphML supports tracking your ML pipeline by storing all relevant metadata
-and metrics in a Graph called ArangoPipe. This is only available to you and is never
-viewable by ArangoDB. This metadata graph links all experiments
-to the source data, feature generation activities, training runs, and prediction
-jobs, allowing you to track the entire ML pipeline without having to leave ArangoDB.
-
-### Security
-
-Each deployment that uses ArangoGraphML has an `arangopipe` database created,
-which houses all ML Metadata information. Since this data lives within the deployment,
-it benefits from the ArangoGraph security features and SOC 2 compliance.
-All ArangoGraphML services live alongside the ArangoGraph deployment and are only
-accessible within that organization.
diff --git a/site/content/3.11/data-science/arangographml/deploy.md b/site/content/3.11/data-science/arangographml/deploy.md
deleted file mode 100644
index 0d62cb12f6..0000000000
--- a/site/content/3.11/data-science/arangographml/deploy.md
+++ /dev/null
@@ -1,78 +0,0 @@
----
-title: Deploy ArangoGraphML
-menuTitle: Deploy
-weight: 5
-description: >-
- You can deploy ArangoGraphML in your own Kubernetes cluster or use the managed
- cloud service that comes with a ready-to-go, pre-configured environment
----
-
-## Managed cloud service versus self-managed
-
-ArangoDB offers two deployment options, tailored to suit diverse requirements
-and infrastructure preferences:
-- Managed cloud service via the [ArangoGraph Insights Platform](https://dashboard.arangodb.cloud/home?utm_source=docs&utm_medium=cluster_pages&utm_campaign=docs_traffic)
-- Self-managed solution via the [ArangoDB Kubernetes Operator](https://github.com/arangodb/kube-arangodb)
-
-### ArangoGraphML
-
-ArangoGraphML provides enterprise-ready Graph Machine Learning as a
-Cloud Service via Jupyter Notebooks that run on the
-[ArangoGraph Insights Platform](https://dashboard.arangodb.cloud/home?utm_source=docs&utm_medium=cluster_pages&utm_campaign=docs_traffic).
-
-{{< tip >}}
-To get access to ArangoGraphML services and packages,
-[get in touch](https://www.arangodb.com/contact/)
-with the ArangoDB team.
-{{< /tip >}}
-
-- **Accessible at all levels**
- - Low code UI
- - Notebooks
- - APIs
-- **Full usability**
- - MLOps lifecycle
- - Metrics
- - Metadata capture
- - Model management
-
-
-
-#### Setup
-
-The ArangoGraphML managed service runs on the
-[ArangoGraph Insights Platform](https://dashboard.arangodb.cloud/home?utm_source=docs&utm_medium=cluster_pages&utm_campaign=docs_traffic).
-It offers a pre-configured environment where everything,
-including necessary components and configurations, comes preloaded. You don't
-need to set up or configure the infrastructure, and can immediately start using the
-GraphML functionalities.
-
-To summarize, all you need to do is:
-1. Sign up for an [ArangoGraph account](https://dashboard.arangodb.cloud/home?utm_source=docs&utm_medium=cluster_pages&utm_campaign=docs_traffic).
-2. Create a new [deployment in ArangoGraph](../../arangograph/deployments/_index.md#how-to-create-a-new-deployment).
-3. Start using the ArangoGraphML functionalities.
-
-### Self-managed ArangoGraphML
-
-{{< tag "ArangoDB Enterprise Edition" >}}
-
-The self-managed solution enables you to deploy and manage ArangoML within your
-Kubernetes cluster using the [ArangoDB Kubernetes Operator](https://github.com/arangodb/kube-arangodb).
-
-The self-managed package includes the same features as ArangoGraphML.
-The primary distinction lies in the environment setup: with the self-managed
-solution, you have direct control over configuring your environment.
-
-#### Setup
-
-You can run ArangoGraphML in your Kubernetes
-cluster provided you already have a running `ArangoDeployment`. If you don't
-have one yet, consider checking the installation guide of the
-[ArangoDB Kubernetes Operator](https://arangodb.github.io/kube-arangodb/docs/using-the-operator.html)
-and the [ArangoDeployment Custom Resource](https://arangodb.github.io/kube-arangodb/docs/deployment-resource-reference.html)
-description.
-
-To start ArangoGraphML in your Kubernetes cluster, follow the instructions provided
-in the [ArangoMLExtension Custom Resource](https://arangodb.github.io/kube-arangodb/docs/mlextension-resource.html)
-description. Once the `CustomResource` has been created and the ArangoGraphML extension
-is ready, you can start using it.
\ No newline at end of file
diff --git a/site/content/3.11/data-science/arangographml/getting-started.md b/site/content/3.11/data-science/arangographml/getting-started.md
deleted file mode 100644
index 6bd614167e..0000000000
--- a/site/content/3.11/data-science/arangographml/getting-started.md
+++ /dev/null
@@ -1,967 +0,0 @@
----
-title: Getting Started with ArangoGraphML
-menuTitle: Getting Started
-weight: 10
-description: >-
- How to control all resources inside ArangoGraphML in a scriptable manner
-aliases:
- - getting-started-with-arangographml
----
-ArangoGraphML provides an easy-to-use and scalable interface to run Graph Machine Learning on ArangoDB data. Since all of the orchestration and ML logic is managed by ArangoGraph, all that is typically required are JSON specifications outlining the individual processes to solve an ML task. If you are using the self-managed solution, additional configuration may be required.
-
-The `arangoml` Python package allows you to manage all of the necessary ArangoGraphML components, including:
-- **Project Management**: Projects are a metadata-tracking entity that sit at the top level of ArangoGraphML. All activities must link to a project.
-- **Featurization**: The step of converting human-understandable data to machine-understandable data (i.e., features), such that it can be used to train Graph Neural Networks (GNNs).
-- **Training**: Train a set of models based on the name of the generated/existing features, and a definition of the ML task to solve (e.g., Node Classification, Embedding Generation).
-- **Model Selection**: Select the best model based on the metrics generated during training.
-- **Predictions**: Generate predictions based on the selected model, and persist the results to the source graph (either in the source documents, or in a new collection).
-
-{{< tip >}}
-To enable the ArangoGraphML services in the ArangoGraph platform,
-[get in touch](https://www.arangodb.com/contact/)
-with the ArangoDB team. Regular notebooks in ArangoGraph don't include the
-`arangoml` package.
-{{< /tip >}}
-
-ArangoGraphML's suite of services and packages is driven by **"specifications"**. These specifications are standard Python dictionaries that describe the task being performed and the data being used. The ArangoGraphML services work closely together, with the output of one task serving as the input for the next.
-
-Let's take a look at using the `arangoml` package to:
-
-1. Manage projects
-2. Featurize data
-3. Submit Training Jobs
-4. Evaluate Model Metrics
-5. Generate Predictions
-
-## Initialize ArangoML
-
-{{< tabs "arangoml" >}}
-
-{{< tab "ArangoGraphML" >}}
-
-**API Documentation: [arangoml.ArangoMLMagics.enable_arangoml](https://arangoml.github.io/arangoml/magics.html#arangoml.magic.ArangoMLMagics.enable_arangoml)**
-
-The `arangoml` package comes pre-loaded with every ArangoGraphML notebook environment.
-To start using it, simply import it and enable it via a Jupyter Magic Command.
-
-```py
-arangoml = %enable_arangoml
-```
-
-{{< tip >}}
-ArangoGraphML comes with other ArangoDB Magic Commands! See the full list [here](https://arangoml.github.io/arangoml/magics.html).
-{{< /tip >}}
-
-{{< /tab >}}
-
-{{< tab "Self-managed" >}}
-
-**API Documentation: [arangoml.ArangoML](https://arangoml.github.io/arangoml/client.html#arangoml.main.ArangoML)**
-
-The `ArangoML` class is the main entry point for the `arangoml` package.
-It has the following parameters:
-- `client`: An instance of arango.client.ArangoClient. Defaults to `None`. If not provided, the **hosts** argument must be provided.
-- `hosts`: The ArangoDB host(s) to connect to. This can be a single host, or a
- list of hosts.
-- `username`: The ArangoDB username to use for authentication.
-- `password`: The ArangoDB password to use for authentication.
-- `user_token`: The ArangoDB user token to use for authentication.
- This is an alternative to username/password authentication.
-- `ca_cert_file`: The path to the CA certificate file to use for TLS
- verification. Defaults to `None`.
-- `api_endpoint`: The URL to the ArangoGraphML API Service.
-- `settings_files`: A list of secrets files to be loaded as settings. Parameters provided as arguments will override those in the settings files (e.g., `settings.toml`).
-- `version`: The ArangoML API date version. Defaults to the latest version.
-
-It is possible to instantiate an ArangoML object in multiple ways:
-
-1. Via parameters
-```py
-from arangoml import ArangoML
-
-arangoml = ArangoML(
- hosts="http://localhost:8529"
- username="root",
- password="password",
- # ca_cert_file="/path/to/ca.pem",
- # user_token="..."
- api_endpoint="http://localhost:8501",
-)
-```
-
-2. Via parameters and a custom `ArangoClient` instance
-```py
-from arangoml import ArangoML
-from arango import ArangoClient
-
-client = ArangoClient(
- hosts="http://localhost:8529",
- verify_override="/path/to/ca.pem",
- hosts_resolver=...,
- ...
-)
-
-arangoml = ArangoML(
- client=client,
- username="root",
- password="password",
- # user_token="..."
- api_endpoint="http://localhost:8501",
-)
-```
-
-3. Via environment variables
-```py
-import os
-from arangoml import ArangoML
-
-os.environ["ARANGODB_HOSTS"] = "http://localhost:8529"
-os.environ["ARANGODB_CA_CERT_FILE"]="/path/to/ca.pem"
-os.environ["ARANGODB_USER"] = "root"
-os.environ["ARANGODB_PW"] = "password"
-# os.environ["ARANGODB_USER_TOKEN"] = "..."
-os.environ["ML_API_SERVICES_ENDPOINT"] = "http://localhost:8501"
-
-arangoml = ArangoML()
-```
-
-4. Via configuration files
-```py
-import os
-from arangoml import ArangoML
-
-arangoml = ArangoML(settings_files=["settings_1.toml", "settings_2.toml"])
-```
-
-5. Via a Jupyter Magic Command
-
-**API Documentation: [arangoml.ArangoMLMagics.enable_arangoml](https://arangoml.github.io/arangoml/magics.html#arangoml.magic.ArangoMLMagics.enable_arangoml)**
-
-```
-%load_ext arangoml
-%enable_arangoml
-```
-{{< info >}}
-This assumes you are working in a Jupyter Notebook environment and
-have set the environment variables in the notebook environment with user
-authentication that has **_system** access.
-{{< /info >}}
-
-{{< tip >}}
-Running `%load_ext arangoml` also provides access to other [ArangoGraphML
-Jupyter Magic Commands](https://arangoml.github.io/arangoml/magics.html).
-{{< /tip >}}
-
-{{< /tab >}}
-
-{{< /tabs >}}
-
-## Load the database
-
-This example uses ArangoML to predict the **class** of `Events` in a
-Knowledge Graph constructed from the [GDELT Project](https://www.gdeltproject.org/).
-
-> GDELT monitors the world's news media from nearly every corner of every
- country in print, broadcast, and web formats, in over 100 languages, every
- moment of every day. [...] Put simply, the GDELT Project is a realtime open
- data global graph over human society as seen through the eyes of the world's
- news media, reaching deeply into local events, reaction, discourse, and
- emotions of the most remote corners of the world in near-realtime and making
- all of this available as an open data firehose to enable research over human
- society.
-
-The events used range from peaceful protests to significant battles in Angola.
-The image below depicts the connections around an example event:
-
-
-
-You can also see a larger portion of this graph, showing how the events, actors,
-news sources, and locations are interconnected into a large graph.
-
-
-
-Let's get started!
-
-{{< tabs "arangoml" >}}
-
-{{< tab "ArangoGraphML" >}}
-
-The [`arango-datasets`](../../components/tools/arango-datasets.md) Python package
-allows you to load pre-defined datasets into ArangoDB. It comes pre-installed in the
-ArangoGraphML notebook environment.
-
-```py
-DATASET_NAME = "OPEN_INTELLIGENCE_ANGOLA"
-
-%delete_database {DATASET_NAME}
-%create_database {DATASET_NAME}
-%use_database {DATASET_NAME}
-%load_dataset {DATASET_NAME}
-```
-
-{{< /tab >}}
-
-{{< tab "Self-managed" >}}
-
-The [`arango-datasets`](../../components/tools/arango-datasets.md) Python package
-allows you to load pre-defined datasets into ArangoDB. It can be installed with the
-following command:
-
-```
-pip install arango-datasets
-```
-
-```py
-from arango_datasets.datasets import Datasets
-
-DATASET_NAME = "OPEN_INTELLIGENCE_ANGOLA"
-
-dataset_db = arangoml.client.db(
- name=DATASET_NAME,
- username=arangoml.settings.get("ARANGODB_USER"),
- password=arangoml.settings.get("ARANGODB_PW"),
- user_token=arangoml.settings.get("ARANGODB_USER_TOKEN"),
- verify=True
-)
-
-Datasets(dataset_db).load(DATASET_NAME)
-```
-{{< /tab >}}
-
-{{< /tabs >}}
-
-## Projects
-
-**API Documentation: [ArangoML.projects](https://arangoml.github.io/arangoml/api.html#projects)**
-
-Projects are an important reference used throughout the entire ArangoGraphML
-lifecycle. All activities link back to a project. Creating a project
-is very simple.
-
-### Get/Create a project
-```py
-project = arangoml.get_or_create_project(DATASET_NAME)
-```
-
-### List projects
-
-```py
-arangoml.projects.list_projects()
-```
-
-## Featurization
-
-**API Documentation: [ArangoML.jobs.featurize](https://arangoml.github.io/arangoml/api.html#agml_api.jobs.v1.api.jobs_api.JobsApi.featurize)**
-
-**The Featurization Service depends on a `Featurization Specification` that contains**:
-- `featurizationName`: A name for the featurization task.
-
-- `projectName`: The associated project name. You can use `project.name` here
- if it was created or retrieved as described above.
-
-- `graphName`: The associated graph name that exists within the database.
-
-- `featureSetID` Optional: The ID of an existing Feature Set to reuse. If provided, the `metagraph` dictionary can be omitted. Defaults to `None`.
-
-- `featurizationConfiguration` Optional: The default configuration to be applied
-  across all features. Individual collection feature settings override this option.
-
- - `featurePrefix`: The prefix to be applied to all individual features generated. Default is `feat_`.
-
- - `outputName`: Adjust the default feature name. This can be any valid ArangoDB attribute name. Defaults to `x`.
-
- - `dimensionalityReduction`: Object configuring dimensionality reduction.
- - `disabled`: Whether to disable dimensionality reduction. Default is `false`,
- therefore dimensionality reduction is applied after Featurization by default.
- - `size`: The number of dimensions to reduce the feature length to. Default is `512`.
-
-  - `defaultsPerFeatureType`: A dictionary mapping each feature to how missing or mismatched values should be handled (see the sketch after this list). The keys of this dictionary are the features, and the values are sub-dictionaries with the following keys:
- - `missing`: A sub-dictionary detailing how missing values should be handled.
- - `strategy`: The strategy to use for missing values. Options include `REPLACE` or `RAISE`.
- - `replacement`: The value to replace missing values with. Only needed if `strategy` is `REPLACE`.
- - `mismatch`: A sub-dictionary detailing how mismatched values should be handled.
- - `strategy`: The strategy to use for mismatched values. Options include `REPLACE`, `RAISE`, `COERCE_REPLACE`, or `COERCE_RAISE`.
-      - `replacement`: The value to replace mismatched values with. Only needed if `strategy` is `REPLACE` or `COERCE_REPLACE`.
-
-- `jobConfiguration` Optional: A set of configurations that are applied to the job.
- - `batchSize`: The number of documents to process in a single batch. Default is `32`.
- - `runAnalysisChecks`: Whether to run analysis checks, used to perform a high-level analysis of the data quality before proceeding. Default is `true`.
- - `skipLabels`: Skips the featurization process for attributes marked as `label`. Default is `false`.
- - `useFeatureStore`: Enables the use of the Feature Store database, which allows you to store features separately from your Source Database. Default is `false`, therefore features are written to the source graph.
-  - `overwriteFSGraph`: Whether to overwrite the Feature Store Graph if features were previously generated. Default is `false`, therefore features are written to an existing Feature Store Graph.
- - `writeToSourceGraph`: Whether to store the generated features on the Source Graph. Default is `true`.
-
-- `metagraph`: Metadata to represent the vertex & edge collections of the graph.
- - `vertexCollections`: A dictionary mapping the vertex collection names to the following values:
- - `features`: A dictionary mapping document properties to the following values:
- - `featureType`: The type of feature. Options include `text`, `category`, `numeric`, or `label`.
- - `config`: Collection-level configuration settings.
- - `featurePrefix`: Identical to global `featurePrefix` but for this collection.
- - `dimensionalityReduction`: Identical to global `dimensionalityReduction` but for this collection.
- - `outputName`: Identical to global `outputName`, but specifically for this collection.
- - `defaultsPerFeatureType`: Identical to global `defaultsPerFeatureType`, but specifically for this collection.
- - `edgeCollections`: A dictionary mapping the edge collection names to an empty dictionary, as edge attributes are not currently supported.
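-
-As a sketch of the `defaultsPerFeatureType` option described above, the
-following Python dictionary replaces missing values and coerces mismatched
-values where possible (the attribute names and replacement values are
-illustrative):
-
-```py
-defaults_per_feature_type = {
-    # Replace missing `name` values and raise on values that cannot be coerced
-    "name": {
-        "missing": {"strategy": "REPLACE", "replacement": "unknown"},
-        "mismatch": {"strategy": "COERCE_RAISE"},
-    },
-    # Replace both missing and mismatched `sourceScale` values
-    "sourceScale": {
-        "missing": {"strategy": "REPLACE", "replacement": "none"},
-        "mismatch": {"strategy": "COERCE_REPLACE", "replacement": "none"},
-    },
-}
-```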
-
-The following Featurization Specification example is used for the GDELT dataset:
-- It featurizes the `name` attribute of the `Actor`, `Class`, `Country`,
-  `Source`, `Location`, and `Region` collections as `text` features.
-- It featurizes the `description` attribute of the `Event` collection as a
- `text` feature.
-- It featurizes the `label` attribute of the `Event` collection as a `label`
- feature (this is the attribute you want to predict).
-- It featurizes the `sourceScale` attribute of the `Source` collection as a
- `category` feature.
-- It featurizes the `name` attribute of the `Region` collection as a
- `category` feature.
-
-```py
-# 1. Define the Featurization Specification
-
-featurization_spec = {
- "databaseName": dataset_db.name,
- "projectName": project.name,
- "graphName": graph.name,
- "featurizationName": f"{DATASET_NAME}_Featurization",
- "featurizationConfiguration": {
- "featurePrefix": "feat_",
- "dimensionalityReduction": { "size": 256 },
- "outputName": "x"
- },
- "jobConfiguration": {
- "batchSize": 512,
- "useFeatureStore": False,
- "runAnalysisChecks": False,
- },
- "metagraph": {
- "vertexCollections": {
- "Actor": {
- "features": {
- "name": {
- "featureType": "text",
- },
- }
- },
- "Country": {
- "features": {
- "name": {
- "featureType": "text",
- }
- }
- },
- "Event": {
- "features": {
- "description": {
- "featureType": "text",
- },
- "label": {
- "featureType": "label",
- },
- }
- },
- "Source": {
- "features": {
- "name": {
- "featureType": "text",
- },
- "sourceScale": {
- "featureType": "category",
- },
- }
- },
- "Location": {
- "features": {
- "name": {
- "featureType": "text",
- }
- }
- },
- "Region": {
- "features": {
- "name": {
- "featureType": "category",
- },
- }
- }
- },
- "edgeCollections": {
- "eventActor": {},
- "hasSource": {},
- "hasLocation": {},
- "inCountry": {},
- "inRegion": {},
- }
- }
-}
-```
-
-Once the specification has been defined, a Featurization Job can be triggered using the `arangoml.jobs.featurize` method:
-
-```py
-# 2. Submit a Featurization Job
-
-featurization_job = arangoml.jobs.featurize(featurization_spec)
-```
-
-Once a Featurization Job has been submitted, you can wait for it to complete using the `arangoml.wait_for_featurization` method:
-
-```py
-# 3. Wait for the Featurization Job to complete
-
-featurization_job_result = arangoml.wait_for_featurization(featurization_job.job_id)
-```
-
-
-**Example Output:**
-```py
-{
- "job_id": "16349541",
- "output_db_name": "OPEN_INTELLIGENCE_ANGOLA",
- "graph": "OPEN_INTELLIGENCE_ANGOLA",
- "feature_set_id": "16349537",
- "feature_set_ids": [
- "16349537"
- ],
- "vertexCollections": {
- "Actor": {
- "x": "OPEN_INTELLIGENCE_ANGOLA_x"
- },
- "Class": {
- "x": "OPEN_INTELLIGENCE_ANGOLA_x"
- },
- "Country": {
- "x": "OPEN_INTELLIGENCE_ANGOLA_x"
- },
- "Event": {
- "x": "OPEN_INTELLIGENCE_ANGOLA_x",
- "y": "OPEN_INTELLIGENCE_ANGOLA_y"
- },
- "Source": {
- "x": "OPEN_INTELLIGENCE_ANGOLA_x"
- },
- "Location": {
- "x": "OPEN_INTELLIGENCE_ANGOLA_x"
- },
- "Region": {
- "x": "OPEN_INTELLIGENCE_ANGOLA_x"
- }
- },
- "edgeCollections": {
- "eventActor": {},
- "hasSource": {},
- "hasLocation": {},
- "inCountry": {},
- "inRegion": {},
- "subClass": {},
- "type": {}
- },
- "label_field": "OPEN_INTELLIGENCE_ANGOLA_y",
- "input_field": "OPEN_INTELLIGENCE_ANGOLA_x",
- "feature_set_id_to_results": {
- "16349537": {
- "feature_set_id": "16349537",
- "output_db_name": "OPEN_INTELLIGENCE_ANGOLA",
- "graph": "OPEN_INTELLIGENCE_ANGOLA",
- "vertexCollections": {
- "Actor": {
- "x": "OPEN_INTELLIGENCE_ANGOLA_x"
- },
- "Class": {
- "x": "OPEN_INTELLIGENCE_ANGOLA_x"
- },
- "Country": {
- "x": "OPEN_INTELLIGENCE_ANGOLA_x"
- },
- "Event": {
- "x": "OPEN_INTELLIGENCE_ANGOLA_x",
- "y": "OPEN_INTELLIGENCE_ANGOLA_y"
- },
- "Source": {
- "x": "OPEN_INTELLIGENCE_ANGOLA_x"
- },
- "Location": {
- "x": "OPEN_INTELLIGENCE_ANGOLA_x"
- },
- "Region": {
- "x": "OPEN_INTELLIGENCE_ANGOLA_x"
- }
- },
- "edgeCollections": {
- "eventActor": {},
- "hasSource": {},
- "hasLocation": {},
- "inCountry": {},
- "inRegion": {},
- "subClass": {},
- "type": {}
- },
- "label_field": "OPEN_INTELLIGENCE_ANGOLA_y",
- "input_field": "OPEN_INTELLIGENCE_ANGOLA_x",
- "is_feature_store": false,
- "target_collection": "Event"
- }
- },
- "is_feature_store": false,
- "target_collection": "Event"
-}
-```
-
-You can also cancel a Featurization Job using the `arangoml.jobs.cancel_job` method:
-
-```py
-arangoml.jobs.cancel_job(featurization_job.job_id)
-```
-
-
-## Training
-
-**API Documentation: [ArangoML.jobs.train](https://arangoml.github.io/arangoml/api.html#agml_api.jobs.v1.api.jobs_api.JobsApi.train)**
-
-Training Graph Machine Learning Models with ArangoGraphML requires two steps:
-1. Describe which data points should be included in the Training Job.
-2. Pass the Training Specification to the Training Service.
-
-**The Training Service depends on a `Training Specification` that contains**:
-- `featureSetID`: The feature set ID that was generated during the Featurization Job (if any). It replaces the need to provide the `metagraph`, `databaseName`, and `projectName` fields.
-
-- `databaseName`: The database name the source data is in. Can be omitted if `featureSetID` is provided.
-
-- `projectName`: The top-level project to which all the experiments will link back. Can be omitted if `featureSetID` is provided.
-
-- `useFeatureStore`: Boolean for enabling or disabling the use of the feature store. Default is `false`.
-
-- `mlSpec`: Describes the desired machine learning task, input features, and
- the attribute label to be predicted.
- - `classification`: Dictionary to describe the Node Classification Task Specification.
- - `targetCollection`: The ArangoDB collection name that contains the prediction label.
- - `inputFeatures`: The name of the feature to be used as input.
- - `labelField`: The name of the attribute to be predicted.
- - `batchSize`: The number of documents to process in a single training batch. Default is `64`.
- - `graphEmbeddings`: Dictionary to describe the Graph Embedding Task Specification.
- - `targetCollection`: The ArangoDB collection used to generate the embeddings.
- - `embeddingSize`: The size of the embedding vector. Default is `128`.
- - `batchSize`: The number of documents to process in a single training batch. Default is `64`.
- - `generateEmbeddings`: Whether to generate embeddings on the training dataset. Default is `false`.
-
-- `metagraph`: Metadata to represent the vertex & edge collections of the graph. If `featureSetID` is provided, this can be omitted.
- - `graph`: The ArangoDB graph name.
- - `vertexCollections`: A dictionary mapping the collection names to the following values:
- - `x`: The name of the feature to be used as input.
- - `y`: The name of the attribute to be predicted. Can only be specified for one collection.
- - `edgeCollections`: A dictionary mapping the edge collection names to an empty dictionary, as edge features are not currently supported.
-
-A Training Specification allows for concisely defining your training task in a
-single object and then passing that object to the training service using the
-Python API client, as shown below.
-
-The ArangoGraphML Training Service is responsible for training a series of
-Graph Machine Learning Models using the data provided in the Training
-Specification. It assumes that the data has been featurized and is ready to be
-used for training.
-
-Given that we have run a Featurization Job, we can create the Training Specification using the `featurization_job_result` object returned from the Featurization Job:
-
-```py
-# 1. Define the Training Specification
-
-# Node Classification example
-
-training_spec = {
- "featureSetID": featurization_job_result.result.feature_set_id,
- "mlSpec": {
- "classification": {
- "targetCollection": "Event",
- "inputFeatures": "OPEN_INTELLIGENCE_ANGOLA_x",
- "labelField": "OPEN_INTELLIGENCE_ANGOLA_y",
- }
- },
-}
-
-# Node Embedding example
-# NOTE: Full Graph Embeddings support is coming soon
-
-training_spec = {
- "featureSetID": featurization_job_result.result.feature_set_id,
- "mlSpec": {
- "graphEmbeddings": {
- "targetCollection": "Event",
- "embeddingSize": 128,
- "generateEmbeddings": True,
- }
- },
-}
-```
-
-Once the specification has been defined, a Training Job can be triggered using the `arangoml.jobs.train` method:
-
-```py
-# 2. Submit a Training Job
-
-training_job = arangoml.jobs.train(training_spec)
-```
-
-Once a Training Job has been submitted, you can wait for it to complete using the `arangoml.wait_for_training` method:
-
-```py
-# 3. Wait for the Training Job to complete
-
-training_job_result = arangoml.wait_for_training(training_job.job_id)
-```
-
-**Example Output (Node Classification):**
-```py
-{
- "job_id": "691ceb2f-1931-492a-b4eb-0536925a4697",
- "job_status": "COMPLETED",
- "project_name": "OPEN_INTELLIGENCE_ANGOLA_GraphML_Node_Classification",
- "project_id": "16832427",
- "database_name": "OPEN_INTELLIGENCE_ANGOLA",
- "metagraph": {
- "mlSpec": {
- "classification": {
- "targetCollection": "Event",
- "inputFeatures": "OPEN_INTELLIGENCE_ANGOLA_x",
- "labelField": "OPEN_INTELLIGENCE_ANGOLA_y",
- "metrics": None
- }
- },
- "graph": "OPEN_INTELLIGENCE_ANGOLA",
- "vertexCollections": {
- "Actor": {
- "x": "OPEN_INTELLIGENCE_ANGOLA_x"
- },
- "Class": {
- "x": "OPEN_INTELLIGENCE_ANGOLA_x"
- },
- "Country": {
- "x": "OPEN_INTELLIGENCE_ANGOLA_x"
- },
- "Event": {
- "x": "OPEN_INTELLIGENCE_ANGOLA_x",
- "y": "OPEN_INTELLIGENCE_ANGOLA_y"
- },
- "Source": {
- "x": "OPEN_INTELLIGENCE_ANGOLA_x"
- },
- "Location": {
- "x": "OPEN_INTELLIGENCE_ANGOLA_x"
- },
- "Region": {
- "x": "OPEN_INTELLIGENCE_ANGOLA_x"
- }
- },
- "edgeCollections": {
- "eventActor": {},
- "hasSource": {},
- "hasLocation": {},
- "inCountry": {},
- "inRegion": {},
- "subClass": {},
- "type": {}
- },
- "batch_size": 64
- },
- "time_submitted": "2024-01-12T02:19:19.686286",
- "time_started": "2024-01-12T02:19:29.403742",
- "time_ended": "2024-01-12T02:30:59.313038",
- "job_state": None,
- "job_conditions": None
-}
-```
-
-**Example Output (Node Embeddings):**
-```py
-{
- "job_id": "6047e53a-f1dd-4725-83e8-74ac44629c11",
- "job_status": "COMPLETED",
- "project_name": "OPEN_INTELLIGENCE_ANGOLA_GraphML_Node_Embeddings",
- "project_id": "647025872",
- "database_name": "OPEN_INTELLIGENCE_ANGOLA",
- "ml_spec": {
- "graphEmbeddings": {
- "targetCollection": "Event",
- "embeddingLevel": "NODE_EMBEDDINGS",
- "embeddingSize": 128,
- "embeddingTrainingType": "UNSUPERVISED",
- "batchSize": 64,
- "generateEmbeddings": true,
- "bestModelSelection": "BEST_LOSS",
- "persistModels": "ALL_MODELS",
- "modelConfigurations": {}
- }
- },
- "metagraph": {
- "graph": "OPEN_INTELLIGENCE_ANGOLA",
- "vertexCollections": {
- "Actor": {
- "x": "OPEN_INTELLIGENCE_ANGOLA_x"
- },
- "Country": {
- "x": "OPEN_INTELLIGENCE_ANGOLA_x"
- },
- "Event": {
- "x": "OPEN_INTELLIGENCE_ANGOLA_x",
- "y": "OPEN_INTELLIGENCE_ANGOLA_y"
- },
- "Source": {
- "x": "OPEN_INTELLIGENCE_ANGOLA_x"
- },
- "Location": {
- "x": "OPEN_INTELLIGENCE_ANGOLA_x"
- },
- "Region": {
- "x": "OPEN_INTELLIGENCE_ANGOLA_x"
- }
- },
- "edgeCollections": {
- "eventActor": {},
- "hasSource": {},
- "hasLocation": {},
- "inCountry": {},
- "inRegion": {}
- }
- },
- "time_submitted": "2025-03-27T02:55:15.099680",
- "time_started": "2025-03-27T02:57:25.143948",
- "time_ended": "2025-03-27T03:01:24.619737",
- "training_type": "Training"
-}
-```
-
-You can also cancel a Training Job using the `arangoml.jobs.cancel_job` method:
-
-```py
-arangoml.jobs.cancel_job(training_job.job_id)
-```
-
-## Model Selection
-
-Model Statistics can be observed upon completion of a Training Job.
-To select a Model, the ArangoGraphML Projects Service can be used to gather
-all relevant models and choose the preferred model for a Prediction Job.
-
-First, let's list all the trained models using [ArangoML.list_models](https://arangoml.github.io/arangoml/client.html#arangoml.main.ArangoML.list_models):
-
-```py
-# 1. List all trained Models
-
-models = arangoml.list_models(
- project_name=project.name,
- training_job_id=training_job.job_id
-)
-
-print(len(models))
-```
-
-The cell below selects the model with the highest **test accuracy** using [ArangoML.get_best_model](https://arangoml.github.io/arangoml/client.html#arangoml.main.ArangoML.get_best_model), but there may be other factors that motivate you to choose another model. See the `model_statistics` field in the example output below for the full list of available metrics.
-
-```py
-# 2. Select the best Model
-
-# Get best Node Classification Model
-# Sort by highest test accuracy
-
-best_model = arangoml.get_best_model(
- project.name,
- training_job.job_id,
- sort_parent_key="test",
- sort_child_key="accuracy",
-)
-
-# Get best Graph Embedding Model
-# Sort by lowest loss
-
-best_model = arangoml.get_best_model(
- project.name,
- training_job.job_id,
- sort_parent_key="loss",
- sort_child_key=None,
- reverse=False
-)
-
-print(best_model)
-```
-
-**Example Output (Node Classification):**
-```py
-{
- "job_id": "691ceb2f-1931-492a-b4eb-0536925a4697",
- "model_id": "02297435-3394-4e7e-aaac-82e1d224f85c",
- "model_statistics": {
- "_id": "devperf/123",
- "_key": "123",
- "_rev": "_gkUc8By--_",
- "run_id": "123",
- "test": {
- "accuracy": 0.8891242216547955,
- "confusion_matrix": [[13271, 2092], [1276, 5684]],
- "f1": 0.9,
- "loss": 0.1,
- "precision": 0.9,
- "recall": 0.8,
- "roc_auc": 0.8,
- },
- "validation": {
- "accuracy": 0.9,
- "confusion_matrix": [[13271, 2092], [1276, 5684]],
- "f1": 0.85,
- "loss": 0.1,
- "precision": 0.86,
- "recall": 0.85,
- "roc_auc": 0.85,
- },
- },
- "target_collection": "Event",
- "target_field": "label",
-}
-```
-
-**Example Output (Node Embeddings):**
-```py
-{
- "job_id": "6047e53a-f1dd-4725-83e8-74ac44629c11",
- "model_id": "55ae93c2-3497-4405-9c63-0fa0e4a5b5bd",
- "model_display_name": "graphsageencdec Model",
- "model_name": "graphsageencdec Model 55ae93c2-3497-4405-9c63-0fa0e4a5b5bd",
- "model_statistics": {
- "loss": 0.13700408464796796,
- "val_acc": 0.5795393939393939,
- "test_acc": 0.5809545454545455
- },
- "model_tasks": [ "GRAPH_EMBEDDINGS" ]
-}
-```
-
-## Prediction
-
-**API Documentation: [ArangoML.jobs.predict](https://arangoml.github.io/arangoml/api.html#agml_api.jobs.v1.api.jobs_api.JobsApi.predict)**
-
-Final step!
-
-After selecting a model, a Prediction Job can be created. The Prediction Job
-will generate predictions and persist them to the source graph in a new
-collection, or within the source documents.
-
-**The Prediction Service depends on a `Prediction Specification` that contains**:
-- `projectName`: The top-level project to which all the experiments will link back.
-- `databaseName`: The database name the source data is in.
-- `modelID`: The model ID to use for generating predictions.
-- `featurizeNewDocuments`: Boolean for enabling or disabling the featurization of new documents. Useful if you don't want to re-train the model upon new data. Default is `false`.
-- `featurizeOutdatedDocuments`: Boolean for enabling or disabling the featurization of outdated documents. Outdated documents are those whose features have changed since the last featurization. Default is `false`.
-- `schedule`: A cron expression to schedule the prediction job (e.g., `0 0 * * *` for daily predictions). Default is `None`.
-- `embeddingsField`: The name of the field to store the generated embeddings. This is only used for Graph Embedding tasks. Default is `None`.
-
-```py
-# 1. Define the Prediction Specification
-
-# Node Classification Example
-prediction_spec = {
- "projectName": project.name,
- "databaseName": dataset_db.name,
- "modelID": best_model.model_id,
-}
-
-# Node Embedding Example
-prediction_spec = {
- "projectName": project.name,
- "databaseName": dataset_db.name,
- "modelID": best_model.model_id,
- "embeddingsField": "embeddings"
-}
-```
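-
-If you want to use the optional fields documented above, a specification might
-look like the following sketch (the cron expression and flag values are purely
-illustrative):
-
-```py
-# Scheduled Node Classification example (illustrative values)
-prediction_spec = {
-    "projectName": project.name,
-    "databaseName": dataset_db.name,
-    "modelID": best_model.model_id,
-    "featurizeNewDocuments": True,  # featurize unseen documents without re-training
-    "schedule": "0 0 * * *",  # run the prediction daily at midnight
-}
-```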
-
-This job updates all documents with the predictions derived from the trained model.
-Once the specification has been defined, a Prediction Job can be triggered using the `arangoml.jobs.predict` method:
-
-```py
-# 2. Submit a Prediction Job
-
-# For Node Classification
-prediction_job = arangoml.jobs.predict(prediction_spec)
-
-# For Graph Embeddings
-prediction_job = arangoml.jobs.generate(prediction_spec)
-```
-
-Similar to the Training Service, we can wait for a Prediction Job to complete with the `arangoml.wait_for_prediction` method:
-
-```py
-# 3. Wait for the Prediction Job to complete
-
-prediction_job_result = arangoml.wait_for_prediction(prediction_job.job_id)
-```
-
-**Example Output (Node Classification):**
-```py
-{
- "job_id": "b2a422bb-5650-4fbc-ba6b-0578af0049d9",
- "job_status": "COMPLETED",
- "project_name": "OPEN_INTELLIGENCE_ANGOLA_GraphML_Node_Classification",
- "project_id": "16832427",
- "database_name": "OPEN_INTELLIGENCE_ANGOLA",
- "model_id": "1a365657-f5ed-4da9-948b-1ff60bc6e7de",
- "job_state_information": {
- "outputGraphName": "OPEN_INTELLIGENCE_ANGOLA",
- "outputCollectionName": "Event",
- "outputAttribute": "OPEN_INTELLIGENCE_ANGOLA_y_predicted",
- "numberOfPredictedDocuments": 3302,
- "outputEdgeCollectionName": None
- },
- "time_submitted": "2024-01-12T02:31:18.382625",
- "time_started": "2024-01-12T02:31:23.550469",
- "time_ended": "2024-01-12T02:31:40.021035"
-}
-```
-
-**Example Output (Node Embeddings):**
-```py
-{
- "job_id": "25260362-9764-47d0-abb4-247cbdce6c7b",
- "job_status": "COMPLETED",
- "project_name": "OPEN_INTELLIGENCE_ANGOLA_GraphML_Node_Embeddings",
- "project_id": "647025872",
- "database_name": "OPEN_INTELLIGENCE_ANGOLA",
- "model_id": "55ae93c2-3497-4405-9c63-0fa0e4a5b5bd",
- "job_state_information": {
- "outputGraphName": "OPEN_INTELLIGENCE_ANGOLA",
- "outputCollectionName": "Event",
- "outputAttribute": "embeddings",
- "numberOfPredictedDocuments": 0, # 0 All documents already have up-to-date embeddings
- },
- "time_submitted": "2025-03-27T14:02:33.094191",
- "time_started": "2025-03-27T14:09:34.206659",
- "time_ended": "2025-03-27T14:09:35.791630",
- "prediction_type": "Prediction"
-}
-```
-
-You can also cancel a Prediction Job using the `arangoml.jobs.cancel_job` method:
-
-```py
-arangoml.jobs.cancel_job(prediction_job.job_id)
-```
-
-### Viewing Inference Results
-
-We can now access our results via AQL:
-
-```py
-import json
-
-collection_name = prediction_job_result.job_state_information['outputCollectionName']
-
-query = f"""
- FOR doc IN `{collection_name}`
- SORT RAND()
- LIMIT 3
- RETURN doc
-"""
-
-docs = list(dataset_db.aql.execute(query))
-
-print(json.dumps(docs, indent=2))
-```
-
-## What's next
-
-You can now use the generated Feature (and optionally Node) Embeddings for downstream tasks like clustering, anomaly detection, and link prediction. Consider using [ArangoDB's Vector Search](https://arangodb.com/2024/11/vector-search-in-arangodb-practical-insights-and-hands-on-examples/) capabilities to find similar nodes based on their embeddings, as sketched below.
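-
-As a starting point, you could compute similarity directly in AQL. The sketch
-below assumes the `COSINE_SIMILARITY` AQL function is available in your
-ArangoDB version and that the embeddings were stored in the `embeddings`
-attribute as above (the document key is a placeholder):
-
-```py
-# Hypothetical sketch: find the 5 Events most similar to a given Event
-seed = dataset_db.collection("Event").get("some_event_key")  # placeholder key
-
-query = """
-    FOR doc IN Event
-        FILTER doc._key != @key AND doc.embeddings != null
-        LET score = COSINE_SIMILARITY(doc.embeddings, @vector)
-        SORT score DESC
-        LIMIT 5
-        RETURN { event: doc._key, score: score }
-"""
-
-similar = list(dataset_db.aql.execute(
-    query, bind_vars={"key": seed["_key"], "vector": seed["embeddings"]}
-))
-print(similar)
-```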
diff --git a/site/content/3.11/data-science/graph-analytics.md b/site/content/3.11/data-science/graph-analytics.md
deleted file mode 100644
index 18df401e84..0000000000
--- a/site/content/3.11/data-science/graph-analytics.md
+++ /dev/null
@@ -1,716 +0,0 @@
----
-title: Graph Analytics
-menuTitle: Graph Analytics
-weight: 123
-description: |
- ArangoGraph offers Graph Analytics Engines to run graph algorithms on your
- data separately from your ArangoDB deployments
----
-Graph analytics is a branch of data science that deals with analyzing information
-networks known as graphs, and extracting information from the data relationships.
-It ranges from basic measures that characterize graphs, through PageRank, to
-complex algorithms. Common use cases include fraud detection, recommender systems,
-and network flow analysis.
-
-ArangoDB offers a feature for running algorithms on your graph data,
-called Graph Analytics Engines (GAEs). It is available on request for the
-[ArangoGraph Insights Platform](https://dashboard.arangodb.cloud/home?utm_source=docs&utm_medium=cluster_pages&utm_campaign=docs_traffic).
-
-Key features:
-
-- **Separation of storage and compute**: GAEs are a solution that lets you run
- graph analytics independent of your ArangoDB deployments on dedicated machines
- optimized for compute tasks. This separation of OLAP and OLTP workloads avoids
- affecting the performance of the transaction-oriented database systems.
-
-- **Fast data loading**: You can easily and efficiently import graph data from
- ArangoDB and export results back to ArangoDB.
-
-- **In-memory processing**: All imported data is held and processed in the
- main memory of the compute machines for very fast execution of graph algorithms
- such as connected components, label propagation, and PageRank.
-
-## Workflow
-
-The following list outlines how you can use Graph Analytics Engines (GAEs).
-How to perform the steps is detailed in the subsequent sections.
-
-{{< info >}}
-Before you can use Graph Analytics Engines, you need to request the feature
-via __Request help__ in the ArangoGraph dashboard for a deployment.
-
-The deployment needs to use **AWS** as the cloud provider.
-
-Single server deployments using ArangoDB version 3.11 are not supported.
-{{< /info >}}
-
-1. Determine the approximate size of the data that you will load into the GAE
- to select an engine size with sufficient memory. The data as well as the
- temporarily needed space for computations and results needs to fit in memory.
-2. Deploy an engine of the desired size and of type `gral`. It only takes a few
- seconds until the engine can be used. The engine runs adjacent to a particular
- ArangoGraph deployment.
-3. Load graph data from ArangoDB into the engine. You can load named graphs or
- sets of vertex and edge collections. This loads the edge information and a
- configurable subset of the vertex attributes.
-4. Run graph algorithms on the data. You only need to load the data once per
- engine and can then run various algorithms with different settings.
-5. Write the computation results back to ArangoDB.
-6. Delete the engine once you are done.
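-
-If you prefer scripting this flow, the steps map to plain HTTP calls. The
-following Python sketch uses the `requests` library; all `<...>` values are
-placeholders, and the IDs would normally be read from the JSON responses
-(see the API sections below for the exact endpoints and payloads):
-
-```py
-import requests
-
-HEADERS = {"Authorization": "bearer <token>"}  # see Authentication below
-BASE_URL = "https://<endpoint>:8829/graph-analytics/api/graphanalytics/v1"
-
-# 2. Deploy an engine of type "gral" and size "e4"
-requests.post(f"{BASE_URL}/engines",
-              json={"type_id": "gral", "size_id": "e4"}, headers=HEADERS)
-ENGINE_URL = "https://<endpoint>:8829/graph-analytics/engines/<engine-id>"
-
-# 3. Load a named graph from ArangoDB into the engine
-requests.post(f"{ENGINE_URL}/v1/loaddata",
-              json={"database": "_system", "graph_name": "myGraph"},
-              headers=HEADERS)
-
-# 4. Run an algorithm, e.g. PageRank, on the loaded graph
-requests.post(f"{ENGINE_URL}/v1/pagerank",
-              json={"graph_id": "<graph-id>", "damping_factor": 0.85,
-                    "maximum_supersteps": 500}, headers=HEADERS)
-
-# 5. Write the computed ranks back to an existing collection
-requests.post(f"{ENGINE_URL}/v1/storeresults",
-              json={"database": "_system", "target_collection": "coll",
-                    "job_ids": ["<job-id>"], "attribute_names": ["rank"]},
-              headers=HEADERS)
-
-# 6. Delete the engine once you are done
-requests.delete(f"{BASE_URL}/engines/<engine-id>", headers=HEADERS)
-```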
-
-## Authentication
-
-The [Management API](#management-api) for deploying and deleting engines requires
-an ArangoGraph **API key**. See
-[Generating an API Key](../arangograph/api/get-started.md#generating-an-api-key)
-on how to create one.
-
-You then need to generate an **access token** using the API key. See
-[Authenticating with Oasisctl](../arangograph/api/get-started.md#authenticating-with-oasisctl)
-on how to do so using `oasisctl login`.
-
-The [Engine API](#engine-api) uses one of two authentication methods, depending
-on the [__auto login to database UI__](../arangograph/deployments/_index.md#auto-login-to-database-ui)
-setting in ArangoGraph:
-- **Enabled**: You can use an ArangoGraph access token created with an API key
- (see above), allowing you to use one token for both the Management API and
- the Engine API.
-- **Disabled**: You need to use a JWT user token created from ArangoDB credentials.
- These session tokens need to be renewed every hour by default. See
- [HTTP API Authentication](../develop/http-api/authentication.md#jwt-user-tokens)
- for details.
-
-## Management API
-
-You can save an ArangoGraph access token created with `oasisctl login` in a
-variable to ease scripting. Note that this should be the token string only and
-not include quote marks. The following examples assume Bash as the shell and
-that the `curl` and `jq` commands are available.
-
-```bash
-ARANGO_GRAPH_TOKEN="$(oasisctl login --key-id "" --key-secret "")"
-```
-
-To determine the base URL of the management API, use the ArangoGraph dashboard
-and copy the __APPLICATION ENDPOINT__ of the deployment that holds the graph data
-you want to analyze. Replace the port with `8829` and append
-`/graph-analytics/api/graphanalytics/v1`, e.g.
-`https://123456abcdef.arangodb.cloud:8829/graph-analytics/api/graphanalytics/v1`.
-
-Store the base URL in a variable called `BASE_URL`:
-
-```bash
-BASE_URL='https://...'
-```
-
-To authenticate requests, you need to use the following HTTP header:
-
-```
-Authorization: bearer <token>
-```
-
-For example, with cURL and using the token variable:
-
-```bash
-curl -H "Authorization: bearer $ARANGO_GRAPH_TOKEN" "$BASE_URL/api-version"
-```
-
-Request and response payloads are JSON-encoded in the management API.
-
-### Get the API version
-
-`GET /api-version`
-
-Retrieve the version information of the management API.
-
-```bash
-curl -H "Authorization: bearer $ARANGO_GRAPH_TOKEN" "$BASE_URL/api-version"
-```
-
-### List engine sizes
-
-`GET /enginesizes`
-
-List the available engine sizes. Each size is a combination of the number of
-CPU cores and the amount of RAM, starting at 1 CPU and 4 GiB of memory (`e4`).
-
-```bash
-curl -H "Authorization: bearer $ARANGO_GRAPH_TOKEN" "$BASE_URL/enginesizes"
-```
-
-### List engine types
-
-`GET /enginetypes`
-
-List the available engine types. The only type supported for GAE workloads is
-called `gral`.
-
-```bash
-curl -H "Authorization: bearer $ARANGO_GRAPH_TOKEN" "$BASE_URL/enginetypes"
-```
-
-### Deploy an engine
-
-`POST /engines`
-
-Set up a GAE adjacent to the ArangoGraph deployment, for example, using an
-engine size of `e4`.
-
-```bash
-curl -H "Authorization: bearer $ARANGO_GRAPH_TOKEN" -X POST -d '{"type_id":"gral","size_id":"e4"}' "$BASE_URL/engines"
-```
-
-### List all engines
-
-`GET /engines`
-
-List all deployed GAEs of an ArangoGraph deployment.
-
-```bash
-curl -H "Authorization: bearer $ARANGO_GRAPH_TOKEN" "$BASE_URL/engines"
-```
-
-### Get an engine
-
-`GET /engines/<engine-id>`
-
-Get detailed information about a specific GAE.
-
-```bash
-ENGINE_ID="zYxWvU9876"
-curl -H "Authorization: bearer $ARANGO_GRAPH_TOKEN" "$BASE_URL/engines/$ENGINE_ID"
-```
-
-### Delete an engine
-
-`DELETE /engines/<engine-id>`
-
-Delete a no longer needed GAE, freeing any data it holds in memory.
-
-```bash
-ENGINE_ID="zYxWvU9876"
-curl -H "Authorization: bearer $ARANGO_GRAPH_TOKEN" -X DELETE "$BASE_URL/engines/$ENGINE_ID"
-```
-
-## Engine API
-
-To determine the base URL of the engine API, use the ArangoGraph dashboard
-and copy the __APPLICATION ENDPOINT__ of the deployment that holds the graph data
-you want to analyze. Replace the port with `8829` and append
-`/graph-analytics/engines/`, e.g.
-`https://123456abcdef.arangodb.cloud:8829/graph-analytics/engines/zYxWvU9876`.
-
-Store the base URL in a variable called `ENGINE_URL`:
-
-```bash
-ENGINE_URL='https://...'
-```
-
-To authenticate requests, you need to use a bearer token in the HTTP header:
-```
-Authorization: bearer <token>
-```
-
-- If __Auto login to database UI__ is enabled for the ArangoGraph deployment,
- this can be the same access token as used for the management API.
-- If it is disabled, use an ArangoDB session token (JWT user token) instead.
-
-You can save the token in a variable to ease scripting. Note that this should be
-the token string only and not include quote marks. The following examples assume
-Bash as the shell and that the `curl` and `jq` commands are available.
-
-An example of authenticating a request using cURL and a session token:
-
-```bash
-APPLICATION_ENDPOINT="https://123456abcdef.arangodb.cloud:8529"
-
-ADB_TOKEN=$(curl -X POST -d "{\"username\":\"<username>\",\"password\":\"<password>\"}" "$APPLICATION_ENDPOINT/_open/auth" | jq -r '.jwt')
-
-curl -H "Authorization: bearer $ADB_TOKEN" "$ENGINE_URL/v1/jobs"
-```
-
-All requests to the engine API start jobs, each representing an operation.
-You can check the progress of operations and see whether errors occurred.
-You can submit jobs concurrently and they also run concurrently.
-
-You can find the API reference documentation with detailed descriptions of the
-request and response data structures at .
-
-Request and response payloads are JSON-encoded in the engine API.
-
-### Load data
-
-`POST /v1/loaddata`
-
-Import graph data from a database of the ArangoDB deployment. You can import
-named graphs as well as sets of vertex and edge collections (see
-[Managed and unmanaged graphs](../graphs/_index.md#managed-and-unmanaged-graphs)).
-
-```bash
-curl -H "Authorization: bearer $ADB_TOKEN" -XPOST -d '{"database":"_system","graph_name":"connectedComponentsGraph"}' "$ENGINE_URL/v1/loaddata"
-```
-
-### Run algorithms
-
-#### PageRank
-
-`POST /v1/pagerank`
-
-PageRank is a well known algorithm to rank vertices in a graph: the more
-important a vertex, the higher rank it gets. It goes back to L. Page and S. Brin's
-[paper](http://infolab.stanford.edu/pub/papers/google.pdf) and
-is used to rank pages in search engines (hence the name). The algorithm runs
-until the execution converges. To run for a fixed number of iterations, use the
-`maximum_supersteps` parameter.
-
-The rank of a vertex is a positive real number. The algorithm starts with every
-vertex having the same rank (one divided by the number of vertices) and sends its
-rank to its out-neighbors. The computation proceeds in iterations. In each iteration,
-the new rank is computed according to the formula
-`( (1 - damping_factor) / total number of vertices) + (damping_factor * the sum of all incoming ranks)`.
-The value sent to each of the out-neighbors is the new rank divided by the number
-of those neighbors, thus every out-neighbor gets the same part of the new rank.
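-
-Written out (in LaTeX notation, with `d` the `damping_factor` and `N` the
-total number of vertices), one iteration computes:
-
-$$\text{rank}_{i+1}(v) = \frac{1 - d}{N} + d \sum_{u \rightarrow v} \frac{\text{rank}_i(u)}{\text{outdeg}(u)}$$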
-
-The algorithm stops when at least one of the two conditions is satisfied:
-- The maximum number of iterations is reached. This is the same `maximum_supersteps`
- parameter as for the other algorithms.
-- Every vertex changes its rank in the last iteration by less than a certain
- threshold. The threshold is hardcoded to `0.0000001`.
-
-It is possible to specify an initial distribution for the vertex documents in
-your graph. To define these seed ranks / centralities, you can specify a
-`seeding_attribute` in the properties for this algorithm. If the specified field is
-set on a document _and_ the value is numeric, then it is used instead of
-the default initial rank of `1 / numVertices`.
-
-Parameters:
-- `graph_id`
-- `damping_factor`
-- `maximum_supersteps`
-- `seeding_attribute` (optional, for seeded PageRank)
-
-Result: the rank of each vertex
-
-```bash
-GRAPH_ID="234"
-curl -H "Authorization: bearer $ADB_TOKEN" -XPOST -d "{\"graph_id\":$GRAPH_ID,\"damping_factor\":0.85,\"maximum_supersteps\":500,\"seeding_attribute\":\"seed_attr\"}" "$ENGINE_URL/v1/pagerank"
-```
-
-{{< comment >}} Not merged yet
-#### Single-Source Shortest Path (SSSP)
-
-`POST /v1/single_source_shortest_path`
-
-The algorithm computes the shortest path from a given source vertex to all other
-vertices and returns the length of this path (distance). The algorithm returns a
-distance of `-1` for a vertex that cannot be reached from the source, and `0`
-for the source vertex itself.
-
-Parameters:
-- `graph_id`
-- `source_vertex`: The document ID of the source vertex.
-- `undirected`: Determines whether the algorithm respects the direction of edges.
-
-Result: the distance of each vertex to the `source_vertex`
-
-```bash
-GRAPH_ID="234"
-curl -H "Authorization: bearer $ADB_TOKEN" -XPOST -d "{\"graph_id\":$GRAPH_ID,\"source_vertex\":\"vertex/345\",\"undirected\":false}" "$ENGINE_URL/v1/single_source_shortest_path"
-```
-{{< /comment >}}
-
-#### Weakly Connected Components (WCC)
-
-`POST /v1/wcc`
-
-The weakly connected component algorithm partitions a graph into maximal groups
-of vertices, so that within a group, all vertices are reachable from each vertex
-by following the edges, ignoring their direction.
-
-In other words, each weakly connected component is a maximal subgraph such that
-there is a path between each pair of vertices where one can also follow edges
-against their direction in a directed graph.
-
-Parameters:
-- `graph_id`
-
-Result: a component ID for each vertex. All vertices from the same component
-obtain the same component ID; every two vertices from different components
-obtain different IDs.
-
-```bash
-GRAPH_ID="234"
-curl -H "Authorization: bearer $ADB_TOKEN" -XPOST -d "{\"graph_id\":$GRAPH_ID}" "$ENGINE_URL/v1/wcc"
-```
-
-#### Strongly Connected Components (SCC)
-
-`POST /v1/scc`
-
-The strongly connected components algorithm partitions a graph into maximal
-groups of vertices, so that within a group, all vertices are reachable from each
-vertex by following the edges in their direction.
-
-In other words, a strongly connected component is a maximal subgraph, where for
-every two vertices, there is a path from one of them to the other, forming a
-cycle. In contrast to a weakly connected component, one cannot follow edges
-against their direction.
-
-Parameters:
-
-- `graph_id`
-
-Result: a component ID for each vertex. All vertices from the same component
-obtain the same component ID; every two vertices from different components
-obtain different IDs.
-
-```bash
-GRAPH_ID="234"
-curl -H "Authorization: bearer $ADB_TOKEN" -XPOST -d "{\"graph_id\":$GRAPH_ID}" "$ENGINE_URL/v1/scc"
-```
-
-#### Vertex Centrality
-
-Centrality measures help identify the most important vertices in a graph.
-They can be used in a wide range of applications:
-to identify influencers in social networks, or middlemen in terrorist
-networks.
-
-There are various definitions for centrality, the simplest one being the
-vertex degree. These definitions were not designed with scalability in mind.
-It is probably impossible to discover an efficient algorithm which computes
-them in a distributed way. Fortunately there are scalable substitutions
-available, which should be equally usable for most use cases.
-
-##### Betweenness Centrality
-
-`POST /v1/betweennesscentrality`
-
-A relatively expensive algorithm with complexity `O(V*E)`, where `V` is the
-number of vertices and `E` is the number of edges in the graph.
-
-Betweenness centrality can be approximated by cheaper algorithms like
-LineRank, but this algorithm strives to compute accurate centrality measures.
-
-Parameters:
-- `graph_id`
-- `k` (number of start vertices, 0 = all)
-- `undirected`
-- `normalized`
-- `parallelism`
-
-Result: a centrality measure for each vertex
-
-```bash
-GRAPH_ID="234"
-curl -H "Authorization: bearer $ADB_TOKEN" -XPOST -d "{\"graph_id\":$GRAPH_ID,\"k\":0,\"undirected\":false,\"normalized\":true}" "$ENGINE_URL/v1/betweennesscentrality"
-```
-
-{{< comment >}} Not merged yet
-##### Effective Closeness
-
-A common definition of centrality is the **closeness centrality**
-(or closeness). The closeness of a vertex in a graph is the inverse average
-length of the shortest path between the vertex and all other vertices.
-For vertices *x*, *y* and shortest distance `d(y, x)` it is defined as:
-
-$$C(x) = \frac{n - 1}{\sum_{y \neq x} d(y, x)}$$
-
-where *n* is the number of vertices.
-
-Effective Closeness approximates the closeness measure. The algorithm works by
-iteratively estimating the number of shortest paths passing through each vertex.
-The score approximates the real closeness score, since it is not possible
-to actually count all shortest paths due to the horrendous `O(n^2 * d)` memory
-requirements. The algorithm is from the paper
-*Centralities in Large Networks: Algorithms and Observations (U Kang et al. 2011)*.
-
-ArangoDB's implementation approximates the number of shortest paths in each
-iteration by using a HyperLogLog counter with 64 buckets. This should work well
-on large graphs and on smaller ones as well. The memory requirements should be
-**O(n * d)** where *n* is the number of vertices and *d* the diameter of your
-graph. Each vertex stores a counter for each iteration of the algorithm.
-
-Parameters:
-- `graph_id`
-- `undirected`: Whether to ignore the direction of edges
-- `maximum_supersteps`
-
-Result: a closeness measure for each vertex
-
-```bash
-GRAPH_ID="234"
-curl -H "Authorization: bearer $ADB_TOKEN" -XPOST -d "{\"graph_id\":$GRAPH_ID,\"undirected\":false,\"maximum_supersteps\":500}" "$ENGINE_URL/v1/effectivecloseness"
-```
-{{< /comment >}}
-
-##### LineRank
-
-`POST /v1/linerank`
-
-Another common measure is the [*betweenness* centrality](https://en.wikipedia.org/wiki/Betweenness_centrality):
-It measures the number of times a vertex is part of shortest paths between any
-pairs of vertices. For a vertex *v* betweenness is defined as:
-
-$$b(v) = \sum_{x \neq v \neq y} \frac{\sigma_{xy}(v)}{\sigma_{xy}}$$
-
-Where the σ represents the number of shortest paths between *x* and *y*,
-and σ(v) represents the number of paths also passing through a vertex *v*.
-By intuition a vertex with higher betweenness centrality has more
-information passing through it.
-
-**LineRank** approximates the random walk betweenness of every vertex in a
-graph. This is the probability that someone, starting on an arbitrary vertex,
-visits this node when they randomly choose edges to visit.
-
-The algorithm essentially builds a line graph out of your graph
-(switches the vertices and edges), and then computes a score similar to PageRank.
-This can be considered a scalable equivalent to vertex betweenness, which can
-be executed in a distributed way in ArangoDB. The algorithm is from the paper
-*Centralities in Large Networks: Algorithms and Observations (U Kang et al. 2011)*.
-
-Parameters:
-- `graph_id`
-- `damping_factor`
-- `maximum_supersteps`
-
-Result: the line rank of each vertex
-
-```bash
-GRAPH_ID="234"
-curl -H "Authorization: bearer $ADB_TOKEN" -XPOST -d "{\"graph_id\":$GRAPH_ID,\"damping_factor\":0.0000001,\"maximum_supersteps\":500}" "$ENGINE_URL/v1/linerank"
-```
-
-#### Community Detection
-
-Graphs based on real world networks often have a community structure.
-This means it is possible to find groups of vertices such that each vertex
-group is internally more densely connected than outside the group.
-This has many applications when you want to analyze your networks. For example,
-social networks include community groups (the origin of the term, in fact)
-based on common location, interests, occupation, and so on.
-
-##### Label Propagation
-
-`POST /v1/labelpropagation`
-
-[*Label Propagation*](https://arxiv.org/pdf/0709.2938) can be used to implement
-community detection on large graphs.
-
-The algorithm assigns an initial community identifier to every vertex in the
-graph using a user-defined attribute. The idea is that each vertex should be in
-the community that most of its neighbors are in at the end of the computation.
-
-In each iteration of the computation, a vertex sends its current community ID to
-all its neighbor vertices, inbound and outbound (ignoring edge directions).
-After that, each vertex adopts the community ID it received most frequently in
-the last step.
-
-It can happen that a vertex receives multiple most frequent community IDs.
-In this case, one is chosen either randomly or using a deterministic choice
-depending on a setting for the algorithm. The rules for a deterministic tiebreak
-are as follows:
-- If a vertex obtains only one community ID and the ID of the vertex from the
- previous step, its old ID, is less than the obtained ID, the old ID is kept.
-- If a vertex obtains more than one ID, its new ID is the lowest ID among the
- most frequently obtained IDs. For example, if the initial IDs are numbers and
- the obtained IDs are 1, 2, 2, 3, 3, then 2 is the new ID.
-- If, however, no ID arrives more than once, the new ID is the minimum of the
- lowest obtained IDs and the old ID. For example, if the old ID is 5 and the
- obtained IDs are 3, 4, 6, then the new ID is 3. If the old ID is 2, it is kept.
-
-The algorithm runs until it converges or reaches the maximum iteration bound.
-It may not converge on large graphs if the synchronous variant is used.
-- **Synchronous**: The new community ID of a vertex is based on the
- community IDs of its neighbors from the previous iteration. With (nearly)
- [bipartite](https://en.wikipedia.org/wiki/Bipartite_graph) subgraphs, this may
- lead to the community IDs changing back and forth in each iteration within the
- two halves of the subgraph.
-- **Asynchronous**: A vertex determines the new community ID using the most
- up-to-date community IDs of its neighbors, whether those updates occurred in
- the current iteration or the previous one. The order in which vertices are
- updated in each iteration is chosen randomly. This leads to more stable
- community IDs.
-
-Parameters:
-- `graph_id`
-- `start_label_attribute`
-- `synchronous`
-- `random_tiebreak`
-- `maximum_supersteps`
-
-Result: a community ID for each vertex
-
-```bash
-GRAPH_ID="234"
-curl -H "Authorization: bearer $ADB_TOKEN" -XPOST -d "{\"graph_id\":$GRAPH_ID,\"start_label_attribute\":\"start_attr\",\"synchronous\":false,\"random_tiebreak\":false,\"maximum_supersteps\":500}" "$ENGINE_URL/v1/labelpropagation"
-```
-
-##### Attribute Propagation
-
-`POST /v1/attributepropagation`
-
-The attribute propagation algorithm can be used to implement community detection.
-It works similarly to the label propagation algorithm, but every node additionally
-accumulates a memory of observed labels instead of forgetting all but one label.
-
-The algorithm assigns an initial value to every vertex in the graph using a
-user-defined attribute. The attribute value can be a list of strings to
-initialize the set of labels with multiple labels.
-
-In each iteration of the computation, the following steps are executed:
-
-1. Each vertex propagates its set of labels along the edges to all direct
- neighbor vertices. Whether inbound or outbound edges are followed depends on
- an algorithm setting.
-2. Each vertex adds the labels it receives to its own set of labels.
-
-After a specified maximal number of iterations or if no label set changes any
-more, the algorithm stops.
-
-{{< warning >}}
-If there are many labels and the graph is well-connected, the result set can
-be very large.
-{{< /warning >}}
-
-Parameters:
-- `graph_id`
-- `start_label_attribute`: The attribute to initialize labels with.
- Use `"@id"` to use the document IDs of the vertices.
-- `synchronous`: Whether synchronous or asynchronous label propagation is used.
-- `backwards`: Whether labels are propagated in the edge direction (`false`)
-  or the opposite direction (`true`).
-- `maximum_supersteps`: Maximum number of iterations.
-
-Result: The set of accumulated labels of each vertex.
-
-```bash
-GRAPH_ID="234"
-curl -H "Authorization: bearer $ADB_TOKEN" -XPOST -d "{\"graph_id\":$GRAPH_ID,\"start_label_attribute\":\"start_attr\",\"synchronous\":false,\"backwards\":false,\"maximum_supersteps\":500}" "$ENGINE_URL/v1/attributepropagation"
-```
-
-{{< comment >}} Work in progress
-#### Custom Python code
-
-`POST /v1/python`
-
-You can run Python code to implement custom graph analytics algorithms as well
-as execute any of the graph algorithms available in the supported Python
-libraries.
-
-The NetworkX library is available by default using the variable `nx`:
-
-```py
-def worker(graph):
- return nx.pagerank(graph, 0.85)
-```
-
-Parameters:
-- `graph_id`
-- `function`: A string with Python code. It must define a function `def worker(graph):`
- that returns a dataframe or dictionary with the results. The key inside that
- dict must represent the vertex ID. The value can be of any type.
-- `use_cugraph`: Use cugraph (or regular pandas/pyarrow).
-
-Result: Depends on the algorithm. If multiple values are returned for a single
-vertex, a JSON object with multiple keys is stored.
-
-```bash
-GRAPH_ID="234"
-curl -H "Authorization: bearer $ADB_TOKEN" -XPOST -d "{\"graph_id\":$GRAPH_ID,\"function\":\"def worker(graph):\n return nx.pagerank(graph, 0.85)\",\"use_cugraph\":false}" "$ENGINE_URL/v1/python"
-```
-{{< /comment >}}
-
-### Store job results
-
-`POST /v1/storeresults`
-
-You need to specify the ArangoDB `database` and the `target_collection` to
-save the results to. Both need to exist already.
-
-You also need to specify a list of `job_ids` with one or more jobs that have run
-graph algorithms.
-
-Each algorithm outputs one value for each vertex, and you can define the target
-attribute to store the information in with `attribute_names`. It has to be a
-list with one attribute name for every job in the `job_ids` list.
-
-You can optionally set the degree of `parallelism` and the `batch_size` for
-saving the data.
-
-Parameters:
-- `database`
-- `target_collection`
-- `job_ids`
-- `attribute_names`
-- `parallelism`
-- `batch_size`
-
-```bash
-JOB_ID="123"
-curl -H "Authorization: bearer $ADB_TOKEN" -X POST -d "{\"database\":\"_system\",\"target_collection\":\"coll\",\"job_ids\":[$JOB_ID],\"attribute_names\":[\"attr\"]}" "$ENGINE_URL/v1/storeresults"
-```
-
-### List all jobs
-
-`GET /v1/jobs`
-
-List all active and finished jobs.
-
-```bash
-curl -H "Authorization: bearer $ADB_TOKEN" "$ENGINE_URL/v1/jobs"
-```
-
-### Get a job
-
-`GET /v1/jobs/<job-id>`
-
-Get detailed information about a specific job.
-
-```bash
-JOB_ID="123"
-curl -H "Authorization: bearer $ADB_TOKEN" "$ENGINE_URL/v1/jobs/$JOB_ID"
-```
-
-### Delete a job
-
-`DELETE /v1/jobs/<job-id>`
-
-Delete a specific job.
-
-```bash
-JOB_ID="123"
-curl -H "Authorization: bearer $ADB_TOKEN" -X DELETE "$ENGINE_URL/v1/jobs/$JOB_ID"
-```
-
-### List all graphs
-
-`GET /v1/graphs`
-
-List all loaded sets of graph data that reside in the memory of the engine node.
-
-```bash
-curl -H "Authorization: bearer $ADB_TOKEN" "$ENGINE_URL/v1/graphs"
-```
-
-### Get a graph
-
-`GET /v1/graphs/<graph-id>`
-
-Get detailed information about a specific set of graph data.
-
-```bash
-GRAPH_ID="234"
-curl -H "Authorization: bearer $ADB_TOKEN" "$ENGINE_URL/v1/graphs/$GRAPH_ID"
-```
-
-### Delete a graph
-
-`DELETE /v1/graphs/<graph-id>`
-
-Delete a specific set of graph data, removing it from the memory of the engine node.
-
-```bash
-GRAPH_ID="234"
-curl -H "Authorization: bearer $ADB_TOKEN" -X DELETE "$ENGINE_URL/v1/graphs/$GRAPH_ID"
-```
diff --git a/site/content/3.11/data-science/llm-knowledge-graphs.md b/site/content/3.11/data-science/llm-knowledge-graphs.md
deleted file mode 100644
index aa5c11bc84..0000000000
--- a/site/content/3.11/data-science/llm-knowledge-graphs.md
+++ /dev/null
@@ -1,73 +0,0 @@
----
-title: Large Language Models (LLMs) and Knowledge Graphs
-menuTitle: Large Language Models and Knowledge Graphs
-weight: 133
-description: >-
- Integrate large language models (LLMs) with knowledge graphs using ArangoDB
----
-Large language models (LLMs) and knowledge graphs are two prominent and
-contrasting concepts, each possessing unique characteristics and functionalities
-that significantly impact the methods we employ to extract valuable insights from
-constantly expanding and complex datasets.
-
-LLMs, exemplified by OpenAI's ChatGPT, represent a class of powerful language
-transformers. These models leverage advanced neural networks to exhibit a
-remarkable proficiency in understanding, generating, and participating in
-contextually-aware conversations.
-
-On the other hand, knowledge graphs contain carefully structured data and are
-designed to capture intricate relationships among discrete and seemingly
-unrelated information. With knowledge graphs, you can explore contextual
-insights and execute structured queries that reveal hidden connections within
-complex datasets.
-
-ArangoDB's unique capabilities and flexible integration of knowledge graphs and
-LLMs provide a powerful and efficient solution for anyone seeking to extract
-valuable insights from diverse datasets.
-
-## Knowledge Graphs
-
-A knowledge graph can be thought of as a dynamic and interconnected network of
-real-world entities and the intricate relationships that exist between them.
-
-Key aspects of knowledge graphs:
-- **Domain specific knowledge**: You can tailor knowledge graphs to specific
- domains and industries.
-- **Structured information**: Makes it easy to query, analyze, and extract
- meaningful insights from your data.
-- **Accessibility**: You can build a knowledge graph from Semantic Web data
-  or from custom data.
-
-LLMs can help distill knowledge graphs from natural language by performing
-the following tasks:
-- Entity discovery
-- Relation extraction
-- Coreference resolution
-- End-to-end knowledge graph construction
-- (Text) Embeddings
-
-## ArangoDB and LangChain
-
-[LangChain](https://www.langchain.com/) is a framework for developing applications
-powered by language models.
-
-LangChain enables applications that are:
-- Data-aware (connect a language model to other sources of data)
-- Agentic (allow a language model to interact with its environment)
-
-The ArangoDB integration with LangChain provides you the ability to analyze
-data seamlessly via natural language, eliminating the need for query language
-design. By using LLM chat models such as OpenAI’s ChatGPT, you can "speak" to
-your data instead of querying it.
-
-### Get started with ArangoDB QA chain
-
-The [ArangoDB QA chain notebook](https://langchain-langchain.vercel.app/docs/integrations/providers/arangodb/)
-shows how to use LLMs to provide a natural language interface to an ArangoDB
-instance.
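-
-As a minimal sketch of such a chain (assuming the `langchain`,
-`langchain-community`, `langchain-openai`, and `python-arango` packages and an
-OpenAI API key; exact module paths and required parameters can vary across
-LangChain versions):
-
-```py
-from arango import ArangoClient
-from langchain.chains import ArangoGraphQAChain
-from langchain_community.graphs import ArangoGraph
-from langchain_openai import ChatOpenAI
-
-# Connect to ArangoDB and wrap the database for LangChain
-db = ArangoClient(hosts="http://localhost:8529").db(
-    "_system", username="root", password=""  # placeholder credentials
-)
-graph = ArangoGraph(db)
-
-# The chain lets the LLM translate a question into AQL and answer from the results
-chain = ArangoGraphQAChain.from_llm(
-    ChatOpenAI(temperature=0), graph=graph,
-    allow_dangerous_requests=True,  # required by recent LangChain versions
-)
-chain.invoke("Which character has the most relationships?")
-```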
-
-Run the notebook directly in [Google Colab](https://colab.research.google.com/github/arangodb/interactive_tutorials/blob/master/notebooks/Langchain.ipynb).
-
-See also other [machine learning interactive tutorials](https://github.com/arangodb/interactive_tutorials#machine-learning).
\ No newline at end of file
diff --git a/site/content/3.11/data-science/pregel/algorithms.md b/site/content/3.11/data-science/pregel/algorithms.md
deleted file mode 100644
index b596d7669b..0000000000
--- a/site/content/3.11/data-science/pregel/algorithms.md
+++ /dev/null
@@ -1,369 +0,0 @@
----
-title: Pregel Algorithms
-menuTitle: Pregel Algorithms
-weight: 5
-description: >-
- You can use Pregel algorithms for graph exploration, path finding, analytics
- queries, and much more
-aliases:
- - pregel-algorithms
----
-Pregel algorithms are used in scenarios where you need to analyze a graph
-stored in ArangoDB to get insights about its nature and structure, without
-having to use external processing systems.
-
-Pregel can solve numerous graph problems and offers solutions that are
-essential building blocks in the cycle of a real world application.
-For example, in a network system, detecting the weaknesses of the network
-design and determining the times when the network is vulnerable may
-significantly reduce any downtime.
-
-In the section below you can find more details about all available
-Pregel algorithms in ArangoDB.
-
-## Available Algorithms
-
-### PageRank
-
-PageRank is a well known algorithm to rank vertices in a graph: the more
-important a vertex, the higher rank it gets. It goes back to L. Page and S. Brin's
-[paper](http://infolab.stanford.edu/pub/papers/google.pdf) and
-is used to rank pages in search engines (hence the name). The algorithm runs
-until the execution converges. To specify a custom threshold, use the `threshold`
-parameter; to run for a fixed number of iterations, use the `maxGSS` parameter.
-
-The rank of a vertex is a positive real number. The algorithm starts with every
-vertex having the same rank (one divided by the number of vertices) and sends its
-rank to its out-neighbors. The computation proceeds in iterations. In each iteration,
-the new rank is computed according to the formula
-`(0.15/total number of vertices) + (0.85 * the sum of all incoming ranks)`.
-The value sent to each of the out-neighbors is the new rank divided by the number
-of those neighbors, thus every out-neighbor gets the same part of the new rank.
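-
-Written out (in LaTeX notation, with `N` the total number of vertices), one
-iteration computes:
-
-$$\text{rank}_{i+1}(v) = \frac{0.15}{N} + 0.85 \sum_{u \rightarrow v} \frac{\text{rank}_i(u)}{\text{outdeg}(u)}$$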
-
-The algorithm stops when at least one of the two conditions is satisfied:
-- The maximum number of iterations is reached. This is the same `maxGSS`
- parameter as for the other algorithms.
-- Every vertex changes its rank in the last iteration by less than a certain
- threshold. The default threshold is 0.00001, a custom value can be set with
- the `threshold` parameter.
-
-```js
-var pregel = require("@arangodb/pregel");
-pregel.start("pagerank", "graphname", { maxGSS: 100, threshold: 0.00000001, resultField: "rank" })
-```
-
-#### Seeded PageRank
-
-It is possible to specify an initial distribution for the vertex documents in
-your graph. To define these seed ranks / centralities, you can specify a
-`sourceField` in the properties for this algorithm. If the specified field is
-set on a document _and_ the value is numeric, then it is used instead of
-the default initial rank of `1 / numVertices`.
-
-```js
-var pregel = require("@arangodb/pregel");
-pregel.start("pagerank", "graphname", { maxGSS: 20, threshold: 0.00000001, sourceField: "seed", resultField: "rank" })
-```
-
-### Single-Source Shortest Path
-
-Calculates the distances, that is, the lengths of shortest paths from the
-given source to all other vertices, called _targets_. The result is written
-to the specified property of the respective target.
-The distance to the source vertex itself is returned as `0` and a length above
-`9007199254740991` (max safe integer) means that there is no path from the
-source to the vertex in the graph.
-
-The algorithm runs until all distances are computed. The number of iterations is bounded by the
-diameter of your graph (the longest distance between two vertices).
-
-A call of the algorithm requires the `source` parameter whose value is the
-document ID of the source vertex. The result field needs to be
-specified in `_resultField` (note the underscore).
-
-```js
-var pregel = require("@arangodb/pregel");
-pregel.start("sssp", "graphname", { source: "vertices/1337", _resultField: "distance" });
-```
-
-### Connected Components
-
-There are three algorithms to find connected components in a graph:
-
-1. If your graph is effectively undirected (for every edge from vertex A to
- vertex B there is also an edge from B to A),
- then the simple **connected components** algorithm named
- `"connectedcomponents"` is suitable.
-
- It is a very simple and fast algorithm, but it only works correctly on
- undirected graphs. Your results on directed graphs may vary, depending on
- how connected your components are.
-
- In an undirected graph, a _connected component_ is a subgraph:
- - where there is a path between every pair of vertices from this component and
- - which is maximal with this property: adding any other vertex would destroy it.
- In other words, there is no path between any vertex from the component and
- any vertex not in the component.
-
-2. To find **weakly connected components** (WCC) you can use the algorithm named `"wcc"`.
- A _weakly connected component_ in a directed graph is a maximal subgraph such
- that there is a path between each pair of vertices
-where _one can also walk against the direction of edges_. More formally, it is
- a connected component (see the definition above) in the
- _underlying undirected graph_, i.e., in the undirected graph obtained by
- adding an edge from vertex B to vertex A (if it does not already exist),
- if there is an edge from vertex A to vertex B.
-
- This algorithm works on directed graphs but, in general, requires a greater amount of
- traffic between DB-Servers.
-
-3. To find **strongly connected components** (SCC) you can use the algorithm named `"scc"`.
- A _strongly connected component_ is a maximal subgraph,
- where for every two vertices, there is a path from one of them to the other.
- It is thus defined as a weakly connected component,
- but one is not allowed to run against the edge directions.
-
- The algorithm is more complex than the WCC algorithm and, in general, requires more memory.
-
-All above algorithms assign a component ID to each vertex, a number which is
-written into the specified `resultField`. All vertices from the same component
-obtain the same component ID; every two vertices from different components
-obtain different IDs.
-
-```js
-var pregel = require("@arangodb/pregel");
-
-// connected components
-pregel.start("connectedcomponents", "graphname", { resultField: "component" });
-
-// weakly connected components
-pregel.start("wcc", "graphname", { resultField: "component_weak" });
-
-// strongly connected components
-pregel.start("scc", "graphname", { resultField: "component_strong" });
-```
-
-### Hyperlink-Induced Topic Search (HITS)
-
-HITS is a link analysis algorithm that rates Web pages, developed by
-Jon Kleinberg in J. Kleinberg,
-[Authoritative sources in a hyperlinked environment](http://www.cs.cornell.edu/home/kleinber/auth.pdf),
-Journal of the ACM. 46 (5): 604–632, 1999. The algorithm is also known as _Hubs and Authorities_.
-
-The idea behind hubs and authorities comes from the typical structure of the early web:
-certain websites, known as hubs, serve as large directories that are not actually
-authoritative on the information that they point to. These hubs are used as
-compilations of a broad catalog of information that leads users to other,
-authoritative, webpages.
-
-The algorithm assigns two scores to each vertex: the authority score and the
-hub score. The authority score of a vertex rates the total hub score of vertices
-pointing to that vertex; the hub score rates the total authority
-score of the vertices it points to. Also see
-[en.wikipedia.org/wiki/HITS_algorithm](https://en.wikipedia.org/wiki/HITS_algorithm).
-Note, however, that this version of the algorithm is slightly different from that of the original paper.
-
-ArangoDB offers two versions of the algorithm: the original Kleinberg's version and our own version
-that has some advantages and disadvantages as discussed below.
-
-Both versions keep two values for each vertex, the hub value and the authority
-value, and update both of them in iterations until the corresponding sequences
-converge or until the maximum number of steps is reached. The hub value of a
-vertex is updated from the authority values of the vertices it points to;
-the authority value is updated from the hub values of the vertices pointing to it.
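-
-In standard HITS notation (a sketch; both ArangoDB versions differ from this
-in technical details, as noted below), one iteration computes:
-
-$$\text{auth}(v) = \sum_{u \rightarrow v} \text{hub}(u) \qquad \text{hub}(v) = \sum_{v \rightarrow w} \text{auth}(w)$$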
-
-The differences between the two versions are technical (and we omit the
-tedious description here) but have some less technical implications:
-- The original version needs twice as many global super-steps as our version.
-- The original version is guaranteed to converge; our version may also converge,
-but there are examples where it does not (for instance, on undirected stars).
-- In the original version, the output values are normed in the sense that the
-sum of their squared values is 1; our version does not guarantee that.
-
-In a call of either version, the `threshold` parameter can be used to set a limit for the convergence
-(measured as the maximum absolute difference of the hub and authority scores
-between the current and last iteration).
-
-If the value of the result field is `<resultField>`, then the hub score is
-stored in the `<resultField>_hub` field and the authority score in the
-`<resultField>_auth` field.
-
-The algorithm can be executed like this:
-
-```js
-var pregel = require("@arangodb/pregel");
-var jobId = pregel.start("hits", "graphname", { threshold:0.00001, resultField: "score" });
-```
-
-for ArangoDB's version and
-
-```js
-var pregel = require("@arangodb/pregel");
-var jobId = pregel.start("hitskleinberg", "graphname", { threshold:0.00001, resultField: "score" });
-```
-
-for the original version.
-
-### Vertex Centrality
-
-Centrality measures help identify the most important vertices in a graph.
-They can be used in a wide range of applications:
-to identify influencers in social networks, or middlemen in terrorist
-networks.
-
-There are various definitions for centrality, the simplest one being the
-vertex degree. These definitions were not designed with scalability in mind.
-It is probably impossible to discover an efficient algorithm which computes
-them in a distributed way. Fortunately there are scalable substitutions
-available, which should be equally usable for most use cases.
-
-#### Effective Closeness
-
-A common definition of centrality is the **closeness centrality**
-(or closeness). The closeness of a vertex in a graph is the inverse average
-length of the shortest path between the vertex and all other vertices.
-For vertices *x*, *y* and shortest distance `d(y, x)` it is defined as:
-
-$$C(x) = \frac{n - 1}{\sum_{y \neq x} d(y, x)}$$
-
-where *n* is the number of vertices.
-
-Effective Closeness approximates the closeness measure. The algorithm works by
-iteratively estimating the number of shortest paths passing through each vertex.
-The score approximates the real closeness score, since it is not possible
-to actually count all shortest paths due to the horrendous `O(n^2 * d)` memory
-requirements. The algorithm is from the paper
-*Centralities in Large Networks: Algorithms and Observations (U Kang et al. 2011)*.
-
-ArangoDB's implementation approximates the number of shortest paths in each
-iteration by using a HyperLogLog counter with 64 buckets. This should work well
-on large graphs and on smaller ones as well. The memory requirements should be
-**O(n * d)** where *n* is the number of vertices and *d* the diameter of your
-graph. Each vertex stores a counter for each iteration of the algorithm.
-
-The algorithm can be used like this:
-
-```js
-const pregel = require("@arangodb/pregel");
-const jobId = pregel.start("effectivecloseness", "graphname", { resultField: "closeness" });
-```
-
-#### LineRank
-
-Another common measure is the [*betweenness* centrality](https://en.wikipedia.org/wiki/Betweenness_centrality):
-It measures the number of times a vertex is part of shortest paths between any
-pairs of vertices. For a vertex *v* betweenness is defined as:
-
-$$b(v) = \sum_{x \neq v \neq y} \frac{\sigma_{xy}(v)}{\sigma_{xy}}$$
-
-Where the σ represents the number of shortest paths between *x* and *y*,
-and σ(v) represents the number of paths also passing through a vertex *v*.
-By intuition a vertex with higher betweenness centrality has more
-information passing through it.
-
-**LineRank** approximates the random walk betweenness of every vertex in a
-graph. This is the probability that someone, starting on an arbitrary vertex,
-visits this node when they randomly choose edges to visit.
-
-The algorithm essentially builds a line graph out of your graph
-(switches the vertices and edges), and then computes a score similar to PageRank.
-This can be considered a scalable equivalent to vertex betweenness, which can
-be executed in a distributed way in ArangoDB. The algorithm is from the paper
-*Centralities in Large Networks: Algorithms and Observations (U Kang et al. 2011)*.
-
-```js
-const pregel = require("@arangodb/pregel");
-const jobId = pregel.start("linerank", "graphname", { resultField: "linerank" });
-```
-
-### Community Detection
-
-Graphs based on real world networks often have a community structure.
-This means it is possible to find groups of vertices such that each vertex
-group is internally more densely connected than outside the group.
-This has many applications when you want to analyze your networks. For example,
-social networks include community groups (the origin of the term, in fact)
-based on common location, interests, occupation, and so on.
-
-#### Label Propagation
-
-*Label Propagation* can be used to implement community detection on large
-graphs. The algorithm assigns a community, more precisely, a Community ID
-(a natural number), to every vertex in the graph.
-The idea is that each vertex should be in the community that most of
-its neighbors are in.
-
-At first, the algorithm assigns unique initial Community IDs to the vertices.
-The assignment is deterministic given the graph and the distribution of vertices
-on the shards, but there is no guarantee that a vertex obtains
-the same initial ID in two different runs of the algorithm, even if the graph does not change
-(because the sharding may change). Moreover, there is no guarantee on a particular
-distribution of the initial IDs over the vertices.
-
-Then, in each iteration, a vertex sends its current Community
-ID to all its neighbor vertices. After that each vertex adopts the Community ID it
-received most frequently in the last step.
-
-Note that, in a usual implementation of Label Propagation, if there are
-multiple most frequently received Community IDs, one is chosen randomly.
-An advantage of our implementation is that this choice is deterministic.
-This comes at the price that the choice rules are somewhat involved:
-If a vertex obtains only one ID and the ID of the vertex from the previous step,
-its old ID, is less than the obtained ID, the old ID is kept.
-(IDs are numbers and thus comparable to each other.) If a vertex obtains
-more than one ID, its new ID is the lowest ID among the most frequently
-obtained IDs. (For example, if the obtained IDs are 1, 2, 2, 3, 3,
-then 2 is the new ID.) If, however, no ID arrives more than once, the new ID is
-the minimum of the lowest obtained IDs and the old ID. (For example, if the
-old ID is 5 and the obtained IDs are 3, 4, 6, then the new ID is 3.
-If the old ID is 2, it is kept.)
-
-If a vertex keeps its ID 20 times or more in a row, it does not send its ID.
-Vertices that did not obtain any IDs do not update their ID and do not send it.
-
-The algorithm runs until it converges, which likely never really happens on
-large graphs. Therefore you need to specify a maximum iteration bound.
-The default bound is 500 iterations, which is too large for
-common applications.
-
-The algorithm should work best on undirected graphs. On directed
-graphs, the resulting partition into communities might change, if the number
-of performed steps changes. How strong the dependence is
-may be influenced by the density of the graph.
-
-```js
-const pregel = require("@arangodb/pregel");
-const jobId = pregel.start("labelpropagation", "graphname", { maxGSS: 100, resultField: "community" });
-```
-
-#### Speaker-Listener Label Propagation
-
-The [Speaker-listener Label Propagation](https://arxiv.org/pdf/1109.5720.pdf)
-(SLPA) can be used to implement community detection. It works similar to the
-label propagation algorithm, but now every node additionally accumulates a
-memory of observed labels (instead of forgetting all but one label).
-
-Before the algorithm runs, every vertex is initialized with a unique ID
-(the initial community label).
-During the run three steps are executed for each vertex:
-
-1. The current vertex is the listener; all other vertices are speakers.
-2. Each speaker sends out a label from memory: a random label is sent with a
-   probability proportional to the number of times the vertex observed the label.
-3. The listener remembers one of the labels: the most frequently observed
-   label is always chosen.
-
-```js
-const pregel = require("@arangodb/pregel");
-const jobId = pregel.start("slpa", "graphname", { maxGSS:100, resultField: "community" });
-```
-
-You can also execute SLPA with the `maxCommunities` parameter to limit the
-number of output communities. Internally the algorithm still keeps the
-memory of all labels, but the output is reduced to just the `n` most frequently
-observed labels.
-
-```js
-const pregel = require("@arangodb/pregel");
-const jobId = pregel.start("slpa", "graphname", { maxGSS: 100, resultField: "community", maxCommunities: 1 });
-// check the status periodically for completion
-pregel.status(jobId);
-```
diff --git a/site/content/3.11/deploy/_index.md b/site/content/3.11/deploy/_index.md
deleted file mode 100644
index be8b6e30f4..0000000000
--- a/site/content/3.11/deploy/_index.md
+++ /dev/null
@@ -1,144 +0,0 @@
----
-title: Deploy ArangoDB
-menuTitle: Deploy
-weight: 185
-description: >-
- ArangoDB supports multiple deployment modes to meet the exact needs of your
- project for resilience and performance
----
-For installation instructions, please refer to the
-[Installation](../operations/installation/_index.md) chapter.
-
-For _production_ deployments, please also carefully check the
-[ArangoDB Production Checklist](production-checklist.md).
-
-## Deployment Modes
-
-ArangoDB can be deployed in different configurations, depending on your needs.
-
-### Single Instance
-
-A [Single Instance deployment](single-instance/_index.md) is the simplest way
-to get started. Unlike other setups, which require some specific procedures,
-a stand-alone instance is straightforward to deploy and can be started manually
-or by using the ArangoDB Starter tool.
-
-### Active Failover
-
-[Active Failover deployments](active-failover/_index.md) use ArangoDB's
-multi-node technology to provide high availability for smaller projects with
-fast asynchronous replication from the leading node to multiple replicas.
-If the leader fails, then a follower takes over seamlessly.
-
-### Cluster
-
-[Cluster deployments](cluster/_index.md) are designed for large scale
-operations and analytics, allowing you to scale elastically with your
-applications and data models. ArangoDB's synchronously-replicating cluster
-technology runs on premises, on Kubernetes, and in the cloud on
-[ArangoGraph](https://dashboard.arangodb.cloud/home?utm_source=docs&utm_medium=cluster_pages&utm_campaign=docs_traffic) - ArangoDB's fully managed service.
-
-Clustering ArangoDB not only delivers better performance and higher capacity,
-but it also provides resilience through replication and automatic failover.
-You can deploy systems that dynamically scale up and down according to demand.
-
-### OneShard
-
-{{< tag "ArangoDB Enterprise Edition" "ArangoGraph" >}}
-
-[OneShard deployments](oneshard.md) are cluster deployments but with the data of
-each database restricted to a single shard. This allows queries to run locally
-on a single DB-Server node for better performance and with transactional
-guarantees similar to a single server deployment. OneShard is primarily intended
-for multi-tenant use cases.
-
-### Datacenter-to-Datacenter
-
-{{< tag "ArangoDB Enterprise Edition" >}}
-
-For cluster deployments, ArangoDB supports
-[Datacenter-to-Datacenter Replication](arangosync/_index.md) (DC2DC). You can
-use it as an additional security feature to replicate your entire cluster
-off-site to another datacenter. The leading datacenter asynchronously replicates
-the data and configuration to the other datacenter for disaster recovery.
-
-## How to deploy
-
-There are different ways to set up and operate ArangoDB.
-
-- You can start all the needed server processes manually, locally or on different
- machines, bare-metal or in Docker containers. This gives you the most control
- but you also need to manually deal with upgrades, outages, and so on.
-
-- You can use the ArangoDB _Starter_ (the _arangodb_ executable) to mostly
- automatically create and keep deployments running, either bare-metal or in
- Docker containers.
-
-- If you want to deploy in your Kubernetes cluster, you can use the
- ArangoDB Kubernetes Operator (`kube-arangodb`).
-
-The fastest way to get ArangoDB up and running is to run it in the cloud - the
-[ArangoGraph Insights Platform](https://dashboard.arangodb.cloud/home?utm_source=docs&utm_medium=cluster_pages&utm_campaign=docs_traffic)
-offers a fully managed cloud service, available on AWS and GCP.
-
-### Manual Deployment
-
-**Single Instance:**
-
-- [Manually created processes](single-instance/manual-start.md)
-- [Manually created Docker containers](single-instance/manual-start.md#manual-start-in-docker)
-
-**Active Failover:**
-
-- [Manually created processes](active-failover/manual-start.md)
-- [Manually created Docker containers](active-failover/manual-start.md#manual-start-in-docker)
-
-**Cluster:**
-
-- [Manually created processes](cluster/deployment/manual-start.md)
-- [Manually created Docker containers](cluster/deployment/manual-start.md#manual-start-in-docker)
-
-### Deploying using the ArangoDB Starter
-
-Setting up an ArangoDB cluster, for example, involves starting various nodes
-with different roles (Agents, DB-Servers, and Coordinators). The _Starter_
-simplifies this process.
-
-The Starter supports different deployment modes (single server, Active Failover,
-cluster) and it can either use Docker containers or processes (using the
-`arangod` executable).
-
-Besides starting and maintaining ArangoDB deployments, the Starter also provides
-various commands to create TLS certificates and JWT token secrets to secure your
-ArangoDB deployments.
-
-The ArangoDB Starter is an executable called `arangodb` and comes with all
-current distributions of ArangoDB.
-
-If you want a specific version, download the precompiled executable via the
-[GitHub releases page](https://github.com/arangodb-helper/arangodb/releases).
-
-**Single Instance:**
-
-- [_Starter_ using processes](single-instance/using-the-arangodb-starter.md)
-- [_Starter_ using Docker containers](single-instance/using-the-arangodb-starter.md#using-the-arangodb-starter-in-docker)
-
-**Active Failover:**
-
-- [_Starter_ using processes](active-failover/using-the-arangodb-starter.md)
-- [_Starter_ using Docker containers](active-failover/using-the-arangodb-starter.md#using-the-arangodb-starter-in-docker)
-
-**Cluster:**
-
-- [_Starter_ using processes](cluster/deployment/using-the-arangodb-starter.md)
-- [_Starter_ using Docker containers](cluster/deployment/using-the-arangodb-starter.md#using-the-arangodb-starter-in-docker)
-
-### Run in the cloud
-
-- [AWS and Azure](in-the-cloud.md)
-- [ArangoGraph Insights Platform](https://dashboard.arangodb.cloud/home?utm_source=docs&utm_medium=cluster_pages&utm_campaign=docs_traffic),
- fully managed, available on AWS and GCP
-
-### Run in Kubernetes
-
-- [ArangoDB Kubernetes Operator](kubernetes.md)
diff --git a/site/content/3.11/deploy/active-failover/_index.md b/site/content/3.11/deploy/active-failover/_index.md
deleted file mode 100644
index 21bd1e1bba..0000000000
--- a/site/content/3.11/deploy/active-failover/_index.md
+++ /dev/null
@@ -1,127 +0,0 @@
----
-title: Active Failover deployments
-menuTitle: Active Failover
-weight: 10
-description: >-
- You can set up multiple single server instances to have one leader and multiple
- asynchronously replicated followers with automatic failover
----
-An _Active Failover_ is defined as:
-
-- One ArangoDB Single-Server instance, called the **Leader**, which is readable
-  and writable by clients
-- One or more ArangoDB Single-Server instances, called **Followers**, which are
-  passive and not writable, and which asynchronously replicate data from the Leader
-- At least one _Agency_ acting as a "witness" to determine which server becomes the _leader_
- in a _failure_ situation
-
-An _Active Failover_ setup behaves differently from an [ArangoDB Cluster](../cluster/_index.md);
-please see the [limitations section](#limitations) for more details.
-
-{{< warning >}}
-The Active Failover deployment mode is deprecated and will no longer be
-supported in the next minor version of ArangoDB, from v3.12 onward.
-{{< /warning >}}
-
-
-
-The advantage of the _Active Failover_ setup is that there is an active third party, the _Agency_,
-which observes and supervises all involved server processes.
-_Follower_ instances can rely on the _Agency_ to determine the correct _Leader_ server.
-From an operational point of view, one advantage is that
-the failover, in case the _Leader_ goes down, is automatic. An additional operational
-advantage is that there is no need to start a _replication applier_ manually.
-
-The _Active Failover_ setup is made **resilient** by the fact that all the official
-ArangoDB drivers can automatically determine the correct _leader_ server and
-redirect requests appropriately. Furthermore, Foxx services also perform a
-failover automatically: should the _leader_ instance (which is also the
-_Foxxmaster_) fail, the newly elected _leader_ reinstalls all Foxx services
-and resumes executing queued [Foxx tasks](../../develop/foxx-microservices/guides/scripts-and-scheduling.md).
-[Database users](../../operations/administration/user-management/_index.md)
-that were created on the _leader_ are also valid on the newly elected _leader_
-(provided that they were already synced).
-
-Consider the case of two *arangod* instances. The two servers are connected via
-server-wide (global) asynchronous replication. One of the servers is
-elected _Leader_, and the other one is made a _Follower_ automatically. At startup,
-the two servers race for the leadership position. This happens through the _Agency
-locking mechanism_ (which means that the _Agency_ needs to be available at server start).
-You can control which server becomes the _Leader_ by starting it before the
-other server instances.
-
-The _Follower_ automatically starts replication from the _Leader_ for all
-available databases, using the server-level replication introduced in version 3.3.
-
-When the _Leader_ goes down, this is automatically detected by the _Agency_
-instance, which is also started in this mode. The _Agency_ makes the
-previous _Follower_ stop its replication and promotes it to the new _Leader_.
-
-{{< info >}}
-The different instances participating in an Active Failover setup are supposed
-to be run in the same _Data Center_ (DC), with a reliable high-speed network
-connection between all the machines participating in the Active Failover setup.
-
-Multi-datacenter Active Failover setups are currently not supported.
-
-A currently supported multi-datacenter solution is _Datacenter-to-Datacenter Replication_
-(DC2DC) among ArangoDB Clusters. See the [DC2DC](../arangosync/deployment/_index.md) chapter for details.
-{{< /info >}}
-
-## Operative Behavior
-
-In contrast to the normal behavior of a single-server instance, the Active-Failover
-mode can change the behavior of ArangoDB in some situations.
-
-The _Follower_ will _always_ deny write requests from client applications. Starting from ArangoDB 3.4,
-read requests are _only_ permitted if they are marked with the `X-Arango-Allow-Dirty-Read: true` header;
-otherwise, they are denied too.
-Only the replication itself is allowed to access the follower's data until the
-follower becomes a new _Leader_ (should a _failover_ happen).
-
-When sending a request to read or write data on a _Follower_, the _Follower_
-responds with `HTTP 503 (Service unavailable)` and provides the address of
-the current _Leader_. Client applications and drivers can use this information to
-then make a follow-up request to the proper _Leader_:
-
-```
-HTTP/1.1 503 Service Unavailable
-X-Arango-Endpoint: http://[::1]:8531
-....
-```
-
-Client applications can also detect who the current _Leader_ and the _Followers_
-are by calling the `/_api/cluster/endpoints` REST API. This API is accessible
-on _Leader_ and _Followers_ alike.
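-
-For example, from _arangosh_ you can call this API directly (a minimal sketch;
-by convention, the first entry of the returned list is the current _Leader_):
-
-```js
-// Connected to any member of the Active Failover setup:
-const res = arango.GET("/_api/cluster/endpoints");
-// res.endpoints is a list of objects such as { "endpoint": "tcp://..." }
-res.endpoints.forEach(e => print(e.endpoint));
-```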
-
-## Reading from Followers
-
-Followers in the Active Failover setup are in read-only mode. It is possible to
-read from these followers by adding an `X-Arango-Allow-Dirty-Read: true` header
-to each request. Responses then automatically contain the
-`X-Arango-Potential-Dirty-Read: true` header so that clients can reject
-accidental dirty reads.
-
-Depending on the driver support for your specific programming language, you should be able
-to enable this option.
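-
-A minimal sketch in _arangosh_ (with a hypothetical collection and document
-key), setting the header on a raw HTTP request and inspecting the response:
-
-```js
-const res = arango.GET_RAW("/_api/document/myCollection/myKey",
-                           { "X-Arango-Allow-Dirty-Read": "true" });
-// arangosh returns response headers in lowercase:
-print(res.headers["x-arango-potential-dirty-read"]); // "true" if potentially stale
-```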
-
-## How to deploy
-
-The tool _ArangoDB Starter_ supports starting two servers with asynchronous
-replication and failover [out of the box](using-the-arangodb-starter.md).
-
-The _arangojs_ driver for JavaScript, the Go driver, the Java driver, and
-the PHP driver support active failover in case the currently accessed server endpoint
-responds with `HTTP 503`.
-
-You can also deploy an *Active Failover* environment [manually](manual-start.md).
-
-## Limitations
-
-The _Active Failover_ setup in ArangoDB has a few limitations.
-
-- In contrast to the [ArangoDB Cluster](../cluster/_index.md):
-  - Active Failover has only asynchronous replication, and hence **no guarantee**
-    on how many database operations may have been lost during a failover.
-  - Active Failover has no global state, and hence a failover to a lagging
-    follower overrides all other followers with that state (including the
-    previous leader, which might have more up-to-date data). In a Cluster setup,
-    a global state is provided by the Agency and hence ArangoDB is aware of the
-    latest state.
-- Should you add more than one _follower_, be aware that during a _failover_ situation
- the failover attempts to pick the most up-to-date follower as the new leader on a **best-effort** basis.
-- Should you be using the [ArangoDB Starter](../../components/tools/arangodb-starter/_index.md)
- or the [Kubernetes Operator](../kubernetes.md) to manage your Active-Failover
- deployment, be aware that upgrading might trigger an unintentional failover between machines.
diff --git a/site/content/3.11/deploy/arangosync/_index.md b/site/content/3.11/deploy/arangosync/_index.md
deleted file mode 100644
index b660c58918..0000000000
--- a/site/content/3.11/deploy/arangosync/_index.md
+++ /dev/null
@@ -1,129 +0,0 @@
----
-title: Datacenter-to-Datacenter Replication
-menuTitle: Datacenter-to-Datacenter Replication
-weight: 25
-description: >-
- A detailed guide to Datacenter-to-Datacenter Replication (DC2DC) for clusters
- and the _arangosync_ tool
----
-{{< tag "ArangoDB Enterprise Edition" >}}
-
-At some point in the growth of a database, there comes a need to replicate it
-across multiple datacenters.
-
-Reasons for that can be:
-
-- Fallback in case of a disaster in one datacenter
-- Regional availability
-- Separation of concerns
-
-And many more.
-
-ArangoDB supports _Datacenter-to-Datacenter Replication_ via the _arangosync_ tool.
-
-ArangoDB's _Datacenter-to-Datacenter Replication_ is a solution that enables you
-to asynchronously replicate the entire structure and content in an ArangoDB Cluster
-in one place to a Cluster in another place. Typically it is used from one datacenter
-to another. It is possible to replicate to multiple other datacenters as well.
-It is not a solution for replicating single server instances.
-
-
-
-The replication done by _ArangoSync_ is **asynchronous**. That means that when
-a client writes data into the source datacenter, the request is considered
-finished before the data has been replicated to the other datacenter.
-The time needed to completely replicate changes to the other datacenter is
-typically in the order of seconds, but this can vary significantly depending on
-load, network & computer capacity.
-
-_ArangoSync_ performs replication in a **single direction** only. That means that
-you can replicate data from cluster _A_ to cluster _B_ or from cluster _B_ to
-cluster _A_, but never at the same time (one leader, one or more follower clusters).
-Data modified in the destination cluster **will be lost!**
-
-Replication is a completely **autonomous** process. Once it is configured it is
-designed to run 24/7 without frequent manual intervention.
-This does not mean that it requires no maintenance or attention at all.
-As with any distributed system some attention is needed to monitor its operation
-and keep it secure (e.g. certificate & password rotation).
-
-In the event of an outage of the leader cluster, user intervention is required
-to either bring the leader back up or to decide on making a follower cluster the
-new leader. There is no automatic failover as follower clusters lag behind the leader
-because of network latency etc. and resuming operation with the state of a follower
-cluster can therefore result in the loss of recent writes. How much can be lost
-largely depends on the data rate of the leader cluster and the delay between
-the leader and the follower clusters. Followers will typically be behind the
-leader by a couple of seconds or minutes.
-
-Once configured, _ArangoSync_ replicates both the **structure and data** of an
-**entire cluster**. This means that there is no need to make additional configuration
-changes when adding/removing databases or collections.
-Metadata such as users, Foxx applications & jobs is also automatically replicated.
-
-A message queue developed by ArangoDB in Go and called **DirectMQ** is used for
-replication. It is tailored for DC2DC replication with efficient native
-networking routines.
-
-## When to use it... and when not
-
-The _Datacenter-to-Datacenter Replication_ is a good solution in all cases where
-you want to replicate data from one cluster to another without the requirement
-that the data is available immediately in the other cluster.
-
-The _Datacenter-to-Datacenter Replication_ is not a good solution when one of the
-following applies:
-
-- You want to replicate data from cluster A to cluster B and from cluster B
-  to cluster A at the same time.
-- You need synchronous replication between 2 clusters.
-- There is no network connection between cluster A and B.
-- You want complete control over which databases, collections & documents are replicated and which are not.
-
-## Requirements
-
-To use _Datacenter-to-Datacenter Replication_ you need the following:
-
-- Two datacenters, each running an ArangoDB Enterprise Edition cluster.
-- A network connection between both datacenters with accessible endpoints
- for several components (see individual components for details).
-- TLS certificates for ArangoSync master instances (can be self-signed).
-- Optional (but recommended) TLS certificates for ArangoDB clusters (can be self-signed).
-- Client certificates CA for _ArangoSync masters_ (typically self-signed).
-- Client certificates for _ArangoSync masters_ (typically self-signed).
-- At least 2 instances of the _ArangoSync master_ in each datacenter.
-- One instance of the _ArangoSync worker_ on every machine in each datacenter.
-
-{{< info >}}
-In several places, you need an x509 certificate.
-The [Certificates](security.md#certificates) section provides more guidance for creating
-and renewing these certificates.
-{{< /info >}}
-
-Besides the above list, you probably want to use the following:
-
-- An orchestrator to keep all components running, e.g. `systemd`.
-- A log file collector for centralized collection & access to the logs of all components.
-- A metrics collector & viewing solution such as _Prometheus_ + _Grafana_.
-
-## Limitations
-
-The _Datacenter-to-Datacenter Replication_ setup in ArangoDB has a few limitations.
-Some of these limitations may be removed in later versions of ArangoDB:
-
-- All the machines where the ArangoDB Server processes run must run the Linux
- operating system using the AMD64 (x86-64) or ARM64 (AArch64) architecture. Clients can run from any platform.
-
-- All the machines where the ArangoSync Server processes run must run the Linux
- operating system using the AMD64 (x86-64) or ARM64 (AArch64) architecture.
- The ArangoSync command line tool is available for Linux, Windows & macOS.
-
-- The entire cluster is replicated. It is not possible to exclude specific
- databases or collections from replication.
-
-- In any DC2DC setup, the minor version of the target cluster must be equal to
- or greater than the minor version of the source cluster. Replication from a higher to a
-  lower minor version (e.g., from 3.9.x to 3.8.x) is not supported.
- Syncing between different patch versions of the same minor version is possible, however.
- For example, you cannot sync from a 3.9.1 cluster to a 3.8.7 cluster, but
- you can sync from a 3.9.1 cluster to a 3.9.0 cluster.
diff --git a/site/content/3.11/deploy/architecture/data-sharding.md b/site/content/3.11/deploy/architecture/data-sharding.md
deleted file mode 100644
index d495f38981..0000000000
--- a/site/content/3.11/deploy/architecture/data-sharding.md
+++ /dev/null
@@ -1,192 +0,0 @@
----
-title: Sharding
-menuTitle: Data Sharding
-weight: 10
-description: >-
- ArangoDB can divide collections into multiple shards to distribute the data
- across multiple cluster nodes
----
-ArangoDB organizes its collection data in _shards_. Sharding makes it possible
-to use multiple machines to run a cluster of ArangoDB instances that together
-constitute a single database system.
-
-Sharding is used to distribute data across physical machines in an ArangoDB
-Cluster. It is a method to determine the optimal placement of documents on
-individual DB-Servers.
-
-This enables you to store much more data, since ArangoDB distributes the data
-automatically to the different servers. In many situations one can also reap a
-benefit in data throughput, again because the load can be distributed to
-multiple machines.
-
-Using sharding allows ArangoDB to support deployments with large amounts of
-data, which would not fit on a single machine. A high rate of write / read
-operations or AQL queries can also overwhelm a single server's RAM and disk
-capacity.
-
-There are two main ways of scaling a database system:
-- Vertical scaling
-- Horizontal scaling
-
-Vertical scaling means upgrading to better server hardware (faster
-CPU, more RAM / disk). This can be a cost-effective way of scaling, because
-administration is easy and performance characteristics do not change much.
-Reasoning about the behavior of a single machine is also a lot easier than
-reasoning about multiple machines. However, at a certain point, larger machines
-are either not available anymore or the cost becomes prohibitive.
-
-Horizontal scaling is about increasing the number of servers. Servers are
-typically based on commodity hardware, which is readily available from many
-different cloud providers. The capability of each single machine may not be
-high, but the combined computing power of these machines can be arbitrarily
-large. Adding more machines on-demand is also typically easier and more
-cost-effective than pre-provisioning a single large machine. The increased
-complexity in infrastructure can be managed using modern containerization and
-cluster orchestration tools like [Kubernetes](../kubernetes.md).
-
-
-
-To achieve this, ArangoDB splits your dataset into so-called _shards_. The number
-of shards is something you may choose according to your needs. Proper sharding
-is essential to achieve optimal performance. From the outside, the process of
-splitting the data and assembling it again is fully transparent, and as such,
-ArangoDB achieves the goals of what other systems call "master-master replication".
-
-An application may talk to any _Coordinator_ and it automatically figures
-out where the data is currently stored when reading or is to be stored
-when writing. The information about the _shards_ is shared across all
-_Coordinators_ using the _Agency_.
-
-_Shards_ are configured per _collection_ so multiple _shards_ of data form the
-_collection_ as a whole. To determine in which _shard_ the data is to be stored
-ArangoDB performs a hash across the values. By default this hash is being
-created from the `_key` document attribute.
-
-Every shard is a local collection on the _DB-Server_ that houses it,
-as depicted above for our example with 5 shards and 3 replicas. Here, every
-leading shard _S1_ through _S5_ is followed by 2 replicas _R1_ through _R5_.
-The collection creation mechanism on ArangoDB _Coordinators_ tries to
-distribute the shards of a collection among the _DB-Servers_ as evenly as
-possible. This suggests sharding the data into 5 parts to make the best use of
-all our machines. We further choose a replication factor of 3, as it is a
-reasonable compromise between performance and data safety. This means that the
-collection creation ideally distributes 15 shards, 5 of which are leaders, each
-with 2 replicas. This in turn implies that a complete pristine replication
-involves 10 shards which need to catch up with their leaders.
-
-Not all use cases require horizontal scalability. In such cases, consider the
-[OneShard](../oneshard.md) feature as alternative to flexible sharding.
-
-## Shard Keys
-
-ArangoDB uses the specified _shard key_ attributes to determine in which shard
-a given document is to be stored. Choosing the right shard key can have a
-significant impact on performance: it can reduce network traffic and increase
-throughput.
-
-
-
-ArangoDB uses consistent hashing to compute the target shard from the given
-values (as specified via the `shardKeys` collection property). The ideal set
-of shard keys allows ArangoDB to distribute documents evenly across your shards
-and your _DB-Servers_. By default, ArangoDB uses the `_key` field as the shard key.
-For a custom shard key, you should consider a few different properties:
-
-- **Cardinality**: The cardinality of a set is the number of distinct values
- that it contains. A shard key with only _N_ distinct values cannot be hashed
- onto more than _N_ shards. Consider using multiple shard keys, if one of your
- values has a low cardinality.
-
-- **Frequency**: Consider how often a given shard key value may appear in
- your data. Having a lot of documents with identical shard keys leads
- to unevenly distributed data.
-
-This means that a single shard could become a bottleneck in your cluster.
-The effectiveness of horizontal scaling is reduced if most documents end up in
-a single shard. Shards are not divisible at this time, so paying attention to
-the size of shards is important.
-
-Consider both frequency and cardinality when picking a shard key; if necessary,
-consider picking multiple shard keys.
-
-### Configuring Shards
-
-The number of _shards_ can be configured at collection creation time, e.g. in
-the web interface or via _arangosh_:
-
-```js
-db._create("sharded_collection", {"numberOfShards": 4, "shardKeys": ["country"]});
-```
-
-The example above, where `country` is used as the shard key, can be useful
-to keep the data of every country in one shard, which results in better
-performance for queries working on a per-country basis.
-
-It is also possible to specify multiple `shardKeys`.
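-
-For example (with a hypothetical collection name):
-
-```js
-db._create("sharded_collection2", { "numberOfShards": 4, "shardKeys": ["country", "city"] });
-```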
-
-Note, however, that if you change the shard keys from their default `["_key"]`,
-then finding a document in the collection by its primary key involves a request
-to every single shard. This can be mitigated: all CRUD APIs and AQL
-support using the shard key values as lookup hints. Just send them as part
-of the update, replace, or removal operation, or in the case of AQL, use
-a document reference or an object for the UPDATE, REPLACE, or REMOVE
-operation that includes the shard key attributes:
-
-```aql
-UPDATE { _key: "123", country: "…" } WITH { … } IN sharded_collection
-```
-
-If custom shard keys are used, one can no longer prescribe the primary key value of
-a new document but must use the automatically generated one. This latter
-restriction comes from the fact that ensuring uniqueness of the primary key
-would be very inefficient if the user could specify the primary key.
-
-On which DB-Server in a Cluster a particular _shard_ is kept is undefined.
-There is no option to configure an affinity based on certain _shard_ keys.
-
-For more information on shard rebalancing and administration topics please have
-a look in the [Cluster Administration](../cluster/administration.md) section.
-
-### Indexes On Shards
-
-Unique indexes on sharded collections are only allowed if the fields used to
-determine the shard key are also included in the list of attribute paths for the index:
-
-| shardKeys | indexKeys | |
-|----------:|----------:|------------:|
-| a | a | allowed |
-| a | b | not allowed |
-| a | a, b | allowed |
-| a, b | a | not allowed |
-| a, b | b | not allowed |
-| a, b | a, b | allowed |
-| a, b | a, b, c | allowed |
-| a, b, c | a, b | not allowed |
-| a, b, c | a, b, c | allowed |
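-
-As a sketch in _arangosh_ (with a hypothetical collection name), the first
-index creation succeeds because the index attributes include the shard key,
-while the second would fail:
-
-```js
-db._create("uniqueDemo", { "numberOfShards": 4, "shardKeys": ["a"] });
-
-// allowed: the shard key "a" is among the index attributes
-db.uniqueDemo.ensureIndex({ type: "persistent", fields: ["a", "b"], unique: true });
-
-// not allowed: the unique index does not include the shard key "a"
-// db.uniqueDemo.ensureIndex({ type: "persistent", fields: ["b"], unique: true });
-```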
-
-## High Availability
-
-A cluster can still read from a collection if shards become unavailable for
-some reason. The data residing on the unavailable shard cannot be accessed,
-but reads on other shards can still succeed.
-
-If you enable data redundancy by setting a replication factor of `2` or higher
-for a collection, the collection data remains fully available for reading as
-long as at least one replica of every shard is available.
-In a production environment, you should always deploy your collections with a
-`replicationFactor` greater than `1` to ensure that the shards stay available
-even when a machine fails.
-
-Collection data also remains available for writing as long as a replica of every
-shard is available. You can optionally increase the write concern to require a
-higher number of in-sync shard replicas for writes. The `writeConcern` can be
-as high as the `replicationFactor`.
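-
-For example, the following creates a collection (hypothetical name) whose
-shards are kept in three copies, with writes requiring at least two in-sync
-replicas:
-
-```js
-db._create("importantData", {
-  "numberOfShards": 4,
-  "replicationFactor": 3,
-  "writeConcern": 2
-});
-```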
-
-## Storage Capacity
-
-The cluster distributes your data across multiple machines in your cluster.
-Every machine only contains a subset of your data. Thus, the cluster has
-the combined storage capacity of all your machines.
-
-Please note that increasing the replication factor also increases the space
-required to keep all your data in the cluster.
diff --git a/site/content/3.11/deploy/cluster/_index.md b/site/content/3.11/deploy/cluster/_index.md
deleted file mode 100644
index 4d10cec023..0000000000
--- a/site/content/3.11/deploy/cluster/_index.md
+++ /dev/null
@@ -1,395 +0,0 @@
----
-title: Cluster deployments
-menuTitle: Cluster
-weight: 15
-description: >-
- ArangoDB clusters are comprised of DB-Servers, Coordinators, and Agents, with
- synchronous data replication between DB-Servers and automatic failover
----
-The Cluster architecture of ArangoDB is a _CP_ master/master model with no
-single point of failure.
-
-With "CP" in terms of the [CAP theorem](https://en.wikipedia.org/wiki/CAP_theorem)
-we mean that in the presence of a
-network partition, the database prefers internal consistency over
-availability. With "master/master" we mean that clients can send their
-requests to an arbitrary node, and experience the same view on the
-database regardless. "No single point of failure" means that the cluster
-can continue to serve requests, even if one machine fails completely.
-
-In this way, ArangoDB has been designed as a distributed multi-model
-database. This section gives a short outline of the Cluster architecture and
-how the above features and capabilities are achieved.
-
-## Structure of an ArangoDB Cluster
-
-An ArangoDB Cluster consists of a number of ArangoDB instances
-which talk to each other over the network. They play different roles,
-which are explained in detail below.
-
-The current configuration
-of the Cluster is held in the _Agency_, which is a highly-available
-resilient key/value store based on an odd number of ArangoDB instances
-running the [Raft Consensus Protocol](https://raft.github.io/).
-
-For the various instances in an ArangoDB Cluster there are three distinct
-roles:
-
-- _Agents_
-- _Coordinators_
-- _DB-Servers_
-
-
-
-### Agents
-
-One or multiple _Agents_ form the _Agency_ in an ArangoDB Cluster. The
-_Agency_ is the central place to store the configuration in a Cluster. It
-performs leader elections and provides other synchronization services for
-the whole Cluster. Without the _Agency_ none of the other components can
-operate.
-
-While generally invisible to the outside, the _Agency_ is the heart of the
-Cluster. As such, fault tolerance is of course a must-have for the
-_Agency_. To achieve that, the _Agents_ use the
-[Raft Consensus Algorithm](https://raft.github.io/).
-The algorithm formally guarantees
-conflict-free configuration management within the ArangoDB Cluster.
-
-At its core the _Agency_ manages a big configuration tree. It supports
-transactional read and write operations on this tree, and other servers
-can subscribe to HTTP callbacks for all changes to the tree.
-
-### Coordinators
-
-_Coordinators_ should be accessible from the outside. These are the ones
-the clients talk to. They coordinate cluster tasks like
-executing queries and running Foxx services. They know where the
-data is stored and optimize where to run user-supplied queries or
-parts thereof. _Coordinators_ are stateless and can thus easily be shut down
-and restarted as needed.
-
-### DB-Servers
-
-_DB-Servers_ are the ones where the data is actually hosted. They
-host shards of data, and using synchronous replication, a _DB-Server_ may
-either be the _leader_ or a _follower_ for a shard. Document operations are first
-applied on the _leader_ and then synchronously replicated to
-all followers.
-
-Shards must not be accessed from the outside but only indirectly through the
-_Coordinators_. They may also execute queries in part or as a whole when
-asked by a _Coordinator_.
-
-See [Sharding](#sharding) below for more information.
-
-## Many sensible configurations
-
-This architecture is very flexible and thus allows many configurations,
-which are suitable for different usage scenarios:
-
- 1. The default configuration is to run exactly one _Coordinator_ and
- one _DB-Server_ on each machine. This achieves the classical
-    master/master setup: since there is a perfect symmetry between the
-    different nodes, clients can equally well talk to any one of the
-    _Coordinators_, and all expose the same view of the data store. _Agents_
- can run on separate, less powerful machines.
- 2. One can deploy more _Coordinators_ than _DB-Servers_. This is a sensible
- approach if one needs a lot of CPU power for the Foxx services,
- because they run on the _Coordinators_.
- 3. One can deploy more _DB-Servers_ than _Coordinators_ if more data capacity
-    is needed and the query performance is the lesser bottleneck.
- 4. One can deploy a _Coordinator_ on each machine where an application
- server (e.g. a node.js server) runs, and the _Agents_ and _DB-Servers_
- on a separate set of machines elsewhere. This avoids a network hop
- between the application server and the database and thus decreases
- latency. Essentially, this moves some of the database distribution
- logic to the machine where the client runs.
-
-As you can see, the _Coordinator_ layer can be scaled and deployed independently
-from the _DB-Server_ layer.
-
-{{< warning >}}
-It is a best practice and a recommended approach to run _Agent_ instances
-on different machines than _DB-Server_ instances.
-
-When deploying using the tool [_Starter_](../../components/tools/arangodb-starter/_index.md)
-this can be achieved by using the options `--cluster.start-dbserver=false` and
-`--cluster.start-coordinator=false` on the first three machines where the _Starter_
-is started, if the desired _Agency_ _size_ is 3, or on the first 5 machines
-if the desired _Agency_ _size_ is 5.
-{{< /warning >}}
-
-{{< info >}}
-The different instances that form a Cluster are supposed to be run in the same
-_Data Center_ (DC), with a reliable and high-speed network connection between
-all the machines participating in the Cluster.
-
-Multi-datacenter Clusters, where the entire structure and content of a Cluster located
-in a specific DC is replicated to other Clusters located in different DCs, are
-possible as well. See [Datacenter-to-Datacenter Replication](../arangosync/deployment/_index.md)
-(DC2DC) for further details.
-{{< /info >}}
-
-## Sharding
-
-Using the roles outlined above, an ArangoDB Cluster is able to distribute
-data in so-called _shards_ across multiple _DB-Servers_. Sharding
-makes it possible to use multiple machines to run a cluster of ArangoDB
-instances that together constitute a single database. This enables
-you to store much more data, since ArangoDB distributes the data
-automatically to the different servers. In many situations one can
-also reap a benefit in data throughput, again because the load can
-be distributed to multiple machines.
-
-
-
-From the outside this process is fully transparent:
-An application may talk to any _Coordinator_ and
-it automatically figures out where the data is currently stored when reading
-or is to be stored when writing. The information about the _shards_
-is shared across all _Coordinators_ using the _Agency_.
-
-_Shards_ are configured per _collection_, so multiple _shards_ of data form
-the _collection_ as a whole. To determine in which _shard_ the data is to
-be stored, ArangoDB performs a hash across the values. By default, this
-hash is created from the `_key` document attribute.
-
-For further information, please refer to the
-[_Cluster Sharding_](../architecture/data-sharding.md) section.
-
-## OneShard
-
-A OneShard deployment offers a practicable solution that enables significant
-performance improvements by massively reducing cluster-internal communication
-and allows running transactions with ACID guarantees on shard leaders.
-
-For more information, please refer to the [OneShard](../oneshard.md)
-chapter.
-
-## Synchronous replication
-
-In an ArangoDB Cluster, the replication among the data stored by the _DB-Servers_
-is synchronous.
-
-Synchronous replication works on a per-shard basis. Using the `replicationFactor`
-option, you can configure for each _collection_ how many copies of each _shard_
-are kept in the Cluster.
-
-{{< danger >}}
-If a collection has a _replication factor_ of `1`, its data is **not**
-replicated to other _DB-Servers_. This exposes you to a risk of data loss if
-the machine running the _DB-Server_ with the only copy of the data fails permanently.
-
-You need to set the _replication factor_ to a value equal to or higher than `2`
-to achieve minimal data redundancy via the synchronous replication.
-
-You need to set a _replication factor_ equal to or higher than `2`
-**explicitly** when creating a collection, or you can adjust it later if you
-forgot to set it at creation time. You can also enforce a
-minimum replication factor for all collections by setting the
-[`--cluster.min-replication-factor` startup option](../../components/arangodb-server/options.md#--clustermin-replication-factor).
-
-When using a Cluster, please make sure all the collections that are important
-(and should not be lost in any case) have a _replication factor_ equal to or higher
-than `2`.
-{{< /danger >}}
-
-At any given time, one of the copies is declared to be the _leader_ and
-all other replicas are _followers_. Internally, write operations for this _shard_
-are always sent to the _DB-Server_ which happens to hold the _leader_ copy,
-which in turn replicates the changes to all _followers_ before the operation
-is considered to be done and reported back to the _Coordinator_.
-Internally, read operations are all served by the _DB-Server_ holding the _leader_ copy;
-this makes it possible to provide snapshot semantics for complex transactions.
-
-Using synchronous replication alone guarantees consistency and high availability
-at the cost of reduced performance: write requests have a higher latency
-(due to every write-request having to be executed on the _followers_) and
-read requests do not scale out as only the _leader_ is being asked.
-
-In a Cluster, synchronous replication is managed by the _Coordinators_ for the client.
-The data is always stored on the _DB-Servers_.
-
-The following example gives you an idea of how synchronous operation
-has been implemented in ArangoDB Cluster:
-
-1. Connect to a _Coordinator_ via [_arangosh_](../../components/tools/arangodb-shell/_index.md).
-2. Create a collection: `db._create("test", {"replicationFactor": 2});`
-3. The _Coordinator_ figures out a *leader* and one *follower* and creates
-   one *shard* (as this is the default).
-4. Insert data: `db.test.insert({"foo": "bar"});`
-5. The _Coordinator_ writes the data to the _leader_, which in turn
- replicates it to the _follower_.
-6. Only when both are successful, the result is reported indicating success:
-
- ```json
- {
- "_id" : "test/7987",
- "_key" : "7987",
- "_rev" : "7987"
- }
- ```
-
-Synchronous replication comes at the cost of an increased latency for
-write operations, simply because there is one more network hop within the
-Cluster for every request. Therefore, the user can set the _replicationFactor_
-to 1, which means that only one copy of each shard is kept, thereby
-switching off synchronous replication. This is a suitable setting for
-less important or easily recoverable data for which low latency write
-operations matter.
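-
-For example (with a hypothetical collection name):
-
-```js
-// Easily recoverable data for which low-latency writes matter:
-db._create("sessionCache", { "numberOfShards": 3, "replicationFactor": 1 });
-```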
-
-## Automatic failover
-
-### Failure of a follower
-
-If a _DB-Server_ that holds a _follower_ copy of a _shard_ fails, then the _leader_
-can no longer synchronize its changes to that _follower_. After a short timeout
-(3 seconds), the _leader_ gives up on the _follower_ and declares it to be
-out of sync.
-
-One of the following two cases can happen:
-
-- **A**: If another _DB-Server_ (that does not hold a _replica_ for this _shard_ already)
- is available in the Cluster, a new _follower_ is automatically
- created on this other _DB-Server_ (so the _replication factor_ constraint is
- satisfied again).
-
-- **B**: If no other _DB-Server_ (that does not hold a _replica_ for this _shard_ already)
- is available, the service continues with one _follower_ less than the number
- prescribed by the _replication factor_.
-
-If the old _DB-Server_ with the _follower_ copy comes back, one of the following
-two cases can happen:
-
-- Following case **A**, the _DB-Server_ recognizes that there is a new
- _follower_ that was elected in the meantime, so it is no longer a _follower_
- for that _shard_.
-
-- Following case **B**, the _DB-Server_ automatically resynchronizes its
- data with the _leader_. The _replication factor_ constraint is now satisfied again
- and order is restored.
-
-### Failure of a leader
-
-If a _DB-Server_ that holds a _leader_ copy of a shard fails, then the _leader_
-can no longer serve any requests. It no longer sends a heartbeat to
-the _Agency_. Therefore, a _supervision_ process running in the _Raft_ _leader_
-of the Agency, can take the necessary action (after 15 seconds of missing
-heartbeats), namely to promote one of the _DB-Servers_ that hold in-sync
-replicas of the _shard_ to _leader_ for that _shard_. This involves a
-reconfiguration in the _Agency_ and leads to the fact that _Coordinators_
-now contact a different _DB-Server_ for requests to this _shard_. Service
-resumes. The other surviving _replicas_ automatically resynchronize their
-data with the new _leader_.
-
-In addition to the above, one of the following two cases can happen:
-
-- **A**: If another _DB-Server_ (that does not hold a _replica_ for this _shard_ already)
- is available in the Cluster, a new _follower_ is automatically
- created on this other _DB-Server_ (so the _replication factor_ constraint is
- satisfied again).
-
-- **B**: If no other _DB-Server_ (that does not hold a _replica_ for this _shard_ already)
- is available the service continues with one _follower_ less than the number
- prescribed by the _replication factor_.
-
-When the _DB-Server_ with the original _leader_ copy comes back, it recognizes
-that a new _leader_ was elected in the meantime, and one of the following
-two cases can happen:
-
-- Following case **A**, since also a new _follower_ was created and
- the _replication factor_ constraint is satisfied, the _DB-Server_ is no
- longer a _follower_ for that _shard_.
-
-- Following case **B**, the _DB-Server_ notices that it now holds
- a _follower_ _replica_ of that _shard_ and it resynchronizes its data with the
- new _leader_. The _replication factor_ constraint is satisfied again,
- and order is restored.
-
-The following example gives you an idea of how _failover_
-has been implemented in ArangoDB Cluster:
-
-1. The _leader_ of a _shard_ (let's name it _DBServer001_) is going down.
-2. A _Coordinator_ is asked to return a document: `db.test.document("100069");`
-3. The _Coordinator_ determines which server is responsible for this document
-   and finds _DBServer001_.
-4. The _Coordinator_ tries to contact _DBServer001_ and times out because it is
-   not reachable.
-5. After a short while, the _supervision_ (running in parallel on the _Agency_)
-   sees that _heartbeats_ from _DBServer001_ are not coming in.
-6. The _supervision_ promotes one of the _followers_ (say _DBServer002_) that
-   is in sync to be the _leader_ and makes _DBServer001_ a _follower_.
-7. As the _Coordinator_ continues trying to fetch the document, it sees that
-   the _leader_ changed to _DBServer002_.
-8. The _Coordinator_ tries to contact the new _leader_ (_DBServer002_) and returns
- the result:
- ```json
- {
- "_key" : "100069",
- "_id" : "test/100069",
- "_rev" : "513",
- "foo" : "bar"
- }
- ```
-9. After a while the _supervision_ declares _DBServer001_ to be completely dead.
-10. A new _follower_ is determined from the pool of _DB-Servers_.
-11. The new _follower_ syncs its data from the _leader_ and order is restored.
-
-Please note that there may still be timeouts. Depending on when exactly
-the request has been made (with regard to the _supervision_) and depending
-on the time needed to reconfigure the Cluster, the _Coordinator_ might fail
-with a timeout error.
-
-## Shard movement and resynchronization
-
-All _shard_ data synchronizations are done in an incremental way, such that
-resynchronizations are quick. This technology makes it possible to move shards
-(_follower_ and _leader_ ones) between _DB-Servers_ without service interruptions.
-Therefore, an ArangoDB Cluster can move all the data on a specific _DB-Server_
-to other _DB-Servers_ and then shut down that server in a controlled way.
-This makes it possible to scale down an ArangoDB Cluster without service
-interruption, loss of fault tolerance, or data loss. Furthermore, one can re-balance the
-distribution of the _shards_, either manually or automatically.
-
-All these operations can be triggered via a REST/JSON API or via the
-graphical web interface. All fail-over operations are completely handled within
-the ArangoDB Cluster.
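-
-As a sketch, assuming the cluster rebalance HTTP API of recent ArangoDB
-versions, a rebalance plan can be computed from _arangosh_ like this (the exact
-options and response format may differ between versions):
-
-```js
-// Ask a Coordinator to compute a set of move-shard operations:
-const plan = arango.POST("/_admin/cluster/rebalance", {
-  version: 1,
-  leaderChanges: true,
-  moveLeaders: true,
-  moveFollowers: true
-});
-print(plan.result.moves.length + " move operations suggested");
-```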
-
-## Microservices and zero administration
-
-The design and capabilities of ArangoDB are geared towards usage in
-modern microservice architectures of applications. With the
-[Foxx services](../../develop/foxx-microservices/_index.md) it is very easy to deploy a data
-centric microservice within an ArangoDB Cluster.
-
-In addition, one can deploy multiple instances of ArangoDB within the
-same project. One part of the project might need a scalable document
-store, another might need a graph database, and yet another might need
-the full power of a multi-model database actually mixing the various
-data models. There are enormous efficiency benefits to be reaped by
-being able to use a single technology for various roles in a project.
-
-To simplify the life of _devops_ teams in such a scenario, ArangoDB follows a
-_zero administration_ approach as much as possible. A running
-ArangoDB Cluster is resilient against failures and essentially repairs
-itself in case of temporary failures.
-
-## Deployment
-
-An ArangoDB Cluster can be deployed in several ways, e.g. by manually
-starting all the needed instances, by using the tool
-[_Starter_](../../components/tools/arangodb-starter/_index.md), in Docker and in Kubernetes.
-
-See the [Cluster Deployment](deployment/_index.md)
-chapter for instructions.
-
-ArangoDB is also available as a cloud service, the
-[**ArangoGraph Insights Platform**](https://dashboard.arangodb.cloud/home?utm_source=docs&utm_medium=cluster_pages&utm_campaign=docs_traffic).
-
-## Cluster ID
-
-Every ArangoDB instance in a Cluster is assigned a unique
-ID during its startup. Using its ID, a node is identifiable
-throughout the Cluster. All cluster operations communicate
-via this ID.
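-
-For example, you can ask an instance for its ID via the HTTP API (available in
-cluster deployments; a minimal sketch):
-
-```js
-print(arango.GET("/_admin/server/id")); // e.g. { "id" : "CRDN-...", ... }
-```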
diff --git a/site/content/3.11/deploy/cluster/deployment/_index.md b/site/content/3.11/deploy/cluster/deployment/_index.md
deleted file mode 100644
index 102d40bed3..0000000000
--- a/site/content/3.11/deploy/cluster/deployment/_index.md
+++ /dev/null
@@ -1,96 +0,0 @@
----
-title: Cluster Deployment
-menuTitle: Deployment
-weight: 5
-description: ''
----
-You can deploy an ArangoDB cluster in different ways:
-
-- [Using the ArangoDB Starter](using-the-arangodb-starter.md)
-- [Manual Start](manual-start.md)
-- [Kubernetes](../../kubernetes.md)
-- [ArangoGraph Insights Platform](https://dashboard.arangodb.cloud/home?utm_source=docs&utm_medium=cluster_pages&utm_campaign=docs_traffic),
- fully managed, available on AWS and GCP
-
-## Preliminary Information For Debian/Ubuntu Systems
-
-### Use a different configuration file for the Cluster instance
-
-The configuration file used for the standalone instance is
-`/etc/arangodb3/arangod.conf` (on Linux), and you should use a different one for
-the cluster instance(s). If you are using the _Starter_ binary `arangodb`, that is
-automatically the case. Otherwise, you might have to copy the configuration
-somewhere else and pass it to your `arangod` cluster instance via
-`--configuration`.
-
-### Use a different data directory for the standalone instance
-
-The data directory is configured in `arangod.conf`:
-
-```conf
-[database]
-directory = /var/lib/arangodb3
-```
-
-You have to make sure that the Cluster instance uses a different data directory
-than the standalone instance. If that is not already the case, change the
-`database.directory` entry in `arangod.conf` as seen above to a different
-directory
-
-```conf
-# in arangod.conf:
-[database]
-directory = /var/lib/arangodb3.standalone
-```
-
-and create it with the correct permissions:
-
-```bash
-$ mkdir -vp /var/lib/arangodb3.standalone
-$ chown -c arangodb:arangodb /var/lib/arangodb3.standalone
-$ chmod -c 0700 /var/lib/arangodb3.standalone
-```
-
-### Use a different socket for the standalone instance
-
-The standalone instance must use a different socket, i.e. it cannot use the
-same port on the same network interface as the Cluster. For that, change the
-standalone instance's port in `/etc/arangodb3/arangod.conf`
-
-```conf
-[server]
-endpoint = tcp://127.0.0.1:8529
-```
-
-to something unused, for example:
-
-```conf
-[server]
-endpoint = tcp://127.1.2.3:45678
-```
-
-### Use a different *init* script for the Cluster instance
-
-This section applies to SystemV-compatible init systems (e.g. sysvinit, OpenRC,
-upstart). The steps are different for systemd.
-
-The package install scripts use the default _init_ script `/etc/init.d/arangodb3`
-(on Linux) to stop and start ArangoDB during the installation. If you are using
-an _init_ script for your Cluster instance, make sure it is named differently.
-In addition, the installation might overwrite your _init_ script otherwise.
-
-If you have previously changed the default _init_ script, move it out of the way
-
-```bash
-$ mv -vi /etc/init.d/arangodb3 /etc/init.d/arangodb3.cluster
-```
-
-and add it to the _autostart_; how this is done depends on your distribution and
-_init_ system. On older Debian and Ubuntu systems, you can use `update-rc.d`:
-
-```bash
-$ update-rc.d arangodb3.cluster defaults
-```
-
-Make sure your _init_ script uses a different `PIDFILE` than the default script!
diff --git a/site/content/3.11/deploy/oneshard.md b/site/content/3.11/deploy/oneshard.md
deleted file mode 100644
index cd4eed572b..0000000000
--- a/site/content/3.11/deploy/oneshard.md
+++ /dev/null
@@ -1,320 +0,0 @@
----
-title: OneShard cluster deployments
-menuTitle: OneShard
-weight: 20
-description: >-
- The OneShard feature offers a practicable solution that enables significantly
- improved performance and transactional guarantees for cluster deployments
----
-{{< tag "ArangoDB Enterprise Edition" "ArangoGraph" >}}
-
-The OneShard option for ArangoDB clusters restricts all collections of a
-database to a single shard so that every collection has `numberOfShards` set to `1`,
-and all leader shards are placed on one DB-Server node. This way, whole queries
-can be pushed to and executed on that server, massively reducing cluster-internal
-communication. The Coordinator only gets back the final result.
-
-Queries are always limited to a single database, and with the data of a whole
-database on a single node, the OneShard option allows running transactions with
-ACID guarantees on shard leaders.
-
-Collections can have replicas by setting a `replicationFactor` greater than `1`
-as usual. For each replica, the follower shards are all placed on one DB-Server
-node when using the OneShard option. This allows for a quick failover in case
-the DB-Server with the leader shards fails.
-
-A OneShard setup is highly recommended for most graph use cases and join-heavy
-queries.
-
-{{< info >}}
-For graphs larger than what fits on a single DB-Server node, you can use the
-[SmartGraphs](../graphs/smartgraphs/_index.md) feature to efficiently limit the
-network hops between Coordinator and DB-Servers.
-{{< /info >}}
-
-Without the OneShard feature query processing works as follows in a cluster:
-
-- The Coordinator accepts and analyzes the query.
-- If collections are accessed then the Coordinator distributes the accesses
- to collections to different DB-Servers that hold parts (shards) of the
- collections in question.
-- This distributed access requires network-traffic from Coordinator to
- DB-Servers and back from DB-Servers to Coordinators and is therefore
- expensive.
-
-Another cost factor is the memory and CPU time required on the Coordinator
-when it has to process several concurrent complex queries. In such
-situations Coordinators may become a bottleneck in query processing,
-because they need to send and receive data on several connections, build up
-results for collection accesses from the received parts followed by further
-processing.
-
-
-
-If the database involved in a query is a OneShard database,
-then the OneShard optimization can be applied to run the query on the
-responsible DB-Server node like on a single server. However, since it is still
-a cluster setup, collections can be replicated synchronously to ensure
-resilience, etc.
-
-### How to use the OneShard feature
-
-The OneShard feature is enabled by default if you use the ArangoDB
-Enterprise Edition and if the database is sharded as `"single"`. In this case the
-optimizer rule `cluster-one-shard` is applied automatically.
-There are two ways to achieve this:
-
-- If you want your entire cluster to be a OneShard deployment, use the
- [startup option](../components/arangodb-server/options.md#cluster)
- `--cluster.force-one-shard`. It sets the immutable `sharding` database
- property to `"single"` for all newly created databases, which in turn
- enforces the OneShard conditions for collections that are created in it.
- The `_graphs` system collection is used for `distributeShardsLike`.
-
-- For individual OneShard databases, set the `sharding` database property to `"single"`
- to enforce the OneShard condition. The `_graphs` system collection is used for
- `distributeShardsLike`. It is not possible to change the `sharding` database
- property afterwards or overwrite this setting for individual collections.
- For non-OneShard databases the value of the `sharding` database property is
- either `""` or `"flexible"`.
-
-{{< info >}}
-The prototype collection not only controls the sharding, but also the
-replication factor for all collections which follow its example. If the
-`_graphs` system collection is used for `distributeShardsLike`, then the
-replication factor can be adjusted by changing the `replicationFactor`
-property of the `_graphs` collection (affecting this and all following
-collections) or via the startup option `--cluster.system-replication-factor`
-(affecting all system collections and all following collections).
-{{< /info >}}
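-
-For example, to raise the replication factor of the `_graphs` prototype
-collection, and thus of all collections that follow its sharding (a minimal
-sketch):
-
-```js
-// In the database in question:
-db._collection("_graphs").properties({ "replicationFactor": 2 });
-```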
-
-**Example**
-
-The easiest way to make use of the OneShard feature is to create a database
-with the extra option `{ sharding: "single" }`. As done in the following
-example:
-
-```js
-arangosh> db._createDatabase("oneShardDB", { sharding: "single" } )
-
-arangosh> db._useDatabase("oneShardDB")
-
-arangosh@oneShardDB> db._properties()
-{
- "id" : "6010005",
- "name" : "oneShardDB",
- "isSystem" : false,
- "sharding" : "single",
- "replicationFactor" : 1,
- "writeConcern" : 1,
- "path" : ""
-}
-```
-
-Now you can go ahead and create a collection as usual:
-
-```js
-arangosh@oneShardDB> db._create("example1")
-
-arangosh@oneShardDB> db.example1.properties()
-{
- "isSmart" : false,
- "isSystem" : false,
- "waitForSync" : false,
- "shardKeys" : [
- "_key"
- ],
- "numberOfShards" : 1,
- "keyOptions" : {
- "allowUserKeys" : true,
- "type" : "traditional"
- },
- "replicationFactor" : 2,
- "minReplicationFactor" : 1,
- "writeConcern" : 1,
- "distributeShardsLike" : "_graphs",
- "shardingStrategy" : "hash",
- "cacheEnabled" : false
-}
-```
-
-As you can see, the `numberOfShards` is set to `1` and `distributeShardsLike`
-is set to `_graphs`. These attributes have automatically been set
-because the `{ "sharding": "single" }` options object was
-specified when creating the database.
-
-To do this manually for individual collections, use `{ "sharding": "flexible" }`
-on the database level and then create a collection in the following way:
-
-```js
-db._create("example2", { "numberOfShards": 1 , "distributeShardsLike": "_graphs" })
-```
-
-Here, the `_graphs` collection is used again, but any other existing
-collection that was not itself created with the `distributeShardsLike`
-option can be used as well in a flexibly sharded database.
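-
-A brief sketch of this, using a hypothetical collection named `proto` as the
-prototype (any existing collection without its own `distributeShardsLike`
-setting works):
-
-```js
-// Hypothetical names: "proto" serves as the sharding prototype,
-// "example3" follows its shard distribution.
-db._create("proto", { "numberOfShards": 1 });
-db._create("example3", { "distributeShardsLike": "proto" });
-```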
-
-### Running Queries
-
-For this arangosh example, first create a collection and insert a few
-documents into it, then define a query and explain it to inspect the
-execution plan.
-
-```js
-arangosh@oneShardDB> db._create("example")
-
-arangosh@oneShardDB> for (let i = 0; i < 10000; i++) { db.example.insert({ "value" : i }); }
-
-arangosh@oneShardDB> q = "FOR doc IN @@collection FILTER doc.value % 2 == 0 SORT doc.value ASC LIMIT 10 RETURN doc";
-
-arangosh@oneShardDB> db._explain(q, { "@collection" : "example" })
-
-Query String (88 chars, cacheable: true):
- FOR doc IN @@collection FILTER doc.value % 2 == 0 SORT doc.value ASC LIMIT 10 RETURN doc
-
-Execution plan:
- Id NodeType Site Est. Comment
- 1 SingletonNode DBS 1 * ROOT
- 2 EnumerateCollectionNode DBS 10000 - FOR doc IN example /* full collection scan, 1 shard(s) */ FILTER ((doc.`value` % 2) == 0) /* early pruning */
- 5 CalculationNode DBS 10000 - LET #3 = doc.`value` /* attribute expression */ /* collections used: doc : example */
- 6 SortNode DBS 10000 - SORT #3 ASC /* sorting strategy: constrained heap */
- 7 LimitNode DBS 10 - LIMIT 0, 10
- 9 RemoteNode COOR 10 - REMOTE
- 10 GatherNode COOR 10 - GATHER
- 8 ReturnNode COOR 10 - RETURN doc
-
-Indexes used:
- none
-
-Optimization rules applied:
- Id RuleName
- 1 move-calculations-up
- 2 move-filters-up
- 3 move-calculations-up-2
- 4 move-filters-up-2
- 5 cluster-one-shard
- 6 sort-limit
- 7 move-filters-into-enumerate
-
-```
-
-As you can see in the explain output, almost the complete query is
-executed on the DB-Server (`DBS` for nodes 1-7) and only 10 documents are
-transferred to the Coordinator. If you do the same with a collection
-that consists of several shards, you get a different result:
-
-```js
-arangosh> db._createDatabase("shardedDB")
-
-arangosh> db._useDatabase("shardedDB")
-
-arangosh@shardedDB> db._properties()
-{
- "id" : "6010017",
- "name" : "shardedDB",
- "isSystem" : false,
- "sharding" : "flexible",
- "replicationFactor" : 1,
- "writeConcern" : 1,
- "path" : ""
-}
-
-arangosh@shardedDB> db._create("example", { numberOfShards: 5 })
-
-arangosh@shardedDB> for (let i = 0; i < 10000; i++) { db.example.insert({ "value" : i }); }
-
-arangosh@shardedDB> db._explain(q, { "@collection" : "example" })
-
-Query String (88 chars, cacheable: true):
- FOR doc IN @@collection FILTER doc.value % 2 == 0 SORT doc.value ASC LIMIT 10 RETURN doc
-
-Execution plan:
- Id NodeType Site Est. Comment
- 1 SingletonNode DBS 1 * ROOT
- 2 EnumerateCollectionNode DBS 10000 - FOR doc IN example /* full collection scan, 5 shard(s) */ FILTER ((doc.`value` % 2) == 0) /* early pruning */
- 5 CalculationNode DBS 10000 - LET #3 = doc.`value` /* attribute expression */ /* collections used: doc : example */
- 6 SortNode DBS 10000 - SORT #3 ASC /* sorting strategy: constrained heap */
- 11 RemoteNode COOR 10000 - REMOTE
- 12 GatherNode COOR 10000 - GATHER #3 ASC /* parallel, sort mode: heap */
- 7 LimitNode COOR 10 - LIMIT 0, 10
- 8 ReturnNode COOR 10 - RETURN doc
-
-Indexes used:
- none
-
-Optimization rules applied:
- Id RuleName
- 1 move-calculations-up
- 2 move-filters-up
- 3 move-calculations-up-2
- 4 move-filters-up-2
- 5 scatter-in-cluster
- 6 distribute-filtercalc-to-cluster
- 7 distribute-sort-to-cluster
- 8 remove-unnecessary-remote-scatter
- 9 sort-limit
- 10 move-filters-into-enumerate
- 11 parallelize-gather
-```
-
-{{< tip >}}
-You can check whether the OneShard feature is active for a given query by
-inspecting the explain output. If the list of applied optimizer rules
-contains `cluster-one-shard`, the feature is active for that query.
-{{< /tip >}}
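-
-For scripted checks, a small sketch of inspecting the applied rules
-programmatically (assuming the query string `q` and the `example` collection
-from the examples above):
-
-```js
-// Explain the query and test whether the cluster-one-shard rule applied.
-var result = db._createStatement({
-  query: q,
-  bindVars: { "@collection": "example" }
-}).explain();
-var usesOneShard = result.plan.rules.indexOf("cluster-one-shard") !== -1;
-print(usesOneShard);
-```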
-
-Without the OneShard feature, all documents potentially have to be sent to
-the Coordinator for further processing. For this simple query, other
-optimizations actually reduce the number of documents, but a considerable
-number of documents still has to be transferred from the DB-Servers to the
-Coordinator only to apply a `LIMIT` of 10 documents there. The estimate for
-the *RemoteNode* is 10,000 in this example, whereas it is 10 in the OneShard
-case.
-
-### ACID Transactions on Leader Shards
-
-ArangoDB's transactional guarantees are tunable. For transactions to be ACID
-on the leader shards in a cluster, a few things need to be considered
-(see the combined query sketch after the following list):
-
-- The AQL query or [Stream Transaction](../develop/http-api/transactions/stream-transactions.md)
- must be eligible for the OneShard optimization, so that it is executed on a
- single DB-Server node.
-- To ensure durability, enable `waitForSync` on query level to wait until data
- modifications have been written to disk.
-- The collection option `writeConcern: 2` makes sure that a transaction is only
- successful if at least one follower shard is in sync with the leader shard,
- for a total of two shard replicas.
-- The RocksDB storage engine uses intermediate commits for larger document
- operations carried out by standalone AQL queries
- (outside of JavaScript Transactions and Stream Transactions).
-  This potentially breaks the atomicity of transactions. To prevent this
-  for individual queries, you can increase `intermediateCommitSize`
-  (default 512 MB) and `intermediateCommitCount` accordingly as query options.
- Also see [Known limitations for AQL queries](../aql/fundamentals/limitations.md#storage-engine-properties).
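-
-Putting these options together, a hedged sketch of a standalone AQL
-modification query in arangosh (the collection and attribute names are
-illustrative; the intermediate commit thresholds are passed as query
-sub-options):
-
-```js
-// waitForSync on the modification operation ensures durability.
-// The raised intermediate commit thresholds reduce the chance of
-// intermediate commits breaking the atomicity of the query.
-db._query(
-  `FOR doc IN example
-     UPDATE doc WITH { processed: true } IN example
-     OPTIONS { waitForSync: true }`,
-  {}, {},
-  { intermediateCommitSize: 1073741824, intermediateCommitCount: 1000000 }
-);
-```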
-
-### Limitations
-
-The OneShard optimization is used automatically for all eligible AQL queries
-and Stream Transactions.
-
-For AQL queries, any of the following factors currently makes a query
-unsuitable for the OneShard optimization:
-
-- The query accesses collections with more than a single shard, different leader
- DB-Servers, or different `distributeShardsLike` prototype collections
-- The query writes into a SatelliteCollection
-- The query accesses an edge collection of a SmartGraph
-- The query uses AQL functions that can only execute on Coordinators
-  (see the sketch after this list). These functions are:
- - `APPLY`
- - `CALL`
- - `COLLECTION_COUNT`
- - `COLLECTIONS`
- - `CURRENT_DATABASE`
- - `CURRENT_USER`
- - `FULLTEXT`
- - `NEAR`
- - `SCHEMA_GET`
- - `SCHEMA_VALIDATE`
- - `V8`
- - `VERSION`
- - `WITHIN`
- - `WITHIN_RECTANGLE`
- - User-defined AQL functions (UDFs)
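-
-A sketch of verifying this for one of the Coordinator-only functions,
-assuming the `oneShardDB` database and `example` collection from the
-earlier examples:
-
-```js
-// COLLECTION_COUNT can only execute on Coordinators, so the
-// cluster-one-shard rule is not among the applied rules.
-var result = db._createStatement({
-  query: "RETURN COLLECTION_COUNT('example')"
-}).explain();
-print(result.plan.rules.indexOf("cluster-one-shard") === -1); // true
-```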
diff --git a/site/content/3.11/develop/http-api/_index.md b/site/content/3.11/develop/http-api/_index.md
deleted file mode 100644
index 3068e60f26..0000000000
--- a/site/content/3.11/develop/http-api/_index.md
+++ /dev/null
@@ -1,111 +0,0 @@
----
-title: HTTP API Documentation
-menuTitle: HTTP API
-weight: 275
-description: >-
- All functionality of ArangoDB servers is provided via a RESTful API over the
- HTTP protocol, and you can call the API endpoints directly, via database
- drivers, or other tools
----
-ArangoDB servers expose an application programming interface (API) for managing
-the database system. It is based on the HTTP protocol that powers the
-World Wide Web. All interactions with a server are ultimately carried out via
-this HTTP API.
-
-You can use the API by sending HTTP requests to the server directly, but the
-more common way of communicating with the server is via a [database driver](../drivers/_index.md).
-A driver abstracts the complexity of the API away by providing a simple
-interface for your programming language or environment and handling things like
-authentication, connection pooling, asynchronous requests, and multi-part replies
-in the background. You can also use ArangoDB's [web interface](../../components/web-interface/_index.md),
-the [_arangosh_](../../components/tools/arangodb-shell/_index.md) shell, or other tools.
-
-The API documentation is relevant for you in the following cases:
-
-- You want to build or extend a driver.
-- You want to utilize a feature that isn't exposed by your driver or tool.
-- You need to send many requests and avoid any overhead that a driver or tool might add.
-- You operate a server instance and need to perform administrative actions via the API.
-- You are interested in how the low-level communication works.
-
-## RESTful API
-
-The API adheres to the design principles of [REST](https://en.wikipedia.org/wiki/Representational_state_transfer)
-(Representational State Transfer). A REST API is a specific type of HTTP API
-that uses HTTP methods to represent operations on resources (mainly `GET`,
-`POST`, `PATCH`, `PUT`, and `DELETE`), and resources are identified by URIs.
-A resource can be a database record, a server log, or any other data entity or
-object. The communication between client and server is stateless.
-
-A request URL can look like this:
-`http://localhost:8529/_db/DATABASE/_api/document/COLLECTION/KEY?returnOld=true&keepNull=false`
-- `http` is the scheme, which is `https` if you use TLS encryption
-- `http:` is the protocol
-- `localhost` is the hostname, which can be an IP address or domain name including subdomains
-- `8529` is the port
-- `/_db/DATABASE/_api/document/COLLECTION/KEY` is the pathname
-- `?returnOld=true&keepNull=false` is the search string
-- `returnOld=true&keepNull=false` is the query string
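-
-This terminology matches the JavaScript `URL` interface, so a quick sketch
-to take the example URL apart (runs in any modern JavaScript runtime that
-provides `URL`):
-
-```js
-const url = new URL("http://localhost:8529/_db/DATABASE/_api/document/COLLECTION/KEY?returnOld=true&keepNull=false");
-url.protocol;                      // "http:"
-url.hostname;                      // "localhost"
-url.port;                          // "8529"
-url.pathname;                      // "/_db/DATABASE/_api/document/COLLECTION/KEY"
-url.search;                        // "?returnOld=true&keepNull=false"
-url.searchParams.get("keepNull");  // "false"
-```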
-
-The HTTP API documentation mainly describes the available **endpoints**, like
-for updating a document, creating a graph, and so on. Each endpoint description
-starts with the HTTP method and the pathname, like `PATCH /_api/document/{collection}/{key}`.
-- The `PATCH` method is for updating, `PUT` for replacing, `POST` for creating
- (or triggering an action), `DELETE` for removing, `GET` for reading,
- `HEAD` for reading metadata only
-- `/_api/document/…` is the path of ArangoDB's HTTP API for handling documents
-  and can be preceded by `/_db/{database}` with `{database}` replaced by a
-  database name to select a database other than the default `_system` database
-- `{collection}` and `{key}` are placeholders called **Path Parameters** that
- you have to replace with a collection name and document key in this case
-- The pathname can be followed by a question mark and the so-called
-  **Query Parameters**, a series of key/value pairs separated by
-  ampersands that set options, like `/_api/document/COLLECTION/KEY?returnOld=true&keepNull=false`
-- Some endpoints allow you to specify **HTTP headers** in the request
- (not in the URL), like `If-Match: "REVISION"`
-- A **Request Body** is the payload you may need to send, typically JSON data
-- **Responses** are the possible HTTP responses in reply to your request in terms
- of the HTTP status code and typically a JSON payload with a result or error information
-
-On the wire, a simplified HTTP request can look like this:
-
-```
-PATCH /_api/document/coll1/docA?returnOld=true HTTP/1.1
-Host: localhost:8529
-Authorization: Basic cm9vdDo=
-If-Match: "_hV2oH9y---"
-Content-Type: application/json; charset=utf-8
-Content-Length: 20
-
-{"attr":"new value"}
-```
-
-And a simplified HTTP response can look like this:
-
-```
-HTTP/1.1 202 Accepted
-Etag: "_hV2r5XW---"
-Location: /_db/_system/_api/document/coll1/docA
-Server: ArangoDB
-Connection: Keep-Alive
-Content-Type: application/json; charset=utf-8
-Content-Length: 160
-
-{"_id":"coll1/docA","_key":"docA","_rev":"_hV2r5XW---","_oldRev":"_hV2oH9y---","old":{"_key":"docA","_id":"coll1/docA","_rev":"_hV2oH9y---","attr":"value"}}
-```
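-
-For comparison, a sketch of issuing the same update from arangosh via its
-low-level connection object (assuming the document `coll1/docA` exists):
-
-```js
-// arango is the connection object available in arangosh; PATCH sends
-// the payload to the given path within the current database.
-arango.PATCH("/_api/document/coll1/docA?returnOld=true",
-             JSON.stringify({ attr: "new value" }));
-```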
-
-## Swagger specification
-
-ArangoDB's RESTful HTTP API is documented using the industry-standard
-**OpenAPI Specification**, more specifically [OpenAPI version 3.1](https://swagger.io/specification/).
-You can explore the API with the interactive **Swagger UI** using the
-[ArangoDB web interface](../../components/web-interface/_index.md).
-
-1. Click **SUPPORT** in the main navigation of the web interface.
-2. Click the **Rest API** tab.
-3. Click a section and endpoint to view the description and parameters.
-
-Also see this blog post:
-[Using the ArangoDB Swagger.io Interactive API Documentation](https://www.arangodb.com/2018/03/using-arangodb-swaggerio-interactive-api-documentation/).
diff --git a/site/content/3.11/develop/http-api/documents.md b/site/content/3.11/develop/http-api/documents.md
deleted file mode 100644
index 6487f83c29..0000000000
--- a/site/content/3.11/develop/http-api/documents.md
+++ /dev/null
@@ -1,3068 +0,0 @@
----
-title: HTTP interface for documents
-menuTitle: Documents
-weight: 30
-description: >-
- The HTTP API for documents lets you create, read, update, and delete documents
- in collections, either one or multiple at a time
----
-The basic operations for documents are mapped to the standard HTTP methods:
-
-- Create: `POST`
-- Read: `GET`
-- Update: `PATCH` (partially modify)
-- Replace: `PUT`
-- Delete: `DELETE`
-- Check: `HEAD` (test for existence and get document metadata)
-
-## Addresses of documents
-
-Any document can be retrieved using its unique URI:
-
-```
-http://server:port/_api/document/