
new docs pv expansion & admission control #2319

Merged 2 commits on Oct 30, 2023

Changes from all commits
@@ -32,7 +32,7 @@ NebulaGraph Operator does not support the v1.x version of NebulaGraph. NebulaGra

| NebulaGraph | NebulaGraph Operator |
| ------------- | -------------------- |
- | 3.5.x ~ 3.6.0 | 1.5.0, 1.6.x |
+ | 3.5.x ~ 3.6.0 | 1.5.0 ~ 1.7.x |
| 3.0.0 ~ 3.4.1 | 1.3.0, 1.4.0 ~ 1.4.2 |
| 3.0.0 ~ 3.3.x | 1.0.0, 1.1.0, 1.2.0 |
| 2.5.x ~ 2.6.x | 0.9.0 |

This file was deleted.

@@ -0,0 +1,102 @@
# Enable admission control

Kubernetes [Admission Control](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/) is a security mechanism that runs as a webhook at runtime. It intercepts requests to the API server and can validate or modify them before they are persisted, which helps keep the cluster secure. Admission webhooks support two types of operations: validation and mutation. NebulaGraph Operator supports only validation and provides a set of default admission control rules. This topic describes these default rules and how to enable admission control.

## Prerequisites

You have created a NebulaGraph cluster on Kubernetes. For detailed steps, see [Creating a NebulaGraph Cluster with Kubectl](../3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md).

## Admission control rules

Kubernetes admission control lets you run custom logic or policies before the Kubernetes API server processes a request. This mechanism can be used to implement various security policies, such as restricting a Pod's resource consumption or limiting its access permissions. NebulaGraph Operator supports only validation operations: it intercepts requests and rejects those that violate its rules, without modifying them. NebulaGraph Operator's default admission validation rules include the following (a compliant example patch follows this list):

- Ensuring the minimum number of replicas in high availability mode:

- For Graph service: At least 2 replicas are required.
- For Meta service: At least 3 replicas are required.
- For Storage service: At least 3 replicas are required.

!!! note

High availability mode refers to the high availability of NebulaGraph cluster services. Storage and Meta services are stateful, and the number of replicas should be an odd number due to [Raft](../../1.introduction/3.nebula-graph-architecture/4.storage-service.md#raft) protocol requirements for data consistency. In high availability mode, at least 3 Storage services and 3 Meta services are required. Graph services are stateless, so their number of replicas can be even but should be at least 2.

- Preventing additional PVs from being added to the Storage service via `dataVolumeClaims`.

- Disallowing shrinking the capacity of any service's PVCs, while allowing expansion.

- Forbidding any secondary operation during Storage service scale-in/scale-out.
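
As a quick reference, the following patch (a sketch that reuses the `nebula` cluster name and the field names from the verification examples later in this topic) sets replica counts that satisfy all of the HA minimums at once:

```bash
# Hypothetical example: raise every service to its minimum HA replica count.
kubectl patch nc nebula --type='merge' --patch '{"spec": {"graphd": {"replicas": 2}, "metad": {"replicas": 3}, "storaged": {"replicas": 3}}}'
```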

## TLS certificates for admission webhooks

Communication between the K8s API server and the admission webhook takes place over HTTPS by default, which ensures both security and data integrity. This means that TLS certificates are required for the admission webhook. [cert-manager](https://cert-manager.io/docs/) is a Kubernetes certificate management controller that automates the issuance and renewal of certificates. NebulaGraph Operator uses cert-manager to manage certificates.

Once cert-manager is installed and admission control is enabled, NebulaGraph Operator will automatically create an [Issuer](https://cert-manager.io/docs/concepts/issuer/) for issuing the necessary certificate for the admission webhook, and a [Certificate](https://cert-manager.io/docs/concepts/certificate/) for storing the issued certificate. The issued certificate is stored in the `nebula-operator-webhook-secret` Secret.
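
Once admission control is enabled (see the steps below), you can inspect these objects directly. This is a sketch: the Issuer and Certificate names are not fixed by this topic, and the namespace must match wherever NebulaGraph Operator is installed:

```bash
# List the cert-manager Issuer and Certificate objects created by the Operator.
kubectl get issuers,certificates

# Confirm that the issued certificate landed in the expected Secret.
kubectl get secret nebula-operator-webhook-secret
```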

## Steps of enabling admission control

1. Install cert-manager.

```bash
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.13.1/cert-manager.yaml
```

It is suggested to deploy the latest version of cert-manager. For details, see the [official cert-manager documentation](https://cert-manager.io/docs/installation/).
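
Before continuing, you can confirm that cert-manager is healthy. By default, cert-manager installs its components into the `cert-manager` namespace:

```bash
# The cert-manager, cainjector, and webhook Pods should all be Running.
kubectl get pods -n cert-manager
```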

2. Modify the NebulaGraph Operator configuration to enable admission control. Admission control is disabled by default and needs to be enabled manually.

```bash
# Check the current configuration
helm show values nebula-operator/nebula-operator
```

```bash
# Modify the configuration by setting `enableAdmissionWebhook` to `true`.
helm upgrade nebula-operator nebula-operator/nebula-operator --set enableAdmissionWebhook=true
```

!!! note

`nebula-operator` is the name of the chart repository, and `nebula-operator/nebula-operator` is the chart name. If the chart's namespace is not specified, it defaults to `default`.
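
As an optional check, you can verify that the webhook has been registered with the API server. The exact object name is an assumption here and may vary by chart version:

```bash
# Look for the Operator's validating webhook configuration.
kubectl get validatingwebhookconfigurations | grep -i nebula
```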

3. View the certificate Secret for the admission webhook.

```bash
kubectl get secret nebula-operator-webhook-secret -o yaml
```

If the output includes certificate contents, it means that the admission webhook's certificate has been successfully created.

4. Verify the control rules.

- Verify the minimum number of replicas in high availability mode.

```bash
# Annotate the cluster to enable high availability mode.
$ kubectl annotate nc nebula nebula-graph.io/ha-mode=true
# Verify the minimum number of the Graph service's replicas.
$ kubectl patch nc nebula --type='merge' --patch '{"spec": {"graphd": {"replicas":1}}}'
Error from server: admission webhook "nebulaclustervalidating.nebula-graph.io" denied the request: spec.graphd.replicas: Invalid value: 1: should be at least 2 in HA mode
```

- Verify preventing additional PVs from being added to the Storage service.

```bash
$ kubectl patch nc nebula --type='merge' --patch '{"spec": {"storaged": {"dataVolumeClaims":[{"resources": {"requests": {"storage": "2Gi"}}, "storageClassName": "local-path"},{"resources": {"requests": {"storage": "3Gi"}}, "storageClassName": "fast-disks"}]}}}'
Error from server: admission webhook "nebulaclustervalidating.nebula-graph.io" denied the request: spec.storaged.dataVolumeClaims: Forbidden: storaged dataVolumeClaims is immutable
```

- Verify disallowing shrinking Storage service's PVC capacity.

```bash
$ kubectl patch nc nebula --type='merge' --patch '{"spec": {"storaged": {"dataVolumeClaims":[{"resources": {"requests": {"storage": "1Gi"}}, "storageClassName": "fast-disks"}]}}}'
Error from server: admission webhook "nebulaclustervalidating.nebula-graph.io" denied the request: spec.storaged.dataVolumeClaims: Invalid value: resource.Quantity{i:resource.int64Amount{value:1073741824, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"1Gi", Format:"BinarySI"}: data volume size can only be increased
```

- Verify disallowing any secondary operation during Storage service scale-in.

```bash
$ kubectl patch nc nebula --type='merge' --patch '{"spec": {"storaged": {"replicas": 5}}}'
nebulacluster.apps.nebula-graph.io/nebula patched
$ kubectl patch nc nebula --type='merge' --patch '{"spec": {"storaged": {"replicas": 3}}}'
Error from server: admission webhook "nebulaclustervalidating.nebula-graph.io" denied the request: [spec.storaged: Forbidden: field is immutable while in ScaleOut phase, spec.storaged.replicas: Invalid value: 3: field is immutable while not in Running phase]
```
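
After finishing these checks, you may want to remove the test annotation so that the cluster no longer enforces the HA minimums. This assumes nothing else in your environment relies on HA mode being set; the trailing dash tells `kubectl` to delete the annotation:

```bash
# Revert the ha-mode annotation added during verification.
kubectl annotate nc nebula nebula-graph.io/ha-mode-
```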
@@ -6,10 +6,8 @@ You can also define the automatic deletion of PVCs to release data by setting th

## Prerequisites

- You have created a cluster. For how to create a cluster with Kubectl, see [Create a cluster with Kubectl](../3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md).
- ## Notes
+ You have created a cluster. For how to create a cluster with Kubectl, see [Create a cluster with Kubectl](../../3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md).

NebulaGraph Operator does not support dynamically adding or mounting storage volumes to a running storaged instance.

## Steps

@@ -0,0 +1,99 @@
# Dynamically expand persistent volumes

In a Kubernetes environment, NebulaGraph's data is stored on Persistent Volumes (PVs). Dynamic volume expansion refers to increasing the capacity of a volume without stopping the service, enabling NebulaGraph to accommodate growing data. This topic explains how to dynamically expand the PV for NebulaGraph services in a Kubernetes environment.

!!! note

- After the cluster is created, you cannot dynamically increase the number of PVs while the cluster is running.
- The method described in this topic is only for online volume expansion and does not support volume reduction.

## Background

In Kubernetes, a StorageClass is a resource that defines a particular storage type. It describes a class of storage, including its provisioner, parameters, and other details. When creating a PersistentVolumeClaim (PVC) and specifying a StorageClass, Kubernetes automatically creates a corresponding PV. The principle of dynamic volume expansion is to edit the PVC and increase the volume's capacity. Kubernetes will then automatically expand the capacity of the PV associated with this PVC based on the specified `storageClassName` in the PVC. During this process, new PVs are not created; the size of the existing PV is changed. Only dynamic storage volumes, typically those associated with a `storageClassName`, support dynamic volume expansion. Additionally, the `allowVolumeExpansion` field in the StorageClass must be set to `true`. For more details, see the [Kubernetes documentation on expanding Persistent Volume Claims](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims).

In NebulaGraph Operator, you cannot edit the PVCs directly, because the Operator automatically creates them based on the configuration in `spec.<metad|storaged>.dataVolumeClaim` of the NebulaGraph cluster. Therefore, you need to modify the cluster's configuration to update the PVC and trigger dynamic online volume expansion for the PV.
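
For reference, a StorageClass that permits online expansion looks like the following sketch. The `ebs-sc` name matches the example below, while the `provisioner` value is an assumption for an AWS EBS CSI environment and must match the driver actually available in your cluster:

```bash
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc
provisioner: ebs.csi.aws.com   # assumption: AWS EBS CSI driver
allowVolumeExpansion: true     # required for online volume expansion
EOF
```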

## Prerequisites

- Kubernetes version is equal to or greater than 1.18.
- A StorageClass has been created in the Kubernetes environment. For details, see [Expanding Persistent Volumes Claims](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#expanding-persistent-volumes-claims).
- Ensure the `allowVolumeExpansion` field in the StorageClass is set to `true`.
- Make sure that the `provisioner` configured in the StorageClass supports dynamic expansion.
- A NebulaGraph cluster has been created in Kubernetes. For specific steps, see [Create a NebulaGraph cluster with Kubectl](../../3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md).
- NebulaGraph cluster Pods are in the Running state.
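
The two StorageClass-related prerequisites can be checked from the command line; the first command should print `true`. The `ebs-sc` name follows the example in the next section:

```bash
# Confirm that the StorageClass allows volume expansion.
kubectl get storageclass ebs-sc -o jsonpath='{.allowVolumeExpansion}'

# Inspect the provisioner configured for the StorageClass.
kubectl get storageclass ebs-sc -o jsonpath='{.provisioner}'
```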

## Online volume expansion example

In the following example, we assume that the StorageClass is named `ebs-sc` and the NebulaGraph cluster is named `nebula`. We will demonstrate how to dynamically expand the PV for the Storage service.

1. Check the status of the Storage service Pod:

```bash
kubectl get pod
```

Example output:

```bash
nebula-storaged-0 1/1 Running 0 43h
```

2. Check the PVC and PV information for the Storage service:

```bash
# View PVC
kubectl get pvc
```

Example output:

```bash
storaged-data-nebula-storaged-0 Bound pvc-36ca3871-9265-460f-b812-7e73a718xxxx 5Gi RWO ebs-sc 43h
```

```bash
# View PV and confirm that the capacity of the PV is 5Gi
kubectl get pv
```

Example output:

```bash
pvc-36ca3871-9265-460f-b812-xxx 5Gi RWO Delete Bound default/storaged-data-nebula-storaged-0 ebs-sc 43h
```

3. Assuming all the above-mentioned prerequisites are met, use the following command to request an expansion of the PV for the Storage service to 10Gi:

```bash
kubectl patch nc nebula --type='merge' --patch '{"spec": {"storaged": {"dataVolumeClaims":[{"resources": {"requests": {"storage": "10Gi"}}, "storageClassName": "ebs-sc"}]}}}'
```

Example output:

```bash
nebulacluster.apps.nebula-graph.io/nebula patched
```

4. After waiting for about a minute, check the expanded PVC and PV information:

```bash
kubectl get pvc
```

Example output:

```bash
storaged-data-nebula-storaged-0 Bound pvc-36ca3871-9265-460f-b812-7e73a718xxxx 10Gi RWO ebs-sc 43h
```

```bash
kubectl get pv
```

Example output:

```bash
pvc-36ca3871-9265-460f-b812-xxx 10Gi RWO Delete Bound default/storaged-data-nebula-storaged-0 ebs-sc 43h
```

As you can see, both the PVC and PV capacity have been expanded to 10Gi.
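
Optionally, you can confirm that the filesystem inside the Pod was resized as well. The mount path below is an assumption; check the actual data volume mount in your storaged Pod spec:

```bash
# Verify the resized filesystem from inside the Storage service Pod.
kubectl exec nebula-storaged-0 -- df -h /usr/local/nebula/data
```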
@@ -36,7 +36,7 @@ NebulaGraph Operator does not support v1.x of NebulaGraph; its compatibility with {{nebula.name}}

| {{nebula.name}} version | NebulaGraph Operator version |
| ------------------- | ------------------------- |
- | 3.5.x ~ 3.6.0 | 1.5.0, 1.6.x |
+ | 3.5.x ~ 3.6.0 | 1.5.0 ~ 1.7.x |
| 3.0.0 ~ 3.4.1 | 1.3.0, 1.4.0 ~ 1.4.2 |
| 3.0.0 ~ 3.3.x | 1.0.0, 1.1.0, 1.2.0 |
| 2.5.x ~ 2.6.x | 0.9.0 |