New and updates for operator 090 (#1224)
* custom-conf-parameters&pv-claim&balance data (#1176)

* custom-conf-parameters

* update-pv-claim

* auto-balance-data-parameter

* Update mkdocs.yml

* version macro

* updates

* Update 8.3.balance-data-when-scaling-storage.md

* Update 2.deploy-nebula-operator.md

* Update 2.deploy-nebula-operator.md

* Update 8.3.balance-data-when-scaling-storage.md

* Update 8.1.custom-conf-parameter.md

* updates

* Update 8.1.custom-conf-parameter.md

* Update 8.3.balance-data-when-scaling-storage.md

* colon fix

* Update 3.1create-cluster-with-kubectl.md

* Create 9.upgrade-nebula-cluster.md (#1190)

* Create 9.upgrade-nebula-cluster.md

* Update 9.upgrade-nebula-cluster.md

* Update 9.upgrade-nebula-cluster.md

* Update 9.upgrade-nebula-cluster.md

* add crd updating step for 090 (#1221)&crd&macro&yaml

* upgrade Helm

* Update 3.2create-cluster-with-helm.md

* describe optimization

* Update 9.upgrade-nebula-cluster.md

* Update 9.upgrade-nebula-cluster.md

* Update 2.deploy-nebula-operator.md

* Update 2.deploy-nebula-operator.md

* crd upgrading

* Update 2.deploy-nebula-operator.md

Helm Updates for 090 (#1214)

* upgrade Helm

* Update 3.2create-cluster-with-helm.md

* describe optimization

* Update 9.upgrade-nebula-cluster.md

* Update 9.upgrade-nebula-cluster.md

* Update 2.deploy-nebula-operator.md

Update 2.deploy-nebula-operator.md

Update mkdocs.yml (#1213)

* updates for ingress and upgrading (#1205)

* Update 4.connect-to-nebula-graph-service.md

* Update 4.connect-to-nebula-graph-service.md

* Update 9.upgrade-nebula-cluster.md

* Service access via Ingress  (#1187)

* Update 4.connect-to-nebula-graph-service.md

* Update 4.connect-to-nebula-graph-service.md

* Update 4.connect-to-nebula-graph-service.md
abby-cyber authored Nov 16, 2021
1 parent 992701b commit b8bf070
Showing 12 changed files with 758 additions and 69 deletions.
3 changes: 1 addition & 2 deletions docs-2.0/20.appendix/6.eco-tool-version.md
@@ -72,8 +72,7 @@ Nebula Operator (Operator for short) is used on Kubernetes to automate the dep…
|Nebula Graph version|Operator version (commit id)|
|:---|:---|
| {{ nebula.release }} | {{operator.release}}(6d1104e) |
-->
| {{ nebula.release }} | {{operator.release}}(ba88e28) |
## Nebula Importer
7 changes: 5 additions & 2 deletions docs-2.0/nebula-operator/1.introduction-to-nebula-operator.md
@@ -18,9 +18,11 @@ Nebula Operator provides the following features:

- **Cluster scaling**: Nebula Operator calls the native scaling interfaces of Nebula Graph in a control loop and encapsulates the scaling logic, so users can scale a cluster in or out with a simple YAML configuration while keeping data stable. For more information, see [Scale clusters with Kubectl](3.deploy-nebula-graph-cluster/3.1create-cluster-with-kubectl.md#_3) and [Scale clusters with Helm](3.deploy-nebula-graph-cluster/3.2create-cluster-with-helm.md#_2).

- **Cluster upgrades**: Supports upgrading a Nebula Graph cluster from version 2.5.x to 2.6.x.

- **Self-healing**: Nebula Operator calls the interfaces provided by Nebula Graph clusters to dynamically sense the service status. Once an exception is detected, Nebula Operator performs fault tolerance automatically. For more information, see [Self-healing](5.operator-failover.md).

- **Balanced scheduling**: Based on the scheduler extension interface, the scheduler provided by Nebula Operator evenly distributes the application Pods across the Nebula Graph cluster.

## Limitations

@@ -30,7 +32,8 @@ Nebula Operator does not support Nebula Graph v1.x. Its compatibility with Nebula Graph versions is…

| Nebula Operator version | Nebula Graph version |
| ------------------- | ---------------- |
| {{operator.release}}| {{nebula.release}} |
| {{operator.release}}| 2.5.x ~ 2.6.x |
|0.8.0|2.5.x|

### Feature limitations

115 changes: 96 additions & 19 deletions docs-2.0/nebula-operator/2.deploy-nebula-operator.md
@@ -69,12 +69,20 @@
3. Install Nebula Operator.

```bash
helm install nebula-operator nebula-operator/nebula-operator --namespace=<nebula-operator-system> --version=${chart_version}
helm install nebula-operator nebula-operator/nebula-operator --namespace=<namespace_name> --version=${chart_version}
```

- In the preceding command, `<nebula-operator-system>` is the namespace created by the user. If this namespace has not been created, run `kubectl create namespace nebula-operator-system` to create it. Other namespaces can also be used.
For example, the command to install Nebula Operator {{operator.release}} is as follows.

```bash
helm install nebula-operator nebula-operator/nebula-operator --namespace=nebula-operator-system --version={{operator.release}}
```

- In the preceding command, `nebula-operator-system` is the namespace created by the user. If this namespace has not been created, run `kubectl create namespace nebula-operator-system` to create it. Other namespaces can also be used.

- `${chart_version}` is the version of the Nebula Operator chart. It can be omitted when the chart has only one default version. Run `helm search repo -l nebula-operator` to view the chart versions.
- `{{operator.release}}` is the version of the Nebula Operator chart. It can be omitted when the chart has only one default version. Run `helm search repo -l nebula-operator` to view the chart versions, as shown in the sketch after this list.
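
A minimal sketch consolidating the helper commands mentioned in the notes above; the namespace name `nebula-operator-system` is just the example used in this document:

```bash
# Create the namespace for Nebula Operator if it does not exist yet
kubectl create namespace nebula-operator-system

# List every available version of the Nebula Operator chart
helm search repo -l nebula-operator
```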



Users can customize the chart configuration when running the command to install the Nebula Operator chart. For more information, see **Customize the chart configuration** below.
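
For instance, individual values can be overridden at install time with `--set`; a minimal sketch, assuming the example namespace above and the `image.nebulaOperator.imagePullPolicy` parameter described in the table below:

```bash
# Install the chart while overriding a single value from values.yaml
helm install nebula-operator nebula-operator/nebula-operator \
  --namespace=nebula-operator-system \
  --version={{operator.release}} \
  --set image.nebulaOperator.imagePullPolicy=IfNotPresent
```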

@@ -85,17 +93,17 @@
Example:

```yaml
[abby@master ~]$ helm show values nebula-operator/nebula-operator
image:
nebulaOperator:
image: vesoft/nebula-operator:v0.8.0
imagePullPolicy: IfNotPresent
image: vesoft/nebula-operator:{{operator.branch}}
imagePullPolicy: Always
kubeRBACProxy:
image: gcr.io/kubebuilder/kube-rbac-proxy:v0.8.0
imagePullPolicy: IfNotPresent
imagePullPolicy: Always
kubeScheduler:
image: k8s.gcr.io/kube-scheduler:v1.18.8
imagePullPolicy: IfNotPresent
imagePullPolicy: Always

imagePullSecrets: []
kubernetesClusterDomain: ""
@@ -106,11 +114,11 @@ controllerManager:
env: []
resources:
limits:
cpu: 100m
memory: 30Mi
cpu: 200m
memory: 200Mi
requests:
cpu: 100m
memory: 20Mi
memory: 100Mi

admissionWebhook:
create: true
@@ -122,21 +130,22 @@ scheduler:
env: []
resources:
limits:
cpu: 100m
memory: 30Mi
cpu: 200m
memory: 200Mi
requests:
cpu: 100m
memory: 20Mi
memory: 100Mi
...
```

The parameters in `values.yaml` are described as follows.
Some of the parameters are described as follows.

| Parameter | Default value | Description |
| :------------------------------------- | :------------------------------ | :----------------------------------------- |
| `image.nebulaOperator.image` | `vesoft/nebula-operator:v0.8.0` | The image of Nebula Operator, version v0.8.0. |
| `image.nebulaOperator.image` | `vesoft/nebula-operator:{{operator.branch}}` | The image of Nebula Operator, version {{operator.release}}. |
| `image.nebulaOperator.imagePullPolicy` | `IfNotPresent` | The image pull policy. |
| `imagePullSecrets` | - | The image pull secrets. |
| `kubernetesClusterDomain` | `cluster.local` | The cluster domain. |
| `controllerManager.create` | `true` | Whether to enable controller-manager. |
| `controllerManager.replicas` | `2` | The number of controller-manager replicas. |
| `admissionWebhook.create` | `true` | Whether to enable Admission Webhook. |
@@ -169,11 +178,79 @@ helm install nebula-operator nebula-operator/nebula-operator --namespace=<nebula
3. Update Nebula Operator.

```bash
helm upgrade nebula-operator nebula-operator/nebula-operator --namespace=<nebula-operator-system> -f ${HOME}/nebula-operator/charts/nebula-operator/values.yaml
helm upgrade nebula-operator nebula-operator/nebula-operator --namespace=<namespace_name> -f ${HOME}/nebula-operator/charts/nebula-operator/values.yaml
```

`<nebula-operator-system>` is the namespace created by the user; the nebula-operator Pods run in this namespace.
`<namespace_name>` is the namespace created by the user; the nebula-operator Pods run in this namespace.


### Upgrade Nebula Operator

!!! Compatibility "Legacy version compatibility"

Because Nebula Operator 0.9.0 stores log disks and data disks separately, using the upgraded Operator to manage a Nebula Graph 2.5.x cluster causes compatibility issues. Users can back up the Nebula Graph 2.5.x cluster and then create a 2.6.x cluster with the upgraded Operator.

1. Pull the latest version of the Helm repository.


```bash
helm repo update
```

2. Upgrade the Operator.

```bash
helm upgrade nebula-operator nebula-operator/nebula-operator --namespace=<namespace_name> --version={{operator.release}}
```

Example:

```bash
helm upgrade nebula-operator nebula-operator/nebula-operator --namespace=nebula-operator-system --version={{operator.release}}
```

Output:

```bash
Release "nebula-operator" has been upgraded. Happy Helming!
NAME: nebula-operator
LAST DEPLOYED: Tue Nov 16 02:21:08 2021
NAMESPACE: nebula-operator-system
STATUS: deployed
REVISION: 3
TEST SUITE: None
NOTES:
Nebula Operator installed!
```
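
Optionally, the deployed chart version can be confirmed after the upgrade; a sketch assuming the release name and example namespace used above:

```bash
# List releases in the namespace and check the deployed chart version
helm list --namespace=nebula-operator-system
```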

3. Pull the latest CRD configuration file.

!!! note
After the Operator is upgraded, the corresponding CRD configuration must be upgraded as well; otherwise, creating a Nebula Graph cluster will fail. For the CRD configuration, see [apps.nebula-graph.io_nebulaclusters.yaml](https://github.com/vesoft-inc/nebula-operator/blob/{{operator.branch}}/config/crd/bases/apps.nebula-graph.io_nebulaclusters.yaml).

```bash
helm pull nebula-operator/nebula-operator
```
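
`helm pull` downloads the chart as a `.tgz` archive; a likely unpacking step before applying the CRD file it contains (the archive name is an assumption based on the chart name and version, and the exact path of the CRD file inside the unpacked chart may differ):

```bash
# Unpack the downloaded chart archive to access the packaged CRD file
tar -zxvf nebula-operator-{{operator.release}}.tgz
```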

4. Upgrade the CRD configuration file.

```bash
kubectl apply -f <crd_file_name>.yaml
```

Example:

```bash
kubectl apply -f config/crd/bases/apps.nebula-graph.io_nebulaclusters.yaml
```

Output:

```bash
customresourcedefinition.apiextensions.k8s.io/nebulaclusters.apps.nebula-graph.io created
```
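
To confirm that the CRD is registered, a quick check using the CRD name from the output above:

```bash
# Verify that the NebulaCluster CRD is present in the cluster
kubectl get crd nebulaclusters.apps.nebula-graph.io
```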


### Uninstall Nebula Operator

1. Uninstall the Nebula Operator chart (a likely form of the command is sketched below).
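
A minimal sketch of the uninstall command, assuming the release name and example namespace from the installation steps above:

```bash
# Remove the nebula-operator release from its namespace
helm uninstall nebula-operator --namespace=nebula-operator-system
```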
@@ -28,11 +28,11 @@
memory: "1Gi"
replicas: 1
image: vesoft/nebula-graphd
version: v2.5.1
version: {{nebula.branch}}
service:
type: NodePort
externalTrafficPolicy: Local
storageClaim:
logVolumeClaim:
resources:
requests:
storage: 2Gi
@@ -47,8 +47,13 @@
memory: "1Gi"
replicas: 1
image: vesoft/nebula-metad
version: v2.5.1
storageClaim:
version: {{nebula.branch}}
dataVolumeClaim:
resources:
requests:
storage: 2Gi
storageClassName: gp2
logVolumeClaim:
resources:
requests:
storage: 2Gi
@@ -63,8 +68,13 @@
memory: "1Gi"
replicas: 3
image: vesoft/nebula-storaged
version: v2.5.1
storageClaim:
version: {{nebula.branch}}
dataVolumeClaim:
resources:
requests:
storage: 2Gi
storageClassName: gp2
logVolumeClaim:
resources:
requests:
storage: 2Gi
@@ -73,7 +83,7 @@
name: statefulsets.apps
version: v1
schedulerName: default-scheduler
imagePullPolicy: IfNotPresent
imagePullPolicy: Always
```
The parameters are described as follows:
@@ -83,23 +93,25 @@
| `metadata.name` | - | The name of the Nebula Graph cluster to be created. |
| `spec.graphd.replicas` | `1` | The number of replicas of the Graphd service. |
| `spec.graphd.images` | `vesoft/nebula-graphd` | The container image of the Graphd service. |
| `spec.graphd.version` | `v2.5.1` | The version of the Graphd service. |
| `spec.graphd.version` | `{{nebula.branch}}` | The version of the Graphd service. |
| `spec.graphd.service` | - | The Service configuration of the Graphd service. |
| `spec.graphd.storageClaim` | - | The storage configuration of the Graphd service. |
| `spec.graphd.logVolumeClaim.storageClassName` | - | The log disk storage configuration of the Graphd service. |
| `spec.metad.replicas` | `1` | The number of replicas of the Metad service. |
| `spec.metad.images` | `vesoft/nebula-metad` | The container image of the Metad service. |
| `spec.metad.version` | `v2.5.1` | The version of the Metad service. |
| `spec.metad.storageClaim` | - | The storage configuration of the Metad service. |
| `spec.metad.version` | `{{nebula.branch}}` | The version of the Metad service. |
| `spec.metad.dataVolumeClaim.storageClassName` | - | The data disk storage configuration of the Metad service. |
| `spec.metad.logVolumeClaim.storageClassName` | - | The log disk storage configuration of the Metad service. |
| `spec.storaged.replicas` | `3` | The number of replicas of the Storaged service. |
| `spec.storaged.images` | `vesoft/nebula-storaged` | The container image of the Storaged service. |
| `spec.storaged.version` | `v2.5.1` | The version of the Storaged service. |
| `spec.storaged.storageClaim` | - | The storage configuration of the Storaged service. |
| `spec.storaged.version` | `{{nebula.branch}}` | The version of the Storaged service. |
| `spec.storaged.dataVolumeClaim.storageClassName` | - | The data disk storage configuration of the Storaged service. |
| `spec.storaged.logVolumeClaim.storageClassName` | - | The log disk storage configuration of the Storaged service. |
| `spec.reference.name` | - | The name of the controller it depends on. |
| `spec.schedulerName` | - | The scheduler name. |
| `spec.imagePullPolicy` | The pull policy of the Nebula Graph image. For details about the pull policy, see [Image pull policy](https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy). | The image pull policy. |


2. Create a Nebula Graph cluster.
1. Create a Nebula Graph cluster.

```bash
kubectl create -f apps_v1alpha1_nebulacluster.yaml
@@ -120,8 +132,8 @@
Output:

```bash
NAME GRAPHD-DESIRED GRAPHD-READY METAD-DESIRED METAD-READY STORAGED-DESIRED STORAGED-READY AGE
nebula-cluster 1 1 1 1 3 3 31h
NAME GRAPHD-DESIRED GRAPHD-READY METAD-DESIRED METAD-READY STORAGED-DESIRED STORAGED-READY AGE
nebula 1 1 1 1 3 3 86s
```
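
The listing above is typically produced by querying the NebulaCluster resources; a likely form of the command, using the plural resource name defined by the CRD mentioned earlier:

```bash
# List NebulaCluster resources and check that all services are ready
kubectl get nebulaclusters
```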

## Scale clusters
@@ -138,19 +150,28 @@
storaged:
resources:
requests:
cpu: "1"
memory: "1Gi"
cpu: "500m"
memory: "500Mi"
limits:
cpu: "1"
memory: "1Gi"
replicas: 5
image: vesoft/nebula-storaged
version: v2.5.1
storageClaim:
version: {{nebula.branch}}
dataVolumeClaim:
resources:
requests:
storage: 2Gi
storageClassName: gp2
logVolumeClaim:
resources:
requests:
storage: 2Gi
storageClassName: fast-disks
storageClassName: gp2
reference:
name: statefulsets.apps
version: v1
schedulerName: default-scheduler
```

2. Run the following command to sync the updates above to the Nebula Graph cluster CR.
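
A sketch of the sync step, assuming the same manifest file used when the cluster was created above:

```bash
# Re-apply the edited manifest so the changes reach the NebulaCluster CR
kubectl apply -f apps_v1alpha1_nebulacluster.yaml
```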
