Upgrade HA network configuration (#80)
Update HA network configuration
---------

Co-authored-by: Ante Javor <ante.javor@memgraph.io>
as51340 and antejavor authored Nov 28, 2024
1 parent 57a5934 commit 9bb5535
Showing 11 changed files with 100 additions and 132 deletions.
67 changes: 35 additions & 32 deletions charts/memgraph-high-availability/README.md
@@ -38,43 +38,46 @@ The affinity is disabled either by running the command above, or by modifying the

The following table lists the configurable parameters of the Memgraph chart and their default values.

| Parameter | Description | Default |
| -------------------------------------------------- | --------------------------------------------------------------------------------------------------- | -------------------------- |
| `memgraph.image.repository` | Memgraph Docker image repository | `memgraph/memgraph` |
| `memgraph.image.tag` | Specific tag for the Memgraph Docker image. Overrides the image tag whose default is chart version. | `2.17.0` |
| `memgraph.image.pullPolicy` | Image pull policy | `IfNotPresent` |
| `memgraph.env.MEMGRAPH_ENTERPRISE_LICENSE` | Memgraph enterprise license | `<your-license>` |
| `memgraph.env.MEMGRAPH_ORGANIZATION_NAME` | Organization name | `<your-organization-name>` |
| `memgraph.probes.startup.failureThreshold` | Startup probe failure threshold | `30` |
| `memgraph.probes.startup.periodSeconds` | Startup probe period in seconds | `10` |
| `memgraph.probes.readiness.initialDelaySeconds` | Readiness probe initial delay in seconds | `5` |
| `memgraph.probes.readiness.periodSeconds` | Readiness probe period in seconds | `5` |
| `memgraph.probes.liveness.initialDelaySeconds` | Liveness probe initial delay in seconds | `30` |
| `memgraph.probes.liveness.periodSeconds` | Liveness probe period in seconds | `10` |
| `memgraph.data.volumeClaim.storagePVC` | Enable storage PVC | `true` |
| `memgraph.data.volumeClaim.storagePVCSize` | Size of the storage PVC | `1Gi` |
| `memgraph.data.volumeClaim.logPVC` | Enable log PVC | `false` |
| `memgraph.data.volumeClaim.logPVCSize` | Size of the log PVC | `256Mi` |
| `memgraph.coordinators.volumeClaim.storagePVC` | Enable storage PVC for coordinators | `true` |
| `memgraph.coordinators.volumeClaim.storagePVCSize` | Size of the storage PVC for coordinators | `1Gi` |
| `memgraph.coordinators.volumeClaim.logPVC` | Enable log PVC for coordinators | `false` |
| `memgraph.coordinators.volumeClaim.logPVCSize` | Size of the log PVC for coordinators | `256Mi` |
| `memgraph.affinity.enabled` | Enables affinity so each instance is deployed to unique node | `true` |
| `data` | Configuration for data instances | See `data` section |
| `coordinators` | Configuration for coordinator instances | See `coordinators` section |

| Parameter | Description | Default |
|---------------------------------------------|-----------------------------------------------------------------------------------------------------|-----------------------------------------|
| `memgraph.image.repository` | Memgraph Docker image repository | `memgraph/memgraph` |
| `memgraph.image.tag`                        | Specific tag for the Memgraph Docker image. Overrides the image tag, whose default is the chart version. | `2.22.0`                                 |
| `memgraph.image.pullPolicy` | Image pull policy | `IfNotPresent` |
| `memgraph.env.MEMGRAPH_ENTERPRISE_LICENSE` | Memgraph enterprise license | `<your-license>` |
| `memgraph.env.MEMGRAPH_ORGANIZATION_NAME` | Organization name | `<your-organization-name>` |
| `memgraph.probes.startup.failureThreshold` | Startup probe failure threshold | `30` |
| `memgraph.probes.startup.periodSeconds` | Startup probe period in seconds | `10` |
| `memgraph.probes.readiness.initialDelaySeconds` | Readiness probe initial delay in seconds | `5` |
| `memgraph.probes.readiness.periodSeconds` | Readiness probe period in seconds | `5` |
| `memgraph.probes.liveness.initialDelaySeconds` | Liveness probe initial delay in seconds | `30` |
| `memgraph.probes.liveness.periodSeconds` | Liveness probe period in seconds | `10` |
| `memgraph.data.volumeClaim.storagePVC` | Enable storage PVC | `true` |
| `memgraph.data.volumeClaim.storagePVCSize` | Size of the storage PVC | `1Gi` |
| `memgraph.data.volumeClaim.logPVC` | Enable log PVC | `false` |
| `memgraph.data.volumeClaim.logPVCSize` | Size of the log PVC | `256Mi` |
| `memgraph.coordinators.volumeClaim.storagePVC` | Enable storage PVC for coordinators | `true` |
| `memgraph.coordinators.volumeClaim.storagePVCSize` | Size of the storage PVC for coordinators | `1Gi` |
| `memgraph.coordinators.volumeClaim.logPVC` | Enable log PVC for coordinators | `false` |
| `memgraph.coordinators.volumeClaim.logPVCSize` | Size of the log PVC for coordinators | `256Mi` |
| `memgraph.affinity.enabled`                 | Enables affinity so each instance is deployed to a unique node                                       | `true`                                   |
| `memgraph.externalAccess.serviceType`       | NodePort or LoadBalancer. Use LoadBalancer for cloud production deployments and NodePort for local testing | `LoadBalancer`                      |
| `memgraph.ports.boltPort` | Bolt port used on coordinator and data instances. | `7687` |
| `memgraph.ports.managementPort` | Management port used on coordinator and data instances. | `10000` |
| `memgraph.ports.replicationPort` | Replication port used on data instances. | `20000` |
| `memgraph.ports.coordinatorPort` | Coordinator port used on coordinators. | `12000` |
| `data` | Configuration for data instances | See `data` section |
| `coordinators` | Configuration for coordinator instances | See `coordinators` section |
| `sysctlInitContainer.enabled` | Enable the init container to set sysctl parameters | `true` |
| `sysctlInitContainer.maxMapCount` | Value for `vm.max_map_count` to be set by the init container | `262144` |
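
For example, a minimal sketch of overriding a few of these values at install time, assuming the chart repository has been added under the alias `memgraph` and the release is named `memgraph-ha` (both names are placeholders):

```
helm install memgraph-ha memgraph/memgraph-high-availability \
  --set memgraph.env.MEMGRAPH_ENTERPRISE_LICENSE="<your-license>" \
  --set memgraph.env.MEMGRAPH_ORGANIZATION_NAME="<your-organization-name>" \
  --set memgraph.externalAccess.serviceType=NodePort \
  --set sysctlInitContainer.maxMapCount=262144
```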

For the `data` and `coordinators` sections, each item in the list has the following parameters:

| Parameter | Description | Default |
| ------------------------------------- | -------------------------------------------- | ---------------------------------- |
| `id` | ID of the instance | `0` for data, `1` for coordinators |
| `boltPort` | Bolt port of the instance | `7687` |
| `managementPort` | Management port of the data instance | `10000` |
| `replicationPort` (data only) | Replication port of the data instance | `20000` |
| `coordinatorPort` (coordinators only) | Coordinator port of the coordinator instance | `12000` |
| `args` | List of arguments for the instance | See `args` section |
| Parameter | Description | Default |
|---------------------------------------------|-----------------------------------------------------------------------------------------------------|-----------------------------------------|
| `id` | ID of the instance | `0` for data, `1` for coordinators |
| `args` | List of arguments for the instance | See `args` section |


The `args` section contains a list of arguments for the instance. The default values are the same for all instances:
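
To inspect the full default `args` lists (and every other default in the tables above), you can dump the chart's bundled values, again assuming the repository alias `memgraph`:

```
helm show values memgraph/memgraph-high-availability
```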

5 changes: 3 additions & 2 deletions charts/memgraph-high-availability/aws/README.md
@@ -26,7 +26,7 @@ eksctl create cluster -f cluster.yaml`
```

should be sufficient. Make sure to change the path to the public SSH key if you want to have SSH access to EC2 instances. After creating the cluster, `kubectl` should pick up
the AWS context and you can verify this by running `kubectl context current-context`. My is pointing to `andi.skrgat@test-cluster-ha.eu-west-1.eksctl.io`.
the AWS context and you can verify this by running `kubectl config current-context`. Mine points to `andi.skrgat@test-cluster-ha.eu-west-1.eksctl.io`.

## Add Helm Charts repository

@@ -57,7 +57,8 @@ aws eks describe-nodegroup --cluster-name test-cluster-ha --nodegroup-name stand
and then provide full access to it:

```
aws iam list-attached-role-policies --role-name eksctl-test-cluster-ha-nodegroup-s-NodeInstanceRole-<ROLE_ID_FROM_PREVIOUS_OUTPUT>
aws iam attach-role-policy --role-name eksctl-test-cluster-ha-nodegroup-s-NodeInstanceRole-<ROLE-ID> --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess
aws iam list-attached-role-policies --role-name eksctl-test-cluster-ha-nodegroup-s-NodeInstanceRole-<ROLE-ID>
```

It is also important to create an Inbound Rule in the Security Group attached to the eksctl cluster which will allow TCP traffic
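
As a hedged sketch, assuming you want to open the Kubernetes NodePort range (30000-32767) and substituting your own security group ID, such a rule can be added with the AWS CLI:

```
aws ec2 authorize-security-group-ingress \
  --group-id <SECURITY-GROUP-ID> \
  --protocol tcp \
  --port 30000-32767 \
  --cidr 0.0.0.0/0
```
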
4 changes: 2 additions & 2 deletions charts/memgraph-high-availability/aws/cluster.yaml
@@ -37,7 +37,7 @@ managedNodeGroups:
  instanceSelector: {}
  instanceType: t3.small
  labels:
    alpha.eksctl.io/cluster-name: test-cluster-ha
    alpha.eksctl.io/cluster-name: mg-ha
    alpha.eksctl.io/nodegroup-name: standard-workers
  maxSize: 5
  minSize: 5
@@ -58,7 +58,7 @@ managedNodeGroups:
  volumeThroughput: 125
  volumeType: gp3
metadata:
  name: test-cluster-ha
  name: mg-ha
  region: eu-west-1
  version: "1.30"
privateCluster:
16 changes: 12 additions & 4 deletions charts/memgraph-high-availability/templates/NOTES.txt
@@ -8,10 +8,18 @@ The cluster setup requires the proper enterprise license to work since HA is an
You can connect to Memgraph instances via Lab, mgconsole, or any other client. By default, all Memgraph instances (coordinators and data instances) listen on port 7687 for a Bolt connection.
Make sure you are connecting to the correct IP address and port. For details, check the configuration on your cloud provider (AWS, GCP, Azure, etc.).

If you are connecting via mgconsole, you can use the following command:
To start, you should add coordinators and register data instances in order to completely set up the cluster. In both cases you only need to modify the 'bolt_server' part and set it to the DNS name
of the node on which the instance is being started. Node ports are fixed. Example:

mgconsole --host <your-instance-ip> --port <your-instance-port>
ADD COORDINATOR 2 WITH CONFIG {"bolt_server": "<NODE-2-IP>:32002", "management_server": "memgraph-coordinator-2.default.svc.cluster.local:10000", "coordinator_server": "memgraph-coordinator-2.default.svc.cluster.local:12000"};
ADD COORDINATOR 3 WITH CONFIG {"bolt_server": "<NODE-3-IP>:32003", "management_server": "memgraph-coordinator-3.default.svc.cluster.local:10000", "coordinator_server": "memgraph-coordinator-3.default.svc.cluster.local:12000"};
REGISTER INSTANCE instance_1 WITH CONFIG {"bolt_server": "<NODE-4-IP>:32010", "management_server": "memgraph-data-0.default.svc.cluster.local:10000", "replication_server": "memgraph-data-0.default.svc.cluster.local:20000"};
REGISTER INSTANCE instance_2 WITH CONFIG {"bolt_server": "<NODE-5-IP>:32011", "management_server": "memgraph-data-1.default.svc.cluster.local:10000", "replication_server": "memgraph-data-1.default.svc.cluster.local:20000"};
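
For example, assuming coordinator 1 is exposed on node port 32001 (this port is an assumption following the numbering pattern above), the cluster state can be checked through mgconsole once the instances are registered:

echo 'SHOW INSTANCES;' | mgconsole --host <COORDINATOR-1-IP> --port 32001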

If you are connecting via Lab, specify your instance IP address and port in Memgraph Lab GUI.
If you are connecting via Lab, specify your coordinator instance IP address and port in the Memgraph Lab GUI and select the Memgraph HA cluster connection type.

If you are using minikube, you can find out your instance ip using `minikube ip`.
If you are using minikube, you can find your node IP using `minikube ip`.

ADD COORDINATOR 3 WITH CONFIG {"bolt_server": "34.251.38.32:32003", "management_server": "memgraph-coordinator-3.default.svc.cluster.local:10000", "coordinator_server": "memgraph-coordinator-3.default.svc.cluster.local:12000"};
REGISTER INSTANCE instance_1 WITH CONFIG {"bolt_server": "52.50.209.155:32010", "management_server": "memgraph-data-0.default.svc.cluster.local:10000", "replication_server": "memgraph-data-0.default.svc.cluster.local:20000"};
REGISTER INSTANCE instance_2 WITH CONFIG {"bolt_server": "34.24.10.69:32011", "management_server": "memgraph-data-1.default.svc.cluster.local:10000", "replication_server": "memgraph-data-1.default.svc.cluster.local:20000"};
