
# Tier 2 Storage

The following Tier 2 storage providers are supported:

- [NFS](#use-nfs-as-tier-2)
- [Google Filestore](#use-google-filestore-storage-as-tier-2)
- [Dell EMC ECS](#use-dell-emc-ecs-as-tier-2)
- [HDFS](#use-hdfs-as-tier-2)

## Use NFS as Tier 2

The following example uses an NFS volume provisioned by the NFS Server Provisioner Helm chart to provide Tier 2 storage.

```
$ helm install stable/nfs-server-provisioner
```

Note that `nfs-server-provisioner` is a toy NFS server and is ONLY intended as a demo. It should NOT be used for production deployments.
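
Before creating volumes against it, you can confirm that the provisioner pod came up. The pod name is derived from the generated Helm release name (e.g. `elevated-leopard-...` in the storage class output below), so this sketch uses a loose filter:

```
# Look for the provisioner pod; the exact name depends on the Helm release
$ kubectl get pods | grep nfs-server-provisioner
```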

You can also connect to an existing NFS server by using the NFS Client Provisioner:

```
helm install --set nfs.server=<address:x.x.x.x> --set nfs.path=</exported/path> --set storageClass.name=nfs --set nfs.mountOptions='{nolock,sec=sys,vers=4.0}' stable/nfs-client-provisioner
```
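
For illustration, here is the same command with hypothetical values filled in, assuming an existing NFS server at `10.0.0.5` that exports `/srv/nfs` (both values are made up for this example):

```
$ helm install --set nfs.server=10.0.0.5 --set nfs.path=/srv/nfs \
    --set storageClass.name=nfs --set nfs.mountOptions='{nolock,sec=sys,vers=4.0}' \
    stable/nfs-client-provisioner
```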

Verify that the `nfs` storage class is now available:

```
$ kubectl get storageclass
NAME   PROVISIONER                                             AGE
nfs    cluster.local/elevated-leopard-nfs-server-provisioner   24s
...
```

Once the NFS provisioner is installed, you can create a `PersistentVolumeClaim` that will be used as Tier 2 for Pravega. Create a `pvc.yaml` file with the following content:

```
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pravega-tier2
spec:
  storageClassName: "nfs"
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Gi
```

Deploy the `PersistentVolumeClaim`:

```
$ kubectl create -f pvc.yaml
```
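
After the claim is created, verify that it binds, and then reference it from the `PravegaCluster` manifest:

```
# The claim should eventually report a Bound status
$ kubectl get pvc pravega-tier2
```

The `tier2` block below is a sketch assuming the operator exposes a `filesystem` option that consumes a PVC by name, mirroring the `ecs` and `hdfs` blocks shown later in this document; confirm the exact field names against your operator version.

```
spec:
  tier2:
    filesystem:
      persistentVolumeClaim:
        claimName: pravega-tier2
```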

## Use Google Filestore Storage as Tier 2

1. Create a Google Filestore.

   Refer to https://cloud.google.com/filestore/docs/accessing-fileshares for more information.

2. Create a `pv.yaml` file with the `PersistentVolume` specification to provide Tier 2 storage.

   ```
   apiVersion: v1
   kind: PersistentVolume
   metadata:
     name: pravega-volume
   spec:
     capacity:
       storage: 1T
     accessModes:
     - ReadWriteMany
     nfs:
       path: /[FILESHARE]
       server: [IP_ADDRESS]
   ```

   where:

   - `[FILESHARE]` is the name of the fileshare on the Cloud Filestore instance (e.g. `vol1`)
   - `[IP_ADDRESS]` is the IP address for the Cloud Filestore instance (e.g. `10.123.189.202`)
3. Deploy the `PersistentVolume` specification.

   ```
   $ kubectl create -f pv.yaml
   ```
4. Create and deploy a `PersistentVolumeClaim` to consume the volume created (a verification sketch follows this list).

   ```
   kind: PersistentVolumeClaim
   apiVersion: v1
   metadata:
     name: pravega-tier2
   spec:
     storageClassName: ""
     accessModes:
       - ReadWriteMany
     resources:
       requests:
         storage: 50Gi
   ```

   ```
   $ kubectl create -f pvc.yaml
   ```
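
As a quick sanity check, confirm that the claim bound to the pre-provisioned volume. Because `storageClassName` is left empty in the claim, dynamic provisioning is disabled and the claim can only bind to a manually created `PersistentVolume` such as `pravega-volume` above:

```
# Both should eventually report a Bound status
$ kubectl get pv pravega-volume
$ kubectl get pvc pravega-tier2
```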

## Use Dell EMC ECS as Tier 2

Pravega can also use an S3-compatible storage backend such as Dell EMC ECS as Tier 2.

Create a file with the secret definition containing your access and secret keys:

```
apiVersion: v1
kind: Secret
metadata:
  name: ecs-secret
type: Opaque
stringData:
  ACCESS_KEY_ID: QWERTY@ecstestdrive.emc.com
  SECRET_KEY: 0123456789
```

Assuming the file is named `ecs-secret.yaml`, create the secret:

```
$ kubectl create -f ecs-secret.yaml
```
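
To double-check that both keys were stored, you can describe the secret (values stay hidden) or decode a single field:

```
# Confirm the secret exists and lists both keys
$ kubectl describe secret ecs-secret

# Optionally decode one field to verify its contents
$ kubectl get secret ecs-secret -o jsonpath='{.data.ACCESS_KEY_ID}' | base64 --decode
```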

Follow the instructions to deploy Pravega manually and configure the `tier2` block in your `PravegaCluster` manifest with your ECS connection details and a reference to the secret above:

```
...
spec:
  tier2:
    ecs:
      uri: http://10.247.10.52:9020
      bucket: shared
      prefix: "pravega/example"
      namespace: pravega
      credentials: ecs-secret
```
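
With the `tier2` block in place, the cluster is deployed like any other manifest. The filename here is hypothetical:

```
$ kubectl create -f pravega-cluster.yaml
```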

## Use HDFS as Tier 2

Pravega can also use HDFS as the storage backend for Tier 2. The only requirement is that the HDFS backend supports the append operation.

Follow the instructions to deploy Pravega manually and configure the `tier2` block in your `PravegaCluster` manifest with your HDFS connection details:

```
spec:
  tier2:
    hdfs:
      uri: hdfs://10.28.2.14:8020/
      root: /example
      replicationFactor: 3
```
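
Since Pravega depends on appends, it may be worth verifying that your HDFS deployment accepts them before deploying the cluster. A minimal sketch using the standard Hadoop CLI (the paths are hypothetical; on older Hadoop versions appends are gated by the `dfs.support.append` property):

```
# Write a file to HDFS, then append to it; the append fails if the
# namenode does not allow append operations
$ echo "first" > /tmp/part1 && echo "second" > /tmp/part2
$ hdfs dfs -put /tmp/part1 /example/append-test
$ hdfs dfs -appendToFile /tmp/part2 /example/append-test
$ hdfs dfs -cat /example/append-test
```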