diff --git a/docs/operating-eck/eck-permissions.asciidoc b/docs/operating-eck/eck-permissions.asciidoc index 19242d0e69..b77038de3b 100644 --- a/docs/operating-eck/eck-permissions.asciidoc +++ b/docs/operating-eck/eck-permissions.asciidoc @@ -61,7 +61,7 @@ These permissions are needed by the Service Account that ECK operator runs as. |Pod||no|Assuring expected Pods presence during Elasticsearch reconciliation, safely deleting Pods during configuration changes and validating `podTemplate` by dry-run creation of Pods. |Endpoint||no|Checking availability of service endpoints. |Event||no|Emitting events concerning reconciliation progress and issues. -|PersistentVolumeClaim||no|Expanding existing volumes. Check link:https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-volume-claim-templates.html#k8s_updating_the_volume_claim_settings[docs] to learn more. +|PersistentVolumeClaim||no|Expanding existing volumes. Check <<{p}-volume-claim-templates-update,docs>> to learn more. |Secret||no|Reading/writing configuration, passwords, certificates, and so on. |Service||no|Creating Services fronting Elastic Stack applications. |ConfigMap||no|Reading/writing configuration. @@ -69,7 +69,7 @@ These permissions are needed by the Service Account that ECK operator runs as. |Deployment|apps|no|Deploying Kibana, APM Server, EnterpriseSearch, Maps, Beats or Elastic Agent. |DaemonSet|apps|no|Deploying Beats or Elastic Agent. |PodDisruptionBudget|policy|no|Ensuring update safety for Elasticsearch. Check link:https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-pod-disruption-budget.html[docs] to learn more. -|StorageClass|storage.k8s.io|yes|Validating storage expansion support. Check link:https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-volume-claim-templates.html#k8s_updating_the_volume_claim_settings[docs] to learn more. +|StorageClass|storage.k8s.io|yes|Validating storage expansion support. Check <<{p}-volume-claim-templates-update,docs>> to learn more. 
|coreauthorization.k8s.io|SubjectAccessReview|yes|Controlling access between referenced resources. Check link:https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-restrict-cross-namespace-associations.html[docs] to learn more. |=== diff --git a/docs/orchestrating-elastic-stack-applications/elasticsearch/volume-claim-templates.asciidoc b/docs/orchestrating-elastic-stack-applications/elasticsearch/volume-claim-templates.asciidoc index 42694e9221..689743c76b 100644 --- a/docs/orchestrating-elastic-stack-applications/elasticsearch/volume-claim-templates.asciidoc +++ b/docs/orchestrating-elastic-stack-applications/elasticsearch/volume-claim-templates.asciidoc @@ -56,11 +56,12 @@ spec: The possible values are `DeleteOnScaledownAndClusterDeletion` and `DeleteOnScaledownOnly`. By default `DeleteOnScaledownAndClusterDeletion` is in effect, which means that all PersistentVolumeClaims are deleted together with the Elasticsearch cluster. However, `DeleteOnScaledownOnly` keeps the PersistentVolumeClaims when deleting the Elasticsearch cluster. If you recreate a deleted cluster with the same name and node sets as before, the existing PersistentVolumeClaims will be adopted by the new cluster. [float] +[id="{p}-{page_id}-update"] == Updating the volume claim settings If the storage class allows link:https://kubernetes.io/blog/2018/07/12/resizing-persistent-volumes-using-kubernetes/[volume expansion], you can increase the storage requests size in the volumeClaimTemplates. ECK will update the existing PersistentVolumeClaims accordingly, and recreate the StatefulSet automatically. If the volume driver supports `ExpandInUsePersistentVolumes`, the filesystem is resized online, without the need of restarting the Elasticsearch process, or re-creating the Pods. If the volume driver does not support `ExpandInUsePersistentVolumes`, Pods must be manually deleted after the resize, to be recreated automatically with the expanded filesystem. 
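For example, assuming the `quickstart` cluster from the quickstart guide with a 1Gi data volume (the resource names and sizes here are illustrative), the expansion is a single change to the storage request:

[source,yaml,subs="attributes"]
----
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: {version}
  nodeSets:
  - name: default
    count: 1
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 2Gi # increased from 1Gi; valid only if the storage class allows volume expansion
----

ECK then resizes the existing PersistentVolumeClaims and recreates the StatefulSet.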
-Any other changes are forbidden in the volumeClaimTemplates, such as changing the storage class or decreasing the volume size. To make these changes, you can create a new nodeSet with different settings, and remove the existing nodeSet. In practice, that's equivalent to renaming the existing nodeSet while modifying its claim settings in a single update. Before removing Pods of the deleted nodeSet, ECK makes sure that data is migrated to other nodes. +Kubernetes forbids any other changes in the volumeClaimTemplates, such as link:https://kubernetes.io/docs/concepts/storage/storage-classes[changing the storage class] or link:https://kubernetes.io/blog/2018/07/12/resizing-persistent-volumes-using-kubernetes/[decreasing the volume size]. To make these changes, you can create a new nodeSet with different settings, and remove the existing nodeSet. In practice, that's equivalent to renaming the existing nodeSet while modifying its claim settings in a single update. Before removing Pods of the deleted nodeSet, ECK makes sure that data is migrated to other nodes. [float] == EmptyDir diff --git a/docs/orchestrating-elastic-stack-applications/kibana.asciidoc b/docs/orchestrating-elastic-stack-applications/kibana.asciidoc index ddd733c368..98e2763592 100644 --- a/docs/orchestrating-elastic-stack-applications/kibana.asciidoc +++ b/docs/orchestrating-elastic-stack-applications/kibana.asciidoc @@ -5,18 +5,18 @@ link:https://www.elastic.co/guide/en/cloud-on-k8s/master/k8s-{page_id}.html[View **** endif::[] [id="{p}-{page_id}"] -= Run Kibana on ECK += Run {kib} on ECK -The <<{p}-deploy-kibana,quickstart>> is a good starting point to quickly setup a Kibana instance with ECK. -The following sections describe how to customize a Kibana deployment to suit your requirements. +The <<{p}-deploy-kibana,quickstart>> is a good starting point to quickly set up a {kib} instance with ECK. +The following sections describe how to customize a {kib} deployment to suit your requirements. 
-* <<{p}-kibana-es,Connect to an Elasticsearch cluster>> -** <<{p}-kibana-eck-managed-es,Connect to an Elasticsearch cluster managed by ECK>> -** <<{p}-kibana-external-es,Connect to an Elasticsearch cluster not managed by ECK>> +* <<{p}-kibana-es,Connect to an {es} cluster>> +** <<{p}-kibana-eck-managed-es,Connect to an {es} cluster managed by ECK>> +** <<{p}-kibana-external-es,Connect to an {es} cluster not managed by ECK>> * <<{p}-kibana-advanced-configuration,Advanced configuration>> ** <<{p}-kibana-pod-configuration,Pod Configuration>> -** <<{p}-kibana-configuration,Kibana Configuration>> -** <<{p}-kibana-scaling,Scaling out a Kibana deployment>> +** <<{p}-kibana-configuration,{kib} Configuration>> +** <<{p}-kibana-scaling,Scaling out a {kib} deployment>> * <<{p}-kibana-secure-settings,Secure settings>> * <<{p}-kibana-http-configuration,HTTP Configuration>> ** <<{p}-kibana-http-publish,Load balancer settings and TLS SANs>> @@ -25,15 +25,15 @@ The following sections describe how to customize a Kibana deployment to suit you ** <<{p}-kibana-plugins>> [id="{p}-kibana-es"] -== Connect to an Elasticsearch cluster +== Connect to an {es} cluster -You can connect an Elasticsearch cluster that is either managed by ECK or not managed by ECK. +You can connect an {es} cluster that is either managed by ECK or not managed by ECK. [id="{p}-kibana-eck-managed-es"] -=== Elasticsearch is managed by ECK +=== {es} is managed by ECK -It is quite straightforward to connect a Kibana instance to an Elasticsearch cluster managed by ECK: +It is quite straightforward to connect a {kib} instance to an {es} cluster managed by ECK: [source,yaml,subs="attributes"] ---- @@ -49,25 +49,22 @@ spec: namespace: default ---- -The use of `namespace` is optional if the Elasticsearch cluster is running in the same namespace as Kibana. An additional `serviceName` attribute can be specified to target a custom Kubernetes service. -Refer to <<{p}-traffic-splitting>> for more information. 
+The use of `namespace` is optional if the {es} cluster is running in the same namespace as {kib}. An additional `serviceName` attribute can be specified to target a custom Kubernetes service. Refer to <<{p}-traffic-splitting>> for more information. The {kib} configuration file is automatically set up by ECK to establish a secure connection to {es}. -NOTE: Any Kibana can reference (and thus access) any Elasticsearch instance as long as they are both in namespaces that are watched by the same ECK instance. ECK will copy the required Secret from Elasticsearch to Kibana namespace. Kibana cannot automatically connect to Elasticsearch (through `elasticsearchRef`) in a namespace managed by a different ECK instance. For more information, check <<{p}-restrict-cross-namespace-associations,Restrict cross-namespace resource associations>>. - -The Kibana configuration file is automatically setup by ECK to establish a secure connection to Elasticsearch. +NOTE: Any {kib} can reference (and thus access) any {es} instance as long as they are both in namespaces that are watched by the same ECK instance. ECK will copy the required Secret from the {es} namespace to the {kib} namespace. {kib} cannot automatically connect to {es} (through `elasticsearchRef`) in a namespace managed by a different ECK instance. For more information, check <<{p}-restrict-cross-namespace-associations,Restrict cross-namespace resource associations>>. [id="{p}-kibana-external-es"] -=== Elasticsearch is not managed by ECK +=== {es} is not managed by ECK -You can also configure Kibana to connect to an Elasticsearch cluster that is managed by a different installation of ECK, or runs outside the Kubernetes cluster. In this case, you need the IP address or URL of the Elasticsearch cluster and a valid username and password pair to access the cluster. +You can also configure {kib} to connect to an {es} cluster that is managed by a different installation of ECK, or runs outside the Kubernetes cluster. 
In this case, you need the IP address or URL of the {es} cluster and a valid username and password pair to access the cluster. === Using a Secret -Refer to <<{p}-connect-to-unmanaged-resources>> to automatically configure Kibana using connection settings from a `Secret`. +Refer to <<{p}-connect-to-unmanaged-resources>> to automatically configure {kib} using connection settings from a link:https://kubernetes.io/docs/concepts/configuration/secret/[`Secret`]. === Using secure settings -Use the <<{p}-kibana-secure-settings,secure settings>> mechanism to securely store the credentials of the external Elasticsearch cluster: +For example, use the <<{p}-kibana-secure-settings,secure settings>> mechanism to securely store the `$PASSWORD` credential of the external {es} cluster's default `elastic` user, as set in <<{p}-deploy-elasticsearch,Deploy an {es} cluster>>: [source,shell] ---- @@ -92,7 +89,7 @@ spec: ---- -If the external Elasticsearch cluster is using a self-signed certificate, create a Kubernetes Secret containing the CA certificate and mount it to the Kibana container as follows: +If the external {es} cluster is using a self-signed certificate, create a link:https://kubernetes.io/docs/concepts/configuration/secret/[`Secret`] containing the CA certificate and mount it to the {kib} container as follows: [source,yaml,subs="attributes"] ---- @@ -131,16 +128,17 @@ spec: If you already looked at the <<{p}-elasticsearch-specification,Elasticsearch on ECK>> documentation, some of these concepts might sound familiar to you. 
The resource definitions in ECK share the same philosophy when you want to: -* Customize the Pod configuration -* Customize the product configuration -* Manage HTTP settings -* Use secure settings +* <<{p}-kibana-pod-configuration,Customize the Pod configuration>> +* <<{p}-kibana-configuration,Customize the product configuration>> +* <<{p}-kibana-http-configuration,Manage HTTP settings>> +* <<{p}-kibana-secure-settings,Use secure settings>> +* <<{p}-kibana-plugins,Install {kib} plugins>> [id="{p}-kibana-pod-configuration"] === Pod configuration -You can <<{p}-customize-pods,customize the Kibana Pod>> using a Pod template. +You can <<{p}-customize-pods,customize the {kib} Pod>> using a link:https://kubernetes.io/docs/concepts/workloads/pods/#pod-templates[Pod Template]. -The following example demonstrates how to create a Kibana deployment with custom node affinity, increased heap size, and resource limits. +The following example demonstrates how to create a {kib} deployment with custom node affinity, increased heap size, and resource limits. [source,yaml,subs="attributes"] ---- @@ -171,15 +169,15 @@ spec: type: frontend ---- -The name of the container in the Pod template must be `kibana`. +The name of the container in the link:https://kubernetes.io/docs/concepts/workloads/pods/#pod-templates[Pod Template] must be `kibana`. Check <<{p}-compute-resources-kibana-and-apm>> for more information. [id="{p}-kibana-configuration"] -=== Kibana configuration -You can add your own Kibana settings to the `spec.config` section. +=== {kib} configuration +You can add your own {kib} settings to the `spec.config` section. -The following example demonstrates how to set the `elasticsearch.requestHeadersWhitelist` configuration option. +The following example demonstrates how to set the link:{kibana-ref}/settings.html#elasticsearch-requestHeadersWhitelist[`elasticsearch.requestHeadersWhitelist`] configuration option. 
[source,yaml,subs="attributes"] ---- @@ -198,13 +196,13 @@ spec: ---- [id="{p}-kibana-scaling"] -=== Scale out a Kibana deployment +=== Scale out a {kib} deployment -To deploy more than one instance of Kibana, all the instances must share a same set of encryption keys. The following keys are automatically generated by the operator: +To deploy more than one instance of {kib}, the instances must share a matching set of encryption keys. The following keys are automatically generated by the operator: -* `xpack.security.encryptionKey` -* `xpack.reporting.encryptionKey` -* `xpack.encryptedSavedObjects.encryptionKey` +* link:{kibana-ref}/security-settings-kb.html#xpack-security-encryptionKey[`xpack.security.encryptionKey`] +* link:{kibana-ref}/reporting-settings-kb.html#encryption-keys[`xpack.reporting.encryptionKey`] +* link:{kibana-ref}/xpack-security-secure-saved-objects.html[`xpack.encryptedSavedObjects.encryptionKey`] [TIP] ==== @@ -220,14 +218,14 @@ kubectl get secret my-kibana-kb-config -o jsonpath='{ .data.kibana\.yml }' | bas You can provide your own encryption keys using a secure setting, as described in <<{p}-kibana-secure-settings,Secure settings>>. -NOTE: While most reconfigurations of your Kibana instances are carried out in rolling upgrade fashion, all version upgrades will cause Kibana downtime. This happens because you can only run a single version of Kibana at any given time. For more information, check link:https://www.elastic.co/guide/en/kibana/current/upgrade.html[Upgrade Kibana]. +NOTE: While most reconfigurations of your {kib} instances are carried out in rolling upgrade fashion, all version upgrades will cause {kib} downtime. This happens because you can only run a single version of {kib} at any given time. For more information, check link:https://www.elastic.co/guide/en/kibana/current/upgrade.html[Upgrade {kib}]. 
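As a minimal sketch reusing the `quickstart` resource names from the quickstart guide (illustrative, not required), scaling out amounts to raising `spec.count`; the operator generates the encryption keys listed above and shares them across all instances:

[source,yaml,subs="attributes"]
----
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: quickstart
spec:
  version: {version}
  count: 3 # scale from one to three instances; ECK shares the generated encryption keys
  elasticsearchRef:
    name: quickstart
----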
[id="{p}-kibana-secure-settings"] == Secure settings -<<{p}-es-secure-settings,Similar to Elasticsearch>>, you can use Kubernetes secrets to manage secure settings for Kibana. +<<{p}-es-secure-settings,Similar to {es}>>, you can use Kubernetes secrets to manage secure settings for {kib}. -For example, you can define a custom encryption key for Kibana as follows: +For example, you can define a custom encryption key for {kib} as follows: . Create a secret containing the desired setting: + @@ -260,8 +258,8 @@ spec: [id="{p}-kibana-http-publish"] === Load balancer settings and TLS SANs -By default a `ClusterIP` link:https://kubernetes.io/docs/concepts/services-networking/service/[service] is created and associated with the Kibana deployment. -If you want to expose Kibana externally with a link:https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer[load balancer], it is recommended to include a custom DNS name or IP in the self-generated certificate. +By default a `ClusterIP` link:https://kubernetes.io/docs/concepts/services-networking/service/[Service] is created and associated with the {kib} deployment. +If you want to expose {kib} externally with a link:https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer[load balancer], it is recommended to include a custom DNS name or IP in the self-generated certificate. [source,yaml,subs="attributes"] ---- @@ -288,12 +286,12 @@ spec: [id="{p}-kibana-http-custom-tls"] === Provide your own certificate -If you want to use your own certificate, the required configuration is identical to Elasticsearch. Check <<{p}-custom-http-certificate>>. +If you want to use your own certificate, the required configuration is identical to {es}. Check <<{p}-custom-http-certificate>>. [id="{p}-kibana-http-disable-tls"] === Disable TLS -You can disable the generation of the self-signed certificate and hence disable TLS. 
+You can disable the generation of the self-signed certificate and hence link:{kibana-ref}/using-kibana-with-security.html[disable TLS]. This is not recommended outside of testing clusters. [source,yaml,subs="attributes"] ---- @@ -313,9 +311,9 @@ spec: ---- [id="{p}-kibana-plugins"] -== Install Kibana plugins +== Install {kib} plugins -You can override the Kibana container image to use your own image with the plugins already installed, as described in the <<{p}-custom-images,Create custom images>>. You should run an `optimize` step as part of the build, otherwise it needs to run at startup which requires additional time and resources. +You can override the {kib} container image to use your own image with the plugins already installed, as described in <<{p}-custom-images,Create custom images>>. You should run an `optimize` step as part of the build, otherwise it needs to run at startup, which requires additional time and resources. This is a Dockerfile example: diff --git a/docs/overview.asciidoc b/docs/overview.asciidoc index f1bfeca4e1..c096f8c48e 100644 --- a/docs/overview.asciidoc +++ b/docs/overview.asciidoc @@ -20,6 +20,7 @@ With Elastic Cloud on Kubernetes you can streamline critical operations, such as . Setting up hot-warm-cold architectures with availability zone awareness -- +[id="{p}-supported"] == Supported versions include::supported-versions.asciidoc[] diff --git a/docs/quickstart.asciidoc b/docs/quickstart.asciidoc index 36c8880cab..2d65be27e4 100644 --- a/docs/quickstart.asciidoc +++ b/docs/quickstart.asciidoc @@ -9,20 +9,16 @@ endif::[] [partintro] -- -With Elastic Cloud on Kubernetes (ECK) you can extend the basic Kubernetes orchestration capabilities to easily deploy, secure, upgrade your Elasticsearch cluster, and much more. +With Elastic Cloud on Kubernetes (ECK) you can extend the basic Kubernetes orchestration capabilities to easily deploy, secure, and upgrade your {es} cluster, and much more. -Eager to get started? 
This quick guide shows you how to: +Eager to get started? This quickstart guide shows you how to: * <<{p}-deploy-eck,Deploy ECK in your Kubernetes cluster>> -* <<{p}-deploy-elasticsearch,Deploy an Elasticsearch cluster>> -* <<{p}-deploy-kibana,Deploy a Kibana instance>> -* <<{p}-upgrade-deployment,Upgrade your deployment>> -* <<{p}-persistent-storage,Use persistent storage>> -* <<{p}-check-samples,Check out the samples>> +* <<{p}-deploy-elasticsearch,Deploy an {es} cluster>> +* <<{p}-deploy-kibana,Deploy a {kib} instance>> +* <<{p}-update-deployment,Update your deployment>> -**Supported versions** - -include::supported-versions.asciidoc[] +Afterwards, you can find further sample resources link:{eck_github}/tree/{eck_release_branch}/config/samples[in the project repository] or in <<{p}-recipes,our recipes>>. -- @@ -31,7 +27,7 @@ include::supported-versions.asciidoc[] Things to consider before you start: -* For this quickstart guide, your Kubernetes cluster is assumed to be already up and running. Before you proceed with the ECK installation, make sure you check the link:k8s-quickstart.html[supported versions]. +* For this quickstart guide, your Kubernetes cluster is assumed to be already up and running. Before you proceed with the ECK installation, make sure you check the <<{p}-supported,supported versions>>. * If you are using GKE, make sure your user has `cluster-admin` permissions. For more information, check link:https://cloud.google.com/kubernetes-engine/docs/how-to/role-based-access-control#iam-rolebinding-bootstrap[Prerequisites for using Kubernetes RBAC on GKE]. @@ -39,16 +35,18 @@ Things to consider before you start: * Refer to <<{p}-installing-eck>> for more information on installation options. -IMPORTANT: Check the <<{p}-upgrading-eck,upgrade notes>> if you are attempting to upgrade an existing ECK deployment. +* Check the <<{p}-upgrading-eck,upgrade notes>> if you are attempting to upgrade an existing ECK deployment. 
+ +To deploy the ECK operator: -. Install link:https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/[custom resource definitions]: +. Install link:https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/[custom resource definitions] using link:https://kubernetes.io/docs/reference/kubectl/generated/kubectl_create/[`create`]: + [source,sh,subs="attributes"] ---- kubectl create -f https://download.elastic.co/downloads/eck/{eck_version}/crds.yaml ---- + -The following Elastic resources have been created: +This outputs a confirmation for each Elastic resource as it is created, similar to the following: + [source,sh] ---- @@ -62,7 +60,7 @@ customresourcedefinition.apiextensions.k8s.io/kibanas.kibana.k8s.elastic.co crea customresourcedefinition.apiextensions.k8s.io/logstashes.logstash.k8s.elastic.co created ---- -. Install the operator with its RBAC rules: +. Install the operator with its RBAC rules using link:https://kubernetes.io/docs/reference/kubectl/generated/kubectl_apply/[`apply`]: + [source,sh,subs="attributes"] ---- @@ -70,19 +68,28 @@ kubectl apply -f https://download.elastic.co/downloads/eck/{eck_version}/operato ---- NOTE: The ECK operator runs by default in the `elastic-system` namespace. It is recommended that you choose a dedicated namespace for your workloads, rather than using the `elastic-system` or the `default` namespace. -. Monitor the operator logs: +. Monitor the operator's setup from its logs using link:https://kubernetes.io/docs/reference/kubectl/generated/kubectl_logs/[`logs`]: + [source,sh] ---- kubectl -n elastic-system logs -f statefulset.apps/elastic-operator ---- -[id="{p}-deploy-elasticsearch"] -== Deploy an Elasticsearch cluster +. 
Once ready, the operator reports a `Running` status, as shown with link:https://kubernetes.io/docs/reference/kubectl/generated/kubectl_get/[`get`] (replace the default `elastic-system` with your installation namespace as needed): ++ +[source,sh] +---- +$ kubectl get -n elastic-system pods +NAME READY STATUS RESTARTS AGE +elastic-operator-0 1/1 Running 0 1m +---- -Apply a simple link:{ref}/getting-started.html[Elasticsearch] cluster specification, with one Elasticsearch node: +This completes the ECK operator quickstart. We recommend continuing with <<{p}-deploy-elasticsearch,deploying an {es} cluster>>. For more configuration options, refer to <<{p}-operating-eck,Operating ECK>>. -NOTE: If your Kubernetes cluster does not have any Kubernetes nodes with at least 2GiB of free memory, the pod will be stuck in `Pending` state. Check <<{p}-managing-compute-resources>> for more information about resource requirements and how to configure them. +[id="{p}-deploy-elasticsearch"] +== Deploy an {es} cluster + +To deploy a simple link:{ref}/getting-started.html[{es}] cluster specification with one {es} node: [source,yaml,subs="attributes,+macros"] ---- @@ -101,77 +108,100 @@ spec: EOF ---- -The operator automatically creates and manages Kubernetes resources to achieve the desired state of the Elasticsearch cluster. It may take up to a few minutes until all the resources are created and the cluster is ready for use. +The operator automatically creates and manages Kubernetes resources to achieve the desired state of the {es} cluster. It may take up to a few minutes until all the resources are created and the cluster is ready for use. CAUTION: Setting `node.store.allow_mmap: false` has performance implications and should be tuned for production workloads as described in the <<{p}-virtual-memory>> section. +NOTE: If your Kubernetes cluster does not have any Kubernetes nodes with at least 2GiB of free memory, the Pod will be stuck in a `Pending` state. 
Check <<{p}-managing-compute-resources>> for more information about resource requirements and how to configure them. + +NOTE: The cluster that you deployed in this quickstart guide only allocates a persistent volume of 1GiB for storage using the default link:https://kubernetes.io/docs/concepts/storage/storage-classes/[storage class] defined for the Kubernetes cluster. You will most likely want to have more control over this for production workloads. Refer to <<{p}-volume-claim-templates>> for more information. + +For a full description of each `CustomResourceDefinition` (CRD), refer to the <<{p}-api-reference>> or view the CRD files in the link:{eck_github}/tree/{eck_release_branch}/config/crds[project repository]. You can also retrieve information about a CRD from the cluster. For example, describe the {es} CRD specification with link:https://kubernetes.io/docs/reference/kubectl/generated/kubectl_describe/[`describe`]: + +[source,sh] +---- +kubectl describe crd elasticsearch +---- + [float] [id="{p}-elasticsearch-monitor-cluster-health"] === Monitor cluster health and creation progress -Get an overview of the current Elasticsearch clusters in the Kubernetes cluster, including health, version and number of nodes: +Get an overview of the current {es} clusters in the Kubernetes cluster with link:https://kubernetes.io/docs/reference/kubectl/generated/kubectl_get/[`get`], including health, version and number of nodes: [source,sh] ---- kubectl get elasticsearch ---- +When you first create the {es} cluster, there is no `HEALTH` status and the `PHASE` is empty. After the Pod and Service start up, the `PHASE` turns into `Ready`, and `HEALTH` becomes `green`. The `HEALTH` status comes from {es}'s link:{ref}/cluster-health.html[cluster health API]. 
+ [source,sh,subs="attributes"] ---- NAME HEALTH NODES VERSION PHASE AGE -quickstart green 1 {version} Ready 1m +quickstart 1 {version} 1s ---- -When you create the cluster, there is no `HEALTH` status and the `PHASE` is empty. After a while, the `PHASE` turns into `Ready`, and `HEALTH` becomes `green`. The `HEALTH` status comes from link:{ref}/cluster-health.html[Elasticsearch's cluster health API]. - -One Pod is in the process of being started: +While the {es} Pod is starting up, it reports a `Pending` status, as shown with link:https://kubernetes.io/docs/reference/kubectl/generated/kubectl_get/[`get`]: [source,sh] ---- kubectl get pods --selector='elasticsearch.k8s.elastic.co/cluster-name=quickstart' ---- +The output is similar to: + [source,sh] ---- NAME READY STATUS RESTARTS AGE -quickstart-es-default-0 1/1 Running 0 79s +quickstart-es-default-0 0/1 Pending 0 9s ---- -Access the logs for that Pod: +During and after start-up, you can access that Pod's link:https://kubernetes.io/docs/reference/kubectl/generated/kubectl_logs/[`logs`]: [source,sh] ---- kubectl logs -f quickstart-es-default-0 ---- +Once the Pod is up and running, the earlier link:https://kubernetes.io/docs/reference/kubectl/generated/kubectl_get/[`get`] request reports: + +[source,sh,subs="attributes"] +---- +NAME HEALTH NODES VERSION PHASE AGE +quickstart green 1 {version} Ready 1m +---- + [float] -=== Request Elasticsearch access +=== Request {es} access -A ClusterIP Service is automatically created for your cluster: +A `ClusterIP` Service is automatically created for your cluster, as shown with link:https://kubernetes.io/docs/reference/kubectl/generated/kubectl_get/[`get`]: [source,sh] ---- kubectl get service quickstart-es-http ---- +The output is similar to: + [source,sh] ---- NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE quickstart-es-http ClusterIP 10.15.251.145 9200/TCP 34m ---- +To make requests to the link:{ref}/rest-apis.html[{es} API]: + 
. Get the credentials. + -A default user named `elastic` is created by default with the password stored in a Kubernetes secret: +By default, a user named `elastic` is created with the password stored inside a link:https://kubernetes.io/docs/concepts/configuration/secret/[Kubernetes secret]. This default user can be disabled if desired; refer to <<{p}-users-and-roles>> for more information. + [source,sh] ---- PASSWORD=$(kubectl get secret quickstart-es-elastic-user -o go-template='{{.data.elastic | base64decode}}') ---- -+ -NOTE: The `elastic` user creation can be disabled if desired. Check <<{p}-users-and-roles>> for more information. -. Request the Elasticsearch endpoint. You can do so from inside the Kubernetes cluster or from your local workstation. +. Request the link:{ref}/rest-api-root.html[{es} root API]. You can do so from inside the Kubernetes cluster or from your local workstation. For demonstration purposes, certificate verification is disabled using the `-k` curl flag; however, this is not recommended outside of testing. Refer to <<{p}-setting-up-your-own-certificate>> for more information. * From inside the Kubernetes cluster: + [source,sh] @@ -193,25 +223,14 @@ kubectl port-forward service/quickstart-es-http 9200 curl -u "elastic:$PASSWORD" -k "https://localhost:9200" ---- -NOTE: Disabling certificate verification using the `-k` flag is not recommended and should be used for testing purposes only. Check <<{p}-setting-up-your-own-certificate>>. - -[source,json] ---- -{ - "name" : "quickstart-es-default-0", - "cluster_name" : "quickstart", - "cluster_uuid" : "XqWg0xIiRmmEBg4NMhnYPg", - "version" : {...}, - "tagline" : "You Know, for Search" -} ---- +This completes the {es} cluster quickstart. We recommend continuing with <<{p}-deploy-kibana,deploying a {kib} instance>>. For more configuration options, refer to <<{p}-elasticsearch-specification,Running {es} on ECK>>. 
[id="{p}-deploy-kibana"] -== Deploy a Kibana instance +== Deploy a {kib} instance -To deploy your link:{kibana-ref}/introduction.html#introduction[Kibana] instance go through the following steps. +To deploy a simple link:{kibana-ref}/introduction.html#introduction[{kib}] specification with one {kib} instance, go through the following steps: -. Specify a Kibana instance and associate it with your Elasticsearch cluster: +. Specify a {kib} instance and associate it with the {es} `quickstart` cluster created previously in <<{p}-deploy-elasticsearch,Deploy an {es} cluster>>: + [source,yaml,subs="attributes,+macros"] ---- @@ -228,9 +247,9 @@ spec: EOF ---- -. Monitor Kibana health and creation progress. +. Monitor {kib} health and creation progress. + -Similar to Elasticsearch, you can retrieve details about Kibana instances: +Similar to {es}, you can retrieve details about {kib} instances with link:https://kubernetes.io/docs/reference/kubectl/generated/kubectl_get/[`get`]: + [source,sh] ---- @@ -243,17 +262,19 @@ And the associated Pods: ---- kubectl get pod --selector='kibana.k8s.elastic.co/name=quickstart' ---- ++ +{kib} reports an `available` status once link:https://kubernetes.io/docs/reference/kubectl/generated/kubectl_get/[`get`] shows `green` health. If it experiences issues starting up, check the Pod's link:https://kubernetes.io/docs/reference/kubectl/generated/kubectl_logs/[`logs`] to link:{kibana-ref}/access.html#not-ready[troubleshoot {kib} start-up]. -. Access Kibana. +. Access {kib}. + -A `ClusterIP` Service is automatically created for Kibana: +A `ClusterIP` Service is automatically created for {kib}: + [source,sh] ---- kubectl get service quickstart-kb-http ---- + -Use `kubectl port-forward` to access Kibana from your local workstation: +Use `kubectl port-forward` to access {kib} from your local workstation: + [source,sh] ---- @@ -269,12 +290,21 @@ Login as the `elastic` user. 
 The password can be obtained with the following command:
 
 [source,sh]
 ----
 kubectl get secret quickstart-es-elastic-user -o=jsonpath='{.data.elastic}' | base64 --decode; echo
 ----
 
-[id="{p}-upgrade-deployment"]
-== Upgrade your deployment
+For a full description of each `CustomResourceDefinition` (CRD), refer to the <<{p}-api-reference>> or view the CRD files in the link:{eck_github}/tree/{eck_release_branch}/config/crds[project repository]. You can also retrieve information about a CRD from the cluster. For example, describe the {kib} CRD specification with link:https://kubernetes.io/docs/reference/kubectl/generated/kubectl_describe/[`describe`]:
 
-You can add and modify most elements of the original cluster specification provided that they translate to valid transformations of the underlying Kubernetes resources (for example <<{p}-volume-claim-templates, existing volume claims cannot be downsized>>). The operator will attempt to apply your changes with minimal disruption to the existing cluster. You should ensure that the Kubernetes cluster has sufficient resources to accommodate the changes (extra storage space, sufficient memory and CPU resources to temporarily spin up new pods, and so on).
+[source,sh]
+----
+kubectl describe crd kibana
+----
 
-For example, you can grow the cluster to three Elasticsearch nodes:
+This completes the quickstart of deploying a {kib} instance on top of <<{p}-deploy-eck,the ECK operator>> and a <<{p}-deploy-elasticsearch,deployed {es} cluster>>. We recommend continuing to <<{p}-update-deployment,update your deployment>>. For more {kib} configuration options, refer to <<{p}-kibana,Running {kib} on ECK>>.
+
+[id="{p}-update-deployment"]
+== Update your deployment
+
+You can add and modify most elements of the original cluster specification provided that they translate to valid transformations of the underlying Kubernetes resources (for example, <<{p}-volume-claim-templates,existing volume claims cannot be downsized>>).
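Both password commands in this guide decode the same base64-encoded field of the Secret. A local sketch of that decoding step, using a made-up value in place of the real `.data.elastic` field:

```shell
# Made-up secret value; in a real Secret, .data.elastic holds the
# base64-encoded password generated by ECK.
ENCODED=$(printf 'illustrative-password' | base64)

# Mirrors: kubectl get secret ... -o=jsonpath='{.data.elastic}' | base64 --decode
PASSWORD=$(printf '%s' "$ENCODED" | base64 --decode)
echo "$PASSWORD"
# prints: illustrative-password
```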
+The ECK operator will attempt to apply your changes with minimal disruption to the existing cluster. You should ensure that the Kubernetes cluster has sufficient resources to accommodate the changes (extra storage space, sufficient memory and CPU resources to temporarily spin up new Pods, and so on).
+
+For example, you can grow the cluster to three {es} nodes from the <<{p}-deploy-elasticsearch,deployed {es} cluster>> example by updating the `count` and re-running link:https://kubernetes.io/docs/reference/kubectl/generated/kubectl_apply/[`apply`]:
 
 [source,yaml,subs="attributes,+macros"]
 ----
@@ -293,21 +323,6 @@ spec:
 EOF
 ----
 
-[id="{p}-persistent-storage"]
-== Use persistent storage
-
-The cluster that you deployed in this quickstart guide only allocates a persistent volume of 1GiB for storage using the default link:https://kubernetes.io/docs/concepts/storage/storage-classes/[storage class] defined for the Kubernetes cluster. You will most likely want to have more control over this for production workloads. Refer to <<{p}-volume-claim-templates>> for more information.
-
-[id="{p}-check-samples"]
-== Check out the samples
+ECK automatically schedules the requested update. Changes can be monitored through the <<{p}-deploy-eck,ECK operator logs>>, link:https://kubernetes.io/docs/reference/kubernetes-api/cluster-resources/event-v1/[`events`], and the applicable product's link:https://kubernetes.io/docs/reference/kubectl/generated/kubectl_logs/[Pod `logs`]. These either report successful application of the changes or provide context for further troubleshooting. Note that Kubernetes restricts some changes; for example, refer to <<{p}-volume-claim-templates-update,Updating the volume claim settings>>.
 
-You can find a set of sample resources link:{eck_github}/tree/{eck_release_branch}/config/samples[in the project repository].
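As an alternative to re-applying the full manifest, the same scale-up can be expressed as a JSON merge patch. A sketch that only builds and prints the patch without applying it; keep in mind that a merge patch replaces the entire `nodeSets` array, so every node set must be listed in it:

```shell
# Build the merge patch; applying it would be (not run in this sketch):
#   kubectl patch elasticsearch quickstart --type merge -p "$PATCH"
PATCH='{"spec":{"nodeSets":[{"name":"default","count":3}]}}'
echo "$PATCH"
```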
-
-For a full description of each `CustomResourceDefinition` (CRD), refer to the <<{p}-api-reference>> or view the CRD files in the link:{eck_github}/tree/{eck_release_branch}/config/crds[project repository].
-You can also retrieve information about a CRD from the cluster. For example, describe the Elasticsearch CRD specification with:
-
-[source,sh]
-----
-kubectl describe crd elasticsearch
-----
+This completes our quickstart guide for deploying an {es} cluster and a {kib} instance with the ECK operator. We recommend continuing to <<{p}-orchestrating-elastic-stack-applications,Orchestrating Elastic Stack applications>> for more configuration options.