diff --git a/docs/en/logs/configuring.asciidoc b/docs/en/logs/configuring.asciidoc deleted file mode 100644 index 1a01dfd1c3..0000000000 --- a/docs/en/logs/configuring.asciidoc +++ /dev/null @@ -1,46 +0,0 @@ -[role="xpack"] -[[configure-logs-source]] - -:ecs-base-link: {ecs-ref}/ecs-base.html[base] - -=== Configure logs source data - -The default source configuration for logs is specified in the {kibana-ref}/logs-ui-settings-kb.html[{logs-app} settings] in the {kibana-ref}/settings.html[Kibana configuration file]. -The default configuration uses the `filebeat-*` index pattern to query the data. -The default configuration also defines field settings for things like timestamps and container names, and the default columns to show in the logs stream. - -If your logs have custom index patterns, use non-default field settings, or contain parsed fields that you want to expose as individual columns, you can override the default configuration settings. - -To change the configuration settings, click the *Settings* tab. - -NOTE: These settings are shared with metrics. Changes you make here may also affect the settings used by the {metrics-guide}/configure-metrics-source.html[{metrics-app}]. - -In the *Settings* tab, you can modify the following values: - -* *Name*: the name of the source configuration -* *Indices*: the index pattern or patterns of the Elasticsearch indices to read log and metrics data from -* *Fields*: the names of specific fields in the indices that are used to query and interpret the data correctly -* *Log columns*: the columns that are shown in the logs stream - -By default, the logs stream displays the following columns: - -* *Timestamp*: The timestamp of the log entry from the `timestamp` field. -* *Message*: The message extracted from the document. -The content of this field depends on the type of log message. -If no special log message type is detected, the Elastic Common Schema (ECS) {ecs-base-link} field, `message`, is used. - -To add a new column to the logs stream, in the *Settings* tab, click *Add column*. -In the list of available fields, select the field you want to add. -You can start typing a field name in the search box to filter the field list by that name. - -To remove an existing column, click the *Remove this column* icon -image:images/logs-configure-source-dialog-remove-column-button.png[Remove column]. - -When you have completed your changes, click *Apply*. - -If the fields are greyed out and cannot be edited, you may not have sufficient privileges to change the source configuration. -For more information, see {kibana-ref}/xpack-security-authorization.html[Granting access to Kibana]. - -TIP: If {kibana-ref}/xpack-spaces.html[Spaces] are enabled in your Kibana instance, any configuration changes you make here are specific to the current space. -You can make different subsets of data available by creating multiple spaces with different data source configurations. 
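For example, a minimal sketch of overriding these defaults in `kibana.yml` might look like the following. The `xpack.infra.sources.default` keys shown here are assumptions based on the linked settings reference, so confirm the exact names in the {kibana-ref}/logs-ui-settings-kb.html[{logs-app} settings] documentation for your version:

[source,yaml]
----
# Hypothetical override of the default logs source; verify key names
# against the Kibana settings reference before relying on them.
xpack.infra.sources.default.logAlias: "my-app-logs-*"        # custom index pattern
xpack.infra.sources.default.fields.timestamp: "@timestamp"   # timestamp field to sort by
----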
- diff --git a/docs/en/logs/images/actions-menu.png b/docs/en/logs/images/actions-menu.png deleted file mode 100644 index cf0f23efd5..0000000000 Binary files a/docs/en/logs/images/actions-menu.png and /dev/null differ diff --git a/docs/en/logs/images/add-data.png b/docs/en/logs/images/add-data.png deleted file mode 100644 index a2032b7260..0000000000 Binary files a/docs/en/logs/images/add-data.png and /dev/null differ diff --git a/docs/en/logs/images/alert-actions-menu.png b/docs/en/logs/images/alert-actions-menu.png deleted file mode 100644 index 3f96a700a0..0000000000 Binary files a/docs/en/logs/images/alert-actions-menu.png and /dev/null differ diff --git a/docs/en/logs/images/alert-flyout.png b/docs/en/logs/images/alert-flyout.png deleted file mode 100644 index ce9ce30de2..0000000000 Binary files a/docs/en/logs/images/alert-flyout.png and /dev/null differ diff --git a/docs/en/logs/images/analysis-tab-create-ml-job.png b/docs/en/logs/images/analysis-tab-create-ml-job.png deleted file mode 100644 index 0f4115bb93..0000000000 Binary files a/docs/en/logs/images/analysis-tab-create-ml-job.png and /dev/null differ diff --git a/docs/en/logs/images/log-rate-anomalies.png b/docs/en/logs/images/log-rate-anomalies.png deleted file mode 100644 index 74ce8d682e..0000000000 Binary files a/docs/en/logs/images/log-rate-anomalies.png and /dev/null differ diff --git a/docs/en/logs/images/log-rate-entries.png b/docs/en/logs/images/log-rate-entries.png deleted file mode 100644 index efa693a2ac..0000000000 Binary files a/docs/en/logs/images/log-rate-entries.png and /dev/null differ diff --git a/docs/en/logs/images/log-time-filter.png b/docs/en/logs/images/log-time-filter.png deleted file mode 100644 index ffba6f972a..0000000000 Binary files a/docs/en/logs/images/log-time-filter.png and /dev/null differ diff --git a/docs/en/logs/images/logs-action-menu.png b/docs/en/logs/images/logs-action-menu.png deleted file mode 100644 index f1c79b6fa8..0000000000 Binary files a/docs/en/logs/images/logs-action-menu.png and /dev/null differ diff --git a/docs/en/logs/images/logs-add-data.png b/docs/en/logs/images/logs-add-data.png deleted file mode 100644 index 176c71466a..0000000000 Binary files a/docs/en/logs/images/logs-add-data.png and /dev/null differ diff --git a/docs/en/logs/images/logs-configure-source-dialog-remove-column-button.png b/docs/en/logs/images/logs-configure-source-dialog-remove-column-button.png deleted file mode 100644 index 995b7ac1f5..0000000000 Binary files a/docs/en/logs/images/logs-configure-source-dialog-remove-column-button.png and /dev/null differ diff --git a/docs/en/logs/images/logs-console.png b/docs/en/logs/images/logs-console.png deleted file mode 100644 index ddd3346475..0000000000 Binary files a/docs/en/logs/images/logs-console.png and /dev/null differ diff --git a/docs/en/logs/images/logs-monitoring-architecture.png b/docs/en/logs/images/logs-monitoring-architecture.png deleted file mode 100644 index 5b5e7d096b..0000000000 Binary files a/docs/en/logs/images/logs-monitoring-architecture.png and /dev/null differ diff --git a/docs/en/logs/images/logs-view-event-with-filter.png b/docs/en/logs/images/logs-view-event-with-filter.png deleted file mode 100644 index 4e378af39a..0000000000 Binary files a/docs/en/logs/images/logs-view-event-with-filter.png and /dev/null differ diff --git a/docs/en/logs/images/logs-view-in-context.png b/docs/en/logs/images/logs-view-in-context.png deleted file mode 100644 index 09a9e89fc3..0000000000 Binary files a/docs/en/logs/images/logs-view-in-context.png and 
/dev/null differ diff --git a/docs/en/logs/images/time-filter-calendar.png b/docs/en/logs/images/time-filter-calendar.png deleted file mode 100644 index 7487401ca2..0000000000 Binary files a/docs/en/logs/images/time-filter-calendar.png and /dev/null differ diff --git a/docs/en/logs/index.asciidoc b/docs/en/logs/index.asciidoc deleted file mode 100644 index ceffa5afe0..0000000000 --- a/docs/en/logs/index.asciidoc +++ /dev/null @@ -1,27 +0,0 @@ -:doctype: book -:metrics: metrics -:metrics-app: Metrics app -:logs: logs -:logs-app: Logs app - -= Logs Monitoring Guide - -include::{asciidoc-dir}/../../shared/versions/stack/{source_branch}.asciidoc[] - -include::{asciidoc-dir}/../../shared/attributes.asciidoc[] - -include::logs-overview.asciidoc[] - -include::logs-installation.asciidoc[] - -include::logs-ui-intro.asciidoc[] - -include::configuring.asciidoc[] - -include::using.asciidoc[] - -include::log-rate.asciidoc[] - -include::logs-alerting.asciidoc[] - -include::logs-field-reference.asciidoc[] diff --git a/docs/en/logs/log-rate.asciidoc b/docs/en/logs/log-rate.asciidoc deleted file mode 100644 index 452ddba39d..0000000000 --- a/docs/en/logs/log-rate.asciidoc +++ /dev/null @@ -1,94 +0,0 @@ -[role="xpack"] -[[detect-log-anomalies]] -=== Detect and inspect log anomalies - -beta::[] - -When the {ml} {anomaly-detect} features are enabled, -you can use the **Log rate** page in the {logs-app}. -**Log rate** helps you to detect and inspect log anomalies and the log partitions where the log anomalies occur. -This means you can easily spot anomalous behavior without significant human intervention -- -no more manually sampling log data, calculating rates, and determining if rates are normal. - -*Log rate* automatically highlights periods of time where the log rate is outside expected bounds, -and therefore may be anomalous. -You can use this information as a basis for further investigations. -For example: - -* A significant drop in the log rate might suggest that a piece of infrastructure stopped responding, -and thus we're serving fewer requests. -* A spike in the log rate could denote a DDoS attack. -This may lead to an investigation of IP addresses from incoming requests. - -You can also view log anomalies directly in the {kibana-ref}/xpack-ml-anomalies.html[Machine Learning app]. - -[float] -[[logs-analysis-create-ml-job]] -==== Enable log rate analysis and anomaly detection - -Create a machine learning job to enable log rate analysis and anomaly detection. - -1. To enable log rate analysis and anomaly detection, -you must first create your own {kibana-ref}/xpack-spaces.html[space]. -2. Within a space, navigate to the {logs-app} and select *Log rate*. -Here, you'll be prompted to create a machine learning job that will carry out the log rate analysis. -3. Choose a time range for the machine learning analysis. -4. Add the indices that contain the logs you want to analyze. -5. Click *Create ML job*. -6. You're now ready to analyze your log partitions. - -[role="screenshot"] -image::images/analysis-tab-create-ml-job.png[Create machine learning job] - -Even though the machine learning job's time range is fixed, -you can still use the time filter to adjust the results that are shown in your analysis. 
- -[role="screenshot"] -image::images/log-time-filter.png[Log rate time filter] - -[float] -[[logs-analysis-entries-chart]] -==== Log entries chart - -The log entries chart shows an overall, color-coded visualization of the log entry rate, -partitioned according to the value of the Elastic Common Schema (ECS) -{ecs-ref}/ecs-event.html[`event.dataset`] field. -This chart helps you quickly spot increases or decreases in each partition's log rate. - -If you have a lot of log partitions, use the following to filter your data: - -* Hover over a time range to see the log rate for each partition. -* Click or hover on a partition name to show, hide, or highlight the partition values. - -[role="screenshot"] -image::images/log-rate-entries.png[Log rate entries chart] - -[float] -[[logs-analysis-anomalies-chart]] -==== Anomalies charts - -The Anomalies chart shows the time range where anomalies were detected. -The typical rate values are shown in grey, while the anomalous regions are color-coded and superimposed on top. - -When a time range is flagged as anomalous, -the machine learning algorithms have detected unusual log rate activity. -This might be because: - -* The log rate is significantly higher than usual. -* The log rate is significantly lower than usual. -* Other anomalous behavior has been detected. -For example, the log rate is within bounds, but not fluctuating when it is expected to. - -The level of anomaly detected in a time period is color-coded from red through orange and yellow to blue. -Red indicates a critical anomaly level, while blue is a warning level. - -To help you further drill down into a potential anomaly, -you can view an anomaly chart for each individual partition. - -Anomaly scores range from 0 (no anomalies) to 100 (critical). - -To analyze the anomalies in more detail, click *Analyze in ML*, which opens the -{kibana-ref}/xpack-ml.html[Anomaly Explorer in Machine Learning]. - -[role="screenshot"] -image::images/log-rate-anomalies.png[Log rate anomalies chart] diff --git a/docs/en/logs/logs-alerting.asciidoc b/docs/en/logs/logs-alerting.asciidoc deleted file mode 100644 index 1d237f18bb..0000000000 --- a/docs/en/logs/logs-alerting.asciidoc +++ /dev/null @@ -1,22 +0,0 @@ -[role="xpack"] -[[create-log-alert]] -=== Create an alert - -[float] -==== Overview - -To use the alerting functionality, you need to {kibana-ref}/alerting-getting-started.html#alerting-setup-prerequisites[set up alerting]. - -You can then select the *Create alert* option from the *Alerts* actions dropdown. - -[float] -==== Fields and comparators - -The comparators available for conditions depend on the chosen field. The combinations available are: - -- Numeric fields: *more than*, *more than or equals*, *less than*, *less than or equals*, *equals*, and *does not equal*. -- Aggregatable fields: *is* and *is not*. -- Non-aggregatable fields: *matches*, *does not match*, *matches phrase*, *does not match phrase*. - -[role="screenshot"] -image::images/alert-flyout.png[Create alert flyout] diff --git a/docs/en/logs/logs-field-reference.asciidoc b/docs/en/logs/logs-field-reference.asciidoc deleted file mode 100644 index 6a08116f37..0000000000 --- a/docs/en/logs/logs-field-reference.asciidoc +++ /dev/null @@ -1,142 +0,0 @@ -[[logs-fields-reference]] -[chapter, role="xpack"] -= Logs fields reference - -This section lists the required fields the {logs-app} uses to display data. -Some of the fields listed are https://www.elastic.co/guide/en/ecs/current/ecs-reference.html#_what_is_ecs[ECS fields]. 
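As a quick orientation, here is a sketch of a log event that carries the fields described in this reference, written as YAML for readability (dotted field names are shown flat for brevity; the values simply reuse the examples given below):

[source,yaml]
----
# Illustrative log event built from the example values in this reference.
"@timestamp": "2020-05-27T15:22:27.982Z"
message: "Hello World"
event.dataset: "apache.access"
host.name: "MacBook-Elastic.local"
log.file.path: "/var/log/demo.log"
container.id: "data"
----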
- -IMPORTANT: Beat modules (for example, {filebeat-ref}/filebeat-modules.html[{filebeat} modules]) -are ECS-compliant, so manual field mapping is not required, and all {logs-app} -data is automatically populated. If you cannot use {beats}, map your data to -{ecs-ref}[ECS fields] (see {ecs-ref}/ecs-converting.html[how to map data to ECS]). -You can also try using the experimental https://github.com/elastic/ecs-mapper[ECS Mapper] tool. - -`@timestamp`:: - -Date/time when the event originated. -+ -This is the date/time extracted from the event, typically representing when the event was generated by the source. -If the event source has no original timestamp, this value is typically populated by the first time the event was received by the pipeline. -Required field for all events. -+ -type: date -+ -required: True -+ -ECS field: True -+ -example: `May 27, 2020 @ 15:22:27.982` - - -`_doc`:: - -This field is used to break ties between two entries with the same timestamp. -+ -required: True -+ -ECS field: False - - -`container.id`:: - -Unique container id. -+ -type: keyword -+ -required: True -+ -ECS field: True -+ -example: `data` - - -`event.dataset`:: - -Name of the dataset. -+ -If an event source publishes more than one type of log or event (e.g. access log, error log), the dataset is used to specify which one the event comes from. -+ -It’s recommended but not required to start the dataset name with the module name, followed by a dot, then the dataset name. -+ -type: keyword -+ -required: True, if you want to use the {ml-features}. -+ -ECS field: True -+ -example: `apache.access` - - -`host.hostname`:: - -Hostname of the host. -+ -It normally contains what the `hostname` command returns on the host machine. -+ -type: keyword -+ -required: True, if you want to enable and use the *View in Context* feature. -+ -ECS field: True -+ -example: `Elastic.local` - - -`host.name`:: - -Name of the host. -+ -It can contain what `hostname` returns on Unix systems, the fully qualified domain name, or a name specified by the user. The sender decides which value to use. -+ -type: keyword -+ -required: True -+ -ECS field: True -+ -example: `MacBook-Elastic.local` - - -`kubernetes.pod.uid`:: - -Kubernetes Pod UID. -+ -type: keyword -+ -required: True -+ -ECS field: False -+ -example: `8454328b-673d-11ea-7d80-21010a840123` - - -`log.file.path`:: - -Full path to the log file this event came from, including the file name. It should include the drive letter, when appropriate. -+ -If the event wasn't read from a log file, do not populate this field. -+ -type: keyword -+ -required: True, if you want to use the *View in Context* feature. -+ -ECS field: False -+ -example: `/var/log/demo.log` - - -`message`:: - -For log events the message field contains the log message, optimized for viewing in a log viewer. -+ -For structured logs without an original message field, other fields can be concatenated to form a human-readable summary of the event. -+ -If multiple messages exist, they can be combined into one message. -+ -type: text -+ -required: True -+ -ECS field: True -+ -example: `Hello World` diff --git a/docs/en/logs/logs-installation.asciidoc b/docs/en/logs/logs-installation.asciidoc deleted file mode 100644 index 529a0e62ab..0000000000 --- a/docs/en/logs/logs-installation.asciidoc +++ /dev/null @@ -1,107 +0,0 @@ -[[install-logs-monitoring]] -[role="xpack"] -== Install Logs - -The easiest way to get started with Elastic Logs is by using our hosted {es} Service on Elastic Cloud. 
The {es} Service is available on both AWS and GCP, and automatically configures {es} and {kib}. - -NOTE: If your data uses nonstandard fields, you may need to modify some of the default <>. - -[float] -=== Hosted Elasticsearch Service - -Skip installing and managing your own {es} and {kib} instance by using our hosted {es} Service. -{ess-trial}[Try out the {es} Service for free]. - -[float] -=== Install the stack yourself - -If you'd rather install the stack yourself, -first see the https://www.elastic.co/support/matrix[Elastic Support Matrix] for information about supported operating systems and product compatibility. - -* <> -* <> (version 6.5 or later) with a basic license -* <> (version 6.5 or later) on each of the systems you want to -monitor - -[[install-elasticsearch-logs]] -=== Step 1: Install Elasticsearch - -Install an {es} cluster, start it up, and make sure it's running. - -. Verify that your system meets the -https://www.elastic.co/support/matrix#matrix_jvm[minimum JVM requirements] for {es}. -. {stack-gs}/get-started-elastic-stack.html#install-elasticsearch[Install Elasticsearch]. -. {stack-gs}/get-started-elastic-stack.html#_make_sure_elasticsearch_is_up_and_running[Make sure Elasticsearch is up and running]. - -[[install-kibana-logs]] -=== Step 2: Install Kibana - -Install {kib}, start it up, and open up the web interface: - -. {stack-gs}/get-started-elastic-stack.html#install-kibana[Install Kibana]. -. {stack-gs}/get-started-elastic-stack.html#_launch_the_kibana_web_interface[Launch the Kibana Web Interface]. - -[[install-shippers]] -=== Step 3: Set up and run {filebeat} - -IMPORTANT: This section describes using {filebeat} to ingest data. There are other available methods to ingest data, such as {logstash-ref}/introduction.html[{ls}] or Fluentd. - -To start collecting logs data, you need to install {filebeat} and configure the {filebeat} modules directly from Kibana. - -Alternatively, you can install {filebeat} and configure the {filebeat} modules yourself. - -[float] -==== Install {filebeat} from {kib} - -IMPORTANT: {filebeat-ref}/filebeat-modules.html[{filebeat} modules] -are ECS-compliant, so manual <> mapping is not required, and all {logs-app} -data is automatically populated. - -To install a {filebeat} module from {kib}, on the machine where you want to collect the data, open a {kib} browser window. -In the *Observability* section displayed on the home page of {kib}, click *Add log data*. -Now follow the instructions for the type of data you want to collect. -The instructions include how to install and configure {filebeat}, and enable the appropriate {filebeat} module for your data. - -[role="screenshot"] -image::images/add-data.png[Add log data] - -[float] -==== Install {filebeat} yourself - -If you want to install {filebeat} the old-fashioned way, follow the instructions in {filebeat-ref}/filebeat-installation-configuration.html[{filebeat} quick start] and enable modules for the logs you want to collect. -If there is no module for the logs you want to collect, see {filebeat-ref}/configuration-filebeat-options.html[Configure inputs]. - -[float] -=== Enable {filebeat} modules - -To start collecting logs data, you need to enable the appropriate modules in {filebeat}. 
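For example, modules can be switched on with the `filebeat modules enable` command and then tuned in their config files under `modules.d/`. A minimal sketch for the `system` module follows; the fileset names match the {filebeat} module docs, but treat the exact options as version-dependent:

[source,yaml]
----
# modules.d/system.yml -- enabled via: filebeat modules enable system
- module: system
  # Collect the host syslog.
  syslog:
    enabled: true
  # Collect authorization logs.
  auth:
    enabled: true
----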
- -To collect logs from your host system, enable: - -* {filebeat-ref}/filebeat-module-system.html[{filebeat} `system` module] -* {filebeat-ref}/filebeat-modules.html[Other {filebeat} modules] needed for your environment, such as `apache2`, `redis`, and so on - -To collect logs from Docker containers, enable: - -* {filebeat-ref}/filebeat-input-docker.html[{filebeat} `docker` input] -* {filebeat-ref}/add-docker-metadata.html[{filebeat} `add_docker_metadata` processor] - -To collect logs from Kubernetes pods, enable: - -* {filebeat-ref}/filebeat-input-docker.html[{filebeat} `docker` input] -* {filebeat-ref}/add-kubernetes-metadata.html[{filebeat} `add_kubernetes_metadata` processor] - -[float] -=== Configure your data sources - -If your logs data has nonstandard fields, you may need to modify some configuration settings in {kib}, such as the index pattern used to query the data, and the timestamp field used for sorting. -To modify configuration settings, use the <> in the {logs-app}. -Alternatively, see {kibana-ref}/logs-ui-settings-kb.html[{logs} settings] for a complete list of logs configuration settings. - -[float] -=== More about container monitoring - -If you're monitoring Docker containers or Kubernetes pods, you can use autodiscovery to automatically change the configuration settings in response to changes in your containers. -Autodiscovery ensures that even when your container configuration changes, data is still collected. -To learn how to do this, see {filebeat-ref}/configuration-autodiscover.html[{filebeat} autodiscover configuration]. diff --git a/docs/en/logs/logs-overview.asciidoc b/docs/en/logs/logs-overview.asciidoc deleted file mode 100644 index 007d061d62..0000000000 --- a/docs/en/logs/logs-overview.asciidoc +++ /dev/null @@ -1,33 +0,0 @@ -[[logs-overview]] -[role="xpack"] -== Logs monitoring overview - -++++ -Overview -++++ - -Logs monitoring enables you to view logs from your infrastructure to help identify problems in real-time. -You can view logs from servers, containers, services, and so on. -Additionally, you can drill down to view detailed information about an individual log entry, or you can seamlessly switch to view corresponding metrics, uptime information, or APM traces where available. You can also use machine learning to detect specific log anomalies automatically. - -[float] -=== Logs monitoring components - -Logs monitoring requires the following {stack} components. - -*https://www.elastic.co/products/elasticsearch[{es}]* is a real-time, -distributed storage, search, and analytics engine. -{es} can store, search, and analyze large volumes of data in near real-time. -The {logs-app} uses {es} to store logs data in {es} documents which are queried on demand. - -*https://www.elastic.co/products/beats[{beats}]* are open source data shippers that you install as agents on your servers to send data to {es}. -The {logs-app} uses Filebeat to collect logs from the servers, containers, and other services in your infrastructure. -Filebeat modules are available for most typical servers, containers, and services. - -*https://www.elastic.co/products/kibana[{kib}]* is an open source analytics and visualization platform designed to work with {es}. -You use {kib} to search, view, and interact with the logs data stored in {es}. -You can perform advanced data analysis and visualize your data in a variety of charts, tables, -and maps. -The <> in {kib} provides a dedicated user interface to view logs from the servers, containers, and services in your infrastructure. 
- -image::images/logs-monitoring-architecture.png[Logs monitoring components] diff --git a/docs/en/logs/logs-ui-intro.asciidoc b/docs/en/logs/logs-ui-intro.asciidoc deleted file mode 100644 index 4e2944b8f9..0000000000 --- a/docs/en/logs/logs-ui-intro.asciidoc +++ /dev/null @@ -1,15 +0,0 @@ -[[logs-app-overview]] -[role="xpack"] -== {logs-app} - -After you have set up log ingestion and log data is streaming to {es}, you can view real-time and historical logs in a compact, customizable display. -The log data is correlated with metrics data in the {metrics-guide}/metrics-app-overview.html[{metrics-app}], making it easier for you to diagnose problems. -You can also view related application traces or uptime information where available. - -You can stream the logs in real time, or view historical logs from a specified time range. - -The search bar in the log viewer supports {kibana-ref}/kuery-query.html[Kibana Query Language]. -You can enter ad hoc or structured queries to filter the log entries. - -[role="screenshot"] -image::images/logs-console.png[Logs app in Kibana] diff --git a/docs/en/logs/using.asciidoc b/docs/en/logs/using.asciidoc deleted file mode 100644 index 9830db9027..0000000000 --- a/docs/en/logs/using.asciidoc +++ /dev/null @@ -1,93 +0,0 @@ -[role="xpack"] -[[explore-logs-data]] -=== Explore and filter logs data -Use the {logs-app} in {kib} to explore and filter your logs in real time. - -You can customize the output to focus on the data you want to see and to control how you see it. -You can also view related application traces or uptime information where available. - -[role="screenshot"] -image::images/logs-console.png[Logs Console in Kibana] - -[float] -[[logs-search]] -==== Use the power of search - -To perform ad hoc searches for specific text, use the search bar. -You can also create structured queries using {kibana-ref}/kuery-query.html[Kibana Query Language]. -For example, enter `host.hostname : "host1"` to see only the information for `host1`. - -[float] -[[logs-configure-source]] -==== Configure the data to use for your logs -Are you using a custom index pattern to store the log entries? -Do you want to limit the entries shown or change the fields displayed in the columns? -If so, <> to change the index pattern and other settings. - -[float] -[[logs-time]] -==== Specify the time and date - -Click image:images/time-filter-calendar.png[time filter calendar], then choose the time range for the logs. - -Log entries for the specified time appear in the middle of the page. To quickly jump to a nearby point in time, click the minimap timeline to the right. - -[float] -[[logs-customize]] -==== Customize your view -Click *Customize* to customize the view. -Here, you can choose whether to wrap long lines and select your preferred text size. - -[float] -==== Configuring the data to use for your logs - -If your logs have custom index patterns, use non-default field settings, or contain parsed fields that you want to expose as individual columns, you can <>. - -[[stream-logs]] -=== Stream or pause logs -Click *Stream live* to start streaming live log data or click *Stop streaming* to focus on historical data. - -When you are viewing historical data, you can scroll back through the entries as far as there is data available. - -When you are streaming live data, the most recent log appears at the bottom of the page. -In live streaming mode, you are not able to choose a different time in the time selector or use the minimap timeline. 
-To do either of these things, you need to stop live streaming first. - -[float] -[[logs-highlight]] -=== Highlight a phrase in the logs stream -To highlight a word or phrase in the logs stream, click *Highlights* and enter your search phrase. - -[[inspect-log-events]] -=== Inspect log events -To inspect a log event, hover over it, then click the *View actions for line* icon image:images/logs-action-menu.png[View actions for line icon]. On the menu that opens, select *View details*. This opens the *Log event document details* fly-out, which shows the fields associated with the log event. - -To filter the logs stream by one of the field values, in the log event details, click the *View event with filter* icon image:images/logs-view-event-with-filter.png[View event icon] beside the field. -A search filter is automatically added to the logs stream to enable you to filter the entries by this field and value. - -[float] -[[log-view-in-context]] -=== View log line in context -To view a certain line in its context (for example, with other log lines from the same file, or the same cloud container), hover over it, then click the *View actions for line* icon image:images/logs-action-menu.png[View actions for line icon]. On the menu that opens, select *View in context*. This opens the *View log in context* modal, which shows the log line in its context. - -[role="screenshot"] -image::images/logs-view-in-context.png[View a log line in context] - -[float] -[[view-log-anomalies]] -=== View log anomalies - -When the machine learning anomaly detection features are enabled, click *Log rate*, which allows you to -<> in your log data. - -[[logs-integrations]] -=== Integrate with Uptime and APM - -To see other actions related to the event, click *Actions* in the log event document details. -Depending on the event and the features you have configured, you can: - -* Select *View status in Uptime* to {uptime-guide}/uptime-app-overview.html[view related uptime information] in the *Uptime* app. -* Select *View in APM* to {kibana-ref}/traces.html[view related APM traces] in the *APM* app. - -[role="screenshot"] -image::images/actions-menu.png[Integrate with Uptime and APM] diff --git a/docs/en/metrics/aws-ec2-metricset.asciidoc b/docs/en/metrics/aws-ec2-metricset.asciidoc deleted file mode 100644 index cb242ac708..0000000000 --- a/docs/en/metrics/aws-ec2-metricset.asciidoc +++ /dev/null @@ -1,18 +0,0 @@ -[[aws-ec2-metricset]] -[role="xpack"] - -== AWS EC2 instance metrics - -*CPU Usage*:: Average of `aws.ec2.cpu.total.pct` - -*Inbound Traffic*:: Average of `aws.ec2.network.in.bytes_per_sec` - -*Outbound Traffic*:: Average of `aws.ec2.network.out.bytes_per_sec` - -*Disk Reads (Bytes)*:: Average of `aws.ec2.diskio.read.bytes_per_sec` - -*Disk Writes (Bytes)*:: Average of `aws.ec2.diskio.write.bytes_per_sec` - - -For information about which required fields the {metrics-app} uses to display EC2 instance metrics, see the <>. 
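These metrics are shipped by the {metricbeat} `aws` module. As a sketch only (option names taken from the Metricbeat module docs; the `period` value and credential handling are assumptions that will vary by environment), a minimal configuration for the `ec2` metricset might look like:

[source,yaml]
----
# modules.d/aws.yml -- minimal sketch, not a complete configuration
- module: aws
  period: 300s          # assumes 5-minute CloudWatch granularity for EC2
  metricsets:
    - ec2
  access_key_id: '${AWS_ACCESS_KEY_ID}'
  secret_access_key: '${AWS_SECRET_ACCESS_KEY}'
----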
- diff --git a/docs/en/metrics/aws-rds-metricset.asciidoc b/docs/en/metrics/aws-rds-metricset.asciidoc deleted file mode 100644 index cd794a9979..0000000000 --- a/docs/en/metrics/aws-rds-metricset.asciidoc +++ /dev/null @@ -1,18 +0,0 @@ -[[aws-rds-metricset]] -[role="xpack"] - -== AWS RDS database metrics - -*CPU Usage*:: Average of `aws.rds.cpu.total.pct` - -*Connections*:: Average of `aws.rds.database_connections` - -*Queries Executed*:: Average of `aws.rds.queries` - -*Active Transactions*:: Average of `aws.rds.transactions.active` - -*Latency*:: Average of `aws.rds.latency.dml` - - -For information about which required fields the {metrics-app} uses to display RDS database metrics, see the <>. - diff --git a/docs/en/metrics/aws-s3-metricset.asciidoc b/docs/en/metrics/aws-s3-metricset.asciidoc deleted file mode 100644 index 6f3a2d914b..0000000000 --- a/docs/en/metrics/aws-s3-metricset.asciidoc +++ /dev/null @@ -1,18 +0,0 @@ -[[aws-s3-metricset]] -[role="xpack"] - -== AWS S3 bucket metrics - -*Bucket Size*:: Average of `aws.s3_daily_storage.bucket.size.bytes` - -*Total Requests*:: Average of `aws.s3_request.requests.total` - -*Number of Objects*:: Average of `aws.s3_daily_storage.number_of_objects` - -*Downloads (Bytes)*:: Average of `aws.s3_request.downloaded.bytes` - -*Uploads (Bytes)*:: Average of `aws.s3_request.uploaded.bytes` - - -For information about which required fields the {metrics-app} uses to display S3 bucket metrics, see the <>. - diff --git a/docs/en/metrics/aws-sqs-metricset.asciidoc b/docs/en/metrics/aws-sqs-metricset.asciidoc deleted file mode 100644 index eda4f6a1ee..0000000000 --- a/docs/en/metrics/aws-sqs-metricset.asciidoc +++ /dev/null @@ -1,17 +0,0 @@ -[[aws-sqs-metricset]] -[role="xpack"] - -== AWS SQS queue metrics - -*Messages Available*:: Max of `aws.sqs.messages.visible` - -*Messages Delayed*:: Max of `aws.sqs.messages.delayed` - -*Messages Added*:: Max of `aws.sqs.messages.sent` - -*Messages Returned Empty*:: Max of `aws.sqs.messages.not_visible` - -*Oldest Message*:: Max of `aws.sqs.oldest_message_age.sec` - - -For information about which required fields the {metrics-app} uses to display SQS queue metrics, see the <>. \ No newline at end of file diff --git a/docs/en/metrics/configuring-metrics-source.asciidoc b/docs/en/metrics/configuring-metrics-source.asciidoc deleted file mode 100644 index ab56bc2ecc..0000000000 --- a/docs/en/metrics/configuring-metrics-source.asciidoc +++ /dev/null @@ -1,28 +0,0 @@ -[role="xpack"] -[[configure-metrics-source]] - -=== Configure metrics source data - -If your metrics have custom index patterns, or use non-default field settings, you can override the default configuration settings. - -The default source configuration for metrics is specified in the {kibana-ref}/infrastructure-ui-settings-kb.html[Metrics app settings] in the {kibana-ref}/settings.html[Kibana configuration file]. -The default configuration uses the `metricbeat-*` index pattern to query the data. -The default configuration also defines field settings for things like timestamps and container names. - -To change the configuration settings, click the *Settings* tab. - -NOTE: These settings are shared with logs. Changes you make here may also affect the settings used by the {logs-app}. 
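If you prefer to set the defaults in the Kibana configuration file rather than in the UI, a sketch might look like the following; as with the logs source, the `xpack.infra.sources.default` key names are assumptions to verify against the linked settings reference:

[source,yaml]
----
# Hypothetical kibana.yml override of the default metrics source.
xpack.infra.sources.default.metricAlias: "custom-metrics-*"  # custom index pattern
xpack.infra.sources.default.fields.timestamp: "@timestamp"   # timestamp field
----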
- -In the *Settings* tab, you can change the values in these sections: - -* *Name*: the name of the source configuration -* *Indices*: the index pattern or patterns of the Elasticsearch indices to read log and metrics data from -* *Fields*: the names of specific fields in the indices that are used to query and interpret the data correctly - -When you have completed your changes, click *Apply*. - -If the fields are greyed out and cannot be edited, you may not have sufficient privileges to change the source configuration. -For more information, see {kibana-ref}/xpack-security-authorization.html[Granting access to Kibana]. - -TIP: If {kibana-ref}/xpack-spaces.html[Spaces] are enabled in your Kibana instance, any configuration changes you make here are specific to the current space. -You can make different subsets of data available by creating multiple spaces with different data source configurations. \ No newline at end of file diff --git a/docs/en/metrics/docker-metricset.asciidoc b/docs/en/metrics/docker-metricset.asciidoc deleted file mode 100644 index 4f6ca6912a..0000000000 --- a/docs/en/metrics/docker-metricset.asciidoc +++ /dev/null @@ -1,15 +0,0 @@ -[[docker-metricset]] -[role="xpack"] - -== Docker container metrics - -*CPU Usage*:: Average of `docker.cpu.total.pct` - -*Memory Usage*:: Average of `docker.memory.usage.pct` - -*Inbound Traffic*:: Derivative of the maximum of `docker.network.in.bytes` scaled to a 1 second rate - -*Outbound Traffic*:: Derivative of the maximum of `docker.network.out.bytes` scaled to a 1 second rate - - -For information about which required fields the {metrics-app} uses to display Docker metrics, see the <>. \ No newline at end of file diff --git a/docs/en/metrics/explore-metrics-data.asciidoc b/docs/en/metrics/explore-metrics-data.asciidoc deleted file mode 100644 index c613401ceb..0000000000 --- a/docs/en/metrics/explore-metrics-data.asciidoc +++ /dev/null @@ -1,114 +0,0 @@ -[role="xpack"] -[[explore-metrics-data]] -=== Explore and filter metrics data - -Use the Metrics app in {kib} to monitor your infrastructure metrics and identify problems in real time. -You can explore metrics for hosts, containers, and services. -You can also drill down to view more detailed metrics, or seamlessly switch to view the corresponding logs, application traces, and uptime information. - -Initially, the *Inventory* tab shows an overview of the hosts in your infrastructure and the current CPU usage for each host. -From here, you can view other metrics or drill down into areas of interest. - -[role="screenshot"] -image::images/infra-sysmon.png[Infrastructure Overview in Kibana] - -[float] -[[infra-cat]] -==== Choose the high-level view of your infrastructure - -Select the high-level view from *Hosts*, *Kubernetes*, or *Docker*. -When you change views, you see the same data through the perspective of a different category. - -The default representation is the *Map view*, which shows your components in a _waffle map_ of one or more rectangular grids. -If the view you select has a large number of components, you can hover over a component to see the details for that component. Alternatively, if you would prefer to see your infrastructure as a table, click *Table view*. - -[float] -[[infra-metric]] -==== Select the metric to view - -Select the metric to view from the *Metric* dropdown list. -The available metrics are those that are most relevant for the high-level view you selected. 
- -[float] -[[infra-group]] -==== Group components - -Select the way you want to group the infrastructure components from the *Group By* dropdown list. -The available options are specific to your physical, virtual, or container-based infrastructure. -Examples of grouping options include *Availability Zone*, *Machine Type*, *Project ID*, and *Cloud Provider* for hosts, and *Namespace* and *Node* for Kubernetes. - -[float] -[[infra-search]] -==== Use the power of search - -Use the search bar to perform ad hoc searches for specific text. -You can also create structured searches using {kibana-ref}/kuery-query.html[Kibana Query Language]. -For example, enter `host.hostname : "host1"` to see only the information for `host1`. - -[float] -[[infra-date]] -==== Specify the time and date - -Click the time filter image:images/infra-time-selector.png[time filter icon] to choose the timeframe for the metrics. -The values shown are the values for the last minute at the specified time and date. - -[float] -[[infra-refresh]] -==== Auto-refresh metrics - -Select *Auto-refresh* to keep up-to-date metrics information coming in, or *Stop refreshing* to focus on historical data without new distractions. - -[float] -[[infra-metrics-explorer]] -==== Visualize multiple metrics in Metrics Explorer - -<> allows you to visualize and analyze metrics for multiple components in a powerful and configurable way. Click the *Metrics Explorer* tab to get started. - -[[view-infrastructure-metrics]] - -=== View infrastructure metrics - -When you select *View Metrics* for a component in your infrastructure, you can view detailed metrics for that component, and for any related components. -You can also view additional component metadata. - -[role="screenshot"] -image::images/infra-view-metrics.png[Infrastructure View Metrics in Kibana] - -[[infra-view-metrics-date]] -==== Specify the time and date range - -Use the time filter to select the time and date range for the metrics. - -To quickly select some popular time range options, click the calendar dropdown image:images/time-filter-calendar.png[]. In this popup you can choose from: - -* *Quick select* to choose a recent time range, and use the back and forward arrows to move through the time ranges -* *Commonly used* to choose a time range from some commonly used options such as *Last 15 minutes*, *Today*, or *Week to date* -* *Refresh every* to specify an auto-refresh rate - -NOTE: When you start auto-refresh from within this dialog, the calendar dropdown changes to a clock image:images/time-filter-clock.png[]. - -For complete control over the start and end times, click the start time or end time shown in the bar beside the calendar dropdown. In this popup, you can choose from the *Absolute*, *Relative* or *Now* tabs, then specify the required options. - -[float] -[[infra-view-refresh-metrics-date]] -==== Refresh the metrics - -You can click *Refresh* to manually refresh the metrics. - -[[metrics-integrations]] -=== Integrate with Uptime, Logs, and APM - -Hover over a component to see more information about that component. - -Click a component to see the other actions available for that component. - -Depending on the features you have installed and configured, you can also: - -* View {uptime-guide}/uptime-app-overview.html[Uptime Information] in the *Uptime* app. - -* View {logs-guide}/inspect-log-events.html[Logs Information] in the *Logs* app. - -* View {kibana-ref}/traces.html[APM Traces] in the *APM* app. 
- -[role="screenshot"] -image::images/metrics-integrations.png[Metrics integrations] diff --git a/docs/en/metrics/host-metricset.asciidoc b/docs/en/metrics/host-metricset.asciidoc deleted file mode 100644 index 791e0119f8..0000000000 --- a/docs/en/metrics/host-metricset.asciidoc +++ /dev/null @@ -1,19 +0,0 @@ -[[host-metricset]] -[role="xpack"] - -== Host metrics - -*CPU Usage*:: Average of `system.cpu.user.pct` added to the average of `system.cpu.system.pct` divided by `system.cpu.cores` - -*Memory Usage*:: Average of `system.memory.actual.used.pct` - -*Load*:: Average of `system.load.5` - -*Inbound Traffic*:: Derivative of the maximum of `system.network.in.bytes` scaled to a 1 second rate - -*Outbound Traffic*:: Derivative of the maximum of `system.network.out.bytes` scaled to a 1 second rate - -*Log Rate*:: Derivative of the cumulative sum of the document count scaled to a 1 second rate. -This metric relies on the same indices as the logs. - -For information about which required fields the {metrics-app} uses to display host metrics, see the <>. diff --git a/docs/en/metrics/images/add-data.png b/docs/en/metrics/images/add-data.png deleted file mode 100644 index a2032b7260..0000000000 Binary files a/docs/en/metrics/images/add-data.png and /dev/null differ diff --git a/docs/en/metrics/images/create-metrics-alert.png b/docs/en/metrics/images/create-metrics-alert.png deleted file mode 100644 index 3d83cc17ec..0000000000 Binary files a/docs/en/metrics/images/create-metrics-alert.png and /dev/null differ diff --git a/docs/en/metrics/images/infra-sysmon.png b/docs/en/metrics/images/infra-sysmon.png deleted file mode 100644 index cb1cb7a21c..0000000000 Binary files a/docs/en/metrics/images/infra-sysmon.png and /dev/null differ diff --git a/docs/en/metrics/images/infra-time-selector.png b/docs/en/metrics/images/infra-time-selector.png deleted file mode 100644 index 181fac4c7b..0000000000 Binary files a/docs/en/metrics/images/infra-time-selector.png and /dev/null differ diff --git a/docs/en/metrics/images/infra-view-metrics.png b/docs/en/metrics/images/infra-view-metrics.png deleted file mode 100644 index 6001f18d28..0000000000 Binary files a/docs/en/metrics/images/infra-view-metrics.png and /dev/null differ diff --git a/docs/en/metrics/images/metrics-alert-message.png b/docs/en/metrics/images/metrics-alert-message.png deleted file mode 100644 index a6ffec324c..0000000000 Binary files a/docs/en/metrics/images/metrics-alert-message.png and /dev/null differ diff --git a/docs/en/metrics/images/metrics-explorer-screen.png b/docs/en/metrics/images/metrics-explorer-screen.png deleted file mode 100644 index 6d56491f7d..0000000000 Binary files a/docs/en/metrics/images/metrics-explorer-screen.png and /dev/null differ diff --git a/docs/en/metrics/images/metrics-integrations.png b/docs/en/metrics/images/metrics-integrations.png deleted file mode 100644 index 25e6f54be2..0000000000 Binary files a/docs/en/metrics/images/metrics-integrations.png and /dev/null differ diff --git a/docs/en/metrics/images/metrics-monitoring-architecture.png b/docs/en/metrics/images/metrics-monitoring-architecture.png deleted file mode 100644 index 8d0b997b98..0000000000 Binary files a/docs/en/metrics/images/metrics-monitoring-architecture.png and /dev/null differ diff --git a/docs/en/metrics/images/time-filter-calendar.png b/docs/en/metrics/images/time-filter-calendar.png deleted file mode 100644 index d0019c99fe..0000000000 Binary files a/docs/en/metrics/images/time-filter-calendar.png and /dev/null differ diff --git 
a/docs/en/metrics/images/time-filter-clock.png b/docs/en/metrics/images/time-filter-clock.png deleted file mode 100644 index fe8542aad4..0000000000 Binary files a/docs/en/metrics/images/time-filter-clock.png and /dev/null differ diff --git a/docs/en/metrics/index.asciidoc b/docs/en/metrics/index.asciidoc deleted file mode 100644 index 62d55c10aa..0000000000 --- a/docs/en/metrics/index.asciidoc +++ /dev/null @@ -1,30 +0,0 @@ -:doctype: book -:metrics: metrics -:metrics-app: Metrics app -:logs: logs -:logs-app: Logs app - -= Metrics Monitoring Guide - -include::{asciidoc-dir}/../../shared/versions/stack/{source_branch}.asciidoc[] - -include::{asciidoc-dir}/../../shared/attributes.asciidoc[] - -include::metrics-overview.asciidoc[] - -include::metrics-installation.asciidoc[] - -include::metrics-app-overview.asciidoc[] - -include::configuring-metrics-source.asciidoc[] - -include::explore-metrics-data.asciidoc[] - -include::metrics-explorer.asciidoc[] - -include::metrics-alerting.asciidoc[] - -include::metrics-fields-reference.asciidoc[] - -include::infrastructure-metrics.asciidoc[] - diff --git a/docs/en/metrics/infrastructure-metrics.asciidoc b/docs/en/metrics/infrastructure-metrics.asciidoc deleted file mode 100644 index c668195b39..0000000000 --- a/docs/en/metrics/infrastructure-metrics.asciidoc +++ /dev/null @@ -1,31 +0,0 @@ -[[infrastructure-metrics]] -[role="xpack"] - -= Infrastructure metrics - -This section contains detailed information about each of the metricsets the {metrics-app} supports. - -The metrics listed below are provided by the {beats} shippers. - -* <> -* <> -* <> -* <> -* <> -* <> -* <> - - -include::host-metricset.asciidoc[] - -include::docker-metricset.asciidoc[] - -include::kubernetes-metricset.asciidoc[] - -include::aws-ec2-metricset.asciidoc[] - -include::aws-s3-metricset.asciidoc[] - -include::aws-sqs-metricset.asciidoc[] - -include::aws-rds-metricset.asciidoc[] diff --git a/docs/en/metrics/kubernetes-metricset.asciidoc b/docs/en/metrics/kubernetes-metricset.asciidoc deleted file mode 100644 index cc2097b0c9..0000000000 --- a/docs/en/metrics/kubernetes-metricset.asciidoc +++ /dev/null @@ -1,15 +0,0 @@ -[[kubernetes-metricset]] -[role="xpack"] - -== Kubernetes pod metrics - -*CPU Usage*:: Average of `kubernetes.pod.cpu.usage.node.pct` - -*Memory Usage*:: Average of `kubernetes.pod.memory.usage.node.pct` - -*Inbound Traffic*:: Derivative of the maximum of `kubernetes.pod.network.rx.bytes` scaled to a 1 second rate - -*Outbound Traffic*:: Derivative of the maximum of `kubernetes.pod.network.tx.bytes` scaled to a 1 second rate - - -For information about which required fields the {metrics-app} uses to display Kubernetes metrics, see the <>. \ No newline at end of file diff --git a/docs/en/metrics/metrics-alerting.asciidoc b/docs/en/metrics/metrics-alerting.asciidoc deleted file mode 100644 index a5496d8e69..0000000000 --- a/docs/en/metrics/metrics-alerting.asciidoc +++ /dev/null @@ -1,54 +0,0 @@ -[role="xpack"] -[[create-metric-alert]] -=== Create an alert - -[float] -==== Overview - -To use the alerting functionality, you need to {kibana-ref}/alerting-getting-started.html#alerting-setup-prerequisites[set up alerting]. - -You can then select the *Create alert* option from the *Alerts* actions dropdown. - -[float] -==== Fields and comparators - -The comparators available for conditions depend on the chosen field. The combinations available are: - -- Numeric fields: *more than*, *more than or equals*, *less than*, *less than or equals*, *equals*, and *does not equal*. 
- Aggregatable fields: *is* and *is not*. -- Non-aggregatable fields: *matches*, *does not match*, *matches phrase*, *does not match phrase*. - -[role="screenshot"] -image::images/create-metrics-alert.png[Create metrics alert] - -[float] -==== Action messages - -To provide additional information about an alert, you can include basic and advanced variables in your action message. - -[role="screenshot"] -image::images/metrics-alert-message.png[Example alert message] - -**Basic variables** - -- `alertName`: The name of the alert -- `context.alertState`: The current state of the alert. This value is usually **Alert**. However, if you selected *Alert me if there's no data*, the value can also be **No Data** -- `context.group`: The *group* that the alert message concerns, if you've specified a value in *Create alert per* -- `context.reason`: A verbose description of why the alert is in the reported `alertState`. For example, *my.metric is above a threshold of 1.0 (current value is 1.5)* -- `context.timestamp`: The time at which the message was sent - -**Advanced variables** - -If you'd like more customization than `context.reason` provides, you can also construct a message with the following advanced variables. - -Using the structure of `context.{advancedVariableName}.condition{n}`, each of these variables is a record containing information about each alert condition. For example, if your alert has two conditions, you can access the value of both using: - -[source,text] ---- - -{{context.value.condition0}} -{{context.value.condition1}} ----- - -- `context.metric.condition{n}`: The metrics that each `condition{n}` is reporting on -- `context.value.condition{n}`: The current value of the `context.metric.condition{n}` -- `context.threshold.condition{n}`: The threshold value of this condition diff --git a/docs/en/metrics/metrics-app-overview.asciidoc b/docs/en/metrics/metrics-app-overview.asciidoc deleted file mode 100644 index ecbf6a1347..0000000000 --- a/docs/en/metrics/metrics-app-overview.asciidoc +++ /dev/null @@ -1,9 +0,0 @@ -[[metrics-app-overview]] -[role="xpack"] -== {metrics-app} - -Use the {metrics-app} to view metrics from your infrastructure and identify problems in real-time. -You can also seamlessly switch to view the corresponding logs, application traces, or uptime information for a component. - -[role="screenshot"] -image::images/infra-sysmon.png[Metrics app in Kibana] diff --git a/docs/en/metrics/metrics-explorer.asciidoc b/docs/en/metrics/metrics-explorer.asciidoc deleted file mode 100644 index 027568b83f..0000000000 --- a/docs/en/metrics/metrics-explorer.asciidoc +++ /dev/null @@ -1,84 +0,0 @@ -[role="xpack"] -[[metrics-explorer]] -=== View multiple customizable metrics - -Metrics Explorer in the Metrics app in Kibana allows you to group and visualize multiple customizable metrics for one or more components in a graphical format. -This can be a starting point for further investigations. -You can also save your views and add them to {kibana-ref}/dashboard.html[dashboards]. - -[role="screenshot"] -image::images/metrics-explorer-screen.png[Metrics Explorer in Kibana] - -[float] -[[metrics-explorer-requirements]] -==== Metrics Explorer requirements and considerations - -* Metrics Explorer uses data collected from {metricbeat-ref}/metricbeat-overview.html[Metricbeat]. -* You need read permissions on `metricbeat-*` or the metric index specified in the Metrics configuration. -* Metrics Explorer uses the timestamp field from the *Settings* tab. -By default, that is set to `@timestamp`. 
-* The interval for the X axis is set to `auto`. -The bucket size is determined by the time range. -* To use *Open in Visualize*, you need access to the Visualize app. -* To use *Create alert*, you need to {kibana-ref}/alerting-getting-started.html#alerting-setup-prerequisites[set up alerting]. - -[float] -[[metrics-explorer-tutorial]] -==== Metrics Explorer tutorial - -In this tutorial, we'll use Metrics Explorer to view the system load metrics for each host we're monitoring with Metricbeat. -After that, we'll filter down to a specific host and explore the outbound traffic for each network interface. - -Before we start, if you don't have any Metricbeat data, you'll need to head over to our -{metricbeat-ref}/metricbeat-overview.html[Metricbeat documentation] to install Metricbeat and start collecting data. - -1. When you have Metricbeat running and collecting data, open Kibana and navigate to *Metrics*. -The *Inventory* tab shows the host or hosts you are monitoring. - - -2. Select the *Metrics Explorer* tab. -The initial configuration has the *Average* aggregation selected, the *of* field populated with some default metrics, and the *graph per* dropdown set to `Everything`. - - -3. To select the metrics to view, first, delete all the metrics currently shown in the *of* field by clicking the *X* by each metric name. -Then, in this field, start typing `system.load.1` and select this metric. -Also add metrics for `system.load.5` and `system.load.15`. -You will see a graph showing the average values of the metrics you selected. -In this step, we'll leave the aggregation dropdown set to *Average*, but you can try different values later if you like. - - -4. In the *graph per* dropdown, enter `host.name` and select this field. -You will see a separate graph for each host you are monitoring. -If you are collecting metrics for multiple hosts, multiple graphs are displayed. -If you only have metrics for a single host, you will see a single graph. -Either way, congratulations! You've explored your first metric. - - -5. Let's explore a bit further. -In the upper right-hand corner of the graph for one of the hosts, select the *Actions* dropdown and click *Add Filter* to show only the metrics for that host. -This adds a {kibana-ref}/kuery-query.html[Kibana Query Language] filter for `host.name` in the second row of the Metrics Explorer configuration. -If you only have one host, the graph will not change as you are already exploring metrics for a single host. - - -6. Now you can start exploring some host-specific metrics. -First, delete each of the system load metrics in the *of* field by clicking the *X* by the metric name. -Then enter the metric `system.network.out.bytes` to explore the outbound network traffic. -This is a monotonically increasing value, so change the aggregation dropdown to `Rate`. - - -7. Since hosts have multiple network interfaces, it is more meaningful to display one graph for each network interface. -To do this, select the *graph per* dropdown, start typing `system.network.name` and select this field. -You will now see a separate graph for each network interface. - - -8. If you like, you can put one of these graphs in a dashboard. -Choose a graph, click the *Actions* dropdown and select *Open in Visualize*. -This opens the graph in {kibana-ref}/TSVB.html[TSVB]. -From here you can save the graph and add it to a dashboard as usual. - - -9. You can also create an alert based on the metrics in a graph. -Choose a graph, click the *Actions* dropdown and select *Create alert*. 
-This opens the {kibana-ref}/defining-alerts.html[alert flyout] prefilled with metrics from the chart. - - diff --git a/docs/en/metrics/metrics-fields-reference.asciidoc b/docs/en/metrics/metrics-fields-reference.asciidoc deleted file mode 100644 index 4e82279398..0000000000 --- a/docs/en/metrics/metrics-fields-reference.asciidoc +++ /dev/null @@ -1,403 +0,0 @@ -[[metrics-fields-reference]] -[role="xpack"] -= Metrics fields reference - -The following sections list the required fields the {metrics-app} uses to display data. -Some of the fields listed are https://www.elastic.co/guide/en/ecs/current/ecs-reference.html#_what_is_ecs[ECS fields]. - -The fields are grouped in the following categories: - -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> -* <> - -[float] -== Additional field details - -To display data properly in some views, the `event.dataset` field is required. This field is a combination of `metricset.module`, which is the Metricbeat module name, and `metricset.name`, which is the metricset name. - -To determine the optimal time interval for each metric, all of the charts use the `metricset.period`. If `metricset.period` is not available, it falls back to 1-minute intervals. - -[[base-fields]] -== Base fields - -The `base` field set contains all fields that are at the top level. These fields are common across all types of events. - -`@timestamp`:: - -Date/time when the event originated. -+ -This is the date/time extracted from the event, typically representing when the event was generated by the source. -If the event source has no original timestamp, this value is typically populated by the first time the event was received by the pipeline. -Required field for all events. -+ -type: date -+ -required: True -+ -ECS field: True -+ -example: `May 27, 2020 @ 15:22:27.982` - -`message`:: - -For log events the message field contains the log message, optimized for viewing in a log viewer. -+ -For structured logs without an original message field, other fields can be concatenated to form a human-readable summary of the event. -+ -If multiple messages exist, they can be combined into one message. -+ -type: text -+ -required: True -+ -ECS field: True -+ -example: `Hello World` - - -[[host-fields]] -== Hosts fields - -These fields must be mapped to display host data in the {metrics-app}. - -`host.name`:: - -Name of the host. -+ -It can contain what `hostname` returns on Unix systems, the fully qualified domain name, or a name specified by the user. The sender decides which value to use. -+ -type: keyword -+ -required: True -+ -ECS field: True -+ -example: `MacBook-Elastic.local` - -`host.ip`:: - -IP of the host that records the event. -+ -type: ip -+ -required: True -+ -ECS field: True - -[[docker-fields]] -== Docker container fields - -These fields must be mapped to display Docker container data in the {metrics-app}. - -`container.id`:: - -Unique container id. -+ -type: keyword -+ -required: True -+ -ECS field: True -+ -example: `data` - -`container.name`:: - -Container name. -+ -type: keyword -+ -required: True -+ -ECS field: True - -`container.ip_address`:: - -IP of the container. -+ -type: ip -+ -required: True -+ -ECS field: False - -[[kubernetes-fields]] -== Kubernetes pod fields - -These fields must be mapped to display Kubernetes pod data in the {metrics-app}. - -`kubernetes.pod.uid`:: - -Kubernetes Pod UID. -+ -type: keyword -+ -required: True -+ -ECS field: False -+ -example: `8454328b-673d-11ea-7d80-21010a840123` - -`kubernetes.pod.name`:: - -Kubernetes pod name. 
-+
-type: keyword
-+
-required: True
-+
-ECS field: False
-+
-example: `nginx-demo`
-
-`kubernetes.pod.ip`::
-
-IP of the Kubernetes pod.
-+
-type: keyword
-+
-required: True
-+
-ECS field: False
-
-[[aws-ec2-fields]]
-== AWS EC2 instance fields
-
-These fields must be mapped to display EC2 instance data in the {metrics-app}.
-
-`cloud.instance.id`::
-
-Instance ID of the host machine.
-+
-type: keyword
-+
-required: True
-+
-ECS field: True
-+
-example: `i-1234567890abcdef0`
-
-`cloud.instance.name`::
-
-Instance name of the host machine.
-+
-type: keyword
-+
-required: True
-+
-ECS field: True
-
-`aws.ec2.instance.public.ip`::
-
-Instance public IP of the host machine.
-+
-type: keyword
-+
-required: True
-+
-ECS field: False
-
-[[aws-s3-fields]]
-== AWS S3 bucket fields
-
-These fields must be mapped to display S3 bucket data in the {metrics-app}.
-
-`aws.s3.bucket.name`::
-
-The name or ID of the AWS S3 bucket.
-+
-type: keyword
-+
-required: True
-+
-ECS field: False
-
-[[aws-sqs-fields]]
-== AWS SQS queue fields
-
-These fields must be mapped to display SQS queue data in the {metrics-app}.
-
-`aws.sqs.queue.name`::
-
-The name or ID of the AWS SQS queue.
-+
-type: keyword
-+
-required: True
-+
-ECS field: False
-
-[[aws-rds-fields]]
-== AWS RDS database fields
-
-These fields must be mapped to display RDS database data in the {metrics-app}.
-
-`aws.rds.db_instance.arn`::
-
-Amazon Resource Name (ARN) of the RDS instance.
-+
-type: keyword
-+
-required: True
-+
-ECS field: False
-
-`aws.rds.db_instance.identifier`::
-
-Contains a user-supplied database identifier. This identifier is the unique key that identifies a DB instance.
-+
-type: keyword
-+
-required: True
-+
-ECS field: False
-
-[[group-inventory-fields]]
-== Additional grouping fields
-
-Depending on which entity you select in the *Inventory* view, these additional fields can be mapped so that you can group entities by them.
-
-`cloud.availability_zone`::
-
-Availability zone in which this host is running.
-+
-type: keyword
-+
-required: True
-+
-ECS field: True
-+
-example: `us-east-1c`
-
-`cloud.machine.type`::
-
-Machine type of the host machine.
-+
-type: keyword
-+
-required: True
-+
-ECS field: True
-+
-example: `t2.medium`
-
-`cloud.region`::
-
-Region in which this host is running.
-+
-type: keyword
-+
-required: True
-+
-ECS field: True
-+
-example: `us-east-1`
-
-`cloud.instance.id`::
-
-Instance ID of the host machine.
-+
-type: keyword
-+
-required: True
-+
-ECS field: True
-+
-example: `i-1234567890abcdef0`
-
-`cloud.provider`::
-
-Name of the cloud provider. Example values are aws, azure, gcp, or digitalocean.
-+
-type: keyword
-+
-required: True
-+
-ECS field: True
-+
-example: `aws`
-
-`cloud.instance.name`::
-
-Instance name of the host machine.
-+
-type: keyword
-+
-required: True
-+
-ECS field: True
-
-`cloud.project.id`::
-
-Name of the project in Google Cloud.
-+
-type: keyword
-+
-required: True
-+
-ECS field: False
-
-`service.type`::
-
-The type of service the data is collected from.
-+
-The type can be used to group and correlate logs and metrics from one service type.
-+
-Example: If metrics are collected from Elasticsearch, `service.type` would be `elasticsearch`.
-+
-type: keyword
-+
-required: True
-+
-ECS field: False
-+
-example: `elasticsearch`
-
-`host.hostname`::
-
-Hostname of the host.
-+
-It normally contains what the `hostname` command returns on the host machine.
-+
-type: keyword
-+
-required: True, if you want to use the {ml-features}.
-+
-ECS field: True
-+
-example: `Elastic.local`
-
-`host.os.name`::
-
-Operating system name, without the version.
-+
-Multi-fields:
-+
-* os.name.text (type: text)
-+
-type: keyword
-+
-required: True
-+
-ECS field: True
-+
-example: `Mac OS X`
-
-`host.os.kernel`::
-
-Operating system kernel version as a raw string.
-+
-type: keyword
-+
-required: True
-+
-ECS field: True
-+
-example: `4.4.0-112-generic`
\ No newline at end of file
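To see how these pieces fit together, here is a hypothetical, heavily abridged Metricbeat event sketched in YAML with dotted field names; every value is a placeholder, and real events contain many more fields.

[source,yaml]
----
# Hypothetical, abridged Metricbeat event; all values are placeholders.
"@timestamp": "2020-05-27T15:22:27.982Z"
event.dataset: system.load      # metricset.module + "." + metricset.name
metricset.module: system
metricset.name: load
metricset.period: 10000         # milliseconds; without it, charts fall back to 1-minute intervals
host.name: MacBook-Elastic.local
host.ip: 192.168.1.10
cloud.provider: aws
cloud.region: us-east-1
----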
diff --git a/docs/en/metrics/metrics-installation.asciidoc b/docs/en/metrics/metrics-installation.asciidoc
deleted file mode 100644
index f4825e6121..0000000000
--- a/docs/en/metrics/metrics-installation.asciidoc
+++ /dev/null
@@ -1,107 +0,0 @@
-[[install-metrics-monitoring]]
-[role="xpack"]
-== Get started
-
-To use the {metrics-app}, you need {es} for storing and searching your data, and {kib}
-for visualizing and managing it.
-
-To ingest data, you can use {metricbeat} installed on each server you want to monitor, or
-third-party collectors that are configured to ship ECS-compliant data. The <<metrics-fields-reference>>
-provides a list of all fields used in the {metrics-app}.
-
-[float]
-[[before-you-begin-metricbeat]]
-=== Before you begin
-
-To get started quickly, spin up a deployment of our {ess-product}[hosted {ess}]. The deployment includes
-{es} and {kib}, and is available on AWS, GCP, and Azure. {ess-trial}[Try {ess} for free].
-
-To install {metricbeat} from {kib}, on the machine where you want to collect the data, open a {kib} browser window.
-In the *Observability* section displayed on the home page of {kib}, click *Add metric data*.
-Now follow the instructions for the type of data you want to collect.
-The instructions include how to install and configure {metricbeat}, and enable the appropriate {metricbeat} integration for your data.
-
-[role="screenshot"]
-image::images/add-data.png[Add metrics data]
-
-Alternatively, you can install and self manage {stack-gs}/get-started-elastic-stack.html#install-elasticsearch[{es}]
-and {stack-gs}/get-started-elastic-stack.html#install-kibana[{kib}]. First see the
-https://www.elastic.co/support/matrix[Elastic Support Matrix]
-for information about supported operating systems and product compatibility.
-
-[float]
-[[download-install-metricbeat]]
-=== Step 1: Download and install {metricbeat}
-
-Install {metricbeat} as close as possible to the service you want to monitor. For example, if you have four servers with
-MySQL running, it’s recommended that you run {metricbeat} on each server. This allows {metricbeat} to access your service from
-localhost, avoids additional network traffic, and means that network problems do not prevent {metricbeat} from collecting metrics.
-Metrics from multiple {metricbeat} instances will be combined on the Elasticsearch server.
-
-To download and install {metricbeat}, see {metricbeat-ref}/metricbeat-installation-configuration.html#install[Installing {metricbeat}]
-and use the commands that work with your system.
-
-[float]
-[[configuring-metricbeat]]
-=== Step 2: Configure {metricbeat}
-
-Now that you have completed the Metricbeat download and installation process, the next step is to configure {metricbeat}.
-
-. Connect to {es} and {kib}.
-+
-Connections to {es} and {kib} are required to set up Metricbeat. Set the connection information in `metricbeat.yml`.
-To locate this configuration file, see {metricbeat-ref}/directory-layout.html[Directory layout].
-+
-For information on how to connect to {es} and {kib}, see {metricbeat-ref}/metricbeat-installation-configuration.html#set-connection[Connecting
-to Elastic Stack]. A sketch of these connection settings is shown after these steps.
-
-. Enable {metricbeat} integrations.
-+
-{metricbeat} uses integrations to collect metrics for populating the {metrics-app} with data. Each integration defines the basic
-logic for collecting data from a specific service, such as Redis or MySQL. An
-integration consists of metricsets that fetch and structure the data. Read
-{metricbeat-ref}/how-metricbeat-works.html[How Metricbeat works] to learn more.
-+
-See {metricbeat-ref}/metricbeat-installation-configuration.html#enable-modules[Enabling and configuring metrics collection modules]
-for information on how to identify which integrations are available, how to enable them, and how to
-configure them.
-+
-[TIP]
-=========
-If you're monitoring Docker containers or Kubernetes pods, you can use autodiscovery to automatically change the configuration settings in response to changes in your containers.
-Autodiscovery ensures that even when your container configuration changes, data is still collected.
-To learn how to do this, see {metricbeat-ref}/configuration-autodiscover.html[{metricbeat} autodiscover configuration].
-=========
-
-. Set up assets.
-+
-{metricbeat} comes with predefined assets for parsing, indexing, and visualizing your data. For information on how to load these assets, see
-{metricbeat-ref}/metricbeat-installation-configuration.html#setup-assets[Setting up assets].
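The following is a minimal sketch of the connection settings in `metricbeat.yml`, assuming a TLS-enabled deployment; the host names, user name, and keystore key are placeholders, not defaults.

[source,yaml]
----
# Hypothetical metricbeat.yml connection settings; replace the hosts and
# credentials with the values for your own deployment.
output.elasticsearch:
  hosts: ["https://my-deployment.es.example.com:9200"]
  username: "metricbeat_writer"   # a user authorized to publish events
  password: "${ES_PWD}"           # read from the Beats keystore, not a literal

setup.kibana:
  host: "https://my-deployment.kb.example.com:5601"
----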
-[float]
-[[starting-metricbeat]]
-=== Step 3: Start {metricbeat}
-
-Before starting {metricbeat}, modify the user credentials in `metricbeat.yml` and specify a user who is {metricbeat-ref}/privileges-to-publish-events.html[authorized to publish events].
-
-To start {metricbeat}, see {metricbeat-ref}/metricbeat-installation-configuration.html#start[Starting {metricbeat}]
-and use the commands that work with your system.
-
-[float]
-[[verify-metricbeat-data]]
-=== Step 4: Verify your data in {kib}
-
-{metricbeat} comes with pre-built {kib} dashboards and UIs for visualizing metrics data. You loaded the dashboards earlier when you
-ran the `setup` command as part of setting up assets. The dashboards are provided as examples. We recommend that you {kibana-ref}/dashboard.html[customize them]
-to meet your needs.
-
-For more information, see {metricbeat-ref}/metricbeat-installation-configuration.html#view-data[Viewing your data in {kib}].
-
-[NOTE]
-==========
-If your metrics have custom index patterns, or use non-default fields, you can override the default <>.
-To modify configurations, use the <> in the {metrics-app}.
-Alternatively, see {kibana-ref}/infrastructure-ui-settings-kb.html[{metrics} settings] for
-a complete list of metrics configuration settings.
-==========
-
diff --git a/docs/en/metrics/metrics-overview.asciidoc b/docs/en/metrics/metrics-overview.asciidoc
deleted file mode 100644
index 4223068165..0000000000
--- a/docs/en/metrics/metrics-overview.asciidoc
+++ /dev/null
@@ -1,32 +0,0 @@
-[[metrics-overview]]
-[role="xpack"]
-== Metrics monitoring overview
-
-++++
-Overview
-++++
-
-The {metrics-app} enables you to monitor metrics for your infrastructure to help identify problems in real time.
-You can view metrics for servers, containers, services, and so on.
-Additionally, you can drill down to view detailed metrics, or you can seamlessly switch to view corresponding logs, uptime information, or APM traces where available.
-
-[float]
-=== Metrics monitoring components
-
-Metrics monitoring requires the following {stack} components.
-
-*https://www.elastic.co/products/elasticsearch[{es}]* is a real-time,
-distributed storage, search, and analytics engine. {es} can store, search, and analyze large volumes of data in near real time.
-The {metrics-app} uses {es} to store metrics data as {es} documents, which are queried on demand.
-
-*https://www.elastic.co/products/beats[{beats}]* are open source data shippers that you install as agents on your servers to send data to {es}.
-The {metrics-app} uses Metricbeat to collect metrics from the servers, containers, and other services in your infrastructure.
-Metricbeat modules are available for most typical servers, containers, and services.
-
-*https://www.elastic.co/products/kibana[{kib}]* is an open source analytics and visualization platform designed to work with {es}.
-You use {kib} to search, view, and interact with the metrics data stored in {es}.
-You can perform advanced data analysis and visualize your data in a variety of charts, tables,
-and maps.
-The {metrics-app} in {kib} provides a dedicated user interface to view metrics for your infrastructure.
-
-image::images/metrics-monitoring-architecture.png[Metrics monitoring components]
diff --git a/docs/en/uptime/alerting.asciidoc b/docs/en/uptime/alerting.asciidoc
deleted file mode 100644
index 72d82cc198..0000000000
--- a/docs/en/uptime/alerting.asciidoc
+++ /dev/null
@@ -1,33 +0,0 @@
-[role="xpack"]
-[[uptime-alerting]]
-
-=== Uptime alerting
-
-The Uptime app integrates with Kibana's {kibana-ref}/alerting-getting-started.html[alerting and actions]
-feature. It provides a set of built-in actions and Uptime-specific threshold alerts for you to use
-and enables central management of all alerts from {kibana-ref}/management.html[Kibana Management].
-
-[role="screenshot"]
-image::images/create-alert.png[Create alert]
-
-[float]
-==== Monitor status alerts
-
-To receive alerts when a monitor goes down or goes below a given availability threshold,
-use the alerting menu at the top of the overview page. Use a query in the alert flyout
-to determine which monitors to check with your alert. If you already have a query in
-the overview page search bar, it will be carried over into this box.
-
-[role="screenshot"]
-image::images/monitor-status-alert.png[Create monitor status alert flyout]
-
-[float]
-==== TLS alerts
-
-Uptime also provides the ability to create an alert that notifies you when one or
-more of your monitors have a TLS certificate that will expire within some threshold,
-or when its age exceeds a limit. The values for these thresholds are configurable on
-the <<uptime-settings,Settings page>>.
-
-[role="screenshot"]
-image::images/tls-alert.png[Create TLS alert flyout]
diff --git a/docs/en/uptime/app-overview.asciidoc b/docs/en/uptime/app-overview.asciidoc
deleted file mode 100644
index 692489a7ad..0000000000
--- a/docs/en/uptime/app-overview.asciidoc
+++ /dev/null
@@ -1,70 +0,0 @@
-[role="xpack"]
-[[uptime-app]]
-== Uptime app
-
-The Uptime app in {kib} enables you to monitor the status of network endpoints via HTTP/S, TCP, and ICMP.
-You can explore endpoint status over time, drill down into specific monitors,
-and view a high-level snapshot of your environment at any point in time.
-
-[role="screenshot"]
-image::images/uptime-overview.png[Uptime app overview]
-
-[role="xpack"]
-[[uptime-app-overview]]
-=== Overview
-
-The Uptime overview helps you quickly identify and diagnose outages and
-other connectivity issues within your network or environment. The date range
-selection is global to the Uptime app; use it to highlight
-an absolute date range or a relative one, similar to other areas of {kib}.
-
-[float]
-=== Filter bar
-
-The Filter bar enables you to quickly view specific groups of monitors, or even
-an individual monitor if you have defined many.
-
-This control allows you to use automated filter options, as well as to enter custom filter
-text to select specific monitors by field, URL, ID, and other attributes.
-
-[role="screenshot"]
-image::images/filter-bar.png[Filter bar]
-
-[float]
-=== Snapshot panel
-
-The Snapshot panel displays the overall
-status of the environment you're monitoring or a subset of those monitors.
-You can see the total number of detected monitors within the selected
-Uptime date range, along with the number of monitors
-in an `up` or `down` state, which is based on the last check reported by Heartbeat
-for each monitor.
-
-Next to the counts, there is a histogram displaying the change over time throughout the
-selected date range.
-
-[role="screenshot"]
-image::images/snapshot-view.png[Snapshot view]
-
-[float]
-=== Monitor list
-
-Information about individual monitors is displayed in the monitor list and provides a quick
-way to navigate to a more in-depth visualization for interesting hosts or endpoints.
-
-The information displayed includes the recent status of a host or endpoint, when the monitor was last checked, its
-ID and URL, and its IP address. There is also a sparkline showing its check status over time.
-
-[role="screenshot"]
-image::images/monitor-list.png[Monitor list]
-
-[float]
-=== Observability integrations
-
-The Monitor list also contains a menu of available integrations. When Uptime detects Kubernetes- or
-Docker-related host information, it provides links to open the Metrics app or Logs app pre-filtered
-for this host. Additionally, to help you quickly determine if these solutions contain data relevant to you,
-this feature contains links to filter the other views on the host's IP address.
-
-[role="screenshot"]
image::images/observability_integrations.png[Observability integrations]
diff --git a/docs/en/uptime/certificates.asciidoc b/docs/en/uptime/certificates.asciidoc
deleted file mode 100644
index 58db91aa08..0000000000
--- a/docs/en/uptime/certificates.asciidoc
+++ /dev/null
@@ -1,15 +0,0 @@
-[role="xpack"]
-[[uptime-certificates]]
-
-=== Certificates
-
-The Certificates page enables you to visualize TLS certificate data in your indices. In addition to the
-common name, associated monitors, issuer information, and SHA fingerprints, Uptime also assigns a status
-derived from the threshold values in the <<uptime-settings,Settings page>>.
-
-Several of the columns on this page are sortable. You can use the search bar at the top of the view
-to find values in most of the TLS-related fields in your Uptime indices. Additionally, using the *Alerts*
-dropdown at the top of the page, you can create a TLS alert.
-
-[role="screenshot"]
-image::images/certificates-page.png[Certificates]
diff --git a/docs/en/uptime/deployment-arch.asciidoc b/docs/en/uptime/deployment-arch.asciidoc
deleted file mode 100644
index c1b2f596c6..0000000000
--- a/docs/en/uptime/deployment-arch.asciidoc
+++ /dev/null
@@ -1,27 +0,0 @@
-[role="xpack"]
-[[uptime-deployment-arch]]
-== Deployment Architecture
-
-There are multiple ways to deploy Uptime and Heartbeat.
-Use the information in this section to determine the best deployment for you.
-A guiding principle is that when an outage takes down the service being monitored, it should not also take down Heartbeat.
-You want Heartbeat to be functioning even when your service is not, so the guidelines here help you maximize this possibility.
-
-Heartbeat is commonly run as a centralized service within a data center.
-While it is possible to run it as a separate "sidecar" process paired with each process/container, we recommend against it.
-Running Heartbeat centrally ensures you will still be able to see monitoring data in the event of an overloaded, disconnected, or otherwise malfunctioning server.
-
-For further redundancy, you may want to deploy multiple Heartbeat instances across geographic and network boundaries to provide more data.
-To do so, specify Heartbeat's observer {heartbeat-ref}/configuration-observer-options.html[geo options], as sketched after the examples below.
-
-Some examples might be:
-
-* **A site served from a content delivery network (CDN) with points of presence (POPs) around the globe:**
-To check if your site is reachable via CDN POPs, you may want to have multiple Heartbeat instances at different data centers around the world.
-* **A service within a single data center that is accessed across multiple VPNs:**
-Set up one Heartbeat instance within the VPN the service operates from, and another within an additional VPN that users access the service from.
-Having both instances helps pinpoint network errors in the event of an outage.
-* **A single service running primarily in a US east coast data center, with a hot failover located in a US west coast data center:**
-In each data center, run a Heartbeat instance that checks both the local copy of the service and its counterpart across the country.
-Set up two monitors in each region, one for the local service and one for the remote service.
-In the event of a data center failure, it will be immediately apparent if the service had a connectivity issue to the outside world or if the failure was only internal.
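A minimal sketch of those geo options follows, assuming a Heartbeat instance running in an AWS availability zone; the name and coordinates are placeholders.

[source,yaml]
----
# Hypothetical heartbeat.yml snippet tagging this instance's location so
# Uptime can group checks by monitoring location; values are placeholders.
processors:
  - add_observer_metadata:
      geo:
        name: us-east-1a              # location label shown in Uptime
        location: "40.7128, -74.0060" # "lat, lon"
----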
diff --git a/docs/en/uptime/images/cert-exp.png b/docs/en/uptime/images/cert-exp.png deleted file mode 100644 index cd87668db9..0000000000 Binary files a/docs/en/uptime/images/cert-exp.png and /dev/null differ diff --git a/docs/en/uptime/images/certificates-page.png b/docs/en/uptime/images/certificates-page.png deleted file mode 100644 index 598aae982c..0000000000 Binary files a/docs/en/uptime/images/certificates-page.png and /dev/null differ diff --git a/docs/en/uptime/images/check-history.png b/docs/en/uptime/images/check-history.png deleted file mode 100644 index 91565bf59a..0000000000 Binary files a/docs/en/uptime/images/check-history.png and /dev/null differ diff --git a/docs/en/uptime/images/create-alert.png b/docs/en/uptime/images/create-alert.png deleted file mode 100644 index 54a0c400ca..0000000000 Binary files a/docs/en/uptime/images/create-alert.png and /dev/null differ diff --git a/docs/en/uptime/images/crosshair-example.png b/docs/en/uptime/images/crosshair-example.png deleted file mode 100644 index a4559eac1c..0000000000 Binary files a/docs/en/uptime/images/crosshair-example.png and /dev/null differ diff --git a/docs/en/uptime/images/filter-bar.png b/docs/en/uptime/images/filter-bar.png deleted file mode 100644 index dee735d0f4..0000000000 Binary files a/docs/en/uptime/images/filter-bar.png and /dev/null differ diff --git a/docs/en/uptime/images/indices.png b/docs/en/uptime/images/indices.png deleted file mode 100644 index 4090747b67..0000000000 Binary files a/docs/en/uptime/images/indices.png and /dev/null differ diff --git a/docs/en/uptime/images/monitor-charts.png b/docs/en/uptime/images/monitor-charts.png deleted file mode 100644 index 522f346626..0000000000 Binary files a/docs/en/uptime/images/monitor-charts.png and /dev/null differ diff --git a/docs/en/uptime/images/monitor-list.png b/docs/en/uptime/images/monitor-list.png deleted file mode 100644 index 0c8ad47342..0000000000 Binary files a/docs/en/uptime/images/monitor-list.png and /dev/null differ diff --git a/docs/en/uptime/images/monitor-status-alert.png b/docs/en/uptime/images/monitor-status-alert.png deleted file mode 100644 index 652a6bc431..0000000000 Binary files a/docs/en/uptime/images/monitor-status-alert.png and /dev/null differ diff --git a/docs/en/uptime/images/observability_integrations.png b/docs/en/uptime/images/observability_integrations.png deleted file mode 100644 index 6589c0c556..0000000000 Binary files a/docs/en/uptime/images/observability_integrations.png and /dev/null differ diff --git a/docs/en/uptime/images/settings.png b/docs/en/uptime/images/settings.png deleted file mode 100644 index dd36f0a6d7..0000000000 Binary files a/docs/en/uptime/images/settings.png and /dev/null differ diff --git a/docs/en/uptime/images/snapshot-view.png b/docs/en/uptime/images/snapshot-view.png deleted file mode 100644 index 1fce2e9592..0000000000 Binary files a/docs/en/uptime/images/snapshot-view.png and /dev/null differ diff --git a/docs/en/uptime/images/status-bar-map.png b/docs/en/uptime/images/status-bar-map.png deleted file mode 100644 index e15bfe1521..0000000000 Binary files a/docs/en/uptime/images/status-bar-map.png and /dev/null differ diff --git a/docs/en/uptime/images/status-bar.png b/docs/en/uptime/images/status-bar.png deleted file mode 100644 index 8d242789cd..0000000000 Binary files a/docs/en/uptime/images/status-bar.png and /dev/null differ diff --git a/docs/en/uptime/images/tls-alert.png b/docs/en/uptime/images/tls-alert.png deleted file mode 100644 index 19efe07838..0000000000 Binary files 
a/docs/en/uptime/images/tls-alert.png and /dev/null differ
diff --git a/docs/en/uptime/images/uptime-multi-deployment.png b/docs/en/uptime/images/uptime-multi-deployment.png
deleted file mode 100644
index 5440d91e48..0000000000
Binary files a/docs/en/uptime/images/uptime-multi-deployment.png and /dev/null differ
diff --git a/docs/en/uptime/images/uptime-overview.png b/docs/en/uptime/images/uptime-overview.png
deleted file mode 100644
index 25c88b2d14..0000000000
Binary files a/docs/en/uptime/images/uptime-overview.png and /dev/null differ
diff --git a/docs/en/uptime/images/uptime-setup.png b/docs/en/uptime/images/uptime-setup.png
deleted file mode 100644
index 398125202f..0000000000
Binary files a/docs/en/uptime/images/uptime-setup.png and /dev/null differ
diff --git a/docs/en/uptime/images/uptime-simple-deployment.png b/docs/en/uptime/images/uptime-simple-deployment.png
deleted file mode 100644
index f46dfdb2b8..0000000000
Binary files a/docs/en/uptime/images/uptime-simple-deployment.png and /dev/null differ
diff --git a/docs/en/uptime/index.asciidoc b/docs/en/uptime/index.asciidoc
deleted file mode 100644
index 01a93cb454..0000000000
--- a/docs/en/uptime/index.asciidoc
+++ /dev/null
@@ -1,22 +0,0 @@
-
-include::{asciidoc-dir}/../../shared/versions/stack/{source_branch}.asciidoc[]
-include::{asciidoc-dir}/../../shared/attributes.asciidoc[]
-
-= Uptime monitoring guide
-
-include::overview.asciidoc[]
-
-include::install.asciidoc[]
-
-include::deployment-arch.asciidoc[]
-
-include::app-overview.asciidoc[]
-
-include::monitor.asciidoc[]
-
-include::settings.asciidoc[]
-
-include::certificates.asciidoc[]
-
-include::alerting.asciidoc[]
-
diff --git a/docs/en/uptime/install.asciidoc b/docs/en/uptime/install.asciidoc
deleted file mode 100644
index 5c999c753f..0000000000
--- a/docs/en/uptime/install.asciidoc
+++ /dev/null
@@ -1,74 +0,0 @@
-[[install-uptime]]
-== Install Uptime
-
-The easiest way to get started with Elastic Uptime is by using our hosted {es} Service on Elastic Cloud.
-The {es} Service is available on both AWS and GCP,
-and automatically configures {es} and {kib}.
-
-[float]
-=== Hosted Elasticsearch Service
-
-Skip managing your own {es} and {kib} instance by using our
-https://www.elastic.co/cloud/elasticsearch-service[hosted {es} Service] on
-Elastic Cloud.
-
-{ess-trial}[Try out the {es} Service for free],
-then jump straight to <<install-heartbeat>>.
-
-[float]
-[[before-installation]]
-=== Install the stack yourself
-
-If you'd rather install the stack yourself,
-first see the https://www.elastic.co/support/matrix[Elastic Support Matrix] for information about supported operating systems and product compatibility.
-
-* <<install-elasticsearch>>
-* <<install-kibana>>
-* <<install-heartbeat>>
-
-[[install-elasticsearch]]
-=== Step 1: Install Elasticsearch
-
-Install an {es} cluster, start it up, and make sure it's running.
-
-. Verify that your system meets the
-https://www.elastic.co/support/matrix#matrix_jvm[minimum JVM requirements] for {es}.
-. {stack-gs}/get-started-elastic-stack.html#install-elasticsearch[Install Elasticsearch].
-. {stack-gs}/get-started-elastic-stack.html#_make_sure_elasticsearch_is_up_and_running[Make sure elasticsearch is up and running].
-
-[[install-kibana]]
-=== Step 2: Install Kibana
-
-Install {kib}, start it up, and open up the web interface:
-
-. {stack-gs}/get-started-elastic-stack.html#install-kibana[Install Kibana].
-. {stack-gs}/get-started-elastic-stack.html#_launch_the_kibana_web_interface[Launch the Kibana Web Interface].
-
-[[install-heartbeat]]
-=== Step 3: Install and configure Heartbeat
-
-Uptime requires the setup of monitors in Heartbeat.
-These monitors provide the data you'll be visualizing in the {observability-guide}/monitor-uptime.html[Uptime app].
-
-For instructions on installing and configuring Heartbeat, see the *Setup Instructions* in {kib}.
-Additional information is available in {heartbeat-ref}/heartbeat-installation-configuration.html[{heartbeat} quick start].
-A sketch of a basic monitor configuration appears at the end of this page.
-
-[role="screenshot"]
-image::images/uptime-setup.png[Installation instructions on the Uptime page in Kibana]
-
-[[setup-security]]
-=== Step 4: Set up Security
-
-Secure your installation by following the {heartbeat-ref}/securing-heartbeat.html[Secure Heartbeat] documentation.
-
-[float]
-==== Important considerations
-
-* Make sure you're using the same major versions of Heartbeat and {kib}.
-
-* Index patterns tell {kib} which {es} indices you want to explore.
-The Uptime app requires a +heartbeat-{major-version-only}*+ index pattern.
-If you have configured a different index pattern, you can use {ref}/indices-aliases.html[index aliases] to ensure data is recognized by the Uptime app.
-
-After you install and configure Heartbeat,
-the {observability-guide}/monitor-uptime.html[Uptime app] is automatically populated with the Heartbeat monitors.
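To make Step 3 concrete, here is a minimal sketch of a single HTTP monitor in `heartbeat.yml`; the URL, ID, and schedule are placeholders rather than recommended values.

[source,yaml]
----
# Hypothetical heartbeat.yml monitor; replace the URL and schedule with
# values appropriate for the service you are checking.
heartbeat.monitors:
  - type: http
    id: my-service-http            # unique ID shown in the Uptime app
    name: My service
    urls: ["https://my-service.example.com/health"]
    schedule: "@every 30s"         # check every 30 seconds
    check.response.status: [200]   # anything other than 200 is "down"
----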
diff --git a/docs/en/uptime/monitor.asciidoc b/docs/en/uptime/monitor.asciidoc
deleted file mode 100644
index 6c3b21167e..0000000000
--- a/docs/en/uptime/monitor.asciidoc
+++ /dev/null
@@ -1,65 +0,0 @@
-[role="xpack"]
-[[uptime-monitor]]
-=== Monitor
-
-The Monitor page helps you gain insights into the performance
-of a specific network endpoint. A detailed visualization of
-the monitor's request duration over time, as well as the `up`/`down`
-status over time, is displayed. By configuring Machine Learning jobs
-on this page, you can also detect anomalies in response time data.
-
-
-==== Status panel
-
-The Status panel displays a quick summary of the latest information
-regarding your monitor. You can view its latest status, click a link to
-visit the targeted URL, see its most recent request duration, and determine the
-amount of time that has elapsed since the last check.
-
-On the right-hand side, service availability is shown per monitoring location.
-The availability percentage displayed is the percentage of successful checks
-made during the time period selected.
-
-You can toggle the availability view to show a geographic map of
-each location as a pinpoint on the map, along with the amount of time
-elapsed since data was last received from that location.
-
-[role="screenshot"]
-image::images/status-bar.png[Status bar]
-image::images/status-bar-map.png[Status with map]
-
-
-[float]
-==== Monitor charts
-
-The Monitor charts visualize information over the time specified in the
-date range. These charts help you gain insights into how quickly requests are being resolved
-by the targeted endpoint, and give you a sense of how frequently a host or endpoint
-was down in your selected timespan.
-
-[role="screenshot"]
-image::images/monitor-charts.png[Monitor charts]
-
-The Monitor duration chart displays request duration information for your monitor.
-The area surrounding the line is the range of request time for the corresponding
-bucket. The line is the average time. In the upper right-hand corner of this panel,
-you can enable and disable anomaly detection using Machine Learning. When response times change
-in an unexpected way, the time ranges in which they occurred are highlighted with a color.
-After enabling anomaly detection, you can use the same menu to enable alerts for anomaly detection.
-
-The pings over time chart is a graphical representation of the check statuses over time.
-Hover over the charts to display crosshairs with specific numeric data.
-
-[role="screenshot"]
-image::images/crosshair-example.png[Chart crosshair]
-
-[float]
-==== Check history
-
-The Check history table lists the total count of this monitor's checks for the selected
-date range. To help find recent problems on a per-check basis, you can filter the checks
-by status and location. This table can help you gain more granular insight into the
-individual data points that Heartbeat is logging about your host or endpoint.
-
-[role="screenshot"]
-image::images/check-history.png[Check history view]
diff --git a/docs/en/uptime/overview.asciidoc b/docs/en/uptime/overview.asciidoc
deleted file mode 100644
index a929675b15..0000000000
--- a/docs/en/uptime/overview.asciidoc
+++ /dev/null
@@ -1,57 +0,0 @@
-[role="xpack"]
-[[uptime-overview]]
-== Elastic Uptime overview
-
-++++
-Overview
-++++
-
-Elastic Uptime enables you to monitor the availability and response times of applications and services in real time and to detect problems before they affect users.
-
-Elastic Uptime helps you to understand uptime and response time characteristics for your services and applications.
-It can be deployed both inside and outside your organization's network, so that you can analyze problems from multiple vantage points.
-
-Elastic Uptime uses these components: *Heartbeat*, *Elasticsearch*, and *Kibana*.
-
-[float]
-=== Heartbeat
-
-{heartbeat-ref}/index.html[Heartbeat] is an open source data shipper that performs uptime monitoring.
-Elastic Uptime uses Heartbeat to collect monitoring data from your target applications and services, and ship it to Elasticsearch.
-
-[float]
-=== Elasticsearch
-
-{ref}/index.html[Elasticsearch] is a highly scalable, open source, search and analytics engine.
-Elasticsearch can store, search, and analyze large volumes of data in near real time.
-Elastic Uptime stores the monitoring data from Heartbeat as Elasticsearch documents.
-
-[float]
-=== Kibana
-
-{kibana-ref}/index.html[Kibana] is an open source analytics and visualization platform designed to work with Elasticsearch.
-You can use Kibana to search, view, and interact with data stored in Elasticsearch.
-You can easily perform advanced data analysis and visualize your data in a variety of charts, tables, and maps.
-
-The {observability-guide}/monitor-uptime.html[Uptime app] in Kibana provides a dedicated user interface for viewing uptime data and identifying problem areas.
-
-[float]
-=== Example deployments
-// ++ I like the Infra/logging diagram which shows Metrics and Logging apps as separate components inside Kibana
-// ++ In diagram, should be Uptime app, not Uptime UI, possibly even Elastic Uptime? Also applies to Metrics/Logging/APM.
-// ++ Need more whitespace around components.
-
-In this simple deployment, a single instance of Heartbeat is deployed at a single monitoring location to monitor a single service.
-The Heartbeat instance sends the monitoring data to Elasticsearch.
-Then you can use the Uptime app in Kibana to view the data from Heartbeat and determine the status of the service.
-
-image::images/uptime-simple-deployment.png[Uptime simple deployment]
-
-In this deployment, two instances of Heartbeat are deployed at two different monitoring locations.
-Both instances monitor the same service.
-The Heartbeat instances send the monitoring data to Elasticsearch.
-As before, you can use the Uptime app in Kibana to view the Heartbeat data and determine the status of the service.
-When a failure occurs, the multiple monitoring locations enable you to pinpoint the area in which the failure has occurred.
-
-image::images/uptime-multi-deployment.png[Uptime multiple server deployment]
-
diff --git a/docs/en/uptime/settings.asciidoc b/docs/en/uptime/settings.asciidoc
deleted file mode 100644
index 59f9af631b..0000000000
--- a/docs/en/uptime/settings.asciidoc
+++ /dev/null
@@ -1,51 +0,0 @@
-[role="xpack"]
-[[uptime-settings]]
-
-=== Settings
-
-The Uptime settings page lets you change which Heartbeat indices are displayed
-by the Uptime app. Users must have the `all` permission to modify items on this page.
-Uptime settings apply to the current space only. Use different settings in different
-spaces to segment different uptime use cases and domains.
-
-==== Indices
-
-Imagine your organization has one team for internal IT services, and another
-for public services. Each team operates independently and is only responsible for its
-own services. In this scenario, you might set up separate Heartbeat instances for each team,
-writing out to index patterns named `it-heartbeat-\*` and `external-heartbeat-\*`. You would
-create separate roles and users for each in Elasticsearch, each with access to their own spaces,
-named `it` and `external` respectively. Within each space you would navigate to the settings page
-and set the correct index pattern to match only the indices that space is allowed to access.
-(A sketch of the matching Heartbeat output configuration appears at the end of this page.)
-
-NOTE: The pattern set here only restricts what the Uptime app shows. Users may still be able
-to manually query Elasticsearch for data outside this pattern.
-
-[role="screenshot"]
-image::images/indices.png[Heartbeat indices]
-
-See the {kibana-ref}/uptime-security.html[Uptime security] and {heartbeat-ref}/securing-heartbeat.html[Heartbeat security]
-docs for more information.
-
-==== Certificate thresholds
-
-You can modify settings in this section to control how Uptime visualizes your TLS values on
-the <<uptime-certificates,Certificates page>>. These settings also determine which certificates will be
-selected by any TLS alert you define.
-
-There are two fields, `age` and `expiration`. Use the `age` threshold to specify when Uptime should warn
-you about certificates that have been valid for too long. Use the `expiration` threshold to specify when Uptime should warn you
-about certificates with approaching expiration dates.
-
-For example, a common security requirement is to make sure that none of your organization's TLS certificates have been
-valid for longer than one year. Setting the `Age limit` field to 365 days will help you keep track of which
-certificates you may want to refresh.
-
-Likewise, to see which of your TLS certificates are close to expiring ahead of time, specify
-an `Expiration threshold` on this page. When the count of a certificate's remaining valid days falls
-below this threshold, Uptime considers it in a warning state. When you define a TLS alert, you receive a
-notification from Uptime about the certificate.
-
-[role="screenshot"]
-image::images/cert-exp.png[Certificate expiration thresholds]
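Returning to the team-per-index scenario above, the following is a minimal sketch of how the internal IT team's Heartbeat could write to its own index namespace; all names are placeholders, and overriding the index also requires matching template settings (and, on recent Beats versions, disabling ILM).

[source,yaml]
----
# Hypothetical output settings for the internal IT team's Heartbeat so the
# `it` space can match it with the `it-heartbeat-*` index pattern.
output.elasticsearch:
  hosts: ["https://es.example.com:9200"]
  index: "it-heartbeat-%{[agent.version]}-%{+yyyy.MM.dd}"

setup.template.name: "it-heartbeat"      # required when overriding `index`
setup.template.pattern: "it-heartbeat-*"
setup.ilm.enabled: false                 # ILM would otherwise override `index`
----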