8 changes: 6 additions & 2 deletions manage-data/_snippets/create-lifecycle-policy.md
Original file line number Diff line number Diff line change
The `min_age` value is relative to the rollover time, not the index creation time.
::::


You can create the policy in {{kib}} or with the [create or update policy](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ilm-put-lifecycle) API.

::::{tab-set}
:group: kibana-api
:::{tab-item} {{kib}}
:sync: kibana

To create the policy from {{kib}}:

1. Go to the **Index Lifecycle Policies** management page using the navigation menu or the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md).
1. Click **Create policy**.

By default, only the hot index lifecycle phase is enabled. Enable any additional lifecycle phases that you want to use.

26 changes: 20 additions & 6 deletions manage-data/data-store/data-streams/set-up-data-stream.md
For {{fleet}} and {{agent}}, refer to [](/reference/fleet/data-streams.md).

While optional, we recommend using {{ilm-init}} to automate the management of your data stream’s backing indices. {{ilm-init}} requires an index lifecycle policy.

To create an index lifecycle policy in {{kib}}:

1. Go to the **Index Lifecycle Policies** management page using the navigation menu or the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md).
1. Click **Create policy**.

You can also use the [create lifecycle policy API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ilm-put-lifecycle).
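For reference, a minimal policy created with the API might look like the following sketch. The policy name, rollover thresholds, and retention period here are illustrative, not prescriptive:

```console
PUT _ilm/policy/my-lifecycle-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_primary_shard_size": "50gb",
            "max_age": "30d"
          }
        }
      },
      "delete": {
        "min_age": "90d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
```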

If you’re unsure how to map your fields, use [runtime fields](../mapping/defin

::::

To create a component template in {{kib}}:

1. Go to the **Index Management** page using the navigation menu or the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md).
1. In the **Index Templates** tab, click **Create component template**.

You can also use the [create component template API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-component-template).
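For example, a component template that defines mappings might look like this sketch. The template name and fields are illustrative:

```console
PUT _component_template/my-mappings
{
  "template": {
    "mappings": {
      "properties": {
        "@timestamp": {
          "type": "date"
        },
        "message": {
          "type": "text"
        }
      }
    }
  }
}
```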

Use your component templates to create an index template. Specify:
* Any component templates that contain your mappings and index settings.
* A priority higher than `200` to avoid collisions with built-in templates. See [Avoid index pattern collisions](../templates.md#avoid-index-pattern-collisions).

To create an index template in {{kib}}:

1. Go to the **Index Management** page using the navigation menu or the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md).
1. In the **Index Templates** tab, click **Create template**.

You can also use the [create index template API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-index-template). Include the `data_stream` object to enable data streams.
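A sketch of such a request, assuming a component template named `my-mappings` already exists (the template names, index pattern, and priority are illustrative):

```console
PUT _index_template/my-index-template
{
  "index_patterns": ["my-data-stream*"],
  "data_stream": {},
  "composed_of": ["my-mappings"],
  "priority": 500
}
```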

You can also manually create the stream using the [create data stream API](https
PUT _data_stream/my-data-stream
```

After it's been created, you can view and manage this and other data streams from the **Index Management** view. Refer to [Manage a data stream](./manage-data-stream.md) for details.

## Secure the data stream [secure-data-stream]

POST _data_stream/_migrate/my-time-series-data

## Get information about a data stream [get-info-about-data-stream]

To get information about a data stream in {{kib}}:

1. Go to the **Index Management** page using the navigation menu or the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md).
1. In the **Data Streams** tab, click the data stream’s name.

You can also use the [get data stream API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-data-stream).

GET _data_stream/my-data-stream

## Delete a data stream [delete-data-stream]

To delete a data stream and its backing indices in {{kib}}:

1. Go to the **Index Management** page using the navigation menu or the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md).
1. In the **Data Streams** tab, click the trash icon. The icon displays only if you have the `delete_index` [security privilege](elasticsearch://reference/elasticsearch/security-privileges.md) for the data stream.

You can also use the [delete data stream API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-delete-data-stream).
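For example (data stream name illustrative):

```console
DELETE _data_stream/my-data-stream
```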

The logs you want to parse look similar to this:

These logs contain a timestamp, IP address, and user agent. You want to give these three items their own field in {{es}} for faster searches and visualizations. You also want to know where the request is coming from.

1. In {{kib}}, open the main menu and click **Stack Management** > **Ingest Pipelines**.
1. In {{kib}}, go to the **Ingest Pipelines** management page using the navigation menu or the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md).

:::{image} /manage-data/images/elasticsearch-reference-ingest-pipeline-list.png
:alt: Kibana's Ingest Pipelines list view
16 changes: 9 additions & 7 deletions manage-data/ingest/transform-enrich/ingest-pipelines.md
You can create and manage ingest pipelines using {{kib}}'s **Ingest Pipelines**

## Create and manage pipelines [create-manage-ingest-pipelines]

In {{kib}}, go to the **Ingest Pipelines** management page using the navigation menu or the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md).

From the list view, you can:

* View a list of your pipelines and drill down into details
* Edit or clone existing pipelines
PUT _ingest/pipeline/my-pipeline
}
```
1. All processors in this pipeline will use the `classic` access pattern.
2. The access pattern determines how field paths are resolved when processors read values from and write values to ingest documents.

### Classic field access pattern [access-source-pattern-classic]

POST /_ingest/pipeline/_simulate
1. Explicitly declaring to use the `classic` access pattern in the pipeline. This is the default value.
2. We are reading a value from the field `foo.bar`.
3. We are writing its value to the field `a.b.c.d`.
4. This document uses nested JSON objects in its structure.
5. This document uses dotted field names in its structure.

```console-result
"_id": "id",
"_index": "index",
"_version": "-3",
"_source": {
"foo": {
"bar": "baz" <1>
},
2. The value from the `foo.bar` field is written to a nested json structure at field `a.b.c.d`. The processor creates objects for each field in the path.
3. The second document uses a dotted field name for `foo.bar`. The `classic` access pattern does not recognize dotted field names, and so nothing is copied.

If the documents you are ingesting contain dotted field names, to read them with the `classic` access pattern you must use the [`dot_expander`](elasticsearch://reference/enrich-processor/dot-expand-processor.md) processor. However, this approach is not always practical. Consider the following document:

```json
{
"event": {
"tags": {
"http.host": "localhost:9200",
"http.host.name": "localhost",
"http.host.port": 9200
}
}
POST /_ingest/pipeline/_simulate
"_id": "id",
"_index": "index",
"_version": "-3",
"_source": {
"foo": {
"bar": "baz" <1>
},
6 changes: 4 additions & 2 deletions manage-data/ingest/transform-enrich/logstash-pipelines.md

This content applies to: [![Elasticsearch](/manage-data/images/serverless-es-badge.svg "")](../../../solutions/search.md) [![Observability](/manage-data/images/serverless-obs-badge.svg "")](../../../solutions/observability.md) [![Security](/manage-data/images/serverless-sec-badge.svg "")](../../../solutions/security/elastic-security-serverless.md)

On the **{{ls-pipelines-app}}** management page, you can control multiple {{ls}} instances and pipeline configurations.

:::{image} /manage-data/images/serverless-logstash-pipelines-management.png
:alt: {{ls-pipelines-app}}
After you configure {{ls}} to use centralized pipeline management, you can no lo
## Manage pipelines [logstash-pipelines-manage-pipelines]

1. [Configure centralized pipeline management](logstash://reference/configuring-centralized-pipelines.md).
1. To add a new pipeline, go to the **{{ls-pipelines-app}}** management page using the navigation menu or the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md).
1. Click **Create pipeline**.
1. Provide the following details, then click **Create and deploy**.

Pipeline ID
: A name that uniquely identifies the pipeline. This is the ID that you used when you configured centralized pipeline management and specified a list of pipeline IDs in the `xpack.management.pipeline.id` setting.
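As a point of reference, the pipeline IDs you enter here must match the list configured in `logstash.yml` under the `xpack.management.pipeline.id` setting; for example (IDs illustrative):

```yaml
xpack.management.pipeline.id: ["main", "apache_logs"]
```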
Follow these steps to configure or remove data stream lifecycle settings for an

Note that these steps are for data stream lifecycle only. For the steps to configure {{ilm}}, refer to the [{{ilm-init}} documentation](/manage-data/lifecycle/index-lifecycle-management.md). For a comparison between the two, refer to [](/manage-data/lifecycle.md).

## Set a data stream’s lifecycle [set-lifecycle]

To add or change the retention period of your data stream, use the **Index Management** tools in {{kib}} or the {{es}} [lifecycle API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-put-data-lifecycle).
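With the API, setting the retention period is a single request; for example (data stream name and retention value illustrative):

```console
PUT _data_stream/my-data-stream/_lifecycle
{
  "data_retention": "30d"
}
```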


To change the data retention settings for a data stream:

1. Go to the **Index Management** page using the navigation menu or the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md).
1. Open the **Data Streams** tab.
1. Use the search tool to find the data stream you're looking for.
1. Select the data stream to view its details.
1. In the data stream details pane, select **Manage > Edit data retention** to adjust the settings. You can do any of the following:
:::
:::::

The lifecycle changes are applied to all backing indices of the data stream.

You can see the effect of the change in {{kib}} or using the {{es}} [explain API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-explain-data-lifecycle):
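With the explain API, you target a backing index of the data stream; a sketch (the backing index name is illustrative):

```console
GET .ds-my-data-stream-2024.01.01-000001/_lifecycle/explain
```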

:sync: kibana
To check the data retention settings for a data stream:

1. Go to the **Index Management** page using the navigation menu or the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md).
1. Open the **Data Streams** tab.
1. Use the search tool to find the data stream you're looking for.
1. Select the data stream to view its details. The flyout shows the data retention settings for the data stream. Note that if the data stream is currently managed by an [{{ilm-init}} policy](/manage-data/lifecycle/index-lifecycle-management.md), the **Effective data retention** may differ from the retention value that you've set in the data stream, as indicated by the **Data retention**.

The response will look like:
:::
:::::

## Remove the lifecycle for a data stream [delete-lifecycle]

To remove the lifecycle of a data stream, use the **Index Management** tools in {{kib}} or the {{es}} [delete lifecycle API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-delete-data-lifecycle).
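With the API, removing the lifecycle is a single request (data stream name illustrative):

```console
DELETE _data_stream/my-data-stream/_lifecycle
```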


:::::{tab-set}

To remove a data stream's lifecycle:

1. Go to the **Index Management** page using the navigation menu or the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md).
1. Open the **Data Streams** tab.
1. Use the search tool to find the data stream you're looking for.
1. Select the data stream to view its details.
1. In the data stream details pane, select **Manage > Edit data retention**.
8 changes: 4 additions & 4 deletions manage-data/lifecycle/data-tiers.md
When data reaches the `cold` or `frozen` phases, it is automatically converted to a searchable snapshot.

9. Delete the searchable snapshots by following these steps:

1. Open Kibana, go to the **Snapshot and Restore** management page using the navigation menu or the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md), and go to the **Snapshots** tab. (Alternatively, go to `<kibana-endpoint>/app/management/data/snapshot_restore/snapshots`.)
2. Search for `*<ilm-policy-name>*`.
3. Bulk select the snapshots and delete them.

This setting will not unallocate a currently allocated shard, but might prevent

### Automatic data tier migration [data-tier-migration]

{{ilm-init}} automatically transitions managed indices through the available data tiers using the [migrate](elasticsearch://reference/elasticsearch/index-lifecycle-actions/ilm-migrate.md) action. By default, this action is automatically injected in every phase.

### Disable data tier allocation [data-tier-allocation]
You can explicitly disable data allocation for data tier migration in an ILM policy with the following setting:
Expand All @@ -520,7 +520,7 @@ For example:
},
```

Defining the `migrate` action with `"enabled": false` for a data tier [disables automatic {{ilm-init}} shard migration](elasticsearch://reference/elasticsearch/index-lifecycle-actions/ilm-migrate.md#ilm-disable-migrate-ex). This is useful if, for example, you’re using the [allocate action](elasticsearch://reference/elasticsearch/index-lifecycle-actions/ilm-allocate.md) to manually specify allocation rules.

#### Important note
Do not disable automatic {{ilm-init}} migration without manually defining {{ilm-init}} allocation rules. If data migration is disabled without allocation rules defined, this can prevent data from moving to the specified data tier, even though the data has successfully moved through the {{ilm-init}} policy with a status of `complete`.