diff --git a/serverless/pages/action-connectors.mdx b/serverless/pages/action-connectors.mdx
index 856f1a32..98c109cc 100644
--- a/serverless/pages/action-connectors.mdx
+++ b/serverless/pages/action-connectors.mdx
@@ -46,7 +46,7 @@ The list of available connectors varies by project type.
     {
       "title": "Google Gemini",
       "description": "Send a request to Google Gemini.",
-      "href": "https://www.elastic.co/guide/en/kibana/master/gemini-action-type.html",
+      "href": "((kibana-ref))/gemini-action-type.html",
       "target": "_blank"
     },
     {
diff --git a/serverless/pages/clients-ruby-getting-started.mdx b/serverless/pages/clients-ruby-getting-started.mdx
index cffd0338..fc14b07d 100644
--- a/serverless/pages/clients-ruby-getting-started.mdx
+++ b/serverless/pages/clients-ruby-getting-started.mdx
@@ -15,7 +15,6 @@ client for ((es3)), shows you how to initialize the client, and how to perform b
 * Ruby 3.0 or higher installed on your system.
 * To use the `elasticsearch-serverless` gem, you must have an API key and Elasticsearch Endpoint for an ((es3)) project.
-*
 ## Installation
diff --git a/serverless/pages/elasticsearch-developer-tools.mdx b/serverless/pages/elasticsearch-developer-tools.mdx
index 2c58bf30..929e0dce 100644
--- a/serverless/pages/elasticsearch-developer-tools.mdx
+++ b/serverless/pages/elasticsearch-developer-tools.mdx
@@ -6,7 +6,6 @@ tags: [ 'serverless', 'elasticsearch', 'overview' ]
 ---
-## Developer tools
 A number of developer tools are available in your project's UI under the **Dev Tools** section.
diff --git a/serverless/pages/explore-your-data-discover-your-data.mdx b/serverless/pages/explore-your-data-discover-your-data.mdx
index 8fe1a34a..428051ca 100644
--- a/serverless/pages/explore-your-data-discover-your-data.mdx
+++ b/serverless/pages/explore-your-data-discover-your-data.mdx
@@ -32,34 +32,31 @@ Tell ((kib)) where to find the data you want to explore, and then specify the ti
 1. Once the book sample data has been ingested, navigate to **Explore → Discover** and click **Create data view**.
-2. Give your data view a name.
+1. Give your data view a name.
-
+
-3. Start typing in the **Index pattern** field, and the names of indices, data streams, and aliases that match your input will be displayed.
+1. Start typing in the **Index pattern** field, and the names of indices, data streams, and aliases that match your input will be displayed.
-- To match multiple sources, use a wildcard (*), for example, `b*` and any indices starting with the letter `b` display.
+   - To match multiple sources, use a wildcard (*). For example, `b*` matches any indices that start with the letter `b`.
+   - To match multiple sources, enter their names separated by a comma. Do not include a space after the comma. For example, `books,magazines` would match two indices: `books` and `magazines`.
+   - To exclude a source, use a minus sign (-), for example `-books`.
-- To match multiple sources, enter their names separated by a comma. Do not include a space after the comma. For example `books,magazines` would match two indices: `books` and `magazines`.
+1. In the **Timestamp** field dropdown, select `release_date`.
-- To exclude a source, use a minus sign (-), for example `-books`.
+   - If you don't set a time field, you can't use global time filters on your dashboards. Leaving the time field unset might be useful if you have multiple time fields and want to create dashboards that combine visualizations based on different timestamps.
+   - If your index doesn't have time-based data, choose **I don't want to use the time filter**.
-4. In the **Timestamp** field dropdown, and then select `release_date`.
+1. Click **Show advanced settings** to:
-- If you don't set a time field, you can't use global time filters on your dashboards. Leaving the time field unset might be useful if you have multiple time fields and want to create dashboards that combine visualizations based on different timestamps.
+   - Display hidden and system indices.
+   - Specify your own data view name. For example, enter your Elasticsearch index alias name.
-- If your index doesn't have time-based data, choose **I don't want to use the time filter**.
+1. Click **Save data view to ((kib))**.
-5. Click **Show advanced settings** to:
+1. Adjust the time range to **Last 40 years** to view all your book data.
-- Display hidden and system indices.
-- Specify your own data view name. For example, enter your Elasticsearch index alias name.
-
-6. Click **Save data view to ((kib))**.
-
-7. Adjust the time range to view data for the **Last 40 years** to view all your book data.
-
-
+
@@ -69,11 +66,11 @@ Tell ((kib)) where to find the data you want to explore, and then specify the ti
 1. In the sidebar, enter `au` in the search field to find the `author` field.
-2. In the **Available fields** list, click `author` to view its most popular values.
+1. In the **Available fields** list, click `author` to view its most popular values.
-**Discover** shows the top 10 values and the number of records used to calculate those values.
+   **Discover** shows the top 10 values and the number of records used to calculate those values.
-3. Click to toggle the field into the document table. You can also drag the field from the **Available fields** list into the document table.
+1. Click to toggle the field into the document table. You can also drag the field from the **Available fields** list into the document table.
 ## Add a field to your ((data-source))
@@ -85,21 +82,21 @@ the same way you do with other fields.
 1. In the sidebar, click **Add a field**.
-2. In the **Create field** form, enter `hello` for the name.
+1. In the **Create field** form, enter `hello` for the name.
-3. Turn on **Set value**.
+1. Turn on **Set value**.
-4. Define the script using the Painless scripting language. Runtime fields require an `emit()`.
+1. Define the script using the Painless scripting language. Runtime fields require an `emit()`.
    ```ts
    emit("Hello World!");
    ```
-5. Click **Save**.
+1. Click **Save**.
-6. In the sidebar, search for the **hello** field, and then add it to the document table.
+1. In the sidebar, search for the **hello** field, and then add it to the document table.
-7. Create a second field named `authorabbrev` that combines the authors last name and first initial.
+1. Create a second field named `authorabbrev` that combines the author's last name and first initial.
    ```ts
    String str = doc['author.keyword'].value;
@@ -107,7 +104,7 @@ the same way you do with other fields.
    char ch1 = str.charAt(0);
    emit(doc['author.keyword'].value + ", " + ch1);
    ```
-8. Add `authorabbrev` to the document table.
+1. Add `authorabbrev` to the document table.
@@ -122,7 +119,7 @@ To search particular fields and build more complex queries, use the ((kib)) Quer
 Search the book data to find out which books have more than 500 pages:
 1. Enter `p`, and then select **page_count**.
-2. Select **>** for greater than and enter **500**, then click the refresh button or press the Enter key to see which books have more than 500 pages.
+1. Select **>** for greater than and enter **500**, then click the refresh button or press the Enter key to see which books have more than 500 pages.
@@ -136,10 +133,10 @@ and more.
 Exclude documents where the author is not Terry Pratchett:
 1. Click next to the query bar.
-2. In the **Add filter** pop-up, set the field to **author**, the operator to **is not**, and the value to **Terry Pratchett**.
-3. Click **Add filter**.
-4. Continue your exploration by adding more filters.
-5. To remove a filter, click the close icon (x) next to its name in the filter bar.
+1. In the **Add filter** pop-up, set the field to **author**, the operator to **is not**, and the value to **Terry Pratchett**.
+1. Click **Add filter**.
+1. Continue your exploration by adding more filters.
+1. To remove a filter, click the close icon (x) next to its name in the filter bar.
@@ -149,11 +146,11 @@ Dive into an individual document to view its fields and the documents that occur
 1. In the document table, click the expand icon to show document details.
-2. Scan through the fields and their values. If you find a field of interest, hover your mouse over the **Actions** column for filters and other options.
+1. Scan through the fields and their values. If you find a field of interest, hover your mouse over the **Actions** column for filters and other options.
-3. To create a view of the document that you can bookmark and share, click **Single document**.
+1. To create a view of the document that you can bookmark and share, click **Single document**.
-4. To view documents that occurred before or after the event you are looking at, click **Surrounding documents**.
+1. To view documents that occurred before or after the event you are looking at, click **Surrounding documents**.
@@ -163,26 +160,26 @@ Save your search so you can use it later to generate a CSV report, create visual
 1. In the upper right toolbar, click **Save**.
-2. Give your search a title.
+1. Give your search a title.
-3. Optionally store tags and the time range with the search.
+1. Optionally store tags and the time range with the search.
-4. Click **Save**.
+1. Click **Save**.
 ## Visualize your findings
 If a field can be [aggregated](((ref))/search-aggregations.html), you can quickly visualize it from **Discover**.
 1. In the sidebar, find and then click `release_date`.
-2. In the popup, click **Visualize**.
+1. In the popup, click **Visualize**.
-
-  ((kib)) creates a visualization best suited for this field.
-
+
+   ((kib)) creates a visualization best suited for this field.
+
-3. From the **Available fields** list, drag and drop `page_count` onto the workspace.
+1. From the **Available fields** list, drag and drop `page_count` onto the workspace.
-4. Save your visualization for use on a dashboard.
+1. Save your visualization for use on a dashboard.
 For geographical point fields, if you click **Visualize**, your data appears in a map.
@@ -201,12 +198,12 @@ From **Discover**, you can create a rule to periodically check when data goes ab
 1. Ensure that your data view, query, and filters fetch the data for which you want an alert.
-2. In the toolbar, click **Alerts → Create search threshold rule**.
+1. In the toolbar, click **Alerts → Create search threshold rule**.
    The **Create rule** form is pre-filled with the latest query sent to ((es)).
-3. Configure your ((es)) query and select a connector type.
+1. Configure your ((es)) query and select a connector type.
-4. Click **Save**.
+1. Click **Save**.
 For more about this and other rules provided in ((alert-features)), go to Alerting.
diff --git a/serverless/pages/explore-your-data-visualize-your-data-create-visualizations.mdx b/serverless/pages/explore-your-data-visualize-your-data-create-visualizations.mdx
index 78e5013d..715305a9 100644
--- a/serverless/pages/explore-your-data-visualize-your-data-create-visualizations.mdx
+++ b/serverless/pages/explore-your-data-visualize-your-data-create-visualizations.mdx
@@ -391,7 +391,7 @@ To personalize your dashboards, add your own logos and graphics with the **Image
 1. To save the new image panel to your dashboard click **Save**.
-To manage your uploaded image files, open the main menu, then click ** Management → Files**.
+To manage your uploaded image files, open the main menu, then click **Management → Files**.
diff --git a/serverless/pages/ingest-your-data-ingest-data-through-integrations-connector-client.mdx b/serverless/pages/ingest-your-data-ingest-data-through-integrations-connector-client.mdx
index b70440ba..7f958dde 100644
--- a/serverless/pages/ingest-your-data-ingest-data-through-integrations-connector-client.mdx
+++ b/serverless/pages/ingest-your-data-ingest-data-through-integrations-connector-client.mdx
@@ -2,12 +2,12 @@
 slug: /serverless/elasticsearch/ingest-data-through-integrations-connector-client
 title: Connector clients
 description: Set up and deploy self-managed connectors that run on your own infrastructure.
-tags: [ 'serverless', 'elasticsearch', 'ingest', 'connector', how to' ]
+tags: [ 'serverless', 'elasticsearch', 'ingest', 'connector', 'how to' ]
 status: in review
 ---
-  This page contains high-level instructions about setting up connector clients in your project's UI. 
+  This page contains high-level instructions about setting up connector clients in your project's UI.
 Because prerequisites and configuration details vary by data source, you'll need to refer to the individual connector documentation for specific details.
@@ -94,7 +94,7 @@ You'll need to update these values in your [`config.yml`](https://github.com/ela
 ## Step 2: Deploy your self-managed connector
-To use connector clients, you must deploy the connector service so your connector can talk to your ((es)) instance. 
+To use connector clients, you must deploy the connector service so your connector can talk to your ((es)) instance.
 The source code is hosted in the `elastic/connectors` repository.
 You have two deployment options:
@@ -168,7 +168,7 @@ Find all available Docker images in the [official Elastic Docker registry](https
 ### Run from source
-Running from source requires cloning the repository and running the code locally. 
+Running from source requires cloning the repository and running the code locally.
 Use this approach if you're actively customizing connectors.
 Follow these steps:
diff --git a/serverless/pages/knn-search.mdx b/serverless/pages/knn-search.mdx
index 96282a3a..8c699002 100644
--- a/serverless/pages/knn-search.mdx
+++ b/serverless/pages/knn-search.mdx
@@ -331,7 +331,7 @@ shards.
 The score of each hit is the sum of the `knn` and `query` scores.
 You can specify a `boost` value to give a weight to each score in the sum.
 In the example above, the scores will be calculated as
-```
+```txt
 score = 0.9 * match_score + 0.1 * knn_score
 ```
@@ -446,7 +446,7 @@ all index shards.
 The scoring for a doc with the above configured boosts would be:
-```
+```txt
 score = 0.9 * match_score + 0.1 * knn_score_image-vector + 0.5 * knn_score_title-vector
 ```
diff --git a/serverless/pages/maintenance-windows.mdx b/serverless/pages/maintenance-windows.mdx
index 06e7cf8a..e215d362 100644
--- a/serverless/pages/maintenance-windows.mdx
+++ b/serverless/pages/maintenance-windows.mdx
@@ -49,7 +49,7 @@ For example, you can suppress notifications for alerts from specific rules:
 - You can select only a single category when you turn on filters.
 - Some rules are not affected by maintenance window filters because their alerts do not contain requisite data.
-In particular, [((stack-monitor-app))](((kibana-ref))/kibana-alerts.html), [tracking containment](((kibana-ref))geo-alerting.html), [((anomaly-jobs)) health](((ml-docs))/ml-configuring-alerts.html), and [transform health](((ref))/transform-alerts.html) rules are not affected by the filters.
+In particular, [((stack-monitor-app))](((kibana-ref))/kibana-alerts.html), [tracking containment](((kibana-ref))/geo-alerting.html), [((anomaly-jobs)) health](((ml-docs))/ml-configuring-alerts.html), and [transform health](((ref))/transform-alerts.html) rules are not affected by the filters.
 A maintenance window can have any one of the following statuses:
diff --git a/serverless/pages/search-your-data-semantic-search-elser.mdx b/serverless/pages/search-your-data-semantic-search-elser.mdx
index a6699e74..b2e95859 100644
--- a/serverless/pages/search-your-data-semantic-search-elser.mdx
+++ b/serverless/pages/search-your-data-semantic-search-elser.mdx
@@ -309,11 +309,11 @@ search results.
-# Optimizing performance
+## Optimizing performance
-## Saving disk space by excluding the ELSER tokens from document source
+### Saving disk space by excluding the ELSER tokens from document source
 The tokens generated by ELSER must be indexed for use in the [sparse_vector query](((ref))/query-dsl-sparse-vector-query.html). However, it is not
diff --git a/serverless/pages/tags.mdx b/serverless/pages/tags.mdx
index 9bb184d1..465c7a02 100644
--- a/serverless/pages/tags.mdx
+++ b/serverless/pages/tags.mdx
@@ -55,6 +55,7 @@ To assign and remove tags, you must have `write` permission on the objects to wh
 1. Click the actions icon and then select **Manage assignments**.
 1. Select the objects to which you want to assign or remove tags.
+   ![Assign tags to saved objects](../images/tag-assignment.png)
 1. Click **Save tag assignments**.
diff --git a/serverless/pages/welcome-to-serverless.mdx b/serverless/pages/welcome-to-serverless.mdx
index 96171d8d..71202d54 100644
--- a/serverless/pages/welcome-to-serverless.mdx
+++ b/serverless/pages/welcome-to-serverless.mdx
@@ -7,15 +7,15 @@ layout: landing
 # Elastic Cloud Serverless
-Elastic Cloud Serverless products allow you to deploy and use Elastic for your use cases without managing the underlying Elastic cluster, 
-such as nodes, data tiers, and scaling. Serverless instances are fully-managed, autoscaled, and automatically upgraded by Elastic so you can 
-focus more on gaining value and insight from your data. 
+Elastic Cloud Serverless products allow you to deploy and use Elastic for your use cases without managing the underlying Elastic cluster,
+such as nodes, data tiers, and scaling. Serverless instances are fully managed, autoscaled, and automatically upgraded by Elastic so you can
+focus more on gaining value and insight from your data.
 Elastic provides three serverless solutions available on ((ecloud)):
 - **((es))** — Build powerful applications and search experiences using a rich ecosystem of vector search capabilities, APIs, and libraries.
-- **Elastic ((observability))** — Monitor your own platforms and services using powerful machine learning and analytics tools with your logs, metrics, traces, and APM data.
-- **Elastic ((security))** — Detect, investigate, and respond to threats, with SIEM, endpoint protection, and AI-powered analytics capabilities.
+- **((observability))** — Monitor your own platforms and services using powerful machine learning and analytics tools with your logs, metrics, traces, and APM data.
+- **((security))** — Detect, investigate, and respond to threats, with SIEM, endpoint protection, and AI-powered analytics capabilities.
 Serverless instances of the Elastic Stack that you create in ((ecloud)) are called **serverless projects**.