
Commit 95e4199

Merge branch 'main' into issue-102704-change-apm-default

mshustov committed Nov 5, 2021
2 parents 626cbe0 + a2296c5
Showing 1,461 changed files with 15,858 additions and 13,866 deletions.
7 changes: 7 additions & 0 deletions .buildkite/pipelines/hourly.yml
@@ -147,6 +147,13 @@ steps:
    key: linting
    timeout_in_minutes: 90

  - command: .buildkite/scripts/steps/lint_with_types.sh
    label: 'Linting (with types)'
    agents:
      queue: c2-16
    key: linting_with_types
    timeout_in_minutes: 90

  - command: .buildkite/scripts/steps/checks.sh
    label: 'Checks'
    agents:
7 changes: 7 additions & 0 deletions .buildkite/pipelines/pull_request/base.yml
@@ -145,6 +145,13 @@ steps:
    key: linting
    timeout_in_minutes: 90

  - command: .buildkite/scripts/steps/lint_with_types.sh
    label: 'Linting (with types)'
    agents:
      queue: c2-16
    key: linting_with_types
    timeout_in_minutes: 90

  - command: .buildkite/scripts/steps/checks.sh
    label: 'Checks'
    agents:
12 changes: 11 additions & 1 deletion .buildkite/scripts/bootstrap.sh
@@ -6,7 +6,17 @@ source .buildkite/scripts/common/util.sh
source .buildkite/scripts/common/setup_bazel.sh

echo "--- yarn install and bootstrap"
retry 2 15 yarn kbn bootstrap
if ! yarn kbn bootstrap; then
  echo "bootstrap failed, trying again in 15 seconds"
  sleep 15

  # Most bootstrap failures will result in a problem inside node_modules that does not get fixed on the next bootstrap
  # So, we should just delete node_modules in between attempts
  rm -rf node_modules

  echo "--- yarn install and bootstrap, attempt 2"
  yarn kbn bootstrap
fi

###
### upload ts-refs-cache artifacts as quickly as possible so they are available for download
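The `retry` helper being removed is defined in `.buildkite/scripts/common/util.sh` and is not shown in this diff. A minimal sketch of such a helper, assuming the signature `retry <attempts> <delay_seconds> <command...>` implied by the old call site, shows why a generic retry was not enough here:

```bash
#!/usr/bin/env bash
# Hypothetical sketch of a generic retry helper; the real implementation in
# .buildkite/scripts/common/util.sh is not part of this diff.
# Usage: retry <attempts> <delay_seconds> <command...>
retry() {
  local attempts=$1 delay=$2 attempt=1
  shift 2
  while true; do
    "$@" && return 0  # success: stop retrying
    if [ "$attempt" -ge "$attempts" ]; then
      echo "command failed after $attempts attempts" >&2
      return 1
    fi
    echo "attempt $attempt failed, retrying in $delay seconds"
    sleep "$delay"
    attempt=$((attempt + 1))
  done
}
```

A helper like this re-runs the same command unchanged, while a failed `yarn kbn bootstrap` usually leaves `node_modules` in a state the next attempt cannot repair, hence the explicit second attempt above that deletes `node_modules` in between.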
12 changes: 12 additions & 0 deletions .buildkite/scripts/steps/lint_with_types.sh
@@ -0,0 +1,12 @@
#!/usr/bin/env bash

set -euo pipefail

source .buildkite/scripts/common/util.sh

export BUILD_TS_REFS_DISABLE=false
.buildkite/scripts/bootstrap.sh

echo '--- Lint: eslint (with types)'
checks-reporter-with-killswitch "Lint: eslint (with types)" \
  node scripts/eslint_with_types
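To reproduce this CI step locally (a sketch that assumes a Kibana checkout able to run `yarn kbn bootstrap`; the script names come straight from the files above):

```bash
# Build TS project references during bootstrap (what BUILD_TS_REFS_DISABLE=false
# opts into in CI), then run the type-aware ESLint pass
export BUILD_TS_REFS_DISABLE=false
yarn kbn bootstrap
node scripts/eslint_with_types
```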
17 changes: 6 additions & 11 deletions .eslintrc.js
@@ -902,17 +902,6 @@ module.exports = {
      },
    },

    /**
     * Cases overrides
     */
    {
      files: ['x-pack/plugins/cases/**/*.{js,mjs,ts,tsx}'],
      rules: {
        'no-duplicate-imports': 'off',
        '@typescript-eslint/no-duplicate-imports': ['error'],
      },
    },

    /**
     * Security Solution overrides. These rules below are maintained and owned by
     * the people within the security-solution-platform team. Please see ping them
@@ -928,6 +917,8 @@ module.exports = {
        'x-pack/plugins/security_solution/common/**/*.{js,mjs,ts,tsx}',
        'x-pack/plugins/timelines/public/**/*.{js,mjs,ts,tsx}',
        'x-pack/plugins/timelines/common/**/*.{js,mjs,ts,tsx}',
        'x-pack/plugins/cases/public/**/*.{js,mjs,ts,tsx}',
        'x-pack/plugins/cases/common/**/*.{js,mjs,ts,tsx}',
      ],
      rules: {
        'import/no-nodejs-modules': 'error',
@@ -949,10 +940,12 @@
      files: [
        'x-pack/plugins/security_solution/**/*.{ts,tsx}',
        'x-pack/plugins/timelines/**/*.{ts,tsx}',
        'x-pack/plugins/cases/**/*.{ts,tsx}',
      ],
      excludedFiles: [
        'x-pack/plugins/security_solution/**/*.{test,mock,test_helper}.{ts,tsx}',
        'x-pack/plugins/timelines/**/*.{test,mock,test_helper}.{ts,tsx}',
        'x-pack/plugins/cases/**/*.{test,mock,test_helper}.{ts,tsx}',
      ],
      rules: {
        '@typescript-eslint/no-non-null-assertion': 'error',
@@ -963,6 +956,7 @@
      files: [
        'x-pack/plugins/security_solution/**/*.{ts,tsx}',
        'x-pack/plugins/timelines/**/*.{ts,tsx}',
        'x-pack/plugins/cases/**/*.{ts,tsx}',
      ],
      rules: {
        '@typescript-eslint/no-this-alias': 'error',
@@ -985,6 +979,7 @@
      files: [
        'x-pack/plugins/security_solution/**/*.{js,mjs,ts,tsx}',
        'x-pack/plugins/timelines/**/*.{js,mjs,ts,tsx}',
        'x-pack/plugins/cases/**/*.{js,mjs,ts,tsx}',
      ],
      plugins: ['eslint-plugin-node', 'react'],
      env: {
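With the Cases globs folded into the shared Security Solution overrides, a quick spot-check is to lint just the Cases plugin and confirm the stricter rules now fire there. This is a sketch; it assumes Kibana's `scripts/eslint` wrapper accepts a path argument, which may vary by branch:

```bash
# Lint only the Cases plugin; the shared overrides should now apply rules
# such as @typescript-eslint/no-non-null-assertion to these files
node scripts/eslint x-pack/plugins/cases
```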
1 change: 1 addition & 0 deletions .github/CODEOWNERS
@@ -232,6 +232,7 @@
/src/core/ @elastic/kibana-core
/src/plugins/saved_objects_tagging_oss @elastic/kibana-core
/config/kibana.yml @elastic/kibana-core
/typings/ @elastic/kibana-core
/x-pack/plugins/banners/ @elastic/kibana-core
/x-pack/plugins/features/ @elastic/kibana-core
/x-pack/plugins/licensing/ @elastic/kibana-core
1 change: 1 addition & 0 deletions .gitignore
@@ -19,6 +19,7 @@ target
.idea
*.iml
*.log
types.eslint.config.js

# Ignore example plugin builds
/examples/*/build
22 changes: 22 additions & 0 deletions dev_docs/contributing/how_we_use_github.mdx
@@ -37,6 +37,28 @@ Pull requests are made into the master branch and then backported when it is saf
- Resolve merge conflicts by rebasing the target branch over your feature branch, and force-pushing (see below for instructions).
- When merging, we’ll squash your commits into a single commit.

### Commit using your `@elastic.co` email address

To assist with developer tooling, we ask that all Elastic engineers use their `@elastic.co` email address when committing to the Kibana repo. We have implemented a CI check that validates that any PR opened by a member of the `@elastic` organization has at least one commit attributed to an `@elastic.co` email address. If your PR is failing because of this check, you can fix it by following these steps:

1. Ensure that you don't have any staged changes
1. Checkout the branch for your PR
1. Update the git config for your current repository to commit with your `@elastic.co` email:

```bash
git config --local user.email YOUR_ELASTIC_EMAIL@elastic.co
```

1. Create a commit using the new email address

```bash
git commit -m 'commit using @elastic.co' --allow-empty
```

1. Push the new commit to your PR and the status should now be green

**Note:** If doing this prevents your commits from being attributed to your GitHub account, make sure to add your `@elastic.co` address at [https://github.com/settings/emails](https://github.com/settings/emails).
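Before pushing, you can confirm the new commit carries the right address (plain git, nothing Kibana-specific):

```bash
# Print the author email of the most recent commit on the current branch
git log -1 --format='%ae'
```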

### Rebasing and fixing merge conflicts

Rebasing can be tricky, and fixing merge conflicts can be even trickier because it involves force pushing. This is all compounded by the fact that attempting to push a rebased branch remotely will be rejected by git, and you’ll be prompted to do a pull, which is not at all what you should do (this will really mess up your branch’s history).
2 changes: 1 addition & 1 deletion docs/CHANGELOG.asciidoc
@@ -846,7 +846,7 @@ The default values for the session timeout `xpack.security.session.{lifespan|idl
*Impact* +
Use the following default values:
* `xpack.security.session.idleTimeout: 1h`
* `xpack.security.session.idleTimeout: 8h`
* `xpack.security.session.lifespan: 30d`
====

2 changes: 1 addition & 1 deletion docs/api/dashboard-api.asciidoc
@@ -1,7 +1,7 @@
[[dashboard-api]]
== Import and export dashboard APIs

deprecated::[7.15.0,These experimental APIs have been deprecated in favor of <<saved-objects-api-import>> and <<saved-objects-api-export>>.]
deprecated::[7.15.0,Both of these APIs have been deprecated in favor of <<saved-objects-api-import>> and <<saved-objects-api-export>>.]

Import and export dashboards with the corresponding saved objects, such as visualizations, saved
searches, and index patterns.
2 changes: 1 addition & 1 deletion docs/api/dashboard/export-dashboard.asciidoc
@@ -6,7 +6,7 @@

deprecated::[7.15.0,Use <<saved-objects-api-export>> instead.]

experimental[] Export dashboards and corresponding saved objects.
Export dashboards and corresponding saved objects.

[[dashboard-api-export-request]]
==== Request
2 changes: 1 addition & 1 deletion docs/api/dashboard/import-dashboard.asciidoc
@@ -6,7 +6,7 @@

deprecated::[7.15.0,Use <<saved-objects-api-import>> instead.]

experimental[] Import dashboards and corresponding saved objects.
Import dashboards and corresponding saved objects.

[[dashboard-api-import-request]]
==== Request
22 changes: 22 additions & 0 deletions docs/developer/contributing/development-github.asciidoc
@@ -42,6 +42,28 @@ explanation of _why_ you made the changes that you did.
feature branch, and force-pushing (see below for instructions).
* When merging, we'll squash your commits into a single commit.

[discrete]
==== Commit using your `@elastic.co` email address

To assist with developer tooling, we ask that all Elastic engineers use their `@elastic.co` email address when committing to the Kibana repo. We have implemented a CI check that validates that any PR opened by a member of the `@elastic` organization has at least one commit attributed to an `@elastic.co` email address. If your PR is failing because of this check, you can fix it by following these steps:

1. Ensure that you don't have any staged changes
2. Checkout the branch for your PR
3. Update the git config for your current repository to commit with your `@elastic.co` email:
+
["source","shell"]
-----------
git config --local user.email YOUR_ELASTIC_EMAIL@elastic.co
-----------
4. Create a commit using the new email address
+
["source","shell"]
-----------
git commit -m 'commit using @elastic.co' --allow-empty
-----------
+
5. Push the new commit to your PR and the status should now be green
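If the tip of your branch was already committed under the wrong address, an alternative to the empty commit is to re-stamp it. This uses standard git, but note that rewriting a pushed commit requires a force-push:

```bash
# Re-author the most recent commit using the email now set in git config
git commit --amend --reset-author --no-edit

# Confirm the result before force-pushing
git log -1 --format='%an <%ae>'
```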

[discrete]
==== Rebasing and fixing merge conflicts

18 changes: 9 additions & 9 deletions docs/maps/asset-tracking-tutorial.asciidoc
@@ -156,16 +156,16 @@
. Leave the terminal window open and Logstash running throughout this tutorial.

[float]
==== Step 3: Create a {kib} index pattern for the tri_met_tracks {es} index
==== Step 3: Create a data view for the tri_met_tracks {es} index

. In Kibana, open the main menu, and click *Stack Management > Index Patterns*.
. Click *Create index pattern*.
. Give the index pattern a name: *tri_met_tracks**.
. In {kib}, open the main menu, and click *Stack Management > Data Views*.
. Click *Create data view*.
. Give the data view a name: *tri_met_tracks**.
. Click *Next step*.
. Set the *Time field* to *time*.
. Click *Create index pattern*.
. Click *Create data view*.

{kib} shows the fields in your index pattern.
{kib} shows the fields in your data view.

[role="screenshot"]
image::maps/images/asset-tracking-tutorial/index_pattern.png[]
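As a sanity check outside the UI, you can confirm the `time` field is mapped before creating the data view. This sketch assumes Elasticsearch is reachable on `localhost:9200` without authentication:

```bash
# Inspect how the `time` field is mapped across the tri_met_tracks indices
curl -s 'http://localhost:9200/tri_met_tracks*/_mapping/field/time'
```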
@@ -174,7 +174,7 @@
==== Step 4: Explore the Portland bus data

. Open the main menu, and click *Discover*.
. Set the index pattern to *tri_met_tracks**.
. Set the data view to *tri_met_tracks**.
. Open the <<set-time-filter, time filter>>, and set the time range to the last 15 minutes.
. Expand a document and explore some of the fields that you will use later in this tutorial: `bearing`, `in_congestion`, `location`, and `vehicle_id`.

@@ -202,7 +202,7 @@ Add a layer to show the bus routes for the last 15 minutes.

. Click *Add layer*.
. Click *Tracks*.
. Select the *tri_met_tracks** index pattern.
. Select the *tri_met_tracks** data view.
. Define the tracks:
.. Set *Entity* to *vehicle_id*.
.. Set *Sort* to *time*.
@@ -225,7 +225,7 @@ image::maps/images/asset-tracking-tutorial/tracks_layer.png[]
Add a layer that uses attributes in the data to set the style and orientation of the buses. You’ll see the direction buses are headed and what traffic is like.

. Click *Add layer*, and then select *Top Hits per entity*.
. Select the *tri_met_tracks** index pattern.
. Select the *tri_met_tracks** data view.
. To display the most recent location per bus:
.. Set *Entity* to *vehicle_id*.
.. Set *Documents per entity* to 1.
4 changes: 2 additions & 2 deletions docs/maps/geojson-upload.asciidoc
@@ -30,11 +30,11 @@ a preview of the data on the map.
. Use the default *Index type* of {ref}/geo-point.html[geo_point] for point data,
or override it and select {ref}/geo-shape.html[geo_shape].
All other shapes will default to a type of `geo_shape`.
. Leave the default *Index name* and *Index pattern* names (the name of the uploaded
. Leave the default *Index name* and *Data view* names (the name of the uploaded
file minus its extension). You might need to change the index name if it is invalid.
. Click *Import file*.
+
Upon completing the indexing process and creating the associated index pattern,
Upon completing the indexing process and creating the associated data view,
the Elasticsearch responses are shown on the *Layer add panel* and the indexed data
appears on the map. The geospatial data on the map
should be identical to the locally-previewed data, but now it's indexed data from Elasticsearch.
4 changes: 2 additions & 2 deletions docs/maps/indexing-geojson-data-tutorial.asciidoc
@@ -58,8 +58,8 @@ auto-populate *Index type* with either {ref}/geo-point.html[geo_point] or
. Click *Import file*.
+
You'll see activity as the GeoJSON Upload utility creates a new index
and index pattern for the data set. When the process is complete, you should
receive messages that the creation of the new index and index pattern
and data view for the data set. When the process is complete, you should
receive messages that the creation of the new index and data view
were successful.

. Click *Add layer*.
4 changes: 2 additions & 2 deletions docs/maps/maps-aggregations.asciidoc
@@ -62,7 +62,7 @@ To enable a grid aggregation layer:
To enable a blended layer that dynamically shows clusters or documents:

. Click *Add layer*, then select the *Documents* layer.
. Configure *Index pattern* and the *Geospatial field*.
. Configure *Data view* and the *Geospatial field*.
. In *Scaling*, select *Show clusters when results exceed 10000*.


@@ -77,7 +77,7 @@ then accumulates the most relevant documents based on sort order for each entry
To enable top hits:

. Click *Add layer*, then select the *Top hits per entity* layer.
. Configure *Index pattern* and *Geospatial field*.
. Configure *Data view* and *Geospatial field*.
. Set *Entity* to the field that identifies entities in your documents.
This field will be used in the terms aggregation to group your documents into entity buckets.
. Set *Documents per entity* to configure the maximum number of documents accumulated per entity.
6 changes: 3 additions & 3 deletions docs/maps/maps-getting-started.asciidoc
@@ -49,7 +49,7 @@ and lighter shades will symbolize countries with less traffic.
. From the **Layer** dropdown menu, select **World Countries**.

. In **Statistics source**, set:
** **Index pattern** to **kibana_sample_data_logs**
** **Data view** to **kibana_sample_data_logs**
** **Join field** to **geo.dest**

. Click **Add layer**.
@@ -95,7 +95,7 @@ The layer is only visible when users zoom in.

. Click **Add layer**, and then click **Documents**.

. Set **Index pattern** to **kibana_sample_data_logs**.
. Set **Data view** to **kibana_sample_data_logs**.

. Set **Scaling** to *Limits results to 10000.*

@@ -129,7 +129,7 @@ more total bytes transferred, and smaller circles will symbolize
grids with fewer bytes transferred.

. Click **Add layer**, and select **Clusters and grids**.
. Set **Index pattern** to **kibana_sample_data_logs**.
. Set **Data view** to **kibana_sample_data_logs**.
. Click **Add layer**.
. In **Layer settings**, set:
** **Name** to `Total Requests and Bytes`
6 changes: 3 additions & 3 deletions docs/maps/reverse-geocoding-tutorial.asciidoc
@@ -141,7 +141,7 @@ PUT kibana_sample_data_logs/_settings
----------------------------------

. Open the main menu, and click *Discover*.
. Set the index pattern to *kibana_sample_data_logs*.
. Set the data view to *kibana_sample_data_logs*.
. Open the <<set-time-filter, time filter>>, and set the time range to the last 30 days.
. Scan through the list of *Available fields* until you find the `csa.GEOID` field. You can also search for the field by name.
. Click image:images/reverse-geocoding-tutorial/add-icon.png[Add icon] to toggle the field into the document table.
Expand All @@ -162,10 +162,10 @@ Now that our web traffic contains CSA region identifiers, you'll visualize CSA r
. Click *Choropleth*.
. For *Boundaries source*:
.. Select *Points, lines, and polygons from Elasticsearch*.
.. Set *Index pattern* to *csa*.
.. Set *Data view* to *csa*.
.. Set *Join field* to *GEOID*.
. For *Statistics source*:
.. Set *Index pattern* to *kibana_sample_data_logs*.
.. Set *Data view* to *kibana_sample_data_logs*.
.. Set *Join field* to *csa.GEOID.keyword*.
. Click *Add layer*.
. Scroll to *Layer Style* and Set *Label* to *Fixed*.
12 changes: 6 additions & 6 deletions docs/maps/trouble-shooting.asciidoc
@@ -21,18 +21,18 @@ image::maps/images/inspector.png[]
=== Solutions to common problems

[float]
==== Index not listed when adding layer
==== Data view not listed when adding layer

* Verify your geospatial data is correctly mapped as {ref}/geo-point.html[geo_point] or {ref}/geo-shape.html[geo_shape].
** Run `GET myIndexPatternTitle/_field_caps?fields=myGeoFieldName` in <<console-kibana, Console>>, replacing `myIndexPatternTitle` and `myGeoFieldName` with your index pattern title and geospatial field name.
** Run `GET myIndexName/_field_caps?fields=myGeoFieldName` in <<console-kibana, Console>>, replacing `myIndexName` and `myGeoFieldName` with your index and geospatial field name.
** Ensure the response specifies `type` as `geo_point` or `geo_shape` (see the sample request and response after this list).
* Verify your geospatial data is correctly mapped in your <<managing-fields,index pattern>>.
** Open your index pattern in <<management, Stack Management>>.
* Verify your geospatial data is correctly mapped in your <<managing-fields, data view>>.
** Open your data view in <<management, Stack Management>>.
** Ensure your geospatial field type is `geo_point` or `geo_shape`.
** Ensure your geospatial field is searchable and aggregatable.
** If your geospatial field type does not match your Elasticsearch mapping, click the *Refresh* button to refresh the field list from Elasticsearch.
* Index patterns with thousands of fields can exceed the default maximum payload size.
Increase <<settings, `server.maxPayload`>> for large index patterns.
* Data views with thousands of fields can exceed the default maximum payload size.
Increase <<settings, `server.maxPayload`>> for large data views.
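For the field-capabilities check above, here is an illustrative request and the shape of a healthy response. `myIndexName` and `myGeoFieldName` are placeholders, and the response is abridged:

```bash
# Hypothetical example: check how myGeoFieldName is mapped in myIndexName
curl -s 'http://localhost:9200/myIndexName/_field_caps?fields=myGeoFieldName'
# A correctly mapped field nests its capabilities under the geo type, e.g.:
# {"indices":["myIndexName"],
#  "fields":{"myGeoFieldName":{"geo_point":{"type":"geo_point","searchable":true,"aggregatable":true}}}}
```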

[float]
==== Features are not displayed