
[7.17][EMS] Update to ems-client@7.17.2 #187672

Closed
wants to merge 2 commits

Merge branch '7.17' into update/ems-client

2f0a91b
checks-reporter / X-Pack Chrome Functional tests / Group 22 succeeded Jul 9, 2024 in 20m 56s

node scripts/functional_tests --bail --kibana-install-dir /opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1720539735706293441/elastic/kibana-pull-request/kibana-build-xpack --include-tag ciGroup22


[truncated]
st/saved_object_api_integration/spaces_only/config.ts
   │ info Only running suites which are compatible with ES version 7.17.23
   │ info Only running suites (and their sub-suites) if they include the tag(s): [ 'ciGroup22' ]
 info testing x-pack/test/timeline/security_and_spaces/config_trial.ts
   │ info Only running suites which are compatible with ES version 7.17.23
   │ info Only running suites (and their sub-suites) if they include the tag(s): [ 'ciGroup22' ]
 info testing x-pack/test/ui_capabilities/security_and_spaces/config.ts
   │ info Only running suites which are compatible with ES version 7.17.23
   │ info Only running suites (and their sub-suites) if they include the tag(s): [ 'ciGroup22' ]
 info testing x-pack/test/ui_capabilities/security_only/config.ts
   │ info Only running suites which are compatible with ES version 7.17.23
   │ info Only running suites (and their sub-suites) if they include the tag(s): [ 'ciGroup22' ]
 info testing x-pack/test/ui_capabilities/spaces_only/config.ts
   │ info Only running suites which are compatible with ES version 7.17.23
   │ info Only running suites (and their sub-suites) if they include the tag(s): [ 'ciGroup22' ]
 info testing x-pack/test/upgrade_assistant_integration/config.js
   │ info Only running suites which are compatible with ES version 7.17.23
   │ info Only running suites (and their sub-suites) if they include the tag(s): [ 'ciGroup22' ]
 info testing x-pack/test/licensing_plugin/config.ts
   │ info Only running suites which are compatible with ES version 7.17.23
   │ info Only running suites (and their sub-suites) if they include the tag(s): [ 'ciGroup22' ]
 info testing x-pack/test/licensing_plugin/config.public.ts
   │ info Only running suites which are compatible with ES version 7.17.23
   │ info Only running suites (and their sub-suites) if they include the tag(s): [ 'ciGroup22' ]
 info testing x-pack/test/endpoint_api_integration_no_ingest/config.ts
   │ info Only running suites which are compatible with ES version 7.17.23
   │ info Only running suites (and their sub-suites) if they include the tag(s): [ 'ciGroup22' ]
 info testing x-pack/test/functional_embedded/config.ts
   │ info Only running suites which are compatible with ES version 7.17.23
   │ info Only running suites (and their sub-suites) if they include the tag(s): [ 'ciGroup22' ]
 info testing x-pack/test/reporting_api_integration/reporting_and_security.config.ts
   │ info Only running suites which are compatible with ES version 7.17.23
   │ info Only running suites (and their sub-suites) if they include the tag(s): [ 'ciGroup22' ]
 info testing x-pack/test/reporting_api_integration/reporting_without_security.config.ts
   │ info Only running suites which are compatible with ES version 7.17.23
   │ info Only running suites (and their sub-suites) if they include the tag(s): [ 'ciGroup22' ]
 info testing x-pack/test/security_solution_endpoint_api_int/config.ts
   │ info Package registry URL for tests: --xpack.fleet.registryUrl=http://localhost:6104
   │ info Only running suites which are compatible with ES version 7.17.23
   │ info Only running suites (and their sub-suites) if they include the tag(s): [ 'ciGroup22' ]
 info testing x-pack/test/fleet_api_integration/config.ts
   │ info Only running suites which are compatible with ES version 7.17.23
   │ info Only running suites (and their sub-suites) if they include the tag(s): [ 'ciGroup22' ]
 info testing x-pack/test/functional_enterprise_search/without_host_configured.config.ts
   │ info Only running suites which are compatible with ES version 7.17.23
   │ info Only running suites (and their sub-suites) if they include the tag(s): [ 'ciGroup22' ]
 info testing x-pack/test/functional_vis_wizard/config.ts
   │ info Only running suites which are compatible with ES version 7.17.23
   │ info Only running suites (and their sub-suites) if they include the tag(s): [ 'ciGroup22' ]
 info testing x-pack/test/search_sessions_integration/config.ts
   │ info Only running suites which are compatible with ES version 7.17.23
   │ info Only running suites (and their sub-suites) if they include the tag(s): [ 'ciGroup22' ]
 info testing x-pack/test/saved_object_tagging/functional/config.ts
   │ info Only running suites which are compatible with ES version 7.17.23
   │ info Only running suites (and their sub-suites) if they include the tag(s): [ 'ciGroup22' ]
 info testing x-pack/test/saved_object_tagging/api_integration/security_and_spaces/config.ts
   │ info Only running suites which are compatible with ES version 7.17.23
   │ info Only running suites (and their sub-suites) if they include the tag(s): [ 'ciGroup22' ]
 info testing x-pack/test/saved_object_tagging/api_integration/tagging_api/config.ts
   │ info Only running suites which are compatible with ES version 7.17.23
   │ info Only running suites (and their sub-suites) if they include the tag(s): [ 'ciGroup22' ]
 info testing x-pack/test/usage_collection/config.ts
   │ info Only running suites which are compatible with ES version 7.17.23
   │ info Only running suites (and their sub-suites) if they include the tag(s): [ 'ciGroup22' ]
 info testing x-pack/test/fleet_functional/config.ts
   │ info Only running suites which are compatible with ES version 7.17.23
   │ info Only running suites (and their sub-suites) if they include the tag(s): [ 'ciGroup22' ]
 info testing x-pack/test/examples/config.ts
   │ info Only running suites which are compatible with ES version 7.17.23
   │ info Only running suites (and their sub-suites) if they include the tag(s): [ 'ciGroup22' ]
 info testing x-pack/test/functional_execution_context/config.ts
   │ info Only running suites which are compatible with ES version 7.17.23
   │ info Only running suites (and their sub-suites) if they include the tag(s): [ 'ciGroup22' ]
--- [1/1] Running x-pack/test/functional/config.js
 info Installing from snapshot
   │ info version: 7.17.23
   │ info install path: /opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1720539735706293441/elastic/kibana-pull-request/kibana/.es/job-kibana-default-ciGroup22-cluster-ftr
   │ info license: trial
   │ info Downloading snapshot manifest from https://storage.googleapis.com/kibana-ci-es-snapshots-daily/7.17.23/archives/20240708-131119_42b93a53/manifest.json
   │ info downloading artifact from https://storage.googleapis.com/kibana-ci-es-snapshots-daily/7.17.23/archives/20240708-131119_42b93a53/elasticsearch-7.17.23-SNAPSHOT-linux-x86_64.tar.gz
   │ info downloading artifact checksum from https://storage.googleapis.com/kibana-ci-es-snapshots-daily/7.17.23/archives/20240708-131119_42b93a53/elasticsearch-7.17.23-SNAPSHOT-linux-x86_64.tar.gz.sha512
   │ info checksum verified
   │ info extracting /opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1720539735706293441/elastic/kibana-pull-request/kibana/.es/cache/elasticsearch-7.17.23-SNAPSHOT-linux-x86_64.tar.gz
   │ info extracted to /opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1720539735706293441/elastic/kibana-pull-request/kibana/.es/job-kibana-default-ciGroup22-cluster-ftr
   │ info created /opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1720539735706293441/elastic/kibana-pull-request/kibana/.es/job-kibana-default-ciGroup22-cluster-ftr/ES_TMPDIR
   │ info setting secure setting bootstrap.password to changeme
 info [es] starting node ftr on port 9220
 info Starting
   │ERROR Jul 09, 2024 3:49:54 PM sun.util.locale.provider.LocaleProviderAdapter <clinit>
   │      WARNING: COMPAT locale provider will be removed in a future release
   │      
   │ info [o.e.n.Node] [ftr] version[7.17.23-SNAPSHOT], pid[4387], build[default/tar/42b93a534929add031e668becc4565463f2c4b32/2024-07-08T13:06:16.104506372Z], OS[Linux/5.15.0-1062-gcp/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/22.0.1/22.0.1+8-16]
   │ info [o.e.n.Node] [ftr] JVM home [/opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1720539735706293441/elastic/kibana-pull-request/kibana/.es/job-kibana-default-ciGroup22-cluster-ftr/jdk], using bundled JDK [true]
   │ info [o.e.n.Node] [ftr] JVM arguments [-Xshare:auto, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -XX:+ShowCodeDetailsInExceptionMessages, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dio.netty.allocator.numDirectArenas=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j2.formatMsgNoLookups=true, -Djava.locale.providers=SPI,COMPAT, --add-opens=java.base/java.io=ALL-UNNAMED, -Djava.security.manager=allow, -XX:+UseG1GC, -Djava.io.tmpdir=/opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1720539735706293441/elastic/kibana-pull-request/kibana/.es/job-kibana-default-ciGroup22-cluster-ftr/ES_TMPDIR, -XX:+HeapDumpOnOutOfMemoryError, -XX:+ExitOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m, -XX:+UnlockDiagnosticVMOptions, -XX:G1NumCollectionsKeepPinned=10000000, -Xms1536m, -Xmx1536m, -XX:MaxDirectMemorySize=805306368, -XX:G1HeapRegionSize=4m, -XX:InitiatingHeapOccupancyPercent=30, -XX:G1ReservePercent=15, -Des.path.home=/opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1720539735706293441/elastic/kibana-pull-request/kibana/.es/job-kibana-default-ciGroup22-cluster-ftr, -Des.path.conf=/opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1720539735706293441/elastic/kibana-pull-request/kibana/.es/job-kibana-default-ciGroup22-cluster-ftr/config, -Des.distribution.flavor=default, -Des.distribution.type=tar, -Des.bundled_jdk=true]
   │ info [o.e.n.Node] [ftr] version [7.17.23-SNAPSHOT] is a pre-release version of Elasticsearch and is not suitable for production
   │ info [o.e.p.PluginsService] [ftr] loaded module [aggs-matrix-stats]
   │ info [o.e.p.PluginsService] [ftr] loaded module [analysis-common]
   │ info [o.e.p.PluginsService] [ftr] loaded module [constant-keyword]
   │ info [o.e.p.PluginsService] [ftr] loaded module [frozen-indices]
   │ info [o.e.p.PluginsService] [ftr] loaded module [ingest-common]
   │ info [o.e.p.PluginsService] [ftr] loaded module [ingest-geoip]
   │ info [o.e.p.PluginsService] [ftr] loaded module [ingest-user-agent]
   │ info [o.e.p.PluginsService] [ftr] loaded module [kibana]
   │ info [o.e.p.PluginsService] [ftr] loaded module [lang-expression]
   │ info [o.e.p.PluginsService] [ftr] loaded module [lang-mustache]
   │ info [o.e.p.PluginsService] [ftr] loaded module [lang-painless]
   │ info [o.e.p.PluginsService] [ftr] loaded module [legacy-geo]
   │ info [o.e.p.PluginsService] [ftr] loaded module [mapper-extras]
   │ info [o.e.p.PluginsService] [ftr] loaded module [mapper-version]
   │ info [o.e.p.PluginsService] [ftr] loaded module [parent-join]
   │ info [o.e.p.PluginsService] [ftr] loaded module [percolator]
   │ info [o.e.p.PluginsService] [ftr] loaded module [rank-eval]
   │ info [o.e.p.PluginsService] [ftr] loaded module [reindex]
   │ info [o.e.p.PluginsService] [ftr] loaded module [repositories-metering-api]
   │ info [o.e.p.PluginsService] [ftr] loaded module [repository-encrypted]
   │ info [o.e.p.PluginsService] [ftr] loaded module [repository-url]
   │ info [o.e.p.PluginsService] [ftr] loaded module [runtime-fields-common]
   │ info [o.e.p.PluginsService] [ftr] loaded module [search-business-rules]
   │ info [o.e.p.PluginsService] [ftr] loaded module [searchable-snapshots]
   │ info [o.e.p.PluginsService] [ftr] loaded module [snapshot-repo-test-kit]
   │ info [o.e.p.PluginsService] [ftr] loaded module [spatial]
   │ info [o.e.p.PluginsService] [ftr] loaded module [test-delayed-aggs]
   │ info [o.e.p.PluginsService] [ftr] loaded module [test-die-with-dignity]
   │ info [o.e.p.PluginsService] [ftr] loaded module [test-error-query]
   │ info [o.e.p.PluginsService] [ftr] loaded module [transform]
   │ info [o.e.p.PluginsService] [ftr] loaded module [transport-netty4]
   │ info [o.e.p.PluginsService] [ftr] loaded module [unsigned-long]
   │ info [o.e.p.PluginsService] [ftr] loaded module [vector-tile]
   │ info [o.e.p.PluginsService] [ftr] loaded module [vectors]
   │ info [o.e.p.PluginsService] [ftr] loaded module [wildcard]
   │ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-aggregate-metric]
   │ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-analytics]
   │ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-async]
   │ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-async-search]
   │ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-autoscaling]
   │ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-ccr]
   │ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-core]
   │ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-data-streams]
   │ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-deprecation]
   │ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-enrich]
   │ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-eql]
   │ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-fleet]
   │ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-graph]
   │ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-identity-provider]
   │ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-ilm]
   │ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-logstash]
   │ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-ml]
   │ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-monitoring]
   │ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-ql]
   │ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-rollup]
   │ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-security]
   │ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-shutdown]
   │ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-sql]
   │ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-stack]
   │ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-text-structure]
   │ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-voting-only-node]
   │ info [o.e.p.PluginsService] [ftr] loaded module [x-pack-watcher]
   │ info [o.e.p.PluginsService] [ftr] no plugins loaded
   │ info [o.e.e.NodeEnvironment] [ftr] using [1] data paths, mounts [[/opt/local-ssd (/dev/nvme0n1)]], net usable_space [343.5gb], net total_space [368gb], types [ext4]
   │ info [o.e.e.NodeEnvironment] [ftr] heap size [1.5gb], compressed ordinary object pointers [true]
   │ info [o.e.n.Node] [ftr] node name [ftr], node ID [YPy9BaiLSg21ax0sXCy9Rg], cluster name [job-kibana-default-ciGroup22-cluster-ftr], roles [transform, data_frozen, master, remote_cluster_client, data, ml, data_content, data_hot, data_warm, data_cold, ingest]
   │ info [o.e.x.m.p.l.CppLogMessageHandler] [ftr] [controller/4555] [Main.cc@122] controller (64 bit): Version 7.17.23-SNAPSHOT (Build 3e4489a02bea5d) Copyright (c) 2024 Elasticsearch BV
   │ info [o.e.x.s.a.Realms] [ftr] license mode is [trial], currently licensed security realms are [reserved/reserved,file/default_file,native/default_native]
   │ info [o.e.x.s.a.s.FileRolesStore] [ftr] parsed [0] roles from file [/opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1720539735706293441/elastic/kibana-pull-request/kibana/.es/job-kibana-default-ciGroup22-cluster-ftr/config/roles.yml]
   │ info [o.e.i.g.ConfigDatabases] [ftr] initialized default databases [[GeoLite2-Country.mmdb, GeoLite2-City.mmdb, GeoLite2-ASN.mmdb]], config databases [[]] and watching [/opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1720539735706293441/elastic/kibana-pull-request/kibana/.es/job-kibana-default-ciGroup22-cluster-ftr/config/ingest-geoip] for changes
   │ info [o.e.i.g.DatabaseNodeService] [ftr] initialized database registry, using geoip-databases directory [/opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1720539735706293441/elastic/kibana-pull-request/kibana/.es/job-kibana-default-ciGroup22-cluster-ftr/ES_TMPDIR/geoip-databases/YPy9BaiLSg21ax0sXCy9Rg]
   │ info [o.e.t.NettyAllocator] [ftr] creating NettyAllocator with the following configs: [name=elasticsearch_configured, chunk_size=1mb, suggested_max_allocation_size=1mb, factors={es.unsafe.use_netty_default_chunk_and_page_size=false, g1gc_enabled=true, g1gc_region_size=4mb}]
   │ info [o.e.i.r.RecoverySettings] [ftr] using rate limit [40mb] with [default=40mb, read=0b, write=0b, max=0b]
   │ info [o.e.d.DiscoveryModule] [ftr] using discovery type [single-node] and seed hosts providers [settings]
   │ info [o.e.g.DanglingIndicesState] [ftr] gateway.auto_import_dangling_indices is disabled, dangling indices will not be automatically detected or imported and must be managed manually
   │ info [o.e.n.Node] [ftr] initialized
   │ info [o.e.n.Node] [ftr] starting ...
   │ info [o.e.x.s.c.f.PersistentCache] [ftr] persistent cache index loaded
   │ info [o.e.x.d.l.DeprecationIndexingComponent] [ftr] deprecation component started
   │ info [o.e.t.TransportService] [ftr] publish_address {127.0.0.1:9300}, bound_addresses {[::1]:9300}, {127.0.0.1:9300}
   │ info [o.e.x.m.Monitoring] [ftr] creating template [.monitoring-alerts-7] with version [7]
   │ info [o.e.x.m.Monitoring] [ftr] creating template [.monitoring-es] with version [7]
   │ info [o.e.x.m.Monitoring] [ftr] creating template [.monitoring-kibana] with version [7]
   │ info [o.e.x.m.Monitoring] [ftr] creating template [.monitoring-logstash] with version [7]
   │ info [o.e.x.m.Monitoring] [ftr] creating template [.monitoring-beats] with version [7]
   │ info [o.e.c.c.Coordinator] [ftr] setting initial configuration to VotingConfiguration{YPy9BaiLSg21ax0sXCy9Rg}
   │ info [o.e.c.s.MasterService] [ftr] elected-as-master ([1] nodes joined)[{ftr}{YPy9BaiLSg21ax0sXCy9Rg}{H7Uq_7bYQn6BP7PEjqsunQ}{127.0.0.1}{127.0.0.1:9300}{cdfhilmrstw} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 1, version: 1, delta: master node changed {previous [], current [{ftr}{YPy9BaiLSg21ax0sXCy9Rg}{H7Uq_7bYQn6BP7PEjqsunQ}{127.0.0.1}{127.0.0.1:9300}{cdfhilmrstw}]}
   │ info [o.e.c.c.CoordinationState] [ftr] cluster UUID set to [gHZf4GNkRNG3VQ2K8rOnmg]
   │ info [o.e.c.s.ClusterApplierService] [ftr] master node changed {previous [], current [{ftr}{YPy9BaiLSg21ax0sXCy9Rg}{H7Uq_7bYQn6BP7PEjqsunQ}{127.0.0.1}{127.0.0.1:9300}{cdfhilmrstw}]}, term: 1, version: 1, reason: Publication{term=1, version=1}
   │ info [o.e.h.AbstractHttpServerTransport] [ftr] publish_address {127.0.0.1:9220}, bound_addresses {[::1]:9220}, {127.0.0.1:9220}
   │ info [o.e.n.Node] [ftr] started
   │ info [o.e.g.GatewayService] [ftr] recovered [0] indices into cluster_state
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding index template [.ml-notifications-000002] for index patterns [.ml-notifications-000002]
   │ info [o.e.x.s.s.SecurityIndexManager] [ftr] security index does not exist, creating [.security-7] with alias [.security]
   │ info [o.e.x.s.s.SecurityIndexManager] [ftr] security index does not exist, creating [.security-7] with alias [.security]
   │ info [o.e.x.s.s.SecurityIndexManager] [ftr] security index does not exist, creating [.security-7] with alias [.security]
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding index template [.ml-stats] for index patterns [.ml-stats-*]
   │ info [o.e.x.s.s.SecurityIndexManager] [ftr] security index does not exist, creating [.security-7] with alias [.security]
   │ info [o.e.x.s.s.SecurityIndexManager] [ftr] security index does not exist, creating [.security-7] with alias [.security]
   │ info [o.e.x.s.s.SecurityIndexManager] [ftr] security index does not exist, creating [.security-7] with alias [.security]
   │ info [o.e.x.s.s.SecurityIndexManager] [ftr] security index does not exist, creating [.security-7] with alias [.security]
   │ info [o.e.x.s.s.SecurityIndexManager] [ftr] security index does not exist, creating [.security-7] with alias [.security]
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding index template [.ml-anomalies-] for index patterns [.ml-anomalies-*]
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding index template [.ml-state] for index patterns [.ml-state*]
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [logs-mappings]
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [data-streams-mappings]
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [metrics-mappings]
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [logs-settings]
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [metrics-settings]
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [synthetics-settings]
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [synthetics-mappings]
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding index template [.watch-history-13] for index patterns [.watcher-history-13*]
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding index template [ilm-history] for index patterns [ilm-history-5*]
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [.deprecation-indexing-mappings]
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [.deprecation-indexing-settings]
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding index template [.slm-history] for index patterns [.slm-history-5*]
   │ info [o.e.c.m.MetadataCreateIndexService] [ftr] [.security-7] creating index, cause [api], templates [], shards [1]/[0]
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding index template [logs] for index patterns [logs-*-*]
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding index template [metrics] for index patterns [metrics-*-*]
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding index template [synthetics] for index patterns [synthetics-*-*]
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding index template [.deprecation-indexing-template] for index patterns [.logs-deprecation.*]
   │ info [o.e.c.r.a.AllocationService] [ftr] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.security-7][0]]]).
   │ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [ml-size-based-ilm-policy]
   │ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [logs]
   │ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [metrics]
   │ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [synthetics]
   │ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [180-days-default]
   │ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [7-days-default]
   │ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [30-days-default]
   │ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [90-days-default]
   │ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [365-days-default]
   │ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [watch-history-ilm-policy]
   │ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [ilm-history-ilm-policy]
   │ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [slm-history-ilm-policy]
   │ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [.deprecation-indexing-ilm-policy]
   │ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [.fleet-actions-results-ilm-policy]
   │ info [o.e.l.LicenseService] [ftr] license [8a68bd13-4e41-493a-90e3-57c7a616fb54] mode [trial] - valid
   │ info [o.e.x.s.a.Realms] [ftr] license mode is [trial], currently licensed security realms are [reserved/reserved,file/default_file,native/default_native]
   │ info [o.e.x.s.s.SecurityStatusChangeListener] [ftr] Active license is now [TRIAL]; Security is enabled
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [system_indices_superuser]
   │ info [o.e.x.s.a.u.TransportPutUserAction] [ftr] added user [system_indices_superuser]
   │ info starting [kibana] > /opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1720539735706293441/elastic/kibana-pull-request/kibana-build-xpack/bin/kibana --logging.json=false --server.port=5620 --elasticsearch.hosts=http://localhost:9220 --elasticsearch.username=kibana_system --elasticsearch.password=changeme --data.search.aggs.shardDelay.enabled=true --security.showInsecureClusterWarning=false --telemetry.banner=false --telemetry.optIn=false --telemetry.sendUsageTo=staging --server.maxPayload=1679958 --plugin-path=/opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1720539735706293441/elastic/kibana-pull-request/kibana/test/common/fixtures/plugins/newsfeed --newsfeed.service.urlRoot=http://localhost:5620 --newsfeed.service.pathTemplate=/api/_newsfeed-FTS-external-service-simulators/kibana/v{VERSION}.json --logging.appenders.deprecation.type=console --logging.appenders.deprecation.layout.type=json --logging.loggers[0].name=elasticsearch.deprecation --logging.loggers[0].level=all --logging.loggers[0].appenders[0]=deprecation --status.allowAnonymous=true --server.uuid=5b2de169-2785-441b-ae8c-186a1936b17d --xpack.maps.showMapsInspectorAdapter=true --xpack.maps.preserveDrawingBuffer=true --xpack.security.encryptionKey="wuGNaIhoMpk5sO4UBxgr3NyW1sFcLgIf" --xpack.encryptedSavedObjects.encryptionKey="DkdXazszSCYexXqz4YktBGHCRkV6hyNK" --xpack.discoverEnhanced.actions.exploreDataInContextMenu.enabled=true --savedObjects.maxImportPayloadBytes=10485760 --xpack.siem.enabled=true
   │ proc [kibana] Kibana is currently running with legacy OpenSSL providers enabled! For details and instructions on how to disable see https://www.elastic.co/guide/en/kibana/7.17/production.html#openssl-legacy-provider
   │ proc [kibana]   log   [15:50:21.079] [info][plugins-service] Plugin "metricsEntities" is disabled.
   │ proc [kibana]   log   [15:50:21.159] [info][server][Preboot][http] http server running at http://localhost:5620
   │ proc [kibana]   log   [15:50:21.203] [warning][config][deprecation] Starting in 8.0, the Kibana logging format will be changing. This may affect you if you are doing any special handling of your Kibana logs, such as ingesting logs into Elasticsearch for further analysis. If you are using the new logging configuration, you are already receiving logs in both old and new formats, and the old format will simply be going away. If you are not yet using the new logging configuration, the log format will change upon upgrade to 8.0. Beginning in 8.0, the format of JSON logs will be ECS-compatible JSON, and the default pattern log format will be configurable with our new logging system. Please refer to the documentation for more information about the new logging format.
   │ proc [kibana]   log   [15:50:21.204] [warning][config][deprecation] The default mechanism for Reporting privileges will work differently in future versions, which will affect the behavior of this cluster. Set "xpack.reporting.roles.enabled" to "false" to adopt the future behavior before upgrading.
   │ proc [kibana]   log   [15:50:21.204] [warning][config][deprecation] Setting "security.showInsecureClusterWarning" has been replaced by "xpack.security.showInsecureClusterWarning"
   │ proc [kibana]   log   [15:50:21.205] [warning][config][deprecation] User sessions will automatically time out after 8 hours of inactivity starting in 8.0. Override this value to change the timeout.
   │ proc [kibana]   log   [15:50:21.205] [warning][config][deprecation] Users are automatically required to log in again after 30 days starting in 8.0. Override this value to change the timeout.
   │ proc [kibana]   log   [15:50:21.206] [warning][config][deprecation] Setting "xpack.siem.enabled" has been replaced by "xpack.securitySolution.enabled"
   │ proc [kibana]   log   [15:50:21.329] [info][plugins-system][standard] Setting up [114] plugins: [newsfeedFixtures,translations,licensing,globalSearch,globalSearchProviders,features,licenseApiGuard,code,usageCollection,xpackLegacy,taskManager,telemetryCollectionManager,telemetryCollectionXpack,kibanaUsageCollection,share,embeddable,uiActionsEnhanced,screenshotMode,banners,telemetry,newsfeed,mapsEms,mapsLegacy,kibanaLegacy,fieldFormats,expressions,dataViews,charts,esUiShared,bfetch,data,savedObjects,presentationUtil,expressionShape,expressionRevealImage,expressionRepeatImage,expressionMetric,expressionImage,customIntegrations,home,searchprofiler,painlessLab,grokdebugger,management,watcher,licenseManagement,advancedSettings,spaces,security,savedObjectsTagging,reporting,canvas,lists,ingestPipelines,fileUpload,encryptedSavedObjects,dataEnhanced,cloud,snapshotRestore,eventLog,actions,alerting,triggersActionsUi,transform,stackAlerts,ruleRegistry,visualizations,visTypeXy,visTypeVislib,visTypeVega,visTypeTimelion,visTypeTagcloud,visTypeTable,visTypePie,visTypeMetric,visTypeMarkdown,tileMap,regionMap,expressionTagcloud,expressionMetricVis,console,graph,fleet,indexManagement,remoteClusters,crossClusterReplication,indexLifecycleManagement,dashboard,maps,dashboardMode,dashboardEnhanced,visualize,visTypeTimeseries,rollup,indexPatternFieldEditor,lens,cases,timelines,discover,osquery,observability,discoverEnhanced,dataVisualizer,ml,uptime,securitySolution,infra,upgradeAssistant,monitoring,logstash,enterpriseSearch,apm,savedObjectsManagement,indexPatternManagement]
   │ proc [kibana]   log   [15:50:21.347] [info][plugins][taskManager] TaskManager is identified by the Kibana UUID: 5b2de169-2785-441b-ae8c-186a1936b17d
   │ proc [kibana]   log   [15:50:21.461] [warning][config][plugins][security] Session cookies will be transmitted over insecure connections. This is not recommended.
   │ proc [kibana]   log   [15:50:21.486] [warning][config][plugins][security] Session cookies will be transmitted over insecure connections. This is not recommended.
   │ proc [kibana]   log   [15:50:21.500] [warning][config][plugins][reporting] Generating a random key for xpack.reporting.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.reporting.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command.
   │ proc [kibana]   log   [15:50:21.518] [info][encryptedSavedObjects][plugins] Hashed 'xpack.encryptedSavedObjects.encryptionKey' for this instance: nnkvE7kjGgidcjXzmLYBbIh4THhRWI1/7fUjAEaJWug=
   │ proc [kibana]   log   [15:50:21.558] [info][plugins][ruleRegistry] Installing common resources shared between all indices
   │ proc [kibana]   log   [15:50:21.999] [info][config][plugins][reporting] Chromium sandbox provides an additional layer of protection, and is supported for Linux Ubuntu 20.04 OS. Automatically enabling Chromium sandbox.
   │ proc [kibana]   log   [15:50:22.269] [info][savedobjects-service] Waiting until all Elasticsearch nodes are compatible with Kibana before starting saved objects migrations...
   │ proc [kibana]   log   [15:50:22.270] [info][savedobjects-service] Starting saved objects migrations
   │ proc [kibana]   log   [15:50:22.336] [info][savedobjects-service] [.kibana_task_manager] INIT -> CREATE_NEW_TARGET. took: 16ms.
   │ proc [kibana]   log   [15:50:22.353] [info][savedobjects-service] [.kibana] INIT -> CREATE_NEW_TARGET. took: 37ms.
   │ info [o.e.c.m.MetadataCreateIndexService] [ftr] [.kibana_task_manager_7.17.23_001] creating index, cause [api], templates [], shards [1]/[1]
   │ info [o.e.c.r.a.AllocationService] [ftr] updating number_of_replicas to [0] for indices [.kibana_task_manager_7.17.23_001]
   │ info [o.e.c.m.MetadataCreateIndexService] [ftr] [.kibana_7.17.23_001] creating index, cause [api], templates [], shards [1]/[1]
   │ info [o.e.c.r.a.AllocationService] [ftr] updating number_of_replicas to [0] for indices [.kibana_7.17.23_001]
   │ info [o.e.c.r.a.AllocationService] [ftr] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.kibana_task_manager_7.17.23_001][0], [.kibana_7.17.23_001][0]]]).
   │ proc [kibana]   log   [15:50:22.747] [info][savedobjects-service] [.kibana_task_manager] CREATE_NEW_TARGET -> MARK_VERSION_INDEX_READY. took: 411ms.
   │ proc [kibana]   log   [15:50:22.750] [info][savedobjects-service] [.kibana] CREATE_NEW_TARGET -> MARK_VERSION_INDEX_READY. took: 397ms.
   │ proc [kibana]   log   [15:50:22.845] [info][savedobjects-service] [.kibana] MARK_VERSION_INDEX_READY -> DONE. took: 95ms.
   │ proc [kibana]   log   [15:50:22.845] [info][savedobjects-service] [.kibana] Migration completed after 529ms
   │ proc [kibana]   log   [15:50:22.879] [info][savedobjects-service] [.kibana_task_manager] MARK_VERSION_INDEX_READY -> DONE. took: 132ms.
   │ proc [kibana]   log   [15:50:22.880] [info][savedobjects-service] [.kibana_task_manager] Migration completed after 560ms
   │ proc [kibana]   log   [15:50:22.887] [info][plugins-system][standard] Starting [114] plugins: [newsfeedFixtures,translations,licensing,globalSearch,globalSearchProviders,features,licenseApiGuard,code,usageCollection,xpackLegacy,taskManager,telemetryCollectionManager,telemetryCollectionXpack,kibanaUsageCollection,share,embeddable,uiActionsEnhanced,screenshotMode,banners,telemetry,newsfeed,mapsEms,mapsLegacy,kibanaLegacy,fieldFormats,expressions,dataViews,charts,esUiShared,bfetch,data,savedObjects,presentationUtil,expressionShape,expressionRevealImage,expressionRepeatImage,expressionMetric,expressionImage,customIntegrations,home,searchprofiler,painlessLab,grokdebugger,management,watcher,licenseManagement,advancedSettings,spaces,security,savedObjectsTagging,reporting,canvas,lists,ingestPipelines,fileUpload,encryptedSavedObjects,dataEnhanced,cloud,snapshotRestore,eventLog,actions,alerting,triggersActionsUi,transform,stackAlerts,ruleRegistry,visualizations,visTypeXy,visTypeVislib,visTypeVega,visTypeTimelion,visTypeTagcloud,visTypeTable,visTypePie,visTypeMetric,visTypeMarkdown,tileMap,regionMap,expressionTagcloud,expressionMetricVis,console,graph,fleet,indexManagement,remoteClusters,crossClusterReplication,indexLifecycleManagement,dashboard,maps,dashboardMode,dashboardEnhanced,visualize,visTypeTimeseries,rollup,indexPatternFieldEditor,lens,cases,timelines,discover,osquery,observability,discoverEnhanced,dataVisualizer,ml,uptime,securitySolution,infra,upgradeAssistant,monitoring,logstash,enterpriseSearch,apm,savedObjectsManagement,indexPatternManagement]
   │ proc [kibana]   log   [15:50:23.942] [info][monitoring][monitoring][plugins] config sourced from: production cluster
   │ proc [kibana]   log   [15:50:25.196] [info][server][Kibana][http] http server running at http://localhost:5620
   │ proc [kibana]   log   [15:50:25.268] [info][status] Kibana is now degraded
   │ info [o.e.c.m.MetadataMappingService] [ftr] [.kibana_task_manager_7.17.23_001/4vR8uA4_SO-8BFkGcP2Tjw] update_mapping [_doc]
   │ proc [kibana]   log   [15:50:25.465] [info][kibana-monitoring][monitoring][monitoring][plugins] Starting monitoring stats collection
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [.alerts-ecs-mappings]
   │ info [o.e.c.m.MetadataCreateIndexService] [ftr] [.apm-custom-link] creating index, cause [api], templates [], shards [1]/[1]
   │ info [o.e.c.r.a.AllocationService] [ftr] updating number_of_replicas to [0] for indices [.apm-custom-link]
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [.alerts-technical-mappings]
   │ info [o.e.c.m.MetadataCreateIndexService] [ftr] [.apm-agent-configuration] creating index, cause [api], templates [], shards [1]/[1]
   │ info [o.e.c.r.a.AllocationService] [ftr] updating number_of_replicas to [0] for indices [.apm-agent-configuration]
   │ info [o.e.c.r.a.AllocationService] [ftr] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.apm-custom-link][0], [.apm-agent-configuration][0]]]).
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding index template [.kibana_security_session_index_template_1] for index patterns [.kibana_security_session_1]
   │ info [o.e.c.m.MetadataMappingService] [ftr] [.kibana_7.17.23_001/EKFgh-H5Ssi6DuK3RmYHnQ] update_mapping [_doc]
   │ info [o.e.c.m.MetadataMappingService] [ftr] [.kibana_7.17.23_001/EKFgh-H5Ssi6DuK3RmYHnQ] update_mapping [_doc]
   │ info [o.e.c.m.MetadataCreateIndexService] [ftr] [.kibana_security_session_1] creating index, cause [api], templates [.kibana_security_session_index_template_1], shards [1]/[0]
   │ info [o.e.c.r.a.AllocationService] [ftr] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.kibana_security_session_1][0]]]).
   │ info [o.e.c.m.MetadataMappingService] [ftr] [.kibana_7.17.23_001/EKFgh-H5Ssi6DuK3RmYHnQ] update_mapping [_doc]
   │ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [.alerts-ilm-policy]
   │ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [kibana-event-log-policy]
   │ proc [kibana]   log   [15:50:26.744] [info][plugins][ruleRegistry] Installed common resources shared between all indices
   │ proc [kibana]   log   [15:50:26.745] [info][plugins][ruleRegistry] Installing resources for index .alerts-observability.uptime.alerts
   │ proc [kibana]   log   [15:50:26.745] [info][plugins][ruleRegistry] Installing resources for index .alerts-observability.logs.alerts
   │ proc [kibana]   log   [15:50:26.746] [info][plugins][ruleRegistry] Installing resources for index .alerts-observability.metrics.alerts
   │ proc [kibana]   log   [15:50:26.746] [info][plugins][ruleRegistry] Installing resources for index .alerts-observability.apm.alerts
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [.alerts-observability.metrics.alerts-mappings]
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [.alerts-observability.uptime.alerts-mappings]
   │ proc [kibana]   log   [15:50:26.894] [info][plugins][ruleRegistry] Installed resources for index .alerts-observability.metrics.alerts
   │ proc [kibana]   log   [15:50:26.903] [info][plugins][securitySolution] Dependent plugin setup complete - Starting ManifestTask
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [.alerts-observability.logs.alerts-mappings]
   │ proc [kibana]   log   [15:50:26.924] [info][plugins][ruleRegistry] Installed resources for index .alerts-observability.uptime.alerts
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding component template [.alerts-observability.apm.alerts-mappings]
   │ proc [kibana]   log   [15:50:26.961] [info][plugins][ruleRegistry] Installed resources for index .alerts-observability.logs.alerts
   │ proc [kibana]   log   [15:50:27.030] [info][plugins][ruleRegistry] Installed resources for index .alerts-observability.apm.alerts
   │ info [o.e.c.m.MetadataIndexTemplateService] [ftr] adding index template [.kibana-event-log-7.17.23-snapshot-template] for index patterns [.kibana-event-log-7.17.23-snapshot-*]
   │ info [o.e.c.m.MetadataCreateIndexService] [ftr] [.kibana-event-log-7.17.23-snapshot-000001] creating index, cause [api], templates [.kibana-event-log-7.17.23-snapshot-template], shards [1]/[1]
   │ info [o.e.c.r.a.AllocationService] [ftr] updating number_of_replicas to [0] for indices [.kibana-event-log-7.17.23-snapshot-000001]
   │ info [o.e.c.r.a.AllocationService] [ftr] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.kibana-event-log-7.17.23-snapshot-000001][0]]]).
   │ info [o.e.x.i.IndexLifecycleTransition] [ftr] moving index [.kibana-event-log-7.17.23-snapshot-000001] from [null] to [{"phase":"new","action":"complete","name":"complete"}] in policy [kibana-event-log-policy]
   │ info [o.e.x.i.IndexLifecycleTransition] [ftr] moving index [.kibana-event-log-7.17.23-snapshot-000001] from [{"phase":"new","action":"complete","name":"complete"}] to [{"phase":"hot","action":"unfollow","name":"branch-check-unfollow-prerequisites"}] in policy [kibana-event-log-policy]
   │ info [o.e.x.i.IndexLifecycleTransition] [ftr] moving index [.kibana-event-log-7.17.23-snapshot-000001] from [{"phase":"hot","action":"unfollow","name":"branch-check-unfollow-prerequisites"}] to [{"phase":"hot","action":"rollover","name":"check-rollover-ready"}] in policy [kibana-event-log-policy]
   │ proc [kibana]   log   [15:50:27.810] [info][chromium][plugins][reporting] Browser executable: /opt/local-ssd/buildkite/builds/bk-agent-prod-gcp-1720539735706293441/elastic/kibana-pull-request/kibana-build-xpack/x-pack/plugins/reporting/chromium/headless_shell-linux_x64/headless_shell
   │ proc [kibana]   log   [15:50:27.829] [info][plugins][reporting][store] Creating ILM policy for managing reporting indices: kibana-reporting
   │ info [o.e.x.i.a.TransportPutLifecycleAction] [ftr] adding index lifecycle policy [kibana-reporting]
   │ proc [kibana]   log   [15:50:28.516] [info][0][1][endpoint:metadata-check-transforms-task:0][plugins][securitySolution] no endpoint metadata transforms found
   │ info [o.e.c.m.MetadataCreateIndexService] [ftr] [.ds-ilm-history-5-2024.07.09-000001] creating index, cause [initialize_data_stream], templates [ilm-history], shards [1]/[0]
   │ info [o.e.c.m.MetadataCreateDataStreamService] [ftr] adding data stream [ilm-history-5] with write index [.ds-ilm-history-5-2024.07.09-000001], backing indices [], and aliases []
   │ info [o.e.c.r.a.AllocationService] [ftr] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.ds-ilm-history-5-2024.07.09-000001][0]]]).
   │ info [o.e.x.i.IndexLifecycleTransition] [ftr] moving index [.ds-ilm-history-5-2024.07.09-000001] from [null] to [{"phase":"new","action":"complete","name":"complete"}] in policy [ilm-history-ilm-policy]
   │ proc [kibana]   log   [15:50:34.285] [info][status] Kibana is now available (was degraded)
   │ info Only running suites which are compatible with ES version 7.17.23
   │ info Only running suites (and their sub-suites) if they include the tag(s): [ 'ciGroup22' ]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] updated role [system_indices_superuser]
   │ info Remote initialized: chrome-headless-shell 126.0.6478.126
   │ info chromedriver version: 126.0.6478.126 (d36ace6122e0a59570e258d82441395206d60e1c-refs/branch-heads/6478@{#1591})
   │ info [o.e.x.s.a.u.TransportPutUserAction] [ftr] updated user [system_indices_superuser]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [test_logstash_reader]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [global_canvas_all]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [global_discover_all]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [global_dashboard_read]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [global_discover_read]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [global_visualize_read]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [global_visualize_all]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [global_dashboard_all]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [global_maps_all]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [global_maps_read]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [geoshape_data_reader]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [antimeridian_points_reader]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [antimeridian_shapes_reader]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [meta_for_geoshape_data_reader]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [geoconnections_data_reader]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [test_logs_data_reader]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [geoall_data_writer]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [global_index_pattern_management_all]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [global_devtools_read]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [global_ccr_role]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [global_upgrade_assistant_role]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [manage_rollups_role]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [test_rollup_reader]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [test_api_keys]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [manage_security]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [ccr_user]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [manage_ilm]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [index_management_user]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [snapshot_restore_user]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [ingest_pipelines_user]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [license_management_user]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [logstash_read_user]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [remote_clusters_user]
   │ info [o.e.x.s.a.r.TransportPutRoleAction] [ftr] added role [global_alerts_logs_all_else_read]
   │ info [o.e.x.s.a.u.TransportPutUserAction] [ftr] added user [test_user]
   │ info Only running suites which are compatible with ES version 7.17.23
   │ info Only running suites (and their sub-suites) if they include the tag(s): [ 'ciGroup22' ]
   │ info Starting tests
   │ warn debug logs are being captured, only error logs will be written to the console
   │
     └-: maps app
       └-> "before all" hook: beforeTestSuite.trigger in "maps app"
       └-> "before all" hook in "maps app"
       └-: 
         └-> "before all" hook: beforeTestSuite.trigger in ""
         └-: layer geo grid aggregation source
           └-: heatmap
             └-> should re-fetch geotile_grid aggregation with refresh timer
             └-> should decorate feature properties with scaled doc_count property
             └-: geoprecision - requests
               └-> should not rerequest when pan changes do not move map view area outside of buffer
               └-> should not rerequest when zoom changes do not cause geotile_grid precision to change
               └-> should rerequest when zoom changes causes the geotile_grid precision to change
             └-: geotile grid precision - data
               └-> should not return any data when the extent does not cover the data bounds
               └-> should request the data when the map covers the databounds
               └-> should request only partial data when the map only covers part of the databounds
             └-: query bar
               └-> should apply query to geotile_grid aggregation request
             └-: inspector
               └-> should contain geotile_grid aggregation elasticsearch request
               └-> should not contain any elasticsearch request after layer is deleted
           └-: vector(grid)
             └-> should re-fetch geotile_grid aggregation with refresh timer
             └-> should decorate feature properties with metrics properterties
             └-: geoprecision - requests
               └-> should not rerequest when pan changes do not move map view area outside of buffer
               └-> should not rerequest when zoom changes do not cause geotile_grid precision to change
               └-> should rerequest when zoom changes causes the geotile_grid precision to change
             └-: geotile grid precision - data
               └-> should not return any data when the extent does not cover the data bounds
               └-> should request the data when the map covers the databounds
               └-> should request only partial data when the map only covers part of the databounds
             └-: query bar
               └-> should apply query to geotile_grid aggregation request
             └-: inspector
               └-> should contain geotile_grid aggregation elasticsearch request
               └-> should not contain any elasticsearch request after layer is deleted
           └-: vector grid with geo_shape
             └-> should get expected number of grid cells
             └-: inspector
               └-> should contain geotile_grid aggregation elasticsearch request
         └-: embeddable
           └-> "before all" hook: beforeTestSuite.trigger in "embeddable"
           └-: maps add-to-dashboard save flow
             └-> "before all" hook: beforeTestSuite.trigger for "should allow new map be added by value to a new dashboard"
             └-> "before all" hook for "should allow new map be added by value to a new dashboard"
             └-> should allow new map be added by value to a new dashboard
               └-> "before each" hook: global before each for "should allow new map be added by value to a new dashboard"
               └- ✓ pass  (48.2s)
             └-> should allow existing maps be added by value to a new dashboard
               └-> "before each" hook: global before each for "should allow existing maps be added by value to a new dashboard"
               └- ✓ pass  (46.7s)
             └-> should allow new map be added by value to an existing dashboard
               └-> "before each" hook: global before each for "should allow new map be added by value to an existing dashboard"
               └- ✓ pass  (1.0m)
             └-> should allow existing maps be added by value to an existing dashboard
               └-> "before each" hook: global before each for "should allow existing maps be added by value to an existing dashboard"
               └- ✓ pass  (1.0m)
             └-> should allow new map be added by reference to a new dashboard
               └-> "before each" hook: global before each for "should allow new map be added by reference to a new dashboard"
               └- ✓ pass  (40.8s)
             └-> should allow existing maps be added by reference to a new dashboard
               └-> "before each" hook: global before each for "should allow existing maps be added by reference to a new dashboard"
               └- ✓ pass  (45.1s)
             └-> should allow new map be added by reference to an existing dashboard
               └-> "before each" hook: global before each for "should allow new map be added by reference to an existing dashboard"
               └- ✓ pass  (1.0m)
             └-> should allow existing maps be added by reference to an existing dashboard
               └-> "before each" hook: global before each for "should allow existing maps be added by reference to an existing dashboard"
               └- ✓ pass  (1.0m)
             └-> "after all" hook for "should allow existing maps be added by reference to an existing dashboard"
             └-> "after all" hook: afterTestSuite.trigger for "should allow existing maps be added by reference to an existing dashboard"
           └-: save and return work flow
             └-> "before all" hook: beforeTestSuite.trigger in "save and return work flow"
             └-> "before all" hook in "save and return work flow"
             └-: new map
               └-> "before all" hook: beforeTestSuite.trigger in "new map"
               └-: save
                 └-> "before all" hook: beforeTestSuite.trigger for "should return to dashboard and add new panel"
                 └-> should return to dashboard and add new panel
                   └-> "before each" hook: global before each for "should return to dashboard and add new panel"
                   └-> "before each" hook for "should return to dashboard and add new panel"
                   └- ✓ pass  (13.7s)
                 └-> "after all" hook: afterTestSuite.trigger for "should return to dashboard and add new panel"
               └-: save and uncheck return to origin switch
                 └-> "before all" hook: beforeTestSuite.trigger for "should cut the originator and stay in maps application"
                 └-> should cut the originator and stay in maps application
                   └-> "before each" hook: global before each for "should cut the originator and stay in maps application"
                   └-> "before each" hook for "should cut the originator and stay in maps application"
                   └- ✓ pass  (11.3s)
                 └-> "after all" hook: afterTestSuite.trigger for "should cut the originator and stay in maps application"
               └-> "after all" hook: afterTestSuite.trigger in "new map"
             └-: edit existing map
               └-> "before all" hook: beforeTestSuite.trigger in "edit existing map"
               └-: save and return
                 └-> "before all" hook: beforeTestSuite.trigger for "should return to dashboard"
                 └-> should return to dashboard
                   └-> "before each" hook: global before each for "should return to dashboard"
                   └-> "before each" hook for "should return to dashboard"
                   └- ✓ pass  (7.4s)
                 └-> should lose its connection to the dashboard when creating new map
                   └-> "before each" hook: global before each for "should lose its connection to the dashboard when creating new map"
                   └-> "before each" hook for "should lose its connection to the dashboard when creating new map"
                   └- ✓ pass  (19.4s)
                 └-> "after all" hook: afterTestSuite.trigger for "should lose its connection to the dashboard when creating new map"
               └-: save as
                 └-> "before all" hook: beforeTestSuite.trigger for "should return to dashboard and add new panel"
                 └-> should return to dashboard and add new panel
                   └-> "before each" hook: global before each for "should return to dashboard and add new panel"
                   └-> "before each" hook for "should return to dashboard and add new panel"
                   └- ✓ pass  (8.6s)
                 └-> "after all" hook: afterTestSuite.trigger for "should return to dashboard and add new panel"
               └-: save as and uncheck return to origin switch
                 └-> "before all" hook: beforeTestSuite.trigger for "should cut the originator and stay in maps application"
                 └-> should cut the originator and stay in maps application
                   └-> "before each" hook: global before each for "should cut the originator and stay in maps application"
                   └-> "before each" hook for "should cut the originator and stay in maps application"
                   └- ✓ pass  (7.5s)
                 └-> "after all" hook: afterTestSuite.trigger for "should cut the originator and stay in maps application"
               └-> "after all" hook: afterTestSuite.trigger in "edit existing map"
             └-> "after all" hook in "save and return work flow"
             └-> "after all" hook: afterTestSuite.trigger in "save and return work flow"
           └-: embed in dashboard
             └-> "before all" hook: beforeTestSuite.trigger for "should set "data-title" attribute"
             └-> "before all" hook for "should set "data-title" attribute"
             └-> should set "data-title" attribute
               └-> "before each" hook: global before each for "should set "data-title" attribute"
               └- ✓ pass  (29ms)
             └-> should pass index patterns to container
               └-> "before each" hook: global before each for "should pass index patterns to container"
               └- ✓ pass  (2.4s)
             └-> should populate inspector with requests for map embeddable
               └-> "before each" hook: global before each for "should populate inspector with requests for map embeddable"
               └- ✓ pass  (19.0s)
             └-> should apply container state (time, query, filters) to embeddable when loaded
               └-> "before each" hook: global before each for "should apply container state (time, query, filters) to embeddable when loaded"
               └- ✓ pass  (8.7s)
             └-> should apply new container state (time, query, filters) to embeddable
               └-> "before each" hook: global before each for "should apply new container state (time, query, filters) to embeddable"
               └- ✓ pass  (57.5s)
             └-> should re-fetch query when "refresh" is clicked
               └-> "before each" hook: global before each for "should re-fetch query when "refresh" is clicked"
               └- ✓ pass  (20.3s)
             └-> should re-fetch documents with refresh timer
               └-> "before each" hook: global before each for "should re-fetch documents with refresh timer"
               └- ✓ pass  (25.3s)
             └-> dashboard's back button should navigate to previous page
               └-> "before each" hook: global before each for "dashboard's back button should navigate to previous page"
               └- ✓ pass  (16.3s)
             └-> "after all" hook for "dashboard's back button should navigate to previous page"
             └-> "after all" hook: afterTestSuite.trigger for "dashboard's back button should navigate to previous page"
           └-: maps in embeddable library
             └-> "before all" hook: beforeTestSuite.trigger for "save map panel to embeddable library"
             └-> "before all" hook for "save map panel to embeddable library"
             └-> save map panel to embeddable library
               └-> "before each" hook: global before each for "save map panel to embeddable library"
               └- ✓ pass  (8.2s)
             └-> unlink map panel from embeddable library
               └-> "before each" hook: global before each for "unlink map panel from embeddable library"
               └- ✓ pass  (14.2s)
             └-> "after all" hook for "unlink map panel from embeddable library"
             └-> "after all" hook: afterTestSuite.trigger for "unlink map panel from embeddable library"
           └-: embeddable state
             └-> "before all" hook: beforeTestSuite.trigger for "should render map with center and zoom from embeddable state"
             └-> "before all" hook for "should render map with center and zoom from embeddable state"
             └-> should render map with center and zoom from embeddable state
               └-> "before each" hook: global before each for "should render map with center and zoom from embeddable state"
               └- ✓ pass  (2.7s)
             └-> "after all" hook for "should render map with center and zoom from embeddable state"
             └-> "after all" hook: afterTestSuite.trigger for "should render map with center and zoom from embeddable state"
           └-: tooltip filter actions
             └-> "before all" hook: beforeTestSuite.trigger in "tooltip filter actions"
             └-> "before all" hook in "tooltip filter actions"
             └-: apply filter to current view
               └-> "before all" hook: beforeTestSuite.trigger for "should display create filter button when tooltip is locked"
               └-> "before all" hook for "should display create filter button when tooltip is locked"
               └-> should display create filter button when tooltip is locked
                 └-> "before each" hook: global before each for "should display create filter button when tooltip is locked"
                 └- ✓ pass  (28ms)
               └-> should create filters when create filter button is clicked
                 └-> "before each" hook: global before each for "should create filters when create filter button is clicked"
                 └- ✓ pass  (102ms)
               └-> "after all" hook: afterTestSuite.trigger for "should create filters when create filter button is clicked"
             └-: panel actions
               └-> "before all" hook: beforeTestSuite.trigger for "should trigger dashboard drilldown action when clicked"
               └-> should trigger dashboard drilldown action when clicked
                 └-> "before each" hook: global before each for "should trigger dashboard drilldown action when clicked"
                 └-> "before each" hook for "should trigger dashboard drilldown action when clicked"
                 └- ✓ pass  (7.1s)
               └-> should trigger url drilldown action when clicked
                 └-> "before each" hook: global before each for "should trigger url drilldown action when clicked"
                 └-> "before each" hook for "should trigger url drilldown action when clicked"
                 └- ✓ pass  (466ms)
               └-> "after all" hook: afterTestSuite.trigger for "should trigger url drilldown action when clicked"
             └-> "after all" hook in "tooltip filter actions"
             └-> "after all" hook: afterTestSuite.trigger in "tooltip filter actions"
           └-: filter by map extent
             └-> "before all" hook: beforeTestSuite.trigger for "should not filter dashboard by map extent before "filter by map extent" is enabled"
             └-> "before all" hook for "should not filter dashboard by map extent before "filter by map extent" is enabled"
             └-> should not filter dashboard by map extent before "filter by map extent" is enabled
               └-> "before each" hook: global before each for "should not filter dashboard by map extent before "filter by map extent" is enabled"
               └- ✓ pass  (37ms)
             └-> should filter dashboard by map extent when "filter by map extent" is enabled
               └-> "before each" hook: global before each for "should filter dashboard by map extent when "filter by map extent" is enabled"
               └- ✓ pass  (5.6s)
             └-> should filter dashboard by new map extent when map is moved
               └-> "before each" hook: global before each for "should filter dashboard by new map extent when map is moved"
               └- ✓ pass  (12.5s)
             └-> should remove map extent filter dashboard when "filter by map extent" is disabled
               └-> "before each" hook: global before each for "should remove map extent filter dashboard when "filter by map extent" is disabled"
               └- ✓ pass  (5.5s)
             └-> "after all" hook for "should remove map extent filter dashboard when "filter by map extent" is disabled"
             └-> "after all" hook: afterTestSuite.trigger for "should remove map extent filter dashboard when "filter by map extent" is disabled"
           └-> "after all" hook: afterTestSuite.trigger in "embeddable"
         └-> "after all" hook: afterTestSuite.trigger in ""
       └-> "after all" hook in "maps app"
       └-> "after all" hook: afterTestSuite.trigger in "maps app"
   │
   │33 passing (18.0m)
   │24 pending
   │
   │ warn browser[SEVERE] ERROR FETCHING BROWSER LOGS: This driver instance does not have a valid session ID (did you call WebDriver.quit()?) and may no longer be used.
   │ proc [kibana]   log   [16:09:01.528] [info][plugins-system][standard] Stopping all plugins.
   │ proc [kibana]   log   [16:09:01.529] [info][kibana-monitoring][monitoring][monitoring][plugins] Monitoring stats collection is stopped
   │ info [kibana] exited with null after 1132.8 seconds
   │ info [es] stopping node ftr
   │ info [o.e.x.m.p.NativeController] [ftr] Native controller process has stopped - no new native processes can be started
   │ info [o.e.n.Node] [ftr] stopping ...
   │ info [o.e.x.w.WatcherService] [ftr] stopping watch service, reason [shutdown initiated]
   │ info [o.e.x.w.WatcherLifeCycleService] [ftr] watcher has stopped and shutdown
   │ info [o.e.n.Node] [ftr] stopped
   │ info [o.e.n.Node] [ftr] closing ...
   │ info [o.e.n.Node] [ftr] closed
   │ info [es] stopped
   │ info [es] no debug files found, assuming es did not write any
   │ info [es] cleanup complete