
[RAC] Remove rbac on security solution side #110472

Merged
merged 42 commits into from
Sep 1, 2021

Conversation

XavierM
Contributor

@XavierM XavierM commented Aug 30, 2021

Summary

From: https://github.com/elastic/security-team/issues/1607#issuecomment-907142876

Users of Elastic Security solutions commonly rely on the application of Elasticsearch index privileges to users' roles to control access to the .siem-signals* indices within their deployments. Any change that causes these privileges to be ignored or bypassed would be considered a security violation or data leakage by these users.

This practice is especially common amongst:

Organizations that have large SOC teams with multiple tiers of analysts
Organizations that operate under strict interpretations of GDPR and other data protection standards that require them to limit the exposure of personal data even to security analysts.
Organizations that provide a managed SIEM service or other managed security service to multiple customers (MSSP)
Organizations that maintain a Center of Excellence (COE) for Elastic deployments used for the Security Solution.
Common use cases include:

Using Elasticsearch index privileges with document-level security to prohibit lower tier analysts from accessing alerts created from sensitive source data. One Kibana space is used for T1/T2/T3 analysts, who collaborate to run the SOC. Some detection rules operate on data that contains personal information of organization employees. T1 analysts do not have sufficient security clearance to access that source data. Since alerts contain information copied from the source events, T1 analysts must be blocked from accessing those alerts that contain personal data. Users set up document-level security on .siem-signals- indices that prohibits T1 analyst roles from accessing alert documents that have a certain field:value, such as event.dataset:sensitive_data (a role sketch illustrating this follows the list).
Using Elasticsearch index privileges with field-level security to prohibit lower tier analysts from accessing fields in alerts that may have been enriched with personal data of organization employees. For example, source data, even from data sources that do not contain personal data, may be enriched with an employee ID, name, or email address at various points in the data ingestion process. Lower tier analysts do not have security clearance to view those fields in alerts, so their roles are configured to deny access to those fields, such as the ECS field user.full_name. T1 analysts may escalate these alerts to higher tier analysts or a SOC manager, who need to be able to see these fields in order to initiate incident response.
Using Elasticsearch index privileges with document level security as a secondary control (i.e. belt and suspenders) to block access to one tenant's data by other tenants in "single cluster multi-tenant" environment. The user has configured one tenant per Kibana space, and is relying on Elasticsearch index privileges applied to each tenant's roles such that they can access only alerts in their siem-signals- indices. In addition, they configure document-level security so that only alerts that contain a field:value like ECS field organization.name:tenant1 can be accessed. This way, if there is ever an error with regards to space assignments or usage, the DLS privileges will provide a secondary control to protect the privacy of tenant data.
To preserve the expected operation of security solution environments, the general requirements for adding Kibana feature privileges and sub-feature privileges to Alerts should include all of the following:

All Elasticsearch security settings, including index-level, document-level, field-level, and attribute-based access controls, must continue to operate properly on alerts indices.
Alert indices must continue to be private to the Kibana space they are created in by default.
Alert indices must continue to be accessible via Cross-Cluster Search.
Alert indices must continue to be available as input data for rules (aka "rules on alerts").
Alert data must continue to be available to all non-solution Kibana apps (Discover, Lens, Graph) and for all Elasticsearch analysis.

Checklist

@XavierM XavierM requested review from a team as code owners August 30, 2021 13:15
@marshallmain
Contributor

We should remove the alerts-as-data index alias and RBAC const keyword fields from the .siem-signals index as well to prevent any possibility of bypassing existing controls with the new APIs. The index alias is added here and the RBAC fields are added here.
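
For illustration only, detaching an index alias of that kind by hand could look roughly like the sketch below, again with the @elastic/elasticsearch 7.x client. The alias name is a hypothetical placeholder; the PR itself removes the alias and fields where they are added in the mapping/template code rather than calling this API.

import { Client } from '@elastic/elasticsearch';

const client = new Client({ node: 'http://localhost:9200' });

// Hedged sketch: detach a hypothetical alerts-as-data alias from the signals indices.
// Both the alias and the index pattern are assumptions for illustration.
async function removeAlertsAsDataAlias() {
  await client.indices.updateAliases({
    body: {
      actions: [{ remove: { index: '.siem-signals-*', alias: '.alerts-security-solution' } }],
    },
  });
}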

@XavierM XavierM added bug (Fixes for quality problems that affect the customer experience), impact:critical (This issue should be addressed immediately due to a critical level of impact on the product.), Team:Threat Hunting (Security Solution Threat Hunting Team), v7.15.0, v7.16.0, and v8.0.0 labels Aug 30, 2021
@elasticmachine
Contributor

Pinging @elastic/security-threat-hunting (Team:Threat Hunting)

@XavierM XavierM added auto-backport (Deprecated - use backport:version if exact versions are needed) and Theme: rac (label obsolete) labels Aug 30, 2021
@XavierM XavierM requested a review from a team as a code owner August 31, 2021 23:44
Member

@legrego legrego left a comment


Changes to x-pack/test/api_integration/apis/security/privileges.ts LGTM - we are removing feature privileges that were introduced in v7.15.0 (via #108450), which hasn't been released yet. As such, there is no risk of a breaking change or a surprise on upgrade.

I am approving based on a code review of this file only, to unblock this PR. I expect the Security Solution team to have reviewed & tested the rest of the changes accordingly.

Contributor

@marshallmain marshallmain left a comment


Mapping and template changes LGTM

Comment on lines +70 to +75
source: `if (ctx._source['${ALERT_WORKFLOW_STATUS}'] != null) {
ctx._source['${ALERT_WORKFLOW_STATUS}'] = '${status}'
}
if (ctx._source.signal != null && ctx._source.signal.status != null) {
ctx._source.signal.status = '${status}'
}`,
Contributor


I don't think we need this additional logic to check for workflow_status in this route since it's security solution specific, but the route will still function correctly with it here.
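
For context, here is a hedged sketch of how a status-update script of this shape is typically applied with update by query (the approach the commit list below describes for Cases). The index pattern, constant value, status, and query are illustrative assumptions, not the exact code in this PR.

import { Client } from '@elastic/elasticsearch';

const client = new Client({ node: 'http://localhost:9200' });

// Assumed values for illustration only.
const ALERT_WORKFLOW_STATUS = 'kibana.alert.workflow_status';
const status = 'closed';

async function updateAlertStatus(alertIds: string[]) {
  await client.updateByQuery({
    index: '.siem-signals-*', // illustrative index pattern
    refresh: true,
    body: {
      script: {
        lang: 'painless',
        // Same shape as the snippet above: write the alerts-as-data field when it
        // exists and fall back to the legacy signal.status field.
        source: `if (ctx._source['${ALERT_WORKFLOW_STATUS}'] != null) {
          ctx._source['${ALERT_WORKFLOW_STATUS}'] = '${status}'
        }
        if (ctx._source.signal != null && ctx._source.signal.status != null) {
          ctx._source.signal.status = '${status}'
        }`,
      },
      query: { ids: { values: alertIds } },
    },
  });
}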

Contributor

@andrew-goldstein andrew-goldstein left a comment


Thanks for this update @XavierM, @yctercero, and @jonathan-buttner 🙏

  • Desk-tested functionality locally over a Zoom call with several other 👀, in the Security Solution and o11y apps, using a superuser role in a newly-created Kibana space
  • This review does not cover other authz scenarios

LGTM 🚀

@XavierM XavierM enabled auto-merge (squash) September 1, 2021 00:32
@peluja1012
Contributor

@elasticmachine merge upstream

@@ -532,7 +532,6 @@ export const waitForAlertsToPopulate = async (alertCountThreshold = 1) => {
  cy.waitUntil(
    () => {
      refreshPage();
      cy.get(LOADING_INDICATOR).should('exist');
Member


@yctercero is there any reason why this line has been deleted? This was added to fix some tests and remove flakiness
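
For reference, a hedged sketch of the polling pattern under discussion, reconstructed from the diff context above; ALERTS_COUNT, the timing values, and the count check are assumptions, and the LOADING_INDICATOR assertion is the line whose removal is being questioned.

// Cypress helper sketch; cy.waitUntil comes from the cypress-wait-until plugin.
// refreshPage, LOADING_INDICATOR, and ALERTS_COUNT are assumed to be imported from
// the surrounding test tasks/screens modules.
export const waitForAlertsToPopulate = (alertCountThreshold = 1) => {
  cy.waitUntil(
    () => {
      refreshPage();
      cy.get(LOADING_INDICATOR).should('exist'); // the assertion removed in this PR
      return cy
        .get(ALERTS_COUNT)
        .invoke('text')
        .then((countText: string) => parseInt(countText, 10) >= alertCountThreshold);
    },
    { interval: 500, timeout: 60000 }
  );
};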

@kibanamachine
Contributor

💛 Build succeeded, but was flaky


Test Failures

Kibana Pipeline / general / Chrome X-Pack UI Plugin Functional Tests.x-pack/test/plugin_functional/test_suites/global_search/global_search_providers·ts.GlobalSearch API GlobalSearch providers SavedObject provider can search for index patterns

Link to Jenkins

Standard Out

Failed Tests Reporter:
  - Test has not failed recently on tracked branches

[00:00:00]       │
[00:00:47]         └-: GlobalSearch API
[00:00:47]           └-> "before all" hook in "GlobalSearch API"
[00:00:47]           └-: GlobalSearch providers
[00:00:47]             └-> "before all" hook in "GlobalSearch providers"
[00:00:47]             └-> "before all" hook in "GlobalSearch providers"
[00:00:47]               │ debg navigating to globalSearchTestApp url: http://localhost:61221/app/globalSearchTestApp
[00:00:47]               │ debg navigate to: http://localhost:61221/app/globalSearchTestApp
[00:00:47]               │ debg browser[INFO] http://localhost:61221/app/globalSearchTestApp?_t=1630480310485 281 Refused to execute inline script because it violates the following Content Security Policy directive: "script-src 'unsafe-eval' 'self'". Either the 'unsafe-inline' keyword, a hash ('sha256-P5polb1UreUSOe5V/Pv7tc+yeZuJXiOi/3fqhGsU7BE='), or a nonce ('nonce-...') is required to enable inline execution.
[00:00:47]               │
[00:00:47]               │ debg browser[INFO] http://localhost:61221/bootstrap.js 41:19 "^ A single error about an inline script not firing due to content security policy is expected!"
[00:00:47]               │ debg ... sleep(700) start
[00:00:48]               │ debg ... sleep(700) end
[00:00:48]               │ debg returned from get, calling refresh
[00:00:49]               │ debg browser[INFO] http://localhost:61221/app/globalSearchTestApp?_t=1630480310485 281 Refused to execute inline script because it violates the following Content Security Policy directive: "script-src 'unsafe-eval' 'self'". Either the 'unsafe-inline' keyword, a hash ('sha256-P5polb1UreUSOe5V/Pv7tc+yeZuJXiOi/3fqhGsU7BE='), or a nonce ('nonce-...') is required to enable inline execution.
[00:00:49]               │
[00:00:49]               │ debg browser[INFO] http://localhost:61221/bootstrap.js 41:19 "^ A single error about an inline script not firing due to content security policy is expected!"
[00:00:49]               │ debg currentUrl = http://localhost:61221/app/globalSearchTestApp
[00:00:49]               │          appUrl = http://localhost:61221/app/globalSearchTestApp
[00:00:49]               │ debg TestSubjects.find(kibanaChrome)
[00:00:49]               │ debg Find.findByCssSelector('[data-test-subj="kibanaChrome"]') with timeout=60000
[00:00:50]               │ debg ... sleep(501) start
[00:00:51]               │ debg ... sleep(501) end
[00:00:51]               │ debg in navigateTo url = http://localhost:61221/app/globalSearchTestApp
[00:00:51]             └-: SavedObject provider
[00:00:51]               └-> "before all" hook for "can search for index patterns"
[00:00:51]               └-> "before all" hook for "can search for index patterns"
[00:00:51]                 │ info [x-pack/test/plugin_functional/es_archives/global_search/basic] Loading "mappings.json"
[00:00:51]                 │ info [x-pack/test/plugin_functional/es_archives/global_search/basic] Loading "data.json"
[00:00:51]                 │ info [o.e.c.m.MetadataDeleteIndexService] [node-01] [.kibana_8.0.0_001/FOjPFuLoRle469_Z6PcpcA] deleting index
[00:00:51]                 │ info [o.e.c.m.MetadataDeleteIndexService] [node-01] [.kibana_task_manager_8.0.0_001/AHMJbkNtSjGegoeObMUYUg] deleting index
[00:00:51]                 │ info [x-pack/test/plugin_functional/es_archives/global_search/basic] Deleted existing index ".kibana_8.0.0_001"
[00:00:51]                 │ info [x-pack/test/plugin_functional/es_archives/global_search/basic] Deleted existing index ".kibana_task_manager_8.0.0_001"
[00:00:51]                 │ info [o.e.c.m.MetadataCreateIndexService] [node-01] [.kibana_1] creating index, cause [api], templates [], shards [1]/[0]
[00:00:51]                 │ info [o.e.c.r.a.AllocationService] [node-01] current.health="GREEN" message="Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.kibana_1][0]]])." previous.health="YELLOW" reason="shards started [[.kibana_1][0]]"
[00:00:51]                 │ info [x-pack/test/plugin_functional/es_archives/global_search/basic] Created index ".kibana_1"
[00:00:51]                 │ debg [x-pack/test/plugin_functional/es_archives/global_search/basic] ".kibana_1" settings {"index":{"number_of_shards":"1","auto_expand_replicas":"0-1","number_of_replicas":"0"}}
[00:00:51]                 │ info [o.e.c.m.MetadataMappingService] [node-01] [.kibana_1/soCt5zk_RwqN9JrK38mg0Q] update_mapping [_doc]
[00:00:51]                 │ info [o.e.c.m.MetadataMappingService] [node-01] [.kibana_1/soCt5zk_RwqN9JrK38mg0Q] update_mapping [_doc]
[00:00:51]                 │ info [x-pack/test/plugin_functional/es_archives/global_search/basic] Indexed 7 docs into ".kibana"
[00:00:51]                 │ debg Migrating saved objects
[00:00:51]                 │ proc [kibana]   log   [07:11:54.487] [info][savedobjects-service] [.kibana_task_manager] INIT -> CREATE_NEW_TARGET. took: 10ms.
[00:00:51]                 │ proc [kibana]   log   [07:11:54.490] [info][savedobjects-service] [.kibana] INIT -> WAIT_FOR_YELLOW_SOURCE. took: 16ms.
[00:00:51]                 │ proc [kibana]   log   [07:11:54.494] [info][savedobjects-service] [.kibana] WAIT_FOR_YELLOW_SOURCE -> CHECK_UNKNOWN_DOCUMENTS. took: 4ms.
[00:00:51]                 │ info [o.e.c.m.MetadataCreateIndexService] [node-01] [.kibana_task_manager_8.0.0_001] creating index, cause [api], templates [], shards [1]/[1]
[00:00:51]                 │ info [o.e.c.r.a.AllocationService] [node-01] updating number_of_replicas to [0] for indices [.kibana_task_manager_8.0.0_001]
[00:00:51]                 │ proc [kibana]   log   [07:11:54.505] [info][savedobjects-service] [.kibana] CHECK_UNKNOWN_DOCUMENTS -> SET_SOURCE_WRITE_BLOCK. took: 11ms.
[00:00:51]                 │ info [o.e.c.m.MetadataIndexStateService] [node-01] adding block write to indices [[.kibana_1/soCt5zk_RwqN9JrK38mg0Q]]
[00:00:51]                 │ info [o.e.c.r.a.AllocationService] [node-01] current.health="GREEN" message="Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.kibana_task_manager_8.0.0_001][0]]])." previous.health="YELLOW" reason="shards started [[.kibana_task_manager_8.0.0_001][0]]"
[00:00:51]                 │ info [o.e.c.m.MetadataIndexStateService] [node-01] completed adding block write to indices [.kibana_1]
[00:00:51]                 │ proc [kibana]   log   [07:11:54.582] [info][savedobjects-service] [.kibana_task_manager] CREATE_NEW_TARGET -> MARK_VERSION_INDEX_READY. took: 95ms.
[00:00:51]                 │ proc [kibana]   log   [07:11:54.599] [error][plugins][taskManager] [WorkloadAggregator]: Error: Invalid workload: {"took":0,"timed_out":false,"_shards":{"total":0,"successful":0,"skipped":0,"failed":0},"hits":{"total":{"value":0,"relation":"eq"},"max_score":0,"hits":[]}}
[00:00:51]                 │ proc [kibana]   log   [07:11:54.601] [info][savedobjects-service] [.kibana] SET_SOURCE_WRITE_BLOCK -> CALCULATE_EXCLUDE_FILTERS. took: 96ms.
[00:00:51]                 │ proc [kibana]   log   [07:11:54.609] [info][savedobjects-service] [.kibana] CALCULATE_EXCLUDE_FILTERS -> CREATE_REINDEX_TEMP. took: 8ms.
[00:00:51]                 │ proc [kibana]   log   [07:11:54.624] [info][savedobjects-service] [.kibana_task_manager] MARK_VERSION_INDEX_READY -> DONE. took: 42ms.
[00:00:51]                 │ proc [kibana]   log   [07:11:54.625] [info][savedobjects-service] [.kibana_task_manager] Migration completed after 148ms
[00:00:51]                 │ info [o.e.c.m.MetadataCreateIndexService] [node-01] [.kibana_8.0.0_reindex_temp] creating index, cause [api], templates [], shards [1]/[1]
[00:00:51]                 │ info [o.e.c.r.a.AllocationService] [node-01] updating number_of_replicas to [0] for indices [.kibana_8.0.0_reindex_temp]
[00:00:51]                 │ info [o.e.c.r.a.AllocationService] [node-01] current.health="GREEN" message="Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.kibana_8.0.0_reindex_temp][0]]])." previous.health="YELLOW" reason="shards started [[.kibana_8.0.0_reindex_temp][0]]"
[00:00:51]                 │ proc [kibana]   log   [07:11:54.686] [info][savedobjects-service] [.kibana] CREATE_REINDEX_TEMP -> REINDEX_SOURCE_TO_TEMP_OPEN_PIT. took: 77ms.
[00:00:51]                 │ proc [kibana]   log   [07:11:54.699] [info][savedobjects-service] [.kibana] REINDEX_SOURCE_TO_TEMP_OPEN_PIT -> REINDEX_SOURCE_TO_TEMP_READ. took: 13ms.
[00:00:51]                 │ proc [kibana]   log   [07:11:54.717] [info][savedobjects-service] [.kibana] Starting to process 7 documents.
[00:00:51]                 │ proc [kibana]   log   [07:11:54.718] [info][savedobjects-service] [.kibana] REINDEX_SOURCE_TO_TEMP_READ -> REINDEX_SOURCE_TO_TEMP_INDEX. took: 18ms.
[00:00:51]                 │ proc [kibana]   log   [07:11:54.748] [info][savedobjects-service] [.kibana] REINDEX_SOURCE_TO_TEMP_INDEX -> REINDEX_SOURCE_TO_TEMP_INDEX_BULK. took: 31ms.
[00:00:51]                 │ info [o.e.c.m.MetadataMappingService] [node-01] [.kibana_8.0.0_reindex_temp/38yz4IbkTFi7enBVZtLP8g] update_mapping [_doc]
[00:00:51]                 │ info [o.e.c.m.MetadataMappingService] [node-01] [.kibana_8.0.0_reindex_temp/38yz4IbkTFi7enBVZtLP8g] update_mapping [_doc]
[00:00:51]                 │ info [o.e.c.m.MetadataMappingService] [node-01] [.kibana_8.0.0_reindex_temp/38yz4IbkTFi7enBVZtLP8g] update_mapping [_doc]
[00:00:51]                 │ info [o.e.c.m.MetadataMappingService] [node-01] [.kibana_8.0.0_reindex_temp/38yz4IbkTFi7enBVZtLP8g] update_mapping [_doc]
[00:00:51]                 │ info [o.e.c.m.MetadataMappingService] [node-01] [.kibana_8.0.0_reindex_temp/38yz4IbkTFi7enBVZtLP8g] update_mapping [_doc]
[00:00:52]                 │ proc [kibana]   log   [07:11:54.962] [info][savedobjects-service] [.kibana] REINDEX_SOURCE_TO_TEMP_INDEX_BULK -> REINDEX_SOURCE_TO_TEMP_READ. took: 214ms.
[00:00:52]                 │ proc [kibana]   log   [07:11:54.975] [info][savedobjects-service] [.kibana] Processed 7 documents out of 7.
[00:00:52]                 │ proc [kibana]   log   [07:11:54.975] [info][savedobjects-service] [.kibana] REINDEX_SOURCE_TO_TEMP_READ -> REINDEX_SOURCE_TO_TEMP_CLOSE_PIT. took: 13ms.
[00:00:52]                 │ proc [kibana]   log   [07:11:54.983] [info][savedobjects-service] [.kibana] REINDEX_SOURCE_TO_TEMP_CLOSE_PIT -> SET_TEMP_WRITE_BLOCK. took: 8ms.
[00:00:52]                 │ info [o.e.c.m.MetadataIndexStateService] [node-01] adding block write to indices [[.kibana_8.0.0_reindex_temp/38yz4IbkTFi7enBVZtLP8g]]
[00:00:52]                 │ info [o.e.c.m.MetadataIndexStateService] [node-01] completed adding block write to indices [.kibana_8.0.0_reindex_temp]
[00:00:52]                 │ proc [kibana]   log   [07:11:55.035] [info][savedobjects-service] [.kibana] SET_TEMP_WRITE_BLOCK -> CLONE_TEMP_TO_TARGET. took: 52ms.
[00:00:52]                 │ info [o.e.c.m.MetadataCreateIndexService] [node-01] applying create index request using existing index [.kibana_8.0.0_reindex_temp] metadata
[00:00:52]                 │ info [o.e.c.m.MetadataCreateIndexService] [node-01] [.kibana_8.0.0_001] creating index, cause [clone_index], templates [], shards [1]/[1]
[00:00:52]                 │ info [o.e.c.r.a.AllocationService] [node-01] updating number_of_replicas to [0] for indices [.kibana_8.0.0_001]
[00:00:52]                 │ info [o.e.c.m.MetadataMappingService] [node-01] [.kibana_8.0.0_001/Fsznu9U3QUeKRp37sPG9eg] create_mapping
[00:00:52]                 │ info [o.e.c.r.a.AllocationService] [node-01] current.health="GREEN" message="Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.kibana_8.0.0_001][0]]])." previous.health="YELLOW" reason="shards started [[.kibana_8.0.0_001][0]]"
[00:00:52]                 │ proc [kibana]   log   [07:11:55.188] [info][savedobjects-service] [.kibana] CLONE_TEMP_TO_TARGET -> REFRESH_TARGET. took: 153ms.
[00:00:52]                 │ proc [kibana]   log   [07:11:55.193] [info][savedobjects-service] [.kibana] REFRESH_TARGET -> OUTDATED_DOCUMENTS_SEARCH_OPEN_PIT. took: 5ms.
[00:00:52]                 │ proc [kibana]   log   [07:11:55.198] [info][savedobjects-service] [.kibana] OUTDATED_DOCUMENTS_SEARCH_OPEN_PIT -> OUTDATED_DOCUMENTS_SEARCH_READ. took: 5ms.
[00:00:52]                 │ proc [kibana]   log   [07:11:55.211] [info][savedobjects-service] [.kibana] OUTDATED_DOCUMENTS_SEARCH_READ -> OUTDATED_DOCUMENTS_SEARCH_CLOSE_PIT. took: 13ms.
[00:00:52]                 │ proc [kibana]   log   [07:11:55.214] [info][savedobjects-service] [.kibana] OUTDATED_DOCUMENTS_SEARCH_CLOSE_PIT -> UPDATE_TARGET_MAPPINGS. took: 3ms.
[00:00:52]                 │ info [o.e.c.m.MetadataMappingService] [node-01] [.kibana_8.0.0_001/Fsznu9U3QUeKRp37sPG9eg] update_mapping [_doc]
[00:00:52]                 │ proc [kibana]   log   [07:11:55.297] [info][savedobjects-service] [.kibana] UPDATE_TARGET_MAPPINGS -> UPDATE_TARGET_MAPPINGS_WAIT_FOR_TASK. took: 83ms.
[00:00:52]                 │ info [o.e.c.m.MetadataCreateIndexService] [node-01] [.tasks] creating index, cause [auto(bulk api)], templates [], shards [1]/[1]
[00:00:52]                 │ info [o.e.c.r.a.AllocationService] [node-01] updating number_of_replicas to [0] for indices [.tasks]
[00:00:52]                 │ info [o.e.c.r.a.AllocationService] [node-01] current.health="GREEN" message="Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.tasks][0]]])." previous.health="YELLOW" reason="shards started [[.tasks][0]]"
[00:00:52]                 │ info [o.e.t.LoggingTaskListener] [node-01] 2485 finished with response BulkByScrollResponse[took=36.9ms,timed_out=false,sliceId=null,updated=7,created=0,deleted=0,batches=1,versionConflicts=0,noops=0,retries=0,throttledUntil=0s,bulk_failures=[],search_failures=[]]
[00:00:52]                 │ proc [kibana]   log   [07:11:55.516] [info][savedobjects-service] [.kibana] UPDATE_TARGET_MAPPINGS_WAIT_FOR_TASK -> MARK_VERSION_INDEX_READY. took: 219ms.
[00:00:52]                 │ info [o.e.c.m.MetadataDeleteIndexService] [node-01] [.kibana_8.0.0_reindex_temp/38yz4IbkTFi7enBVZtLP8g] deleting index
[00:00:52]                 │ proc [kibana]   log   [07:11:55.570] [info][savedobjects-service] [.kibana] MARK_VERSION_INDEX_READY -> DONE. took: 54ms.
[00:00:52]                 │ proc [kibana]   log   [07:11:55.571] [info][savedobjects-service] [.kibana] Migration completed after 1097ms
[00:00:52]                 │ debg [x-pack/test/plugin_functional/es_archives/global_search/basic] Migrated Kibana index after loading Kibana data
[00:00:53]                 │ debg [x-pack/test/plugin_functional/es_archives/global_search/basic] Ensured that default space exists in .kibana
[00:00:53]                 │ debg applying update to kibana config: {"accessibility:disableAnimations":true,"dateFormat:tz":"UTC","visualization:visualize:legacyChartsLibrary":true,"visualization:visualize:legacyPieChartsLibrary":true}
[00:00:55]               └-> can search for index patterns
[00:00:55]                 └-> "before each" hook: global before each for "can search for index patterns"
[00:01:25]                 │ info Taking screenshot "/dev/shm/workspace/parallel/22/kibana/x-pack/test/plugin_functional/screenshots/failure/GlobalSearch API GlobalSearch providers SavedObject provider can search for index patterns.png"
[00:01:25]                 │ info Current URL is: http://localhost:61221/app/globalSearchTestApp
[00:01:25]                 │ info Saving page source to: /dev/shm/workspace/parallel/22/kibana/x-pack/test/plugin_functional/failure_debug/html/GlobalSearch API GlobalSearch providers SavedObject provider can search for index patterns.html
[00:01:25]                 └- ✖ fail: GlobalSearch API GlobalSearch providers SavedObject provider can search for index patterns
[00:01:25]                 │      ScriptTimeoutError: script timeout
[00:01:25]                 │   (Session info: headless chrome=92.0.4515.159)
[00:01:25]                 │       at Object.throwDecodedError (/dev/shm/workspace/parallel/22/kibana/node_modules/selenium-webdriver/lib/error.js:550:15)
[00:01:25]                 │       at parseHttpResponse (/dev/shm/workspace/parallel/22/kibana/node_modules/selenium-webdriver/lib/http.js:565:13)
[00:01:25]                 │       at Executor.execute (/dev/shm/workspace/parallel/22/kibana/node_modules/selenium-webdriver/lib/http.js:491:26)
[00:01:25]                 │       at processTicksAndRejections (internal/process/task_queues.js:95:5)
[00:01:25]                 │       at Task.exec (/dev/shm/workspace/parallel/22/kibana/test/functional/services/remote/prevent_parallel_calls.ts:28:20)
[00:01:25]                 │ 
[00:01:25]                 │ 

Stack Trace

ScriptTimeoutError: script timeout
  (Session info: headless chrome=92.0.4515.159)
    at Object.throwDecodedError (/dev/shm/workspace/parallel/22/kibana/node_modules/selenium-webdriver/lib/error.js:550:15)
    at parseHttpResponse (/dev/shm/workspace/parallel/22/kibana/node_modules/selenium-webdriver/lib/http.js:565:13)
    at Executor.execute (/dev/shm/workspace/parallel/22/kibana/node_modules/selenium-webdriver/lib/http.js:491:26)
    at processTicksAndRejections (internal/process/task_queues.js:95:5)
    at Task.exec (/dev/shm/workspace/parallel/22/kibana/test/functional/services/remote/prevent_parallel_calls.ts:28:20) {
  remoteStacktrace: '#0 0x561ad2909a63 <unknown>\n' +
    '#1 0x561ad267e2ef <unknown>\n' +
    '#2 0x561ad26e5332 <unknown>\n' +
    '#3 0x561ad26d10e2 <unknown>\n' +
    '#4 0x561ad26e423c <unknown>\n' +
    '#5 0x561ad26d0fd3 <unknown>\n' +
    '#6 0x561ad26a7514 <unknown>\n' +
    '#7 0x561ad26a8505 <unknown>\n' +
    '#8 0x561ad2935e2e <unknown>\n' +
    '#9 0x561ad294b886 <unknown>\n' +
    '#10 0x561ad2936d75 <unknown>\n' +
    '#11 0x561ad294cd94 <unknown>\n' +
    '#12 0x561ad292d8eb <unknown>\n' +
    '#13 0x561ad2967cd8 <unknown>\n' +
    '#14 0x561ad2967e58 <unknown>\n' +
    '#15 0x561ad2981dfd <unknown>\n' +
    '#16 0x7f6de28b76ba start_thread\n'
}

Metrics [docs]

Module Count

Fewer modules leads to a faster build time

id before after diff
securitySolution 2392 2385 -7

Public APIs missing comments

Total count of every public API that lacks a comment. Target amount is 0. Run node scripts/build_api_docs --plugin [yourplugin] --stats comments for more detailed information.

id before after diff
timelines 845 846 +1

Async chunks

Total size of all lazy-loaded chunks that will be downloaded as the user navigates the app

id before after diff
securitySolution 6.5MB 6.5MB -5.2KB
timelines 420.2KB 420.4KB +208.0B
total -5.0KB

Page load bundle

Size of the bundles that are downloaded on every page load. Target size is below 100kb

id before after diff
core 417.5KB 417.6KB +95.0B
securitySolution 207.9KB 207.7KB -264.0B
timelines 307.8KB 308.3KB +528.0B
total +359.0B
Unknown metric groups

API count

id before after diff
timelines 966 967 +1

History

To update your PR or re-run it, just comment with:
@elasticmachine merge upstream

@XavierM XavierM merged commit 16af3e9 into elastic:master Sep 1, 2021
@kibanamachine
Contributor

💔 Backport failed

Branch Result
7.15 Commit could not be cherrypicked due to conflicts
7.x Commit could not be cherrypicked due to conflicts

To backport manually run:
node scripts/backport --pr 110472

semd pushed a commit to semd/kibana that referenced this pull request Sep 1, 2021
* wip to remove rbac

* Revert "[Cases] Include rule registry client for updating alert statuses (elastic#108588)"

This reverts commit 1fd7038.

This leaves the rule registry mock changes

* remove rbac on Trend/Count alert

* update detection api for status

* remove @kbn-alerts packages

* fix leftover

* Switching cases to leverage update by query for alert status

* Adding missed files

* fix bad logic

* updating tests for use_alerts_privileges

* remove index alias/fields

* fix types

* fix plugin to get the right index names

* left over of alis on template

* forget to use current user for create/read route index

* updated alerts page to not show table when no privileges and updates to tests

* fix bug when switching between o11y and security solution

* updates tests and move to use privileges page when user tries to access alerts without proper access

* updating jest tests

* pairing with yara

* bring back kbn-alerts after discussion with the team

* fix types

* fix index field for o11y

* fix bug with updating index priv state

* fix i18n issue and update api docs

* fix refresh on alerts

* fix render view on alerts

* updating tests and checking for null in alerts page to not show no privileges page before load

* fix details rules

Co-authored-by: Jonathan Buttner <jonathan.buttner@elastic.co>
Co-authored-by: Yara Tercero <yara.tercero@elastic.co>
# Conflicts:
#	x-pack/plugins/security_solution/cypress/integration/detection_alerts/alerts_details.spec.ts
semd pushed a commit to semd/kibana that referenced this pull request Sep 1, 2021
semd added a commit that referenced this pull request Sep 1, 2021
* [RAC] Remove rbac on security solution side (#110472)


* skip test

Co-authored-by: Xavier Mouligneau <189600+XavierM@users.noreply.github.com>
semd added a commit that referenced this pull request Sep 1, 2021
* [Security Solution] Updates loock-back time on Cypress tests (#110609)

* updates loock-back time

* updates loock-back value for 'expectedExportedRule'

* skips tests to unblock 7.15 branch

* [RAC] Remove rbac on security solution side (#110472)


* skip tests

Co-authored-by: Gloria Hornero <snootchie.boochies@gmail.com>
Co-authored-by: Kibana Machine <42973632+kibanamachine@users.noreply.github.com>
Co-authored-by: Xavier Mouligneau <189600+XavierM@users.noreply.github.com>
Labels
auto-backport, bug, impact:critical, release_note:enhancement, Team:Threat Hunting, Theme: rac, v7.15.0, v7.16.0, v8.0.0