# [Fleet] [Security Solution] Install prebuilt rules package using stream-based approach (#195888)

**Resolves: #192350**

## Summary

Implemented stream-based installation of the detection rules package.

**Background**: The installation of the detection rules package was causing OOM (Out of Memory) errors in Serverless environments, where the available memory is limited to 1GB. The root cause was that during installation the package was read and unzipped entirely into memory. Given the large package size, this led to OOMs.

To address these memory issues, the following changes were made:

1. Added branching logic to the `installPackageFromRegistry` and `installPackageByUpload` methods that decides, based on the package name, whether to use streaming. Only one package, `security_detection_engine`, is currently hardcoded to use streaming (a simplified sketch of this check appears at the end of this description).
2. Defined a separate set of steps in the state machine for the stream-based package installation. At this stage it is reduced to cover only Kibana asset installation.
3. Added a new `stepInstallKibanaAssetsWithStreaming` step to handle asset installation. While this method still reads the package archive into memory (unzipping from a readable stream is [not possible due to the design of the .zip format](https://github.com/thejoshwolfe/yauzl?tab=readme-ov-file#no-streaming-unzip-api)), the package is unzipped using streams after being read into a buffer. This allows only a small portion of the archive (100 saved objects at a time) to be unpacked into memory, reducing memory usage (see the batching sketch after this list).
4. The new method also includes several optimizations, such as removing previously installed assets only if they are missing from the new package, and using `savedObjectClient.bulkCreate` instead of the less efficient `savedObjectClient.import`.
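To make the batching behaviour concrete, here is a minimal sketch of how a stream-based Kibana asset installation can work: archive entries are consumed one at a time through an iterator and flushed to the saved objects client in batches of 100. This is not the actual Fleet implementation; the `SavedObjectsBulkClient` interface, the `kibana/` path filter, and the JSON parsing are simplified assumptions used purely for illustration.

```ts
// Minimal sketch of batched, stream-based asset installation (not the actual
// Fleet code). The iterator shape mirrors the ArchiveIterator added in this PR.
interface ArchiveEntry {
  path: string;
  buffer?: Buffer;
}

interface ArchiveIterator {
  traverseEntries: (onEntry: (entry: ArchiveEntry) => Promise<void>) => Promise<void>;
  getPaths: () => Promise<string[]>;
}

interface SavedObjectLike {
  type: string;
  id: string;
  attributes: unknown;
}

// Stand-in for Kibana's saved objects client, narrowed to the single method used here.
interface SavedObjectsBulkClient {
  bulkCreate: (objects: SavedObjectLike[], options?: { overwrite?: boolean }) => Promise<unknown>;
}

const MAX_SO_BATCH_SIZE = 100; // only ~100 saved objects are unpacked into memory at once

export async function installKibanaAssetsWithStreaming(
  archiveIterator: ArchiveIterator,
  savedObjectsClient: SavedObjectsBulkClient
): Promise<void> {
  let batch: SavedObjectLike[] = [];

  const flush = async () => {
    if (batch.length === 0) return;
    // bulkCreate is cheaper than the generic savedObjectClient.import
    await savedObjectsClient.bulkCreate(batch, { overwrite: true });
    batch = [];
  };

  await archiveIterator.traverseEntries(async (entry) => {
    // Assumption: Kibana saved object assets live under a `kibana/` folder
    // inside the package and are stored as JSON files.
    if (!entry.buffer || !entry.path.includes('/kibana/')) {
      return;
    }
    batch.push(JSON.parse(entry.buffer.toString('utf8')) as SavedObjectLike);
    if (batch.length >= MAX_SO_BATCH_SIZE) {
      await flush();
    }
  });

  await flush(); // write any remaining objects
}
```

Because at most `MAX_SO_BATCH_SIZE` parsed objects are held at a time, memory usage stays roughly proportional to the batch size plus the zipped archive buffer, which is consistent with the stable heap observed in the test results below.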
### Test environment

1. Prebuilt detection rules package with ~20k saved objects; 118MB zipped.
2. Local package registry.
3. Production build of Kibana running locally with a 700MB max old space limit, pointed to that registry.

Setting up a test environment is not completely straightforward. Here's a rough outline of the steps:

<details>
<summary>How to test this PR</summary>

1. Create a package containing a large number of prebuilt rules.
   1. I used the `package-storage` repository to find one of the previously released prebuilt rules packages.
   2. Multiplied the number of assets in the package to 20k historical versions.
   3. Built the package using `elastic-package build`.
2. Start a local package registry serving the built package using `elastic-package stack up --services package-registry`.
3. Create a production build of Kibana. To speed up the process, unnecessary artifacts can be skipped:
   ```
   node scripts/build --skip-cdn-assets --skip-docker-ubi --skip-docker-ubuntu --skip-docker-wolfi --skip-docker-fips
   ```
4. Provide the built Kibana with a config pointing to the local registry. The config is located in `build/default/kibana-9.0.0-SNAPSHOT-darwin-aarch64/config/kibana.yml`. You can use the following config:
   ```
   csp.strict: false
   xpack.security.encryptionKey: 've4Vohnu oa0Fu9ae Eethee8c oDieg4do Nohrah1u ao9Hu2oh Aeb4Ieyi Aew1aegi'
   xpack.encryptedSavedObjects.encryptionKey: 'Shah7nai Eew6izai Eir7OoW0 Gewi2ief eiSh8woo shoogh7E Quae6hal ce6Oumah'
   xpack.fleet.internal.registry.kibanaVersionCheckEnabled: false
   xpack.fleet.registryUrl: https://localhost:8080
   elasticsearch:
     username: 'kibana_system'
     password: 'changeme'
     hosts: 'http://localhost:9200'
   ```
5. Override the Node options Kibana starts with to allow it to connect to the local registry and to set the memory limit. For this, edit the `build/default/kibana-9.0.0-SNAPSHOT-darwin-aarch64/bin/kibana` file:
   ```
   NODE_OPTIONS="--no-warnings --max-http-header-size=65536 --unhandled-rejections=warn --dns-result-order=ipv4first --openssl-legacy-provider --max_old_space_size=700 --inspect" NODE_ENV=production NODE_EXTRA_CA_CERTS=~/.elastic-package/profiles/default/certs/ca-cert.pem exec "${NODE}" "${DIR}/src/cli/dist" "${@}"
   ```
6. Navigate to the build folder: `build/default/kibana-9.0.0-SNAPSHOT-darwin-aarch64`.
7. Start Kibana using `./bin/kibana`.
8. Kibana is now running in debug mode, with the debugger listening on port 9229. You can connect to it using VS Code's debug config or Chrome's DevTools.
9. Install prebuilt detection rules by calling the `POST /internal/detection_engine/prebuilt_rules/_bootstrap` endpoint, which uses the new streaming installation under the hood.

</details>

### Test results locally

**Without the streaming approach**

Guaranteed OOM. Even smaller packages, up to 10k rules, caused sporadic OOM errors, so for comparison the package installation was tested without memory limits.

![Screenshot 2024-10-14 at 14 15 26](https://github.com/user-attachments/assets/131cb877-2404-4638-b619-b1370a53659f)

1. Heap memory usage spikes up to 2.5GB.
2. External memory consumes up to 450MB, which is four times the archive size.
3. RSS (Resident Set Size) exceeds 4.5GB.

**With the streaming approach**

No OOM errors observed. The memory consumption chart looks like the following:

![Screenshot 2024-10-14 at 11 15 21](https://github.com/user-attachments/assets/b47ba8c9-2ba7-42de-b921-c33104d4481e)

1. Heap memory remains stable, around 450MB, without any spikes.
2. External memory jumps to around 250MB at the beginning of the installation, then drops to around 120MB, which is roughly equal to the package archive size. I couldn't determine why the external memory consumption exceeds the package size by 2x when the installation starts. I checked the code for places where the package might be loaded into memory twice but found nothing suspicious. This might be worth investigating further.
3. RSS remains stable, peaking slightly above 1GB. I believe this is the upper limit for a package that can be handled without errors in a Serverless environment, where the memory limit is dictated by pod-level settings rather than Node settings and is set to 1GB. I'll verify this on a real Serverless instance to confirm.

### Test results on Serverless

![Screenshot 2024-10-31 at 12 31 34](https://github.com/user-attachments/assets/d20d2860-fa96-4e56-be2b-7b3c0b5c7b77)
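For completeness, the package-name check described in change 1 can be pictured roughly as follows. This is a sketch, not the actual `installPackageFromRegistry` / `installPackageByUpload` code; the constant, helper names, and stub install paths are illustrative assumptions.

```ts
// Sketch of the package-name-based branching between the regular and the
// stream-based installation paths. Names are assumptions for illustration.
const STREAM_INSTALL_PACKAGES = new Set(['security_detection_engine']);

// Stand-ins for the two installation paths.
async function installWithStreaming(pkgName: string): Promise<void> {
  console.log(`installing ${pkgName} via the reduced, stream-based state machine`);
}

async function installRegular(pkgName: string): Promise<void> {
  console.log(`installing ${pkgName} via the regular state machine`);
}

export async function installPackage(pkgName: string): Promise<void> {
  // Only the prebuilt detection rules package currently takes the streaming path.
  if (STREAM_INSTALL_PACKAGES.has(pkgName)) {
    await installWithStreaming(pkgName);
  } else {
    await installRegular(pkgName);
  }
}
```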
Showing 41 changed files with 768 additions and 83 deletions.
### x-pack/plugins/fleet/server/services/epm/archive/archive_iterator.ts (83 additions, 0 deletions)

```ts
/*
 * Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
 * or more contributor license agreements. Licensed under the Elastic License
 * 2.0; you may not use this file except in compliance with the Elastic License
 * 2.0.
 */

import type { AssetsMap, ArchiveIterator, ArchiveEntry } from '../../../../common/types';

import { traverseArchiveEntries } from '.';

/**
 * Creates an iterator for traversing and extracting paths from an archive
 * buffer. This iterator is intended to be used for memory efficient traversal
 * of archive contents without extracting the entire archive into memory.
 *
 * @param archiveBuffer - The buffer containing the archive data.
 * @param contentType - The content type of the archive (e.g.,
 * 'application/zip').
 * @returns ArchiveIterator instance.
 *
 */
export const createArchiveIterator = (
  archiveBuffer: Buffer,
  contentType: string
): ArchiveIterator => {
  const paths: string[] = [];

  const traverseEntries = async (
    onEntry: (entry: ArchiveEntry) => Promise<void>
  ): Promise<void> => {
    await traverseArchiveEntries(archiveBuffer, contentType, async (entry) => {
      await onEntry(entry);
    });
  };

  const getPaths = async (): Promise<string[]> => {
    if (paths.length) {
      return paths;
    }

    await traverseEntries(async (entry) => {
      paths.push(entry.path);
    });

    return paths;
  };

  return {
    traverseEntries,
    getPaths,
  };
};

/**
 * Creates an archive iterator from the assetsMap. This is a stop-gap solution
 * to provide a uniform interface for traversing assets while assetsMap is still
 * in use. It works with a map of assets loaded into memory and is not intended
 * for use with large archives.
 *
 * @param assetsMap - A map where the keys are asset paths and the values are
 * asset buffers.
 * @returns ArchiveIterator instance.
 *
 */
export const createArchiveIteratorFromMap = (assetsMap: AssetsMap): ArchiveIterator => {
  const traverseEntries = async (
    onEntry: (entry: ArchiveEntry) => Promise<void>
  ): Promise<void> => {
    for (const [path, buffer] of assetsMap) {
      await onEntry({ path, buffer });
    }
  };

  const getPaths = async (): Promise<string[]> => {
    return [...assetsMap.keys()];
  };

  return {
    traverseEntries,
    getPaths,
  };
};
```
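A hypothetical usage example of the iterator above (not part of this commit): listing the paths contained in a zipped package and then streaming its entries one at a time. The caller function and its logging are assumptions for illustration only.

```ts
import { createArchiveIterator } from './archive_iterator';

// Hypothetical caller: inspect a zipped package without unpacking it fully.
export async function logPackageContents(archiveBuffer: Buffer): Promise<void> {
  const archiveIterator = createArchiveIterator(archiveBuffer, 'application/zip');

  const paths = await archiveIterator.getPaths();
  console.log(`package contains ${paths.length} entries`);

  // Entries are handed to the callback one at a time, so only a small part of
  // the archive content is held in memory during traversal.
  await archiveIterator.traverseEntries(async (entry) => {
    if (entry.buffer) {
      console.log(`${entry.path}: ${entry.buffer.length} bytes`);
    }
  });
}
```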