[Stack Monitoring] Testing strategy for agent/integration data #119658

Closed · neptunian opened this issue Nov 24, 2021 · 21 comments
Labels: Feature:Stack Monitoring, Team:Infra Monitoring UI - DEPRECATED (use Team:obs-ux-infra_services)

@neptunian (Contributor) commented Nov 24, 2021

Aside from manual testing, we need to write functional and API integration tests that use the new `metrics-*` agent data for each integration. There are a few ways to go about it:

  • @chrisronline locally modified the es_archiver to take the existing .monitoring archive data and rebuild it into metricbeat data. See Part 2. This means he was able to copy over all the existing functional and API integration tests without having to change anything except the archive that was to be loaded. We could do something similar and convert the existing metricbeat-* data to metrics-* data. Since we never actually supported metricbeat-* or ensured the UI worked with it, I don't know if this data is totally correct.
  • Generate new data. This would mean we'd need the es_archiver to support saving new data, which it does not yet fully do (see Add support for data streams in ES Archiver #69061). It would also mean going over our tests, creating new ones, and making them match this new data. It would also give us an opportunity to make sure the data is correct and to understand the app better. I am leaning towards this.
  • @klacabane suggested using live data in the tests, but I think this would probably add too much time and flakiness.

@klacabane and I are going to investigate the best approach, see what others have done, and see how difficult it would be to use/modify the es_archiver to work with data streams.

**Update**

After some discussion with @klacabane, we decided the following:

  • Use the package registry Docker image to load actual packages so we have the latest mappings. We can pass in a custom configuration YAML that points Kibana at that registry and install each package with the endpoint `/api/fleet/epm/packages/${pkg}/`, which should install its latest version (see the sketch after this list).
  • Generate agent/integration data by transforming the existing _mb archived data, adding the correct data stream index value and any other values that are necessary (using something similar to this script).
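A minimal sketch of that install step, assuming a local Kibana with Fleet enabled, Node 18+ (for the global `fetch`), and placeholder host/credentials; the endpoint is the one named above:

```ts
// Hedged sketch: install the latest version of a package through the Fleet
// endpoint quoted above. Host, credentials, and error handling are placeholders.
async function installLatestPackage(pkg: string): Promise<void> {
  const res = await fetch(`http://localhost:5601/api/fleet/epm/packages/${pkg}/`, {
    method: 'POST',
    headers: {
      'kbn-xsrf': 'true', // Kibana rejects API writes without this header
      authorization: 'Basic ' + Buffer.from('elastic:changeme').toString('base64'),
    },
  });
  if (!res.ok) {
    throw new Error(`installing ${pkg} failed: ${res.status} ${await res.text()}`);
  }
}

// e.g. during test suite setup:
// await installLatestPackage('elasticsearch');
```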
@neptunian added the Team:Infra Monitoring UI - DEPRECATED label on Nov 24, 2021
@elasticmachine (Contributor) commented:

Pinging @elastic/infra-monitoring-ui (Team:Infra Monitoring UI)

@matschaffer (Contributor) commented:

I like parts of both the first two bullet points. Being able to run the same test code over two data sets sounds like a nice move.

But also, if the new code will use data streams, it seems like the test data should be loaded as a data stream too. Of course, it might not matter much.

The UI will query `metrics-*`, which could match indices or data streams, and I wouldn't expect the query responses to change.
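To illustrate that point (not from the thread): a search against the `metrics-*` pattern resolves plain indices and data stream backing indices alike, so the hits the UI consumes keep the same shape either way. A small sketch with the Elasticsearch JS client; node URL and credentials are placeholder assumptions:

```ts
import { Client } from '@elastic/elasticsearch';

const client = new Client({
  node: 'http://localhost:9200',
  auth: { username: 'elastic', password: 'changeme' }, // placeholder credentials
});

async function sampleMetricsDoc() {
  // 'metrics-*' matches data streams as well as plain indices.
  const result = await client.search({
    index: 'metrics-*',
    size: 1,
    query: { exists: { field: 'data_stream.dataset' } }, // agent docs carry data_stream.*
  });
  return result.hits.hits[0]?._source;
}
```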

@miltonhultgren (Contributor) commented:

I know we're still early on this, but @matschaffer and I are looking into how to simulate Stack Monitoring data with apm-synthtrace, which supports writing to data streams. It may be an option to consider: it would require a bit more investment in our data generation tooling, but I think we're on that path anyway, and this would be for a concrete problem.

@klacabane (Contributor) commented:

I'm wondering if we currently have validation of the mappings, or if that's even necessary with our versioning model.
In an agent world the mappings are defined by the integration package, and we have a frontend component that expects a specific mapping definition to query and present data. What if the mapping changes, for example by removing a property? Would that be guaranteed to be caught by end-to-end tests (or another mechanism), or are there scenarios where it could go unnoticed?

@matschaffer (Contributor) commented:

> are there scenarios where it could go unnoticed?

I'm fairly certain the answer to that is "yes". It sounds very close to the failure scenario I demoed during the Nov 15th team meeting, though granted, in your scenario the mappings come from packages rather than ES itself.

@klacabane (Contributor) commented:

I guess we could benefit from contract/schema tests on the mappings. It would be valuable to initially validate an integration package's mappings against our expectations, but I don't know how doable that is currently; maybe we can leverage type definitions as a source of truth for monitoring expectations. Parsing our queries would be another (complex) solution.
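A hedged sketch of what such a mapping contract test could look like: fetch the installed mapping and assert that the fields the UI queries still exist. The field list is a hypothetical example, not the team's actual expectations:

```ts
import { Client } from '@elastic/elasticsearch';

// Hypothetical required fields; a real list would be derived from the UI's
// queries or from type definitions used as the source of truth.
const REQUIRED_FIELDS = ['data_stream.dataset', 'elasticsearch.cluster.stats.status'];

async function assertMappingContract(client: Client, indexPattern: string) {
  const response = await client.indices.getMapping({ index: indexPattern });
  for (const [indexName, def] of Object.entries(response)) {
    for (const field of REQUIRED_FIELDS) {
      // Walk the mapping tree: each dotted segment nests under `properties`.
      let node: any = def.mappings;
      for (const segment of field.split('.')) {
        node = node?.properties?.[segment];
      }
      if (!node) {
        throw new Error(`${indexName} is missing a mapping for "${field}"`);
      }
    }
  }
}
```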

Regarding the strategy, we could start with option 1, which sounds cheap to implement and would provide initial coverage that quickly surfaces bugs in the data stream usage. Once data stream support is added to esArchiver (or we have an alternative way to load data into a stream), we could work on the second option.

@klacabane (Contributor) commented Dec 2, 2021

So it looks like we could use esArchiver if the data streams are already created (see #68794). In our case the data streams and relevant assets are installed by the Fleet application on request. I'm thinking about hitting that endpoint during the test suite setup. While it will add coupling and complexity to our testing environment (we'll need a package registry running in a Docker env?), I like the idea of a tighter integration with the package assets, compared to defining static mappings that could go out of sync.

With data streams available, we can use esArchiver to insert the archives already defined. Transforming `metricbeat-*` into `metrics-*` documents should be straightforward and can serve as a first step. As a follow-up we can replace esArchiver with another data-generation solution.

@matschaffer (Contributor) commented:

> we'll need a package registry running in a docker env

I don't think anything in the current yarn-powered testing requires Docker yet. Might be worth finding out what it'd take to make something like https://github.com/elastic/kibana/blob/main/packages/kbn-es/README.md for the package registry. I forget who to talk to about this, but I remember @ycombinator presenting about it at the last GAH, so maybe he knows.

@ycombinator (Contributor) commented:

I'm not sure what the state of the art is with the package registry and package tooling. But I suspect @mtojek and/or @jsoriano might be able to provide the necessary guidance here.

@mtojek (Contributor) commented Dec 3, 2021

FYI Package Registry/Storage will be under active development soon. We'd like to replace the Git repository with true object storage.

Regarding testing, the tools we have can help with policy- and agent-oriented testing, but they don't include any Selenium or Kibana UI tests; they just depend on the available Fleet API. Keep in mind that we don't exercise Kibana, but Elastic packages. You can find more information here, especially the system tests.

@miltonhultgren (Contributor) commented:

@dgieselaar Can you share some insight on how the APM team handles this since you also need to launch APM server in your tests? (maybe you don't do it for E2E tests?)

@klacabane (Contributor) commented Dec 3, 2021

@matschaffer this is already supported, but I need to verify whether we actually need it or not.

I was able to get a subset of the elasticsearch functional tests running and passing against data streams with the esArchiver approach:

  • install the elasticsearch integration assets (index templates, etc.) through the Fleet API
  • transform `metricbeat-*` archived data into `metrics-*` data and remove mappings.json from the archive (a transform sketch follows this list)
  • update esArchiver to load the new data stream data with the `{ useCreate: true }` option
  • update test_user permissions to allow access to the data stream
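A minimal sketch of the transform step, assuming es_archiver's newline-delimited `doc` record format (after gunzipping the data file); the dataset derivation is a guess at what the linked script does, not a copy of it:

```ts
import * as fs from 'fs';
import * as readline from 'readline';

async function toDataStreamArchive(inPath: string, outPath: string): Promise<void> {
  const out = fs.createWriteStream(outPath);
  const lines = readline.createInterface({ input: fs.createReadStream(inPath) });

  for await (const line of lines) {
    if (!line.trim()) continue;
    const record = JSON.parse(line);
    if (record.type === 'doc') {
      const source = record.value.source;
      // Hypothetical dataset name derived from the metricbeat metricset.
      const dataset = `elasticsearch.stack_monitoring.${source.metricset?.name ?? 'cluster_stats'}`;
      // Point the doc at a data stream instead of a metricbeat index...
      record.value.index = `metrics-${dataset}-default`;
      // ...and add the data_stream fields used for query filtering.
      source.data_stream = { type: 'metrics', dataset, namespace: 'default' };
    }
    out.write(JSON.stringify(record) + '\n');
  }
  out.end();
}
```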

@dgieselaar (Member) commented:

> @dgieselaar Can you share some insight on how the APM team handles this since you also need to launch APM server in your tests?

We use an empty esArchive that only contains mappings/empty indices for APM data. In the future we'll probably install the APM integration package. We don't use APM Server with Synthtrace.

Separately we might have integration tests that spin up the full APM stack (Kibana, ES, APM Server, APM Agents), but I have not worked with those before.

@matschaffer (Contributor) commented:

> @matschaffer this is already supported but I need to verify if we actually need this or not

Oh, wow. Nice find! Seems like its main usage is for the package registry, so it sounds like you're on the right track.

@miltonhultgren (Contributor) commented:

This issue is related: #123345 (action point from our team meeting on data generation tooling)

@neptunian (Contributor, Author) commented Jan 31, 2022

After discussing with @klacabane, we decided:

  • Use the package registry Docker image to load actual packages so we have the latest mappings. We can pass in a custom configuration YAML that points Kibana at that registry and install each package with the endpoint `/api/fleet/epm/packages/${pkg}/`, which should install its latest version.
  • Generate agent/integration data by transforming the existing _mb archived data, adding the correct data stream index value and any other values that are necessary (using something similar to this script).
  • Copy over the tests; similar to the _mb suites, we'll have something like _agent copies.

We can start with elasticsearch (#119109).

@matschaffer (Contributor) commented:

Is this maybe a good opportunity to try to work out a way to do testing without copying the tests themselves?

I'm thinking maybe something like a helper that runs the same tests under different setup methods.
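Something like the following FTR-flavored sketch: one suite definition, registered once per data source. The archive paths and the route are illustrative placeholders, not the real ones:

```ts
// Sketch of a helper that registers the same assertions for several setup
// methods, assuming Kibana FTR-style service providers.
export default function ({ getService }: { getService: (name: string) => any }) {
  const esArchiver = getService('esArchiver');
  const supertest = getService('supertest');

  const dataSources = [
    { name: 'metricbeat', archive: 'x-pack/test/api_integration/archives/monitoring/es_mb' },
    { name: 'package', archive: 'x-pack/test/api_integration/archives/monitoring/es_package' },
  ];

  for (const { name, archive } of dataSources) {
    describe(`elasticsearch overview (${name} data)`, () => {
      before(() => esArchiver.load(archive));
      after(() => esArchiver.unload(archive));

      it('returns the expected cluster summary', async () => {
        const { body } = await supertest
          .post('/api/monitoring/v1/clusters') // illustrative route
          .set('kbn-xsrf', 'true')
          .send({ timeRange: { min: '2022-01-01T00:00:00Z', max: '2022-01-02T00:00:00Z' } })
          .expect(200);
        // ...the same business assertions run against both data sources.
      });
    });
  }
}
```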

@jsoriano (Member) commented Feb 1, 2022

> • Use the package registry docker image to load actual packages so we can have the latest mappings.

If the only reason to start the registry is to load packages, take into account that there may be additional methods in the near future that don't require a registry, such as #122297 or #70582.

@neptunian (Contributor, Author) commented Feb 1, 2022

> • Use the package registry docker image to load actual packages so we can have the latest mappings.
>
> If the only reason to start the registry is to load packages, take into account that there may be additional methods in the near future that don't require a registry, such as #122297 or #70582.

Thanks @jsoriano. Yes, that would be our only reason. In #122297, it looks like packages shipped with Kibana are automatically installed during Kibana setup. Is this going to change to allow packages to be shipped that aren't automatically installed? Also, are our packages considered "stack-aligned"? That is, we don't need the ability to upgrade our packages out-of-band from stack releases (to fix a bug, for instance), and we want our packages to upgrade with Kibana ("for example new features in the UI that depend on new fields that were added in the same Stack version"). @sayden

@klacabane (Contributor) commented:

Rethinking this, we can skip the functional tests and focus only on API tests: we already have functional (e2e) coverage of the `.monitoring-*` data, so if we can validate that API responses are similar when reading from `metrics-*`, we have a guarantee that the UI will respond similarly with both data sources. This will save significant effort and computing power.

The following steps will set up initial coverage:

  1. Transform `.monitoring-*` data into `metrics-*` data. Here's a script that transforms the data: https://gist.github.com/klacabane/f2ef21b0c2722f312f2d983d9870dc68
    • it adds the field used by esArchiver to detect whether the target is a data stream or an index
    • it adds the `data_stream` object that replaces the `metricset` construct for query filtering
  2. Extract the `metrics-*` mappings with esArchiver and bundle them with the output of step 1.
  3. Create a copy of the _mb tests that loads the bundle created in step 2 and makes the same assertions.

As a follow-up we can replace the static mappings created in step 2 by installing the packages as a setup step of the test suite. This will require spawning a local package registry and hitting the Fleet API. It will make updating the mappings easier, giving the SM tests continuous coverage, since a mapping change will only require an update of the packages (we'll have to bundle the packages in the kibana repo to avoid any network reliance).
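For reference, the metadata difference this plan (and the commits below) relies on, shown as illustrative TypeScript literals; the dataset value is a guess, and the business fields are elided:

```ts
// Metricbeat-shipped document: filtered by metricset.*
const metricbeatDoc = {
  metricset: { name: 'cluster_stats' },
  elasticsearch: { /* identical business fields in both shapes */ },
};

// Agent/package-shipped document: filtered by data_stream.*
const agentDoc = {
  data_stream: {
    type: 'metrics',
    dataset: 'elasticsearch.stack_monitoring.cluster_stats', // hypothetical dataset name
    namespace: 'default',
  },
  elasticsearch: { /* identical business fields in both shapes */ },
};
```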

klacabane added a commit that referenced this issue Nov 17, 2022

## Summary
Part of #119658

Add api integration tests for kibana routes to validate behavior when reading data ingested by elastic-agent.

We currently have a testing suite for legacy and another one for metricbeat. Since metricbeat and agent documents only differ in their metadata (for example, agent populates a `data_stream.*` property to identify the document type while metricbeat uses `metricset.*`), the test assertions validating _business_ data should pass regardless of the documents' source. With this in mind, the metricbeat tests were updated to run twice: once with metricbeat data and a second time with package data.

To generate the archives, the `metrics-*` mappings were extracted with esArchiver from an elasticsearch with the package installed, and the documents were transformed from the metricbeat documents with [this script](https://gist.github.com/klacabane/654497ff86053c60af6df15fa6f6f657).

Co-authored-by: Kibana Machine <42973632+kibanamachine@users.noreply.github.com>
kibanamachine referenced this issue Nov 17, 2022

Backport of [Stack Monitoring] api tests for kibana (#145230) from `main` to `8.6`.
klacabane added a commit that referenced this issue Nov 21, 2022

### Summary
Part of #119658

Add api integration tests for logstash routes to validate behavior when reading data ingested by elastic-agent. The approach and archive generation are the same as in the kibana routes commit above.
klacabane added a commit that referenced this issue Nov 22, 2022

### Summary
Part of #119658

Add api integration tests for cluster and elasticsearch routes to validate behavior when reading data ingested by elastic-agent. Again, the approach and archive generation match the kibana routes commit above.
kibanamachine referenced this issue Nov 22, 2022

Backport of [Stack Monitoring] api tests for cluster and elasticsearch (#145138) from `main` to `8.6`.
@klacabane (Contributor) commented:

Closing this as the initial test coverage is merged. Follow-up in #146000.
