[Fleet] Support input level variable for granular integrations #112272
Pinging @elastic/fleet (Team:Fleet)
@jen-huang I'm guessing we need some UX work here to define how to display these variables? @endorama is blocked on this problem while adding metrics to the GCP integration (which currently only supports logs).

@endorama I think we also need to define the upgrade issue you're facing in moving from input variables to `policy_template` ones. Could you create a simple example of the structure of the variables before/after the change you're trying to make? This would help us determine how to solve this problem and ensure we're aligned on the exact upgrade path.
Variables in `policy_templates`:

```yaml
- name: gcp
  title: Google Cloud Platform (GCP) logs
  description: Collect logs from Google Cloud Platform (GCP) instances
  inputs:
    - type: gcp-pubsub
      vars:
        - name: alternative_host
          type: text
          title: Alternative host
          multi: false
          required: false
          show_user: false
        - name: project_id
          type: text
          title: Project Id
          description: Your Google Cloud project ID where the resources exist.
          multi: false
          required: true
          show_user: true
          default: SET_PROJECT_NAME
        - name: credentials_file
          type: text
          title: Credentials File
          description: The path to the JSON file with the private key. Make sure that the Elastic Agent has at least read-only privileges to this file.
          multi: false
          required: false
          show_user: true
        - name: credentials_json
          type: text
          title: Credentials JSON
          description: The content of the JSON file you downloaded from Google Cloud Platform.
          multi: false
          required: false
          show_user: true
      title: "Collect Google Cloud Platform (GCP) audit, firewall and vpcflow logs (input: gcp-pubsub)"
      description: "Collecting audit, firewall and vpcflow logs from Google Cloud Platform (GCP) instances (input: gcp-pubsub)"
```
Variables moved from input to package root (excerpt):

```yaml
vars:
  - name: project_id
    type: text
    title: Project Id
    multi: false
    required: true
    show_user: true
    default: SET_PROJECT_NAME
  - name: credentials_file
    type: text
    title: Credentials File
    multi: false
    required: false
    show_user: true
  - name: credentials_json
    type: text
    title: Credentials Json
    multi: false
    required: false
    show_user: true
policy_templates:
  - name: audit
    title: Google Cloud Platform (GCP) Audit logs
    description: Collect audit logs from Google Cloud Platform (GCP) with Elastic Agent
    categories:
      - security
    data_streams:
      - audit
    inputs:
      - type: gcp-pubsub
        title: "Collect Google Cloud Platform (GCP) audit logs (input: gcp-pubsub)"
        description: "Collecting audit logs from Google Cloud Platform (GCP) instances (input: gcp-pubsub)"
        input_group: logs
    screenshots:
      - src: /img/filebeat-gcp-audit.png
        title: filebeat gcp audit
        size: 1702x996
        type: image/png
```
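The two manifests above differ only in where `vars` attach: at the input level inside a policy template, or at the package root. A minimal sketch of collecting variables from each level, keyed by where they were defined (simplified, hypothetical shapes, not Fleet's actual types or API):

```typescript
// Simplified view of where variables can attach in a package manifest.
// Illustrative only; not Fleet's real data model.

interface Var { name: string; type: string; required?: boolean; default?: unknown }
interface Input { type: string; vars?: Var[] }          // input-level vars (the unsupported case)
interface PolicyTemplate { name: string; inputs: Input[] }
interface PackageManifest {
  vars?: Var[];                                          // package-level vars (supported)
  policy_templates: PolicyTemplate[];
}

// Collect every variable a policy editor would need to render,
// keyed by the level it was defined at.
function collectVars(pkg: PackageManifest): Map<string, Var[]> {
  const out = new Map<string, Var[]>();
  if (pkg.vars) out.set('package', pkg.vars);
  for (const tmpl of pkg.policy_templates) {
    for (const input of tmpl.inputs) {
      if (input.vars) out.set(`${tmpl.name}/${input.type}`, input.vars);
    }
  }
  return out;
}
```

Moving a var between levels changes the key it lives under, which is why the before/after manifests are not interchangeable from the editor's point of view.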
@endorama For other packages that have moved to exporting multiple integrations, it didn't seem necessary to support input-level vars, as package-level and data stream-level vars seemed to suffice. AWS CloudWatch integration examples: Cloudwatch
For the GCP package changes, is the
This is correct!
It is, but this is not the reason we need this.
Unfortunately no. The reason the GCP migration is blocked is elastic/integrations#2987
Reading elastic/integrations#2987, the upgrade path refers to upgrading the GCP policies once the package is upgraded. There is no machine-readable "migration" schema available in packages between versions, which means that Fleet's mechanism for upgrading policies is naive. Apart from minor package updates, it is very easy to run into a conflict when upgrading policies, certainly with this kind of big migration to exporting multiple integrations: all the previous packages that did this migration ended up with policy upgrade conflicts.

Even if we introduce support for input-level variables and templates, it does not guarantee that the GCP policy upgrade will be conflict-free, as it is possible that some of Fleet's data models will need to be changed. So the case for supporting input-level vars in order to produce conflict-free upgrades is not a strong one from Fleet's perspective. A stronger argument would be that these vars should live at the input level from a UX perspective, but instead have to be moved to package/data stream level as a workaround.

If we were to add this support, it would probably take two weeks or so, but it is currently not on our foreseeable roadmap due to other, higher product priorities.
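The "naive" upgrade described here can be pictured as a carry-over keyed on where each value lives: a saved value copies across only if a slot with the same key still exists in the new package version. A hedged sketch (hypothetical helper, not Fleet's implementation) of why moving a var between levels produces a conflict:

```typescript
// Naive policy upgrade sketch: each saved value is keyed by the level it
// was defined at (e.g. "inputs.gcp-pubsub.project_id" vs "package.project_id").
// A value carries over only if the same key exists in the new version;
// everything else becomes a conflict the user must resolve by hand.
// Hypothetical helper, not Fleet's code.

type VarValues = Record<string, unknown>;

interface UpgradeResult {
  carried: VarValues;   // values that mapped cleanly onto the new version
  conflicts: string[];  // keys with no matching slot in the new version
}

function naiveUpgrade(oldValues: VarValues, newVarKeys: Set<string>): UpgradeResult {
  const carried: VarValues = {};
  const conflicts: string[] = [];
  for (const [key, value] of Object.entries(oldValues)) {
    if (newVarKeys.has(key)) carried[key] = value;
    else conflicts.push(key); // e.g. a var that moved from input level to package root
  }
  return { carried, conflicts };
}
```

Under this model, `project_id` moving from the input level to the package root changes its key, so the old value is flagged as a conflict even though it is perfectly reusable, which matches the upgrade-conflict behavior described above.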
@jen-huang thanks a lot for the answer! 🎉
I have a few clarifications that would help me understand what's going on here, as this is an area of Fleet & Integrations I'm still unfamiliar with:
Am I right in understanding this as a summary of where we are?
@endorama did you get a chance to take a look at the above comments? From a product perspective, for GCP I would say the experience would be fine if:
The overall usage of GCP is low, and hence doing the right thing and keeping a single package for both logs and metrics would be desirable.
Yes! Actually that wording is imprecise, sorry for that: the issue is with "input level variables" within policy templates. (I admit inputs in this context got me confused.)
Yes.
correct
Yes, the main issue here is that, when trying this out, some variables just disappear and are no longer fillable (they don't show up in the UI anymore, so there is no way to set their value).
I'm not sure that's ideal, due to the complexity involved and the general need for human intervention in updates with breaking changes (if just to approve them).
One reason is that the Package Specification supports this use case, and this incongruence degrades the developer experience; but to the point discussed, we can move forward without this issue being solved if the upgrade path works correctly.
We already have a "conflict resolution" UX. This kicks in when a user upgrades to a package version with breaking changes. In this case, the associated policies cannot be upgraded automatically: the user needs to click Upgrade for those policies and is presented with a policy editor that allows them to view the previous configuration and copy values from it. The screenshots below show the experience of upgrading from a pre-integrations AWS policy to a post-integrations one:
This experience was not working for my use case, but I don't want to steer the conversation there in this issue, so I will open a separate one with details. |
@jen-huang Is there someone in your team that you would recommend Edo work with closely, so that we can get to the bottom of these issues and, if applicable, create appropriate issues to get them fixed? This "conflict resolution" flow, I would guess, is critical, and it may be needed for other packages as we continue to make enhancements and architectural changes around all things Agent & Fleet. @endorama Can you please link the issue here when opened, so that we have some level of continuation on the discussion and folks can follow up as needed.
@endorama @ravikesarwani - @kpollich worked on the conflict resolution experience previously, so he will follow up to help investigate the gaps that we're missing. |
Given that #131251 was resolved (thanks Kyle!) and unblocks the next step for GCP package (elastic/integrations#490 (comment)), I'm going to close this issue. |
Description
For packages that contain multiple integrations (AWS, Azure, ...), we do not support input-level variables in the policy editor (we get type errors if the package contains input-level variables).
Looks like it's expected:
kibana/x-pack/plugins/fleet/common/services/package_to_package_policy.ts
Lines 112 to 113 in 18af3de
We should probably support that.
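Supporting this would roughly mean carrying `input.vars` through when building the initial package policy instead of dropping them. A sketch against simplified shapes (hypothetical, not the real `package_to_package_policy.ts` types):

```typescript
// Sketch: when converting a package's registry input into a package policy
// input, seed the policy with the input-level vars rather than ignoring them.
// Simplified shapes for illustration; not Fleet's actual implementation.

interface RegistryVar { name: string; default?: unknown }
interface RegistryInput { type: string; vars?: RegistryVar[] }
interface PackagePolicyInput { type: string; vars: Record<string, { value: unknown }> }

function toPackagePolicyInput(input: RegistryInput): PackagePolicyInput {
  const vars: Record<string, { value: unknown }> = {};
  for (const v of input.vars ?? []) {
    vars[v.name] = { value: v.default }; // seed each var with its manifest default
  }
  return { type: input.type, vars };
}
```

With something along these lines, the input-level vars from the GCP manifest above (e.g. `project_id` with default `SET_PROJECT_NAME`) would surface in the policy editor instead of causing type errors.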
Related to elastic/integrations#1570